id (string, 2–8 chars) | title (string, 1–130 chars) | text (string, 0–252k chars) | formulas (list, 1–823 items) | url (string, 38–44 chars) |
---|---|---|---|---|
73787498 | Gibbs rotational ensemble | Statistical ensemble
The Gibbs rotational ensemble represents the possible states of a mechanical system in thermal and rotational equilibrium at temperature formula_0 and angular velocity formula_1. The Jaynes procedure can be used to obtain this ensemble. An ensemble is the set of microstates corresponding to a given macrostate.
The Gibbs rotational ensemble assigns a probability formula_2 to a given microstate characterized by energy formula_3 and angular momentum formula_4 for a given temperature formula_0 and rotational velocity formula_5.
formula_6
where formula_7 is the partition function
formula_8
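As a rough illustration of how these two expressions can be evaluated, the sketch below computes formula_2 and formula_7 numerically for a hypothetical planar rigid rotor whose microstates are labelled by an angular momentum quantum number. The moment of inertia, temperature, and rotation rate are assumed values chosen only for demonstration; they are not taken from the article.

```python
import numpy as np

# Hedged sketch (assumed parameters): Gibbs rotational ensemble for a planar
# rigid rotor whose microstates have angular momentum J_m = m*hbar and energy
# E_m = (m*hbar)^2 / (2*I).
hbar = 1.0546e-34     # J*s
I = 1.0e-45           # kg*m^2, assumed moment of inertia
kB = 1.3807e-23       # J/K
T = 300.0             # K, assumed temperature
omega = 1.0e11        # rad/s, assumed rotation rate
beta = 1.0 / (kB * T)

m = np.arange(-200, 201)    # angular momentum quantum numbers kept in the sum
J = m * hbar                # angular momentum of each microstate
E = J**2 / (2.0 * I)        # energy of each microstate

weights = np.exp(-beta * (E - omega * J))   # Boltzmann-like weight of each microstate
Z = weights.sum()                           # partition function
p = weights / Z                             # probability of each microstate

print("Z =", Z)
print("<J>/hbar =", (p * m).sum())          # mean angular momentum, biased by the rotation
```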
Derivation.
The Gibbs rotational ensemble can be derived using the same general method used to derive any ensemble, as given by E.T. Jaynes in his 1957 paper Information Theory and Statistical Mechanics. Let formula_9 be a function with expectation value
formula_10
where formula_2 is the probability of formula_11, which is not known a priori. The probabilities formula_2 obey normalization
formula_12
To find formula_2, the Shannon entropy formula_13 is maximized, where the Shannon entropy goes as
formula_14
The method of Lagrange multipliers is used to maximize formula_13 under the constraints formula_15 and the normalization condition, using Lagrange multipliers formula_16 and formula_17 to find
formula_18
formula_16 is found via normalization
formula_19
and formula_15 can be written as
formula_20
where formula_7 is the partition function
formula_21
This is easily generalized to any number of equations formula_9 via the incorporation of more Lagrange multipliers.
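As a concrete toy illustration (not from the source), the sketch below applies the procedure to a six-state system with one constraint on formula_15: the Lagrange multiplier formula_17 is found by solving the constraint equation numerically, after which formula_16 follows from normalization. The states, target value, and choice of root finder are assumptions made only for demonstration.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch (assumed toy data): maximum-entropy distribution over six
# states x_i = 0..5 subject to a single constraint <f(x)> = target.
x = np.arange(6)
f = x.astype(float)      # f(x_i) = x_i
target = 3.5             # prescribed expectation value (assumed)

def mean_f(mu):
    w = np.exp(-mu * f)
    return (w * f).sum() / w.sum()     # equals -d ln Z / d mu

mu = brentq(lambda m: mean_f(m) - target, -5.0, 5.0)  # solve the constraint for mu
Z = np.exp(-mu * f).sum()                             # partition function
p = np.exp(-mu * f) / Z                               # p_i = exp(-lambda - mu f(x_i)), lambda = ln Z

print("mu =", mu)
print("p =", p, " check <f> =", (p * f).sum())
```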
Now investigating the Gibbs rotational ensemble, the method of Lagrange multipliers is again used to maximize the Shannon entropy formula_13, but this time under the constraints of energy expectation value formula_22 and angular momentum expectation value formula_23, which gives formula_2 as
formula_24
Via normalization, formula_25 is found to be
formula_26
Like before, formula_22 and formula_23 are given by
formula_27
formula_28
The entropy formula_29 of the system is given by
formula_30
such that
formula_31
where formula_32 is the Boltzmann constant. The system is assumed to be in equilibrium, follow the laws of thermodynamics, and have fixed uniform temperature formula_0 and angular velocity formula_1. The first law of thermodynamics as applied to this system is
formula_33
Recalling the entropy differential
formula_34
Combining the first law of thermodynamics with the entropy differential gives
formula_35
Comparing this result with the entropy differential given by entropy maximization allows determination of formula_36 and formula_37
formula_38
formula_39
which allows the probability of a given state formula_2 to be written as
formula_40
which is recognized as the probability of some microstate given a prescribed macrostate using the Gibbs rotational ensemble. The term formula_41 can be recognized as the effective Hamiltonian formula_42 for the system, which then simplifies the Gibbs rotational partition function to that of a normal canonical system
formula_43
Applicability.
The Gibbs rotational ensemble is useful for calculations regarding rotating systems. It is commonly used for describing particle distribution in centrifuges. For example, take a rotating cylinder (height formula_7, radius formula_44) with fixed particle number formula_45, fixed volume formula_46, fixed average energy formula_22, and average angular momentum formula_47. The expectation value of number density of particles formula_48 at radius formula_49 can be written as
formula_50
Using the Gibbs rotational partition function, formula_7 can be calculated to be
formula_51
Density of a particle at a given point can be thought of as unity divided by an infinitesimal volume, which can be represented as a delta function.
formula_52
which finally gives formula_48 as
formula_53
which is the expected result.
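As an illustration only, the sketch below evaluates the final expression for formula_48 with assumed values for the particle mass, temperature, rotation rate, and cylinder dimensions (the symbol formula_7 in that expression plays the role of the cylinder height), and checks numerically that the profile integrates to one particle over the cylinder.

```python
import numpy as np

# Hedged sketch (assumed parameters): per-particle density profile <n(r)> in a
# rotating cylinder, evaluated from the closed-form result quoted above.
kB = 1.3807e-23            # J/K
T = 300.0                  # K, assumed temperature
mass = 4.8e-26             # kg, roughly one N2 molecule (assumed)
omega = 2 * np.pi * 1000   # rad/s, assumed rotation rate
R = 0.1                    # m, cylinder radius (assumed)
height = 0.2               # m, cylinder height (assumed)
beta = 1.0 / (kB * T)
a = 0.5 * beta * mass * omega**2

def n_of_r(r):
    return (beta * mass * omega**2) / (2 * np.pi * height) * np.exp(a * r**2) / (np.exp(a * R**2) - 1)

print("n(0) =", n_of_r(0.0), "particles/m^3")
print("n(R) =", n_of_r(R), "particles/m^3")

# Sanity check: the profile should integrate to one particle over the cylinder.
rr = np.linspace(0.0, R, 100_000)
total = (n_of_r(rr) * 2 * np.pi * rr).sum() * (rr[1] - rr[0]) * height
print("integral over the cylinder ≈", total)
```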
Difference between Grand canonical ensemble and Gibbs canonical ensemble.
The Grand canonical ensemble and the Gibbs canonical ensemble are two different statistical ensembles used in statistical mechanics to describe systems with different constraints.
The grand canonical ensemble describes a system that can exchange both energy and particles with a reservoir. It is characterized by three variables: the temperature (T), chemical potential (μ), and volume (V) of the system. The chemical potential determines the average particle number in this ensemble, which allows for some variation in the number of particles. The grand canonical ensemble is commonly used to study systems with a fixed temperature and chemical potential, but a variable particle number, such as gases in contact with a particle reservoir.
On the other hand, the Gibbs canonical ensemble describes a system that can exchange energy but has a fixed number of particles. It is characterized by two variables: the temperature (T) and volume (V) of the system. In this ensemble, the energy of the system can fluctuate, but the number of particles remains fixed. The Gibbs canonical ensemble is commonly used to study systems with a fixed temperature and particle number, but variable energy, such as systems in thermal equilibrium.
| [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "\\vec\\omega"
},
{
"math_id": 2,
"text": "p_i"
},
{
"math_id": 3,
"text": "E_i"
},
{
"math_id": 4,
"text": "\\vec J_i"
},
{
"math_id": 5,
"text": "\\vec \\omega"
},
{
"math_id": 6,
"text": "\np_i = \\frac{1}{Z} e^{-\\beta(E_i-\\vec\\omega\\cdot\\vec J_i)}\n"
},
{
"math_id": 7,
"text": "Z"
},
{
"math_id": 8,
"text": "\n Z = \\sum_i e^{-\\beta(E_i-\\vec\\omega\\cdot\\vec J_i)}\n"
},
{
"math_id": 9,
"text": "f(x)"
},
{
"math_id": 10,
"text": "\n \\langle f(x)\\rangle = \\sum_i p_i f(x_i)\n"
},
{
"math_id": 11,
"text": "x_i"
},
{
"math_id": 12,
"text": "\n \\sum_i p_i = 1\n"
},
{
"math_id": 13,
"text": "H"
},
{
"math_id": 14,
"text": "\n H\\sim -\\sum_i p_i \\ln(p_i)\n"
},
{
"math_id": 15,
"text": "\\langle f(x)\\rangle"
},
{
"math_id": 16,
"text": "\\lambda"
},
{
"math_id": 17,
"text": "\\mu"
},
{
"math_id": 18,
"text": "\n p_i = e^{-\\lambda-\\mu f(x_i)}\n"
},
{
"math_id": 19,
"text": "\n \\lambda = \\ln\\left(\\sum_i e^{-\\mu f(x_i)}\\right) = \\ln(Z(\\mu))\n"
},
{
"math_id": 20,
"text": "\n \\langle f(x)\\rangle = -\\frac{\\partial}{\\partial \\mu} \\ln\\left(\\sum_i e^{-\\mu f(x_i)}\\right)=-\\frac{\\partial}{\\partial \\mu}\\ln(Z(\\mu))\n"
},
{
"math_id": 21,
"text": "\n Z(\\mu) = \\sum_i e^{-\\mu f(x_i)}\n"
},
{
"math_id": 22,
"text": "\\langle E\\rangle"
},
{
"math_id": 23,
"text": "\\langle J\\rangle"
},
{
"math_id": 24,
"text": "\n p_i = e^{-\\lambda_0 E_i-\\vec \\lambda_1\\cdot\\vec J_i-\\lambda_3}\n"
},
{
"math_id": 25,
"text": "\\lambda_3"
},
{
"math_id": 26,
"text": "\n \\lambda_3=\\ln\\left(\\sum_i e^{-\\lambda_0 E_i-\\vec \\lambda_1 \\cdot\\vec J_i}\\right)=\\ln(Z)\n"
},
{
"math_id": 27,
"text": "\n\\langle E\\rangle=-\\frac{\\partial}{\\partial\\lambda_0}\\ln\\left(\\sum_i e^{-\\lambda_0 E_i-\\vec \\lambda_1\\cdot\\vec J_i}\\right)=-\\frac{\\partial}{\\partial\\lambda_0}\\ln\\left(Z\\right)\n"
},
{
"math_id": 28,
"text": "\n \\langle J\\rangle=-\\frac{\\partial}{\\partial\\lambda_1}\\ln\\left(\\sum_i e^{-\\lambda_0 E_i-\\vec\\lambda_1\\cdot\\vec J_i}\\right)=-\\frac{\\partial}{\\partial\\lambda_1}\\ln(Z)\n"
},
{
"math_id": 29,
"text": "S"
},
{
"math_id": 30,
"text": "\n S=-k\\sum_i p_i\\ln(p_i)=k(\\lambda_0 \\langle E\\rangle +\\vec\\lambda_1\\cdot\\langle \\vec{J} \\rangle+\\ln(Z))\n"
},
{
"math_id": 31,
"text": "\n dS = k(\\lambda_0 \\mathrm{d}\\langle E\\rangle+\\vec\\lambda_1\\mathrm{d}\\langle \\vec{J}\\rangle+\\mathrm{d}\\ln(Z))\n"
},
{
"math_id": 32,
"text": "k"
},
{
"math_id": 33,
"text": "\n \\mathrm{d}E = \\mathrm{d}Q+\\vec\\omega\\cdot\\mathrm{d}\\langle \\vec J\\rangle\n"
},
{
"math_id": 34,
"text": "\n \\mathrm{d}S = \\frac{\\mathrm{d}Q}{T}\n"
},
{
"math_id": 35,
"text": "\n \\mathrm{d}S = \\frac{\\mathrm{d}E}{T}-\\frac{\\vec\\omega\\cdot\\mathrm{d}\\langle \\vec J\\rangle}{T}\n"
},
{
"math_id": 36,
"text": "\\lambda_0"
},
{
"math_id": 37,
"text": "\\vec\\lambda_1"
},
{
"math_id": 38,
"text": "\n \\lambda_0 = \\beta\n"
},
{
"math_id": 39,
"text": "\n \\vec\\lambda_1=-\\beta \\vec\\omega\n"
},
{
"math_id": 40,
"text": "\n p_i=\\frac{1}{Z}e^{-\\beta(E_i-\\vec{\\omega}\\cdot\\vec J_i)}\n"
},
{
"math_id": 41,
"text": "E_i-\\vec\\omega\\cdot\\vec J_i"
},
{
"math_id": 42,
"text": "\\mathcal{H}"
},
{
"math_id": 43,
"text": "\n Z=\\sum_i e^{-\\beta\\mathcal{H}_i}\n"
},
{
"math_id": 44,
"text": "R"
},
{
"math_id": 45,
"text": "N"
},
{
"math_id": 46,
"text": "V"
},
{
"math_id": 47,
"text": "\\langle \\vec J\\rangle"
},
{
"math_id": 48,
"text": "\\langle n(r)\\rangle"
},
{
"math_id": 49,
"text": "r"
},
{
"math_id": 50,
"text": "\n \\langle n(r)\\rangle =\\frac1Z \\int n(r) \\frac{\\mathrm{d}^3 p\\; \\mathrm{d}^3q}{h^3}e^{-\\beta(E-\\vec{\\omega}\\cdot\\vec J)}\n"
},
{
"math_id": 51,
"text": "\n Z=\\frac{\\pi ^{5/2} Z \\sqrt{\\beta m} \\left(e^{\\frac{1}{2} \\beta m R^2 \\omega ^2}-1\\right)}{\\sqrt{2} \\beta ^3 h^3 \\omega ^2}\n"
},
{
"math_id": 52,
"text": "\n n(r) = \\frac{1}{\\mathrm{d}r\\; r\\;\\mathrm{d}\\theta \\;\\mathrm{d}z}\\rightarrow\\frac{\\delta(r'-r)\\delta(\\theta'-\\theta)\\delta(z'-z)}{r'}\n"
},
{
"math_id": 53,
"text": "\n \\langle n(r)\\rangle=\\frac{\\beta m \\omega ^2}{2\\pi Z}\\frac{ e^{\\frac{1}{2} \\beta m r^2 \\omega ^2}}{e^{\\frac{1}{2} \\beta m R^2 \\omega ^2}-1}\n"
}
]
| https://en.wikipedia.org/wiki?curid=73787498 |
73788761 | Faber–Evans model | Phenomenon in solid-state physics
The Faber–Evans model for crack deflection is a fracture-mechanics-based approach for predicting the increase in toughness of two-phase ceramic materials due to crack deflection. The effect is named after Katherine Faber and her mentor, Anthony G. Evans, who introduced the model in 1983. The Faber–Evans model is a principal strategy for tempering brittleness and creating effective ductility.
Fracture toughness is a critical property of ceramic materials, determining their ability to resist crack propagation and failure. The Faber model considers the effects of different particle morphologies, including spherical, rod-shaped, and disc-shaped particles, and their influence on the driving force at the tip of a tilted and/or twisted crack. The model first suggested that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. The findings provide a basis for designing high-toughness two-phase ceramic materials, with a focus on optimizing particle shape and volume fraction.
Fracture mechanics and crack deflection.
Fracture mechanics is a fundamental discipline for understanding the mechanical behavior of materials, particularly in the presence of cracks. The critical parameter in fracture mechanics is the stress intensity factor (K), which is related to the strain energy release rate (G) and the fracture toughness (Gc). When the stress intensity factor reaches the material's fracture toughness, crack propagation becomes unstable, leading to failure.
In two-phase ceramic materials, the presence of a secondary phase can lead to crack deflection, a phenomenon where the crack path deviates from its original direction due to interactions with the second-phase particles. Crack deflection can lead to a reduction in the driving force at the crack tip, increasing the material's fracture toughness. The effectiveness of crack deflection in enhancing fracture toughness depends on several factors, including particle shape, size, volume fraction, and spatial distribution.
The study presents weighting functions, F(θ), for the three particle morphologies, which describe the distribution of tilt angles (θ) along the crack front:
formula_0
formula_1
formula_2
The weighting functions are used to determine the net driving force on the tilted crack for each morphology. The relative driving force for spherical particles is given by:
formula_3
where formula_4 denotes the normalized stress intensity factors, and formula_5 prescribes the strain energy release rate only for the portion of the crack front that tilts. To characterize the entire crack front at initial tilt, formula_5 must be qualified by the fraction of the crack length intercepted and superposed on the driving force that derives from the remaining undeflected portion of the crack. The resultant toughening increment, derived directly from the driving forces, is given by:
formula_6
formula_7
formula_8
where formula_9 represents the fracture toughness of the matrix material without the presence of any reinforcing particles, formula_10 is the volume fraction of spheres, formula_11 relates the rod length formula_12 to its radius, formula_13, and formula_14 is the ratio of the disc radius, formula_13, to its thickness, formula_15.
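A minimal sketch of evaluating these three initial-tilt toughening expressions is given below; the volume fractions and the rod and disc aspect ratios are assumed example values, and formula_9 is set to 1 so that the printed numbers are toughening ratios.

```python
import numpy as np

# Hedged sketch (assumed particle parameters): initial-tilt toughening ratios
# Gc_t / Gc_m for spheres, rods and discs, from the expressions quoted above.
Vf = np.linspace(0.0, 0.4, 5)   # second-phase volume fraction (assumed range)
H_over_r = 12.0                 # assumed rod aspect ratio (length/radius)
r_over_t = 12.0                 # assumed disc aspect ratio (radius/thickness)

spheres = 1 + 0.87 * Vf
rods = 1 + Vf * (0.6 + 0.007 * H_over_r - 0.0001 * H_over_r**2)
discs = 1 + 0.56 * Vf * r_over_t

for vf, s, ro, d in zip(Vf, spheres, rods, discs):
    print(f"Vf={vf:.2f}  spheres={s:.2f}  rods={ro:.2f}  discs={d:.2f}")
```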
Spatial location and orientation of particles.
The spatial location and orientation of adjacent particles play a crucial role in determining whether the inter-particle crack front will tilt or twist. If adjacent particles produce tilt angles of opposite sign, twist of the crack front will result. Conversely, tilt angles of like sign at adjacent particles cause the entire crack front to tilt. Therefore, to evaluate the toughening increment, all possible particle configurations must be considered.
For spherical particles, the average twist angle is determined by the mean center-to-center nearest neighboring distance, formula_16, between particles with spheres of radius r:
formula_17
The maximum twist angle occurs when the particles are nearly co-planar with the crack, given by:
formula_18
and depends exclusively on the volume fraction.
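Because the integral above is an upper incomplete gamma function, formula_16 and the maximum twist angle can be evaluated directly, as in the hedged sketch below; the chosen volume fractions are illustrative assumptions only.

```python
import numpy as np
from scipy.special import gamma, gammaincc

# Hedged sketch: mean nearest-neighbour spacing Delta/r and maximum twist angle
# for spherical particles, using the expressions quoted above.  The integral of
# x^(1/3) * exp(-x) from 8*Vf to infinity equals Gamma(4/3, 8*Vf).
def delta_over_r(Vf):
    a = 8.0 * Vf
    upper_incomplete = gammaincc(4.0 / 3.0, a) * gamma(4.0 / 3.0)
    return np.exp(a) / Vf ** (1.0 / 3.0) * upper_incomplete

for Vf in (0.05, 0.10, 0.20, 0.30):   # assumed example volume fractions
    d = delta_over_r(Vf)
    phi_max = np.degrees(np.arcsin(min(1.0, 2.0 / d)))
    print(f"Vf={Vf:.2f}  Delta/r={d:.2f}  phi_max={phi_max:.1f} deg")
```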
For rod-shaped particles, the analysis of crack front twist is more complex due to difficulties in describing the rod orientation with respect to the crack front and adjacent rods. The twist angle, formula_19, is determined by the effective tilt angle, formula_20, and the inter-particle spacing between randomly arranged rod-shaped particles. The twist of the crack front is influenced not only by the volume fraction of rods but also by the ratio of the rod length to radius:
formula_21
where formula_22 represents the dimensionless effective inter-particle spacing between two adjacent rod-shaped particles.
Morphology and volume effects on fracture toughness.
The analysis reveals that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks, with the potential to increase fracture toughness by up to four times. This toughening arises primarily from the twist of the crack front between particles. Disc-shaped particles and spheres are less effective in increasing fracture toughness.
For disc-shaped particles with high aspect ratios, initial crack front tilt can provide significant toughening, although the twist component still dominates. In contrast, neither sphere nor rod particles derive substantial toughening from the initial tilting process. As the volume fraction of particles increases, an asymptotic toughening effect is observed for all three morphologies at volume fractions above 0.2. For spherical particles, the interparticle spacing distribution has a significant impact on toughening, with greater enhancements when spheres are nearly contacting and twist angles approach π/2.
The Faber–Evans model suggests that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. Disc-shaped particles and spheres are less effective in enhancing toughness. However, the interparticle spacing distribution plays a significant role in the toughening by spherical particles, with greater toughening achieved when spheres are nearly contacting.
In designing high-toughness two-phase ceramic materials, the focus should be on optimizing particle shape and volume fraction. The model showed that the ideal second phase should be chemically compatible and present in amounts of 10 to 20 volume percent, with particles having high aspect ratios, particularly those with rod-shaped morphologies, providing the maximum toughening effect. This model is often used in the development of advanced ceramic materials with improved performance when the factors that contribute to the increase in fracture toughness are a consideration. | [
{
"math_id": 0,
"text": "F(\\theta)_{\\rm sphere}=\\left ( \\frac{4}{\\pi} \\right )\\sin^2\\theta \\,d\\theta"
},
{
"math_id": 1,
"text": "F(\\theta)_{\\rm disk}=\\left ( \\frac{4}{\\pi} \\right )\\sin^2\\theta \\,d\\theta"
},
{
"math_id": 2,
"text": "F(\\theta)_{\\rm rod}\\approx (1.55 + 1.10\\theta - 2.42\\theta^2 + 1.78\\theta^3)\\sin\\theta \\,\\cos\\theta \\,d\\theta"
},
{
"math_id": 3,
"text": "\\left \\langle {G} \\right \\rangle_{sphere}^t/G_\\infty=\\left ( \\frac{4}{\\pi} \\right )\\sin^2\\theta[(k_1^t)^2+(k_2^t)^2]d\\theta"
},
{
"math_id": 4,
"text": "k_i=K_i/K_1"
},
{
"math_id": 5,
"text": "\\left \\langle {G} \\right \\rangle^t"
},
{
"math_id": 6,
"text": "(G_{\\rm c}^{t})_{sphere}=(1+0.87V_f)G_{\\rm c}^{m}"
},
{
"math_id": 7,
"text": "(G_{\\rm c}^{t})_{rod}\\approx(1+V_f(0.6+0.007(H/r)-0.0001(H/r)^2))G_{\\rm c}^{m}"
},
{
"math_id": 8,
"text": "(G_{\\rm c}^{t})_{disk}=[1+0.56V_f(r/t)]G_{\\rm c}^{m}"
},
{
"math_id": 9,
"text": "G_{\\rm c}^{m}"
},
{
"math_id": 10,
"text": "V_f"
},
{
"math_id": 11,
"text": "(H/r)"
},
{
"math_id": 12,
"text": "H"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "(r/t)"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": "\\Delta"
},
{
"math_id": 17,
"text": "\\frac{\\Delta}{r}=\\frac{e^{8V_f}}{V_f^{1/3}}\\int_{8V_f}^{\\infty} x^{1/3}e^{-x}dx"
},
{
"math_id": 18,
"text": "\\phi_{max}=\\sin^{-1}\\left ( \\frac{2r}{\\Delta} \\right )"
},
{
"math_id": 19,
"text": "\\phi"
},
{
"math_id": 20,
"text": "\\lambda"
},
{
"math_id": 21,
"text": "\\phi=\\tan^{-1}\\left \\{ \\frac{\\alpha\\sin\\theta_1+(1-\\beta)\\sin\\theta_2}{\\Delta'} \\right \\}"
},
{
"math_id": 22,
"text": "\\Delta'"
}
]
| https://en.wikipedia.org/wiki?curid=73788761 |
73803388 | Tetraoxidane |
Chemical compound
Tetraoxidane is an inorganic compound of hydrogen and oxygen with the chemical formula H2O4. This is one of the unstable hydrogen polyoxides.
Synthesis.
The compound is prepared by a chemical reaction between hydroperoxyl radicals () at low temperatures:
formula_0
Physical properties.
This is the fourth member of the polyoxidanes. The first three are water [(mon)oxidane], hydrogen peroxide (dioxidane), and trioxidane. Tetraoxidane is more unstable than the previous compounds. The term "tetraoxidane" extends beyond the parent compound to several daughter compounds of the general formula R2O4, where R can be hydrogen, halogen atoms, or various inorganic and organic monovalent radicals. The two Rs together can be replaced by a divalent radical, so heterocyclic tetraoxidanes also exist.
Ionization.
Tetraoxidane autoionizes when in liquid form:
formula_1
formula_2
| [
{
"math_id": 0,
"text": "\\mathrm{2HO_2^\\bullet \\xrightarrow{} H_2O_4}"
},
{
"math_id": 1,
"text": "\\mathrm{H_2O_4 \\rightleftarrows H^+ + HO_4^-}"
},
{
"math_id": 2,
"text": "\\mathrm{2H_2O_4 \\rightleftarrows H_3O_4^+ + HO_4^-}"
}
]
| https://en.wikipedia.org/wiki?curid=73803388 |
7381179 | Avrami equation | Description of constant-temperature solid phase changes
The Avrami equation describes how solids transform from one phase to another at constant temperature. It can specifically describe the kinetics of crystallisation, can be applied generally to other changes of phase in materials, like chemical reaction rates, and can even be meaningful in analyses of ecological systems.
The equation is also known as the Johnson–Mehl–Avrami–Kolmogorov (JMAK) equation. It was derived independently by Kolmogorov, who gave a statistical treatment of the crystallization of a solid in 1937 (in Russian: Kolmogorov, A. N., Izv. Akad. Nauk SSSR, 1937, 3, 355), by Johnson and Mehl, and by Avrami, who published a series of articles in the Journal of Chemical Physics between 1939 and 1941.
Transformation kinetics.
Transformations are often seen to follow a characteristic s-shaped, or sigmoidal, profile where the transformation rates are low at the beginning and the end of the transformation but rapid in between.
The initial slow rate can be attributed to the time required for a significant number of nuclei of the new phase to form and begin growing. During the intermediate period the transformation is rapid as the nuclei grow into particles and consume the old phase while nuclei continue to form in the remaining parent phase.
Once the transformation approaches completion, there remains little untransformed material for further nucleation, and the production of new particles begins to slow. Additionally, the previously formed particles begin to touch one another, forming a boundary where growth stops.
Derivation.
The simplest derivation of the Avrami equation makes a number of significant assumptions and simplifications: nucleation occurs randomly and homogeneously over the entire untransformed portion of the material; the growth rate does not depend on the extent of transformation; and growth occurs at the same rate in all directions.
If these conditions are met, then a transformation of formula_1 into formula_2 will proceed by the nucleation of new particles at a rate formula_3 per unit volume, which grow at a rate formula_4 into spherical particles and only stop growing when they impinge upon each other. During a time interval formula_5, nucleation and growth can only take place in untransformed material. However, the problem is more easily solved by applying the concept of an "extended volume" – the volume of the new phase that would form if the entire sample was still untransformed. During the time interval formula_6 to formula_7 the number of nuclei "N" that appear in a sample of volume "V" will be given by
formula_8
where formula_3 is one of two parameters in this simple model: the nucleation rate per unit volume, which is assumed to be constant. Since growth is isotropic, constant and unhindered by previously transformed material, each nucleus will grow into a sphere of radius formula_9, and so the extended volume of formula_2 due to nuclei appearing in the time interval will be
formula_10
where formula_4 is the second of the two parameters in this simple model: the growth velocity of a crystal, which is also assumed constant. The integration of this equation between formula_11 and formula_12 will yield the total extended volume that appears in the time interval:
formula_13
Only a fraction of this extended volume is real; some portion of it lies on previously transformed material and is virtual. Since nucleation occurs randomly, the fraction of the extended volume that forms during each time increment that is real will be proportional to the volume fraction of untransformed formula_1. Thus
formula_14
rearranged
formula_15
and upon integration:
formula_16
where "Y" is the volume fraction of formula_2 (formula_17).
Given the previous equations, this can be reduced to the more familiar form of the Avrami (JMAK) equation, which gives the fraction of transformed material after a hold time at a given temperature:
formula_18
where formula_19, and formula_20.
This can be rewritten as
formula_21
which allows the determination of the constants "n" and formula_22 from a plot of formula_23 vs formula_0. If the transformation follows the Avrami equation, this yields a straight line with slope "n" and intercept formula_24.
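A short sketch of this fitting procedure on synthetic data is shown below; the true values of "n" and formula_22, the noise level, and the masking of the poorly resolved tails of the transformation curve are assumptions made only for illustration.

```python
import numpy as np

# Hedged sketch (synthetic data): recover n and K from transformation data via
# the linearized Avrami plot ln(-ln(1-Y)) versus ln(t).
rng = np.random.default_rng(0)
n_true, K_true = 4.0, 1.0e-4
t = np.linspace(1.0, 20.0, 40)                       # arbitrary time units
Y = 1.0 - np.exp(-K_true * t**n_true)                # ideal Avrami transformation
Y = np.clip(Y + rng.normal(0.0, 0.003, t.size), 1e-9, 1 - 1e-9)   # add measurement noise

mask = (Y > 0.02) & (Y < 0.98)                       # keep the well-resolved range
x = np.log(t[mask])
y = np.log(-np.log(1.0 - Y[mask]))
n_fit, lnK_fit = np.polyfit(x, y, 1)                 # slope = n, intercept = ln K

print(f"fitted n = {n_fit:.2f}, fitted K = {np.exp(lnK_fit):.2e}")
```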
Final crystallite (domain) size.
Crystallization is largely over when formula_25 reaches values close to 1, which will be at a crystallization time formula_26 defined by formula_27, as then the exponential term in the above expression for formula_25 will be small. Thus crystallization takes a time of order
formula_28
i.e., crystallization takes a time that decreases as one over the one-quarter power of the nucleation rate per unit volume, formula_3, and one over the three-quarters power of the growth velocity formula_4. Typical crystallites grow for some fraction of the crystallization time formula_26 and so have a linear dimension formula_29, or
formula_30
i.e., the one quarter power of the ratio of the growth velocity to the nucleation rate per unit volume. Thus the size of the final crystals only depends on this ratio, within this model, and as we should expect, fast growth rates and slow nucleation rates result in large crystals. The average volume of the crystallites is of order this typical linear size cubed.
This all assumes an exponent of formula_20, which is appropriate for the uniform (homogeneous) nucleation in three dimensions. Thin films, for example, may be effectively two-dimensional, in which case if nucleation is again uniform the exponent formula_31. In general, for uniform nucleation and growth, formula_32, where formula_33 is the dimensionality of space in which crystallization occurs.
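As a hedged order-of-magnitude illustration of the three-dimensional (formula_20) scalings above, the snippet below evaluates the characteristic crystallization time and crystallite size for assumed nucleation and growth rates that are not taken from the text.

```python
# Hedged sketch with assumed rates: characteristic crystallisation time and
# crystallite size from the n = 4 scaling relations quoted above.
N_dot = 1.0e12    # nuclei per m^3 per second (assumed)
G_dot = 1.0e-8    # growth velocity in m/s (assumed)

t_X = (N_dot * G_dot**3) ** -0.25     # time scale ~ (N_dot * G_dot^3)^(-1/4)
size = (G_dot / N_dot) ** 0.25        # crystallite size ~ (G_dot / N_dot)^(1/4)

print(f"t_X ~ {t_X:.3g} s, crystallite size ~ {size:.3g} m")
```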
Interpretation of Avrami constants.
Originally, "n" was held to have an integer value between 1 and 4, which reflected the nature of the transformation in question. In the derivation above, for example, the value of 4 can be said to have contributions from three dimensions of growth and one representing a constant nucleation rate. Alternative derivations exist, where "n" has a different value.
If the nuclei are preformed, and so all present from the beginning, the transformation is only due to the 3-dimensional growth of the nuclei, and "n" has a value of 3.
An interesting condition occurs when nucleation occurs on specific sites (such as grain boundaries or impurities) that rapidly saturate soon after the transformation begins. Initially, nucleation may be random, and growth unhindered, leading to high values for "n" (3 or 4). Once the nucleation sites are consumed, the formation of new particles will cease.
Furthermore, if the distribution of nucleation sites is non-random, then the growth may be restricted to 1 or 2 dimensions. Site saturation may lead to "n" values of 1, 2 or 3 for surface, edge and point sites respectively.
Applications in biophysics.
The Avrami equation has been applied in cancer biophysics in two respects. The first concerns tumor growth and cancer-cell kinetics, which can be described by a sigmoidal curve; in this context the Avrami function was discussed as an alternative to the widely used Gompertz curve. In the second, Avrami nucleation-and-growth theory was used together with the multi-hit theory of carcinogenesis to show how a cancer cell is created. The number of oncogenic mutations in cellular DNA can be treated as nucleation particles which can transform the whole DNA molecule into a cancerous one (neoplastic transformation). This model was applied to clinical data on gastric cancer and showed that Avrami's constant "n" is between 4 and 5, suggesting a fractal geometry of carcinogenic dynamics. Similar findings were published for breast and ovarian cancers, where "n"=5.3.
| [
{
"math_id": 0,
"text": "\\ln{t}"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "\\dot{N}"
},
{
"math_id": 4,
"text": "\\dot{G}"
},
{
"math_id": 5,
"text": "0 < \\tau < t"
},
{
"math_id": 6,
"text": "\\tau"
},
{
"math_id": 7,
"text": "\\tau+\\mathrm{d}\\tau"
},
{
"math_id": 8,
"text": "\\mathrm{d}N = V\\dot{N}\\,\\mathrm{d}\\tau,"
},
{
"math_id": 9,
"text": "\\dot{G}(t - \\tau)"
},
{
"math_id": 10,
"text": "\\mathrm{d}V_\\beta^e = \\frac{4\\pi}{3} \\dot{G}^3(t - \\tau)^3 V\\dot{N}\\,d\\tau,"
},
{
"math_id": 11,
"text": "\\tau = 0"
},
{
"math_id": 12,
"text": "\\tau = t"
},
{
"math_id": 13,
"text": "V_\\beta^e = \\frac{\\pi}{3} V\\dot{N}\\dot{G}^3 t^4."
},
{
"math_id": 14,
"text": "\\mathrm{d}V_\\beta = \\mathrm{d}V_\\beta^e \\left(1 - \\frac{V_\\beta}{V} \\right),"
},
{
"math_id": 15,
"text": "\\frac{1}{1 - V_\\beta/V}\\,\\mathrm{d}V_\\beta = \\mathrm{d}V_\\beta^e,"
},
{
"math_id": 16,
"text": "\\ln(1 - Y) = -V_\\beta^e/V,"
},
{
"math_id": 17,
"text": "V_\\beta/V"
},
{
"math_id": 18,
"text": "Y = 1 - \\exp[-K\\cdot t^n],"
},
{
"math_id": 19,
"text": "K = \\pi\\dot{N}\\dot{G}^3/3"
},
{
"math_id": 20,
"text": "n = 4"
},
{
"math_id": 21,
"text": "\\ln\\big(-\\ln[1 - Y(t)]\\big) = \\ln K + n \\ln t,"
},
{
"math_id": 22,
"text": "K"
},
{
"math_id": 23,
"text": "\\ln{\\left(\\ln{\\tfrac{1}{1-Y}}\\right)}"
},
{
"math_id": 24,
"text": "\\ln{K}"
},
{
"math_id": 25,
"text": "Y"
},
{
"math_id": 26,
"text": "t_X"
},
{
"math_id": 27,
"text": "Kt_X^n \\sim 1"
},
{
"math_id": 28,
"text": "t_X \\sim \\frac{1}{\\left(\\dot{N}\\dot{G}^3\\right)^{1/4}},"
},
{
"math_id": 29,
"text": "\\dot{G} t_X"
},
{
"math_id": 30,
"text": "\\text{crystallite linear size} \\sim \\dot{G} t_X \\sim \\left(\\frac{\\dot{G}}{\\dot{N}}\\right)^{1/4},"
},
{
"math_id": 31,
"text": "n = 3"
},
{
"math_id": 32,
"text": "n = D + 1"
},
{
"math_id": 33,
"text": "D"
}
]
| https://en.wikipedia.org/wiki?curid=7381179 |
73813366 | Ecohydraulics | Interdisciplinary research field
Ecohydraulics is an interdisciplinary science studying the hydrodynamic factors that affect the survival and reproduction of aquatic organisms and the activities of aquatic organisms that affect hydraulics and water quality. Considerations include habitat maintenance or development, habitat-flow interactions, and organism responses. Ecohydraulics assesses the magnitude and timing of flows necessary to maintain a river ecosystem and provides tools to characterize the relation between flow discharge, flow field, and the availability of habitat within a river ecosystem. Based on this relation and insights into the hydraulic conditions optimal for different species or communities, ecohydraulics-modeling predicts how hydraulic conditions in a river change, under different development scenarios, the aquatic habitat of species or ecological communities. Similar considerations also apply to coastal, lake, and marine eco-systems.
In the past century, hydraulic engineers have been challenged by habitat modeling, complicated by lack of knowledge regarding ecohydraulics. Since the 1990s, especially after the first International Symposium on Ecohydraulics in 1994, ecohydraulics has developed rapidly, mainly to assess the impacts of human-induced changes of water flow and sediment conditions in river ecosystems...
Ecohydraulics analyzes, models, and seeks to mitigate the adverse impacts of changes in hydraulic characteristics caused by dam construction and other human activities, on the suitability of habitat for organisms, such as fish and invertebrates, and to predict changes in biological communities and biodiversity. Many articles report research findings about fluvial ecohydraulics. For example, the International Association for Hydro-Environment Engineering and Research (IAHR) and Taylor & Francis have been publishing the Journal of Ecohydraulics since 2016. The journal spans all topics in natural and applied ecohydraulics in all environmental settings.
Key Concepts.
An aquatic ecosystem is defined as a community of aquatic organisms, with the species dependent on each other and on their physical-chemical environment and linked through flows of energy and materials. The distribution patterns of species are affected by the spatial and temporal characteristics of water flow.
Flow velocity affects the delivery of food and nutrients to organisms. It can also dislodge organisms and prevent them from remaining at a site. Some vertebrates and invertebrates, such as the shellfish "Corbicula fluminea", filter their food through flowing water. Flow velocity and turbulence are critical to the life activities of many species. For example, some fish migrate and some fish spawn when they detect high flows. However, extremely high flow velocity, or high intensities of turbulence, created by hydraulic engineering infrastructure can exert pressure on most fish and invertebrates and even kill them. When the flow velocity is below 0.1 m/s, the biological community in a river is similar to that in a lake. Usually, in rivers, flow velocity between 0.1–1 m/s is most suitable for major-stream fish species.
High flow velocity and turbulence are cues for timing migration and spawning of some fish. Asian carp lay floating eggs when they sense increasing discharge resulting from a spring flood flow. The settling velocity of the eggs varies in the range of 0.7-1.5 cm/s. Once a carp egg settles on the riverbed, the egg cannot hatch. Only if flow velocity exceeds the settling velocity can an egg remain in suspension and complete incubation within 24–40 hours.
Golden mussels ("Limnoperna fortunei") are an invasive filter-feeding macro-invertebrate species. Dense attachment of the species to the boundaries of water-transfer tunnels and pipelines results in biofouling, causing high resistance to water flow and damage to pipeline walls. This consequence, along with the decay of dead mussels, harms water quality. Golden mussel larvae can be killed by high-frequency turbulence and increased flow velocity. Experiments show that the larvae can be killed in a flow field with velocities in excess of 0.08-0.15 m/s and a turbulence frequency higher than 30 Hz. Preliminary results have shown that the higher the turbulence intensity, the higher the mortality of golden mussel larvae. On the other hand, low vertical mixing or turbulence is a key factor in favoring the development of harmful algal blooms.
Reservoirs are operated according to the requirements of power generation, water supply, navigation, and, in recent decades, environmental flows. Thus, the time and magnitude of peak discharge of floods may change, which thereby affect the life cycle and habitat of aquatic bio-communities. Most faunal species in a river cannot adapt to the non-natural change of flow and disappear from the reach downstream of the dam. Fish stranding caused by reservoir operation has occurred downstream of hydropower stations in many countries. A hydro-power dam, such as Fengshuba Dam on the East River, China, releases water suddenly during daytime and shuts off at night to meet an unsteady power demand. The instantaneous fluctuation in flow discharge and velocity kills most species except for those (e.g., the small shrimp, "Palaemonidae") that can hide in crevices in riverbed sediment.
Water depth is crucial for large fauna. The habitats created by shallow rapids of small rivers in mountainous areas typically suit invertebrates and small vertebrates. Only mountain streams with many deep pools can have medium-sized creatures such as rainbow trout. White-flag dolphin, Chinese sturgeon, and finless porpoise require the water depths associated with the middle and lower reaches of the Yangtze River, where there is sufficient water depth for them to grow and hide. On the other hand, few animals can live in the lower layers of deep lakes and reservoirs, because of low dissolved-oxygen (DO) concentration.
Temperature is an important factor for many species. Salmon can only survive in cold water rivers. The Mississippi and Yangtze rivers are not suitable for salmon due to high temperatures. However, aquatic insects grow and develop more rapidly in tropical and subtropical rivers than in temperate rivers. Some species may complete two or more generations per year at warmer sites yet only one or fewer at cooler sites. Some dragonfly species on the Tibetan Plateau live for more than ten years in cold water before attaining sexual maturity and eclosion.
Variability of hydraulic characteristics is essential for biodiversity. A wide variety of flow velocities, water depths, and temperatures, both spatial and temporal, are needed to maintain high levels of biodiversity in aquatic ecosystems.
Eutrophication refers to the enrichment of a water body by nutrients to a level that results in algal blooms, deterioration in water quality, and undesirable disruption to the balance of an aquatic ecosystem. Eutrophication and algal blooms occur in rivers, lakes, estuaries, coastal, and marine waters. Algal blooms in lakes and coastal waters may lead to massive fish kills. The onset and the risk of algal blooms are closely related to the hydraulic flow and vertical turbulent mixing processes. This relationship has been shown by a real-time forecasting and warning system established to monitor algal and DO dynamics. Monitoring shows that diurnal DO fluctuations mirror the algal biomass. Algae of high density can increase fluid viscosity by more than 100%. Real-time monitoring and early warning systems can help with adaptive management to mitigate the harmful effects of massive algal blooms.
Emergent vegetation (e.g., reeds and bulrushes) on floodplains and riparian wetlands imposes significant resistance to overbank flow. The resistance of emergent vegetation is so great that the resistance coefficient in the equations of hydraulics requires adjustment. For instance, the Manning's n increases tenfold as flow depth increases from 0.03 to 0.5 m, mainly due to emergent vegetation. Emergent and submerged vegetation change the turbulence structure and sediment transportation, and may cause these quantities to vary with flow velocity over a floodplain.
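As a rough illustration of why such an increase in roughness matters, the sketch below applies Manning's equation (a standard open-channel relation, not derived in this article) with assumed depth, slope, and roughness values consistent with the tenfold increase in Manning's n mentioned above.

```python
# Hedged sketch: effect of vegetation-increased Manning's n on overbank flow
# velocity, V = (1/n) * R^(2/3) * S^(1/2) in SI units.  Depths, slope and n
# values are assumed for demonstration only.
slope = 0.001                       # floodplain energy slope (assumed)
cases = [
    (0.03, 0.035),                  # shallow flow, low roughness (assumed)
    (0.50, 0.35),                   # deeper flow fully engaging emergent stems (~10x n, assumed)
]
for depth, n in cases:
    R = depth                       # wide, shallow overbank flow: hydraulic radius ≈ depth
    V = (1.0 / n) * R ** (2.0 / 3.0) * slope ** 0.5
    print(f"depth={depth} m, n={n}: V ≈ {V:.3f} m/s")
```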
Aquatic animals may change flow and sediment transportation. Initiation of motion for sediment and transportation are affected by salmonid spawning. Clustering of bed gravel is important to embryo survival of the species. The spawning fish move the riverbed pebbles and bury their eggs underneath and the egg burial depth tends to be just deeper than the observed scour depth. The species has adapted its egg placement strategy to the process of flood scouring. Beavers may construct wood dams across small streams and the beaver dams alter the hydrological process and hydraulic characteristics of a stream. Invasion of Zebra mussels and Golden mussels into pipelines, such as the cooling water pipeline of a hydro-power plant, can block pipelines and hamper power generation.
Habitat is an area where plants or animals normally live, grow, feed, reproduce, and otherwise exist for any portion of their life cycle. Because each species responds differently to environmental and biotic conditions, the term habitat is specific to a species, and in more general terms, specific to guilds of species; for example, 'fish habitat' is specific to fish. Hydraulic attributes are considered to be the most important features of habitat for almost all organisms in rivers. The biological diversity and species abundance in streams depend on the diversity of available habitat. The slope, planform, confinement, and cross-sectional shape and dimensions of a stream, and the grain-size distribution of bed sediment affect aquatic habitat. Under less disturbed situations, a narrow, steep-walled cross section provides less physical area for habitat than does a wide cross section. A steep, confined stream is a high-energy environment that may limit the occurrence, diversity, and stability of habitat.
Substrate is a general term that refers to all material that constitutes a riverbed or stream bed, which in most cases mainly comprise sediment. Stream-bed and bank erosion, sediment transportation, and deposition are among the most important factors that affect aquatic habitat. Stable streams are streams with a stable channel bed, which normally features energy-dissipation structures and little bed-load motion (transport of particles from a bed). Such streams have the best habitat for fish and benthic invertebrates. Incised streams are streams experiencing channel bed erosion, which provide the second-best habitat. Streams with intensive bed-load motion and sedimentation provide bad habitat for organisms. The taxa richness or biodiversity of these different types of rivers varies extremely because of different magnitudes of erosion, sedimentation, and sediment transportation. A uniform sand bed in a stream provides less potential habitat diversity than a bed with a step-pool system, boulder cascades, rapids, pool-riffle sequences, or other types of "bed structures" because of the resting places such feature provide.
Hyporheic zone is a layer of substrate on the riverbed in which benthic animals normally live or exist for any portion of their life cycle. Animals in the hyporheic zone usually are protected from severe washouts and temperature extremes. Other species prefer the stream bed surface for its higher DO concentration, direct contact with flowing water, and high food availability. Macro-invertebrates inhabit a sediment bed layer with a thickness of about 40–55 cm in gravel beds, 60 cm in cobble beds, 10–30 cm in coarse sand beds, and 5–10 cm in fine sand beds. The thickness of the zone in clay and silt beds is about 30 cm because the bed is relatively soft; some macro-invertebrates can move within the fluid mud layer.
Environmental Flows are defined as the quantity, timing, and quality of freshwater flows and levels necessary to sustain aquatic ecosystems which, in turn, support human cultures, economies, sustainable livelihoods, and well-being. The natural flow regime plays a critical role in sustaining native biodiversity and ecosystem integrity in rivers. The concepts and terminology vary across countries, such as minimum flow, environmental flow regime, environmental water, and ecological flows. In the 2010s, assessments of environmental flows at the basin scale evolved greatly with the application of habitat-based or holistic methods to balance environmental flows and water uses, e.g. agriculture and hydropower, in water planning at the watershed or river-basin scale. In addition, some methodologies of water planning evaluate performance in river systems using stress tests, which consider the uncertainty associated with climate and global change, and evaluate the feasibility of balancing environmental flows and other water uses. For instance, several irrigation schemes were being considered for development of the Kilombero River Basin, Tanzania; it was determined what quantity of water could be abstracted from the river without degrading the ecological condition.
Basic Principles and Models.
High habitat diversity supports high biodiversity. Or stated alternatively, biodiversity depends on habitat diversity, which is defined as the diversity of habitat types suitable for different bio-communities. The physical conditions of stream habitats depend mainly on the following factors: 1) substrate; 2) water depth; and 3) flow velocity. Different physical conditions support different bio-communities, so diversified physical conditions may support diversified bio-communities. Habitats with flow velocity less than 0.3 m/s are suitable for species that swim slowly. Habitats with flow velocity higher than 1 m/s are suitable for species that like high flow velocities. Fish species diversity and richness are strongly related to the combination of the effects of substrate, velocity, and depth, which can be represented by the Habitat Diversity Index. Field investigations have shown that a stream with different substrates is suitable for a large variety of invertebrate species and has a high biodiversity. The species richness, or number of species, "S," is proportional to the habitat diversity index.
Cut-off of connections of habitats impairs ecology. Connections of habitats are essential for complex bio-communities and high biodiversity. Cut-off of the connections with artificial dams or locks reduces biodiversity and undermines the bio-communities. Some projects are intended to restore the connections of habitats.
The Yangtze River once connected thousands of riparian lakes in its middle and lower reaches, thereby forming a complex habitat system. Water flowed from the river to the lakes during the rising stage of floods and vice versa during the recession stage of floods. The river had high biodiversity and was home to 400 species of fish, 3 species of whales, and numerous species of amphibians, reptiles, birds, and invertebrates. The connection between the upper reaches and the middle and lower reaches, and the connections between the river and riparian lakes have been cut-off to reduce the cost of levee construction and to promote fish farming, resulting in the fragmentation of the complex habitats. Investigations have shown that cutting the connections has reduced the numbers of macroinvertebrate species by 60% and fish species by 40-50% in the lakes. There are 101 fish species in Poyang Lake, which remains connected to the Yangtze River, but only 57 and 47 fish species in Honghu Lake and Zhangdu Lake, respectively, which have been cut off from the river. Experiments have shown that a substantial reduction in the number of species and the abundance of macro-invertebrates occurs within 4 months after a riparian wetland is isolated from the river.
Resilience refers to an ecosystem's stability and capability to tolerate disturbances and restore itself. The resilience of an ecosystem involves both the process and the outcome of successfully adapting to ecological stresses, and the ability to maintain its normal patterns of biomass production after being subjected to damage. If a disturbance is of sufficient magnitude or duration, a threshold may be reached where the ecosystem undergoes a regime shift, possibly permanently. Ecological projects, in some sense, are designed to enhance the resilience of ecosystems, reduce the time required for the ecosystem to return to an equilibrium, and increase the ecosystem's capacity to absorb disturbances and reorganize. A new paradigm in river and coastal management is evolving that incorporates ecological enhancement, recreation, and aesthetics, as well as compliance with strict environmental protection legislation. These complex projects require extensive data and simulation tools to assist decision makers and communities in selecting management strategies which offer the maximum benefits, whilst preserving and enhancing the ecological integrity of the river system.
Models, especially numerical models often are needed. A common approach to habitat studies is to apply numerical hydraulic modeling with the models included in PHABSIM. This approach is based on a one‑dimensional hydraulic characterization of a limited river reach under steady flow conditions. The model was tested to assess its capability to evaluate suitable habitat for Pacific and Atlantic salmon spawning and the results showed that the model works well for this lifestage, as spawning involves adult fish and is tightly coupled with hydrogeomorphology.
Vegetation affects the turbulence intensity and turbulence structure. Modeling of the dynamic process of vegetative succession describes the relation between the hydraulic characteristics of flood disturbances and the colonization and succession processes of vegetation on sediment bars and floodplains. The model is composed of modules for hydraulic, wood, and herbaceous plants, and soil nutrients. The model's hydraulic module simulates the processes of flood inundation, flushing, and sedimentation. The timing and locations of plant recruitment use the characteristics of a flood. The mortality of plants at each location during a flood is estimated from surface erosion rates obtained from a hydrodynamic model.
The gap between the existing model technology and the requirements of modeling the whole aquatic ecosystem on a wide range of spatial and temporal scales requires investigation. Physical habitat models are particularly useful for assessing the impact of hydropower projects, analyzing the effects of water abstraction on river ecology, and determining the minimum flow requirements of aquatic populations.
As mentioned above, hydraulic variables profoundly affect habitat utilization by biota. Fluvial habitat suitability curves have been developed over the past forty years. Habitat suitability models are applied to evaluate the ability of a habitat to support a particular species. Fish behavior has also been analyzed for microhabitat design using meter-resolution two-dimensional (2D) microhabitat modeling.
Suitability indices are the core of habitat modeling, which may be illustrated for the Chinese sturgeon. The life cycle of the Chinese sturgeon in the Yangtze River mainly comprises spawning, hatching, and maturation. Brood fish seek suitable spawning sites and adhere fertilized eggs to stones, which hatch after about 120 to 150 h. Juvenile sturgeon swim to the East China Sea and stay there until they reach maturity. Ten aquatic eco-factors influence the habitat suitability of the Chinese sturgeon: 1) water temperatures for adults and juveniles (V1, °C); 2) water depth for adults (V2, m); 3) substrate for adults (V3); 4) water temperature for spawning (V4, °C); 5) water depth for spawning (V5, m); 6) substrate for spawning and hatching (V6); 7) water temperature during hatching (V7, °C); 8) flow velocity during spawning (V8, m/s); 9) suspended sediment concentration during spawning (V9, mg/L); and 10) the ratio of estimated brood sturgeon to eggs-predatory fish (V10). The ratio, V10, is important because 90% of eggs suffer predation. The Habitat Suitability Index (HSI) is given by
formula_0
Figure 1 shows suitability curves for the ten eco-factors. Using these curves, the Habitat Suitability Index (HSI) was calculated, in which the velocity, depth, temperature, and substrate were estimated using a two-dimensional model of hydraulics and sediment movement. The habitat suitability HSI ranges from 0 (unsuitable) to 1 (optimal). Yi et al. indicated that the space and time suitable for spawning were reduced after the completion of the Three Gorges Dam in 2003. The model proved that reservoir operation revised to mimic the natural flow regime would enhance habitat suitability.
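A minimal sketch of evaluating the index from already-scored eco-factors is shown below; the ten suitability values are assumed example numbers rather than data from the Yangtze studies, and each is taken to lie in [0, 1] as produced by its suitability curve.

```python
# Hedged sketch (assumed example scores): combining the ten eco-factor
# suitabilities V1..V10 into the Habitat Suitability Index quoted above.
def hsi(V1, V2, V3, V4, V5, V6, V7, V8, V9, V10):
    return min(min(V1, V2, V3),
               min(V4, V5, V6),
               V10 * min(V6, V7, V8, V9))

scores = dict(V1=0.9, V2=0.8, V3=0.7, V4=0.95, V5=0.85,
              V6=0.6, V7=0.9, V8=0.75, V9=0.8, V10=0.5)
print("HSI =", hsi(**scores))   # here limited by the predation-ratio term V10*min(...)
```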
Applications.
Construction of dams has caused insurmountable obstacles to migratory fish. At least 1/5 of the world's 9000 species of freshwater fish have disappeared due to dams. This proportion is even higher in rivers with more dams, which is 2/5 in the United States and 3/4 in Germany. More than 130 dams have been built on the Columbia River and its tributaries, blocking salmon spawning upstream, resulting in a fishery loss of $6.5 billion between 1960 and 1980.
A fish ladder is designed to help the migrating fish and brood fish to cross a dam to the upstream spawning ground, and a fish pass helps the juvenile fish to cross the dam to the downstream and the sea. Fish ladders and passes can be designed separately or can be combined into one channel. The main concept of fish ladder design is to create extremely high resistance, letting the water from upstream to downstream of the dam flow at a low velocity while maintaining a large depth. The design of the inlet and outlet of the fish ladder is critical. If the downstream outlet velocity is too high, fish cannot swim into the fish ladder. If the outlet flow velocity is too low, the fish cannot determine whether it leads to the upstream spawning ground. Also important is turbulence along a ladder or pass.
The earliest fish ladder was constructed by Denil in 1909. The ladder consists of a series of baffles positioned on the walls and floor of a channel, all of which enable upstream-moving brood fish, specifically Atlantic salmon, to bypass weirs and small dams. Generally, a fish ladder maximizes energy dissipation and reduces flow velocity; the shape and position of the baffles create a secondary outward circulation of flow, producing a momentum transfer from the central portion of the channel towards the walls.
Research has been done regarding Denil's fish ladder, focusing mainly on refining the baffles. Additionally, research concentrates on understanding organism response to the hydrodynamics (flow velocity and turbulence) under experimental settings. Attention has been paid to the turbulence intensity, eddy size, and hydrodynamic drag in fishways. On the other hand, fish biologists have worked closely with hydraulic engineers to understand how fish respond to complex fluid dynamics. Humans have created a variety of fish ladders, such as the submerged jet from a vertical gap type, which is suitable for large fish; step-pool type and submerged window type, which is suitable for medium fish; and overflow weir type, which is suitable for small fish (Figure 2).
Figure 2 Various types of fish ladder
In 2004, a fish ladder was built for the brood fish to bypass the Itaipu Dam on the Parana River, and for juvenile fish to pass down the river. The maximum flow velocity was less than 3 m/s. At the initial stage, the flow discharge used for trapping fish was 20 m3/s, and when the fish were swimming into the passage channel, the flow discharge dropped to 11.4 m3/s.
The most successful fish passages in the world are the fish ladders and fish passes bypassing the eight dams on the Columbia River. The U.S. government legislates that dams on the Columbia River must be built with a fishway. Bonneville Dam is the most downstream dam on the river, with a height of 60 m. The fish ladder was designed as a series of "cabins" using vertical gap jet diffusion and energy dissipation. Since the 1930s, a yearly average of 721,000 brood fish have crossed the dam and entered an upstream spawning ground.
Reservoir operation: Since 2010, the Three Gorges Reservoir has been operated to promote spawning of the Asian carp. In June 2011, the discharge from the reservoir increased by 2,000 m3/s every day, and the flow velocity and turbulence intensity increased continuously for 5 days. Stimulated by the flood flow, brood fish gathered downstream and spawned. In 2022, the reservoir increased the discharge from 12,800 m3/s on June 3 to 22,400 m3/s on June 8. The number of drifting eggs spawned by the carp in the Yichang-Yidu section increased by more than 400 million.
Artificial step-pools: In the past decades, artificial step-pools have been applied in mountain rivers to increase the habitat diversity, and thereby improve river ecology in Germany, Italy, the United States, Canada, Switzerland, Austria, and other countries. An experiment done in the Diaoga River in Yunnan, China, proved that artificial step-pools may create stable and diverse habitat with low velocity and deep-water pools and high velocity waterfalls. Thus, different species can find suitable habitat for survival and reproduction. "Myriophyllum" and "Periphyton" (forms of algae) grew on the riverbed, and the original white gravel bed was covered with green aquatic plants. The number of species of invertebrates doubled, and the number of individuals per unit area increased by 10 to 85 times. The artificial step-pool system created great resistance to flow and reduced debris-flow problems.
Elsewhere, step-pool systems are used. For example, Germany invested 400,000 Euros to build an artificial step-pool system on the Mangfall River, a tributary of the Inn River. Italy imitated a step-pool system and constructed a group of small dams with boulders, achieving significant results in stabilizing streams and restoring river ecology in northern mountain rivers. Artificial step-pool systems constructed on the Kleinschmidt River in Montana and on the Little Snake River in Wyoming restored salmon and rainbow trout habitats.
Wetland restoration: Channelization of the Kissimmee River in central Florida destroyed or degraded most of the fish and wildlife habitat once provided by the river and its floodplain wetlands. A subsequent project restored the river's biological resources from 1984 to 1989. The straight channel was re-meandered, flow velocity was reduced, and water stage increased. Reintroduction of flow through remnant river channels increased habitat diversity and led to favorable responses by fish and invertebrate communities. The habitat was restored, forming a shallow and wide "River of Grass" which flowed slowly across Everglades sawgrass toward mangrove estuaries in the Gulf of Mexico.
Restoration of habitat connectivity: Restoration of connectivity between habitats that became fragments mainly involves dredging and excavating channels to connect lakes, wetlands, and rivers and creating ecological corridors for aquatic animals. In 2012, the city of Wuhan, China, connected 20 lakes on the left bank of the Yangtze River. A channel with a maximum width of 60 m and a depth of 1.5 m was dug between the lakes. The city built the ecological network of the Great East Lake on the right bank of the Yangtze River, reconnecting six lakes to the river. During the project, pump stations were used to exchange water between the lakes and the river to improve the water quality. The reconnection of the fragmented habitat repaired the damaged ecosystem.
Case Studies.
Many examples can be given; a few follow.
Ecohydraulics for land development: Sand Motor (Netherlands). Ecohydraulics is increasingly used in the quest for nature-based solutions for sustainable development. A landmark example of building with nature, the "sand motor", was first implemented in the Netherlands in 2011 as a pilot project to provide an alternative way of depositing a large amount of sand along the shore to nourish the coast and safeguard the hinterland from erosion. A hook-shaped peninsula of about 21.5 million m3 of sand was constructed to protrude 1 km into the sea and cover about 2 km alongshore (Figure 3). By making use of natural processes such as waves, wind and tide to redistribute the sand, this innovative approach succeeded in limiting the disturbance of local ecosystems, while also providing new areas for nature and more types of recreation.
Figure 3 Creating land by natural processes to minimize ecosystem disruption: The Sand Motor after completion in July 2011 (left) and 5 years later in January 2016 (right)
Ecohydraulics for restoring habitat of migratory birds (South Korea). The Nakdong River estuary is regulated by a 2,400-meter-long dam built in 1987 to control the inflow of seawater into farmland and secure drinking and agricultural water for nearby regions, including Busan, Ulsan and South Gyeongsang Province (Figure 4). However, the biodiversity of the river had diminished since the establishment of the barrage; the stoppage of upstream seawater intrusion limited the supply of brackish water to the rice paddy fields which provided a natural habitat for migratory birds. A controlled partial gate-opening project was started in 2019 to restore and protect the biodiversity of the estuary, and by its third opening in July 2020, improvement was confirmed as the estuary's eco-species, including eels and anchovies, were found again in the waters upstream of the gates. A tidal flat formed toward the sea where sand and mud carried down the river accumulated, providing fertile soil that made the area agriculturally rich and restored the habitat for migratory birds.
Other examples of eco-hydraulics can also be found in the IAHR Media Library.
Figure 4 The Nakdong Estuary Dam traps freshwater and prevents salt water from flowing upstream, affecting the rice paddy habitat of migratory birds. Ecological restoration involves opening a small part of the dam using modern eco-hydraulics (Courtesy of K-water)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "HSI=\\min\\{\\min\\bigl(V_1,V_2,V_3),\\min(V_4,V_5,V_6),V_10\\min(V_6,V_7,V_8,V_9)\\}"
}
]
| https://en.wikipedia.org/wiki?curid=73813366 |
73825961 | Network Coordinate System | A Network Coordinate System (NC system) is a system for predicting characteristics such as the latency or bandwidth of connections between nodes in a network by assigning coordinates to nodes. More formally, it assigns a coordinate embedding formula_0 to each node formula_1 in a network using an optimization algorithm such that a predefined operation formula_2 estimates some directional characteristic formula_3 of the connection between node formula_4 and formula_5.
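As a rough illustration of the basic idea (a minimal sketch, not a description of any particular deployed NC system; the node names, latency values, dimensionality, and step size below are all invented for the example), the following Python snippet fits two-dimensional Euclidean coordinates to a few made-up pairwise latencies by stochastic gradient descent, so that the distance between two nodes' coordinates predicts the latency between them:
<syntaxhighlight lang="python">
import math
import random

# Hypothetical measured round-trip latencies (ms) between four nodes.
measured = {
    ("A", "B"): 20.0, ("A", "C"): 70.0, ("A", "D"): 65.0,
    ("B", "C"): 60.0, ("B", "D"): 55.0, ("C", "D"): 25.0,
}
nodes = ["A", "B", "C", "D"]

random.seed(0)
coords = {n: [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)] for n in nodes}

def predicted(a, b):
    # Euclidean prediction: distance between the two coordinate vectors.
    return math.dist(coords[a], coords[b])

step = 0.01
for _ in range(20000):
    (a, b), rtt = random.choice(list(measured.items()))
    dist = predicted(a, b) or 1e-9
    err = dist - rtt                    # positive if the prediction is too long
    unit = [(coords[a][i] - coords[b][i]) / dist for i in range(2)]
    for i in range(2):                  # move both endpoints to reduce the error
        coords[a][i] -= step * err * unit[i]
        coords[b][i] += step * err * unit[i]

for (a, b), rtt in measured.items():
    print(f"{a}-{b}: measured {rtt:5.1f} ms, predicted {predicted(a, b):5.1f} ms")
</syntaxhighlight>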
Uses.
In general, Network Coordinate Systems can be used for peer discovery, optimal-server selection, and characteristic-aware routing.
Latency Optimization.
When optimizing for latency as a connection characteristic, i.e. for low-latency connections, NC systems can potentially help improve the quality of experience for many different applications such as:
Bandwidth Optimization.
NC systems can also optimize for bandwidth (although not all designs can accomplish this well). Optimizing for high-bandwidth connections can improve the performance of large data transfers.
Sybil Attack Detection.
Sybil attacks are of much concern when designing peer-to-peer protocols. NC systems, with their ability to assign a location to the source of traffic, can aid in building systems that are Sybil-resistant.
Design Space.
Landmark-Based vs Decentralized.
Almost any NC system variant can be implemented in either a landmark-based or fully decentralized configuration. Landmark-based systems are generally secure so long as none of the landmarks are compromised, but they aren't very scalable. Fully decentralized configurations are generally less secure, but they can scale indefinitely.
Alternatives.
Network Coordinate Systems are not the only way to predict network properties. There are also methods such as iPlane and iPlane Nano which take a more analytical approach and try to mechanistically simulate the behavior of internet routers to predict by what route some packets will flow, and thus what properties a connection will have.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec c_n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\vec c_a \\otimes \\vec c_b \\rightarrow d_{ab}"
},
{
"math_id": 3,
"text": "d_{ab}"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "d_{ab} = ||\\vec c_a - \\vec c_b||"
},
{
"math_id": 8,
"text": "a \\rightarrow b"
},
{
"math_id": 9,
"text": "b \\rightarrow a"
},
{
"math_id": 10,
"text": "(a \\rightarrow b) + (b \\rightarrow c) \\geq (a \\rightarrow c)"
},
{
"math_id": 11,
"text": "X : \\R_{n \\times n}"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "n_i"
},
{
"math_id": 15,
"text": "n_j"
},
{
"math_id": 16,
"text": "d_{ab} = \\vec u_a \\vec v_b"
},
{
"math_id": 17,
"text": "\\vec u_n"
},
{
"math_id": 18,
"text": "\\vec v_n"
},
{
"math_id": 19,
"text": "X"
},
{
"math_id": 20,
"text": "U : R_{n \\times r}"
},
{
"math_id": 21,
"text": "V : R_{r \\times n}"
},
{
"math_id": 22,
"text": "UV \\approxeq X"
},
{
"math_id": 23,
"text": "\\vec u_j"
},
{
"math_id": 24,
"text": "\\vec v_j"
},
{
"math_id": 25,
"text": "U"
},
{
"math_id": 26,
"text": "V"
},
{
"math_id": 27,
"text": "d_{ij} = \\vec u_i \\vec v_j"
},
{
"math_id": 28,
"text": "\\vec u_i"
}
]
| https://en.wikipedia.org/wiki?curid=73825961 |
73829878 | Spherical Collapse Model | Dark matter halo formation model
The spherical collapse model describes the evolution of nearly homogeneous matter in the early Universe into collapsed virialized structures - dark matter halos. This model assumes that halos are spherical and dominated by gravity which leads to an analytical solution for several of the halos' properties such as density and radius over time.
The framework for spherical collapse was first developed to describe the infall of matter into clusters of galaxies. At this time, in the early 1970s, astronomical evidence for dark matter was still being collected, and it was believed that the Universe was dominated by ordinary, visible matter. However, it is now thought that dark matter is the dominant species of matter.
Derivation and key equations.
The simplest halo formation scenario involves taking a sufficiently overdense spherical patch of the early Universe, which we call a proto-halo (e.g., Desjacques et al. 2018), and tracking its evolution under the effect of its self-gravity. Once the proto-halo has collapsed and virialized, it becomes a halo.
Since the matter outside this sphere is spherically symmetric, we can apply Newton's shell theorem or Birkhoff's theorem (for a more general description), so that external forces average to zero and we can treat the proto-halo as isolated from the rest of the Universe. The proto-halo has a density formula_0, mass formula_1, and radius formula_2 (given in physical coordinates) which are related by formula_3.
To model the collapse of the spherical region, we can either use Newton's law or the second Friedmann equation, giving
formula_4
The effect of the accelerated expansion of the Universe can be included if desired, but it is a subdominant effect.
The above equation admits the explicit solution
formula_5
where formula_6 is the maximum radius, assumed to occur at time formula_7, and formula_8 is the quantile function of the Beta distribution, also known as the inverse function of the regularized incomplete beta function formula_9. The time
formula_10
is the free-fall time, where formula_11.
Long before the derivation of this explicit solution, the spherical collapse equation had been known to admit a parametric solution
formula_12
in terms of a parameter formula_13. The origin of time, formula_7, now occurs at a vanishing radius, and the time increases with increasing formula_13. The coefficients formula_14 are given by the energy contents of the sphere (cf. equation 5.89 in Dodelson et al.). Initially the sphere expands at the rate of the Universe (formula_15), but then it slows down, turns around (formula_16), and ultimately collapses (formula_17).
If we split the density into a background formula_18 and perturbation formula_19 by formula_20, we can solve for the fully nonlinear perturbation
formula_21
Initially formula_22, at the turn-around point formula_23, and at collapse formula_24.
Alternatively, if one considers linear perturbations, or equivalently small times formula_25, the above equation gives us an expression for linear perturbations
formula_26
We can then extrapolate the linear perturbation into nonlinear regimes (more on the usefulness of this below). At turn-around formula_27 and at collapse we get the spherical collapse threshold
formula_28
Although the halo does not physically have an overdensity of 1.69 at collapse, the above collapse threshold is nevertheless useful. It tells us that if we model the initial (linear) density field and extrapolate it into the future, any region where formula_29 can be thought of as a collapsing region that will form a halo.
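The numerical values quoted above can be reproduced with a short check. The Python sketch below evaluates the parametric solution at turn-around and collapse; for the linear extrapolation it assumes an Einstein–de Sitter background, in which the linear perturbation grows proportionally to t^(2/3), so that the small-θ expression above extrapolates to δ_L = (3/20)[6(θ − sin θ)]^(2/3):
<syntaxhighlight lang="python">
import math

def delta_nonlinear(theta):
    # fully nonlinear overdensity: 1 + delta = (9/2)(theta - sin theta)^2 / (1 - cos theta)^3
    return 4.5 * (theta - math.sin(theta)) ** 2 / (1.0 - math.cos(theta)) ** 3 - 1.0

def delta_linear(theta):
    # linear overdensity extrapolated with Einstein-de Sitter growth (proportional to t^(2/3));
    # reduces to (3/20) theta^2 for small theta
    return 0.15 * (6.0 * (theta - math.sin(theta))) ** (2.0 / 3.0)

print("turn-around: delta   =", round(delta_nonlinear(math.pi), 2))   # ~4.55
print("turn-around: delta_L =", round(delta_linear(math.pi), 2))      # ~1.06
print("collapse:    delta_L =", round(delta_linear(2 * math.pi), 3))  # ~1.686
</syntaxhighlight>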
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\rho=M/(4\\pi r^3/3)"
},
{
"math_id": 4,
"text": "\\ddot r^2 = -\\frac{G M}{r^2} ~."
},
{
"math_id": 5,
"text": "r(t)=r_0~Q\\left(1-\\frac{t}{t_{\\text{ff}}};\\frac{3}{2},\\frac{1}{2}\\right) ~,"
},
{
"math_id": 6,
"text": "r_0=r(0)"
},
{
"math_id": 7,
"text": "t=0"
},
{
"math_id": 8,
"text": "Q(x;\\alpha,\\beta)"
},
{
"math_id": 9,
"text": "I_x(\\alpha,\\beta)"
},
{
"math_id": 10,
"text": "t_{\\text{ff}}=\\pi\\sqrt{\\frac{r^3_0}{8GM}}=\\sqrt{\\frac{3\\pi}{32G\\rho_0}}~,"
},
{
"math_id": 11,
"text": "r(t_{\\text{ff}})=0"
},
{
"math_id": 12,
"text": " r(\\theta) = A (1 - \\cos\\theta) ~,\n\\qquad\nt(\\theta) = B(\\theta - \\sin\\theta) ~,\n"
},
{
"math_id": 13,
"text": "\\theta"
},
{
"math_id": 14,
"text": "A,B"
},
{
"math_id": 15,
"text": "\\theta=0"
},
{
"math_id": 16,
"text": "\\theta=\\pi"
},
{
"math_id": 17,
"text": "\\theta=2 \\pi"
},
{
"math_id": 18,
"text": "\\bar \\rho "
},
{
"math_id": 19,
"text": "\\delta "
},
{
"math_id": 20,
"text": "\\rho(\\theta) = \\bar\\rho\\left[1+\\delta(\\theta)\\right] "
},
{
"math_id": 21,
"text": " 1+\\delta = \\frac{9}{2}\\frac{(\\theta - \\sin\\theta)^2}{(1 - \\cos\\theta)^3}~.\n"
},
{
"math_id": 22,
"text": "\\delta=0"
},
{
"math_id": 23,
"text": "\\delta \\approx 4.55"
},
{
"math_id": 24,
"text": "\\delta = \\infty "
},
{
"math_id": 25,
"text": "\\theta\\ll 1"
},
{
"math_id": 26,
"text": " \\delta_L = \\frac{3}{20} \\theta^2 ~.\n"
},
{
"math_id": 27,
"text": " \\delta_L=1.06"
},
{
"math_id": 28,
"text": "\\delta_L \\approx 1.69 ~."
},
{
"math_id": 29,
"text": "\\delta_L \\geq 1.69 "
}
]
| https://en.wikipedia.org/wiki?curid=73829878 |
73842665 | Stewartson layer | In fluid dynamics, a Stewartson layer is a thin cylindrical shear layer that connects two differentially rotating regions in the radial direction, namely the regions inside and outside the cylinder. The Stewartson layer, typically, also connects different Ekman boundary layers in the axial direction. The layer was first identified by Ian Proudman and was first described by Keith Stewartson. This layer should be compared with the Ekman layer which occurs near solid boundaries.
Structure.
The Stewartson layer is not elementary but possesses a complex structure and emerges when the relevant Ekman number is formula_0; here formula_1 is the kinematic viscosity, formula_2 and formula_3 are the characteristic scales for the angular speed and length. The fundamental balance that occurs in the Stewartson shear layer is between Coriolis forces and viscous forces.
Spherical geometry.
For simplicity, consider the example of two concentric spheres that rotate about a common axis with slightly different angular velocities. The fluid domain corresponds to the annular region. In this problem, the Stewartson layer emerges as a cylinder formula_4 circumscribing the inner sphere with its generators lying parallel to the rotation axis. Outside formula_4, the fluid rotates as a solid body with the speed of the outer sphere. Inside formula_4 (in the annular region), the fluid again rotates as a solid body, except near the inner and outer sphere walls, where Ekman boundary layers of thickness formula_5 are set up that adjust the flow from uniform rotation to the respective rotation rates of the solid walls. Across formula_4, there is a jump in the azimuthal velocity, and on formula_4 there is an axial flow connecting the two Ekman layers. The structure of formula_4 is the Stewartson layer.
The Stewartson layer consists of two outer layers, one on the inner side of formula_4 with a thickness formula_6 and one on the outer side of formula_4 with a thickness formula_7; these outer layers flank a thin inner layer of thickness formula_8. The differential rotation between the inside and outside of formula_4 is smoothed out in the outer layers (primarily in the outer layer lying on the outer side of formula_4). The adjustment of azimuthal motion in the outer layers induces a secondary axial flow. The inner layer becomes necessary partly to accommodate this induced axial motion and partly to accommodate the transport of flow from one Ekman boundary layer to the other (from the Ekman layer on the faster-rotating sphere to that on the slower one). Note that the thickness of the Ekman layer is formula_5, which is much smaller than that of the inner Stewartson layer. In the inner layer, the change in the azimuthal velocity is very small, because the outer layers have already smoothed out the jump in the azimuthal velocity. In addition, the outer layers (again primarily the outer layer lying on the outer side of the cylinder) also transport flow axially from the faster-rotating sphere to the slower one.
Cylindrical geometry.
In cylindrical geometries, both outer layers have thickness formula_7 and the inner layer has thickness formula_8.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Ek}=\\nu/\\Omega L^2\\ll 1"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "\\mathcal{D}"
},
{
"math_id": 5,
"text": "\\mathrm{Ek}^{1/2}"
},
{
"math_id": 6,
"text": "\\mathrm{Ek}^{2/7}"
},
{
"math_id": 7,
"text": "\\mathrm{Ek}^{1/4}"
},
{
"math_id": 8,
"text": "\\mathrm{Ek}^{1/3}"
}
]
| https://en.wikipedia.org/wiki?curid=73842665 |
7385565 | Thue number | In the mathematical area of graph theory, the Thue number of a graph is a variation of the chromatic index, defined by and named after mathematician Axel Thue, who studied the squarefree words used to define this number.
Alon et al. define a "nonrepetitive coloring" of a graph to be an assignment of colors to the edges of the graph, such that there does not exist any even-length simple path in the graph in which the colors of the edges in the first half of the path form the same sequence as the colors of the edges in the second half of the path. The Thue number of a graph is the minimum number of colors needed in any nonrepetitive coloring.
Variations on this concept involving vertex colorings or more general walks on a graph have been studied by several authors including Barát and Varjú, Barát and Wood (2005), Brešar and Klavžar (2004), and Kündgen and Pelsmajer.
Example.
Consider a pentagon, that is, a cycle of five vertices. If we color the edges with two colors, some two adjacent edges will have the same color x; the path formed by those two edges will have the repetitive color sequence xx. If we color the edges with three colors, one of the three colors will be used only once; the path of four edges formed by the other two colors will either have two consecutive edges of the same color or will form the repetitive color sequence xyxy. However, with four colors it is not difficult to avoid all repetitions. Therefore, the Thue number of "C"5 is four.
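This case analysis can be confirmed by exhaustive search. The following Python sketch (the encoding of the cycle and the enumeration are only illustrative) checks every edge coloring of "C"5 with a given number of colors against both kinds of repetitive paths, namely paths of two consecutive edges and paths of four consecutive edges:
<syntaxhighlight lang="python">
from itertools import product

def nonrepetitive(coloring):
    """True if no simple path of C5 (a contiguous arc of edges) is repetitively colored."""
    n = len(coloring)
    for i in range(n):                                   # paths of 2 edges: pattern xx
        if coloring[i] == coloring[(i + 1) % n]:
            return False
    for i in range(n):                                   # paths of 4 edges: pattern xyxy
        arc = [coloring[(i + k) % n] for k in range(4)]
        if arc[0] == arc[2] and arc[1] == arc[3]:
            return False
    return True

for k in (2, 3, 4):
    exists = any(nonrepetitive(c) for c in product(range(k), repeat=5))
    print(f"{k} colors: nonrepetitive edge coloring of C5 exists? {exists}")
# prints False, False, True -- consistent with the Thue number of C5 being 4
</syntaxhighlight>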
Results.
Alon et al. use the Lovász local lemma to prove that the Thue number of any graph is at most quadratic in its maximum degree; they provide an example showing that for some graphs this quadratic dependence is necessary. In addition they show that the Thue number of a path of four or more vertices is exactly three, that the Thue number of any cycle is at most four, and that the Thue number of the Petersen graph is exactly five.
The known cycles with Thue number four are "C"5, "C"7, "C"9, "C"10, "C"14, and "C"17. Alon et al. conjecture that the Thue number of any larger cycle is three; they verified computationally that the cycles listed above are the only ones of length ≤ 2001 with Thue number four. Currie resolved this in a 2002 paper, showing that all cycles with 18 or more vertices have Thue number 3.
Computational complexity.
Testing whether a coloring has a repetitive path is in NP, so testing whether a coloring is nonrepetitive is in co-NP, and Manin showed that it is co-NP-complete. The problem of finding such a coloring belongs to formula_0 in the polynomial hierarchy, and again Manin showed that it is complete for this level. | [
{
"math_id": 0,
"text": "\\Sigma_2^P"
}
]
| https://en.wikipedia.org/wiki?curid=7385565 |
73857687 | Zero-sum Ramsey theory | Study of structures where a subset must sum to zero
In mathematics, zero-sum Ramsey theory or zero-sum theory is a branch of combinatorics. It deals with problems of the following kind: given a combinatorial structure whose elements are assigned different weights (usually elements from an Abelian group formula_0), one seeks conditions that guarantee the existence of a certain substructure whose elements' weights sum to zero (in formula_0). It combines tools from number theory, algebra, linear algebra, graph theory, discrete analysis, and other branches of mathematics.
The classic result in this area is the 1961 theorem of Paul Erdős, Abraham Ginzburg, and Abraham Ziv: for any formula_1 elements of formula_2, there is a subset of size formula_3 that sums to zero. (This bound is tight, as a sequence of formula_4 zeroes and formula_4 ones cannot have any subset of size formula_3 summing to zero.) There are known proofs of this result using the Cauchy-Davenport theorem, Fermat's little theorem, or the Chevalley–Warning theorem.
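For small moduli, both the theorem and the tightness of the bound can be verified by brute force; the Python sketch below is only illustrative, and it examines multisets of residues, which suffices because only the multiset of values matters:
<syntaxhighlight lang="python">
from itertools import combinations, combinations_with_replacement

def has_zero_sum_subsequence(seq, m):
    # True if some m of the given residues sum to 0 modulo m
    return any(sum(c) % m == 0 for c in combinations(seq, m))

for m in range(2, 7):
    # Erdos-Ginzburg-Ziv: every multiset of 2m-1 residues mod m has such a subsequence.
    egz = all(has_zero_sum_subsequence(s, m)
              for s in combinations_with_replacement(range(m), 2 * m - 1))
    # Tightness: m-1 zeroes and m-1 ones admit no zero-sum subsequence of length m.
    tight = not has_zero_sum_subsequence((0,) * (m - 1) + (1,) * (m - 1), m)
    print(f"m={m}: theorem holds: {egz}, 2m-2 elements insufficient: {tight}")
</syntaxhighlight>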
Generalizing this result, one can define for any abelian group "G" the minimum quantity formula_5 of elements of "G" such that there must be a subsequence of formula_6 elements (where formula_6 is the order of the group) which adds to zero. It is known that formula_7, and that equality holds if and only if formula_8.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "2m - 1"
},
{
"math_id": 2,
"text": "\\mathbb{Z}_m"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "m - 1"
},
{
"math_id": 5,
"text": "EGZ(G)"
},
{
"math_id": 6,
"text": "o(G)"
},
{
"math_id": 7,
"text": "EGZ(G)\\leq 2o(G) - 1"
},
{
"math_id": 8,
"text": "G = \\mathbb{Z}_m"
}
]
| https://en.wikipedia.org/wiki?curid=73857687 |
7386159 | Hash chain | Method of producing many one-time keys from a single key
A hash chain is the successive application of a cryptographic hash function to a piece of data. In computer security, a hash chain is a method used to produce many one-time keys from a single key or password. For non-repudiation, a hash function can be applied successively to additional pieces of data in order to record the chronology of data's existence.
Definition.
A hash chain is a successive application of a cryptographic hash function formula_0 to a string formula_1.
For example,
formula_2
gives a hash chain of length 4, often denoted formula_3
Applications.
Leslie Lamport suggested the use of hash chains as a password protection scheme in an insecure environment. A server which needs to provide authentication may store a hash chain rather than a plain text password and prevent theft of the password in transmission or theft from the server. For example, a server begins by storing formula_4, which is provided by the user. When the user wishes to authenticate, they supply formula_5 to the server. The server computes formula_6 and verifies this matches the hash chain it has stored. It then stores formula_5 for the next time the user wishes to authenticate.
An eavesdropper seeing formula_5 communicated to the server will be unable to re-transmit the same hash chain to the server for authentication since the server now expects formula_7. Due to the one-way property of cryptographically secure hash functions, it is infeasible for the eavesdropper to reverse the hash function and obtain an earlier piece of the hash chain. In this example, the user could authenticate 1000 times before the hash chain were exhausted. Each time the hash value is different, and thus cannot be duplicated by an attacker.
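A minimal sketch of this scheme in Python, using SHA-256 as the hash function (the password, the chain length, and the server bookkeeping are purely illustrative and omit everything a production system would need, such as secure storage and transport):
<syntaxhighlight lang="python">
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def hash_chain(seed: bytes, n: int) -> bytes:
    """Return h^n(seed), i.e. h applied n times to the seed."""
    value = seed
    for _ in range(n):
        value = h(value)
    return value

password = b"correct horse battery staple"     # illustrative secret, known only to the user
server_state = hash_chain(password, 1000)      # the server stores only h^1000(password)

def authenticate(client_value: bytes) -> bool:
    """Server side: accept if hashing the supplied value yields the stored value."""
    global server_state
    if h(client_value) == server_state:
        server_state = client_value            # remember it for the next round
        return True
    return False

print(authenticate(hash_chain(password, 999)))  # True: client reveals h^999(password)
print(authenticate(hash_chain(password, 999)))  # False: a replay no longer matches
print(authenticate(hash_chain(password, 998)))  # True: next value in the chain
</syntaxhighlight>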
Binary hash chains.
Binary hash chains are commonly used in association with a hash tree. A binary hash chain takes two hash values as inputs, concatenates them and applies a hash function to the result, thereby producing a third hash value.
The above diagram shows a hash tree consisting of eight leaf nodes and the hash chain for the third leaf node. In addition to the hash values themselves the order of concatenation (right or left 1,0) or "order bits" are necessary to complete the hash chain.
Winternitz chains.
<templatestyles src="Template:Visible anchor/styles.css" />Winternitz chains (also known as function chains) are used in hash-based cryptography. The chain is parameterized by the <templatestyles src="Template:Visible anchor/styles.css" />Winternitz parameter "w" (number of bits in a "digit" "d") and "security parameter" "n" (number of bits in the hash value, typically double the security strength, 256 or 512). The chain consists of formula_8 values that are results of repeated application of a one-way "chain" function "F" to a secret key "sk": formula_9. The chain function is typically based on a standard cryptographic hash, but needs to be parameterized ("randomized"), so it involves few invocations of the underlying hash. In the Winternitz signature scheme a chain is used to encode one digit of the "m"-bit message, so the Winternitz signature uses approximately formula_10 bits, its calculation takes about formula_11 applications of the function F. Note that some signature standards (like Extended Merkle signature scheme, XMSS) define "w" as the number of possible values in a digit, so formula_12 in XMSS corresponds to formula_13 in standards (like Leighton-Micali Signature, LMS) that define "w" in the same way as above - as a number of bits in the digit.
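As a toy illustration of a single chain only (this omits the checksum chains, the per-position keying and randomization, and everything else that real WOTS-type constructions in XMSS or LMS require; the chain function here is simply SHA-256 with a fixed prefix, chosen for the sketch):
<syntaxhighlight lang="python">
import hashlib

W = 4                        # Winternitz parameter: bits per digit
CHAIN_LEN = 2 ** W - 1       # maximum number of chain steps

def F(x: bytes) -> bytes:
    # stand-in for the parameterized ("randomized") chain function of a real scheme
    return hashlib.sha256(b"chain-step" + x).digest()

def apply_chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = F(x)
    return x

sk = b"per-chain secret key"                # illustrative secret
pk = apply_chain(sk, CHAIN_LEN)             # public key: the end of the chain

def sign_digit(d: int) -> bytes:
    return apply_chain(sk, d)               # reveal the d-th intermediate chain value

def verify_digit(d: int, sig: bytes) -> bool:
    return apply_chain(sig, CHAIN_LEN - d) == pk

sig = sign_digit(11)
print(verify_digit(11, sig))   # True
print(verify_digit(12, sig))   # False: the chain ends do not match
</syntaxhighlight>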
Hash chain vs. blockchain.
A hash chain is similar to a blockchain, as they both utilize a cryptographic hash function for creating a link between two nodes. However, a blockchain (as used by Bitcoin and related systems) is generally intended to support distributed agreement around a public ledger (data), and incorporates a set of rules for encapsulation of data and associated data permissions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "h(h(h(h(x))))"
},
{
"math_id": 3,
"text": "h^{4}(x)"
},
{
"math_id": 4,
"text": "h^{1000}(\\mathrm{password})"
},
{
"math_id": 5,
"text": "h^{999}(\\mathrm{password})"
},
{
"math_id": 6,
"text": "h(h^{999}(\\mathrm{password})) = h^{1000}(\\mathrm{password})"
},
{
"math_id": 7,
"text": "h^{998}(\\mathrm{password})"
},
{
"math_id": 8,
"text": "2^w"
},
{
"math_id": 9,
"text": "sk, F(sk), F(F(sk)), ..., F^{2^{w-1}}(sk)"
},
{
"math_id": 10,
"text": "mn/w"
},
{
"math_id": 11,
"text": "2^wm/w"
},
{
"math_id": 12,
"text": "w=16"
},
{
"math_id": 13,
"text": "w=4"
}
]
| https://en.wikipedia.org/wiki?curid=7386159 |
73871740 | List of spacetimes | This is a list of well-known spacetimes in general relativity. Where the metric tensor is given, a particular choice of coordinates is used, but there are often other useful choices of coordinate available.
In general relativity, spacetime is described mathematically by a metric tensor (on a smooth manifold), conventionally denoted formula_0 or formula_1. This metric is sufficient to formulate the vacuum Einstein field equations. If matter is included, described by a stress-energy tensor, then one has the Einstein field equations with matter.
On certain regions of spacetime (and possibly the entire spacetime) one can describe the points by a set of coordinates. In this case, the metric can be written down in terms of the coordinates, or more precisely, the coordinate one-forms and coordinates.
During the course of the development of the field of general relativity, a number of explicit metrics have been found which satisfy the Einstein field equations, a number of which are collected here. These model various phenomena in general relativity, such as possibly charged or rotating black holes and cosmological models of the universe. On the other hand, some of the spacetimes are more for academic or pedagogical interest rather than modelling physical phenomena.
Maximally symmetric spacetimes.
These are spacetimes which admit the maximum number of isometries or Killing vector fields for a given dimension, and each of these can be formulated in an arbitrary number of dimensions.
formula_2
formula_3
where formula_4 is real and formula_5 is the standard hyperbolic metric.
formula_6
Black hole spacetimes.
These spacetimes model black holes. The Schwarzschild and Reissner–Nordström black holes are spherically symmetric, while Schwarzschild and Kerr are electrically neutral.
formula_7
where formula_8 is the round metric on the sphere, and formula_9 is a positive, real parameter.
formula_10
where formula_11 is defined implicitly.
formula_12
formula_13
See Boyer–Lindquist coordinates for details on the terms appearing in this formula.
Cosmological spacetimes.
formula_14,
where formula_15 is often restricted to take values in the set formula_16.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g"
},
{
"math_id": 1,
"text": "ds^2"
},
{
"math_id": 2,
"text": " g = -dt^2 + \\sum_{i = 1}^{n-1} dx_i^2"
},
{
"math_id": 3,
"text": " g = -dt^2 + \\alpha^2 \\sinh^2\\left(\\frac{1}{\\alpha}t\\right) dH_{n-1}^2,"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "dH_{n-1}^2"
},
{
"math_id": 6,
"text": " g =\\frac{1}{y^2}\\left(-dt^2+dy^2+\\sum_{i=1}^{n-2}dx_i^2\\right)"
},
{
"math_id": 7,
"text": " g = -\\left(1 - \\frac{2M}{r} \\right) dt^2 + \\left(1-\\frac{2M}{r}\\right)^{-1} dr^2 + r^2 d\\Omega^2,"
},
{
"math_id": 8,
"text": "d\\Omega^2 = d\\theta^2 + \\sin^2\\theta d\\phi^2"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": " g = - \\frac{ 32 M^3 e^{-r(U,V)/2M} }{r(U,V)} dU dV + r(U,V)^2 d\\Omega^2,"
},
{
"math_id": 11,
"text": "r(U,V)"
},
{
"math_id": 12,
"text": " g = -\\left(1 - \\frac{2M}{r} + \\frac{e^2}{r^2}\\right) dt^2 + \\left(1-\\frac{2M}{r} + \\frac{e^2}{r^2}\\right)^{-1} dr^2 + r^2 d\\Omega^2"
},
{
"math_id": 13,
"text": " g = -\\frac{\\Delta}{\\rho^2}\\left(dt - a \\sin^2\\theta \\,d\\phi \\right)^2 +\\frac{\\sin^2\\theta}{\\rho^2}\\Big(\\left(r^2+a^2\\right)\\,d\\phi - a \\,dt\\Big)^2 + \\frac{\\rho^2}{\\Delta}dr^2 + \\rho^2 \\,d\\theta^2."
},
{
"math_id": 14,
"text": " g = - dt^2 + a(t)^2 \\left(\\frac{dr^2}{1 - kr^2} + r^2 d\\Omega^2\\right)"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "-1, 0, 1"
}
]
| https://en.wikipedia.org/wiki?curid=73871740 |
73883643 | 2S43 Malva | Russian 152 mm self-propelled howitzer
The 2S43 "Malva" (in Russian: 2С43 Мальва, "Malva" referring to the flower) is a 152 mm Russian self-propelled gun mounted on an 8x8 wheeled chassis.
Development.
The 2S43 Malva was developed by the Central Scientific Research Institute Burevestnik, based in Nizhny Novgorod. The development of the system was done in the context of the Nabrosok program, which is intended to develop an entirely new range of artillery systems for the Russian armed forces.
The main innovation of the project is the use of an AWD wheeled chassis with eight wheels. This increases mobility and lowers the mass while leaving combat capability unchanged. Wheeled chassis are also less expensive to operate and produce. It is produced by Uraltransmash, a branch of Uralvagonzavod. The chassis, BAZ-6610-02 "Voshchina", is produced by the Bryansk Automobile Plant.
In 2021, the technical and tactical requirements were finalized in anticipation of future trials. These trials began in 2021 and ended on 17 May 2023.
On 26 October 2023, the first batch of Malva howitzers was reported to be delivered to the army. The second batch was reportedly delivered in June 2024. Deliveries have reportedly continued since July 2024.
Deployment in Ukraine.
On June 2, 2024, an aerial image of the 2S43 surfaced showing the vehicle's deployment to a firing position in the Kharkiv region in Ukraine as part of the ongoing Russian invasion of Ukraine. While the image is confirmed to be a 2S43, there is very little information available regarding how many are at the front or how they are being used.
On August 11, during the August 2024 Kursk Oblast incursion, a 2S43 Malva was found destroyed around 14 km east of Tetkino.
Description.
The 2S43 "Malva" possesses a 152 mm 2A64 cannon, with a 30 rounds ammunitions stowage. It has an effective range of 24.5 km, a gun elevation of +70°, depression of -3° and azimuth of formula_030°. Other reports say that the 2S43 could be equipped with the 2A88 cannon which is used by the 2S35 Koalitsiya-SV. It has cabin armor to be protected against small arms and shrapnel. With an operational mass of 32 tons, the 2S43 is much more mobile than other self-propelled guns like the 42-ton 2S19 Msta or other tracked self-propelled artillery.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pm"
}
]
| https://en.wikipedia.org/wiki?curid=73883643 |
73898294 | Nano-I-beam | Type of nanostructure
Nano-I-beams are nanostructures characterized by their I-shaped cross-section, resembling the letter I in macroscopic scale. They are typically made from hybrid organic/inorganic materials and have unique properties that make them suitable for various applications in structural nano-mechanics.
Compared to traditional carbon nanotubes, nano-I-beams exhibit higher structural stiffness, reduced induced stress, and longer service life. They have the potential to outperform carbon nanotubes in various applications, offering enhanced mechanical properties and improved functionality. The Wide Flange Nano-I-beam variation has been found to provide even higher structural stiffness and longer service life compared to the Equal Flange & Web Nano-I-beam.
Origin of the name.
Nano-I-beams are named after the I-beams used in construction and structural engineering. The I-beam, also known as the H-beam or universal beam, is a widely used structural element due to its high strength-to-weight ratio and structural stability. The shape of the I-beam, with its central vertical web and horizontal flanges, provides excellent load-bearing capabilities and resistance to bending and torsion.
Inspired by the structural properties of I-beams, the nano-I-beam was developed as a nanoscale counterpart, utilizing the same I-shaped cross-section. The nano-I-beam inherits the geometric characteristics of the macroscopic I-beam, but at a much smaller scale, making it suitable for applications in the realm of nanotechnology
Kinetics and growth of nano-I-beam.
The Ritz method, based on the shell theory, is frequently utilised for dynamic analysis of carbon nanotubes (CNTs). The Ritz method, connected to Hamilton's principle, is employed to determine the equilibrium state and minimize the energy functional of a conservative structural system undergoing kinematically admissible growth or deformation. Hamilton's principle considers the interplay of different energy elements, including the kinetic energy (T), strain energy (U), and potential energy (WP). By applying the Ritz method based on Hamilton's principle, the strain energy "U" of Single & Multi-Walled Nano-I-beams (SWNT) is formulated as:
formula_0
When considering the kinetic energy, observations are often made in a moving frame of reference. To account for this, the time derivative of the observed variables in the fixed frame of reference (ρ, θ, z) is utilized. As a result, the formulation of the kinetic energy, denoted as T, takes into account these considerations.
formula_1
Application and suitability.
Both CNTs and I-beams have distinct properties and advantages, and their suitability depends on the specific application and requirements. CNTs offer exceptional mechanical properties, including high tensile strength and stiffness. They have a high strength-to-weight ratio, making them lightweight yet strong. CNTs also exhibit excellent electrical and thermal conductivity, making them suitable for applications in electronics and energy storage. However, challenges in large-scale production, potential toxicity concerns, and difficulties in achieving uniform dispersion within materials are some drawbacks associated with CNTs.
Among the variations of the Hybrid Organic/Inorganic Nano-I-beam, research highlights the performance of the Wide Flange Nano-I-Beam. It demonstrates higher structural stiffness, reduced induced stress, and an extended service life when compared to the Equal Flange & Web Nano-I-Beam. This distinction makes the Wide Flange variation particularly desirable for various applications, including nano-heat engines and sensors, as an attractive option for cost-effective, high-performance materials.
Ultimately, the choice between CNTs and Nano-I-beams depends on the specific requirements of the application, considering factors such as scale, performance needs, and cost-effectiveness. Each material has its own strengths and limitations, and the selection should be based on a careful evaluation of the desired properties and constraints of the project at hand.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U={\\int }_{V}({\\sigma }_{\\rho }{{\\epsilon }}_{\\rho }+{\\sigma }_{\\theta }{{\\epsilon }}_{\\theta }+{\\sigma }_{z}{{\\epsilon }}_{z}+{\\sigma }_{\\rho \\theta }{{\\epsilon }}_{\\rho \\theta }+{\\sigma }_{\\rho z}{{\\epsilon }}_{\\rho z}+{\\sigma }_{\\theta z}{{\\epsilon }}_{\\theta z})dV"
},
{
"math_id": 1,
"text": "T=\\frac{1}{2}\\,\\gamma {\\int }_{V}^{V}[{(\\frac{\\partial u}{\\partial t})}^{2}+{(\\frac{\\partial v}{\\partial t})}^{2}+{(\\frac{\\partial w}{\\partial t})}^{2}]\\,dV"
}
]
| https://en.wikipedia.org/wiki?curid=73898294 |
73898687 | Francis Brown (mathematician) | English-French mathematician
Francis Brown is a Franco-British mathematician who works on arithmetic geometry and quantum field theory.
Career.
Brown studied at the University of Cambridge, the École normale supérieure (Paris) and the University of Bordeaux, with Pierre Cartier, graduating in 2006 with a Ph.D. He then spent time at the Max Planck Institute for Mathematics and the Mittag-Leffler Institute. In 2007 he moved to the Institut de mathématiques de Jussieu – Paris Rive Gauche, where he won a European Research Council Starting Grant in 2010. In 2012, he moved to the Institut des Hautes Études Scientifiques and was awarded a CNRS Bronze Medal and the Élie Cartan Prize for his proof of two conjectures related to multiple zeta functions. He had a von Neumann Fellowship at the Institute for Advanced Study from 2014 to 2015 and is currently a senior research fellow at All Souls College, University of Oxford.
Brown's work is at the intersection of algebraic geometry and number theory. He has published on mixed Tate motives. He also works on zeta functions in quantum field theory.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\mathfrak M}_{0,n}"
},
{
"math_id": 1,
"text": " \\Z"
},
{
"math_id": 2,
"text": "P^1\\setminus\\left\\{0,1,\\infty\\right\\}"
}
]
| https://en.wikipedia.org/wiki?curid=73898687 |
739166 | Inverse Galois problem | <templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Is every finite group the Galois group of a Galois extension of the rational numbers?
In Galois theory, the inverse Galois problem concerns whether or not every finite group appears as the Galois group of some Galois extension of the rational numbers formula_0. This problem, first posed in the early 19th century, is unsolved.
There are some permutation groups for which generic polynomials are known, which define all algebraic extensions of formula_0 having a particular group as Galois group. These groups include all of degree no greater than 5. There also are groups known not to have generic polynomials, such as the cyclic group of order 8.
More generally, let G be a given finite group, and K a field. If there is a Galois extension field "L"/"K" whose Galois group is isomorphic to G, one says that G is realizable over K.
Partial results.
Many cases are known. It is known that every finite group is realizable over any function field in one variable over the complex numbers formula_1, and more generally over function fields in one variable over any algebraically closed field of characteristic zero. Igor Shafarevich showed that every finite solvable group is realizable over formula_0. It is also known that every simple sporadic group, except possibly the Mathieu group "M"23, is realizable over formula_0.
David Hilbert showed that this question is related to a rationality question for G:
If K is any extension of formula_0 on which G acts as an automorphism group, and the invariant field "KG" is rational over formula_0, then G is realizable over formula_0.
Here "rational" means that it is a purely transcendental extension of formula_0, generated by an algebraically independent set. This criterion can for example be used to show that all the symmetric groups are realizable.
Much detailed work has been carried out on the question, which is in no sense solved in general. Some of this is based on constructing G geometrically as a Galois covering of the projective line: in algebraic terms, starting with an extension of the field formula_2 of rational functions in an indeterminate t. After that, one applies Hilbert's irreducibility theorem to specialise t, in such a way as to preserve the Galois group.
All permutation groups of degree 16 or less are known to be realizable over formula_0; the group PSL(2,16):2 of degree 17 may not be.
All 13 non-abelian simple groups smaller than PSL(2,25) (order 7800) are known to be realizable over formula_0.
A simple example: cyclic groups.
It is possible, using classical results, to construct explicitly a polynomial whose Galois group over formula_0 is the cyclic group Z/"n"Z for any positive integer n. To do this, choose a prime p such that "p" ≡ 1 (mod "n"); this is possible by Dirichlet's theorem. Let Q("μ") be the cyclotomic extension of formula_0 generated by μ, where μ is a primitive "p"-th root of unity; the Galois group of Q("μ")/Q is cyclic of order "p" − 1.
Since n divides "p" − 1, the Galois group has a cyclic subgroup H of order ("p" − 1)/"n". The fundamental theorem of Galois theory implies that the corresponding fixed field, "F" = Q("μ")"H", has Galois group Z/"n"Z over formula_0. By taking appropriate sums of conjugates of μ, following the construction of Gaussian periods, one can find an element α of F that generates F over formula_0, and compute its minimal polynomial.
This method can be extended to cover all finite abelian groups, since every such group appears in fact as a quotient of the Galois group of some cyclotomic extension of formula_0. (This statement should not though be confused with the Kronecker–Weber theorem, which lies significantly deeper.)
Worked example: the cyclic group of order three.
For "n" = 3, we may take "p" = 7. Then Gal(Q("μ")/Q) is cyclic of order six. Let us take the generator η of this group which sends μ to "μ"3. We are interested in the subgroup "H" = {1, "η"3} of order two. Consider the element "α" = "μ" + "η"3("μ"). By construction, α is fixed by H, and only has three conjugates over formula_0:
"α" = "η"0("α") = "μ" + "μ"6,
"β" = "η"1("α") = "μ"3 + "μ"4,
"γ" = "η"2("α") = "μ"2 + "μ"5.
Using the identity:
1 + "μ" + "μ"2 + ⋯ + "μ"6 = 0,
one finds that
"α" + "β" + "γ" = −1,
"αβ" + "βγ" + "γα" = −2,
"αβγ" = 1.
Therefore α is a root of the polynomial
("x" − "α")("x" − "β")("x" − "γ") = "x"3 + "x"2 − 2"x" − 1,
which consequently has Galois group Z/3Z over formula_0.
Symmetric and alternating groups.
Hilbert showed that all symmetric and alternating groups are represented as Galois groups of polynomials with rational coefficients.
The polynomial "xn" + "ax" + "b" has discriminant
formula_3
We take the special case
"f"("x", "s") = "xn" − "sx" − "s".
Substituting a prime integer for s in "f"("x", "s") gives a polynomial (called a specialization of "f"("x", "s")) that by Eisenstein's criterion is irreducible. Then "f"("x", "s") must be irreducible over formula_4. Furthermore, "f"("x", "s") can be written
formula_5
and "f"("x", 1/2) can be factored to:
formula_6
whose second factor is irreducible (but not by Eisenstein's criterion). Only the reciprocal polynomial is irreducible by Eisenstein's criterion. We have now shown that the group Gal("f"("x", "s")/Q("s")) is doubly transitive.
We can then find that this Galois group has a transposition. Use the scaling (1 − "n")"x" = "ny" to get
formula_7
and with
formula_8
we arrive at:
"g"("y", "t") = "yn" − "nty" + ("n" − 1)"t"
which can be arranged to
"yn" − "y" − ("n" − 1)("y" − 1) + ("t" − 1)(−"ny" + "n" − 1).
Then "g"("y", 1) has 1 as a double zero and its other "n" − 2 zeros are simple, and a transposition in Gal("f"("x", "s")/Q("s")) is implied. Any finite doubly transitive permutation group containing a transposition is a full symmetric group.
Hilbert's irreducibility theorem then implies that an infinite set of rational numbers give specializations of "f"("x", "t") whose Galois groups are "Sn" over the rational field formula_0. In fact this set of rational numbers is dense in formula_0.
The discriminant of "g"("y", "t") equals
formula_9
and this is not in general a perfect square.
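The stated discriminant of "g"("y", "t") can be spot-checked symbolically for small degrees, for example with SymPy (assuming SymPy is available; this verifies individual cases only, not the general identity):
<syntaxhighlight lang="python">
from sympy import symbols, discriminant, simplify

y, t = symbols("y t")

for n in range(3, 8):
    g = y ** n - n * t * y + (n - 1) * t
    claimed = (-1) ** (n * (n - 1) // 2) * n ** n * (n - 1) ** (n - 1) * t ** (n - 1) * (1 - t)
    print(n, simplify(discriminant(g, y) - claimed) == 0)   # True for each n
</syntaxhighlight>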
Alternating groups.
Solutions for alternating groups must be handled differently for odd and even degrees.
Odd Degree.
Let
formula_10
Under this substitution the discriminant of "g"("y", "t") equals
formula_11
which is a perfect square when n is odd.
Even Degree.
Let:
formula_12
Under this substitution the discriminant of "g"("y", "t") equals:
formula_13
which is a perfect square when n is even.
Again, Hilbert's irreducibility theorem implies the existence of infinitely many specializations whose Galois groups are alternating groups.
Rigid groups.
Suppose that "C"1, …, "Cn" are conjugacy classes of a finite group G, and A be the set of n-tuples ("g"1, …, "gn") of G such that "gi" is in "Ci" and the product "g"1…"gn" is trivial. Then A is called rigid if it is nonempty, G acts transitively on it by conjugation, and each element of A generates G.
John G. Thompson showed that if a finite group G has a rigid set then it can often be realized as a Galois group over a cyclotomic extension of the rationals. (More precisely, over the cyclotomic extension of the rationals generated by the values of the irreducible characters of G on the conjugacy classes "Ci".)
This can be used to show that many finite simple groups, including the monster group, are Galois groups of extensions of the rationals. The monster group is generated by a triad of elements of orders 2, 3, and 29. All such triads are conjugate.
The prototype for rigidity is the symmetric group "Sn", which is generated by an n-cycle and a transposition whose product is an ("n" − 1)-cycle. The construction in the preceding section used these generators to establish a polynomial's Galois group.
A construction with an elliptic modular function.
Let "n" > 1 be any integer. A lattice Λ in the complex plane with period ratio τ has a sublattice Λ′ with period ratio "nτ". The latter lattice is one of a finite set of sublattices permuted by the modular group PSL(2, Z), which is based on changes of basis for Λ. Let j denote the elliptic modular function of Felix Klein. Define the polynomial "φn" as the product of the differences ("X" − "j"(Λ"i")) over the conjugate sublattices. As a polynomial in X, "φn" has coefficients that are polynomials over formula_0 in "j"("τ").
On the conjugate lattices, the modular group acts as PGL(2, Z/"n"Z). It follows that "φn" has Galois group isomorphic to PGL(2, Z/"n"Z) over formula_14.
Use of Hilbert's irreducibility theorem gives an infinite (and dense) set of rational numbers specializing "φn" to polynomials with Galois group PGL(2, Z/"n"Z) over formula_0. The groups PGL(2, Z/"n"Z) include infinitely many non-solvable groups.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Q}"
},
{
"math_id": 1,
"text": "\\mathbb{C}"
},
{
"math_id": 2,
"text": "\\mathbb{Q}(t)"
},
{
"math_id": 3,
"text": "(-1)^{\\frac{n(n-1)}{2}} \\!\\left( n^n b^{n-1} + (-1)^{1-n} (n-1)^{n-1} a^n \\right)\\!."
},
{
"math_id": 4,
"text": "\\mathbb{Q}(s)"
},
{
"math_id": 5,
"text": "x^n - \\tfrac{x}{2} - \\tfrac{1}{2} - \\left( s - \\tfrac{1}{2} \\right)\\!(x+1)"
},
{
"math_id": 6,
"text": "\\tfrac{1}{2} (x-1)\\!\\left( 1+ 2x + 2x^2 + \\cdots + 2 x^{n-1} \\right)"
},
{
"math_id": 7,
"text": " y^n - \\left \\{ s \\left ( \\frac{1-n}{n} \\right )^{n-1} \\right \\} y - \\left \\{ s \\left ( \\frac{1-n}{n} \\right )^n \\right \\}"
},
{
"math_id": 8,
"text": " t = \\frac{s (1-n)^{n-1}}{n^n},"
},
{
"math_id": 9,
"text": " (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} (1-t),"
},
{
"math_id": 10,
"text": "t = 1 - (-1)^{\\tfrac{n(n-1)}{2}} n u^2"
},
{
"math_id": 11,
"text": "\\begin{align}\n(-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} (1-t)\n&= (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \\left (1 - \\left (1 - (-1)^{\\tfrac{n(n-1)}{2}} n u^2 \\right ) \\right) \\\\\n&= (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \\left ((-1)^{\\tfrac{n(n-1)}{2}} n u^2 \\right ) \\\\\n&= n^{n+1} (n-1)^{n-1} t^{n-1} u^2 \n\\end{align}"
},
{
"math_id": 12,
"text": "t = \\frac{1}{1 + (-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2}"
},
{
"math_id": 13,
"text": "\\begin{align}\n(-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} (1-t)\n&= (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \\left (1 - \\frac{1}{1 + (-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2} \\right ) \\\\\n&= (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \\left ( \\frac{\\left ( 1 + (-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2 \\right ) - 1}{1 + (-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2} \\right ) \\\\\n&= (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \\left ( \\frac{(-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2}{1 + (-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2} \\right ) \\\\\n&= (-1)^{\\frac{n(n-1)}{2}} n^n (n-1)^{n-1} t^{n-1} \\left (t (-1)^{\\tfrac{n(n-1)}{2}} (n-1) u^2 \\right ) \\\\\n&= n^n (n-1)^n t^n u^2\n\\end{align}"
},
{
"math_id": 14,
"text": "\\mathbb{Q}(\\mathrm{J}(\\tau))"
}
]
| https://en.wikipedia.org/wiki?curid=739166 |
73918719 | Milo Mazurkiewicz | Milo Andrea Mazurkiewicz (28 January 1995 – 6 May 2019) was a Polish queer activist, linguist, and information systems technician. They used the Polish neutral pronoun "ono" and accepted the use of feminine pronouns.
Early life and education.
Mazurkiewicz was born in Złotów, Poland. Their parents split while they were still in kindergarten, and they were raised by their mother. Mazurkiewicz showed prowess for linguistics in early childhood, and their first documented attempts at creating artificial languages date back to 2001.
In the school year 2013/2014, as a student of the 1st High School in Złotów, Mazurkiewicz won the final of the 12th Olympiad in Mathematical Linguistics (scoring 115 out of 120 possible points; the next competitor scored 30 points fewer) and won 1st place individually at the 12th International Linguistics Olympiad in Beijing (competing with 152 competitors from 28 countries). The jury awarded 41 individual medals (7 gold, 13 silver and 21 bronze) and 9 prizes for the best solved tasks. Milo scored 84 points (the next competitors on the podium, from the US and Canada, scored 81 and 73 points respectively) and received the prize for the best solved problem.
After graduating from high school with a matura diploma, Milo studied computer science at the Poznań University of Technology. They received a scholarship from the Minister of National Education, graduating in 2017 with a degree in computer science. They were known for their humorous invention of the mathematical constant "ęć", where π + ęć = 5 (formula_0 is pronounced "pi" in Polish, and combined with "ęć" it forms "pięć", the Polish word for five).
Activism.
Mazurkiewicz spoke out about the prolonged and invasive process to transition in Poland, saying in a public speech "why should anyone tell me what to look like? Who can I feel and who not? I feel as if my genitals belong to the state!".
Personal life.
Mazurkiewicz came out to their father as nonbinary in 2018 and was met with acceptance; however, their mother was reluctant to acknowledge their gender.
On April 24, Milo officially changed the data on their ID card at the Registry Office. Reportedly, they had an Andrea Bocelli record on them in case the clerk doubted that such a masculine name exists; the Polish equivalent of the name Andrea is Andrzej, the name under which Milo was registered as a baby, while Andrea is more commonly associated with femininity in Polish due to the suffix -a. They also dropped the inflected part of their surname, Dubieński, and were left with "Mazurkiewicz".
Mazurkiewicz presented as gender-nonconforming and had therefore struggled to obtain a diagnosis of gender dysphoria, as their doctors reportedly asked them to "dress female".
Death.
On May 6, 2019, Mazurkiewicz was supposed to have an appointment with a psychologist in Warsaw; it was never confirmed if they appeared.
Information about their death appeared on May 17 on the Facebook page of the Stonewall Group, which reported that the activist died on May 6 after they jumped from the Łazienkowski bridge in Warsaw. Their death has been attributed to the high rates of transphobia in Poland.
A widely publicized post Milo wrote for their Facebook account on May 2 illustrated their state of mind during their final days:I'm fed up. I'm fed up with being treated like a piece of shit. I'm fed up with people (psychologists, doctors, therapists) telling me I can't be who I am because I don't look like that. Treating me as if it was all in my head and telling me I need papers proving it. Caring more about how I dress than how I feel. Telling me that it's good that my chosen name is neutral-sounding, that it's good my body is not extremely feminine, that it's good I haven't come out at work (yet). Telling me that maybe I should stop being (trying to be) myself and wait until other doctors and therapists decide I can. I'm fed up of all of that. Sometimes it makes me fight even more. Sometimes it makes me want to end it all and stop my life right here. Sometimes it just makes me want to cry. I wish I could see any hope on the horizon. I wish I could hear another answer than “some day, eventually, it's gonna be better”. I wish I could feel alive today, at this moment.Their last post, saying "I'm sorry", has a timestamp of May 6, 14:51 GMT+1.
Mazurkiewicz's body was found in the Vistula on May 15, roughly 9 days after they were last seen. They were cremated and buried in their hometown of Złotów.
Legacy.
On May 24, 2019, a group of queer activists tried to hang a rainbow flag on the Łazienkowski bridge in commemoration of Mazurkiewicz. A group of passers-by accosted the mourners and tried to tear down the flag. The flag was finally unfurled only when policemen had arrived to protect the demonstrators.
A fund for transgender women and transfeminine individuals in Poland has been set up in Mazurkiewicz's name, citing her profound dissatisfaction with the current state of Polish transgender healthcare and legal status, specifically the complete lack of National Health Fund coverage for any gender-affirming procedures.
Since the 21st edition of the Olympiad in Mathematical Linguistics, the Milo Mazurkiewicz Prize has been awarded at the final for the best solution to one of the tasks.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=73918719 |
739199 | Generalized continued fraction | In complex analysis, a branch of mathematics, a generalized continued fraction is a generalization of regular continued fractions in canonical form, in which the partial numerators and partial denominators can assume arbitrary complex values.
A generalized continued fraction is an expression of the form
formula_0
where the "a""n" ("n" > 0) are the partial numerators, the "b""n" are the partial denominators, and the leading term "b"0 is called the "integer" part of the continued fraction.
The successive convergents of the continued fraction are formed by applying the fundamental recurrence formulas:
formula_1
where "A""n" is the "numerator" and "B""n" is the "denominator", called continuants, of the "n"th convergent. They are given by the recursion
formula_2
with initial values
formula_3
If the sequence of convergents {"x""n"} approaches a limit the continued fraction is convergent and has a definite value. If the sequence of convergents never approaches a limit the continued fraction is divergent. It may diverge by oscillation (for example, the odd and even convergents may approach two different limits), or it may produce an infinite number of zero denominators "B""n".
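A short Python sketch of the fundamental recurrence formulas, applied here to the classical expansion of the square root of 2 ("b"0 = 1 and "a""i" = 1, "b""i" = 2 for all "i"; the example fraction is chosen only for illustration):
<syntaxhighlight lang="python">
from fractions import Fraction

def convergents(b0, partial):
    """Yield x_1, x_2, ... for a continued fraction given as b0 and pairs (a_i, b_i)."""
    A_prev, A = 1, b0          # A_{-1} = 1, A_0 = b_0
    B_prev, B = 0, 1           # B_{-1} = 0, B_0 = 1
    for a, b in partial:
        A_prev, A = A, b * A + a * A_prev
        B_prev, B = B, b * B + a * B_prev
        yield Fraction(A, B)

# sqrt(2) = 1 + 1/(2 + 1/(2 + ...))
for x in convergents(1, [(1, 2)] * 6):
    print(x, float(x))
</syntaxhighlight>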
History.
The story of continued fractions begins with the Euclidean algorithm, a procedure for finding the greatest common divisor of two natural numbers "m" and "n". That algorithm introduced the idea of dividing to extract a new remainder – and then dividing by the new remainder repeatedly.
Nearly two thousand years passed before Rafael Bombelli devised a technique for approximating the roots of quadratic equations with continued fractions in the mid-sixteenth century. Now the pace of development quickened. Just 24 years later, in 1613, Pietro Cataldi introduced the first formal notation for the generalized continued fraction. Cataldi represented a continued fraction as
formula_4
with the dots indicating where the next fraction goes, and each & representing a modern plus sign.
Late in the seventeenth century John Wallis introduced the term "continued fraction" into mathematical literature. New techniques for mathematical analysis (Newton's and Leibniz's calculus) had recently come onto the scene, and a generation of Wallis' contemporaries put the new phrase to use.
In 1748 Euler published a theorem showing that a particular kind of continued fraction is equivalent to a certain very general infinite series. Euler's continued fraction formula is still the basis of many modern proofs of convergence of continued fractions.
In 1761, Johann Heinrich Lambert gave the first proof that π is irrational, by using the following continued fraction for tan "x":
formula_5
Continued fractions can also be applied to problems in number theory, and are especially useful in the study of Diophantine equations. In the late eighteenth century Lagrange used continued fractions to construct the general solution of Pell's equation, thus answering a question that had fascinated mathematicians for more than a thousand years. Lagrange's discovery implies that the canonical continued fraction expansion of the square root of every non-square integer is periodic and that, if the period is of length "p" > 1, it contains a palindromic string of length "p" − 1.
In 1813 Gauss derived from complex-valued hypergeometric functions what is now called Gauss's continued fractions. They can be used to express many elementary functions and some more advanced functions (such as the Bessel functions), as continued fractions that are rapidly convergent almost everywhere in the complex plane.
Notation.
The long continued fraction expression displayed in the introduction is easy for an unfamiliar reader to interpret. However, it takes up a lot of space and can be difficult to typeset. So mathematicians have devised several alternative notations. One convenient way to express a generalized continued fraction sets each nested fraction on the same line, indicating the nesting by dangling plus signs in the denominators:
formula_6
Sometimes the plus signs are typeset to vertically align with the denominators but not under the fraction bars:
formula_7
Pringsheim wrote a generalized continued fraction this way:
formula_8
Carl Friedrich Gauss evoked the more familiar infinite product Π when he devised this notation:
formula_9
Here the "K" stands for "Kettenbruch", the German word for "continued fraction". This is probably the most compact and convenient way to express continued fractions; however, it is not widely used by English typesetters.
Some elementary considerations.
Here are some elementary results that are of fundamental importance in the further development of the analytic theory of continued fractions.
Partial numerators and denominators.
If one of the partial numerators "a""n" + 1 is zero, the infinite continued fraction
formula_10
is really just a finite continued fraction with "n" fractional terms, and therefore a rational function of "a"1 to "a""n" and "b"0 to "b""n". Such an object is of little interest from the point of view adopted in mathematical analysis, so it is usually assumed that all "a""i" ≠ 0. There is no need to place this restriction on the partial denominators "b""i".
The determinant formula.
When the "n"th convergent of a continued fraction
formula_11
is expressed as a simple fraction "x""n" = "A""n"/"B""n", we can use the "determinant formula"
"A""n"−1"B""n" − "A""n""B""n"−1 = (−1)"n" "a"1"a"2⋯"a""n"    (1)
to relate the numerators and denominators of successive convergents "x""n" and "x""n" − 1 to one another.
The proof for this can be easily seen by induction.
Base case
The case "n"
1 results from a very simple computation.
Inductive step
Assume that (1) holds for "n" − 1. Then we need to see the same relation holding true for "n". Substituting the values of "A""n" and "B""n" in (1) we obtain:
formula_12
which is true because of our induction hypothesis.
formula_13
Specifically, if neither "B""n" nor "B""n" − 1 is zero ("n" > 0) we can express the difference between the ("n" − 1)th and "n"th convergents like this:
formula_14
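A quick numerical check of the determinant formula, as an illustrative sketch (the random coefficients and the variable names are arbitrary; exact integer arithmetic makes the assertion exact):

import random

random.seed(0)
a = [random.randint(1, 5) for _ in range(10)]    # partial numerators a_1..a_10
b = [random.randint(1, 5) for _ in range(10)]    # partial denominators b_1..b_10
A_prev, A, B_prev, B = 1, 3, 0, 1                # A_(-1), A_0 = b_0 = 3, B_(-1), B_0
product = 1
for a_n, b_n in zip(a, b):
    A_prev, A = A, b_n * A + a_n * A_prev
    B_prev, B = B, b_n * B + a_n * B_prev
    product *= -a_n
    assert A_prev * B - A * B_prev == product    # A_(n-1)B_n - A_nB_(n-1) = (-1)^n a_1...a_n
print("determinant formula holds for all", len(a), "steps")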
The equivalence transformation.
If {"c""i"}
{"c"1, "c"2, "c"3, ...} is any infinite sequence of non-zero complex numbers we can prove, by induction, that
formula_15
where equality is understood as equivalence, which is to say that the successive convergents of the continued fraction on the left are exactly the same as the convergents of the fraction on the right.
The equivalence transformation is perfectly general, but two particular cases deserve special mention. First, if none of the "a""i" are zero, a sequence {"c""i"} can be chosen to make each partial numerator a 1:
formula_16
where "c"1
, "c"2
, "c"3
, and in general "c""n" + 1
Second, if none of the partial denominators "b""i" are zero we can use a similar procedure to choose another sequence {"d""i"} to make each partial denominator a 1:
formula_17
where "d"1
and otherwise "d""n" + 1
These two special cases of the equivalence transformation are enormously useful when the general convergence problem is analyzed.
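The first special case can be checked with a short sketch (illustrative only; the starting coefficients are arbitrary), which chooses the "c""i" as above and confirms that the convergents are unchanged:

from fractions import Fraction

def convergent(b0, a, b, n):
    # n-th convergent of b_0 + K(a_i/b_i) via the fundamental recurrence.
    A_prev, A, B_prev, B = Fraction(1), Fraction(b0), Fraction(0), Fraction(1)
    for a_i, b_i in zip(a[:n], b[:n]):
        A_prev, A = A, b_i * A + a_i * A_prev
        B_prev, B = B, b_i * B + a_i * B_prev
    return A / B

a = [Fraction(v) for v in (2, 3, 5, 7, 11)]
b = [Fraction(v) for v in (1, 4, 1, 4, 1)]
c = []
for i, a_i in enumerate(a):
    c.append(1 / a_i if i == 0 else 1 / (a_i * c[-1]))   # c_1 = 1/a_1, c_(n+1) = 1/(a_(n+1)c_n)
a_new = [Fraction(1)] * len(a)                           # every partial numerator becomes 1
b_new = [c_i * b_i for c_i, b_i in zip(c, b)]            # denominators become c_i * b_i
for n in range(1, len(a) + 1):
    assert convergent(3, a, b, n) == convergent(3, a_new, b_new, n)
print("convergents agree term by term")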
Notions of convergence.
As mentioned in the introduction, the continued fraction
formula_18
converges if the sequence of convergents {"x""n"} tends to a finite limit. This notion of convergence is very natural, but it is sometimes too restrictive. It is therefore useful to introduce the notion of general convergence of a continued fraction. Roughly speaking, this consists in replacing the formula_19 part of the fraction by "w""n", instead of by 0, to compute the convergents. The convergents thus obtained are called "modified convergents". We say that the continued fraction "converges generally" if there exists a sequence formula_20 such that the sequence of modified convergents converges for all formula_21 sufficiently distinct from formula_20. The sequence formula_20 is then called an "exceptional sequence" for the continued fraction. See Chapter 2 of for a rigorous definition.
There also exists a notion of absolute convergence for continued fractions, which is based on the notion of absolute convergence of a series: a continued fraction is said to be "absolutely convergent" when the series
formula_22
where formula_23 are the convergents of the continued fraction, converges absolutely. The Śleszyński–Pringsheim theorem provides a sufficient condition for absolute convergence.
Finally, a continued fraction of one or more complex variables is "uniformly convergent" in an open neighborhood Ω when its convergents converge uniformly on Ω; that is, when for every "ε" > 0 there exists "M" such that for all "n" > "M", for all formula_24,
formula_25
Even and odd convergents.
It is sometimes necessary to separate a continued fraction into its even and odd parts. For example, if the continued fraction diverges by oscillation between two distinct limit points "p" and "q", then the sequence {"x"0, "x"2, "x"4, ...} must converge to one of these, and {"x"1, "x"3, "x"5, ...} must converge to the other. In such a situation it may be convenient to express the original continued fraction as two different continued fractions, one of them converging to "p", and the other converging to "q".
The formulas for the even and odd parts of a continued fraction can be written most compactly if the fraction has already been transformed so that all its partial denominators are unity. Specifically, if
formula_26
is a continued fraction, then the even part "x"even and the odd part "x"odd are given by
formula_27
and
formula_28
respectively. More precisely, if the successive convergents of the continued fraction "x" are {"x"1, "x"2, "x"3, ...}, then the successive convergents of "x"even as written above are {"x"2, "x"4, "x"6, ...}, and the successive convergents of "x"odd are {"x"1, "x"3, "x"5, ...}.
Conditions for irrationality.
If "a"1, "a"2... and "b"1, "b"2... are positive integers with "ak" ≤ "bk" for all sufficiently large "k", then
formula_18
converges to an irrational limit.
Fundamental recurrence formulas.
The partial numerators and denominators of the fraction's successive convergents are related by the "fundamental recurrence formulas":
formula_29
The continued fraction's successive convergents are then given by
formula_30
These recurrence relations are due to John Wallis (1616–1703) and Leonhard Euler (1707–1783).
These recurrence relations are simply a different notation for the relations obtained by Pietro Antonio Cataldi (1548-1626).
As an example, consider the regular continued fraction in canonical form that represents the golden ratio φ:
formula_31
Applying the fundamental recurrence formulas we find that the successive numerators "A""n" are {1, 2, 3, 5, 8, 13, ...} and the successive denominators "B""n" are {1, 1, 2, 3, 5, 8, ...}, the Fibonacci numbers. Since all the partial numerators in this example are equal to one, the determinant formula assures us that the absolute value of the difference between successive convergents approaches zero quite rapidly.
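A minimal sketch of this example (illustrative; plain integer arithmetic) reproduces the Fibonacci numerators and denominators:

A_prev, A, B_prev, B = 1, 1, 0, 1      # A_(-1), A_0 = b_0 = 1, B_(-1), B_0
for n in range(1, 9):
    A_prev, A = A, A + A_prev          # all partial numerators and denominators equal 1
    B_prev, B = B, B + B_prev
    print(n, A, B, A / B)              # A_n, B_n are consecutive Fibonacci numbers;
                                       # the ratio approaches the golden ratio 1.618...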
Linear fractional transformations.
A linear fractional transformation (LFT) is a complex function of the form
formula_32
where "z" is a complex variable, and "a", "b", "c", "d" are arbitrary complex constants such that "c" + "dz" ≠ 0. An additional restriction that "ad" ≠ "bc" is customarily imposed, to rule out the cases in which "w"
"f"("z") is a constant. The linear fractional transformation, also known as a Möbius transformation, has many fascinating properties. Four of these are of primary importance in developing the analytic theory of continued fractions.
The fixed points of the LFT are obtained from the equation
formula_33
which is clearly a quadratic equation in "z". The roots of this equation are the fixed points of "f"("z"). If the discriminant ("c" − "b")^2 + 4"ad" is zero the LFT fixes a single point; otherwise it has two fixed points.
The LFT has an inverse function
formula_34
such that "f"("g"("z")) = "g"("f"("z")) = "z" for every point "z" in the extended complex plane, and both "f" and "g" preserve angles and shapes at vanishingly small scales. From the form of "z" = "g"("w") we see that "g" is also an LFT.
If "b" = 0 the LFT reduces to
formula_35
which is a very simple meromorphic function of "z" with one simple pole (at −"c"/"d") and a residue equal to "a"/"d". (See also Laurent series.)
The continued fraction as a composition of LFTs.
Consider a sequence of simple linear fractional transformations
formula_36
Here we use "τ" to represent each simple LFT, and we adopt the conventional circle notation for composition of functions. We also introduce a new symbol Τn to represent the composition of "n" + 1 transformations "τi"; that is,
formula_37
and so forth. By direct substitution from the first set of expressions into the second we see that
formula_38
and, in general,
formula_39
where the last partial denominator in the finite continued fraction K is understood to be "b""n" + "z". And, since "b""n" + 0 = "b""n", the image of the point "z" = 0 under the iterated LFT Τ"n" is indeed the value of the finite continued fraction with "n" partial numerators:
formula_40
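The equality of Τ"n"(0) with the "n"th convergent can be checked with a small sketch (illustrative coefficients; exact fractions make the comparison exact):

from fractions import Fraction

a = [Fraction(v) for v in (1, 2, 3, 4)]        # partial numerators
b = [Fraction(v) for v in (2, 3, 4, 5)]        # partial denominators
b0 = Fraction(1)

def T(n, z):
    # Compose tau_0, ..., tau_n and evaluate at z, innermost transformation first:
    # tau_0(z) = b_0 + z and tau_i(z) = a_i/(b_i + z) for i >= 1.
    for i in range(n, 0, -1):
        z = a[i - 1] / (b[i - 1] + z)
    return b0 + z

A_prev, A, B_prev, B = Fraction(1), b0, Fraction(0), Fraction(1)
for n in range(1, len(a) + 1):
    A_prev, A = A, b[n - 1] * A + a[n - 1] * A_prev
    B_prev, B = B, b[n - 1] * B + a[n - 1] * B_prev
    assert T(n, Fraction(0)) == A / B          # T_n(0) equals the n-th convergent
print("T_n(0) matches A_n/B_n for every n tested")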
A geometric interpretation.
Defining a finite continued fraction as the image of a point under the iterated linear fractional transformation Τ"n"("z") leads to an intuitively appealing geometric interpretation of infinite continued fractions.
The relationship
formula_41
can be understood by rewriting Τ"n"("z") and Τ"n" + 1("z") in terms of the fundamental recurrence formulas:
formula_42
In the first of these equations the ratio tends toward "A""n"/"B""n" as "z" tends toward zero. In the second, the ratio tends toward "A""n"/"B""n" as "z" tends to infinity. This leads us to our first geometric interpretation. If the continued fraction converges, the successive convergents are eventually arbitrarily close together. Since the linear fractional transformation Τ"n"("z") is a continuous mapping, there must be a neighborhood of "z" = 0 that is mapped into an arbitrarily small neighborhood of Τ"n"(0) = "x""n". Similarly, there must be a neighborhood of the point at infinity which is mapped into an arbitrarily small neighborhood of Τ"n"(∞) = "x""n" − 1. So if the continued fraction converges the transformation Τ"n"("z") maps both very small "z" and very large "z" into an arbitrarily small neighborhood of "x", the value of the continued fraction, as "n" gets larger and larger.
For intermediate values of "z", since the successive convergents are getting closer together we must have
formula_43
where "k" is a constant, introduced for convenience. But then, by substituting in the expression for Τ"n"("z") we obtain
formula_44
so that even the intermediate values of "z" (except when "z" ≈ −1/"k") are mapped into an arbitrarily small neighborhood of "x", the value of the continued fraction, as "n" gets larger and larger. Intuitively, it is almost as if the convergent continued fraction maps the entire extended complex plane into a single point.
Notice that the sequence {Τ"n"} lies within the automorphism group of the extended complex plane, since each Τ"n" is a linear fractional transformation for which "ad" ≠ "bc". And every member of that automorphism group maps the extended complex plane into itself: not one of the Τ"n" can possibly map the plane into a single point. Yet in the limit the sequence {Τ"n"} defines an infinite continued fraction which (if it converges) represents a single point in the complex plane.
When an infinite continued fraction converges, the corresponding sequence {Τ"n"} of LFTs "focuses" the plane in the direction of "x", the value of the continued fraction. At each stage of the process a larger and larger region of the plane is mapped into a neighborhood of "x", and the smaller and smaller region of the plane that's left over is stretched out ever more thinly to cover everything outside that neighborhood.
For divergent continued fractions, we can distinguish three cases:
Interesting examples of cases 1 and 3 can be constructed by studying the simple continued fraction
formula_45
where "z" is any real number such that "z" < −.
Euler's continued fraction formula.
Euler proved the following identity:
formula_46
From this many other results can be derived, such as
formula_47
and
formula_48
Euler's formula connecting continued fractions and series is the motivation for the fundamental inequalities, and also the basis of elementary approaches to the convergence problem.
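Euler's identity can be verified numerically with a sketch such as the following (the chosen values of "a"0, ..., "a"3 and the function name are arbitrary):

from fractions import Fraction

def euler_cf(a):
    # Right-hand side of the identity:
    # a_0/(1 - a_1/(1 + a_1 - a_2/(1 + a_2 - ... - a_n/(1 + a_n)))).
    z = Fraction(0)
    for a_i in reversed(a[1:]):       # build the fraction from the inside out
        z = a_i / (1 + a_i - z)
    return a[0] / (1 - z)

a = [Fraction(v) for v in (2, 3, 5, 7)]
partial_products, p = [], Fraction(1)
for a_i in a:
    p *= a_i
    partial_products.append(p)        # a_0, a_0a_1, a_0a_1a_2, ...
assert sum(partial_products) == euler_cf(a)
print(sum(partial_products), "equals the continued fraction value")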
Examples.
Transcendental functions and numbers.
Here are two continued fractions that can be built via Euler's identity.
formula_49
formula_50
Here are additional generalized continued fractions:
formula_51
formula_52
formula_53
This last is based on an algorithm derived by Aleksei Nikolaevich Khovansky in the 1970s.
Example: the natural logarithm of 2 (= [0; 1, 2, 3, 1, 5, , 7, , 9, ..., 2"k" − 1, ...] ≈ 0.693147...):
formula_54
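The first continued fraction displayed above can be evaluated with a sketch like this one (the continuation of the pattern of partial numerators 1, 1, 1, 2, 2, 3, 3, ... and partial denominators 1, 2, 3, 2, 5, 2, 7, ... is read off the log(1 + "x"/"y") fraction above with "x" = "y" = 1; the function name is arbitrary):

from fractions import Fraction
from math import log

def log2_cf(pairs):
    # log 2 = 1/(1 + 1/(2 + 1/(3 + 2/(2 + 2/(5 + 3/(2 + ...)))))).
    a = [1] + [j for j in range(1, pairs) for _ in (0, 1)]        # 1, 1, 1, 2, 2, 3, 3, ...
    b = [2 if i % 2 == 0 else i for i in range(1, len(a) + 1)]    # 1, 2, 3, 2, 5, 2, 7, ...
    A_prev, A, B_prev, B = Fraction(1), Fraction(0), Fraction(0), Fraction(1)
    for a_i, b_i in zip(a, b):
        A_prev, A = A, b_i * A + a_i * A_prev
        B_prev, B = B, b_i * B + a_i * B_prev
    return A / B

print(float(log2_cf(15)), log(2))     # the two values agree closely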
π.
Here are three of π's best-known generalized continued fractions, the first and third of which are derived from their respective formulas above by setting "x" = "y" = 1 and multiplying by 4. The Leibniz formula for π:
formula_55
converges too slowly, requiring roughly 3 × 10"n" terms to achieve "n" correct decimal places. The series derived by Nilakantha Somayaji:
formula_56
is a much more obvious expression but still converges quite slowly, requiring nearly 50 terms for five decimals and nearly 120 for six. Both converge "sublinearly" to π. On the other hand:
formula_57
converges "linearly" to π, adding at least three digits of precision per four terms, a pace slightly faster than the arcsine formula for π:
formula_58
which adds at least three decimal digits per five terms.
1 + 1. For the "folded" general continued fractions of both expressions, the rate convergence , hence , whose common logarithm is 1.531... ≈ >, thus adding at least three digits per two terms. This is because the folded GCF folds each pair of fractions from the unfolded GCF into one fraction, thus doubling the convergence pace. The Manny Sardina reference further explains "folded" continued fractions.
formula_59
with "u"
5 and "v"
239.
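The different convergence rates quoted above can be explored with a sketch like the following (illustrative only; it simply evaluates finite truncations of the first and third fractions from the bottom up):

from fractions import Fraction
from math import pi

def cf_value(a, b):
    # Value of the finite continued fraction a_1/(b_1 + a_2/(b_2 + ... + a_n/b_n)).
    z = Fraction(0)
    for a_i, b_i in zip(reversed(a), reversed(b)):
        z = Fraction(a_i) / (b_i + z)
    return z

n = 40
# Leibniz-type fraction: 4/(1 + 1^2/(2 + 3^2/(2 + 5^2/(2 + ...)))).
slow = cf_value([4] + [(2 * k - 1) ** 2 for k in range(1, n)], [1] + [2] * (n - 1))
# Linearly convergent fraction: 4/(1 + 1^2/(3 + 2^2/(5 + 3^2/(7 + ...)))).
fast = cf_value([4] + [k * k for k in range(1, n)], [2 * k - 1 for k in range(1, n + 1)])
print(abs(float(slow) - pi), abs(float(fast) - pi))   # the second error is far smaller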
Roots of positive numbers.
The "n"th root of any positive number "z""m" can be expressed by restating "z"
"x""n" + "y", resulting in
formula_60
which can be simplified, by folding each pair of fractions into one fraction, to
formula_61
The square root of "z" is a special case with "m"
1 and "n"
2:
formula_62
which can be simplified by noting that the fractions 3"y"/6"x", 5"y"/10"x", ... each reduce to "y"/2"x":
formula_63
The square root can also be expressed by a periodic continued fraction, but the above form converges more quickly with the proper "x" and "y".
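A sketch of the simplified square-root fraction (illustrative only; it assumes "y" = "z" − "x"^2 and evaluates a finite truncation in floating point, innermost term first):

def sqrt_cf(z, x, terms):
    # Approximate sqrt(z), with z = x*x + y, via x + y/(2x + y/(2x + y/(2x + ...))).
    y = z - x * x
    t = 0.0
    for _ in range(terms):
        t = y / (2 * x + t)
    return x + t

print(sqrt_cf(2, 1, 20))        # approximately 1.41421356...
print(sqrt_cf(1000, 31, 8))     # a starting x close to sqrt(z) needs very few terms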
Example 1.
The cube root of two (2^(1/3) ≈ 1.259921...) can be calculated in two ways:
Firstly, "standard notation" of "x" = 1, "y" = 1, and 2"z" − "y" = 3:
formula_64
Secondly, a rapid convergence with "x" = 5, "y" = 3 and 2"z" − "y" = 253:
formula_65
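Both computations are instances of the general folded formula above; a sketch such as the following (illustrative; it evaluates a finite truncation of that formula in floating point) lets the two parameter choices be compared, and, as the text notes, the second choice converges faster:

def nth_root_folded(x, y, m, n, terms):
    # Folded fraction for (x**n + y)**(m/n):
    # x^m + 2*x^m*m*y/(n(2x^n + y) - m*y - (1^2 n^2 - m^2)y^2/(3n(2x^n + y) - ...)).
    big = n * (2 * x ** n + y)
    t = 0.0
    for k in range(terms, 0, -1):               # innermost partial fraction first
        t = ((k * k * n * n - m * m) * y * y) / ((2 * k + 1) * big - t)
    return x ** m + (2 * x ** m * m * y) / (big - m * y - t)

print(nth_root_folded(1, 1, 1, 3, 10))          # cube root of 2 from 2 = 1^3 + 1
print(nth_root_folded(5, 3, 1, 3, 10) / 4)      # cube root of 2 from 128 = 5^3 + 3, divided by 4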
Example 2.
Pogson's ratio (100^(1/5) ≈ 2.511886...), with "x" = 5, "y" = 75 and 2"z" − "y" = 6325:
formula_66
Example 3.
The twelfth root of two (2^(1/12) ≈ 1.059463...), using "standard notation":
formula_67
Example 4.
Equal temperament's perfect fifth (2^(7/12) ≈ 1.498307...), with "m" = 7:
With "standard notation":
formula_68
A rapid convergence with "x"
3, "y"
−7153, and 2"z" − "y"
219 + 312:
formula_69
formula_70
More details on this technique can be found in "General Method for Extracting Roots using (Folded) Continued Fractions".
Higher dimensions.
Another meaning for generalized continued fraction is a generalization to higher dimensions. For example, there is a close relationship between the simple continued fraction in canonical form for the irrational real number "α", and the way lattice points in two dimensions lie to either side of the line "y" = "αx". Generalizing this idea, one might ask about something related to lattice points in three or more dimensions. One reason to study this area is to quantify the mathematical coincidence idea; for example, for monomials in several real numbers, take the logarithmic form and consider how small it can be. Another reason is to find a possible solution to Hermite's problem.
There have been numerous attempts to construct a generalized theory. Notable efforts in this direction were made by Felix Klein (the Klein polyhedron), Georges Poitou and George Szekeres.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "x = b_0 + \\cfrac{a_1}{b_1 + \\cfrac{a_2}{b_2 + \\cfrac{a_3}{b_3 + \\cfrac{a_4}{b_4 + \\ddots\\,}}}}"
},
{
"math_id": 1,
"text": "\\begin{align}\nx_0 &= \\frac{A_0}{B_0} = b_0, \\\\[4px]\nx_1 &= \\frac{A_1}{B_1} = \\frac{b_1b_0+a_1}{b_1}, \\\\[4px]\nx_2 &= \\frac{A_2}{B_2} = \\frac{b_2(b_1b_0+a_1) + a_2b_0}{b_2b_1 + a_2},\\ \\dots\n\\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align}\nA_n &= b_n A_{n-1} + a_n A_{n-2}, \\\\\nB_n &= b_n B_{n-1} + a_n B_{n-2} \\qquad \\text{for } n \\ge 1 \n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\nA_{-1} &= 1,& A_0&=b_0,\\\\ B_{-1}&=0, & B_0&=1.\n\\end{align}"
},
{
"math_id": 4,
"text": "{a_0\\cdot} \\,\\&\\, \\frac{n_1}{d_1\\cdot} \\,\\&\\, \\frac{n_2}{d_2\\cdot} \\,\\&\\, \\frac{n_3}{d_3}"
},
{
"math_id": 5,
"text": "\\tan(x) = \\cfrac{x}{1 + \\cfrac{-x^2}{3 + \\cfrac{-x^2}{5 + \\cfrac{-x^2}{7 + {}\\ddots}}}}"
},
{
"math_id": 6,
"text": "\nx = b_0+\n\\frac{a_1}{b_1+}\\,\n\\frac{a_2}{b_2+}\\,\n\\frac{a_3}{b_3+}\\cdots\n"
},
{
"math_id": 7,
"text": "\nx = b_0 +\n\\frac{a_1}{b_1}{{}\\atop+}\n\\frac{a_2}{b_2}{{}\\atop+}\n\\frac{a_3}{b_3}{{}\\atop+\\cdots}\n"
},
{
"math_id": 8,
"text": "\nx = b_0 + \\frac{a_1 \\mid}{\\mid b_1} + \\frac{a_2 \\mid}{\\mid b_2} + \\frac{a_3 \\mid}{\\mid b_3}+\\cdots.\\,\n"
},
{
"math_id": 9,
"text": "\nx = b_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{a_i}{b_i}.\\,\n"
},
{
"math_id": 10,
"text": "\nb_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{a_i}{b_i}\\,\n"
},
{
"math_id": 11,
"text": "\nx_n = b_0 + \\underset{i=1}\\overset{n}\\operatorname{K} \\frac{a_i}{b_i}\\,\n"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n &=b_n A_{n-1} B_{n-1} + a_n A_{n-1} B_{n-2} - b_n A_{n-1} B_{n-1} - a_n A_{n-2} B_{n-1} \\\\\n &=a_n(A_{n-1}B_{n-2} - A_{n-2} B_{n-1})\n\\end{align}\n"
},
{
"math_id": 13,
"text": "\nA_{n-1}B_n - A_nB_{n-1} = \\left(-1\\right)^na_1a_2\\cdots a_n = \\prod_{i=1}^n (-a_i)\\,\n"
},
{
"math_id": 14,
"text": "\nx_{n-1} - x_n = \\frac{A_{n-1}}{B_{n-1}} - \\frac{A_n}{B_n} = \n\\left(-1\\right)^n \\frac{a_1a_2\\cdots a_n}{B_nB_{n-1}} = \\frac{\\prod_{i=1}^n (-a_i)}{B_nB_{n-1}}.\\,\n"
},
{
"math_id": 15,
"text": "\nb_0 + \\cfrac{a_1}{b_1 + \\cfrac{a_2}{b_2 + \\cfrac{a_3}{b_3 + \\cfrac{a_4}{b_4 + \\ddots\\,}}}} =\nb_0 + \\cfrac{c_1a_1}{c_1b_1 + \\cfrac{c_1c_2a_2}{c_2b_2 + \\cfrac{c_2c_3a_3}{c_3b_3 + \\cfrac{c_3c_4a_4}{c_4b_4 + \\ddots\\,}}}}\n"
},
{
"math_id": 16,
"text": "\nb_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{a_i}{b_i} = \nb_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{1}{c_i b_i}\\,\n"
},
{
"math_id": 17,
"text": "\nb_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{a_i}{b_i} = \nb_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{d_i a_i}{1}\\,\n"
},
{
"math_id": 18,
"text": "\nx = b_0 + \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{a_i}{b_i}\\,\n"
},
{
"math_id": 19,
"text": "\\operatorname{K}_{i = n}^\\infty \\tfrac{a_i}{b_i}"
},
{
"math_id": 20,
"text": "\\{w_n^*\\}"
},
{
"math_id": 21,
"text": "\\{w_n\\}"
},
{
"math_id": 22,
"text": " f = \\sum_n \\left( f_n - f_{n-1}\\right),"
},
{
"math_id": 23,
"text": "f_n = \\operatorname{K}_{i = 1}^n \\tfrac{a_i}{b_i}"
},
{
"math_id": 24,
"text": "z \\in \\Omega"
},
{
"math_id": 25,
"text": "\n|f(z) - f_n(z)| < \\varepsilon.\n"
},
{
"math_id": 26,
"text": "\nx = \\underset{i=1}\\overset{\\infty}\\operatorname{K} \\frac{a_i}{1}\\,\n"
},
{
"math_id": 27,
"text": "\nx_\\text{even} = \\cfrac{a_1}{1+a_2-\\cfrac{a_2a_3} {1+a_3+a_4-\\cfrac{a_4a_5} {1+a_5+a_6-\\cfrac{a_6a_7} {1+a_7+a_8-\\ddots}}}}\\,\n"
},
{
"math_id": 28,
"text": "\nx_\\text{odd} = a_1 - \\cfrac{a_1a_2}{1+a_2+a_3-\\cfrac{a_3a_4} {1+a_4+a_5-\\cfrac{a_5a_6} {1+a_6+a_7-\\cfrac{a_7a_8} {1+a_8+a_9-\\ddots}}}}\\,\n"
},
{
"math_id": 29,
"text": "\n\\begin{align}\nA_{-1}& = 1& B_{-1}& = 0\\\\\nA_0& = b_0& B_0& = 1\\\\\nA_{n+1}& = b_{n+1} A_n + a_{n+1} A_{n-1}& B_{n+1}& = b_{n+1} B_n + a_{n+1} B_{n-1}\\,\n\\end{align}\n"
},
{
"math_id": 30,
"text": "x_n=\\frac{A_n}{B_n}.\\,"
},
{
"math_id": 31,
"text": "x = 1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\ddots\\,}}}} "
},
{
"math_id": 32,
"text": "\nw = f(z) = \\frac{a + bz}{c + dz},\\,\n"
},
{
"math_id": 33,
"text": "\nf(z) = z \\Rightarrow dz^2 + cz = a + bz\\,\n"
},
{
"math_id": 34,
"text": "\nz = g(w) = \\frac{-a + cw}{b - dw}\\,\n"
},
{
"math_id": 35,
"text": "\nw = f(z) = \\frac{a}{c + dz},\\,\n"
},
{
"math_id": 36,
"text": "\\begin{align}\n\\tau_0(z) &= b_0 + z, \\\\[4px] \\tau_1(z) &= \\frac{a_1}{b_1 + z}, \\\\[4px]\n\\tau_2(z) &= \\frac{a_2}{b_2 + z},\\\\[4px] \\tau_3(z) &= \\frac{a_3}{b_3 + z},\\\\&\\;\\vdots\n\\end{align}"
},
{
"math_id": 37,
"text": "\\begin{align}\n\\boldsymbol{\\Tau}_\\boldsymbol{1}(z) &= \\tau_0\\circ\\tau_1(z) = \\tau_0\\big(\\tau_1(z)\\big),\\\\\n\\boldsymbol{\\Tau}_\\boldsymbol{2}(z) &= \\tau_0\\circ\\tau_1\\circ\\tau_2(z) = \\tau_0\\Big(\\tau_1\\big(\\tau_2(z)\\big)\\Big),\\,\n\\end{align}"
},
{
"math_id": 38,
"text": "\n\\begin{align}\n\\boldsymbol{\\Tau}_\\boldsymbol{1}(z)& = \\tau_0\\circ\\tau_1(z)& =&\\quad b_0 + \\cfrac{a_1}{b_1 + z}\\\\[4px]\n\\boldsymbol{\\Tau}_\\boldsymbol{2}(z)& = \\tau_0\\circ\\tau_1\\circ\\tau_2(z)& =&\\quad b_0 + \\cfrac{a_1}{b_1 + \\cfrac{a_2}{b_2 + z}}\\,\n\\end{align}\n"
},
{
"math_id": 39,
"text": "\n\\boldsymbol{\\Tau}_\\boldsymbol{n}(z) = \\tau_0\\circ\\tau_1\\circ\\tau_2\\circ\\cdots\\circ\\tau_n(z) =\nb_0 + \\underset{i=1}\\overset{n}\\operatorname{K} \\frac{a_i}{b_i}\\,\n"
},
{
"math_id": 40,
"text": "\n\\boldsymbol{\\Tau}_\\boldsymbol{n}(0) = \\boldsymbol{\\Tau}_\\boldsymbol{n+1}(\\infty) = \nb_0 + \\underset{i=1}\\overset{n}\\operatorname{K} \\frac{a_i}{b_i}.\\,\n"
},
{
"math_id": 41,
"text": "\nx_n = b_0 + \\underset{i=1}\\overset{n}\\operatorname{K} \\frac{a_i}{b_i} = \\frac{A_n}{B_n} = \\boldsymbol{\\Tau}_{\\boldsymbol{n}}(0) = \\boldsymbol{\\Tau}_{\\boldsymbol{n+1}}(\\infty)\\,\n"
},
{
"math_id": 42,
"text": "\n\\begin{align}\n\\boldsymbol{\\Tau}_{\\boldsymbol{n}}(z)& = \\frac{(b_n+z)A_{n-1} + a_nA_{n-2}}{(b_n+z)B_{n-1} + a_nB_{n-2}}& \\boldsymbol{\\Tau}_{\\boldsymbol{n}}(z)& = \\frac{zA_{n-1} + A_n}{zB_{n-1} + B_n};\\\\[6px]\n\\boldsymbol{\\Tau}_{\\boldsymbol{n+1}}(z)& = \\frac{(b_{n+1}+z)A_n + a_{n+1}A_{n-1}}{(b_{n+1}+z)B_n + a_{n+1}B_{n-1}}& \\boldsymbol{\\Tau}_{\\boldsymbol{n+1}}(z)& = \\frac{zA_n + A_{n+1}} {zB_n + B_{n+1}}.\\,\n\\end{align}\n"
},
{
"math_id": 43,
"text": "\n\\frac{A_{n-1}}{B_{n-1}} \\approx \\frac{A_n}{B_n} \\quad\\Rightarrow\\quad \n\\frac{A_{n-1}}{A_n} \\approx \\frac{B_{n-1}}{B_n} = k\\,\n"
},
{
"math_id": 44,
"text": "\n\\boldsymbol{\\Tau}_{\\boldsymbol{n}}(z) = \\frac{zA_{n-1} + A_n}{zB_{n-1} + B_n}\n= \\frac{A_n}{B_n} \\left(\\frac{z\\frac{A_{n-1}}{A_n} + 1}{z\\frac{B_{n-1}}{B_n} + 1}\\right)\n\\approx \\frac{A_n}{B_n} \\left(\\frac{zk + 1}{zk + 1}\\right) = \\frac{A_n}{B_n}\\,\n"
},
{
"math_id": 45,
"text": "\nx = 1 + \\cfrac{z}{1 + \\cfrac{z}{1 + \\cfrac{z}{1 + \\cfrac{z}{1 + \\ddots}}}}\\,\n"
},
{
"math_id": 46,
"text": "\na_0 + a_0a_1 + a_0a_1a_2 + \\cdots + a_0a_1a_2\\cdots a_n =\n\\frac{a_0}{1-}\n\\frac{a_1}{1+a_1-}\n\\frac{a_2}{1+a_2-}\\cdots\n\\frac{a_{n}}{1+a_n}.\\,\n"
},
{
"math_id": 47,
"text": "\n\\frac{1}{u_1}+\n\\frac{1}{u_2}+\n\\frac{1}{u_3}+\n\\cdots+\n\\frac{1}{u_n} =\n\\frac{1}{u_1-}\n\\frac{u_1^2}{u_1+u_2-}\n\\frac{u_2^2}{u_2+u_3-}\\cdots\n\\frac{u_{n-1}^2}{u_{n-1}+u_n},\\,\n"
},
{
"math_id": 48,
"text": "\n\\frac{1}{a_0} + \\frac{x}{a_0a_1} + \\frac{x^2}{a_0a_1a_2} + \\cdots +\n\\frac{x^n}{a_0a_1a_2 \\ldots a_n} =\n\\frac{1}{a_0-}\n\\frac{a_0x}{a_1+x-}\n\\frac{a_1x}{a_2+x-}\\cdots\n\\frac{a_{n-1}x}{a_n+x}.\\,\n"
},
{
"math_id": 49,
"text": "\ne^x = \\frac{x^0}{0!} + \\frac{x^1}{1!} + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!} + \\cdots \n= 1+\\cfrac{x} {1-\\cfrac{1x} {2+x-\\cfrac{2x} {3+x-\\cfrac{3x} {4+x-\\ddots}}}}\n"
},
{
"math_id": 50,
"text": "\n\\log(1+x) = \\frac{x^1}{1} - \\frac{x^2}{2} + \\frac{x^3}{3} - \\frac{x^4}{4} + \\cdots \n=\\cfrac{x} {1-0x+\\cfrac{1^2x} {2-1x+\\cfrac{2^2x} {3-2x+\\cfrac{3^2x} {4-3x+\\ddots}}}}\n"
},
{
"math_id": 51,
"text": "\n\\arctan\\cfrac{x}{y}=\\cfrac{xy} {1y^2+\\cfrac{(1xy)^2} {3y^2-1x^2+\\cfrac{(3xy)^2} {5y^2-3x^2+\\cfrac{(5xy)^2} {7y^2-5x^2+\\ddots}}}}\n=\\cfrac{x} {1y+\\cfrac{(1x)^2} {3y+\\cfrac{(2x)^2} {5y+\\cfrac{(3x)^2} {7y+\\ddots}}}}\n"
},
{
"math_id": 52,
"text": "\ne^\\frac{x}{y} = 1+\\cfrac{2x} {2y-x+\\cfrac{x^2} {6y+\\cfrac{x^2} {10y+\\cfrac{x^2} {14y+\\cfrac{x^2} {18y+\\ddots}}}}} \\quad\\Rightarrow\\quad\ne^2 = 7+\\cfrac{2} {5+\\cfrac{1} {7+\\cfrac{1} {9+\\cfrac{1} {11+\\ddots}}}}\n"
},
{
"math_id": 53,
"text": "\n\\log \\left( 1+\\frac{x}{y} \\right) = \\cfrac{x} {y+\\cfrac{1x} {2+\\cfrac{1x} {3y+\\cfrac{2x} {2+\\cfrac{2x} {5y+\\cfrac{3x} {2+\\ddots}}}}}} \n= \\cfrac{2x} {2y+x-\\cfrac{(1x)^2} {3(2y+x)-\\cfrac{(2x)^2} {5(2y+x)-\\cfrac{(3x)^2} {7(2y+x)-\\ddots}}}}\n"
},
{
"math_id": 54,
"text": "\n\\log 2 = \\log (1+1) = \\cfrac{1} {1+\\cfrac{1} {2+\\cfrac{1} {3+\\cfrac{2} {2+\\cfrac{2} \n{5+\\cfrac{3} {2+\\ddots}}}}}} \n= \\cfrac{2} {3-\\cfrac{1^2} {9-\\cfrac{2^2} {15-\\cfrac{3^2} {21-\\ddots}}}}\n"
},
{
"math_id": 55,
"text": "\n\\pi = \\cfrac{4} {1+\\cfrac{1^2} {2+\\cfrac{3^2} {2+\\cfrac{5^2} {2+\\ddots}}}}\n= \\sum_{n=0}^\\infty \\frac{4(-1)^n}{2n+1} \n= \\frac{4}{1} - \\frac{4}{3} + \\frac{4}{5} - \\frac{4}{7} +- \\cdots\n"
},
{
"math_id": 56,
"text": "\n\\pi = 3 + \\cfrac{1^2} {6+\\cfrac{3^2} {6+\\cfrac{5^2} {6+\\ddots}}}\n= 3 - \\sum_{n=1}^\\infty \\frac{(-1)^n} {n (n+1) (2n+1)} \n= 3 + \\frac{1}{1\\cdot 2\\cdot 3} - \\frac{1}{2\\cdot 3\\cdot 5} + \\frac{1}{3\\cdot 4\\cdot 7} -+ \\cdots\n"
},
{
"math_id": 57,
"text": "\n\\pi = \\cfrac{4} {1+\\cfrac{1^2} {3+\\cfrac{2^2} {5+\\cfrac{3^2} {7+\\ddots}}}}\n= 4 - 1 + \\frac{1}{6} - \\frac{1}{34} + \\frac {16}{3145} - \\frac{4}{4551} + \\frac{1}{6601} - \\frac{1}{38341} +- \\cdots\n"
},
{
"math_id": 58,
"text": "\n\\pi = 6 \\sin^{-1} \\left( \\frac{1}{2} \\right) \n= \\sum_{n=0}^\\infty \\frac {3 \\cdot \\binom {2n} {n}} {16^n (2n+1)}\n= \\frac {3} {16^0 \\cdot 1} + \\frac {6} {16^1 \\cdot 3} + \\frac {18} {16^2 \\cdot 5} + \\frac {60} {16^3 \\cdot 7} + \\cdots\\!\n"
},
{
"math_id": 59,
"text": "\n\\pi = 16 \\tan^{-1} \\cfrac{1}{5}\\, -\\, 4 \\tan^{-1} \\cfrac{1}{239} \n= \\cfrac{16} {u+\\cfrac{1^2} {3u+\\cfrac{2^2} {5u+\\cfrac{3^2} {7u+\\ddots}}}} \\,\n-\\, \\cfrac{4} {v+\\cfrac{1^2} {3v+\\cfrac{2^2} {5v+\\cfrac{3^2} {7v+\\ddots}}}}.\n"
},
{
"math_id": 60,
"text": "\n\\sqrt[n]{z^m} = \\sqrt[n]{\\left(x^n+y\\right)^m} = x^m+\\cfrac{my} {nx^{n-m}+\\cfrac{(n-m)y} {2x^m+\\cfrac{(n+m)y} {3nx^{n-m}+\\cfrac{(2n-m)y} {2x^m+\\cfrac{(2n+m)y} {5nx^{n-m}+\\cfrac{(3n-m)y} {2x^m+\\ddots}}}}}}\n"
},
{
"math_id": 61,
"text": "\n\\sqrt[n]{z^m} = x^m+\\cfrac{2x^m \\cdot my} {n(2x^n + y)-my-\\cfrac{(1^2n^2-m^2)y^2} {3n(2x^n + y)-\\cfrac{(2^2n^2-m^2)y^2} {5n(2x^n + y)-\\cfrac{(3^2n^2-m^2)y^2} {7n(2x^n + y)-\\cfrac{(4^2n^2-m^2)y^2} {9n(2x^n + y)-\\ddots}}}}}.\n"
},
{
"math_id": 62,
"text": "\n\\sqrt{z} = \\sqrt{x^2+y} = x+\\cfrac{y} {2x+\\cfrac{y} {2x+\\cfrac{3y} {6x+\\cfrac{3y} {2x+\\ddots}}}} \n= x+\\cfrac{2x \\cdot y} {2(2x^2 + y)-y-\\cfrac{1\\cdot 3y^2} {6(2x^2 + y)-\\cfrac{3\\cdot 5y^2} {10(2x^2 + y)-\\ddots}}}\n"
},
{
"math_id": 63,
"text": "\n\\sqrt{z} = \\sqrt{x^2+y} = x+\\cfrac{y} {2x+\\cfrac{y} {2x+\\cfrac{y} {2x+\\cfrac{y} {2x+\\ddots}}}} \n= x+\\cfrac{2x \\cdot y} {2(2x^2 + y)-y-\\cfrac{y^2} {2(2x^2 + y)-\\cfrac{y^2} {2(2x^2 + y)-\\ddots}}}.\n"
},
{
"math_id": 64,
"text": "\n\\sqrt[3]2 = 1+\\cfrac{1} {3+\\cfrac{2} {2+\\cfrac{4} {9+\\cfrac{5} {2+\\cfrac{7} {15+\\cfrac{8} {2+\\cfrac{10} {21+\\cfrac{11} {2+\\ddots}}}}}}}} = 1+\\cfrac{2 \\cdot 1} {9-1-\\cfrac{2 \\cdot 4} {27-\\cfrac{5 \\cdot 7} {45-\\cfrac{8 \\cdot 10} {63-\\cfrac{11 \\cdot 13} {81-\\ddots}}}}}.\n"
},
{
"math_id": 65,
"text": "\n\\sqrt[3]2 = \\cfrac{5}{4}+\\cfrac{0.5} {50+\\cfrac{2} {5+\\cfrac{4} {150+\\cfrac{5} {5+\\cfrac{7} {250+\\cfrac{8} {5+\\cfrac{10} {350+\\cfrac{11} {5+\\ddots}}}}}}}} = \\cfrac{5}{4}+\\cfrac{2.5 \\cdot 1} {253-1-\\cfrac{2 \\cdot 4} {759-\\cfrac{5 \\cdot 7} {1265-\\cfrac{8 \\cdot 10} {1771-\\ddots}}}}.\n"
},
{
"math_id": 66,
"text": "\n\\sqrt[5]{100} = \\cfrac{5}{2}+\\cfrac{3} {250+\\cfrac{12} {5+\\cfrac{18} {750+\\cfrac{27} {5+\\cfrac{33} {1250+\\cfrac{42} {5+\\ddots}}}}}} = \\cfrac{5}{2}+\\cfrac{5\\cdot 3} {1265-3-\\cfrac{12 \\cdot 18} {3795-\\cfrac{27 \\cdot 33} {6325-\\cfrac{42 \\cdot 48} {8855-\\ddots}}}}.\n"
},
{
"math_id": 67,
"text": "\n\\sqrt[12]2 = 1+\\cfrac{1} {12+\\cfrac{11} {2+\\cfrac{13} {36+\\cfrac{23} {2+\\cfrac{25} {60+\\cfrac{35} {2+\\cfrac{37} {84+\\cfrac{47} {2+\\ddots}}}}}}}} = 1+\\cfrac{2 \\cdot 1} {36-1 - \\cfrac{11 \\cdot 13} {108-\\cfrac{23 \\cdot 25} {180-\\cfrac{35 \\cdot 37} {252-\\cfrac{47 \\cdot 49} {324-\\ddots}}}}}.\n"
},
{
"math_id": 68,
"text": "\n\\sqrt[12]{2^7} = 1+\\cfrac{7} {12+\\cfrac{5} {2+\\cfrac{19} {36+\\cfrac{17} {2+\\cfrac{31} {60+\\cfrac{29} {2+\\cfrac{43} {84+\\cfrac{41} {2+\\ddots}}}}}}}} = 1+\\cfrac{2 \\cdot 7} {36-7 - \\cfrac{5 \\cdot 19} {108-\\cfrac{17 \\cdot 31} {180-\\cfrac{29 \\cdot 43} {252-\\cfrac{41 \\cdot 55} {324-\\ddots}}}}}.\n"
},
{
"math_id": 69,
"text": "\\sqrt[12]{2^7} = \\cfrac{1}{2} \\sqrt[12]{3^{12}-7153} = \\cfrac{3}{2} - \\cfrac{0.5 \\cdot 7153}{4\\cdot 3^{12} - \\cfrac{11\\cdot 7153}{6 - \\cfrac{13\\cdot 7153}{12\\cdot 3^{12} \n- \\cfrac{23\\cdot 7153}{6 - \\cfrac{25\\cdot 7153}{20\\cdot 3^{12} - \\cfrac{35\\cdot 7153}{6 - \\cfrac{37\\cdot 7153}{28\\cdot 3^{12} - \\cfrac{47\\cdot 7153}{6 - \\ddots}}}}}}}} "
},
{
"math_id": 70,
"text": "\\sqrt[12]{2^7} = \\cfrac{3}{2} - \\cfrac{3\\cdot 7153}{12(2^{19}+3^{12}) + 7153 - \\cfrac{11\\cdot 13\\cdot 7153^2}{36(2^{19}+3^{12}) \n- \\cfrac{23\\cdot 25\\cdot 7153^2}{60(2^{19}+3^{12}) - \\cfrac{35\\cdot 37\\cdot 7153^2}{84(2^{19}+3^{12}) - \\ddots}}}}. "
}
]
| https://en.wikipedia.org/wiki?curid=739199 |
73919978 | 2023 FIDE Circuit | Series of chess tournaments
Sports season
The 2023 FIDE Circuit was a system comprising the top chess tournaments in 2023, which serves as a qualification path for the Candidates Tournament 2024. Players receive points based on their performance and the strength of the tournament. A player's final Circuit score is the sum of their five best results of the year. The winner of the Circuit qualifies for the Candidates Tournament 2024 in Toronto, Canada, the winner of which qualifies for the World Chess Championship 2024.
Since the winner of the Circuit (Fabiano Caruana) had already qualified to the 2024 Candidates Tournament via the Chess World Cup 2023, the second-place finisher in the Circuit, Gukesh D, qualified to the 2024 Candidates.
Tournament eligibility.
A FIDE-rated individual standard tournament is eligible for the Circuit if it meets the following criteria:
The Circuit also includes the following tournaments:
Points system.
Event points.
Circuit points obtained by a player from a tournament are calculated as follows:
formula_0
where:
Basic points.
Basic points for a tournament are awarded to players who placed in (or tied for) the top 8, provided that the placing is within the top half of the tournament, or at least the third round for knockout tournaments.
For tied positions, basic points are calculated as 50% of points for final ranking as determined by tournament's tie-break rules, plus 50% of the sum of basic points assigned for the tied places divided by the number of tied players. If no tie-break rule is applied, basic points are 100% shared equally among all tied players.
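A sketch of the arithmetic of the event-points formula (illustrative only; the example values are made up, "B" is taken here to be the basic points for the placing, "k" follows from the tournament average rating (TAR) as given above, and "w" is treated as a tournament weight factor, an assumption since its precise definition is not restated here):

def circuit_points(basic_points, tar, weight):
    # P = B * k * w with k = (TAR - 2500) / 100; all inputs are assumed example values.
    k = (tar - 2500) / 100
    return basic_points * k * weight

# Hypothetical example: 10 basic points in a tournament with average rating 2750
# and weight 1.0 gives 10 * 2.5 * 1.0 = 25 circuit points.
print(circuit_points(10, 2750, 1.0))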
FIDE World Cup points.
For FIDE World Cup 2023, points are given as above with the following modifications:
Player's total and ranking.
A player's point total for the ranking is the sum of their best 5 tournaments, of which at least 4 events must be played with standard time controls. Players without 5 such events (for example, Leinier Domínguez and Vidit Gujrathi) are not ranked. Tournaments that could be included in player's results are as follows:
Tournaments.
Eligible tournaments as of 30 December 2023.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P = B \\times k \\times w"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "k = (TAR-2500) / 100"
},
{
"math_id": 5,
"text": "w"
}
]
| https://en.wikipedia.org/wiki?curid=73919978 |
73927947 | Samarium(III) molybdate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Samarium(III) molybdate is an inorganic compound, with the chemical formula Sm2(MoO4)3. It is one of the compounds formed by the three elements samarium, molybdenum and oxygen.
Preparation.
Samarium(III) molybdate can be obtained by reacting samarium(III) nitrate and sodium molybdate in the pH range of 5.5–6.0. Its single crystal can be grown at 1085 °C by the Czochralski method.
Samarium(III) molybdate can also be prepared by reacting samarium and molybdenum(VI) oxide:
formula_0
Properties.
Samarium(III) molybdate forms violet crystals of several modifications:
Samarium(III) molybdate exhibits ferroelectric properties. It forms a crystalline hydrate with the composition Sm2(MoO4)3•2H2O.
Samarium(III) molybdate can be reduced to the tetravalent molybdenum compound Sm2Mo3O9 by hydrogen at 500~650 °C.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Sm_2O_3 + 3MoO_3 \\ \\xrightarrow{800^oC}\\ Sm_2(MoO_4)_3 }"
}
]
| https://en.wikipedia.org/wiki?curid=73927947 |
7392872 | Ensemble interpretation | The ensemble interpretation of quantum mechanics considers the quantum state description to apply only to an ensemble of similarly prepared systems, rather than supposing that it exhaustively represents an individual physical system.
The advocates of the ensemble interpretation of quantum mechanics claim that it is minimalist, making the fewest physical assumptions about the meaning of the standard mathematical formalism. It proposes to take to the fullest extent the statistical interpretation of Max Born, for which he won the Nobel Prize in Physics in 1954. On the face of it, the ensemble interpretation might appear to contradict the doctrine proposed by Niels Bohr, that the wave function describes an individual system or particle, not an ensemble, though he accepted Born's statistical interpretation of quantum mechanics. It is not quite clear exactly what kind of ensemble Bohr intended to exclude, since he did not describe probability in terms of ensembles. The ensemble interpretation is sometimes, especially by its proponents, called "the statistical interpretation", but it seems perhaps different from Born's statistical interpretation.
As is the case for "the" Copenhagen interpretation, "the" ensemble interpretation might not be uniquely defined. In one view, the ensemble interpretation may be defined as that advocated by Leslie E. Ballentine, Professor at Simon Fraser University. His interpretation does not attempt to justify, or otherwise derive, or explain quantum mechanics from any deterministic process, or make any other statement about the real nature of quantum phenomena; it intends simply to interpret the wave function. It does not propose to lead to actual results that differ from orthodox interpretations. It makes the statistical operator primary in reading the wave function, deriving the notion of a pure state from that. In the opinion of Ballentine, perhaps the most notable supporter of such an interpretation was Albert Einstein:
<templatestyles src="Template:Blockquote/styles.css" />The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.
History.
In his 1926 paper introducing the concept of quantum scattering theory Max Born proposed to view "the motion of the particle follows the laws of probability, but the probability itself propagates in accord with causal laws", where the causal laws are Schrödinger's equations. As related in his 1954 Nobel Prize in Physics lecture Born viewed the statistical character of quantum mechanics as an empirical observation with philosophical implications.
Einstein maintained consistently that quantum mechanics only supplied a statistical view. In 1936 he wrote "the formula_0 function does not in any way describe a condition which could be that of a single system; it relates rather to many systems, to ‘an ensemble of systems’ in the sense of statistical mechanics." However, Einstein did not provide a detailed study of the ensemble, ultimately because he considered quantum mechanics itself to be incomplete, primarily because it was only an ensemble theory. Einstein believed quantum mechanics was correct in the same sense that thermodynamics is correct, but that it was insufficient as a means of unifying physics.
Also in the years around 1936, Karl Popper published philosophical studies countering the work of Heisenberg and Bohr. Popper considered their work as essentially subjectivist, unfalsifiable, and thus unscientific. He held that the quantum state represented statistical assertions which have no predictive power for individual particles. Popper described "propensities" as the correct notion of probability for quantum mechanics.
Although several other notable physicists championed the ensemble concept, including John C. Slater, Edwin C. Kemble, and Dmitry Blokhintsev, Leslie Ballentine's 1970 paper 'The statistical interpretation of quantum mechanics" and his textbook have become the main sources. Ballentine followed up with axiomatic development of propensity theory, analysis of decoherence in the ensemble interpretation and other papers spanning 40 years.
States, systems, and ensembles.
Perhaps the first expression of an ensemble interpretation was that of Max Born. In a 1968 article, he used the German words 'gleicher Haufen', which are often translated into English, in this context, as 'ensemble' or 'assembly'. The atoms in his assembly were uncoupled, meaning that they were an imaginary set of independent atoms that defines its observable statistical properties. Born did not develop a more detailed specification of ensembles to complete his scattering theory work.
Although Einstein described quantum mechanics as clearly an ensemble theory, he did not present a formal definition of an ensemble. Einstein sought a theory of individual entities, which he argued was not quantum mechanics.
Ballentine distinguishes his particular ensemble interpretation by calling it the Statistical Interpretation.
According to Ballentine, the distinguishing difference between many of the Copenhagen-like interpretations (CI) and the Statistical Interpretation (EI) is the following:
CI: A pure state provides a complete description of an individual system, e.g. an electron.
EI: A pure state describes the statistical properties of an ensemble of identically prepared systems.
Ballentine defines a quantum state as an ensemble of similarly prepared systems. For example, the system may be a single electron; the ensemble is then "the set of all single electrons which are subjected to the same state preparation technique." He uses the example of a low-intensity electron beam prepared with a narrow range of momenta. Each prepared electron is a system, and the ensemble consists of many such systems.
Ballentine emphasizes that the meaning of the "Quantum State" or "State Vector" may be described, essentially, by a one-to-one correspondence to the probability distributions of measurement results, not the individual measurement results themselves. A mixed state is a description only of the probabilities, formula_1 and formula_2 of positions, not a description of actual individual positions. A mixed state is a mixture of probabilities of physical states, not a coherent superposition of physical states.
Probability; propensity.
Quantum observations are inherently statistical. For example, the electrons in a low-intensity double slit experiment arrive at random times and seemingly random places and yet eventually show an interference pattern.
The theory of quantum mechanics offers only statistical results. Given that we have prepared a system in a state formula_0, the theory predicts a result formula_3 as a probability distribution:
formula_4.
Different approaches to probability can be applied to connect the probability distribution of theory to the observed randomness.
Popper, Ballentine, Paul Humphreys, and others point to propensity as the correct interpretation of probability in science. Propensity, a form of causality that is weaker than determinism, is the tendency of a physical system to produce a result. Thus the mathematical statement
formula_5
means that the propensity for event formula_6 to occur given the physical scenario formula_7 is formula_8. The physical scenario is viewed as a weakly causal condition.
The weak causation invalidates Bayes' theorem and correlation is no longer symmetric. As noted by Paul Humphreys, many physical examples show the lack of reciprocal correlation, for example, the propensity for smokers to get lung cancer does not imply lung cancer has a propensity to cause smoking.
Propensity closely matches the application of quantum theory: single event probability can be predicted by theory but only verified by repeated samples in experiment. Popper explicitly developed propensity theory to eliminate subjectivity in quantum mechanics.
Preparative and observing devices as origins of quantum randomness.
An isolated quantum mechanical system, specified by a wave function, evolves in time in a deterministic way according to the Schrödinger equation that is characteristic of the system. Though the wave function can generate probabilities, no randomness or probability is involved in the temporal evolution of the wave function itself. This is agreed, for example, by Born, Dirac, von Neumann, London & Bauer, Messiah, and Feynman & Hibbs. An isolated system is not subject to observation; in quantum theory, this is because observation is an intervention that violates isolation.
The system's initial state is defined by the preparative procedure; this is recognized in the ensemble interpretation, as well as in the Copenhagen approach. The system's state as prepared, however, does not entirely fix all properties of the system. The fixing of properties goes only as far as is physically possible, and is not physically exhaustive; it is, however, physically complete in the sense that no physical procedure can make it more detailed. This is stated clearly by Heisenberg in his 1927 paper. It leaves room for further unspecified properties. For example, if the system is prepared with a definite energy, then the quantum mechanical phase of the wave function is left undetermined by the mode of preparation. The ensemble of prepared systems, in a definite pure state, then consists of a set of individual systems, all having one and the same definite energy, but each having a different quantum mechanical phase, regarded as probabilistically random. The wave function, however, does have a definite phase, and thus specification by a wave function is more detailed than specification by state as prepared. The members of the ensemble are logically distinguishable by their distinct phases, though the phases are not defined by the preparative procedure. The wave function can be multiplied by a complex number of unit magnitude without changing the state as defined by the preparative procedure.
The preparative state, with unspecified phase, leaves room for the several members of the ensemble to interact in respectively several various ways with other systems. An example is when an individual system is passed to an observing device so as to interact with it. Individual systems with various phases are scattered in various respective directions in the analyzing part of the observing device, in a probabilistic way. In each such direction, a detector is placed, in order to complete the observation. When the system hits the analyzing part of the observing device, that scatters it, it ceases to be adequately described by its own wave function in isolation. Instead it interacts with the observing device in ways partly determined by the properties of the observing device. In particular, there is in general no phase coherence between system and observing device. This lack of coherence introduces an element of probabilistic randomness to the system–device interaction. It is this randomness that is described by the probability calculated by the Born rule. There are two independent originative random processes, one that of preparative phase, the other that of the phase of the observing device. The random process that is actually observed, however, is neither of those originative ones. It is the phase difference between them, a single derived random process.
The Born rule describes that derived random process, the observation of a single member of the preparative ensemble. In the ordinary language of classical or Aristotelian scholarship, the preparative ensemble consists of many specimens of a species. The quantum mechanical technical term 'system' refers to a single specimen, a particular object that may be prepared or observed. Such an object, as is generally so for objects, is in a sense a conceptual abstraction, because, according to the Copenhagen approach, it is defined, not in its own right as an actual entity, but by the two macroscopic devices that should prepare and observe it. The random variability of the prepared specimens does not exhaust the randomness of a detected specimen. Further randomness is injected by the quantum randomness of the observing device. It is this further randomness that makes Bohr emphasize that there is randomness in the observation that is not fully described by the randomness of the preparation. This is what Bohr means when he says that the wave function describes "a single system". He is focusing on the phenomenon as a whole, recognizing that the preparative state leaves the phase unfixed, and therefore does not exhaust the properties of the individual system. The phase of the wave function encodes further detail of the properties of the individual system. The interaction with the observing device reveals that further encoded detail. It seems that this point, emphasized by Bohr, is not explicitly recognized by the ensemble interpretation, and this may be what distinguishes the two interpretations. It seems, however, that this point is not explicitly denied by the ensemble interpretation.
Einstein perhaps sometimes seemed to interpret the probabilistic "ensemble" as a preparative ensemble, recognizing that the preparative procedure does not exhaustively fix the properties of the system; therefore he said that the theory is "incomplete". Bohr, however, insisted that the physically important probabilistic "ensemble" was the combined prepared-and-observed one. Bohr expressed this by demanding that an actually observed single fact should be a complete "phenomenon", not a system alone, but always with reference to both the preparing and the observing devices. The Einstein–Podolsky–Rosen criterion of "completeness" is clearly and importantly different from Bohr's. Bohr regarded his concept of "phenomenon" as a major contribution that he offered for quantum theoretical understanding. The decisive randomness comes from both preparation and observation, and may be summarized in a single randomness, that of the phase difference between preparative and observing devices. The distinction between these two devices is an important point of agreement between Copenhagen and ensemble interpretations. Though Ballentine claims that Einstein advocated "the ensemble approach", a detached scholar would not necessarily be convinced by that claim of Ballentine. There is room for confusion about how "the ensemble" might be defined.
"Each photon interferes only with itself".
Niels Bohr famously insisted that the wave function refers to a single individual quantum system. He was expressing the idea that Dirac expressed when he famously wrote: "Each photon then interferes only with itself. Interference between different photons never occurs.". Dirac clarified this by writing: "This, of course, is true only provided the two states that are superposed refer to the same beam of light, "i.e." all that is known about the position and momentum of a photon in either of these states must be the same for each." Bohr wanted to emphasize that a superposition is different from a mixture. He seemed to think that those who spoke of a "statistical interpretation" were not taking that into account. To create, by a superposition experiment, a new and different pure state, from an original pure beam, one can put absorbers and phase-shifters into some of the sub-beams, so as to alter the composition of the re-constituted superposition. But one cannot do so by mixing a fragment of the original unsplit beam with component split sub-beams. That is because one photon cannot both go into the unsplit fragment and go into the split component sub-beams. Bohr felt that talk in statistical terms might hide this fact.
The physics here is that the effect of the randomness contributed by the observing apparatus depends on whether the detector is in the path of a component sub-beam, or in the path of the single superposed beam. This is not explained by the randomness contributed by the preparative device.
Measurement and collapse.
Bras and kets.
The ensemble interpretation is notable for its relative de-emphasis on the duality and theoretical symmetry between bras and kets. The approach emphasizes the ket as signifying a physical preparation procedure. There is little or no expression of the dual role of the bra as signifying a physical observational procedure. The bra is mostly regarded as a mere mathematical object, without very much physical significance. It is the absence of the physical interpretation of the bra that allows the ensemble approach to by-pass the notion of "collapse". Instead, the density operator expresses the observational side of the ensemble interpretation. It hardly needs saying that this account could be expressed in a dual way, with bras and kets interchanged, "mutatis mutandis". In the ensemble approach, the notion of the pure state is conceptually derived by analysis of the density operator, rather than the density operator being conceived as conceptually synthesized from the notion of the pure state.
An attraction of the ensemble interpretation is that it appears to dispense with the metaphysical issues associated with reduction of the state vector, Schrödinger cat states, and other issues related to the concepts of multiple simultaneous states. The ensemble interpretation postulates that the wave function only applies to an ensemble of systems as prepared, but not observed. There is no recognition of the notion that a single specimen system could manifest more than one state at a time, as assumed, for example, by Dirac. Hence, the wave function is not envisaged as being physically required to be "reduced". This can be illustrated by an example:
Consider a quantum die. If this is expressed in Dirac notation, the "state" of the die can be represented by a "wave" function describing the probability of an outcome given by:
formula_9
Where the "+" sign of a probabilistic equation is not an addition operator, it is the standard probabilistic Boolean operator OR. The state vector is inherently defined as a probabilistic mathematical object such that the result of a measurement is one outcome OR another outcome.
It is clear that on each throw, only one of the states will be observed, but this is not expressed by a bra. Consequently, there appears to be no requirement for a notion of collapse of the wave function/reduction of the state vector, or for the die to physically exist in the summed state. In the ensemble interpretation, wave function collapse would make as much sense as saying that the number of children a couple produced, collapsed to 3 from its average value of 2.4.
The state function is not taken to be physically real, or be a literal summation of states. The wave function, is taken to be an abstract statistical function, only applicable to the statistics of repeated preparation procedures. The ket does not directly apply to a single particle detection, but only the statistical results of many. This is why the account does not refer to bras, and mentions only kets.
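As a small numerical illustration of this statistical reading (an illustrative sketch only; the simulation, the seed, and the sample size are arbitrary choices), the Born-rule probabilities of the die state introduced above can be computed from the state vector, and an ensemble of identically prepared throws reproduces them only as relative frequencies:

import numpy as np

rng = np.random.default_rng(seed=1)
psi = np.ones(6) / np.sqrt(6)            # equal amplitudes for outcomes 1..6
probs = np.abs(psi) ** 2                 # Born rule: P(j) = |<j|psi>|^2 = 1/6 each
throws = rng.choice(np.arange(1, 7), size=10000, p=probs)   # one observation per prepared system
freqs = np.bincount(throws, minlength=7)[1:] / len(throws)
print(probs)                             # the predicted distribution
print(freqs)                             # relative frequencies over the ensemble of throws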
Diffraction.
The ensemble approach differs significantly from the Copenhagen approach in its view of diffraction. The Copenhagen interpretation of diffraction, especially in the viewpoint of Niels Bohr, puts weight on the doctrine of wave–particle duality. In this view, a particle that is diffracted by a diffractive object, such as for example a crystal, is regarded as really and physically behaving like a wave, split into components, more or less corresponding to the peaks of intensity in the diffraction pattern. Though Dirac does not speak of wave–particle duality, he does speak of "conflict" between wave and particle conceptions. He indeed does describe a particle, before it is detected, as being somehow simultaneously and jointly or partly present in the several beams into which the original beam is diffracted. So does Feynman, who speaks of this as "mysterious".
The ensemble approach points out that this seems perhaps reasonable for a wave function that describes a single particle, but hardly makes sense for a wave function that describes a system of several particles. The ensemble approach demystifies this situation along the lines advocated by Alfred Landé, accepting Duane's hypothesis. In this view, the particle really and definitely goes into one or other of the beams, according to a probability given by the wave function appropriately interpreted. There is definite quantal transfer of translative momentum between particle and diffractive object. This is recognized also in Heisenberg's 1930 textbook, though usually not recognized as part of the doctrine of the so-called "Copenhagen interpretation". This gives a clear and utterly non-mysterious physical or direct explanation instead of the debated concept of wave function "collapse". It is presented in terms of quantum mechanics by other present day writers also, for example, Van Vliet. For those who prefer physical clarity rather than mysterianism, this is an advantage of the ensemble approach, though it is not the sole property of the ensemble approach. With a few exceptions, this demystification is not recognized or emphasized in many textbooks and journal articles.
Criticism.
David Mermin sees the ensemble interpretation as being motivated by an adherence ("not always acknowledged") to classical principles.
"[...] the notion that probabilistic theories must be about ensembles implicitly assumes that probability is about ignorance. (The 'hidden variables' are whatever it is that we are ignorant of.) But in a non-deterministic world probability has nothing to do with incomplete knowledge, and ought not to require an ensemble of systems for its interpretation".
However, according to Einstein and others, a key motivation for the ensemble interpretation is not about any alleged, implicitly assumed probabilistic ignorance, but the removal of "…unnatural theoretical interpretations…". A specific example being the Schrödinger cat problem, but this concept applies to any system where there is an interpretation that postulates, for example, that an object might exist in two positions at once.
Mermin also emphasises the importance of "describing" single systems, rather than ensembles.
"The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles. Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are."
Schrödinger's cat.
The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. That being the case, the state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial non-issue. However, the application of state vectors to individual systems, rather than ensembles, has been claimed to have explanatory benefits, in areas like single-particle twin-slit experiments and quantum computing (see Schrödinger's cat applications). As an avowedly minimalist approach, the ensemble interpretation does not offer any specific alternative explanation for these phenomena.
The frequentist probability variation.
The claim that the wave function approach fails "to apply" to single particle experiments cannot be taken as a claim that quantum mechanics fails in describing single-particle phenomena. In fact, it gives correct results within the limits of a probabilistic or stochastic theory.
Probability always requires a set of multiple data, and thus single-particle experiments are really part of an ensemble — an ensemble of individual experiments that are performed one after the other over time. In particular, the interference fringes seen in the double-slit experiment require repeated trials to be observed.
The quantum Zeno effect.
Leslie Ballentine promoted the ensemble interpretation in his book "Quantum Mechanics, A Modern Development". In it, he described what he called the "Watched Pot Experiment". His argument was that, under certain circumstances, a repeatedly measured system, such as an unstable nucleus, would be prevented from decaying by the act of measurement itself. He initially presented this as a kind of reductio ad absurdum of wave function collapse.
The effect has been shown to be real. Ballentine later wrote papers claiming that it could be explained without wave function collapse. | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\mathcal{P}(\\chi_1)"
},
{
"math_id": 2,
"text": "\\mathcal{P}(\\chi_2)"
},
{
"math_id": 3,
"text": "a_j"
},
{
"math_id": 4,
"text": "P(a_j|\\psi)=|<a_j|\\psi>|^2"
},
{
"math_id": 5,
"text": "Pr(e|G) = r "
},
{
"math_id": 6,
"text": "e"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "r"
},
{
"math_id": 9,
"text": "| \\psi \\rangle = \\frac {|1\\rangle + |2\\rangle + |3\\rangle + |4\\rangle + |5\\rangle + |6\\rangle} {\\sqrt{6}}"
}
]
| https://en.wikipedia.org/wiki?curid=7392872 |
7392980 | Graduate Texts in Mathematics | Series of mathematics textbooks
Graduate Texts in Mathematics (GTM) is a series of graduate-level textbooks in mathematics published by Springer-Verlag. The books in this series, like the other Springer-Verlag mathematics series, are yellow books of a standard size (with variable numbers of pages). The GTM series is easily identified by a white band at the top of the book.
The books in this series tend to be written at a more advanced level than the similar Undergraduate Texts in Mathematics series, although there is a fair amount of overlap between the two series in terms of material covered and difficulty level.
| [
{
"math_id": 0,
"text": "C^*"
}
]
| https://en.wikipedia.org/wiki?curid=7392980 |
7393195 | Cumulative hierarchy | Family of sets indexed by ordinal numbers
In mathematics, specifically set theory, a cumulative hierarchy is a family of sets formula_0 indexed by ordinals formula_1 such that formula_2 for every ordinal formula_1, and formula_4 for every limit ordinal formula_3.
Some authors additionally require that formula_5 or that formula_6.
The union formula_7 of the sets of a cumulative hierarchy is often used as a model of set theory.
The phrase "the cumulative hierarchy" usually refers to the standard cumulative hierarchy formula_8 of the von Neumann universe with formula_9 introduced by .
Reflection principle.
A cumulative hierarchy satisfies a form of the reflection principle: any formula in the language of set theory that holds in the union formula_10 of the hierarchy also holds in some stages formula_0. | [
{
"math_id": 0,
"text": "W_\\alpha"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "W_\\alpha \\subseteq W_{\\alpha + 1}"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "W_\\lambda = \\bigcup_{\\alpha < \\lambda} W_{\\alpha}"
},
{
"math_id": 5,
"text": "W_{\\alpha + 1} \\subseteq \\mathcal P(W_\\alpha)"
},
{
"math_id": 6,
"text": "W_0 \\ne \\emptyset"
},
{
"math_id": 7,
"text": "W = \\bigcup_{\\alpha \\in \\mathrm{On}} W_\\alpha"
},
{
"math_id": 8,
"text": "\\mathrm{V}_\\alpha"
},
{
"math_id": 9,
"text": "\\mathrm{V}_{\\alpha + 1} = \\mathcal P(W_\\alpha)"
},
{
"math_id": 10,
"text": "W"
},
{
"math_id": 11,
"text": "\\mathrm{L}_\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=7393195 |
73933322 | Goldston-Pintz-Yıldırım sieve | The Goldston-Pintz-Yıldırım sieve (also called GPY sieve or GPY method) is a sieve method and variant of the Selberg sieve with generalized, multidimensional sieve weights. The sieve led to a series of important breakthroughs in analytic number theory.
It is named after the mathematicians Dan Goldston, János Pintz and Cem Yıldırım. They used it in 2005 to show that there are infinitely many prime tuples whose distances are arbitrarily smaller than the average distance that follows from the prime number theorem.
The sieve was then modified by Yitang Zhang in order to prove a finite bound on the smallest gap between two consecutive primes that is attained infinitely often.
Later the sieve was again modified by James Maynard (who lowered the bound to formula_0) and by Terence Tao.
Goldston-Pintz-Yıldırım sieve.
Notation.
Fix a formula_1 and the following notation: formula_2 denotes the set of prime numbers and formula_3 the characteristic function of the primes; formula_4 denotes the von Mangoldt function and formula_5 the number of distinct prime factors of formula_6; formula_7 with formula_8 denotes a set of k distinct non-negative integers; and formula_9 is the function
formula_10
Notice that formula_11.
For such an formula_12 we also define formula_13 and formula_14, and we denote by formula_15 the number of distinct residue classes of formula_12 modulo a prime formula_16; for example formula_17 but formula_18, since formula_19 while formula_20.
If formula_21 for all formula_22, then we call formula_12 admissible.
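For a concrete tuple these quantities can be checked directly. The following Python sketch is only an illustration (the tuples are arbitrary choices); it counts the residue classes occupied modulo a prime and tests admissibility in the standard form used in the literature, namely that the tuple misses at least one residue class modulo every prime, which only needs to be verified for primes up to the size of the tuple.
def nu(H, p):
    """Number of distinct residue classes of the tuple H modulo the prime p."""
    return len({h % p for h in H})

def is_admissible(H):
    """H is admissible if it misses at least one residue class modulo every prime p.
    For p > len(H) this holds automatically, since nu(H, p) <= len(H) < p."""
    k = len(H)
    primes = [p for p in range(2, k + 1) if all(p % q for q in range(2, p))]
    return all(nu(H, p) < p for p in primes)

print(nu((0, 2, 4), 3), nu((0, 2), 3))                  # 3 and 2, as in the examples above
print(is_admissible((0, 2, 4)), is_admissible((0, 2)))  # False (covers all residues mod 3), True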
Construction.
Let formula_7 be admissible and consider the following sifting function
formula_23
where formula_24 is a weight function we derive later.
For each formula_25 this sifting function counts the number of primes of the form formula_26, minus the threshold formula_27; hence if formula_28, then there exists some formula_6 such that at least formula_29 of the numbers in formula_30 are prime.
Since formula_3 has rather poor analytic properties, one chooses instead the following sifting function
formula_31
Since formula_32 and formula_33, we have formula_28 only if there are at least two prime numbers formula_26 and formula_34. Next we have to choose the weight function formula_24 so that we can detect prime k-tuples.
Derivation of the weights.
A candidate for the weight function is the generalized von Mangoldt function
formula_35
which has the following property: if formula_36, then formula_37. This function also detects proper prime power factors, but these can be removed in applications at the cost of a negligible error.
So if formula_30 is a prime k-tuple, then the function
formula_38
will not vanish. The factor formula_39 is just for computational purposes. The (classical) von Mangoldt function can be approximated with the "truncated von Mangoldt function"
formula_40
where formula_41 now no longer stands for the length of formula_12 but for the truncation position. Analogously we approximate formula_42 with
formula_43
For technical purposes we want to detect tuples with primes in several components, rather than only prime tuples, and we introduce another parameter formula_44 so we can choose to allow formula_45 or fewer distinct prime factors. This leads to the final form
formula_46
Without this additional parameter formula_47, one has for a factorization formula_48 the restriction formula_49, but by introducing this parameter one gets the looser restriction formula_50.
So one has a formula_45-dimensional sieve for a formula_51-dimensional sieve problem.
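The weights formula_46 can be evaluated directly for small parameters. The following Python sketch is only an illustration; it sums over the divisors of formula_14 with a naive Möbius function, and the chosen values of n, the tuple, R and the parameter are arbitrary.
from math import log, factorial, prod

def mobius(d):
    """Naive Moebius function by trial division."""
    if d == 1:
        return 1
    result, m, p = 1, d, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:      # a squared prime factor gives mu(d) = 0
                return 0
            result = -result
        p += 1
    if m > 1:                   # a remaining prime factor
        result = -result
    return result

def divisors(n):
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    return sorted(set(small) | {n // d for d in small})

def gpy_weight(n, H, R, ell):
    """The truncated divisor-sum weight Lambda_R(n; H, ell) of the GPY sieve."""
    k = len(H)
    P = prod(n + h for h in H)                        # P_H(n) = (n+h_1)...(n+h_k)
    s = sum(mobius(d) * log(R / d) ** (k + ell)
            for d in divisors(P) if d <= R)
    return s / factorial(k + ell)

print(gpy_weight(11, (0, 2, 6), R=30, ell=1))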
Goldston-Pintz-Yıldırım sieve.
The GPY sieve has the following form
formula_52
with
formula_53.
Proof of the main theorem by Goldston, Pintz and Yıldırım.
Consider formula_54 and formula_55 and formula_56 and define formula_57. In their paper, Goldston, Pintz and Yıldırım proved in two propositions that under suitable conditions two asymptotic formulas of the form
formula_58
and
formula_59
hold, where formula_60 are two constants, formula_61 and formula_62 are two singular series whose description we omit here.
Finally one can apply these results to formula_63 to derive the theorem by Goldston, Pintz and Yıldırım on infinitely many prime tuples whose distances are arbitrarily smaller than the average distance.
| [
{
"math_id": 0,
"text": "600"
},
{
"math_id": 1,
"text": "k\\in \\N"
},
{
"math_id": 2,
"text": "\\mathbb{P}"
},
{
"math_id": 3,
"text": "1_{\\mathbb{P}}(n)"
},
{
"math_id": 4,
"text": "\\Lambda(n)"
},
{
"math_id": 5,
"text": "\\omega(n)"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "\\mathcal{H}=\\{h_1,\\dots,h_k\\}"
},
{
"math_id": 8,
"text": "h_i\\in\\Z_+\\cup \\{0\\}"
},
{
"math_id": 9,
"text": "\\theta(n)"
},
{
"math_id": 10,
"text": "\\theta(n)=\\begin{cases} \\log(n) & \\text{if }n\\in \\mathbb{P}\\\\ 0 & \\text{else.}\\end{cases}"
},
{
"math_id": 11,
"text": "\\theta(n)=\\log((n-1)1_{\\mathbb{P}}(n)+1)"
},
{
"math_id": 12,
"text": "\\mathcal{H}"
},
{
"math_id": 13,
"text": "\\mathcal{H}(n):=(n+h_1,\\dots,n+h_k)"
},
{
"math_id": 14,
"text": "P_{\\mathcal{H}}(n):=(n+h_1)(n+h_2)\\cdots (n+h_k)"
},
{
"math_id": 15,
"text": "\\nu_p(\\mathcal{H})"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "\\nu_3(\\{0,2,4\\})=3"
},
{
"math_id": 18,
"text": "\\nu_3(\\{0,2\\})=2"
},
{
"math_id": 19,
"text": "\\{0,2,4\\}\\stackrel{\\pmod{3}}{=}\\{0,1,2\\}"
},
{
"math_id": 20,
"text": "\\{0,2\\}\\stackrel{\\pmod{3}}{=}\\{0,2\\}"
},
{
"math_id": 21,
"text": "\\nu_p(\\mathcal{H})<k"
},
{
"math_id": 22,
"text": "p\\in \\mathbb{P}"
},
{
"math_id": 23,
"text": "\\mathcal{S}(N,c;\\mathcal{H}):=\\sum\\limits_{n=N+1}^{2N}\\left(\\sum\\limits_{h_i\\in \\mathcal {H}}1_{\\mathbb{P}}(n+h_i)-c\\right)w(n)^2,\\quad w(n)\\in \\R,\\quad c>0,"
},
{
"math_id": 24,
"text": "w(n)"
},
{
"math_id": 25,
"text": "n\\in [N+1,2N]"
},
{
"math_id": 26,
"text": "n+h_i"
},
{
"math_id": 27,
"text": "c"
},
{
"math_id": 28,
"text": "\\mathcal{S}>0"
},
{
"math_id": 29,
"text": "\\lfloor c \\rfloor +1"
},
{
"math_id": 30,
"text": "\\mathcal{H}(n)"
},
{
"math_id": 31,
"text": "\\mathcal{S}(N;\\mathcal{H}):=\\sum\\limits_{n=N+1}^{2N}\\left(\\sum\\limits_{h_i\\in \\mathcal{H}}\\theta(n+h_i)-\\log(3N)\\right)w(n)^2."
},
{
"math_id": 32,
"text": "\\log(N)<\\theta(n+h_i)<\\log(2N)"
},
{
"math_id": 33,
"text": "c=\\log(3n)"
},
{
"math_id": 34,
"text": "n+h_j"
},
{
"math_id": 35,
"text": "\\Lambda_k(n)=\\sum\\limits_{d\\mid n}\\mu(d)\\left(\\log\\left(\\frac{n}{d}\\right)\\right)^k,"
},
{
"math_id": 36,
"text": "\\omega(n)>k"
},
{
"math_id": 37,
"text": "\\Lambda_k(n)=0"
},
{
"math_id": 38,
"text": "\\Lambda_k(n;\\mathcal{H})=\\frac{1}{k!}\\Lambda_k(P_{\\mathcal{H}}(n))"
},
{
"math_id": 39,
"text": "1/k!"
},
{
"math_id": 40,
"text": "\\Lambda(n)\\approx \\Lambda_R(n):=\\sum\\limits_{\\begin{array}{c} d\\mid n\\\\ d\\leq R \\end{array}}\\mu(d)\\log\\left(\\frac{R}{d}\\right),"
},
{
"math_id": 41,
"text": "R"
},
{
"math_id": 42,
"text": "\\Lambda_k(n;\\mathcal{H})"
},
{
"math_id": 43,
"text": "\\Lambda_R(n;\\mathcal{H})=\\frac{1}{k!}\\sum\\limits_{\\begin{array}{c} d\\mid P_{\\mathcal{H}}(n)\\\\ d\\leq R \\end{array}}\\mu(d)\\left(\\log\\left(\\frac{R}{d}\\right)\\right)^k"
},
{
"math_id": 44,
"text": "0\\leq \\ell \\leq k"
},
{
"math_id": 45,
"text": "k+\\ell"
},
{
"math_id": 46,
"text": "\\Lambda_R(n;\\mathcal{H},\\ell)=\\frac{1}{(k+\\ell)!}\\sum\\limits_{\\begin{array}{c} d\\mid P_{\\mathcal{H}}(n)\\\\ d\\leq R \\end{array}}\\mu(d)\\left(\\log\\left(\\frac{R}{d}\\right)\\right)^{k+\\ell}"
},
{
"math_id": 47,
"text": "\\ell"
},
{
"math_id": 48,
"text": "d=d_1d_2\\cdots d_k"
},
{
"math_id": 49,
"text": "d_1\\leq R, d_2\\leq R, \\dots ,d_k\\leq R"
},
{
"math_id": 50,
"text": "d_1d_2\\dots d_k\\leq R"
},
{
"math_id": 51,
"text": "k"
},
{
"math_id": 52,
"text": "\\mathcal{S}(N;\\mathcal{H},\\ell):=\\sum\\limits_{n=N+1}^{2N}\\left(\\sum\\limits_{h_i\\in \\mathcal{H}}\\theta(n+h_i)-\\log(3N)\\right)\\Lambda_R(n;\\mathcal{H},\\ell)^2,\\qquad |\\mathcal{H}|=k"
},
{
"math_id": 53,
"text": "\\Lambda_R(n;\\mathcal{H},\\ell)=\\frac{1}{(k+\\ell)!}\\sum\\limits_{\\begin{array}{c} d\\mid P_{\\mathcal{H}}(n)\\\\ d\\leq R \\end{array}}\\mu(d)\\left(\\log\\left(\\frac{R}{d}\\right)\\right)^{k+\\ell},\\quad 0\\leq \\ell\\leq k"
},
{
"math_id": 54,
"text": "(\\mathcal{H}_1,\\ell_1, k_1)"
},
{
"math_id": 55,
"text": "(\\mathcal{H}_2,\\ell_2, k_2)"
},
{
"math_id": 56,
"text": "1\\leq h_0\\leq R"
},
{
"math_id": 57,
"text": "M:=k_1+k_2+\\ell_1+\\ell_2"
},
{
"math_id": 58,
"text": "\\sum\\limits_{n\\leq N}\\Lambda_R(n;\\mathcal{H}_1,\\ell_1)\\Lambda_R(n;\\mathcal{H}_2,\\ell_2) = C_1\\left(\\mathcal{S}(\\mathcal{H}^{i})+o_M(1)\\right)N"
},
{
"math_id": 59,
"text": "\\sum\\limits_{n\\leq N}\\Lambda_R(n;\\mathcal{H}_1,\\ell_1)\\Lambda_R(n;\\mathcal{H}_2,\\ell_2)\\theta(n+h_0)\n= C_2\\left(\\mathcal{S}(\\mathcal{H}^j)+o_M(1)\\right)N"
},
{
"math_id": 60,
"text": "C_1,C_2"
},
{
"math_id": 61,
"text": "\\mathcal{S}(\\mathcal{H}^{i})"
},
{
"math_id": 62,
"text": "\\mathcal{S}(\\mathcal{H}^{j})"
},
{
"math_id": 63,
"text": "\\mathcal{S}"
}
]
| https://en.wikipedia.org/wiki?curid=73933322 |
73934690 | Airy process | The Airy processes are a family of stationary stochastic processes that appear as limit processes in the theory of random growth models and random matrix theory. They are conjectured to be universal limits describing the long time, large scale spatial fluctuations of the models in the (1+1)-dimensional KPZ universality class (Kardar–Parisi–Zhang equation) for many initial conditions (see also KPZ fixed point).
The original process Airy2 was introduced in 2002 by the mathematicians Michael Prähofer and Herbert Spohn. They proved that the height function of a model from the (1+1)-dimensional KPZ universality class - the PNG droplet - converges under suitable scaling and initial condition to the Airy2 process and that it is a stationary process with almost surely continuous sample paths.
The Airy process is named after the Airy function. The process can be defined through its finite-dimensional distribution with a Fredholm determinant and the so-called extended Airy kernel. It turns out that the one-point marginal distribution of the Airy2 process is the Tracy-Widom distribution of the GUE.
There are several Airy processes. The Airy1 process was introduced by Tomohiro Sasamoto, and the one-point marginal distribution of the Airy1 process is a scaled version of the Tracy-Widom distribution of the GOE. Another Airy process is the Airystat process.
Airy2 process.
Let formula_0 be in formula_1.
The Airy2 process formula_2 has the following finite-dimensional distribution
formula_3
where
formula_4
and formula_5 is the "extended Airy kernel"
formula_6
The one-point marginal distribution of the Airy2 process is given by
formula_8
where formula_9 is the Tracy-Widom distribution of the GUE.
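The extended Airy kernel can be evaluated numerically from the definition above. The following Python sketch is only an illustration; it uses SciPy's Airy function and numerical quadrature, and the arguments passed at the end are arbitrary choices.
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def Ai(x):
    return airy(x)[0]                     # scipy's airy returns (Ai, Ai', Bi, Bi')

def extended_airy_kernel(ti, x, tj, y):
    """Extended Airy kernel, following the two cases of the definition above."""
    integrand = lambda z: np.exp(-z * (ti - tj)) * Ai(x + z) * Ai(y + z)
    if ti >= tj:
        return quad(integrand, 0, np.inf)[0]
    return -quad(integrand, -np.inf, 0)[0]

# For equal times the kernel reduces to the classical (time-independent) Airy kernel.
print(extended_airy_kernel(0.0, 0.1, 0.0, 0.2))
print(extended_airy_kernel(0.5, 0.1, 1.0, 0.2))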
| [
{
"math_id": 0,
"text": "t_1<t_2<\\dots <t_n"
},
{
"math_id": 1,
"text": "\\R"
},
{
"math_id": 2,
"text": "A_2(t)"
},
{
"math_id": 3,
"text": "P(A_2(t_{1})<\\xi_1,\\dots,A_2(t_{n})<\\xi_n)=\\det(1-f^{1/2}K^{\\operatorname{ext}}_{\\operatorname{Ai}}f^{1/2})_{L^2(\\{t_1,\\dots,t_n\\}\\times \\R)}"
},
{
"math_id": 4,
"text": "f(t_j,\\xi):=1_{\\{(\\xi_j,\\infty)\\}}(\\xi)"
},
{
"math_id": 5,
"text": "K^{\\operatorname{ext}}_{\\operatorname{Ai}}(t_i,x;t_j,y)"
},
{
"math_id": 6,
"text": "K^{\\operatorname{ext}}_{\\operatorname{Ai}}(t_i,x;t_j,y):=\\begin{cases}{\\displaystyle \\int_0^\\infty e^{-z(t_i-t_j )}\\operatorname{Ai}(x+z)\\operatorname{Ai}(y+z)\\mathrm{d}z}& \\text{if }\\;t_i\\geq t_j,\\\\\n{\\displaystyle -\\int_{-\\infty}^0 e^{-z(t_i-t_j)}\\operatorname{Ai}(x+z)\\operatorname{Ai}(y+z)\\mathrm{d}z}&\\text{if }\\;t_i< t_j.\\end{cases}"
},
{
"math_id": 7,
"text": "t_i=t_j"
},
{
"math_id": 8,
"text": "P(A_2(t)\\leq \\xi)=F_{2}(\\xi),"
},
{
"math_id": 9,
"text": "F_{2}(\\xi)"
},
{
"math_id": 10,
"text": "f^{1/2}K^{\\operatorname{ext}}_{\\operatorname{Ai}}f^{1/2}"
},
{
"math_id": 11,
"text": "L^2(\\{t_1,\\dots,t_n\\}\\times \\R)"
},
{
"math_id": 12,
"text": "\\{t_1,\\dots,t_n\\}"
},
{
"math_id": 13,
"text": "f^{1/2}K^{\\operatorname{ext}}_{\\operatorname{Ai}}(t_i,x;t_j,y)f^{1/2}"
}
]
| https://en.wikipedia.org/wiki?curid=73934690 |
73944094 | Compressed cover tree | Tree data structure
The compressed cover tree is a type of data structure in computer science that is specifically designed to facilitate the speed-up of a k-nearest neighbors algorithm in finite metric spaces. The compressed cover tree is a simplified version of the explicit representation of the cover tree, motivated by past issues in the proofs of time complexity results for the cover tree.
The compressed cover tree was designed to achieve the claimed time complexities of the cover tree in a mathematically rigorous way.
Problem statement.
In the modern formulation, the k-nearest neighbor problem is to find all formula_0 nearest neighbors in a given reference set R for all points from another given query set Q. Both sets belong to a common ambient space X with a distance metric d satisfying all metric axioms.
Definitions.
Compressed cover tree.
Let (R,d) be a finite metric space. A compressed cover tree formula_1 has the vertex set R with a root formula_2 and a level function formula_3 satisfying the conditions below:
(Root condition) the level of the root satisfies formula_4.
(Covering condition) every node formula_5 has a parent node p such that formula_6 and formula_7.
(Separation condition) for every formula_8 the cover set formula_9 satisfies formula_10.
Expansion constants.
In a metric space, let formula_11 be the closed ball with a center p and a radius formula_12.
The notation formula_13 denotes the number (if finite) of points in the closed ball.
The "expansion constant" formula_14 is the smallest formula_15 such that formula_16 for any point formula_17 and formula_12.
the new "minimized expansion constant" formula_18 is a discrete analog of the doubling dimension Navigating nets formula_19, where A is a locally finite set which covers R.
Note that formula_20 for any finite metric space (R,d).
Aspect ratio.
For any finite set R with a metric d, the "diameter" is formula_21. The "aspect ratio" is formula_22, where formula_23 is the shortest distance between points of R.
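For a concrete finite point set these quantities can be computed by brute force. The following Python sketch is only an illustration (the random planar point set is an arbitrary choice); it computes the expansion constant by testing all radii at which the ball sizes can change, and then the diameter and the aspect ratio.
import itertools, math, random

random.seed(0)
R = [(random.random(), random.random()) for _ in range(30)]     # a small random point set

def ball_size(center, t):
    """Number of points of R in the closed ball of radius t around center."""
    return sum(1 for q in R if math.dist(center, q) <= t)

# Ball sizes only change at pairwise distances (and their halves, for the doubled radius),
# so it suffices to test these radii when maximising |B(p,2t)| / |B(p,t)|.
distances = [math.dist(p, q) for p, q in itertools.combinations(R, 2)]
radii = sorted({r for d in distances for r in (d, d / 2)})

expansion = 2.0
for p in R:
    for t in radii:
        expansion = max(expansion, ball_size(p, 2 * t) / ball_size(p, t))

diameter, d_min = max(distances), min(distances)
print("expansion constant:", expansion)
print("aspect ratio:", diameter / d_min)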
Complexity.
Insert.
Although cover trees provide faster searches than the naive approach, this advantage must be weighed against the additional cost of maintaining the data structure. In a naive approach adding a new point to the dataset is trivial because order does not need to be preserved, but in a compressed cover tree the time to insert a single point can be bounded by formula_24, or alternatively by formula_25 in terms of the minimized expansion constant and the aspect ratio.
K-nearest neighborhood search.
Let Q and R be finite subsets of a metric space (X,d). Once all points of R are inserted into a compressed cover tree
formula_26 it can be used for find-queries of the query point set Q.
The following time complexities have been proven for finding the k-nearest neighbor of a query point formula_27
in the reference set R:
Space.
The compressed cover tree constructed on finite metric space R requires O(|R|) space, during the construction and during the execution of the Find algorithm.
Compared to other similar data structures.
Using doubling dimension as hidden factor.
Tables below show time complexity estimates which use minimized expansion constant formula_31 or dimensionality constant formula_32 related to doubling dimension. Note that formula_33 denotes the aspect ratio.
Results for exact k-nearest neighbors of one query point formula_34 in reference set R assuming that all data structures are already built. Below we denote the distance between a query point q and the reference set R as formula_35 and distance from a query point q to its k-nearest neighbor in set R as formula_36:
Using expansion constant as hidden factor.
Tables below show time complexity estimates which use formula_37 or KR-type constant formula_38 as a hidden factor. Note that the dimensionality factor formula_38 is equivalent to formula_39
Results for exact k-nearest neighbors of one query point formula_40 assuming that all data structures are already built.
| [
{
"math_id": 0,
"text": " k\\geq 1 "
},
{
"math_id": 1,
"text": "\\mathcal{T}(R)"
},
{
"math_id": 2,
"text": "r \\in R "
},
{
"math_id": 3,
"text": "l:R \\rightarrow \\mathbb{Z} "
},
{
"math_id": 4,
"text": "l(r) \\geq 1 + \\max\\limits_{p \\in R \\setminus \\{r\\}}l(p)"
},
{
"math_id": 5,
"text": " q \\in R\\setminus \\{r\\} "
},
{
"math_id": 6,
"text": " d(q,p) \\leq 2^{l(q)+1} "
},
{
"math_id": 7,
"text": " l(q) < l(p) "
},
{
"math_id": 8,
"text": " i \\in \\Z "
},
{
"math_id": 9,
"text": " C_i = \\{p \\in R \\mid l(p) \\geq i\\} "
},
{
"math_id": 10,
"text": " d_{\\min}(C_i) = \\min\\limits_{p \\in C_{i}}\\min\\limits_{q \\in C_{i}\\setminus \\{p\\}} d(p,q) > 2^{i} "
},
{
"math_id": 11,
"text": " \\bar B(p,t) "
},
{
"math_id": 12,
"text": " t\\geq 0 "
},
{
"math_id": 13,
"text": "|\\bar B(p,t)|"
},
{
"math_id": 14,
"text": " c(R) "
},
{
"math_id": 15,
"text": " c(R)\\geq 2 "
},
{
"math_id": 16,
"text": "|\\bar{B}(p,2t)|\\leq c(R) \\cdot |\\bar{B}(p,t)| "
},
{
"math_id": 17,
"text": " p\\in R "
},
{
"math_id": 18,
"text": "c_m "
},
{
"math_id": 19,
"text": " c_m(R) = \\lim\\limits_{\\xi \\rightarrow 0^{+}}\\inf\\limits_{R\\subseteq A\\subseteq X}\\sup\\limits_{p \\in A,t > \\xi}\\dfrac{|\\bar{B}(p,2t) \\cap A|}{|\\bar{B}(p,t) \\cap A|} "
},
{
"math_id": 20,
"text": " c_m(R) \\leq c(R) "
},
{
"math_id": 21,
"text": " \\mathrm{diam}(R) = \\max_{p \\in R}\\max_{q \\in R}d(p,q) "
},
{
"math_id": 22,
"text": " \\Delta(R) = \\dfrac{\\mathrm{diam}(R)}{d_{\\min}(R)} "
},
{
"math_id": 23,
"text": " d_{\\min}(R) "
},
{
"math_id": 24,
"text": " O(c(R)^{10} \\cdot \\log|R|) "
},
{
"math_id": 25,
"text": " O(c_m(R)^{8} \\cdot \\log\\Delta(|R|)) "
},
{
"math_id": 26,
"text": "\\mathcal{T}(R) "
},
{
"math_id": 27,
"text": " q \\in Q "
},
{
"math_id": 28,
"text": " O\\Big ( c(R \\cup \\{q\\})^2 \\cdot \\log_2(k) \\cdot \\big((c_m(R))^{10} \\cdot \\log_2(|R|) + c(R \\cup \\{q\\}) \\cdot k\\big) \\Big). "
},
{
"math_id": 29,
"text": " O\\Big ((c_m(R))^{10} \\cdot \\log_2(k) \\cdot \\log_2(\\Delta(R)) + |\\bar{B}(q, 5d_k(q,R))| \\cdot \\log_2(k) \\Big ) "
},
{
"math_id": 30,
"text": " |\\bar{B}(q, 5d_k(q,R))| "
},
{
"math_id": 31,
"text": " c_m(R) "
},
{
"math_id": 32,
"text": "2^{\\text{dim}}"
},
{
"math_id": 33,
"text": " \\Delta "
},
{
"math_id": 34,
"text": "q \\in Q"
},
{
"math_id": 35,
"text": " d(q,R) "
},
{
"math_id": 36,
"text": " d_k(q,R) "
},
{
"math_id": 37,
"text": "c(R)"
},
{
"math_id": 38,
"text": "2^{\\text{dim}_{KR}}"
},
{
"math_id": 39,
"text": "c(R)^{O(1)}"
},
{
"math_id": 40,
"text": "q \\in X"
}
]
| https://en.wikipedia.org/wiki?curid=73944094 |
73952084 | Simon Connell | Particle physicist in South Africa
Simon Henry Connell is a professor of physics at the University of Johannesburg in South Africa. He is an engineering physicist, a Founding Member of the South African participation in the ATLAS Experiment at CERN, and the Chair of the African Light Source (AfLS) Foundation.
Career.
Simon Henry Connell obtained his Bachelor's degree and PhD (1985 - 1989) in Physics from the University of the Witwatersrand. He continued to work at the University of the Witwatersrand until 2008, when he moved to the University of Johannesburg. He is a professor of physics at the University of Johannesburg. He is an engineering physicist who previously worked extensively at the European Synchrotron Radiation Facility (ESRF). He is affiliated with the Faculty of Engineering and the Built Environment in the Department of Mechanical Engineering Science. Connell is a Founding Member of the South African participation in the ATLAS Experiment at CERN. Additionally, he served as the President of the South African Institute of Physics.
Research and projects.
According to the South African National Research Foundation, he is highly regarded and acknowledged internationally for his accomplishments. As of June 2023, he has an h-index of 141 on Google Scholar. Connell has research interests in various areas including Particle Physics, Nuclear Physics, Quantum Physics, High-Performance Computing, and Applied Nuclear Physics.
Furthermore, he has been involved in engineering and technical activities related to the Beyond Standard Model search at CERN, which is focused on High Energy Physics. Together with his group, he is involved in the search for particles related to dark matter, presenting two potential dark vector boson candidates. Their primary objective is to identify additional candidates that could lead to a groundbreaking discovery or alternatively explain these events as background processes. His research also focuses on the development of a gamma ray laser using a specially fabricated diamond superlattice as a crystalline undulator as part of the EU-PEARL.
As the leader of the Mining Positron Emission Technology (MinPET) Research Group, Connell has successfully demonstrated the ability to detect diamonds within kimberlite at a statistically significant level. Moreover, he has utilised high-rate, high-sensitivity detectors developed for this project to investigate fluid-flow in hydro-cyclones. Connell is also engaged in an inter-departmental collaboration to build the national case for South African Advanced High-Temperature Gas Cooled Reactors. His particular interest lies in combining Monte Carlo and other methods with advanced computing solutions to model neutrons in the nuclear reactor core.
African Light Source.
Connell is actively involved in the African Light Source (AfLS) project as the AfLS Foundation's Chair. Connell has contributed to the development and promotion of the AfLS, advocating for the establishment of this facility in Africa. He has co-authored papers and articles discussing the importance and potential impact of the African Light Source. In addition, Simon Connell has given presentations and talks about the African Light Source project. The African Light Source is an initiative aimed at establishing Africa's first synchrotron light source, a particle accelerator that produces intense radiation used for studying the structure and behavior of matter. The project aims to bridge the gap in synchrotron light source capabilities on the continent, as Africa currently lacks such a facility. By establishing the African Light Source, African scientists would have access to a powerful tool for conducting cutting-edge research in various scientific disciplines. The project has gained momentum and support from the scientific community.
Awards and honours.
Connell was elected a Fellow of the Royal Society of South Africa in 2006, a Member of the Academy of Science of South Africa, and a Fellow of the African Academy of Sciences in 2018. Connell received the British Association Medal (Silver) from the South African Association for the Advancement of Science (formula_0) in 1994, and the National Science and Technology Forum (NSTF)'s Awards for "Innovation and Research and/or Development: Corporate Organisation" in 2022 for leading the MinPET project. | [
{
"math_id": 0,
"text": "S_2A_3"
}
]
| https://en.wikipedia.org/wiki?curid=73952084 |
739588 | Adaptive system | System that can adapt to the environment
An adaptive system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole that together are able to respond to environmental changes or changes in the interacting parts, in a way analogous to either continuous physiological homeostasis or evolutionary adaptation in biology. Feedback loops represent a key feature of adaptive systems, such as ecosystems and individual organisms; or in the human world, communities, organizations, and families. Adaptive systems can be organized into a hierarchy.
Artificial adaptive systems include robots with control systems that utilize negative feedback to maintain desired states.
The law of adaptation.
The law of adaptation may be stated informally as:
<templatestyles src="Template:Blockquote/styles.css" />Every adaptive system converges to a state in which all kind of stimulation ceases.
Formally, the law can be defined as follows:
Given a system formula_0, we say that a physical event formula_1 is a stimulus for the system formula_0 if and only if the probability formula_2 that the system suffers a change or is perturbed (in its elements or in its processes) when the event formula_1 occurs is strictly greater than the prior probability that formula_0 suffers a change independently of formula_1:
formula_3
"Let formula_0 be an arbitrary system subject to changes in time formula_4 and let formula_1 be an arbitrary event that is a stimulus for the system formula_0: we say that formula_0 is an adaptive system if and only if when t tends to infinity formula_5 the probability that the system formula_0 change its behavior formula_6 in a time step formula_7 given the event formula_1 is equal to the probability that the system change its behavior independently of the occurrence of the event formula_1. In mathematical terms:"
Thus, for each instant formula_4 there will exist a temporal interval formula_10 such that:
formula_11
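The law can be illustrated with a toy simulation. The following Python sketch is a hypothetical example (the rates, thresholds and step counts are arbitrary choices): the system habituates to a repeated stimulus, so the estimated probability of a change given the stimulus approaches the baseline probability of a spontaneous change.
import random

random.seed(1)

steps = 200_000
threshold = 0.0                # sensitivity of the system to the stimulus
baseline_rate = 0.01           # probability of a spontaneous change, independent of the stimulus
counts = {"early": [0, 0], "late": [0, 0]}   # [changes given stimulus, stimulus occurrences]

for t in range(steps):
    stimulus = random.random() < 0.5
    changed = random.random() < baseline_rate             # spontaneous change
    if stimulus and random.random() > threshold:           # stimulus-driven change
        changed = True
        threshold = min(1.0, threshold + 0.05)             # adaptation: the system habituates
    if stimulus:
        phase = "early" if t < 100 else "late" if t >= steps // 2 else None
        if phase:
            counts[phase][0] += changed
            counts[phase][1] += 1

for phase, (changes, occurrences) in counts.items():
    print(phase, "estimated P(change | stimulus):", changes / occurrences)
# In the late phase the conditional probability is close to the baseline rate:
# the stimulus no longer produces changes, as the law of adaptation describes.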
Benefit of self-adjusting systems.
In an adaptive system, a parameter changes slowly and has no preferred value. In a self-adjusting system though, the parameter value "depends on the history of the system dynamics". One of the most important qualities of "self-adjusting systems" is their “adaptation to the edge of chaos” or ability to avoid chaos. Practically speaking, by heading to the edge of chaos without going further, a leader may act spontaneously yet without disaster. A March/April 2009 Complexity article further explains the self-adjusting systems used and the realistic implications. Physicists have shown that adaptation to the edge of chaos occurs in almost all systems with feedback.
Hierarchy of adaptations: Practopoiesis.
How do various types of adaptations interact in a living system? Practopoiesis, a term due to its originator Danko Nikolić, refers to a hierarchy of adaptation mechanisms answering this question. The adaptive hierarchy forms a kind of self-adjusting system in which autopoiesis of the entire "organism" or a "cell" occurs through a hierarchy of allopoietic interactions among "components". This is possible because the components are organized into a poietic hierarchy: adaptive actions of one component result in creation of another component. The theory proposes that living systems exhibit a hierarchy of a total of four such adaptive poietic operations:
"evolution" (i) → "gene expression" (ii) → "non gene-involving homeostatic mechanisms (anapoiesis)" (iii) → "final cell function" (iv)
As the hierarchy evolves towards higher levels of organization, the speed of adaptation increases. Evolution is the slowest; gene expression is faster; and so on. The final cell function is the fastest. Ultimately, practopoiesis challenges current neuroscience doctrine by asserting that mental operations primarily occur at the homeostatic, anapoietic level (iii) — i.e., that minds and thought emerge from fast homeostatic mechanisms poietically controlling the cell function. This contrasts with the widespread assumption that thinking is synonymous with computations executed at the level of neural activity (i.e., with the 'final cell function' at level iv).
Sharov proposed that only Eukaryote cells can achieve all four levels of organization.
Each slower level contains knowledge that is more general than the faster level; for example, genes contain more general knowledge than anapoietic mechanisms, which in turn contain more general knowledge than cell functions. This hierarchy of knowledge enables the anapoietic level to implement concepts, which are the most fundamental ingredients of a mind. Activation of concepts through anapoiesis is suggested to underlie ideasthesia. Practopoiesis also has implications for understanding the limitations of Deep Learning.
Empirical tests of practopoiesis require learning on double-loop tasks: One needs to assess how the learning capability adapts over time, i.e., how the system learns to learn (adapts its adapting skills).
It has been proposed that anapoiesis is implemented in the brain by metabotropic receptors and G protein-gated ion channels. These membrane proteins are suggested to transiently select subnetworks and, by doing so, give rise to cognition.
| [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "P(S \\rightarrow S'|E)"
},
{
"math_id": 3,
"text": "P(S \\rightarrow S'|E)>P(S \\rightarrow S') "
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "(t\\rightarrow \\infty)"
},
{
"math_id": 6,
"text": "(S\\rightarrow S')"
},
{
"math_id": 7,
"text": "t_0"
},
{
"math_id": 8,
"text": " P_{t_0}(S\\rightarrow S'|E) > P_{t_0}(S\\rightarrow S') > 0 "
},
{
"math_id": 9,
"text": " \\lim_{t\\rightarrow \\infty} P_t(S\\rightarrow S' | E) = P_t(S\\rightarrow S')"
},
{
"math_id": 10,
"text": "h"
},
{
"math_id": 11,
"text": " P_{t+h}(S\\rightarrow S' | E) - P_{t+h}(S\\rightarrow S') < P_t(S\\rightarrow S' | E) - P_t(S\\rightarrow S')"
}
]
| https://en.wikipedia.org/wiki?curid=739588 |
7396128 | Asset turnover | Financial ratio representing how efficiently a company uses its assets to generate revenue
In finance, asset turnover (ATO), total asset turnover, or asset turns is a financial ratio that measures the efficiency of a company's use of its assets in generating sales revenue or sales income to the company. Asset turnover is considered to be a "profitability ratio", which is a group of financial ratios that measure how efficiently a company uses assets. Asset turnover can be furthered subdivided into fixed asset turnover, which measures a company's use of its fixed assets to generate revenue, and working capital turnover, which measures a company's use of its working capital (current assets minus liabilities) to generate revenue. Total asset turnover ratios can be used to calculate return on equity (ROE) figures as part of DuPont analysis. As a financial and activity ratio, and as part of DuPont analysis, asset turnover is a part of company fundamental analysis.
Companies with low profit margins tend to have high asset turnover, while those with high profit margins have low asset turnover. Companies in the retail industry tend to have a very high turnover ratio, due mainly to cutthroat and competitive pricing.
formula_0
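For example, with hypothetical figures the calculation looks as follows; the following Python sketch simply applies the formula above.
def asset_turnover(net_sales, assets_beginning, assets_end):
    """Total asset turnover = net sales revenue / average total assets."""
    average_total_assets = (assets_beginning + assets_end) / 2
    return net_sales / average_total_assets

# Hypothetical figures: 1,000,000 of net sales, with total assets of 700,000 at the
# beginning of the period and 900,000 at the end (average 800,000).
print(asset_turnover(1_000_000, 700_000, 900_000))    # 1.25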
| [
{
"math_id": 0,
"text": "\\text{ATO} = \\frac{\\text{Net Sales Revenue}}{\\text{Average Total Assets}}"
}
]
| https://en.wikipedia.org/wiki?curid=7396128 |
73964350 | Plancherel–Rotach asymptotics | Asymptotic values of Hermite or Laguerre polynomials
The Plancherel–Rotach asymptotics are asymptotic results for orthogonal polynomials. They are named after the Swiss mathematicians Michel Plancherel and his PhD student Walter Rotach, who first derived the asymptotics for the Hermite polynomial and Laguerre polynomial. Nowadays asymptotic expansions of this kind for orthogonal polynomials are referred to as "Plancherel–Rotach asymptotics" or of "Plancherel–Rotach type".
The case for the associated Laguerre polynomial was derived by the Swiss mathematician Egon Möcklin, another PhD student of Plancherel and George Pólya at ETH Zurich.
Hermite polynomials.
Let formula_0 denote the n-th Hermite polynomial. Let formula_1 and formula_2 be positive and fixed, then
for formula_3 and formula_4 (in the oscillatory region)
formula_5
for formula_6 and formula_7 (outside the oscillatory region)
formula_8
for formula_9 with formula_10 complex and bounded (in the transition region)
formula_11
where formula_12 denotes the Airy function.
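The approximation in the oscillatory region can be checked numerically. The following Python sketch is only an illustration (the degree n = 30 and the angles are arbitrary choices); it compares the exact left-hand side with the leading term of the right-hand side of the first formula above.
import math
from scipy.special import eval_hermite

def lhs(n, phi):
    x = math.sqrt(2 * n + 1) * math.cos(phi)
    return math.exp(-x * x / 2) * eval_hermite(n, x)

def rhs(n, phi):
    """Leading term of the Plancherel-Rotach approximation in the oscillatory region."""
    amplitude = (2 ** (n / 2 + 0.25) * math.sqrt(math.factorial(n))
                 / ((math.pi * n) ** 0.25 * math.sqrt(math.sin(phi))))
    phase = (n / 2 + 0.25) * (math.sin(2 * phi) - 2 * phi) + 3 * math.pi / 4
    return amplitude * math.sin(phase)

for phi in (0.5, 1.0, 2.0):
    print(phi, lhs(30, phi), rhs(30, phi))     # agreement up to the O(1/n) correction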
(Associated) Laguerre polynomials.
Let formula_13 denote the n-th associated Laguerre polynomial. Let formula_14 be arbitrary and real, formula_1 and formula_2 be positive and fixed, then
for formula_15 and formula_16 (in the oscillatory region)
formula_17
for formula_18 and formula_19 (outside the oscillatory region)
formula_20
for formula_21 with formula_10 complex and bounded (in the transition region)
formula_22. | [
{
"math_id": 0,
"text": "H_n(x)"
},
{
"math_id": 1,
"text": "\\epsilon"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "x =(2n+1)^{1/2}\\cos \\varphi"
},
{
"math_id": 4,
"text": " \\epsilon \\leq \\varphi \\leq \\pi -\\epsilon"
},
{
"math_id": 5,
"text": "\ne^{-x^2/2}H_n(x)\n=2^{n/2+1/4}(n!)^{1/2}(\\pi n)^{-1/4}(\\sin \\varphi)^{-1/2}\n\\bigg\\{\\sin\\left[\\left(\\tfrac{n}{2}+\\tfrac{1}{4}\\right)(\\sin 2\\varphi-2\\varphi)+3\\tfrac{ \\pi}{4}\\right]+\\mathcal{O}(n^{-1})\\bigg\\}\n"
},
{
"math_id": 6,
"text": "x =(2n+1)^{1/2}\\cosh \\varphi"
},
{
"math_id": 7,
"text": "\\epsilon \\leq \\varphi \\leq \\omega"
},
{
"math_id": 8,
"text": "\ne^{-x^2/2}H_n(x)\n=2^{n/2-3/4}(n!)^{1/2}(\\pi n)^{-1/4}(\\sinh \\varphi)^{-1/2}\n\\exp\\left[\\left(\\tfrac{n}{2}+\\tfrac{1}{4}\\right)(2\\varphi-\\sinh 2\\varphi)\\right]\n\\big\\{1+\\mathcal{O}(n^{-1})\\big\\}\n"
},
{
"math_id": 9,
"text": "x =(2n+1)^{1/2}-2^{-1/2}3^{-1/3}n^{-1/6}t"
},
{
"math_id": 10,
"text": "t"
},
{
"math_id": 11,
"text": "e^{-x^2/2}H_n(x)\n=3^{1/3}\\pi^{-3/4}2^{n/2+1/4}(n!)^{1/2}n^{-1/12}\n\\bigg\\{\\operatorname{Ai}(t)+\\mathcal{O}\\left(n^{-{2/3}}\\right)\\bigg\\}"
},
{
"math_id": 12,
"text": "\\operatorname{Ai}"
},
{
"math_id": 13,
"text": "L^{(\\alpha )}_n(x)"
},
{
"math_id": 14,
"text": "\\alpha"
},
{
"math_id": 15,
"text": "x =(4n+2\\alpha + 2)\\cos^2\\varphi"
},
{
"math_id": 16,
"text": "\\epsilon\\leq \\varphi \\leq \\tfrac{\\pi}{2} -\\epsilon n^{-1/2}"
},
{
"math_id": 17,
"text": "\ne^{-x/2}L^{(\\alpha )}_n(x)\n=(-1)^{n}(\\pi \\sin \\varphi)^{-1/2}x^{-\\alpha/2-1/4}n^{\\alpha/2-1/4}\n\\big\\{\\sin\\left[\\left(n+\\tfrac{\\alpha+1}{2}\\right)(\\sin 2\\varphi-2\\varphi)+3\\pi/4\\right] +(nx)^{-1/2}\\mathcal{O}(1)\\big\\}\n"
},
{
"math_id": 18,
"text": "x =(4n+2\\alpha + 2)\\cosh^2\\varphi"
},
{
"math_id": 19,
"text": "\\epsilon\\leq \\varphi \\leq \\omega"
},
{
"math_id": 20,
"text": "\ne^{-x/2}L^{(\\alpha )}_n(x)\n=\\tfrac{1}{2}(-1)^{n}(\\pi \\sinh \\varphi )^{-1/2}x^{-\\alpha/2-1/4}n^{\\alpha /2-1/4}\n\\exp\\left[\\left(n+\\tfrac{\\alpha+1}{2}\\right)(2\\varphi-\\sinh 2\\varphi)\\right]\n\\{1+\\mathcal{O}\\left(n^{-1}\\right)\\}\n"
},
{
"math_id": 21,
"text": "x =4n+2\\alpha + 2 -2(2n/3)^{1/3}t"
},
{
"math_id": 22,
"text": "e^{-x/2}L^{(\\alpha)}_n(x)\n=(-1)^n\\pi^{-1}2^{-\\alpha-1/3}3^{1/3}n^{-1/3}\n\\bigg\\{\\operatorname{Ai}(t)+\\mathcal{O}\\left(n^{-2/3}\\right)\\bigg\\}"
}
]
| https://en.wikipedia.org/wiki?curid=73964350 |
73966770 | Bobkov's inequality | In probability theory, Bobkov's inequality is a functional isoperimetric inequality for the canonical Gaussian measure. It generalizes the Gaussian isoperimetric inequality.
The inequality was proven in 1997 by the Russian mathematician Sergey Bobkov.
Bobkov's inequality.
Notation:
Let formula_0 denote the canonical Gaussian measure on formula_1, let formula_2 be the standard Gaussian density with distribution function formula_3, and let formula_4 denote the Gaussian isoperimetric function; then formula_5 with formula_6
Statement.
For every locally Lipschitz continuous (or smooth) function formula_7 the following inequality holds
formula_8
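The inequality can be checked numerically in dimension one. The following Python sketch is only an illustration (the test functions are arbitrary choices); for f equal to the Gaussian distribution function the two sides coincide, while for a generic smooth function with values in [0,1] the inequality is strict.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def I(t):
    """The Gaussian isoperimetric function I(t) = phi(Phi^{-1}(t))."""
    return norm.pdf(norm.ppf(t))

def bobkov_sides(f, f_prime):
    """Both sides of Bobkov's inequality for a smooth f: R -> [0, 1], in dimension n = 1."""
    mean = quad(lambda x: f(x) * norm.pdf(x), -np.inf, np.inf)[0]
    lhs = I(mean)
    rhs = quad(lambda x: np.hypot(I(f(x)), f_prime(x)) * norm.pdf(x), -np.inf, np.inf)[0]
    return lhs, rhs

print(bobkov_sides(norm.cdf, norm.pdf))                                          # equality case
print(bobkov_sides(lambda x: 0.5 + 0.4 * np.sin(x), lambda x: 0.4 * np.cos(x)))  # strict inequality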
Generalizations.
There exists a generalization by Dominique Bakry and Michel Ledoux. | [
{
"math_id": 0,
"text": "\\gamma^n(dx)=(2\\pi)^{-n/2}e^{-\\|x\\|^2/2}d^nx"
},
{
"math_id": 1,
"text": "\\R^n"
},
{
"math_id": 2,
"text": "\\phi(x)=(2\\pi)^{-1/2}e^{-x^2/2}"
},
{
"math_id": 3,
"text": "\\Phi(t)=\\gamma^1[-\\infty,t]"
},
{
"math_id": 4,
"text": "I(t):=\\phi(\\Phi^{-1}(t))"
},
{
"math_id": 5,
"text": "I(t):[0,1]\\to [0,1]"
},
{
"math_id": 6,
"text": "\\lim\\limits_{t\\to 0} I(t)=\\lim\\limits_{t\\to 1} I(t)=0."
},
{
"math_id": 7,
"text": "f:\\R^n\\to[0,1]"
},
{
"math_id": 8,
"text": "I\\left( \\int_{\\R^n} f d\\gamma^n(dx)\\right)\\leq \\int_{\\R^n} \\sqrt{I(f)^2+|\\nabla f|^2}d\\gamma^n(dx)."
}
]
| https://en.wikipedia.org/wiki?curid=73966770 |
73967005 | Matsaev's theorem | Matsaev's theorem is a theorem from complex analysis, which characterizes the order and type of an entire function.
The theorem was proven in 1960 by Vladimir Igorevich Matsaev.
Matsaev's theorem.
Let formula_0 with formula_1 be an entire function which is bounded from below as follows
formula_2
where
formula_3 and formula_4
Then formula_5 is of order formula_6 and has finite type. | [
{
"math_id": 0,
"text": "f(z)"
},
{
"math_id": 1,
"text": "z=re^{i\\theta}"
},
{
"math_id": 2,
"text": "\\log(|f(z)|)\\geq -C\\frac{r^{\\rho}}{|\\sin(\\theta)|^s},"
},
{
"math_id": 3,
"text": "C>0,\\quad \\rho>1\\quad"
},
{
"math_id": 4,
"text": "\\quad s\\geq 0."
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "\\rho"
}
]
| https://en.wikipedia.org/wiki?curid=73967005 |
73971166 | Third medium contact method | Implicit formulation for contact mechanics
The third medium contact (TMC) is an implicit formulation for contact mechanics. Contacting bodies are embedded in a highly compliant medium (the third medium), which becomes increasingly stiff under compression. The stiffening of the third medium allows tractions to be transferred between the contacting bodies when the third medium between the bodies is compressed. In itself, the method is inexact; however, in contrast to most other contact methods, the third medium approach is continuous and differentiable, which makes it suitable for applications such as topology optimization.
The method was first proposed by Peter Wriggers, Jörg Schröder, and Alexander Schwarz where a St. Venant-Kirchhoff material was used to model the third medium. This approach requires explicit treatment of surface normals. A simplification to the method was offered by Bog et al. by applying a Hencky material with the inherent property of becoming rigid under ultimate compression. This property has made the explicit treatment of surface normals redundant, thereby transforming the third medium contact method into a fully implicit method, in contrast to the more widely used Mortar methods or Penalty methods. The addition of a new regularization by Bluhm et al. to stabilize the third medium further extended the method to applications involving moderate sliding, rendering it practically applicable.
Methodology.
A material with the property that it becomes increasingly stiff under compression is augmented by a regularization term. In terms of strain energy density, this may be expressed as
formula_0,
where formula_1 represents the augmented strain energy density in the third medium, formula_2 is the regularization term representing the inner product of the spatial Hessian by itself, and formula_3 is the underlying strain energy density of the third medium, e.g. a Neo-Hookean solid or another hyperelastic material. The term formula_2 is commonly referred to as HuHu-regularization.
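A discrete version of the regularization term can be assembled with finite differences. The following Python sketch is a hypothetical illustration, not the discretization used in the cited works: it approximates the term formula_2 for a two-component displacement field sampled on a regular grid.
import numpy as np

def huhu_term(u, spacing):
    """Approximate the HuHu regularization (the sum of squared second derivatives of the
    displacement field, i.e. the spatial Hessian contracted with itself) integrated over a grid.
    u has shape (2, ny, nx): two displacement components on an ny-by-nx grid."""
    total = np.zeros(u.shape[1:])
    for component in u:                              # loop over u_x and u_y
        for g in np.gradient(component, spacing):    # first derivatives
            for h in np.gradient(g, spacing):        # second derivatives
                total += h ** 2
    return np.sum(total) * spacing ** 2              # quadrature over the grid cells

# A smooth test displacement field on the unit square (arbitrary choice).
n = 64
spacing = 1.0 / (n - 1)
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.stack([0.05 * np.sin(np.pi * x) * np.sin(np.pi * y),
              0.05 * x * (1 - x) * y])
print(huhu_term(u, spacing))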
| [
{
"math_id": 0,
"text": "\\Psi(u) = W(u) + \\mathbb{H}(u) \\, \\boldsymbol{\\scriptstyle{\\vdots}} \\, \\mathbb{H}(u)"
},
{
"math_id": 1,
"text": " \\Psi(u)"
},
{
"math_id": 2,
"text": "\\mathbb{H}(u) \\, \\boldsymbol{\\scriptstyle{\\vdots}} \\, \\mathbb{H}(u)"
},
{
"math_id": 3,
"text": "W(u)"
}
]
| https://en.wikipedia.org/wiki?curid=73971166 |
739775 | Well-quasi-ordering | Mathematical concept for comparing objects
<templatestyles src="Stack/styles.css"/>
In mathematics, specifically order theory, a well-quasi-ordering or wqo on a set formula_0 is a quasi-ordering of formula_0 for which every infinite sequence of elements formula_1 from formula_0 contains an increasing pair formula_2 with formula_3
Motivation.
Well-founded induction can be used on any set with a well-founded relation, thus one is interested in when a quasi-order is well-founded. (Here, by abuse of terminology, a quasiorder formula_4 is said to be well-founded if the corresponding strict order formula_5 is a well-founded relation.) However the class of well-founded quasiorders is not closed under certain operations—that is, when a quasi-order is used to obtain a new quasi-order on a set of structures derived from our original set, the new quasi-order may fail to be well-founded. By placing stronger restrictions on the original well-founded quasiordering one can hope to ensure that our derived quasiorderings are still well-founded.
An example of this is the power set operation. Given a quasiordering formula_4 for a set formula_0 one can define a quasiorder formula_6 on formula_0's power set formula_7 by setting formula_8 if and only if for each element of formula_9 one can find some element of formula_10 that is larger than it with respect to formula_4. One can show that this quasiordering on formula_7 needn't be well-founded, but if one takes the original quasi-ordering to be a well-quasi-ordering, then it is.
Formal definition.
A well-quasi-ordering on a set formula_0 is a quasi-ordering (i.e., a reflexive, transitive binary relation) such that any infinite sequence of elements formula_1 from formula_0 contains an increasing pair formula_11 with formula_12. The set formula_0 is said to be well-quasi-ordered, or shortly wqo.
A well partial order, or a wpo, is a wqo that is a proper ordering relation, i.e., it is antisymmetric.
Among other ways of defining wqo's, one is to say that they are quasi-orderings which do not contain infinite "strictly decreasing" sequences (of the form formula_13) nor infinite sequences of "pairwise incomparable" elements. Hence a quasi-order ("X", ≤) is wqo if and only if ("X", <) is well-founded and has no infinite antichains.
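The defining property can be illustrated for the componentwise order on pairs of natural numbers, which is a wqo (see Dickson's lemma below). The following Python sketch is only an illustration (the sequence is an arbitrary choice); it searches a finite initial segment of a sequence for an increasing pair, which every infinite sequence is guaranteed to contain.
from itertools import combinations

def componentwise_leq(x, y):
    """The product (componentwise) order on tuples of natural numbers."""
    return all(a <= b for a, b in zip(x, y))

def find_increasing_pair(sequence):
    """Return indices i < j with sequence[i] <= sequence[j] componentwise, if any."""
    for i, j in combinations(range(len(sequence)), 2):
        if componentwise_leq(sequence[i], sequence[j]):
            return i, j
    return None

sequence = [(5, 0), (4, 2), (3, 1), (2, 4), (6, 1), (3, 5)]
print(find_increasing_pair(sequence))     # (0, 4), since (5, 0) <= (6, 1) componentwise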
Ordinal type.
Let formula_0 be well partially ordered. A (necessarily finite) sequence formula_14 of elements of formula_0 that contains no pair formula_11 with formula_12 is usually called a "bad sequence". The "tree of bad sequences" formula_15 is the tree that contains a vertex for each bad sequence, and an edge joining each nonempty bad sequence formula_16 to its parent formula_17. The root of formula_15 corresponds to the empty sequence. Since formula_0 contains no infinite bad sequence, the tree formula_15 contains no infinite path starting at the root. Therefore, each vertex formula_18 of formula_15 has an ordinal height formula_19, which is defined by transfinite induction as formula_20. The "ordinal type" of formula_0, denoted formula_21, is the ordinal height of the root of formula_15.
A "linearization" of formula_0 is an extension of the partial order into a total order. It is easy to verify that formula_21 is an upper bound on the ordinal type of every linearization of formula_0. De Jongh and Parikh proved that in fact there always exists a linearization of formula_0 that achieves the maximal ordinal type formula_21.
Constructing new wpo's from given ones.
Let formula_36 and formula_37 be two disjoint wpo sets. Let formula_38, and define a partial order on formula_39 by letting formula_40 if and only if formula_41 for the same formula_42 and formula_43. Then formula_39 is wpo, and formula_44, where formula_45 denotes natural sum of ordinals.
Given wpo sets formula_36 and formula_37, define a partial order on the Cartesian product formula_46, by letting formula_47 if and only if formula_48 and formula_49. Then formula_39 is wpo (this is a generalization of Dickson's lemma), and formula_50, where formula_51 denotes natural product of ordinals.
Given a wpo set formula_0, let formula_29 be the set of finite sequences of elements of formula_0, partially ordered by the subsequence relation. Meaning, let formula_52 if and only if there exist indices formula_53 such that formula_54 for each formula_55. By Higman's lemma, formula_29 is wpo. The ordinal type of formula_29 is formula_56
Given a wpo set formula_0, let formula_57 be the set of all finite rooted trees whose vertices are labeled by elements of formula_0. Partially order formula_57 by the tree embedding relation. By Kruskal's tree theorem, formula_57 is wpo. This result is nontrivial even for the case formula_58 (which corresponds to unlabeled trees), in which case formula_59 equals the small Veblen ordinal. In general, for formula_21 countable, we have the upper bound formula_60 in terms of the formula_61 ordinal collapsing function. (The small Veblen ordinal equals formula_62 in this ordinal notation.)
Wqo's versus well partial orders.
In practice, the wqo's one manipulates are quite often not orderings (see examples above), and the theory is technically smoother if we do not require antisymmetry, so it is built with wqo's as the basic notion. On the other hand, according to Milner 1985, "no real gain in generality is obtained by considering quasi-orders rather than partial orders... it is simply more convenient to do so."
Observe that a wpo is a wqo, and that a wqo gives rise to a wpo between equivalence classes induced by the kernel of the wqo. For example, if we order formula_63 by divisibility, we end up with formula_64 if and only if formula_65, so that formula_66.
Infinite increasing subsequences.
If formula_27 is wqo then every infinite sequence formula_67 contains an infinite increasing subsequence formula_68 (with formula_69). Such a subsequence is sometimes called perfect.
This can be proved by a Ramsey argument: given some sequence formula_70, consider the set formula_71 of indexes formula_72 such that formula_73 has no larger or equal formula_74 to its right, i.e., with formula_75. If formula_71 is infinite, then the formula_71-extracted subsequence contradicts the assumption that formula_0 is wqo. So formula_71 is finite, and any formula_76 with formula_77 larger than any index in formula_71 can be used as the starting point of an infinite increasing subsequence.
The existence of such infinite increasing subsequences is sometimes taken as a definition for well-quasi-ordering, leading to an equivalent notion.
Notes.
<templatestyles src="Citation/styles.css"/>^ Here "x" < "y" means: formula_89 and formula_90
| [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "x_0, x_1, x_2, \\ldots"
},
{
"math_id": 2,
"text": "x_i \\leq x_j"
},
{
"math_id": 3,
"text": "i < j."
},
{
"math_id": 4,
"text": "\\le"
},
{
"math_id": 5,
"text": "x\\le y\\land y\\nleq x"
},
{
"math_id": 6,
"text": "\\le^{+}"
},
{
"math_id": 7,
"text": "P(X)"
},
{
"math_id": 8,
"text": "A \\le^{+} B"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "B"
},
{
"math_id": 11,
"text": "x_i \\le x_j"
},
{
"math_id": 12,
"text": "i< j"
},
{
"math_id": 13,
"text": "x_0> x_1> x_2> \\cdots"
},
{
"math_id": 14,
"text": "(x_1, x_2, \\ldots, x_n)"
},
{
"math_id": 15,
"text": "T_X"
},
{
"math_id": 16,
"text": "(x_1, \\ldots, x_{n-1}, x_n)"
},
{
"math_id": 17,
"text": "(x_1, \\ldots, x_{n-1})"
},
{
"math_id": 18,
"text": "v"
},
{
"math_id": 19,
"text": "o(v)"
},
{
"math_id": 20,
"text": "o(v) = \\lim_{w \\mathrm{\\ child\\ of\\ } v} (o(w)+1)"
},
{
"math_id": 21,
"text": "o(X)"
},
{
"math_id": 22,
"text": "(\\N, \\le)"
},
{
"math_id": 23,
"text": "(\\Z, \\le)"
},
{
"math_id": 24,
"text": "(\\N, |)"
},
{
"math_id": 25,
"text": "(\\N^k, \\le)"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "(X, \\le)"
},
{
"math_id": 28,
"text": "(X^k,\\le^k)"
},
{
"math_id": 29,
"text": "X^*"
},
{
"math_id": 30,
"text": "b, ab, aab, aaab, \\ldots"
},
{
"math_id": 31,
"text": "(X^*,\\le)"
},
{
"math_id": 32,
"text": "u"
},
{
"math_id": 33,
"text": "(X,=)"
},
{
"math_id": 34,
"text": "u\\le v"
},
{
"math_id": 35,
"text": "(X^\\omega,\\le)"
},
{
"math_id": 36,
"text": "X_1"
},
{
"math_id": 37,
"text": "X_2"
},
{
"math_id": 38,
"text": "Y=X_1\\cup X_2"
},
{
"math_id": 39,
"text": "Y"
},
{
"math_id": 40,
"text": "y_1\\le_Y y_2"
},
{
"math_id": 41,
"text": "y_1,y_2\\in X_i"
},
{
"math_id": 42,
"text": "i\\in\\{1,2\\}"
},
{
"math_id": 43,
"text": "y_1 \\le_{X_i} y_2"
},
{
"math_id": 44,
"text": "o(Y) = o(X_1) \\oplus o(X_2)"
},
{
"math_id": 45,
"text": "\\oplus"
},
{
"math_id": 46,
"text": "Y=X_1\\times X_2"
},
{
"math_id": 47,
"text": "(a_1,a_2)\\le_Y (b_1,b_2)"
},
{
"math_id": 48,
"text": "a_1\\le_{X_1} b_1"
},
{
"math_id": 49,
"text": "a_2\\le_{X_2} b_2"
},
{
"math_id": 50,
"text": "o(Y) = o(X_1)\\otimes o(X_2)"
},
{
"math_id": 51,
"text": "\\otimes"
},
{
"math_id": 52,
"text": "(x_1,\\ldots,x_n)\\le_{X^*} (y_1,\\ldots,y_m)"
},
{
"math_id": 53,
"text": "1\\le i_1<\\cdots<i_n\\le m"
},
{
"math_id": 54,
"text": "x_j \\le_X y_{i_j}"
},
{
"math_id": 55,
"text": "1\\le j\\le n"
},
{
"math_id": 56,
"text": "o(X^*)=\\begin{cases}\\omega^{\\omega^{o(X)-1}},&o(X) \\text{ finite};\\\\ \\omega^{\\omega^{o(X)+1}},&o(X)=\\varepsilon_\\alpha+n \\text{ for some }\\alpha\\text{ and some finite }n;\\\\ \\omega^{\\omega^{o(X)}},&\\text{otherwise}.\\end{cases}"
},
{
"math_id": 57,
"text": "T(X)"
},
{
"math_id": 58,
"text": "|X|=1"
},
{
"math_id": 59,
"text": "o(T(X))"
},
{
"math_id": 60,
"text": "o(T(X))\\le\\vartheta(\\Omega^\\omega o(X))"
},
{
"math_id": 61,
"text": "\\vartheta"
},
{
"math_id": 62,
"text": "\\vartheta(\\Omega^\\omega)"
},
{
"math_id": 63,
"text": "\\Z"
},
{
"math_id": 64,
"text": "n\\equiv m"
},
{
"math_id": 65,
"text": "n=\\pm m"
},
{
"math_id": 66,
"text": "(\\Z,|)\\approx(\\N,|)"
},
{
"math_id": 67,
"text": "x_0, x_1, x_2, \\ldots,"
},
{
"math_id": 68,
"text": "x_{n_0} \\le x_{n_1}\\le x_{n_2} \\le \\cdots"
},
{
"math_id": 69,
"text": "n_0< n_1< n_2< \\cdots"
},
{
"math_id": 70,
"text": "(x_i)_i"
},
{
"math_id": 71,
"text": "I"
},
{
"math_id": 72,
"text": "i"
},
{
"math_id": 73,
"text": "x_i"
},
{
"math_id": 74,
"text": "x_j"
},
{
"math_id": 75,
"text": "i<j"
},
{
"math_id": 76,
"text": "x_n"
},
{
"math_id": 77,
"text": "n"
},
{
"math_id": 78,
"text": "(X,\\le)"
},
{
"math_id": 79,
"text": "(P(X), \\le^+)"
},
{
"math_id": 80,
"text": " A \\le^+ B \\iff \\forall a \\in A, \\exists b \\in B, a \\le b"
},
{
"math_id": 81,
"text": "x \\sim y \\iff x\\le y \\land y \\le x"
},
{
"math_id": 82,
"text": "S_0 \\subseteq S_1 \\subseteq \\cdots \\subseteq X"
},
{
"math_id": 83,
"text": "n \\in \\N"
},
{
"math_id": 84,
"text": "S_n = S_{n+1} = \\cdots"
},
{
"math_id": 85,
"text": "S \\subseteq X"
},
{
"math_id": 86,
"text": "\\forall x,y \\in X, x \\le y \\wedge x \\in S \\Rightarrow y \\in S"
},
{
"math_id": 87,
"text": "\\forall i \\in \\N, \\exists j \\in \\N, j > i, \\exists x \\in S_j \\setminus S_i"
},
{
"math_id": 88,
"text": "S"
},
{
"math_id": 89,
"text": "x\\le y"
},
{
"math_id": 90,
"text": "x \\neq y."
}
]
| https://en.wikipedia.org/wiki?curid=739775 |
73980675 | Price of anarchy in congestion games | The Price of Anarchy (PoA) is a concept in game theory and mechanism design that measures how the social welfare of a system degrades due to selfish behavior of its agents. It has been studied extensively in various contexts, particularly in congestion games (CG).
Example.
The inefficiency of congestion games was first illustrated by Pigou in 1920, using the following simple congestion game. Suppose there are two roads that lead from point A to point B: on road 1 every driver has a fixed delay of 1 minute, regardless of how many drivers use it, while on road 2 the delay of every driver is x/1000 minutes, where x is the number of drivers using it.
Suppose there are 1000 drivers who need to go from A to B. Each driver wants to minimize his own delay, but the government would like to minimize the total delay (the sum of delays of all drivers). In the Nash equilibrium, all 1000 drivers use road 2: a driver on road 2 has a delay of at most 1 minute, so no driver can gain by switching to road 1, and the total delay is 1000 minutes. In the optimal routing, 500 drivers use each road, and the total delay is 500·1 + 500·0.5 = 750 minutes.
In this example, selfish routing leads to a total delay that is 4/3 times higher than the optimum, so the price of anarchy is 4/3. In general, the price of anarchy may differ based on the type of congestion game, the structure of the network, and the delay functions. Various authors have computed upper and lower bounds on the PoA in various congestion games.
Effect of delay functions.
To illustrate the effect of the delay functions on PoA, consider a variant of the above example in which the delay in road 1 is still 1 minute, but the delay in road 2 when "x" drivers use it is formula_0, for some d>1. In equilibrium, all 1000 drivers still use road 2, for a total delay of 1000 minutes, whereas the optimal total delay tends to zero as d grows.
Therefore, the price of anarchy approaches infinity as formula_2.
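For this variant the equilibrium and the optimum can be computed in closed form, and the growth of the price of anarchy can be observed directly. The following Python sketch is only an illustration; it assumes, as above, a fixed delay of 1 on road 1 and a delay of (x/1000)^d on road 2.
def price_of_anarchy(d, drivers=1000):
    """Pigou's example with delay 1 on road 1 and (x/drivers)**d on road 2."""
    equilibrium_total = drivers * 1.0                 # all drivers use road 2, each delayed 1
    y = drivers * (d + 1) ** (-1.0 / d)               # optimal number of drivers on road 2
    optimal_total = (drivers - y) * 1.0 + y * (y / drivers) ** d
    return equilibrium_total / optimal_total

for d in (1, 2, 4, 8, 16, 32):
    print(d, round(price_of_anarchy(d), 3))
# d = 1 gives the price of anarchy 4/3; the ratio grows without bound as d increases.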
Definitions.
A congestion game (CG) is defined by a set of "resources". For example, in a road network, each road is an individual resource. For each
resource, there is a "delay function" (aka "cost function"). The function maps the amount of congestion in the resource (e.g. the number of drivers choosing to use the road) to the delay experienced by each player using it. The total cost of a player is the total delay in all the resources he chooses. Each player chooses a strategy in order to minimize his own cost.
A Nash equilibrium is a situation in which no player can improve his delay by unilaterally changing his choice. The price of anarchy (PoA) is the ratio between the largest delay in Nash equilibrium, and the smallest possible delay overall. The price of stability (PoS) is the ratio between the "smallest" delay in Nash equilibrium (that is: the best possible equilibrium), and the smallest possible delay overall. The PoA and PoS can also be computed with respect to other equilibrium concepts, such as mixed equilibrium or correlated equilibrium.
There are several main classes of congestion games: in atomic CGs there are finitely many players, each of whom controls one non-negligible unit of traffic (of identical size in unweighted games and of player-specific size in weighted games), whereas in nonatomic CGs there is a continuum of players, each controlling a negligible fraction of the traffic.
Another classification of CGs is based on the sets of strategies available to the players: in symmetric CGs all players share the same strategy set, in singleton CGs every strategy consists of a single resource, and in network CGs the resources are the edges of a graph and each strategy is a path.
Moreover:
Atomic congestion games.
Christodoulou and Koutsoupias analyzed atomic unweighted CGs. They proved that the PoA when all delay functions are linear is exactly 2.5 (that is: the PoA is always at most 2.5, and in some cases it is exactly 2.5). They also gave upper and lower bounds for PoA when the delay functions are polynomials of bounded degree. In another paper, Christodoulou and Koutsoupias analyzed the PoS of atomic unweighted congestion games with linear delay functions. They proved that the PoS is at most 1.6, and showed an example in which the PoS is 1.577. They also showed that the PoA of correlated equilibria in this case is exactly 2.5 for unweighted games and exactly 2.618 for weighted games.
Awerbuch, Azar and Epstein analyzed atomic "weighted" CGs. They proved that the PoA when all delay functions are linear is exactly 2.618. They also showed that, when the delay functions are polynomials of degree "d", the PoA is in formula_7.
Aland, Dumrauf, Gairing, Monien and Schoppmann computed the exact PoA for atomic CGs, for delay functions that are polynomials of degree at most "d": for weighted games the PoA is exactly formula_8, where formula_9 is the unique positive real solution of formula_10; for unweighted games the PoA is exactly formula_14, where formula_15. In both cases, the PoA grows like formula_13.
The same bounds hold whenever no player can improve his "expected" cost by a unilateral deviation. Therefore, the worst-case PoA is the same with respect to pure Nash equilibrium, mixed Nash equilibrium, correlated equilibrium and coarse-correlated equilibrium. Moreover, the bounds hold for unweighted and weighted "network" congestion games.
Bhawalkar, Gairing and Roughgarden analyze weighted CGs, and show how to compute the PoA for any class of cost functions (not necessarily polynomial). They also show that, under mild conditions on the allowable delay functions, the PoA with respect to pure Nash equilibria, mixed Nash equilibria, correlated equilibria and coarse correlated equilibria are always equal. They also show that, with polynomial cost functions, the worst-case PoA is attained on a simple network, consisting only of a set of parallel edges. They also show that the PoA of symmetric unweighted congestion games is always equal to that of asymmetric ones.
Further results.
De-Jong and Uetz study sequential CGs, in which players pick their strategies sequentially rather than simultaneously. They analyze the PoA of subgame perfect equilibrium. They show that the sequential PoA with affine cost functions is exactly 1.5 for two players and ≈2.13 for three players, and at least 2.46 for four players. For singleton congestion games with affine cost functions, when there are n players, the sequential PoA is at most "n"-1; when formula_16, the sequential PoA is at least 2+1/e ≈ 2.37. For symmetric singleton atomic congestion games with affine cost functions, the sequential PoA is exactly 4/3.
Fotakis studies the PoA of CGs with linearly-independent paths, which is an extension of the setting of parallel links.
Law, Huang and Liu study the PoA of CGs in cognitive radio networks.
Gairing, Burkhard and Karsten study the PoA of CGs with player-specific linear delay functions.
Milchtaich analyzes the effect of network topology on the efficiency of pure Nash equilibria (PNE) in atomic CGs:
PoA of nonatomic congestion games.
Roughgarden and Tardos analyzed nonatomic CGs. They showed that, when the delay functions are polynomials of degree at most "d", the PoA is in formula_12, which is substantially smaller than the PoA of atomic games. In particular, when "d"=1, the PoA is 4/3; this shows that Pigou's simple example is the worst case for linear delay functions.
Chau and Sim extend the results of Roughgarden and Tardos by (1) considering symmetric cost maps and (2) incorporating elastic demands.
Correa, Schulz and Stier-Moses present a short, geometric proof to the results on PoA for nonatomic CGs. They also give stronger bounds on the PoA when equilibrium costs are within reasonable limits of the fixed costs.
Blum, Even-Dar and Ligett showed that all these PoA bounds apply under relatively weak behavioral assumptions: it is sufficient that all users achieve vanishing average regret over repeated plays of the game.
A useful concept in the analysis of PoA is "smoothness". A delay function "d" is called formula_17-smooth if for all formula_18, formula_19. If the delay is formula_17-smooth, formula_20 is a Nash equilibrium, and formula_21 is an optimal allocation, then formula_22. In other words, the price of anarchy is at most formula_23.
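As a concrete illustration of the smoothness framework (an added sketch, not taken from the cited papers), the following Python snippet numerically checks that the linear delay function d(x) = x is (1, 1/4)-smooth on a grid of values, which yields the bound 4/3 matching the nonatomic PoA for linear delays:
import numpy as np
def is_smooth(d, lam, mu, values):
    # check y*d(x) <= lam*y*d(y) + mu*x*d(x) for all pairs on the grid
    return all(y * d(x) <= lam * y * d(y) + mu * x * d(x) + 1e-12
               for x in values for y in values)
delay = lambda x: x                      # linear delay function
lam, mu = 1.0, 0.25
grid = np.linspace(0.01, 10.0, 200)
print(is_smooth(delay, lam, mu, grid))   # True: d(x) = x is (1, 1/4)-smooth
print(lam / (1.0 - mu))                  # 1.333..., so the price of anarchy is at most 4/3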
Milchtaich analyzed "singleton" nonatomic CGs, with the following additional characteristics:
In such games, the equilibrium payoffs are always unique and Pareto-efficient, but may not maximize the sum of utilities. Moreover:
PoA of splittable congestion games.
Roughgarden and Schoppmann analyzed splittable congestion games. They showed that, when the delay functions are polynomials of degree at most "d", the PoA is in formula_28. In particular, when "d"=1, the PoA is at most 3/2. The PoA for splittable games is smaller than for atomic games, but larger than nonatomic games. For example:
PoA with altruistic players.
The basic CG model assumes that players are selfish - they care only about their own payoff. In fact, players may be altruistic and care about the social cost too. This can be modeled by assuming that the actual cost of each player is a weighted average of his own delay and the total delay. Altruism may have surprising effects on the system efficiency:
There are other papers studying the effect of altruism on the PoA. An alternative way to measure the effect of altruism on efficiency is via comparative statics: in a single game (not necessarily worst-case one), how does increasing the altruism coefficient affect the social cost? For some classes of CGs, the effect of altruism on efficiency may be negative.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x/1000)^d"
},
{
"math_id": 1,
"text": "x=\\frac{1000}{(d+1)^{1/d}}"
},
{
"math_id": 2,
"text": "d\\to\\infty"
},
{
"math_id": 3,
"text": "1000\\cdot (1-\\epsilon_d)"
},
{
"math_id": 4,
"text": "\\epsilon_d\\to 0"
},
{
"math_id": 5,
"text": "1000\\epsilon_d+1000\\cdot (1-\\epsilon_d)^{d+1}"
},
{
"math_id": 6,
"text": "1000\\cdot 1^{d+1} = 1000"
},
{
"math_id": 7,
"text": "d^{\\Theta(d)}"
},
{
"math_id": 8,
"text": "(\\Phi_d)^{d+1}\n"
},
{
"math_id": 9,
"text": "\\Phi_d\n"
},
{
"math_id": 10,
"text": "(x+1)^d = x^{d+1}\n"
},
{
"math_id": 11,
"text": "\\Phi_1\n"
},
{
"math_id": 12,
"text": "\\Theta\\left(\\frac{d}{\\log{d}}\\right)\n"
},
{
"math_id": 13,
"text": "\\Theta\\left(\\frac{d}{\\log{d}}\\right)^{d+1}\n"
},
{
"math_id": 14,
"text": "\\frac{\n(k_d+1)^{2d+1} - (k_d+2)^d\\cdot (k_d)^{d+1} \n}\n{\n(k_d+1)^{d+1} - (k_d+2)^{d} + (k_d+1)^{d} - (k_d)^{d+1}\n}\n"
},
{
"math_id": 15,
"text": "k_d := \\lfloor \\Phi_d \\rfloor\n"
},
{
"math_id": 16,
"text": "n\\to\\infty"
},
{
"math_id": 17,
"text": "(\\lambda, \\mu)"
},
{
"math_id": 18,
"text": "x,y > 0"
},
{
"math_id": 19,
"text": "y d(x) \\leq \\lambda y d(y) + \\mu x d(x)"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "f^*"
},
{
"math_id": 22,
"text": "\\textstyle \\sum_e x_ed_e(x_e) \\leq \\frac{\\lambda}{1 - \\mu} \\sum_e x_e^* d_e(x_e^*)"
},
{
"math_id": 23,
"text": "\\textstyle \\frac{\\lambda}{1- \\mu}"
},
{
"math_id": 24,
"text": "u_i = v_i(e) - d_e(x_e)\n"
},
{
"math_id": 25,
"text": "v_i(e)\n"
},
{
"math_id": 26,
"text": "d_e(x_e)\n"
},
{
"math_id": 27,
"text": "\\frac{d}{dx}[x\\cdot d_e(x)]\n"
},
{
"math_id": 28,
"text": "\\left(\\frac{1+\\sqrt{d+1}}{2}\\right)^{d+1}\n"
}
]
| https://en.wikipedia.org/wiki?curid=73980675 |
73989969 | Moser's trick | In differential geometry, a branch of mathematics, Moser's trick (or Moser's argument) is a method to relate two differential forms formula_0 and formula_1 on a smooth manifold by a diffeomorphism formula_2 such that formula_3, provided that one can find a family of vector fields satisfying a certain ODE.
More generally, the argument holds for a family formula_4 and produces an entire isotopy formula_5 such that formula_6.
It was originally given by Jürgen Moser in 1965 to check when two volume forms are equivalent, but its main applications are in symplectic geometry. It is the standard argument for the modern proof of Darboux's theorem, as well as for the proof of Darboux-Weinstein theorem and other normal form results.
General statement.
Let formula_7 be a family of differential forms on a compact manifold formula_8. If the ODE formula_9 admits a solution formula_10, then there exists a family formula_11 of diffeomorphisms of formula_8 such that formula_12 and formula_13.
In particular, there is a diffeomorphism formula_14 such that formula_15.
Proof.
The trick consists in viewing formula_11 as the flows of a time-dependent vector field, i.e. of a smooth family formula_16 of vector fields on formula_8. Using the definition of flow, i.e. formula_17 for every formula_18, one obtains from the chain rule that formula_19. By hypothesis, one can always find formula_20 such that formula_9, hence the flow formula_5 satisfies formula_21. In particular, as formula_8 is compact, this flow exists up to formula_22.
Application to volume forms.
Let formula_23 be two volume forms on a compact, connected formula_24-dimensional manifold formula_8. Then there exists a diffeomorphism formula_25 of formula_8 such that formula_26 if and only if formula_27.
Proof.
One implication holds by the invariance of the integral under diffeomorphisms: formula_28.
For the converse, we apply Moser's trick to the family of volume forms formula_29. Since formula_30, the de Rham cohomology class formula_31 vanishes, as a consequence of Poincaré duality and the de Rham theorem. Then formula_32 for some formula_33, hence formula_34. By Moser's trick, it is enough to solve the following ODE, where we used Cartan's magic formula and the fact that formula_35 is a top-degree form: formula_36. However, since formula_35 is a volume form, i.e. formula_37, given formula_38 one can always find formula_20 such that formula_39.
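For a concrete feel of the argument (an added illustration, not part of the article), consider on the circle the family of volume forms (1 + t cos(theta)) dtheta, all of total volume 2*pi. On a 1-dimensional manifold, the Lie derivative of g dtheta along the vector field f d/dtheta is the exact form d(f*g), so Moser's ODE reduces to d/dt(g) + d/dtheta(f*g) = 0; the following Python (SymPy) sketch checks that f = -sin(theta)/(1 + t cos(theta)) solves it:
import sympy as sp
theta, t = sp.symbols("theta t", real=True)
g = 1 + t * sp.cos(theta)          # alpha_t = g dtheta, a family of volume forms on the circle
f = -sp.sin(theta) / g             # candidate time-dependent vector field X_t = f d/dtheta
# Moser's ODE on the circle: d/dt(g) + d/dtheta(f*g) = 0
print(sp.simplify(sp.diff(g, t) + sp.diff(f * g, theta)))   # 0, so the flow of X_t pulls alpha_t back to alpha_0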
Application to symplectic structures.
In the context of symplectic geometry, Moser's trick is often presented in the following form. Let formula_40 be a family of symplectic forms on formula_8 such that formula_41, for formula_42. Then there exists a family formula_11 of diffeomorphisms of formula_8 such that formula_12 and formula_13.
Proof.
In order to apply Moser's trick, we need to solve the following ODE
formula_43, where we used the hypothesis, Cartan's magic formula, and the fact that formula_44 is closed. However, since formula_44 is non-degenerate, i.e. formula_45, given formula_46 one can always find formula_20 such that formula_47.
Corollary.
Given two symplectic structures formula_48 and formula_49 on formula_8 such that formula_50 for some point formula_51, there are two neighbourhoods formula_52 and formula_53 of formula_54 and a diffeomorphism formula_55 such that formula_56 and formula_57. This follows by noticing that, by the Poincaré lemma, the difference formula_58 is locally formula_59 for some formula_60; then, further shrinking the neighbourhoods, the result above applied to the family formula_61 of symplectic structures yields the diffeomorphism formula_62.
Darboux theorem for symplectic structures.
Darboux's theorem for symplectic structures states that any point formula_54 in a given symplectic manifold formula_63 admits a local coordinate chart formula_64 such that formula_65. While the original proof by Darboux required a more general statement for 1-forms, Moser's trick provides a straightforward proof. Indeed, choosing any symplectic basis of the symplectic vector space formula_66, one can always find local coordinates formula_67 such that formula_68. Then it is enough to apply the corollary of Moser's trick discussed above to formula_69 and formula_70, and consider the new coordinates formula_71.
Application: Moser stability theorem.
Moser himself provided an application of his argument for the stability of symplectic structures, which is now known as the Moser stability theorem. Let formula_40 be a family of symplectic forms on formula_8 which are cohomologous, i.e. the de Rham cohomology class formula_72 does not depend on formula_73. Then there exists a family formula_5 of diffeomorphisms of formula_8 such that formula_74 and formula_13.
Proof.
It is enough to check that formula_75; then the proof follows from the previous application of Moser's trick to symplectic structures. By the cohomologous hypothesis, formula_76 is an exact form, so that its derivative formula_77 is also exact for every formula_73. The actual proof that this can be done in a smooth way, i.e. that formula_75 for a "smooth" family of 1-forms formula_46, requires some algebraic topology. One option is to prove it by induction, using Mayer-Vietoris sequences; another is to choose a Riemannian metric and employ Hodge theory.
{
"math_id": 0,
"text": "\\alpha_0"
},
{
"math_id": 1,
"text": "\\alpha_1"
},
{
"math_id": 2,
"text": "\\psi \\in \\mathrm{Diff}(M)"
},
{
"math_id": 3,
"text": "\\psi^* \\alpha_1 = \\alpha_0"
},
{
"math_id": 4,
"text": "\\{ \\alpha_t \\}_{t \\in [0,1]}"
},
{
"math_id": 5,
"text": "\\psi_t"
},
{
"math_id": 6,
"text": "\\psi_t^* \\alpha_t = \\alpha_0"
},
{
"math_id": 7,
"text": "\\{ \\omega_t \\}_{t \\in [0,1]} \\subset \\Omega^k (M)"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "\\frac{d}{dt} \\omega_t + \\mathcal{L}_{X_t} \\omega_t = 0"
},
{
"math_id": 10,
"text": "\\{ X_t \\}_{t \\in [0,1]} \\subset \\mathfrak{X}(M)"
},
{
"math_id": 11,
"text": "\\{ \\psi_t \\}_{t \\in [0,1]}"
},
{
"math_id": 12,
"text": "\\psi_t^*\\omega_t = \\omega_0"
},
{
"math_id": 13,
"text": "\\psi_0 = \\mathrm{id}_M"
},
{
"math_id": 14,
"text": "\\psi := \\psi_1"
},
{
"math_id": 15,
"text": "\\psi^*\\omega_1 = \\omega_0"
},
{
"math_id": 16,
"text": "\\{ X_t \\}_{t \\in [0,1]}"
},
{
"math_id": 17,
"text": "\\frac{d}{dt} \\psi_t = X_t \\circ \\psi_t"
},
{
"math_id": 18,
"text": "t \\in [0,1]"
},
{
"math_id": 19,
"text": "\\frac{d}{dt} (\\psi_t^* \\omega_t) = \\psi_t^* \\Big( \\frac{d}{dt} \\omega_t + \\mathcal{L}_{X_t}\\omega_t \\Big)."
},
{
"math_id": 20,
"text": "X_t"
},
{
"math_id": 21,
"text": "\\psi_t^* \\omega_t = \\mathrm{const} = \\psi_0^* \\omega_0 = \\omega_0"
},
{
"math_id": 22,
"text": "t = 1"
},
{
"math_id": 23,
"text": "\\alpha_0, \\alpha_1"
},
{
"math_id": 24,
"text": "n"
},
{
"math_id": 25,
"text": "\\psi"
},
{
"math_id": 26,
"text": "\\psi^*\\alpha_1 = \\alpha_0"
},
{
"math_id": 27,
"text": "\\int_M \\alpha_0 = \\int_M \\alpha_1"
},
{
"math_id": 28,
"text": "\\int_M \\alpha_0 = \\int_M \\psi^*\\alpha_1 = \\int_{\\psi(M)} \\alpha_1 = \\int_M \\alpha_1"
},
{
"math_id": 29,
"text": "\\alpha_t := (1-t) \\alpha_0 + t \\alpha_1"
},
{
"math_id": 30,
"text": "\\int_M (\\alpha_1 - \\alpha_0) = 0"
},
{
"math_id": 31,
"text": "[\\alpha_0 - \\alpha_1] \\in H^n_{dR}(M)"
},
{
"math_id": 32,
"text": "\\alpha_1 - \\alpha_0 = d\\beta"
},
{
"math_id": 33,
"text": "\\beta \\in \\Omega^{n-1} (M)"
},
{
"math_id": 34,
"text": "\\alpha_t = \\alpha_0 + t d\\beta"
},
{
"math_id": 35,
"text": "\\alpha_t"
},
{
"math_id": 36,
"text": "0 = \\frac{d}{dt} \\alpha_t + \\mathcal{L}_{X_t} \\alpha_t = d\\beta + d (\\iota_{X_t} \\alpha_t) + \\iota_{X_t} (\\cancel{d \\alpha_t}) = d (\\beta + \\iota_{X_t} \\alpha_t)."
},
{
"math_id": 37,
"text": "TM \\xrightarrow{\\cong} \\wedge^{n-1} T^*M, \\quad X_t \\mapsto \\iota_{X_t} \\alpha_t"
},
{
"math_id": 38,
"text": "\\beta"
},
{
"math_id": 39,
"text": "\\beta + \\iota_{X_t} \\alpha_t = 0"
},
{
"math_id": 40,
"text": "\\{ \\omega_t \\}_{t \\in [0,1]} \\subset \\Omega^2 (M)"
},
{
"math_id": 41,
"text": "\\frac{d}{dt} \\omega_t = d \\sigma_t"
},
{
"math_id": 42,
"text": "\\{ \\sigma_t \\}_{t \\in [0,1]} \\subset \\Omega^1 (M)"
},
{
"math_id": 43,
"text": "0 = \\frac{d}{dt} \\omega_t + \\mathcal{L}_{X_t}\\omega_t = d \\sigma_t + \\iota_{X_t} (\\cancel{d\\omega_t}) + d (\\iota_{X_t} \\omega_t) = d (\\sigma_t + \\iota_{X_t} \\omega_t),"
},
{
"math_id": 44,
"text": "\\omega_t"
},
{
"math_id": 45,
"text": "TM \\xrightarrow{\\cong} T^*M, \\quad X_t \\mapsto \\iota_{X_t} \\omega_t"
},
{
"math_id": 46,
"text": "\\sigma_t"
},
{
"math_id": 47,
"text": "\\sigma_t + \\iota_{X_t} \\omega_t = 0"
},
{
"math_id": 48,
"text": "\\omega_0"
},
{
"math_id": 49,
"text": "\\omega_1"
},
{
"math_id": 50,
"text": "(\\omega_0)_x = (\\omega_1)_x"
},
{
"math_id": 51,
"text": "x \\in M"
},
{
"math_id": 52,
"text": "U_0"
},
{
"math_id": 53,
"text": "U_1"
},
{
"math_id": 54,
"text": "x"
},
{
"math_id": 55,
"text": "\\phi: U_0 \\to U_1"
},
{
"math_id": 56,
"text": "\\phi(x) = x"
},
{
"math_id": 57,
"text": "\\phi^*\\omega_1 = \\omega_0"
},
{
"math_id": 58,
"text": "\\omega_1 - \\omega_0"
},
{
"math_id": 59,
"text": "d\\sigma"
},
{
"math_id": 60,
"text": "\\sigma \\in \\Omega^1 (M)"
},
{
"math_id": 61,
"text": "\\omega_t := (1-t) \\omega_0 + t \\omega_1"
},
{
"math_id": 62,
"text": "\\phi := \\psi_1"
},
{
"math_id": 63,
"text": "(M,\\omega)"
},
{
"math_id": 64,
"text": "(U, x^1,\\ldots,x^n,y^1,\\ldots,y^n)"
},
{
"math_id": 65,
"text": "\\omega|_U = \\sum_{i=1}^n dx^i \\wedge dy^i."
},
{
"math_id": 66,
"text": "(T_x M,\\omega_x)"
},
{
"math_id": 67,
"text": "(\\tilde{U}, \\tilde{x}^1,\\ldots,\\tilde{x}^n,\\tilde{y}^1,\\ldots,\\tilde{y}^n)"
},
{
"math_id": 68,
"text": "\\omega_x = \\sum_{i=i}^n (d\\tilde{x}^i \\wedge d\\tilde{y}^i) |_x"
},
{
"math_id": 69,
"text": "\\omega_0 = \\omega |_{\\tilde{U}}"
},
{
"math_id": 70,
"text": "\\omega_1 = \\sum_{i=i}^n d\\tilde{x}^i \\wedge d\\tilde{y}^i"
},
{
"math_id": 71,
"text": "x^i = \\tilde{x}^i \\circ \\phi, y^i = \\tilde{y}^i \\circ \\phi"
},
{
"math_id": 72,
"text": "[\\omega_t] \\in H^2_{dR}(M)"
},
{
"math_id": 73,
"text": "t"
},
{
"math_id": 74,
"text": "\\psi^*\\omega_t = \\omega_0"
},
{
"math_id": 75,
"text": "\\frac{d}{dt} \\omega_t = d \\sigma_t"
},
{
"math_id": 76,
"text": "\\omega_t - \\omega_0"
},
{
"math_id": 77,
"text": "\\frac{d}{dt} (\\omega_t - \\omega_0) = \\frac{d}{dt} \\omega_t"
}
]
| https://en.wikipedia.org/wiki?curid=73989969 |
7399828 | TC0 | Complexity class used in circuit complexity
TC0 is a complexity class used in circuit complexity. It is the first class in the hierarchy of TC classes.
TC0 contains all languages which are decided by Boolean circuits with constant depth and polynomial size, containing only unbounded fan-in AND gates, OR gates, NOT gates, and majority gates. Equivalently, threshold gates can be used instead of majority gates.
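To make the gate types concrete, here is a small Python sketch (an added illustration) of a majority gate and a threshold gate, together with the standard padding argument showing that a threshold gate can be simulated by a single majority gate with constant inputs, which is one way to see that the two definitions of TC0 agree:
from itertools import product
def majority(bits):
    # MAJ gate: 1 iff more than half of the inputs are 1
    return int(2 * sum(bits) > len(bits))
def threshold(bits, t):
    # THR_t gate: 1 iff at least t inputs are 1
    return int(sum(bits) >= t)
def threshold_via_majority(bits, t):
    # Simulate THR_t by one MAJ gate, padding with constant 0/1 inputs
    n = len(bits)
    if t <= 0:
        return 1
    if t > n:
        return 0
    pad = [1] * (n - 2 * t + 2) if 2 * t <= n + 2 else [0] * (2 * t - 2 - n)
    return majority(list(bits) + pad)
# sanity check on all inputs of length 6
assert all(threshold(b, t) == threshold_via_majority(b, t)
           for b in product((0, 1), repeat=6) for t in range(8))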
TC0 contains several important problems, such as sorting "n" "n"-bit numbers, multiplying two "n"-bit numbers, integer division or recognizing the Dyck language with two types of parentheses.
Complexity class relations.
We can relate TC0 to other circuit classes, including AC0 and NC1; Vollmer 1999 p. 126 states:
formula_0
Vollmer states that the question of whether the last inclusion above is strict is "one of the main open problems in circuit complexity" (ibid.).
We also have that uniform formula_1. (Allender 1996, as cited in Burtschick 1999).
Basis for uniform TC0.
The functional version of the uniform formula_2 coincides with the closure with respect to composition of the projections and one of the following function sets: formula_3, formula_4. Here formula_5, and formula_6 is a bitwise AND of formula_7 and formula_8. By the functional version one means the set of all functions formula_9 over non-negative integers that are bounded by functions of FP and such that formula_10 is in the uniform formula_2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{AC}^0 \\subsetneq \\mathsf{AC}^0[p] \\subsetneq \\mathsf{TC}^0 \\subseteq \\mathsf{NC}^1. "
},
{
"math_id": 1,
"text": "\\mathsf{TC}^0 \\subsetneq \\mathsf{PP}"
},
{
"math_id": 2,
"text": "\\mbox{TC}^0"
},
{
"math_id": 3,
"text": "\\{n+m, n \\,\\stackrel{.}{-}\\, m, n\\wedge m, \\lfloor n/m \\rfloor, 2^{\\lfloor \\log_2 n \\rfloor^2} \\}"
},
{
"math_id": 4,
"text": "\\{n+m, n \\,\\stackrel{.}{-}\\, m, n\\wedge m, \\lfloor n/m \\rfloor, n^{\\lfloor \\log_2 m \\rfloor} \\}"
},
{
"math_id": 5,
"text": "n \\,\\stackrel{.}{-}\\, m=\\max(0,n-m)"
},
{
"math_id": 6,
"text": "n\\wedge m"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": "f(x_1,\\ldots,x_n)"
},
{
"math_id": 10,
"text": "(y\\text{-th bit of }f(x_1,\\ldots,x_n))"
}
]
| https://en.wikipedia.org/wiki?curid=7399828 |
74003259 | Praseodymium bismuthide | Binary inorganic compound of praseodymium and bismuth with the chemical formula of PrBi
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium bismuthide is a binary inorganic compound of praseodymium and bismuth with the chemical formula of PrBi. It forms crystals.
Preparation.
Praseodymium bismuthide can be prepared by reacting stoichiometric amounts of praseodymium and bismuth at 1800 °C:
formula_0
Physical properties.
Praseodymium bismuthide forms crystals of the cubic crystal system, with space group "Fm"3"m", cell parameters "a" = 0.64631 nm, Z = 4, and a structure like sodium chloride NaCl. The compound melts congruently at a temperature of roughly 1800 °С. At a pressure of 14 GPa, it undergoes a phase transition.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Pr + Bi \\ \\xrightarrow{1800^oC}\\ PrBi }"
}
]
| https://en.wikipedia.org/wiki?curid=74003259 |
74004225 | T centre | Radiation damage centre in silicon
The T centre is a radiation damage centre in silicon composed of a carbon-carbon pair (C-C) sharing a substitutional site of the silicon lattice. Additionally, one of the substitutional carbon atoms is bonded with a hydrogen atom while the other carbon contains an unpaired electron in the ground state of a dangling bond. Much like the nitrogen-vacancy centres in diamond, the T centre contains spin-dependent optical transitions addressable through photoluminescence. These spin-dependent transitions, however, emit light within the technologically efficient telecommunication O-band. Consequentially, the T centre is an intriguing candidate for quantum information technologies with development of integrated quantum devices benefiting from techniques within the silicon photonic community.
Structure.
The T centre is a radiation damage centre in silicon. It contains a substitutional carbon-carbon pair terminated by an additional hydrogen atom within the lattice. This structure also contains a dangling bond on the other substitutional carbon.
Historically, the structure of the T centre was uncovered using spectroscopic measurements. The presence of carbon as the main constituent within the lattice was hypothesized when a shift in the defect's zero phonon line (ZPL) was observed in samples enriched with 13C. Similarly, the presence of hydrogen was determined using a shift in the ZPL in a deuterium-diffused sample. The splitting of the local vibrational modes (LVM) from 2 lines into 4 lines upon 13C enrichment suggested the presence of a second carbon atom. The suggested formation mechanism is, therefore, the capture of an interstitial C-H pair onto a substitutional carbon with a dangling bond, as predicted by "ab initio" calculations.
External field perturbation measurements are used to determine axial symmetry and orientation of luminescent transitions. Stress-dependent spectral line studies previously suggested that rhombic "I" ("C2v") symmetry is present within the defect; however, it was later shown to have monoclinic "I" ("C1h") symmetry. Consequently, the defect is expected to have 24 orientations, which form 12 optically resolvable orientation pairs under a magnetic field. These have been studied using photoluminescence spectroscopy.
Formation.
The current formation model for the T centre involves an interstitial carbon capturing a hydrogen atom before migrating to a substitutional site with another carbon during heat treatment between 350 and 600 °C. T centres have been observed in silicon semiconductors grown using the float-zone and Czochralski (CZ) technique as well as Silicon-On-Insulator devices. They are produced by irradiating the sample followed by a thermal annealing process. It has been shown that both plasma etching and irradiation with either neutrons or electrons may produce the desired radiation centre. Hydrogen may be introduced through water vapour or in its gaseous state, or it may be present within the sample. An excess of hydrogen may, however, fill the dangling bond and render the radiation damage center optically inert. Alternatively, rather than irradiating the sample and treating it with a subsequent thermal annealing process, T centres may be developed using only a thermal treatment in carbon-rich CZ-grown silicon.
Optical properties.
The T centre's zero-phonon line photoluminescence feature is near 935 meV. This represents a transition from an unpaired electron in the ground state to a bound exciton within the first excited state. The 1.8 meV-split doublet is the result of two states within the same defect.
The inhomogeneous linewidth for this feature reduces in isotopically pure silicon-28. Natural silicon contains a mixture of various isotope masses resulting in variations in both the local band gap and binding energies. Without these variations introduced from neighbouring 29Si nuclei, the linewidth reduces from 26.9(8) formula_0eV to 0.25 formula_0eV.
Energy level structure.
The currently accepted model of the T centre proposes an unpaired electron in the ground state and an additional bound exciton in the excited states labeled T and TX respectively. The two electrons in the excited state pair into a spin-0 singlet and the remaining unpaired spin-3/2 hole spin state is split into two Kramers doublets TX0 and TX1 by the internal stress of the defect. The TX centre is characterized as a pseudo-acceptor with effective mass-like states labeled formula_1Kformula_2 for even and odd parity. formula_3 represents the principal quantum number and formula_4 indicates the symmetry group of the state. The TX ground state is, therefore, an acceptor-like fourfold degenerate formula_58+ state.
Fine structure behavior.
Both the ground state electron and the first excited state hole are doubly degenerate and split under the Zeeman interaction when exposed to an external magnetic field. Due to the splitting of each state, each orientation subset of the T-centre allows for 4 optical transitions from the ground state to TX0. For the formula_6 subset, the transitions are labeled formula_7. Characterization of these transitions is essential for hyperpolarizing the electron into the different transitions for various state manipulation protocols. Further hyperfine spin interactions between the electron and hydrogen are resolved under electron paramagnetic resonance or read using optically detected magnetic resonance signals.
State manipulation.
For a centre composed of two 12C constituents subject to an external magnetic field formula_8, the spin Hamiltonian for the ground state is given by
formula_9
This Hamiltonian describes the coupling between the unpaired electron and the hydrogen nucleus. The coefficient formula_10 denotes the Bohr magneton. The electron spin vector and g-factor tensor are given by formula_11 and formula_12. The g-factor tensor is approximately isotropic with formula_13. The hydrogen nuclear spin vector is given by formula_14. formula_15 represents the hydrogen nuclear spin g-factor, and formula_16 is the nuclear spin magneton. The hyperfine tensor formula_17 is specific to each optically resolvable orientation subset.
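As an illustration of how this ground-state spin Hamiltonian can be evaluated numerically, the following Python sketch builds the 4x4 matrix for a field along z, treating both the g-factor and the hyperfine coupling as isotropic scalars; the field strength and the hyperfine value used here are placeholders chosen only for illustration, not measured T-centre parameters:
import numpy as np
# spin-1/2 operators (units of hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)
mu_B, mu_N, h = 9.274e-24, 5.051e-27, 6.626e-34   # J/T, J/T, J*s
g_E, g_N = 2.005, 5.586                           # electron and 1H nuclear g-factors
A = 3.0e6                                         # isotropic hyperfine coupling in Hz (placeholder value)
B0 = 0.1                                          # magnetic field along z in tesla (placeholder value)
S = [np.kron(s, one) for s in (sx, sy, sz)]       # electron spin operators on the joint space
I = [np.kron(one, s) for s in (sx, sy, sz)]       # hydrogen nuclear spin operators
H = mu_B * g_E * B0 * S[2] + mu_N * g_N * B0 * I[2] + h * A * sum(S[k] @ I[k] for k in range(3))
print(np.linalg.eigvalsh(H) / h / 1e9)            # eigenvalues in GHz: electron Zeeman splitting with small hyperfine structure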
State preparation.
Both the electron and nuclear spins can be hyperpolarized using a single optical radio frequency (RF) and a selectively resonant microwave frequency (MF). Continuous-wave electron paramagnetic resonance can be used to depolarize or mix the electron spin state, and the optical transitions formula_18 and formula_19 are used for state preparation. Specifically, continuously driving the formula_18 transition excites the formula_20 electron into the formula_21 state. The state is prepared in the formula_22 spin-up state following a subsequent decay through the spin-dependent formula_23 transition. Alternatively, driving the formula_19 transition hyperpolarizes the population to the spin-down state through the formula_24 transition.
Coherence times.
The "T1" lifetimes for both the electron and nuclear spin state have been measured using nuclear magnetic resonance and have been shown to far exceed 16 seconds in 28Si. The averaged electron and nuclear Hahn-echo ("T2") times are 2.1(1) ms and 0.28(1)s respectively. A tighter lower bound for the nuclear coherence time was found by averaging the top 10% highest measurements per time, resulting in an average maximum magnitude nuclear coherence time of formula_25s. | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "N\\Gamma"
},
{
"math_id": 2,
"text": "\\pm"
},
{
"math_id": 3,
"text": "N = 1,2, ... "
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "1\\Gamma"
},
{
"math_id": 6,
"text": "i^{th}"
},
{
"math_id": 7,
"text": "\\{A,B,C,D\\}_i"
},
{
"math_id": 8,
"text": "\\mathbf{B}_0"
},
{
"math_id": 9,
"text": "\\mathcal{H_T} = \\mu_B \\mathbf{B}_0\\mathbf{g_E}\\mathbf{S} + \\mu_N g_N \\mathbf{B}_0\\mathbf{I} +h\\mathbf{SAI}"
},
{
"math_id": 10,
"text": "\\mu_B"
},
{
"math_id": 11,
"text": "\\mathbf{S}"
},
{
"math_id": 12,
"text": "g_E"
},
{
"math_id": 13,
"text": "g_E = 2.005(8)"
},
{
"math_id": 14,
"text": "\\mathbf{I}"
},
{
"math_id": 15,
"text": "g_N"
},
{
"math_id": 16,
"text": "\\mu_N"
},
{
"math_id": 17,
"text": "\\mathbf{A}"
},
{
"math_id": 18,
"text": "B_i"
},
{
"math_id": 19,
"text": "D_i"
},
{
"math_id": 20,
"text": "|\\downarrow_E \\rangle "
},
{
"math_id": 21,
"text": "|\\downarrow_H \\rangle "
},
{
"math_id": 22,
"text": "|\\uparrow_E \\rangle "
},
{
"math_id": 23,
"text": "A_i"
},
{
"math_id": 24,
"text": "C_i"
},
{
"math_id": 25,
"text": "T_{2N}^{mm} = 1.1(2)"
}
]
| https://en.wikipedia.org/wiki?curid=74004225 |
7400895 | Syntactic predicate | A syntactic predicate specifies the syntactic validity of applying a production in a formal grammar and is analogous to a semantic predicate that specifies the semantic validity of applying a production. It is a simple and effective means of dramatically improving the recognition strength of an LL parser by providing arbitrary lookahead. In their original implementation, syntactic predicates had the form “( α )?” and could only appear on the left edge of a production. The required syntactic condition α could be any valid context-free grammar fragment.
More formally, a syntactic predicate is a form of production intersection, used in parser specifications or in formal grammars. In this sense, the term "predicate" has the meaning of a mathematical indicator function. If "p1" and "p2," are production rules, the language generated by "both" "p1" "and" "p2" is their set intersection.
As typically defined or implemented, syntactic predicates implicitly order the productions so that predicated productions specified earlier have higher precedence than predicated productions specified later within the same decision. This conveys an ability to disambiguate ambiguous productions because the programmer can simply specify which production should match.
Parsing expression grammars (PEGs), invented by Bryan Ford, extend these simple predicates by allowing "not predicates" and permitting a predicate to appear anywhere within a production. Moreover, Ford invented packrat parsing to handle these grammars in linear time by employing memoization, at the cost of heap space.
It is possible to support linear-time parsing of predicates as general as those allowed by PEGs, but reduce the memory cost associated with memoization by avoiding backtracking where some more efficient implementation of lookahead suffices. This approach is implemented by ANTLR version 3, which uses Deterministic finite automata for lookahead; this may require testing a predicate in order to choose between transitions of the DFA (called "pred-LL(*)" parsing).
Overview.
Terminology.
The term "syntactic predicate" was coined by Parr & Quong and differentiates this form of predicate from semantic predicates (also discussed).
Syntactic predicates have been called "multi-step matching", "parse constraints", and simply "predicates" in various literature. (See References section below.) This article uses the term "syntactic predicate" throughout for consistency and to distinguish them from semantic predicates.
Formal closure properties.
Bar-Hillel "et al." show that the intersection of two regular languages is also a regular language, which is to say that the regular languages are closed under intersection.
The intersection of a regular language and a context-free language is always context-free, but it has been known at least since Hartmanis that the intersection of two context-free languages is not necessarily a context-free language (that is, the context-free languages are not closed under intersection). This can be demonstrated easily using the canonical Type 1 language, formula_0:
Let formula_1 (Type 2)
Let formula_2 (Type 2)
Let formula_3
Given the strings "abcc", "aabbc", and "aaabbbccc", it is clear that the only string that belongs to both L1 and L2 (that is, the only one that produces a non-empty intersection) is "aaabbbccc".
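These membership claims are easy to check mechanically; the following Python sketch (an added illustration) implements straightforward recognizers for L1 and L2 and tests the three strings:
import re
def in_L1(s):
    # L1 = { a^m b^n c^n : m, n >= 1 }
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return bool(m) and len(m.group(2)) == len(m.group(3))
def in_L2(s):
    # L2 = { a^n b^n c^m : m, n >= 1 }
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2))
for s in ("abcc", "aabbc", "aaabbbccc"):
    print(s, in_L1(s) and in_L2(s))   # only "aaabbbccc" lies in the intersection L3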
Other considerations.
In most formalisms that use syntactic predicates, the syntax of the predicate is noncommutative, which is to say that the operation of predication is ordered. For instance, using the above example, consider the following pseudo-grammar, where "X ::= Y PRED Z" is understood to mean: ""Y" produces "X" if and only if "Y" also satisfies predicate "Z"":
S ::= a X
X ::= Y PRED Z
Y ::= a+ BNCN
Z ::= ANBN c+
BNCN ::= b [BNCN] c
ANBN ::= a [ANBN] b
Given the string "aaaabbbccc", in the case where "Y" must be satisfied "first" (and assuming a greedy implementation), S will generate "aX" and "X" in turn will generate "aaabbbccc", thereby generating "aaaabbbccc". In the case where "Z" must be satisfied first, ANBN will fail to generate "aaaabbb", and thus "aaaabbbccc" is not generated by the grammar. Moreover, if either "Y" or "Z" (or both) specify any action to be taken upon reduction (as would be the case in many parsers), the order that these productions match determines the order in which those side-effects occur. Formalisms that vary over time (such as adaptive grammars) may rely on these side effects.
Examples of use.
ANTLR.
Parr & Quong give this example of a syntactic predicate:
stat: (declaration)? declaration
| expression
which is intended to satisfy the following informally stated constraints of C++:
In the first production of rule stat, the syntactic predicate (declaration)? indicates
that declaration is the syntactic context that must be present for the rest of that production to succeed. We can interpret the use of (declaration)? as "I am not sure if
declaration will match; let me try it out and, if it does not match, I shall try the next
alternative." Thus, when encountering a valid declaration, the rule declaration will be
recognized twice—once as syntactic predicate and once during the actual parse to execute semantic actions.
Of note in the above example is the fact that any code triggered by the acceptance of the "declaration" production will only occur if the predicate is satisfied.
Canonical examples.
The language formula_4 can be represented in various grammars and formalisms as follows:
Parsing Expression Grammars.
S ← &(A !b) a+ B !c
A ← a A? b
B ← b B? c
§-Calculus.
Using a "bound" predicate:
S → {A}B
A → X 'c+'
X → 'a' [X] 'b'
B → 'a+' Y
Y → 'b' [Y] 'c'
Using two "free" predicates:
A → <'a+'>"a" <'b+'>"b" Ψ("a" "b")X <'c+'>"c" Ψ("b" "c")Y
X → 'a' [X] 'b'
Y → 'b' [Y] 'c'
Conjunctive Grammars.
(Note: the following example actually generates formula_5, but is included here because it is the example given by the inventor of conjunctive grammars.):
S → AB&DC
A → aA | ε
B → bBc | ε
C → cC | ε
D → aDb | ε
Parsers/formalisms using some form of syntactic predicate.
Although by no means an exhaustive list, the following parsers and grammar formalisms employ syntactic predicates:
As originally implemented, syntactic predicates sit on the leftmost edge of a production such that the production to the right of the predicate is attempted if and only if the syntactic predicate first accepts the next portion of the input stream. Although ordered, the predicates are checked first, with parsing of a clause continuing if and only if the predicate is satisfied, and semantic actions only occurring in non-predicates.
Balmas refers to syntactic predicates as "multi-step matching" in her paper on APM. As an APM parser parses, it can bind substrings to a variable, and later check this variable against other rules, continuing to parse if and only if that substring is acceptable to further rules.
Ford's PEGs have syntactic predicates expressed as the "and-predicate" and the "not-predicate".
In the §-Calculus, syntactic predicates are originally called simply "predicates", but are later divided into "bound" and "free" forms, each with different input properties.
Raku introduces a generalized tool for describing a grammar called "rules", which are an extension of Perl 5's regular expression syntax. Predicates are introduced via a lookahead mechanism called "before", either with "codice_0" or "codice_1" (that is: ""not" before"). Perl 5 also has such lookahead, but it can only encapsulate Perl 5's more limited regexp features.
ProGrammar's GDL (Grammar Definition Language) makes use of syntactic predicates in a form called "parse constraints".
Conjunctive grammars, first introduced by Okhotin, introduce the explicit notion of conjunction-as-predication. Later treatment of conjunctive and boolean grammars is the most thorough treatment of this formalism to date.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L = \\{ a^n b^n c^n : n \\ge 1 \\} "
},
{
"math_id": 1,
"text": " L_1 = \\{ a^m b^n c^n : m,n \\ge 1 \\}"
},
{
"math_id": 2,
"text": " L_2 = \\{ a^n b^n c^m : m,n \\ge 1 \\}"
},
{
"math_id": 3,
"text": " L_3 = L_1 \\cap L_2"
},
{
"math_id": 4,
"text": "L = \\{a^n b^n c^n | n \\ge 1\\}"
},
{
"math_id": 5,
"text": "L = \\{a^n b^n c^n | n \\ge 0\\}"
}
]
| https://en.wikipedia.org/wiki?curid=7400895 |
7401755 | AC (complexity) | In circuit complexity, AC is a complexity class hierarchy. Each class, ACi, consists of the languages recognized by Boolean circuits with depth formula_0 and a polynomial number of unlimited fan-in AND and OR gates.
The name "AC" was chosen by analogy to NC, with the "A" in the name standing for "alternating" and referring both to the alternation between the AND and OR gates in the circuits and to alternating Turing machines.
The smallest AC class is AC0, consisting of constant-depth unlimited fan-in circuits.
The total hierarchy of AC classes is defined as
formula_1
Relation to NC.
The AC classes are related to the NC classes, which are defined similarly, but with gates having only constant fanin. For each "i", we have
formula_2
As an immediate consequence of this, we have that NC = AC.
It is known that both inclusions are strict for "i" = 0.
Variations.
The power of the AC classes can be affected by adding additional gates. If we add gates which calculate the modulo operation for some modulus "m", we have the classes ACCi[m].
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(\\log^i n)"
},
{
"math_id": 1,
"text": "\\mbox{AC} = \\bigcup_{i \\geq 0} \\mbox{AC}^i"
},
{
"math_id": 2,
"text": "\\mbox{NC}^i \\subseteq \\mbox{AC}^i \\subseteq \\mbox{NC}^{i+1}."
}
]
| https://en.wikipedia.org/wiki?curid=7401755 |
74025273 | Blackwell-Girshick equation | Variance of random sum
The Blackwell-Girshick equation is an equation in probability theory that allows for the calculation of the variance of random sums of random variables. It plays the same role for the variance of such sums that Wald's equation (Wald's lemma) plays for their expectation.
It is named after David Blackwell and Meyer Abraham Girshick.
Statement.
Let formula_0 be a random variable with values in formula_1, let formula_2 be independent and identically distributed random variables, which are also independent of formula_3, and assume that the second moment exists for all formula_4 and formula_3. Then, the random variable defined by
formula_5
has the variance
formula_6.
The Blackwell-Girshick equation can be derived using conditional variance and variance decomposition.
If the formula_7 are natural number-valued random variables, the derivation can be done elementarily using the chain rule and the probability-generating function.
Proof.
For each formula_8, let formula_9 be the random variable which is 1 if formula_3 equals formula_10 and 0 otherwise, and let formula_11. Then
formula_12
By Wald's equation, under the given hypotheses, formula_13. Therefore,
formula_14
as desired.§5.1, Theorem 5.10
Example.
Let formula_0 have a Poisson distribution with expectation formula_15, and let formula_16 follow a Bernoulli distribution with parameter formula_17. In this case, formula_18 is also Poisson distributed with expectation formula_19, so its variance must be formula_19. We can check this with the Blackwell-Girshick equation: formula_3 has variance formula_20 while each formula_4 has mean formula_21 and variance formula_22, so we must have
formula_23.
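A quick Monte Carlo check of this example (an added illustration with arbitrarily chosen parameters) confirms the formula numerically in Python:
import numpy as np
rng = np.random.default_rng(0)
lam, p, trials = 4.0, 0.3, 200_000
N = rng.poisson(lam, size=trials)
Y = rng.binomial(N, p)                        # sum of N Bernoulli(p) variables, drawn as one binomial per trial
print(Y.var())                                # close to 1.2
print(lam * p**2 + lam * p * (1 - p))         # Blackwell-Girshick: Var(N)E(X)^2 + E(N)Var(X) = lam*p = 1.2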
Application and related concepts.
The Blackwell-Girshick equation is used in actuarial mathematics to calculate the variance of composite distributions, such as the compound Poisson distribution. Wald's equation provides similar statements about the expectation of composite distributions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " N "
},
{
"math_id": 1,
"text": " \\mathbb{Z}_{\\ge 0} "
},
{
"math_id": 2,
"text": " X_1, X_2, X_3, \\dots"
},
{
"math_id": 3,
"text": "N"
},
{
"math_id": 4,
"text": "X_i"
},
{
"math_id": 5,
"text": "Y:=\\sum_{i=1}^NX_i"
},
{
"math_id": 6,
"text": "\\operatorname{Var}(Y)=\\operatorname{Var}(N)\\operatorname{E}(X_1)^2+\\operatorname{E}(N)\\operatorname{Var}(X_1)"
},
{
"math_id": 7,
"text": " X_i "
},
{
"math_id": 8,
"text": "n\\ge 0"
},
{
"math_id": 9,
"text": "\\chi_n"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "Y_n:=X_1+\\cdots+X_n"
},
{
"math_id": 12,
"text": "\\begin{align} \\operatorname{E}(Y^2) \n& = \\sum_{n=0}^\\infty \\operatorname{E}(\\chi_n Y_n^2)\\\\\n&= \\sum_{n=0}^\\infty \\operatorname{P}(N=n) \\operatorname{E}(Y_n^2)\\\\\n&= \\sum_{n=0}^\\infty\\operatorname{P}(N=n) (\\operatorname{Var}(Y_n)+\\operatorname{E}(Y_n)^2)\\\\\n&= \\sum_{n=0}^\\infty\\operatorname{P}(N=n) (n \\operatorname{Var}(X_1)+n^2\\operatorname{E}(X_1)^2)\\\\\n&= \\operatorname{E}(N) \\operatorname{Var}(X_1) + \\operatorname{E}(N^2) \\operatorname{E}(X_1)^2.\n\\end{align}"
},
{
"math_id": 13,
"text": "\\operatorname{E}(Y)=\\operatorname{E}(N) \\operatorname{E}(X_1)"
},
{
"math_id": 14,
"text": "\\begin{align} \\operatorname{Var}(Y)&=\\operatorname{E}(Y^2)-\\operatorname{E}(Y)^2\\\\\n&= \\operatorname{E}(N) \\operatorname{Var}(X_1) + \\operatorname{E}(N^2) \\operatorname{E}(X_1)^2 - \\operatorname{E}(N)^2 \\operatorname{E}(X_1)^2 \\\\\n&= \\operatorname{E}(N) \\operatorname{Var}(X_1) + \\operatorname{Var}(N) \\operatorname{E}(X_1)^2,\n\\end{align}"
},
{
"math_id": 15,
"text": " \\lambda "
},
{
"math_id": 16,
"text": "X_1, X_2, \\dots"
},
{
"math_id": 17,
"text": " p "
},
{
"math_id": 18,
"text": "Y"
},
{
"math_id": 19,
"text": "\\lambda p"
},
{
"math_id": 20,
"text": "\\lambda"
},
{
"math_id": 21,
"text": "p"
},
{
"math_id": 22,
"text": "p(1-p)"
},
{
"math_id": 23,
"text": " \\operatorname{Var}(Y)= \\lambda p^2+\\lambda p (1-p) = \\lambda p "
}
]
| https://en.wikipedia.org/wiki?curid=74025273 |
74025383 | Haldane's sieve | Genetics concept
Haldane's sieve is a concept in population genetics named after the British geneticist J. B. S. Haldane. It refers to the fact that dominant advantageous alleles are more likely to fix in the population than recessive alleles. Haldane's sieve is particularly relevant in situations where the effects of natural selection are strong and the beneficial mutations have a significant impact on an organism's fitness.
According to Haldane's sieve, when a new advantageous mutation arises in a population, it initially occurs as a single copy (a de novo mutation), borne by a heterozygous individual.
In this way, genetic dominance is important for estimating the fate of new mutations, that is, whether they are going to fix or go extinct.
Dominant alleles are more readily exposed to directional selection from the moment they arise, while still rare, and thus they are more likely to fix as a result of a "hard sweep".
The term "sieve" in Haldane's sieve metaphorically represents this filtering effect of natural selection.
When adaptation stems from the species pool of standing genetic variation, a "soft sweep", the rationale does not apply, because the allele is no longer rare at the beginning of the sweep. In fact, recessive alleles are more likely to sweep than dominant alleles when the alleles were previously maintained in the population.
Limited dispersal and population structure can reduce the effects of Haldane's sieve. In subdivided populations, limited dispersal increases inbreeding and homozygosity, allowing recessive alleles to express their beneficial effects more frequently and thus accelerate their fixation. This effect is most pronounced when dispersal is strongly limited (e.g., formula_0).
Haldane's sieve has important implications for understanding the dynamics of adaptation and evolution in diploid populations. It highlights the role of natural selection in driving genetic changes in the presence of genetic dominance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "FST > 0.2"
}
]
| https://en.wikipedia.org/wiki?curid=74025383 |
7403 | Chemotaxis | Movement of an organism or entity in response to a chemical stimulus
Chemotaxis (from "chemo-" + "taxis") is the movement of an organism or entity in response to a chemical stimulus. Somatic cells, bacteria, and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria to find food (e.g., glucose) by swimming toward the highest concentration of food molecules, or to flee from poisons (e.g., phenol). In multicellular organisms, chemotaxis is critical to early development (e.g., movement of sperm towards the egg during fertilization) and development (e.g., migration of neurons or lymphocytes) as well as in normal function and health (e.g., migration of leukocytes during injury or infection). In addition, it has been recognized that mechanisms that allow chemotaxis in animals can be subverted during cancer metastasis, and the aberrant change of the overall property of these networks, which control chemotaxis, can lead to carcinogenesis. The aberrant chemotaxis of leukocytes and lymphocytes also contribute to inflammatory diseases such as atherosclerosis, asthma, and arthritis. Sub-cellular components, such as the polarity patch generated by mating yeast, may also display chemotactic behavior.
"Positive" chemotaxis occurs if the movement is toward a higher concentration of the chemical in question; "negative" chemotaxis if the movement is in the opposite direction. Chemically prompted kinesis (randomly directed or nondirectional) can be called chemokinesis.
History of chemotaxis research.
Although migration of cells was detected from the early days of the development of microscopy by Leeuwenhoek, a Caltech lecture regarding chemotaxis propounds that 'erudite description of chemotaxis was only first made by T. W. Engelmann (1881) and W. F. Pfeffer (1884) in bacteria, and H. S. Jennings (1906) in ciliates'. The Nobel Prize laureate I. Metchnikoff also contributed to the study of the field during 1882 to 1886, with investigations of the process as an initial step of phagocytosis. The significance of chemotaxis in biology and clinical pathology was widely accepted in the 1930s, and the most fundamental definitions underlying the phenomenon were drafted by this time. The most important aspects in quality control of chemotaxis assays were described by H. Harris in the 1950s. In the 1960s and 1970s, the revolution of modern cell biology and biochemistry provided a series of novel techniques that became available to investigate the migratory responder cells and subcellular fractions responsible for chemotactic activity. The availability of this technology led to the discovery of C5a, a major chemotactic factor involved in acute inflammation. The pioneering works of J. Adler modernized Pfeffer's capillary assay and represented a significant turning point in understanding the whole process of intracellular signal transduction of bacteria.
Bacterial chemotaxis—general characteristics.
Some bacteria, such as "E. coli", have several flagella per cell (4–10 typically). These can rotate in two ways: counterclockwise rotation aligns the flagella into a single rotating bundle, causing the bacterium to swim in a straight line, while clockwise rotation breaks the flagella bundle apart so that each flagellum points in a different direction, causing the bacterium to tumble in place.
The directions of rotation are given for an observer outside the cell looking down the flagella toward the cell.
Behavior.
The overall movement of a bacterium is the result of alternating tumble and swim phases, called run-and-tumble motion. As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium. Bacteria such as "E. coli" are unable to choose the direction in which they swim, and are unable to swim in a straight line for more than a few seconds due to rotational diffusion; in other words, bacteria "forget" the direction in which they are going. By repeatedly evaluating their course, and adjusting if they are moving in the wrong direction, bacteria can direct their random walk motion toward favorable locations.
In the presence of a chemical gradient bacteria will chemotax, or direct their overall motion based on the gradient. If the bacterium senses that it is moving in the correct direction (toward attractant/away from repellent), it will keep swimming in a straight line for a longer time before tumbling; however, if it is moving in the wrong direction, it will tumble sooner. Bacteria like "E. coli" use temporal sensing to decide whether their situation is improving or not, and in this way, find the location with the highest concentration of attractant, detecting even small differences in concentration.
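This strategy of lengthening runs when conditions improve and tumbling sooner when they worsen can be illustrated with a short simulation; the Python sketch below (a schematic illustration, not a quantitative model of any particular bacterium) uses only temporal comparisons of the local attractant concentration to modulate the tumbling probability:
import numpy as np
rng = np.random.default_rng(0)
def run_and_tumble(steps=5000, dt=0.1, speed=1.0):
    # attractant concentration is taken to increase along +x
    pos = np.zeros(2)
    angle = rng.uniform(0, 2 * np.pi)
    previous = pos[0]
    for _ in range(steps):
        pos += speed * dt * np.array([np.cos(angle), np.sin(angle)])   # swim a short segment
        improving = pos[0] > previous                                  # temporal comparison of concentration
        previous = pos[0]
        p_tumble = 0.05 if improving else 0.5                          # tumble less often when things improve
        if rng.random() < p_tumble:
            angle = rng.uniform(0, 2 * np.pi)                          # tumble: pick a new random direction
    return pos
mean_x = np.mean([run_and_tumble()[0] for _ in range(20)])
print(mean_x)   # clearly positive: net drift up the attractant gradient despite random reorientation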
This biased random walk is a result of simply choosing between two methods of random movement; namely tumbling and straight swimming. The helical nature of the individual flagellar filament is critical for this movement to occur. The protein structure that makes up the flagellar filament, flagellin, is conserved among all flagellated bacteria. Vertebrates seem to have taken advantage of this fact by possessing an immune receptor (TLR5) designed to recognize this conserved protein.
As in many instances in biology, there are bacteria that do not follow this rule. Many bacteria, such as "Vibrio", are monoflagellated and have a single flagellum at one pole of the cell. Their method of chemotaxis is different. Others possess a single flagellum that is kept inside the cell wall. These bacteria move by spinning the whole cell, which is shaped like a corkscrew.
Signal transduction.
Chemical gradients are sensed through multiple transmembrane receptors, called methyl-accepting chemotaxis proteins (MCPs), which vary in the molecules that they detect. Thousands of MCP receptors are known to be encoded across the bacterial kingdom. These receptors may bind attractants or repellents directly or indirectly through interaction with proteins of periplasmatic space. The signals from these receptors are transmitted across the plasma membrane into the cytosol, where "Che proteins" are activated. The Che proteins alter the tumbling frequency, and alter the receptors.
Flagellum regulation.
The proteins CheW and CheA bind to the receptor. The absence of receptor activation results in autophosphorylation in the histidine kinase, CheA, at a single highly conserved histidine residue. CheA, in turn, transfers phosphoryl groups to conserved aspartate residues in the response regulators CheB and CheY; CheA is a histidine kinase and it does not actively transfer the phosphoryl group, rather, the response regulator CheB takes the phosphoryl group from CheA. This mechanism of signal transduction is called a two-component system, and it is a common form of signal transduction in bacteria. CheY induces tumbling by interacting with the flagellar switch protein FliM, inducing a change from counter-clockwise to clockwise rotation of the flagellum. Change in the rotation state of a single flagellum can disrupt the entire flagella bundle and cause a tumble.
Receptor regulation.
CheB, when activated by CheA, acts as a methylesterase, removing methyl groups from glutamate residues on the cytosolic side of the receptor; it works antagonistically with CheR, a methyltransferase, which adds methyl residues to the same glutamate residues. If the level of an attractant remains high, the level of phosphorylation of CheA (and, therefore, CheY and CheB) will remain low, the cell will swim smoothly, and the level of methylation of the MCPs will increase (because CheB-P is not present to demethylate). The MCPs no longer respond to the attractant when they are fully methylated; therefore, even though the level of attractant might remain high, the level of CheA-P (and CheB-P) increases and the cell begins to tumble. The MCPs can be demethylated by CheB-P, and, when this happens, the receptors can once again respond to attractants. The situation is the opposite with regard to repellents: fully methylated MCPs respond best to repellents, while least-methylated MCPs respond worst to repellents. This regulation allows the bacterium to 'remember' chemical concentrations from the recent past, a few seconds, and compare them to those it is currently experiencing, thus 'know' whether it is traveling up or down a gradient.
In addition to the basic sensitivity that bacteria have to chemical gradients, other mechanisms are involved in increasing the absolute value of the sensitivity on a given background. Well-established examples are the ultra-sensitive response of the motor to the CheY-P signal, and the clustering of chemoreceptors.
Chemoattractants and chemorepellents.
Chemoattractants and chemorepellents are inorganic or organic substances possessing chemotaxis-inducer effect in motile cells. These chemotactic ligands create chemical concentration gradients that organisms, prokaryotic and eukaryotic, move toward or away from, respectively.
Effects of chemoattractants are elicited via chemoreceptors such as methyl-accepting chemotaxis proteins (MCP). MCPs in E. coli include Tar, Tsr, Trg and Tap. Chemoattractants to Trg include ribose and galactose, with phenol as a chemorepellent. Tap and Tsr recognize dipeptides and serine as chemoattractants, respectively.
Chemoattractants or chemorepellents bind MCPs at its extracellular domain; an intracellular signaling domain relays the changes in concentration of these chemotactic ligands to downstream proteins like that of CheA which then relays this signal to flagellar motors via phosphorylated CheY (CheY-P). CheY-P can then control flagellar rotation influencing the direction of cell motility.
For "E.coli", "S. meliloti", and "R. spheroides," the binding of chemoattractants to MCPs inhibit CheA and therefore CheY-P activity, resulting in smooth runs, but for "B. substilis", CheA activity increases. Methylation events in "E.coli" cause MCPs to have lower affinity to chemoattractants which causes increased activity of CheA and CheY-P resulting in tumbles. In this way cells are able to adapt to the immediate chemoattractant concentration and detect further changes to modulate cell motility.
Chemoattractants in eukaryotes are well characterized for immune cells. Formyl peptides, such as fMLF, attract leukocytes such as neutrophils and macrophages, causing movement toward infection sites. Non-acylated methioninyl peptides do not act as chemoattractants to neutrophils and macrophages. Leukocytes also move toward chemoattractants C5a, a complement component, and pathogen-specific ligands on bacteria.
Mechanisms concerning chemorepellents are less well understood than those of chemoattractants. Although chemorepellents work to confer an avoidance response in organisms, "Tetrahymena thermophila" adapts to a chemorepellent, Netrin-1 peptide, within 10 minutes of exposure; however, exposure to chemorepellents such as GTP, PACAP-38, and nociceptin shows no such adaptation. GTP and ATP are chemorepellents in micro-molar concentrations to both "Tetrahymena" and "Paramecium". These organisms avoid these molecules by producing avoiding reactions to re-orient themselves away from the gradient.
Eukaryotic chemotaxis.
The mechanism of chemotaxis that eukaryotic cells employ is quite different from that in the bacteria "E. coli"; however, sensing of chemical gradients is still a crucial step in the process. Due to their small size and other biophysical constraints, "E. coli" cannot directly detect a concentration gradient. Instead, they employ temporal gradient sensing, where they move over larger distances several times their own width and measure the rate at which perceived chemical concentration changes.
Eukaryotic cells are much larger than prokaryotes and have receptors embedded uniformly throughout the cell membrane. Eukaryotic chemotaxis involves detecting a concentration gradient spatially by comparing the asymmetric activation of these receptors at the different ends of the cell. Activation of these receptors results in migration towards chemoattractants, or away from chemorepellants. In mating yeast, which are non-motile, patches of polarity proteins on the cell cortex can relocate in a chemotactic fashion up pheromone gradients.
It has also been shown that both prokaryotic and eukaryotic cells are capable of chemotactic memory. In prokaryotes, this mechanism involves the methylation of receptors called methyl-accepting chemotaxis proteins (MCPs). This results in their desensitization and allows prokaryotes to "remember" and adapt to a chemical gradient. In contrast, chemotactic memory in eukaryotes can be explained by the Local Excitation Global Inhibition (LEGI) model. LEGI involves the balance between a fast excitation and delayed inhibition which controls downstream signaling such as Ras activation and PIP3 production.
Levels of receptors, intracellular signalling pathways and the effector mechanisms all represent diverse, eukaryotic-type components. In eukaryotic unicellular cells, amoeboid movement and cilium or the eukaryotic flagellum are the main effectors (e.g., Amoeba or Tetrahymena). Some eukaryotic cells of higher vertebrate origin, such as immune cells also move to where they need to be. Besides immune competent cells (granulocyte, monocyte, lymphocyte) a large group of cells—considered previously to be fixed into tissues—are also motile in special physiological (e.g., mast cell, fibroblast, endothelial cells) or pathological conditions (e.g., metastases). Chemotaxis has high significance in the early phases of embryogenesis as development of germ layers is guided by gradients of signal molecules.
Motility.
Unlike motility in bacterial chemotaxis, the mechanism by which eukaryotic cells physically move is unclear. There appear to be mechanisms by which an external chemotactic gradient is sensed and turned into intracellular Ras and PIP3 gradients, which activate a signaling pathway culminating in the polymerisation of actin filaments. The growing distal end of actin filaments develops connections with the internal surface of the plasma membrane via different sets of peptides, resulting in the formation of anterior pseudopods and posterior uropods.
Cilia of eukaryotic cells can also produce chemotaxis; in this case, it is mainly a Ca2+-dependent induction of the microtubular system of the basal body and the beat of the 9 + 2 microtubules within cilia. The orchestrated beating of hundreds of cilia is synchronized by a submembranous system built between basal bodies.
The details of the signaling pathways are still not totally clear.
Chemotaxis-related migratory responses.
Chemotaxis refers to the directional migration of cells in response to chemical gradients; several variations of chemical-induced migration exist as listed below.
Receptors.
In general, eukaryotic cells sense the presence of chemotactic stimuli through the use of 7-transmembrane (or serpentine) heterotrimeric G-protein-coupled receptors, a class representing a significant portion of the genome. Some members of this gene superfamily are used in eyesight (rhodopsins) as well as in olfaction (smelling). The main classes of chemotaxis receptors are triggered by:
However, induction of a wide set of membrane receptors (e.g., cyclic nucleotides, amino acids, insulin, vasoactive peptides) also elicits migration of the cell.
Chemotactic selection.
While some chemotaxis receptors are expressed in the surface membrane with long-term characteristics, as they are determined genetically, others have short-term dynamics, as they are assembled "ad hoc" in the presence of the ligand. The diverse features of the chemotaxis receptors and ligands allow for the possibility of selecting chemotactic responder cells with a simple chemotaxis assay. By chemotactic selection, we can determine whether a still-uncharacterized molecule acts via the long- or the short-term receptor pathway. The term "chemotactic selection" is also used to designate a technique that separates eukaryotic or prokaryotic cells according to their chemotactic responsiveness to selector ligands.
Chemotactic ligands.
The number of molecules capable of eliciting chemotactic responses is relatively high, and we can distinguish primary and secondary chemotactic molecules. The main groups of the primary ligands are as follows:
Chemotactic range fitting.
Chemotactic responses elicited by ligand-receptor interactions vary with the concentration of the ligand. Investigations of ligand families (e.g. amino acids or oligopeptides) demonstrate that chemoattractant activity occurs over a wide concentration range, while chemorepellent activities have narrow ranges.
Clinical significance.
A changed migratory potential of cells has relatively high importance in the development of several clinical symptoms and syndromes.
Altered chemotactic activity of extracellular (e.g., Escherichia coli) or intracellular (e.g., Listeria monocytogenes) pathogens itself represents a significant clinical target. Modification of the endogenous chemotactic ability of these microorganisms by pharmaceutical agents can reduce or inhibit the rate of infections or the spread of infectious diseases.
Apart from infections, there are some other diseases wherein impaired chemotaxis is the primary etiological factor, as in Chédiak–Higashi syndrome, where giant intracellular vesicles inhibit normal migration of cells.
Mathematical models.
Several mathematical models of chemotaxis were developed depending on the type of
Although interactions of the factors listed above make the behavior of the solutions of mathematical models of chemotaxis rather complex, it is possible to describe the basic phenomenon of chemotaxis-driven motion in a straightforward way. Indeed, let us denote by formula_0 the spatially non-uniform concentration of the chemo-attractant and by formula_1 its gradient. Then the chemotactic cellular flow (also called current) formula_2 that is generated by the chemotaxis is linked to this gradient by the law:
formula_3
where formula_4 is the spatial density of the cells and formula_5 is the so-called 'Chemotactic coefficient' - formula_6 is often not constant, but a decreasing function of the chemo-attractant. For some quantity formula_7 that is subject to total flux formula_8 and generation/destruction term formula_9, it is possible to formulate a continuity equation:
formula_10
where formula_11 is the divergence. This general equation applies to both the cell density and the chemo-attractant. Therefore, incorporating a diffusion flux into the total flux term, the interactions between these quantities are governed by a set of coupled reaction-diffusion partial differential equations describing the change in formula_12 and formula_13:
formula_14
where formula_15 describes the growth in cell density, formula_16 is the kinetics/source term for the chemo-attractant, and the diffusion coefficients for cell density and the chemo-attractant are respectively formula_17 and formula_18.
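As a concrete illustration of these equations, the following rough one-dimensional finite-difference sketch (Python) integrates the coupled system. The choices f(C) = 0, g(phi, C) = C - phi, a constant chemotactic coefficient, periodic boundaries and all numerical values are assumptions made only for demonstration, not values taken from the literature.
```python
import numpy as np

# 1-D explicit finite-difference sketch of the chemotaxis (Keller-Segel type) system:
#   dC/dt   = D_C * C_xx - d/dx( C * chi * phi_x )     (f(C) = 0 assumed)
#   dphi/dt = D_phi * phi_xx + C - phi                  (g = C - phi assumed)
nx, length = 200, 10.0
dx = length / nx
dt = 1e-4
D_C, D_phi, chi = 0.1, 1.0, 2.0          # assumed coefficients

C = 1.0 + 0.1 * np.random.rand(nx)       # nearly uniform cell density
phi = np.zeros(nx)                       # no chemo-attractant initially

def lap(u):
    """Discrete Laplacian, periodic boundaries via np.roll (adequate for a sketch)."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

def grad(u):
    """Centred first derivative, periodic boundaries via np.roll."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(20000):
    flux = C * chi * grad(phi)                  # chemotactic flux J = C * chi * grad(phi)
    dC = D_C * lap(C) - grad(flux)              # continuity equation for the cell density
    dphi = D_phi * lap(phi) + C - phi           # production by cells, linear decay
    C += dt * dC
    phi += dt * dphi

print("total cell mass (conserved up to rounding):", C.sum() * dx)
print("max/min cell density:", C.max(), C.min())
```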
Spatial ecology of soil microorganisms is a function of their chemotactic sensitivities towards substrate and fellow organisms. The chemotactic behavior of bacteria has been shown to lead to non-trivial population patterns even in the absence of environmental heterogeneities. The presence of structural pore-scale heterogeneities has an extra impact on the emerging bacterial patterns.
Measurement of chemotaxis.
A wide range of techniques is available to evaluate chemotactic activity of cells or the chemoattractant and chemorepellent character of ligands.
The basic requirements of the measurement are as follows:
Despite the fact that an ideal chemotaxis assay is still not available, there are several protocols and pieces of equipment that offer good correspondence with the conditions described above. The most commonly used are summarised in the table below:
Artificial chemotactic systems.
"Chemical robots" that use artificial chemotaxis to navigate autonomously have been designed. Applications include targeted delivery of drugs in the body. More recently, enzyme molecules have also shown positive chemotactic behavior in the gradient of their substrates. The thermodynamically-favorable binding of enzymes to their specific substrates is recognized as the origin of enzymatic chemotaxis. Additionally, enzymes in cascades have also shown substrate-driven chemotactic aggregation.
Apart from active enzymes, non-reacting molecules also show chemotactic behavior. This has been demonstrated by using dye molecules that move directionally in gradients of polymer solution through favorable hydrophobic interactions.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\varphi"
},
{
"math_id": 1,
"text": "\\nabla \\varphi"
},
{
"math_id": 2,
"text": " {\\bf J} "
},
{
"math_id": 3,
"text": " {\\bf J} = C \\chi(\\varphi) \\nabla\\varphi "
},
{
"math_id": 4,
"text": " C "
},
{
"math_id": 5,
"text": " \\chi "
},
{
"math_id": 6,
"text": "\\chi"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "{\\bf J}"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": " {\\partial \\rho\\over{\\partial t}} + \\nabla \\cdot {\\bf J} = S "
},
{
"math_id": 11,
"text": "\\nabla \\cdot ()"
},
{
"math_id": 12,
"text": "C"
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": " \\begin{aligned}\n{\\partial C\\over{\\partial t}} &= f(C) + \\nabla\\cdot \\left[D_{C}\\nabla C - C\\chi(\\varphi)\\nabla\\varphi \\right ] \\\\\n{\\partial \\varphi\\over{\\partial t}} &= g(\\varphi,C) + \\nabla \\cdot (D_{\\varphi}\\nabla\\varphi)\n\\end{aligned} "
},
{
"math_id": 15,
"text": "f(C)"
},
{
"math_id": 16,
"text": "g(\\varphi,C)"
},
{
"math_id": 17,
"text": "D_{C}"
},
{
"math_id": 18,
"text": "D_{\\varphi}"
}
]
| https://en.wikipedia.org/wiki?curid=7403 |
74034129 | BQV | BQV can refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title BQV.
{
"math_id": 0,
"text": "Ev = Bqv"
}
]
| https://en.wikipedia.org/wiki?curid=74034129 |
74035291 | Markov operator | In probability theory and ergodic theory, a Markov operator is an operator on a certain function space that conserves the mass (the so-called Markov property). If the underlying measurable space is topologically sufficiently rich enough, then the Markov operator admits a kernel representation. Markov operators can be linear or non-linear. Closely related to Markov operators is the Markov semigroup.
The definition of Markov operators is not entirely consistent in the literature. Markov operators are named after the Russian mathematician Andrey Markov.
Definitions.
Markov operator.
Let formula_0 be a measurable space and formula_1 a set of real, measurable functions formula_2.
A linear operator formula_3 on formula_1 is a Markov operator if the following is true
Alternative definitions.
Some authors define the operators on the Lp spaces as formula_9 and replace the first condition (that the operator acts on bounded, measurable functions) with the property
formula_10
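On a finite state space a Markov operator is simply given by a row-stochastic matrix, which makes the defining conditions easy to check numerically. The following Python sketch uses an arbitrary illustrative kernel, not tied to any particular result in this article:
```python
import numpy as np

# Finite-state illustration: E = {0, 1, 2}, kernel p(x, dy) given by a row-stochastic matrix.
# (Pf)(x) = sum_y p(x, y) f(y); P maps the constant function 1 to itself and
# preserves non-negativity, the two defining properties of a Markov operator.
p = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def P(f):
    """Apply the Markov operator with kernel p to a function f on E."""
    return p @ f

ones = np.ones(3)
f = np.array([2.0, -1.0, 5.0])          # an arbitrary bounded function on E
g = np.array([0.0, 3.0, 1.0])           # a non-negative function on E

print(P(ones))                           # -> [1. 1. 1.]  (P applied to 1 gives 1)
print(np.all(P(g) >= 0))                 # -> True        (positivity preserved)

# An invariant measure mu satisfies mu P = mu; here it is the left eigenvector
# of p for eigenvalue 1, normalised to total mass 1.
w, v = np.linalg.eig(p.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu = mu / mu.sum()
print(mu @ P(f), mu @ f)                 # equal: integral of Pf d(mu) = integral of f d(mu)
```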
Markov semigroup.
Let formula_11 be a family of Markov operators defined on the set of bounded, measurable functions on formula_0. Then formula_12 is a Markov semigroup when the following is true
formula_19.
Dual semigroup.
Each Markov semigroup formula_11 induces a "dual semigroup" formula_20 through
formula_21
If formula_16 is invariant under formula_12 then formula_22.
Infinitesimal generator of the semigroup.
Let formula_23 be a family of bounded, linear Markov operators on the Hilbert space formula_24, where formula_16 is an invariant measure. The infinitesimal generator formula_25 of the Markov semigroup formula_11 is defined as
formula_26
and the domain formula_27 is the formula_24-space of all such functions where this limit exists and is in formula_24 again.
formula_28
The carré du champ operator formula_29 measures how far formula_25 is from being a derivation.
Kernel representation of a Markov operator.
A Markov operator formula_30 has a kernel representation
formula_31
with respect to some probability kernel formula_32, if the underlying measurable space formula_0 has the following sufficient topological properties:
If one now defines a σ-finite measure on formula_0 then it is possible to prove that every Markov operator formula_3 admits such a kernel representation with respect to formula_36. | [
{
"math_id": 0,
"text": "(E,\\mathcal{F})"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "f:(E,\\mathcal{F})\\to (\\mathbb{R},\\mathcal{B}(\\mathbb{R}))"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "\\mathbf{1}"
},
{
"math_id": 5,
"text": "x\\mapsto 1"
},
{
"math_id": 6,
"text": "P(\\mathbf{1})=\\mathbf{1}"
},
{
"math_id": 7,
"text": "f\\geq 0"
},
{
"math_id": 8,
"text": "Pf\\geq 0"
},
{
"math_id": 9,
"text": "P:L^p(X)\\to L^p(Y)"
},
{
"math_id": 10,
"text": "\\|Pf\\|_Y = \\|f\\|_X,\\quad \\forall f\\in L^p(X)"
},
{
"math_id": 11,
"text": "\\mathcal{P}=\\{P_t\\}_{t\\geq 0}"
},
{
"math_id": 12,
"text": "\\mathcal{P}"
},
{
"math_id": 13,
"text": "P_0=\\operatorname{Id}"
},
{
"math_id": 14,
"text": "P_{t+s}=P_t\\circ P_s"
},
{
"math_id": 15,
"text": "t,s\\geq 0"
},
{
"math_id": 16,
"text": "\\mu"
},
{
"math_id": 17,
"text": "f:E\\to \\mathbb{R}"
},
{
"math_id": 18,
"text": "t\\geq 0 "
},
{
"math_id": 19,
"text": "\\int_E P_tf\\mathrm{d}\\mu =\\int_E f\\mathrm{d}\\mu"
},
{
"math_id": 20,
"text": "(P^*_t)_{t\\geq 0}"
},
{
"math_id": 21,
"text": "\\int_EP_tf\\mathrm{d\\mu} =\\int_E f\\mathrm{d}\\left(P^*_t\\mu\\right)."
},
{
"math_id": 22,
"text": "P^*_t\\mu=\\mu"
},
{
"math_id": 23,
"text": "\\{P_t\\}_{t\\geq 0}"
},
{
"math_id": 24,
"text": "L^2(\\mu)"
},
{
"math_id": 25,
"text": "L"
},
{
"math_id": 26,
"text": "Lf=\\lim\\limits_{t\\downarrow 0}\\frac{P_t f-f}{t},"
},
{
"math_id": 27,
"text": "D(L)"
},
{
"math_id": 28,
"text": "D(L)=\\left\\{f\\in L^2(\\mu): \\lim\\limits_{t\\downarrow 0}\\frac{P_t f-f}{t}\\text{ exists and is in } L^2(\\mu)\\right\\}."
},
{
"math_id": 29,
"text": "\\Gamma"
},
{
"math_id": 30,
"text": "P_t"
},
{
"math_id": 31,
"text": "(P_tf)(x)=\\int_E f(y)p_t(x,\\mathrm{d}y),\\quad x\\in E,"
},
{
"math_id": 32,
"text": "p_t(x,A)"
},
{
"math_id": 33,
"text": "\\mu:\\mathcal{F}\\times \\mathcal{F}\\to [0,1]"
},
{
"math_id": 34,
"text": "\\mu(\\mathrm{d}x,\\mathrm{d}y)=k(x,\\mathrm{d}y)\\mu_1(\\mathrm{d}x)"
},
{
"math_id": 35,
"text": "\\mu_1"
},
{
"math_id": 36,
"text": "k(x,\\mathrm{d}y)"
},
{
"math_id": 37,
"text": "\\mathcal{F}"
}
]
| https://en.wikipedia.org/wiki?curid=74035291 |
74042193 | Caesium selenide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Caesium selenide is an inorganic compound of caesium and selenium. It is a selenide with the chemical formula Cs2Se. It can be prepared by reacting caesium and selenium. It has an inverse fluorite (anti-fluorite) structure, with space group formula_0. There are 4 formula units per unit cell, and the other alkali metal selenides are similar.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Fm\\bar{3}m"
}
]
| https://en.wikipedia.org/wiki?curid=74042193 |
74042388 | Samarium(III) phosphate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Samarium(III) phosphate is an inorganic compound, with the chemical formula of SmPO4. It is one of the phosphates of samarium.
Preparation.
Samarium(III) phosphate can be obtained by reacting sodium metaphosphate with any soluble samarium(III) salt:
formula_0
Samarium(III) phosphate can also be obtained by reacting phosphoric acid and samarium(III) chloride.
Properties.
Samarium(III) phosphate reacts with sodium fluoride at 750 °C to form
Na2SmF2PO4. Samarium(III) phosphate forms crystals of the monoclinic crystal system, with space group "P"21/n, and lattice parameters "a" = 0.6669 nm, "b" = 0.6868 nm, "c" = 0.6351 nm, β = 103.92 °, Z = 4.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ SmCl_3 + NaPO_3 + H_2O \\ \\xrightarrow{}\\ 2 SmPO_4\\downarrow + NaCl + 2 HCl }"
}
]
| https://en.wikipedia.org/wiki?curid=74042388 |
7404467 | Ambient isotopy | Concept in toplogy
In the mathematical subject of topology, an ambient isotopy, also called an "h-isotopy", is a kind of continuous distortion of an ambient space, for example a manifold, taking a submanifold to another submanifold. For example in knot theory, one considers two knots the same if one can distort one knot into the other without breaking it. Such a distortion is an example of an ambient isotopy. More precisely, let formula_0 and formula_1 be manifolds and formula_2 and formula_3 be embeddings of formula_0 in formula_1. A continuous map
formula_4
is defined to be an ambient isotopy taking formula_2 to formula_3 if formula_5 is the identity map, each map formula_6 is a homeomorphism from formula_1 to itself, and formula_7. This implies that the orientation must be preserved by ambient isotopies. For example, two knots that are mirror images of each other are, in general, not equivalent. | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "F:M \\times [0,1] \\rightarrow M "
},
{
"math_id": 5,
"text": "F_0"
},
{
"math_id": 6,
"text": "F_t"
},
{
"math_id": 7,
"text": "F_1 \\circ g = h"
}
]
| https://en.wikipedia.org/wiki?curid=7404467 |
7404755 | Legendrian knot | Knot theory
In mathematics, a Legendrian knot often refers to a smooth embedding of the circle into formula_0, which is tangent to the standard contact structure on formula_0. It is the lowest-dimensional case of a Legendrian submanifold, which is an embedding of a k-dimensional manifold into a (2k+1)-dimensional contact manifold that is always tangent to the contact hyperplane.
Two Legendrian knots are equivalent if they are isotopic through a family of Legendrian knots. There can be inequivalent Legendrian knots that are isotopic as topological knots. Many inequivalent Legendrian knots can be distinguished by considering their Thurston-Bennequin invariants and rotation number, which are together known as the "classical invariants" of Legendrian knots. More sophisticated invariants have been constructed, including one constructed combinatorially by Chekanov and using holomorphic discs by Eliashberg. This Chekanov-Eliashberg invariant yields an invariant for loops of Legendrian knots by considering the monodromy of the loops. This has yielded noncontractible loops of Legendrian knots which are contractible in the space of all knots.
Any Legendrian knot may be formula_1 perturbed to a transverse knot (a knot transverse to a contact structure) by pushing off in a direction transverse to the contact planes. The set of isomorphism classes of Legendrian knots modulo negative Legendrian stabilizations is in bijection with the set of transverse knots.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb R^3"
},
{
"math_id": 1,
"text": "C^0"
}
]
| https://en.wikipedia.org/wiki?curid=7404755 |
74059230 | Kramkov's optional decomposition theorem | In probability theory, Kramkov's optional decomposition theorem (or just "optional decomposition theorem") is a mathematical theorem on the decomposition of a positive supermartingale formula_0 with respect to a family of equivalent martingale measures into the form
formula_1
where formula_2 is an adapted (or optional) process.
The theorem is of particular interest for financial mathematics, where the interpretation is: formula_0 is the wealth process of a trader, formula_3 is the gain/loss and formula_2 the consumption process.
The theorem was proven in 1994 by Russian mathematician Dmitry Kramkov. The decomposition is analogous to the Doob-Meyer decomposition, but unlike there, the process formula_2 is no longer predictable but only adapted (which, under the condition of the statement, is the same as dealing with an optional process).
Kramkov's optional decomposition theorem.
Let formula_4 be a filtered probability space with the filtration satisfying the usual conditions.
A formula_5-dimensional process formula_6 is "locally bounded" if there exists a sequence of stopping times formula_7 such that formula_8 almost surely as formula_9 and formula_10 for formula_11 and formula_12.
Statement.
Let formula_6 be a formula_5-dimensional càdlàg (or RCLL) process that is locally bounded. Let formula_13 be the space of equivalent local martingale measures for formula_14, and without loss of generality let us assume formula_15.
Let formula_0 be a positive stochastic process. Then formula_0 is a formula_16-supermartingale for each formula_17 if and only if there exist an formula_14-integrable and predictable process formula_18 and an adapted increasing process formula_2 such that
formula_19
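In a one-period binomial market the decomposition can be written down explicitly, which gives some intuition for the statement. The Python sketch below uses arbitrary illustrative numbers and is not part of Kramkov's proof; it takes a price process formula_14 that is a martingale under the pricing measure, a supermartingale formula_0, and computes the integrand and the non-negative consumption.
```python
# One-period binomial sketch of the optional decomposition (illustrative numbers only).
# X is a Q-martingale: X_0 = 100, X_1 = 110 (prob q) or 95 (prob 1 - q) under Q.
X0, Xu, Xd = 100.0, 110.0, 95.0
q = (X0 - Xd) / (Xu - Xd)                 # q = 1/3 makes E_Q[X_1] = X_0

# A Q-supermartingale V: E_Q[V_1] <= V_0.
V0, Vu, Vd = 10.0, 12.0, 6.0
assert q * Vu + (1 - q) * Vd <= V0        # 8.0 <= 10.0

# Delta hedge H and resulting consumption C_1 = V_0 + H * (X_1 - X_0) - V_1.
H = (Vu - Vd) / (Xu - Xd)                 # 0.4
Cu = V0 + H * (Xu - X0) - Vu              # consumption in the up state
Cd = V0 + H * (Xd - X0) - Vd              # consumption in the down state

print(H, Cu, Cd)                          # both equal V_0 - E_Q[V_1] = 2 >= 0
assert Cu >= 0 and Cd >= 0
assert abs(Vu - (V0 + H * (Xu - X0) - Cu)) < 1e-12
assert abs(Vd - (V0 + H * (Xd - X0) - Cd)) < 1e-12
```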
Commentary.
The statement is still true under change of measure to an equivalent measure. | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "V_t=V_0+(H\\cdot X)_t-C_t,\\quad t\\geq 0,"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "(H\\cdot X)"
},
{
"math_id": 4,
"text": "(\\Omega,\\mathcal{A},\\{\\mathcal{F}_t\\},P)"
},
{
"math_id": 5,
"text": "d"
},
{
"math_id": 6,
"text": "X=(X^1,\\dots,X^d)"
},
{
"math_id": 7,
"text": "(\\tau_n)_{n\\geq 1}"
},
{
"math_id": 8,
"text": "\\tau_n\\to \\infty"
},
{
"math_id": 9,
"text": "n\\to \\infty"
},
{
"math_id": 10,
"text": "|X_t^i|\\leq n"
},
{
"math_id": 11,
"text": "1\\leq i\\leq d"
},
{
"math_id": 12,
"text": "t \\leq \\tau_n"
},
{
"math_id": 13,
"text": "M(X)\\neq \\emptyset"
},
{
"math_id": 14,
"text": "X"
},
{
"math_id": 15,
"text": "P\\in M(X)"
},
{
"math_id": 16,
"text": "Q"
},
{
"math_id": 17,
"text": "Q\\in M(X)"
},
{
"math_id": 18,
"text": "H"
},
{
"math_id": 19,
"text": "V_t=V_0 + (H\\cdot X)_t-C_t,\\quad t\\geq 0."
}
]
| https://en.wikipedia.org/wiki?curid=74059230 |
740680 | Radar cross section | Strength of an object's radar echo
Radar cross-section (RCS), denoted σ, also called radar signature, is a measure of how detectable an object is by radar. A larger RCS indicates that an object is more easily detected.
An object reflects a limited amount of radar energy back to the source. The factors that influence this include:
While important in detecting targets, strength of emitter and distance are not factors that affect the calculation of an RCS because RCS is a property of the target's reflectivity.
Radar cross-section is used to detect airplanes in a wide variation of ranges. For example, a stealth aircraft (which is designed to have low detectability) will have design features that give it a low RCS (such as absorbent paint, flat surfaces, surfaces specifically angled to reflect the signal somewhere other than towards the source), as opposed to a passenger airliner that will have a high RCS (bare metal, rounded surfaces effectively guaranteed to reflect some signal back to the source, many protrusions like the engines, antennas, etc.). RCS is integral to the development of radar stealth technology, particularly in applications involving aircraft and ballistic missiles. RCS data for current military aircraft is mostly highly classified.
In some cases, it is of interest to look at an area on the ground that includes many objects. In those situations, it is useful to use a related quantity called the normalized radar cross-section (NRCS), also known as differential scattering coefficient or radar backscatter coefficient, denoted σ0 or σ0 ("sigma nought"), which is the average radar cross-section of a set of objects per unit area:
formula_0
where:
Formulation.
Informally, the RCS of an object is the cross-sectional area of a perfectly reflecting sphere that would produce the same strength reflection as would the object in question. (Bigger sizes of this imaginary sphere would produce stronger reflections.) Thus, RCS is an abstraction: the radar cross-sectional area of an object does not necessarily bear a direct relationship with the physical cross-sectional area of that object but depends upon other factors.
Somewhat less informally, the RCS of a radar target is an effective area that intercepts the transmitted radar power and then scatters that power isotropically back to the radar receiver.
More precisely, the RCS of a radar target is the hypothetical area required to intercept the transmitted power density at the target such that if the total intercepted power were re-radiated isotropically, the power density actually observed at the receiver is produced. This statement can be understood by examining the monostatic (radar transmitter and receiver co-located) radar equation one term at a time:
formula_1
where
The
formula_8
term in the radar equation represents the power density (watts per meter squared) that the radar transmitter produces at the target. This power density is intercepted by the target with radar cross-section formula_9, which has units of area (meters squared). Thus, the product
formula_10
has the dimensions of power (watts), and represents a hypothetical total power intercepted by the radar target. The second formula_11 term represents isotropic spreading of this intercepted power from the target back to the radar receiver. Thus, the product
formula_12
represents the reflected power density at the radar receiver (again watts per meter squared). The receiver antenna then collects this power density with effective area formula_13, yielding the power received by the radar (watts) as given by the radar equation above.
The scattering of incident radar power by a radar target is never isotropic (even for a spherical target), and the RCS is a hypothetical area. In this light, RCS can be viewed as a correction factor that makes the radar equation "work out right" for the experimentally observed ratio of formula_14. However, RCS is a property of the target alone and may be measured or calculated. Thus, RCS allows the performance of a radar system with a given target to be analysed independent of the radar and engagement parameters. In general, RCS is a function of the orientation of the radar and target. A target's RCS depends on its size, "reflectivity" of its surface, and the "directivity" of the radar return caused by the target's geometric shape.
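A back-of-the-envelope numerical sketch of the monostatic radar equation in Python; the transmit power, gain, RCS, aperture and range are arbitrary illustrative values, not parameters of any real radar:
```python
import math

# Monostatic radar equation: P_r = P_t*G_t/(4*pi*r^2) * sigma * 1/(4*pi*r^2) * A_eff
P_t   = 1e6        # transmitted power, W (illustrative)
G_t   = 3000       # transmit antenna gain, dimensionless (illustrative)
sigma = 1.0        # radar cross-section, m^2 (a 1 m^2 target)
A_eff = 2.0        # effective receive aperture, m^2 (illustrative)
r     = 50e3       # range, m

def received_power(r):
    incident_density  = P_t * G_t / (4 * math.pi * r**2)    # W/m^2 at the target
    intercepted_power = incident_density * sigma             # W re-radiated isotropically
    return intercepted_power / (4 * math.pi * r**2) * A_eff  # W collected by the receiver

print(received_power(r))                           # ~6e-12 W for these numbers
# Received power falls as 1/r^4, so detection range scales as sigma**(1/4):
print(received_power(2 * r) / received_power(r))   # -> 1/16
```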
Factors.
Size.
As a rule, the larger an object, the stronger its radar reflection and thus the greater its RCS. Also, radar of one band may not even detect objects of a certain size. For example, a 10 cm (S-band) radar can detect rain drops but not clouds, whose droplets are too small.
Material.
Materials such as metal are strongly radar reflective and tend to produce strong signals. Wood and cloth (of which portions of airplanes and balloons used to be commonly made) or plastic and fibreglass are less reflective or indeed transparent to radar, making them suitable for radomes. Even a very thin layer of metal can make an object strongly radar reflective. Chaff is often made from metallised plastic or glass (in a similar manner to metallised foils on food stuffs) with microscopically thin layers of metal.
Also, some devices are designed to be radar active, such as radar antennas, and this will increase RCS.
Radar absorbent paint.
The SR-71 Blackbird and other aircraft were painted with a special "iron ball paint" that consisted of small metallic-coated balls. Radar energy received is converted to heat rather than being reflected.
Shape, directivity and orientation.
The surfaces of the F-117A are designed to be flat and very angled. This has the effect that radar will be incident at a large angle (to the normal ray) that will then bounce off at a similarly high reflected angle; it is forward-scattered. The edges are sharp to prevent rounded surfaces which are normal at some point to the radar source. As any ray incident along the normal will reflect back along the normal, rounded surfaces make for a strong reflected signal.
From the side, a fighter aircraft will present a much larger area than the same aircraft viewed from the front. All other factors being equal, the aircraft will have a stronger signal from the side than from the front; hence the orientation of the target relative to the radar station is important.
Smooth surfaces.
The relief of a surface could contain indentations that act as corner reflectors which would increase RCS from many orientations. This could arise from open bomb-bays, engine intakes, ordnance pylons, joints between constructed sections, etc. Also, it can be impractical to coat these surfaces with radar-absorbent materials.
Measurement.
The size of a target's image on radar is measured by the radar cross section or RCS, often represented by the symbol σ and expressed in square meters. This does not equal geometric area. A perfectly conducting sphere of projected cross sectional area 1 m2 (i.e. a diameter of 1.13 m) will have an RCS of 1 m2. For radar wavelengths much less than the diameter of the sphere, RCS is independent of frequency. Conversely, a square flat plate of area 1 m2 will have an RCS of σ = 4π "A"2 / "λ"2 (where "A"=area, "λ"=wavelength), or 139.62 m2 at 1 GHz if the radar is perpendicular to the flat surface. At off-normal incident angles, energy is reflected away from the receiver, reducing the RCS. Modern stealth aircraft are said to have an RCS comparable with small birds or large insects, though this varies widely depending on aircraft and radar.
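The sphere and flat-plate figures quoted above are easy to reproduce; a short Python check (the quoted 139.62 m2 corresponds to taking λ = 0.3 m exactly):
```python
import math

c = 299_792_458.0                 # speed of light, m/s
f = 1e9                           # 1 GHz
lam = c / f                       # wavelength, roughly 0.3 m

A = 1.0                           # flat plate area, m^2
rcs_plate = 4 * math.pi * A**2 / lam**2
print(round(rcs_plate, 2))        # ~139.8 m^2 at normal incidence (139.6 with lam = 0.3 m)

# A perfectly conducting sphere of projected area 1 m^2 has RCS 1 m^2; its diameter:
d = 2 * math.sqrt(1.0 / math.pi)
print(round(d, 2))                # ~1.13 m
```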
If the RCS was directly related to the target's cross-sectional area, the only way to reduce it would be to make the physical profile smaller. Rather, by reflecting much of the radiation away or by absorbing it, the target achieves a smaller radar cross section.
Measurement of a target's RCS is performed at a radar reflectivity range or scattering range. The first type of range is an outdoor range where the target is positioned on a specially shaped low-RCS pylon some distance down-range from the transmitters. Such a range eliminates the need for placing radar absorbers behind the target; however, multi-path interactions with the ground must be mitigated.
An anechoic chamber is also commonly used. In such a room, the target is placed on a rotating pillar in the center, and the walls, floors and ceiling are covered by stacks of radar absorbing material. These absorbers prevent corruption of the measurement due to reflections. A compact range is an anechoic chamber with a reflector to simulate far field conditions.
Typical values for a centimeter wave radar are:
Calculation.
Quantitatively, RCS is calculated in three-dimensions as
formula_15
Where formula_5 is the RCS, formula_16 is the incident power density measured at the target, and formula_17 is the scattered power density seen at a distance formula_4 away from the target.
In electromagnetic analysis this is also commonly written as
formula_18
where formula_19 and formula_20 are the far field scattered and incident electric field intensities, respectively.
In the design phase, it is often desirable to employ a computer to predict what the RCS will look like before fabricating an actual object. Many iterations of this prediction process can be performed in a short time at low cost, whereas use of a measurement range is often time-consuming, expensive and error-prone.
The linearity of Maxwell's equations makes RCS relatively straightforward to calculate with a variety of analytic and numerical methods, but changing levels of military interest and the need for secrecy have made the field challenging, nonetheless.
The field of solving Maxwell's equations through numerical algorithms is called computational electromagnetics, and many effective analysis methods have been applied to the RCS prediction problem.
RCS prediction software are often run on large supercomputers and employ high-resolution CAD models of real radar targets.
High frequency approximations such as geometric optics, physical optics, the geometric theory of diffraction, the uniform theory of diffraction and the physical theory of diffraction are used when the wavelength is much shorter than the target feature size.
Statistical models include chi-square, Rice, and the log-normal target models. These models are used to predict likely values of the RCS given an average value, and are useful when running radar Monte Carlo simulations.
Purely numerical methods such as the boundary element method (method of moments), finite difference time domain method (FDTD) and finite element methods are limited by computer performance to longer wavelengths or smaller features.
Though, for simple cases, the wavelength ranges of these two types of method overlap considerably, for difficult shapes and materials or very high accuracy they are combined in various sorts of hybrid method.
Reduction.
RCS reduction is chiefly important in stealth technology for aircraft, missiles, ships, and other military vehicles. With smaller RCS, vehicles can better evade radar detection, whether it be from land-based installations, guided weapons or other vehicles. Reduced signature design also improves platforms' overall survivability through the improved effectiveness of its radar counter-measures.
Several methods exist. The distance at which a target can be detected for a given radar configuration varies with the fourth root of its RCS. Therefore, in order to cut the detection distance to one tenth, the RCS should be reduced by a factor of 10,000. While this degree of improvement is challenging, it is often possible when influencing platforms during the concept/design stage and using experts and advanced computer code simulations to implement the control options described below.
Purpose shaping.
With purpose shaping, the shape of the target's reflecting surfaces is designed such that they reflect energy away from the source. The aim is usually to create a “cone-of-silence” about the target's direction of motion. Due to the energy reflection, this method is defeated by using passive (multistatic) radars.
Purpose-shaping can be seen in the design of surface faceting on the F-117A Nighthawk stealth attack aircraft. This aircraft, designed in the late 1970s though only revealed to the public in 1988, uses a multitude of flat surfaces to reflect incident radar energy away from the source. Yue suggests that limited available computing power for the design phase kept the number of surfaces to a minimum. The B-2 Spirit stealth bomber benefited from increased computing power, enabling its contoured shapes and further reduction in RCS. The F-22 Raptor and F-35 Lightning II continue the trend in purpose shaping and promise to have even smaller monostatic RCS.
Redirecting scattered energy without shaping.
This technique is relatively new compared to other techniques chiefly after the invention of metasurfaces. As mentioned earlier, the primary objective in geometry alteration is to redirect scattered waves away from the backscattered direction (or the source). However, it may compromise performance in terms of aerodynamics. One feasible solution, which has extensively been explored in recent time, is to utilize metasurfaces which can redirect scattered waves without altering the geometry of the target. Such metasurfaces can primarily be classified in two categories: (i) Checkerboard metasurfaces, (ii) Gradient index metasurfaces.
Active cancellation.
With active cancellation, the target generates a radar signal equal in intensity but opposite in phase to the predicted reflection of an incident radar signal (similarly to noise canceling ear phones). This creates destructive interference between the reflected and generated signals, resulting in reduced RCS. To incorporate active cancellation techniques, the precise characteristics of the waveform and angle of arrival of the illuminating radar signal must be known, since they define the nature of generated energy required for cancellation. Except against simple or low frequency radar systems, the implementation of active cancellation techniques is extremely difficult due to the complex processing requirements and the difficulty of predicting the exact nature of the reflected radar signal over a broad aspect of an aircraft, missile or other target.
Radar absorbent material.
Radar absorbent material (RAM) can be used in the original construction, or as an addition to highly reflective surfaces. There are at least three types of RAM: resonant, non-resonant magnetic and non-resonant large volume.
Thin coatings made of only dielectrics and conductors have very limited absorbing bandwidth, so magnetic materials are used when weight and cost permit, either in resonant RAM or as non-resonant RAM.
Optimization methods.
Thin non-resonant or broad resonance coatings can be modeled with a Leontovich impedance boundary condition (see also Electrical impedance). This is the ratio of the tangential electric field to the tangential magnetic field on the surface, and ignores fields propagating along the surface within the coating. This is particularly convenient when using boundary element method calculations. The surface impedance can be calculated and tested separately.
For an isotropic surface the ideal surface impedance is equal to the 377 ohm impedance of free space.
For non-isotropic (anisotropic) coatings, the optimal coating depends on the shape of the target and the radar direction, but duality, the symmetry of Maxwell's equations between the electric and magnetic fields, tells one that optimal coatings have η0 × η1 = 3772 Ω2, where η0 and η1 are perpendicular components of the anisotropic surface impedance, aligned with edges and/or the radar direction.
A perfect electric conductor has more back scatter from a leading edge for the linear polarization with the electric field parallel to the edge and more from a trailing edge with the electric field perpendicular to the edge, so the high surface impedance should be parallel to leading edges and perpendicular to trailing edges, for the greatest radar threat direction, with some sort of smooth transition between.
To calculate the radar cross-section of such a stealth body, one would typically do one-dimensional reflection calculations to calculate the surface impedance, then two dimensional numerical calculations to calculate the diffraction coefficients of edges and small three dimensional calculations to calculate the diffraction coefficients of corners and points. The cross section can then be calculated, using the diffraction coefficients, with the physical theory of diffraction or other high frequency method, combined with physical optics to include the contributions from illuminated smooth surfaces and Fock calculations to calculate creeping waves circling around any smooth shadowed parts.
Optimization is in the reverse order. First one does high frequency calculations to optimize the shape and find the most important features, then small calculations to find the best surface impedances in the problem areas, then reflection calculations to design coatings. Large numerical calculations can run too slowly for numerical optimization or can distract workers from the physics, even when massive computing power is available.
RCS of an antenna.
For the case of an antenna the total RCS can be divided into two separate components, the structural mode RCS and the antenna mode RCS. The two components of the RCS relate to the two scattering phenomena that take place at the antenna. When an electromagnetic signal falls on an antenna surface, some part of the electromagnetic energy is scattered back to space. This is called structural mode scattering. The remaining part of the energy is absorbed due to the antenna effect. Some part of the absorbed energy is again scattered back into space due to impedance mismatches; this is called antenna mode scattering.
Bistatic RCS.
For the bistatic radar configuration, with transmitter and receiver separated (not co-located), the bistatic radar cross-section (BRCS) is a function of both the transmitter-target orientation and the receiver-target orientation.
A normalized bistatic radar cross-section (NBRCS) or bistatic normalized radar cross-section (BNRCS) may also be defined, similar to the monostatic NRCS.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma^0 = \\left\\langle {\\sigma\\over{A}} \\right\\rangle "
},
{
"math_id": 1,
"text": "P_r = {{P_t G_t}\\over{4 \\pi r^2}} \\sigma {{1}\\over{4 \\pi r^2}} A_\\mathrm{eff}"
},
{
"math_id": 2,
"text": "P_t"
},
{
"math_id": 3,
"text": "G_t"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "\\sigma"
},
{
"math_id": 6,
"text": "A_\\mathrm{eff}"
},
{
"math_id": 7,
"text": "P_r"
},
{
"math_id": 8,
"text": "{{P_t G_t}\\over{4 \\pi r^2}}"
},
{
"math_id": 9,
"text": "\\sigma"
},
{
"math_id": 10,
"text": "{{P_t G_t}\\over{4 \\pi r^2}} \\sigma"
},
{
"math_id": 11,
"text": "{{1}\\over{4 \\pi r^2}}"
},
{
"math_id": 12,
"text": "{{P_t G_t}\\over{4 \\pi r^2}} \\sigma {{1}\\over{4 \\pi r^2}}"
},
{
"math_id": 13,
"text": "A_\\mathrm{eff}"
},
{
"math_id": 14,
"text": "P_r/P_t"
},
{
"math_id": 15,
"text": "\\sigma = \\lim_{r \\to \\infty} 4 \\pi r^{2} \\frac{S_{s}}{S_{i}}"
},
{
"math_id": 16,
"text": "S_{i}"
},
{
"math_id": 17,
"text": "S_{s}"
},
{
"math_id": 18,
"text": "\\sigma = \\lim_{r \\to \\infty} 4 \\pi r^{2} \\frac{|E_{s}|^{2}}{|E_{i}|^{2}}"
},
{
"math_id": 19,
"text": "E_{s}"
},
{
"math_id": 20,
"text": "E_{i}"
}
]
| https://en.wikipedia.org/wiki?curid=740680 |
74073245 | Doignon's theorem | Doignon's theorem in geometry is an analogue of Helly's theorem for the integer lattice. It states that, if a family of convex sets in formula_0-dimensional Euclidean space have the property that the intersection of every formula_1 contains an integer point, then the intersection of all of the sets contains an integer point. Therefore, formula_0-dimensional integer linear programs form an LP-type problem of combinatorial dimension formula_1, and can be solved by certain generalizations of linear programming algorithms in an amount of time that is linear in the number of constraints of the problem and fixed-parameter tractable in its dimension. The same theorem applies more generally to any lattice, not just the integer lattice.
The theorem can be classified as belonging to convex geometry, discrete geometry, and the geometry of numbers. It is named after Belgian mathematician and mathematical psychologist Jean-Paul Doignon, who published it in 1973. Doignon credits Francis Buekenhout with posing the question answered by this theorem. It is also called the Doignon–Bell–Scarf theorem, crediting mathematical economists David E. Bell and Herbert Scarf, who both rediscovered it in 1977 and pointed out its applications to integer programming.
The result is tight: there exist systems of half-spaces for which every formula_2 of them have an integer point in their intersection, but for which the whole system has no integer point in its intersection. Such a system can be obtained, for instance, by choosing halfspaces that contain all but one vertex of the unit cube. Another way of phrasing the result is that the Helly number of convex subsets of the integers is exactly formula_1. More generally, the Helly number of any discrete set of Euclidean points equals the maximum number of points that can be chosen to form the vertices of a convex polytope that contains no other point from the set. Generalizing both Helly's theorem and Doignon's theorem, the Helly number of the Cartesian product formula_3 is formula_4.
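The tightness construction can be verified directly for small formula_0. The following Python sketch builds, for each vertex of the unit cube, the halfspace that contains every cube vertex except that one, then checks that any formula_2 of these halfspaces share an integer point while the full intersection contains none (the full intersection is bounded, so a finite search suffices):
```python
from itertools import product

d = 3
vertices = list(product((0, 1), repeat=d))

def halfspace(v):
    """Halfspace containing every vertex of the unit cube except v:
       sum_i (2*v_i - 1) * x_i <= |v| - 1."""
    a = tuple(2 * vi - 1 for vi in v)
    b = sum(v) - 1
    return a, b

constraints = [halfspace(v) for v in vertices]          # 2^d halfspaces

def satisfies(x, cons):
    return all(sum(ai * xi for ai, xi in zip(a, x)) <= b for a, b in cons)

# Each vertex v lies in all halfspaces except its own, so any 2^d - 1 of the
# halfspaces share an integer point (namely the omitted vertex).
for i, v in enumerate(vertices):
    others = constraints[:i] + constraints[i + 1:]
    assert satisfies(v, others) and not satisfies(v, constraints)

# The intersection of all 2^d halfspaces contains no integer point; it is
# contained in a small box around the cube, so a finite search suffices.
box = product(range(-d, d + 2), repeat=d)
assert not any(satisfies(x, constraints) for x in box)
print("every 2^d - 1 halfspaces share an integer point; the full intersection has none")
```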
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "2^d"
},
{
"math_id": 2,
"text": "2^d-1"
},
{
"math_id": 3,
"text": "\\mathbb{Z}^d\\times\\mathbb{R}^k"
},
{
"math_id": 4,
"text": "2^d(k+1)"
}
]
| https://en.wikipedia.org/wiki?curid=74073245 |
74081567 | Sophie Germain's identity | Mathematical polynomial factorization
In mathematics, Sophie Germain's identity is a polynomial factorization named after Sophie Germain stating that
formula_0
Beyond its use in elementary algebra, it can also be used in number theory to factorize integers of the special form formula_1, and it frequently forms the basis of problems in mathematics competitions.
History.
Although the identity has been attributed to Sophie Germain, it does not appear in her works. Instead, in her works one can find the related identity
formula_2
Modifying this equation by multiplying formula_3 by formula_4 gives
formula_5
a difference of two squares, from which Germain's identity follows. The inaccurate attribution of this identity to Germain was made by Leonard Eugene Dickson in his "History of the Theory of Numbers", which also stated (equally inaccurately) that it could be found in a letter from Leonhard Euler to Christian Goldbach.
The identity can be proven simply by multiplying the two terms of the factorization together, and verifying that their product equals the right hand side of the equality. A proof without words is also possible based on multiple applications of the Pythagorean theorem.
Applications to integer factorization.
One consequence of Germain's identity is that the numbers of the form
formula_6
cannot be prime for formula_7. (For formula_8, the result is the prime number 5.) They are obviously not prime if formula_9 is even, and if formula_9 is odd they have a factorization given by the identity with formula_10 and formula_11. These numbers (starting with formula_12) form the integer sequence
<templatestyles src="Block indent/styles.css"/>
Many of the appearances of Sophie Germain's identity in mathematics competitions come from this corollary of it.
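This corollary is easy to check computationally; the Python sketch below applies the identity with formula_10 and formula_11 to odd formula_9 and confirms that both factors exceed 1:
```python
# Factor n^4 + 4^n for odd n > 1 using Sophie Germain's identity with
# x = n and y = 2**((n - 1) // 2), so that 4*y**4 = 4**n.
for n in range(3, 16, 2):
    x, y = n, 2 ** ((n - 1) // 2)
    f1 = x * x + 2 * x * y + 2 * y * y
    f2 = x * x - 2 * x * y + 2 * y * y
    assert f1 * f2 == n ** 4 + 4 ** n       # the identity
    assert f1 > 1 and f2 > 1                # so n^4 + 4^n is composite
    print(n, "->", f1, "*", f2)
```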
Another special case of the identity with formula_13 and formula_14
can be used to produce the factorization
formula_15
where formula_16 is the fourth cyclotomic polynomial. As with the cyclotomic polynomials more generally, formula_17 is an irreducible polynomial, so this factorization of infinitely many of its values cannot be extended to a factorization of formula_17 as a polynomial, making this an example of an aurifeuillean factorization.
Generalization.
Germain's identity has been generalized to the functional equation
formula_18
which by Sophie Germain's identity is satisfied by the square function.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\nx^4 + 4y^4 &= \\bigl((x + y)^2 + y^2\\bigr)\\cdot\\bigl((x - y)^2 + y^2\\bigr)\\\\\n &= (x^2 + 2xy + 2y^2)\\cdot(x^2 - 2xy + 2y^2).\n\\end{align}"
},
{
"math_id": 1,
"text": "x^4+4y^4"
},
{
"math_id": 2,
"text": "\n\\begin{align}\nx^4+y^4 &= (x^2-y^2)^2+2(xy)^2\\\\\n &= (x^2+y^2)^2-2(xy)^2.\\\\\n\\end{align}"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "\\sqrt2"
},
{
"math_id": 5,
"text": "\nx^4+4y^4 = (x^2+2y^2)^2-4(xy)^2,\n"
},
{
"math_id": 6,
"text": "n^4+4^n"
},
{
"math_id": 7,
"text": "n>1"
},
{
"math_id": 8,
"text": "n=1"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "x=n"
},
{
"math_id": 11,
"text": "y=2^{(n-1)/2}"
},
{
"math_id": 12,
"text": "n=0"
},
{
"math_id": 13,
"text": "x=1"
},
{
"math_id": 14,
"text": "y=2^k"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\Phi_4(2^{2k+1})&=2^{4k+2}+1\\\\\n&=(2^{2k+1}-2^{k+1}+1)\\cdot (2^{2k+1}+2^{k+1}+1),\\\\\n\\end{align}"
},
{
"math_id": 16,
"text": "\\Phi_4(x)=x^2+1"
},
{
"math_id": 17,
"text": "\\Phi_4"
},
{
"math_id": 18,
"text": "\nf(x)^2+4f(y)^2 = \\bigl( f(x+y)+f(y) \\bigr)\\bigl(f(x-y)+f(y)\\bigr),\n"
}
]
| https://en.wikipedia.org/wiki?curid=74081567 |
740817 | Mathematical coincidence | Coincidence in mathematics
A mathematical coincidence is said to occur when two expressions with no direct relationship show a near-equality which has no apparent theoretical explanation.
For example, there is a near-equality close to the round number 1000 between powers of 2 and powers of 10:
formula_0
Some mathematical coincidences are used in engineering when one expression is taken as an approximation of another.
Introduction.
A mathematical coincidence often involves an integer, and the surprising feature is the fact that a real number arising in some context is considered by some standard as a "close" approximation to a small integer or to a multiple or power of ten, or more generally, to a rational number with a small denominator. Other kinds of mathematical coincidences, such as integers simultaneously satisfying multiple seemingly unrelated criteria or coincidences regarding units of measurement, may also be considered. In the class of those coincidences that are of a purely mathematical sort, some simply result from sometimes very deep mathematical facts, while others appear to come 'out of the blue'.
Given the countably infinite number of ways of forming mathematical expressions using a finite number of symbols, the number of symbols used and the precision of approximate equality might be the most obvious way to assess mathematical coincidences; but there is no standard, and the strong law of small numbers is the sort of thing one has to appeal to with no formal opposing mathematical guidance. Beyond this, some sense of mathematical aesthetics could be invoked to adjudicate the value of a mathematical coincidence, and there are in fact exceptional cases of true mathematical significance (see Ramanujan's constant below, which made it into print some years ago as a scientific April Fools' joke). All in all, though, they are generally to be considered for their curiosity value, or perhaps to encourage new mathematical learners at an elementary level.
Some examples.
Rational approximants.
Sometimes simple rational approximations are exceptionally close to interesting irrational values. These are explainable in terms of large terms in the continued fraction representation of the irrational value, but further insight into why such improbably large terms occur is often not available.
Rational approximants (convergents of continued fractions) to ratios of logs of different numbers are often invoked as well, making coincidences between the powers of those numbers.
Many other coincidences are combinations of numbers that put them into the form that such rational approximants provide close relationships.
Concerning musical intervals.
In music, the distances between notes (intervals) are measured as ratios of their frequencies, with near-rational ratios often sounding harmonious. In western twelve-tone equal temperament, the ratio between consecutive note frequencies is formula_10.
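These near-coincidences are easy to quantify. The short Python comparison below measures, in cents (1200 cents per octave), how far the equal-tempered fifth and major third are from the just ratios 3/2 and 5/4, and evaluates the syntonic comma 81/80:
```python
import math

def cents(ratio):
    """Interval size in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

semitone = 2 ** (1 / 12)

# Equal-tempered perfect fifth vs the just fifth 3/2 (the 2^19 vs 3^12 coincidence)
print(cents(semitone ** 7) - cents(3 / 2))      # ~ -1.96 cents
print(2 ** 19, 3 ** 12)                         # 524288 vs 531441

# Equal-tempered major third vs the just third 5/4
print(cents(semitone ** 4) - cents(5 / 4))      # ~ +13.7 cents

# Four just fifths vs a just major third two octaves up: (3/2)^4 / 5 = 81/80
print(cents((3 / 2) ** 4 / 5))                  # ~ 21.5 cents (syntonic comma)
```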
The two sides of this expression differ only after the 42nd decimal place; this is not a coincidence.
formula_58
formula_59
formula_60
Numerical expressions.
Containing both "π" and "e".
with the last accurate to 14 or 15 decimal places.
Numerical coincidences in numbers from the physical world.
Speed of light.
The speed of light is (by definition) exactly 299,792,458 m/s, extremely close to 300,000,000 m/s (300,000 km/s). This is a pure coincidence, as the metre was originally defined as 1/10,000,000 of the distance between the Earth's pole and equator along the surface at sea level, and the Earth's circumference just happens to be about 2/15 of a light-second. It is also roughly equal to one foot per nanosecond (the actual number is 0.9836 ft/ns).
Angular diameters of the Sun and the Moon.
As seen from Earth, the angular diameter of the Sun varies between 31′27″ and 32′32″, while that of the Moon is between 29′20″ and 34′6″. The fact that the intervals overlap (the former interval is contained in the latter) is a coincidence, and has implications for the types of solar eclipses that can be observed from Earth.
Gravitational acceleration.
While not constant, but varying with latitude and altitude, the numerical value of the acceleration caused by Earth's gravity on the surface lies between 9.74 and 9.87 m/s2, which is quite close to 10. This means that, as a result of Newton's second law, the weight of a kilogram of mass on Earth's surface corresponds roughly to 10 newtons of force exerted on an object.
This is related to the aforementioned coincidence that the square of pi is close to 10. One of the early definitions of the metre was the length of a pendulum whose half swing had a period equal to one second. Since the period of the full swing of a pendulum is approximated by the equation below, algebra shows that if this definition was maintained, gravitational acceleration measured in metres per second per second would be exactly equal to "π"2.
formula_95
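The algebra behind this claim is a one-line rearrangement of the standard small-angle pendulum formula; a quick Python check with a 1-metre pendulum and a 2-second full period (two 1-second half swings):
```python
import math

# T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2
L = 1.0          # pendulum length, m (the proposed definition of the metre)
T = 2.0          # full period: two 1-second half swings
g = 4 * math.pi**2 * L / T**2
print(g, math.pi**2)      # both ~ 9.8696 m/s^2
```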
The upper limit of gravity on Earth's surface (9.87 m/s2) is equal to π2 m/s2 to four significant figures. It is approximately 0.6% greater than standard gravity (9.80665 m/s2).
Rydberg constant.
The Rydberg constant, when multiplied by the speed of light and expressed as a frequency, is close to formula_96:
formula_97
formula_98
This is also approximately the ratio between one metre and one foot: 1 m/ft = 1 m / (0.3048 m).
US customary to metric conversions.
As discovered by Randall Munroe, a cubic mile is close to formula_99 cubic kilometres (within 0.5%). This means that a sphere with radius "n" kilometres has almost exactly the same volume as a cube with sides of length "n" miles.
The ratio of a mile to a kilometre is approximately the Golden ratio. As a consequence, a Fibonacci number of miles is approximately the next Fibonacci number of kilometres.
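This gives the familiar trick of stepping to the next Fibonacci number to convert miles to kilometres. A quick Python check of how good the approximation is (1 mile = 1.609344 km):
```python
# Consecutive Fibonacci numbers: F(n+1)/F(n) approaches the golden ratio ~1.618,
# close to the miles-to-kilometres factor 1.609344.
MILE_IN_KM = 1.609344
fib = [2, 3, 5, 8, 13, 21, 34, 55, 89]
for miles, km_guess in zip(fib, fib[1:]):
    exact = miles * MILE_IN_KM
    print(f"{miles} mi ~ {km_guess} km (exact {exact:.1f} km)")
```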
The ratio of a mile to a kilometre is also very close to formula_100 (within 0.006%). That is, formula_101 where "m" is the number of miles, "k" is the number of kilometres and "e" is Euler's number.
A density of one ounce per cubic foot is very close to one kilogram per cubic metre: 1 oz/ft3 = 1 oz × 0.028349523125 kg/oz / (1 ft × 0.3048 m/ft)3 ≈ 1.0012 kg/m3.
The ratio between one troy ounce and one gram is approximately formula_102.
Fine-structure constant.
The fine-structure constant formula_103 is close to, and was once conjectured to be precisely equal to, 1/137. Its CODATA recommended value is approximately
formula_103 ≈ 1/137.036
formula_103 is a dimensionless physical constant, so this coincidence is not an artifact of the system of units being used.
Planet Earth.
The radius of geostationary orbit, about 42,164 km, is within 0.02% of the variation of the distance of the Moon in a month (the difference between its apogee and perigee, about 42,171 km), and within 5% of the length of the equator, about 40,075 km.
Earth's solar orbit.
The number of seconds in one year, based on the Gregorian Calendar, can be calculated by: formula_104
This value can be approximated by formula_105, or 31,415,926.54, with an error of less than one percent: formula_106
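A quick numerical check of this approximation in Python, taking the mean Gregorian year as 365.2425 days:
```python
import math

seconds_per_year = 365.2425 * 24 * 60 * 60     # mean Gregorian year, ~31,556,952 s
approx = math.pi * 1e7                         # 31,415,926.54...
print(seconds_per_year)
print(abs(seconds_per_year - approx) / seconds_per_year * 100)   # ~0.45 % error
```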
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^{10} = 1024 \\approx 1000 = 10^3."
},
{
"math_id": 1,
"text": "\\pi \\approx 4 / \\sqrt{\\varphi} = 3.1446\\dots"
},
{
"math_id": 2,
"text": "\\tfrac{10}{\\pi-1}"
},
{
"math_id": 3,
"text": "2^{10} = 1024 \\approx 1000 = 10^3"
},
{
"math_id": 4,
"text": "\\textstyle\\frac{\\log10}{\\log2} \\approx 3.3219 \\approx \\frac{10}{3}"
},
{
"math_id": 5,
"text": " 2 \\approx 10^{3/10}"
},
{
"math_id": 6,
"text": "128 = 2^7 \\approx 5^3 = 125"
},
{
"math_id": 7,
"text": "2^3"
},
{
"math_id": 8,
"text": "\\textstyle\\frac{\\log5}{\\log2} \\approx 2.3219 \\approx \\frac{7}{3}"
},
{
"math_id": 9,
"text": " 2 \\approx 5^{3/7}"
},
{
"math_id": 10,
"text": "\\sqrt[12]{2}"
},
{
"math_id": 11,
"text": "2^{19} \\approx 3^{12}"
},
{
"math_id": 12,
"text": "\\frac{\\log3}{\\log2} = 1.5849\\ldots \\approx \\frac{19}{12}"
},
{
"math_id": 13,
"text": "2^{7/12}\\approx 3/2"
},
{
"math_id": 14,
"text": "{(3/2)}^{4} = (81/16) \\approx 5"
},
{
"math_id": 15,
"text": "3/2"
},
{
"math_id": 16,
"text": "5/4"
},
{
"math_id": 17,
"text": "5/1"
},
{
"math_id": 18,
"text": "81/80"
},
{
"math_id": 19,
"text": "\\sqrt[12]{2}\\sqrt[7]{5} = 1.33333319\\ldots \\approx \\frac43"
},
{
"math_id": 20,
"text": "\\sqrt[8]{5}\\sqrt[3]{35} = 4.00000559\\ldots \\approx 4"
},
{
"math_id": 21,
"text": "{(5/4)}^{3} \\approx {2/1}"
},
{
"math_id": 22,
"text": "\\pi^2\\approx10;"
},
{
"math_id": 23,
"text": "\\zeta(2)=\\pi^2/6."
},
{
"math_id": 24,
"text": "\\pi"
},
{
"math_id": 25,
"text": "\\sqrt{10},"
},
{
"math_id": 26,
"text": "\\pi^2+\\pi\\approx13;"
},
{
"math_id": 27,
"text": "\\pi^2\\approx 227/23,"
},
{
"math_id": 28,
"text": "\\pi^3\\approx31,"
},
{
"math_id": 29,
"text": "2\\pi^3 -\\pi^2-\\pi \\approx7^2,"
},
{
"math_id": 30,
"text": "\\pi^4\\approx 2143/22;"
},
{
"math_id": 31,
"text": "\\pi\\approx\\left(9^2+\\frac{19^2}{22}\\right)^{1/4},"
},
{
"math_id": 32,
"text": "\n\\int_0^\\infty \\cos(2x)\\prod_{n=1}^\\infty \\cos\\left(\\frac{x}{n}\\right)\\mathrm{d}x \\approx \\frac{\\pi}{8}.\n"
},
{
"math_id": 33,
"text": "\\pi^4+\\pi^5\\approx e^6"
},
{
"math_id": 34,
"text": "4 \\cdot \\ln(\\pi) + \\ln(\\pi+1) \\approx 6"
},
{
"math_id": 35,
"text": "(e-1)\\pi\\approx\\sqrt{5}+\\sqrt{10}"
},
{
"math_id": 36,
"text": "\\left(\\frac{\\pi}{2} - \\ln\\left( \\frac{3\\pi}{2}\\right) \\right)42\\pi \\approx e"
},
{
"math_id": 37,
"text": "e^\\pi - \\pi\\approx 20"
},
{
"math_id": 38,
"text": "(\\pi+20)^i=-0.999 999 999 2\\ldots -i\\cdot 0.000 039\\ldots \\approx -1."
},
{
"math_id": 39,
"text": "e^\\pi - \\pi"
},
{
"math_id": 40,
"text": "\\textstyle\\sum_{k=1}^{\\infty }\\left ( 8\\pi k^{2}-2 \\right )e^{\\left (-\\pi k^{2} \\right )}=1,"
},
{
"math_id": 41,
"text": "\\left (8\\pi-2 \\right )e^{-\\pi}\\approx 1,"
},
{
"math_id": 42,
"text": "e^{\\pi}\\approx 8\\pi-2."
},
{
"math_id": 43,
"text": "\\pi \\approx 22/7"
},
{
"math_id": 44,
"text": "e^{\\pi}\\approx \\pi+(7\\cdot\\frac{22}{7}-2) = \\pi+20."
},
{
"math_id": 45,
"text": "\\pi^e+e^\\pi \\approx 45\\frac{3}{5}"
},
{
"math_id": 46,
"text": "\\pi^9/e^8\\approx 10"
},
{
"math_id": 47,
"text": "\\ln(\\pi) \\approx {\\ln(10)+8 \\over 9}"
},
{
"math_id": 48,
"text": "2\\pi + e \\approx 9"
},
{
"math_id": 49,
"text": "e^{-\\frac{\\pi}{9}} + e^{-4\\frac{\\pi}{9}} + e^{-9\\frac{\\pi}{9}} + e^{-16\\frac{\\pi}{9}} + e^{-25\\frac{\\pi}{9}} + e^{-36\\frac{\\pi}{9}} + e^{-49\\frac{\\pi}{9}} + e^{-64\\frac{\\pi}{9}} = 1.00000000000105\\ldots \\approx 1"
},
{
"math_id": 50,
"text": "\\textstyle\\sum_{k=1}^{n-1}{e^{-\\frac{k^2\\pi}{n}}}\\approx\\frac{-1+\\sqrt{n}}{2},"
},
{
"math_id": 51,
"text": "e^{\\pi\\sqrt{163}} \\approx 262537412640768744 = 12^3(231^2-1)^3+744"
},
{
"math_id": 52,
"text": "2.9\\cdot 10^{-28}\\%"
},
{
"math_id": 53,
"text": " k= 2198, 422151, 614552, 2508952, 6635624, 199148648,\\dots"
},
{
"math_id": 54,
"text": "\\pi \\approx \\frac{\\ln(k)}{\\sqrt{n}}"
},
{
"math_id": 55,
"text": "k \\approx e^{\\pi\\sqrt{n}}"
},
{
"math_id": 56,
"text": "n = 6, 17, 18, 22, 25, 37,\\dots"
},
{
"math_id": 57,
"text": "k=199148648 = 14112^2+104,"
},
{
"math_id": 58,
"text": "\\pi \\approx \\frac{\\ln (784^2-104)}{\\sqrt{18}}"
},
{
"math_id": 59,
"text": "\\pi \\approx \\frac{\\ln (1584^2-104)}{\\sqrt{22}}"
},
{
"math_id": 60,
"text": "\\pi \\approx \\frac{\\ln (14112^2+104)}{\\sqrt{37}}"
},
{
"math_id": 61,
"text": "(e^e)^e \\approx 1000\\varphi"
},
{
"math_id": 62,
"text": "\\frac{10(e^\\pi-\\ln3)}{\\ln2} = 318.000000033\\ldots"
},
{
"math_id": 63,
"text": "10! = 6! \\cdot 7! = 3! \\cdot 5! \\cdot 7!"
},
{
"math_id": 64,
"text": "\\lambda=\\frac{1}{365}{23\\choose 2}=\\frac{253}{365}"
},
{
"math_id": 65,
"text": "\\ln(2)"
},
{
"math_id": 66,
"text": "5 \\cdot 10^5 - 1 = 31 \\cdot 127 \\cdot 127"
},
{
"math_id": 67,
"text": "\\sqrt[6]{6!}"
},
{
"math_id": 68,
"text": "6! \\approx 3^6"
},
{
"math_id": 69,
"text": "H_6= 1+\\frac{1}{2}+\\frac{1}{3}+\\frac{1}{4}+\\frac{1}{5}+\\frac{1}{6} = \\frac{49}{20} = 2.45"
},
{
"math_id": 70,
"text": "\\sqrt{6}"
},
{
"math_id": 71,
"text": "\\sqrt[5]{109} \\approx \\frac{23}{9}"
},
{
"math_id": 72,
"text": "2 \\times 10^{-7}"
},
{
"math_id": 73,
"text": "3^3+4^4+3^3+5^5=3435 "
},
{
"math_id": 74,
"text": "0^0=0"
},
{
"math_id": 75,
"text": "\\,1!+4!+5!=145"
},
{
"math_id": 76,
"text": "\\,4!+0!+5!+8!+5!=40585"
},
{
"math_id": 77,
"text": "\\frac{16}{64}=\\frac{1\\!\\!\\!\\not6}{\\not64}=\\frac{1}{4}"
},
{
"math_id": 78,
"text": "\\frac{26}{65}=\\frac{2\\!\\!\\!\\not6}{\\not65}=\\frac {2}{5}"
},
{
"math_id": 79,
"text": "\\frac{19}{95}=\\frac{1\\!\\!\\!\\not9}{\\not95}=\\frac{1}{5}"
},
{
"math_id": 80,
"text": "\\frac{49}{98}=\\frac{4\\!\\!\\!\\not9}{\\not98}=\\frac{4}{8}"
},
{
"math_id": 81,
"text": "\\,(4+9+1+3)^3=4913"
},
{
"math_id": 82,
"text": "\\,(5+8+3+2)^3=5832"
},
{
"math_id": 83,
"text": "\\,(1+9+6+8+3)^3=19683"
},
{
"math_id": 84,
"text": "\\,(3+4)^3=343"
},
{
"math_id": 85,
"text": "\\,-1+2^7=127"
},
{
"math_id": 86,
"text": "2^5\\cdot9^2=2592"
},
{
"math_id": 87,
"text": "\\,1^3+5^3+3^3=153"
},
{
"math_id": 88,
"text": "\\,3^3+7^3+0^3=370"
},
{
"math_id": 89,
"text": "\\,3^3+7^3+1^3=371"
},
{
"math_id": 90,
"text": "\\,4^3+0^3+7^3=407"
},
{
"math_id": 91,
"text": "\\,588^2+2353^2=5882353 "
},
{
"math_id": 92,
"text": "\\,2^1+6^2+4^3+6^4+7^5+9^6+8^7=2646798"
},
{
"math_id": 93,
"text": "\\,12157692622039623539=1^1+2^2+1^3+\\ldots+9^{20}"
},
{
"math_id": 94,
"text": "13532385396179=13\\times53^{2}\\times3853\\times96179"
},
{
"math_id": 95,
"text": "T \\approx 2\\pi \\sqrt\\frac{L}{g}"
},
{
"math_id": 96,
"text": "\\frac{\\pi^2}{3}\\times 10^{15}\\ \\text{Hz}"
},
{
"math_id": 97,
"text": "\\underline{3.2898}41960364(17) \\times 10^{15}\\ \\text{Hz} = R_\\infty c"
},
{
"math_id": 98,
"text": "\\underline{3.2898}68133696\\ldots = \\frac{\\pi^2}{3}"
},
{
"math_id": 99,
"text": "\\frac{4}{3}\\pi"
},
{
"math_id": 100,
"text": "\\ln(5)"
},
{
"math_id": 101,
"text": "5^m \\approx e^k"
},
{
"math_id": 102,
"text": " 10\\pi-\\frac{\\pi}{10} = \\frac{99}{10}\\pi"
},
{
"math_id": 103,
"text": "\\alpha"
},
{
"math_id": 104,
"text": "365.2425\\left(\\frac{\\text{days}}{\\text{year}}\\right)\\times 24\\left(\\frac{\\text{hours}}{\\text{day}}\\right)\\times 60\\left(\\frac{\\text{minutes}}{\\text{hour}}\\right) \\times 60\\left(\\frac{\\text{seconds}}{\\text{minute}}\\right)=\n31,556,952\\left(\\frac{\\text{seconds}}{\\text{year}}\\right)"
},
{
"math_id": 105,
"text": "\\pi\\times10^7"
},
{
"math_id": 106,
"text": "\\left[1 - \\left(\\frac{31,415,926.54}{31,556,952}\\right)\\right]\\times 100 = 0.4489%"
}
]
| https://en.wikipedia.org/wiki?curid=740817 |
74089127 | Praseodymium antimonide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium antimonide is a binary inorganic compound of praseodymium and antimony with the formula PrSb.
Preparation.
Praseodymium antimonide can be prepared by heating praseodymium and antimony in a vacuum:
formula_0
Physical properties.
Praseodymium antimonide forms cubic crystals of the sodium chloride structure type, space group "Fm"3"m", with cell parameter a = 0.638 nm and Z = 4.
The compound melts congruently at 2170 °C or 2161 °C. At a temperature of 1950 °C, a phase transition occurs in the crystals. At a pressure of 13 GPa, a phase transition also occurs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Pr + Sb \\ \\xrightarrow{2170^oC}\\ PrSb }"
}
]
| https://en.wikipedia.org/wiki?curid=74089127 |
74090169 | Itô–Nisio theorem | Convergence of random variables in Banach spaces
The Itô-Nisio theorem is a theorem from probability theory that characterizes convergence in Banach spaces. The theorem shows the equivalence of the different types of convergence for sums of independent and symmetric random variables in Banach spaces. The Itô-Nisio theorem leads to a generalization of Wiener's construction of the Brownian motion. The symmetry of the distribution in the theorem is needed in infinite spaces.
The theorem was proven by Japanese mathematicians Kiyoshi Itô and Makiko Nisio in 1968.
Statement.
Let formula_0 be a real separable Banach space with the norm-induced topology, equipped with the Borel σ-algebra, and denote the dual space by formula_1. Let formula_2 be the dual pairing and formula_3 the imaginary unit. Let
The following statements are equivalent:
formula_15
Remarks:
Since formula_5 is separable, point formula_16 (i.e. convergence in the Lévy–Prokhorov metric) is the same as convergence in distribution formula_17. If we remove the symmetric distribution condition: | [
{
"math_id": 0,
"text": "(E,\\|\\cdot\\|)"
},
{
"math_id": 1,
"text": "E^*"
},
{
"math_id": 2,
"text": "\\langle z,S\\rangle:={}_{E^*}\\langle z,S\\rangle_E"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "X_1,\\dots,X_n"
},
{
"math_id": 5,
"text": "E"
},
{
"math_id": 6,
"text": "S_n=\\sum_{i=1}^n X_n"
},
{
"math_id": 7,
"text": "\\mu_n"
},
{
"math_id": 8,
"text": "S_n"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "S_n\\to S"
},
{
"math_id": 11,
"text": "\\mu"
},
{
"math_id": 12,
"text": "\\{\\mu_n\\}"
},
{
"math_id": 13,
"text": "\\langle z, S_n\\rangle\\to \\langle z, S\\rangle"
},
{
"math_id": 14,
"text": "z\\in E^*"
},
{
"math_id": 15,
"text": "\\mathbb{E}[e^{i\\langle z,S_n\\rangle}]\\to \\int_{E}e^{i\\langle z,x\\rangle}\\mu(\\mathrm{d}x)."
},
{
"math_id": 16,
"text": "3"
},
{
"math_id": 17,
"text": "\\mu_n\\implies \\mu"
},
{
"math_id": 18,
"text": "4"
},
{
"math_id": 19,
"text": "1\\iff 2 \\iff 3"
},
{
"math_id": 20,
"text": "6\\implies 3"
}
]
| https://en.wikipedia.org/wiki?curid=74090169 |
740918 | SIGSALY | Secure speech system
SIGSALY (also known as the X System, Project X, Ciphony I, and the Green Hornet) was a secure speech system used in World War II for the highest-level Allied communications. It pioneered a number of digital communications concepts, including the first transmission of speech using pulse-code modulation.
The name SIGSALY was not an acronym, but a cover name that resembled an acronym—the SIG part was common in Army Signal Corps names (e.g., SIGABA). The prototype was called the "Green Hornet" after the radio show "The Green Hornet", because it sounded like a buzzing hornet, resembling the show's theme tune, to anyone trying to eavesdrop on the conversation.
Development.
At the time of its inception, long-distance telephone communications used the "A-3" voice scrambler developed by Western Electric. It worked on the voice inversion principle. The Germans had a listening station on the Dutch coast which could intercept and break A-3 traffic.
Although telephone scramblers were used by both sides in World War II, they were known not to be very secure in general, and both sides often cracked the scrambled conversations of the other. Inspection of the audio spectrum using a spectrum analyzer often provided significant clues to the scrambling technique. The insecurity of most telephone scrambler schemes led to the development of a more secure scrambler, based on the one-time pad principle.
A prototype was developed at Bell Telephone Laboratories, under the direction of A. B. Clark, assisted by British mathematician Alan Turing, and demonstrated to the US Army. The Army was impressed and awarded Bell Labs a contract for two systems in 1942. SIGSALY went into service in 1943 and remained in service until 1946.
Operation.
SIGSALY used a random noise mask to encrypt voice conversations which had been encoded by a vocoder. The latter was used to minimize the amount of redundancy (which is high in voice traffic), in order to reduce the amount of information to be encrypted.
The voice encoding used the fact that speech varies fairly slowly as the components of the throat move. The system extracted information about the voice signal 50 times a second (every 20 milliseconds).
Next, each signal was sampled for its amplitude once every 20 milliseconds. For the band amplitude signals, the amplitude was converted into one of six amplitude levels, with values from 0 through 5. The amplitude levels were on a nonlinear scale, with the steps between levels wide at high amplitudes and narrower at low amplitudes. This scheme, known as "companding" or "compressing-expanding", exploits the fact that the fidelity of voice signals is more sensitive to low amplitudes than to high amplitudes. The pitch signal, which required greater sensitivity, was encoded by a pair of six-level values (one coarse, and one fine), giving thirty-six levels in all.
A cryptographic key, consisting of a series of random values from the same set of six levels, was subtracted from each sampled voice amplitude value to encrypt it before transmission. The subtraction was performed using modular arithmetic, in a "wraparound" fashion: if the result was negative, six was added to it to give a non-negative result. For example, if the voice amplitude value was 3 and the random value was 5, then the subtraction would work as follows:
formula_0
— giving a value of 4.
The sampled value was then transmitted, with each sample level transmitted on one of six corresponding frequencies in a frequency band, a scheme known as "frequency-shift keying (FSK)". The receiving SIGSALY read the frequency values, converted them into samples, and added the key values back to them to decrypt them. The addition was also performed in a "modulo" fashion, with six subtracted from any value over five. To match the example above, if the receiving SIGSALY got a sample value of 4 with a matching random value of 5, then the addition would be as follows:
formula_1
— which gives the correct value of 3.
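In modern terms, the encryption and decryption steps are simply subtraction and addition modulo six. The following sketch is only an illustration of that arithmetic, not a description of the wartime hardware:
import random
LEVELS = 6  # each vocoder sample was quantized to one of six levels, 0 through 5
def encrypt(samples, key):
    # Subtract the one-time key from the samples, with mod-6 "wraparound".
    return [(s - k) % LEVELS for s, k in zip(samples, key)]
def decrypt(ciphertext, key):
    # Add the same key back, modulo six, to recover the original samples.
    return [(c + k) % LEVELS for c, k in zip(ciphertext, key)]
samples = [3, 0, 5, 2, 4]
key = [random.randrange(LEVELS) for _ in samples]  # stands in for the noise record
assert decrypt(encrypt(samples, key), key) == samples
# The worked example from the text: sample 3, key 5 -> transmitted 4 -> recovered 3.
assert encrypt([3], [5]) == [4] and decrypt([4], [5]) == [3]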
To convert the samples back into a voice waveform, they were first turned back into the dozen low-frequency vocoded signals. An inversion of the vocoder process was employed, which included:
The noise values used for the encryption key were originally produced by large mercury-vapor rectifying vacuum tubes and stored on a phonograph record. The record was then duplicated, with the records being distributed to SIGSALY systems on both ends of a conversation. The records served as the SIGSALY "one-time pad", and distribution was very strictly controlled (although if one had been seized, it would have been of little importance, since only one pair of each was ever produced). For testing and setup purposes, a pseudo-random number generating system made out of relays, known as the "threshing machine", was used.
The records were played on turntables, but since the timing – the clock synchronization – between the two SIGSALY terminals had to be precise, the turntables were by no means just ordinary record-players. The rotation rate of the turntables was carefully controlled, and the records were started at highly specific times, based on precision time-of-day clock standards. Since each record only provided 12 minutes of key, each SIGSALY had two turntables, with a second record "queued up" while the first was "playing".
Usage.
The SIGSALY terminal was massive, consisting of 40 racks of equipment. It weighed over 50 tons, and used about 30 kW of power, necessitating an air-conditioned room to hold it. Too big and cumbersome for general use, it was only used for the highest level of voice communications.
A dozen SIGSALY terminal installations were eventually set up all over the world. The first was installed in the Pentagon building rather than the White House, which had an extension line, as the US President Franklin Roosevelt knew of the British Prime Minister Winston Churchill's insistence that he be able to call at any time of the day or night. The second was installed below street level in the basement of Selfridges department store on Oxford Street, London, close to the US Embassy on Grosvenor Square. The first conference took place on 15 July 1943, and it was used by both General Dwight D. Eisenhower as the commander of SHAEF, and Churchill, before extensions were installed to the Embassy, 10 Downing Street and the Cabinet War Rooms. One was installed in a ship and followed General Douglas MacArthur during his South Pacific campaigns. In total during WW2, the system supported about 3,000 high-level telephone conferences.
The installation and maintenance of all SIGSALY machines was undertaken by the specially formed and vetted members of the 805th Signal Service Company of the US Army Signal Corps. The system was cumbersome, but it worked very effectively. When the Allies invaded Germany, an investigative team discovered that the Germans had recorded significant amounts of traffic from the system, but had erroneously concluded that it was a complex telegraphic encoding system.
Significance.
SIGSALY has been credited with a number of "firsts"; this list is taken from (Bennett, 1983):
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "3 - 5 \\equiv -2,\\ -2 + 6 \\equiv 4\\pmod 6 \\,"
},
{
"math_id": 1,
"text": "4 + 5 \\equiv 9,\\ 9 - 6 \\equiv 3\\pmod 6 \\,"
}
]
| https://en.wikipedia.org/wiki?curid=740918 |
74097726 | Praseodymium arsenide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium arsenide is a binary inorganic compound of praseodymium and arsenic with the formula PrAs.
Preparation.
Praseodymium arsenide can be prepared by heating praseodymium and arsenic:
formula_0
Physical properties.
Praseodymium arsenide forms cubic crystals of the sodium chloride structure type, space group "Fm"3"m", with cell parameter "a" = 0.6009 nm and Z = 4. When heated, it decomposes into arsenic and Pr4As3.
At a pressure of 27 GPa, a phase transition to the tetragonal crystal system occurs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ Pr + As \\ \\xrightarrow{T}\\ PrAs }"
}
]
| https://en.wikipedia.org/wiki?curid=74097726 |
74102918 | Pure 4D N = 1 supergravity | Minimal supergravity in four dimensions
In supersymmetry, pure 4D formula_0 supergravity describes the simplest four-dimensional supergravity, with a single supercharge and a supermultiplet containing a graviton and gravitino. The action consists of the Einstein–Hilbert action and the Rarita–Schwinger action. The theory was first formulated by Daniel Z. Freedman, Peter van Nieuwenhuizen, and Sergio Ferrara, and independently by Stanley Deser and Bruno Zumino in 1976. The only consistent extension to spacetimes with a cosmological constant is to anti-de Sitter space, first formulated by Paul Townsend in 1977. When additional matter supermultiplets are included in this theory, the result is known as matter-coupled 4D formula_1 supergravity.
Flat spacetime.
To describe the coupling between gravity and particles of arbitrary spin, it is useful to use the vielbein formalism of general relativity. This replaces the metric by a set of vector fields formula_2 indexed by flat indices formula_3 such that
formula_4
In a sense the vielbeins are the square root of the metric. This introduces a new local Lorentz symmetry on the vielbeins formula_5, together with the usual diffeomorphism invariance associated with the spacetime indices formula_6. This has an associated connection known as the spin connection formula_7 defined through formula_8, it being a generalization of the Christoffel connection to arbitrary spin fields. For example, for spinors the covariant derivative is given by
formula_9
where formula_10 are gamma matrices satisfying the Dirac algebra, with formula_11. These are often contracted with vielbeins to construct formula_12, which are in general position-dependent fields rather than constants. The spin connection has an explicit expression in terms of the vielbein and an additional torsion tensor which can arise when there is matter present in the theory. A vanishing torsion is equivalent to the Levi-Civita connection.
The pure formula_0 supergravity action in four dimensions is the combination of the Einstein–Hilbert action and the Rarita–Schwinger action
Pure 4D N=1 supergravity action
formula_13
Here formula_14 is the Planck mass, formula_15, and formula_16 is the Majorana gravitino with its spinor index left implicit. Treating this action within the first-order formalism, where both the vielbein and the spin connection are independent fields, allows one to solve for the spin connection's equation of motion, showing that it has the torsion formula_17. The second-order formalism action is then acquired by substituting this expression for the spin connection back into the action, yielding additional quartic gravitino vertices, with the Einstein–Hilbert and Rarita–Schwinger actions now being written with a torsionless spin connection that explicitly depends on the vielbeins.
The supersymmetry transformation rules that leave the action invariant are
formula_18
where formula_19 is the spinorial gauge parameter. While historically the first order and second order formalism were the first ones used to show the invariance of the action, the 1.5-order formalism is the easiest for most supergravity calculations. The additional symmetries of the action are general coordinate transformations and local Lorentz transformations.
Curved spacetime.
The four-dimensional formula_0 super-Poincaré algebra in Minkowski spacetime can be generalized to anti-de Sitter spacetime, but not to de Sitter spacetime, since the super-Jacobi identity cannot be satisfied in that case. Its action can be constructed by gauging this superalgebra, yielding the supersymmetry transformation rules for the vielbein and the gravitino.
The action for formula_0 AdS supergravity in four dimensions is
formula_20
where formula_21 is the AdS radius and the second term is the negative cosmological constant formula_22. The supersymmetry transformations are
formula_23
While the bilinear term in the action appears to be giving a mass to the gravitino, it still belongs to the massless gravity supermultiplet. This is because mass is not well-defined in curved spacetimes, with formula_24 no longer being a Casimir operator of the AdS super-Poincaré algebra. It is however conventional to define a mass through the Laplace–Beltrami operator, in which case particles within the same supermultiplet have different masses, unlike in flat spacetimes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal N=1"
},
{
"math_id": 1,
"text": "\\mathcal N = 1"
},
{
"math_id": 2,
"text": "e_a = e^\\mu_a \\partial_\\mu"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "\ng_{\\mu\\nu} = e^a_\\mu e^b_\\nu \\eta_{ab}.\n"
},
{
"math_id": 5,
"text": "e^a_\\mu \\rightarrow e^b_\\mu \\Lambda^a{}_b(x)"
},
{
"math_id": 6,
"text": "\\mu"
},
{
"math_id": 7,
"text": "\\omega^{ab}_\\mu"
},
{
"math_id": 8,
"text": "\\nabla_\\mu e_a = \\omega_\\mu{}^{b}{}_a e_b"
},
{
"math_id": 9,
"text": "\nD_\\mu = \\partial_\\mu + \\frac{1}{4}\\omega_\\mu^{ab}\\gamma_{ab},\n"
},
{
"math_id": 10,
"text": "\\gamma_a"
},
{
"math_id": 11,
"text": "\\gamma_{ab} = \\gamma_{[a}\\gamma_{b]}"
},
{
"math_id": 12,
"text": "\\gamma_\\mu = e^a_\\mu \\gamma_a"
},
{
"math_id": 13,
"text": " S =\\frac{M_P^2}{2} \\int d^4 x \\ e R -\\frac{M_P^2}{2} \\int d^4 x \\ e \\ \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho}D_\\nu \\psi_\\rho. "
},
{
"math_id": 14,
"text": "M_P"
},
{
"math_id": 15,
"text": "e = \\det e^a_\\mu=\\sqrt{-g}"
},
{
"math_id": 16,
"text": "\\psi_\\mu"
},
{
"math_id": 17,
"text": "T^\\mu_{ab} = \\tfrac{1}{2}\\bar \\psi_a\\gamma^\\mu \\psi_b"
},
{
"math_id": 18,
"text": "\n\\delta e^a_\\mu = \\tfrac{1}{2}\\bar \\epsilon \\gamma^a \\psi_\\mu, \\ \\ \\ \\ \\ \\ \\ \\ \\delta \\psi_\\mu = D_\\mu \\epsilon,\n"
},
{
"math_id": 19,
"text": "\\epsilon(x)"
},
{
"math_id": 20,
"text": "\nS = \\frac{M_P^2}{2} \\int d^4 x \\ e \\bigg(R+ \\frac{6}{L^2}\\bigg) - \\frac{M_P^2}{2}\\int d^4 x \\ e \\bigg(\\bar \\psi_\\mu \\gamma^{\\mu \\nu \\rho}D_\\nu \\psi_\\rho + \\frac{1}{L}\\bar \\psi_\\mu \\gamma^{\\mu\\nu} \\psi_\\nu\\bigg),\n"
},
{
"math_id": 21,
"text": "L"
},
{
"math_id": 22,
"text": "\\Lambda = -3/L^2"
},
{
"math_id": 23,
"text": "\n\\delta e^a_\\mu = \\tfrac{1}{2}\\bar \\epsilon \\gamma^a \\psi_\\mu, \\ \\ \\ \\ \\ \\ \\delta \\psi_\\mu = D_\\mu \\epsilon - \\tfrac{1}{2L} \\gamma_\\mu \\epsilon.\n"
},
{
"math_id": 24,
"text": "P_\\mu P^\\mu"
}
]
| https://en.wikipedia.org/wiki?curid=74102918 |
74104164 | Hardy distribution | Discrete probability distribution
In probability theory and statistics, the Hardy distribution is a discrete probability distribution that expresses the probability of the hole score for a given golf player. It is based on Hardy's (Hardy, 1945) basic assumption that there are three types of shots:
good formula_8,
bad formula_9 and
ordinary formula_10,
where the probability of a good hit equals formula_11, the probability of a bad hit equals formula_12 and the probability of an ordinary hit equals formula_13. Hardy further assigned
a value of 2 to a good stroke,
a value of 0 to a bad stroke and
a value of 1 to a regular or ordinary stroke.
Once the sum of the values is greater than or equal to the value of the par of the hole, the number of strokes in question is equal to the score achieved on that hole. A birdie on a par three could then have come about in three ways: formula_14, formula_15 and formula_16, respectively, with probabilities formula_17, formula_18 and formula_19.
Definitions.
Probability mass function.
A discrete random variable X is said to have a Hardy distribution, with parameters formula_11, formula_12 and formula_20 if it has a probability mass function given by:
formula_0 if "m" is odd
and
formula_1 if "m" is even
with
formula_2
and
formula_3
where
The moment generating function is given by:
formula_4 if "m" is odd
and
formula_5 if "m" is even
with
formula_6
and
formula_7
Each raw moment and each central moment can be easily determined with the moment generating function, but the formulas involved are too large to present here.
Hardy distribution for a par three, four and five.
For a par three:
formula_27
For a par four:
formula_28
Note the resemblance with formula_29. For a par five:
formula_30
Note the resemblance with the formulas for formula_29 and formula_31.
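The probability mass function given above is straightforward to evaluate numerically. The following Python sketch is illustrative only; the function names and the sample values p = q = 0.07 are arbitrary choices, not taken from the cited studies:
from math import comb
def binom(a, b):
    # Binomial coefficient with the convention C(a, b) = 0 for b < 0 or b > a.
    return comb(a, b) if 0 <= b <= a else 0
def hardy_pmf(n, m, p, q):
    # P(X = n) strokes on a par-m hole, with P(good) = p, P(bad) = q and
    # P(ordinary) = r = 1 - p - q, following the general pmf above.
    r = 1.0 - p - q
    total = 0.0
    for j in range((m + 1) // 2, m + 1):   # lower limit is m/2 for even m, (m+1)/2 for odd m
        if n < j:
            continue                       # C(n-1, n-j) vanishes for n < j
        a_jm = binom(j - 1, 2*j - m - 1) * p**(m - j + 1) * r**max(2*j - m - 1, 0)
        b_jm = binom(j, 2*j - m) * p**(m - j) * r**(2*j - m)
        total += binom(n - 1, n - j) * q**(n - j) * (a_jm + b_jm)
    return total
p, q = 0.07, 0.07
r = 1 - p - q
# Birdie on a par three (n = 2): the three ways OG, GO and GG listed earlier.
assert abs(hardy_pmf(2, 3, p, q) - (2*p*r + p*p)) < 1e-12
print([round(hardy_pmf(n, 4, p, q), 4) for n in range(2, 9)])  # par-four score distribution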
History.
When trying to make a probability distribution in golf that describes the frequency distribution of the number of strokes on a hole, the simplest setup is to assume that there are only two types of strokes:
A good stroke with a probability of formula_11
A bad stroke with a probability of formula_32.
while
a good shot then gets the value 1 and
a bad shot gets the value 0.
Once the sum of the shot values equals the par of the hole, that is the number of strokes needed for the hole.
It is clear that with this setup, a birdie is not possible. After all, the smallest number of strokes one can get is the par of the hole. Hardy (1945) probably realized that too and then came up with the idea not to assume that there were just two types of strokes: good formula_8 and bad formula_9, but three types:
good formula_8 with probability formula_11
bad formula_9 with probability formula_12
ordinary formula_10 with probability formula_33.
In fact, Hardy called a good shot a "supershot" and a bad shot a "subshot". Minton later called Hardy's supershot an "excellent" shot formula_34 and Hardy's subshot a bad shot formula_9. In this article, Minton's excellent shot is called a good shot formula_8. Hardy came up with the idea of three types of shots in 1945, but the actual derivation of the probability distribution of the hole score was not given until 2012 by van der Ven.
Hardy assumed that the probability of a good stroke was equal to the probability of a bad stroke, namely formula_35. This was confirmed by Kang:
<templatestyles src="Template:Blockquote/styles.css" />Hardy's model is very simple in that all strokes are independent from each other and the probability of producing a good shot is equal to the probability of producing a bad shot.
In retrospect, Hardy might well have been right, as the data in Table 2 in van der Ven (2013) show. This table shows the estimated formula_11- and formula_12-values for holes 1-18 for rounds 1 and 2 of the 2012 British Open Championship. The mean values were equal to 0.0633 and 0.0697, respectively. Later Cohen (2002) introduced the idea that formula_11 and formula_12 should be different. Kang says about this:
<templatestyles src="Template:Blockquote/styles.css" />
For the Hardy distribution the values of formula_11 and formula_12 may be different.
Goodness of fit.
The Hardy distribution gives the probability distribution of a single player's hole score. It takes several observations to perform a goodness-of-fit test (see Goodness of fit test) to check whether the Hardy distribution applies or not. This can be done with a single individual by having the individual play the same hole multiple times. Goodness-of-fit tests assume pure replications (see Replication (statistics)). This means that there should be no change in the player's golfing ability during repeated play of the hole. For example, there should not be an ongoing learning process (see Learning). Such effects cannot really be ruled out. One way around this problem is to use multiple players who can be assumed to have approximately the same golf proficiency. Such players are the participants in professional golf tournaments (see PGA Tour). Before using a goodness-of-fit test, it should first be checked that the participants indeed have approximately the same golf proficiency. This can be done separately for each hole by using, for example, the Pearson correlation coefficient between the hole score on the first day and the second day of a tournament. If there are no systematic differences (see Classical test theory) between players, the correlation (see Correlation) between the score achieved on Day 1 on a hole and the score achieved on Day 2 on that hole will not differ significantly (see Statistical significance) from zero. This can be easily tested statistically. In a study by van der Ven, the results of a goodness-of-fit test of the Hardy distribution were reported using the hole-by-hole scores from the 2012 Open Championship played at the St Andrews Golf Club. The distribution has been tested separately for each hole. Pearson's chi-squared test was used to determine whether the "observed" sample frequencies of the hole scores differed significantly from the "expected" frequencies according to the Hardy distribution. The fit between observed and expected frequencies was generally very satisfactory.
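As an illustration only (the observed counts below are invented for the example, not data from the cited studies), such a test can be run with SciPy, here using the explicit par-four formula from the section above:
import numpy as np
from math import comb
from scipy.stats import chisquare
def c(a, b):
    return comb(a, b) if 0 <= b <= a else 0      # C(a, b) = 0 outside 0 <= b <= a
p, q = 0.06, 0.07                                # assumed stroke probabilities
r = 1 - p - q
def par4_prob(n):
    # Explicit par-four pmf from the formula given in the Definitions section.
    return (c(n-1, n-2) * q**(n-2) * p**2
            + c(n-1, n-3) * q**(n-3) * (2*p**2*r + 3*p*r**2)
            + c(n-1, n-4) * q**(n-4) * (p*r**3 + r**4))
observed = np.array([1, 22, 98, 28, 6, 1, 0])    # hypothetical counts of scores 2..8
probs = np.array([par4_prob(n) for n in range(2, 9)])
expected = observed.sum() * probs / probs.sum()  # renormalized over the retained scores
# ddof=2 accounts for the two parameters (p and q) that would be estimated from the data.
stat, p_value = chisquare(observed, expected, ddof=2)
print(stat, p_value)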
References.
Notes
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P \\left( X=n \\right) \\, = \\,\\sum _{j=\\frac{m+1}{2}}^{m}{n-1\\choose n-j}{q}^{n-j} \\left( A_{{j,m}}+B_{{j,m}} \\right) "
},
{
"math_id": 1,
"text": "P \\left( X=n \\right) \\, = \\,\\sum _{j=\\frac{m}{2}}^{m}{n-1\\choose n-j}{q}^{n-j} \\left( A_{{j,m}}+B_{{j,m}} \\right) "
},
{
"math_id": 2,
"text": "A_{{j,m}}\\, = \\, {j-1\\choose 2\\,j-m-1}{p}^{m-j+1} \\left( 1-p-q \\right) ^{2\\,j-m-1}"
},
{
"math_id": 3,
"text": "B_{{j,m}}\\, = \\, {j\\choose 2\\,j-m}{p}^{m-j} \\left( 1-p-q \\right) ^{2\\,j-m}"
},
{
"math_id": 4,
"text": "M _{m }\\left(t \\right)=\\sum _{j=\\frac{m}{2}+\\frac{1}{2}}^{m}\\frac{\\left(X _{{\\it jm} }+Y _{{\\it jm} }\\right)~e^{j ~t }}{\\left(1-e^{t }~q \\right)^{j }}"
},
{
"math_id": 5,
"text": "M _{m }\\left(t \\right)=\\sum _{j=\\frac{m}{2}}^{m}\\frac{\\left(X _{{\\it jm} }+Y _{{\\it jm} }\\right)~e^{j ~t }}{\\left(1-e^{t }~q \\right)^{j }}"
},
{
"math_id": 6,
"text": "X_{{{\\it jm}}}\\, = \\,{j-1\\choose 2\\,j-m-1}{p}^{m+1-j} \\left( 1-p-q \\right) ^{2\\,j-m-1}"
},
{
"math_id": 7,
"text": "Y_{{{\\it jm}}}\\, = \\,{j\\choose 2\\,j-m}{p}^{-j+m} \\left( 1-p-q \\right) ^{2\\,j-m}"
},
{
"math_id": 8,
"text": "(G)"
},
{
"math_id": 9,
"text": "(B)"
},
{
"math_id": 10,
"text": "(O)"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "q"
},
{
"math_id": 13,
"text": "1 - p - q"
},
{
"math_id": 14,
"text": "OG"
},
{
"math_id": 15,
"text": "GO"
},
{
"math_id": 16,
"text": "GG"
},
{
"math_id": 17,
"text": "(1-p-q) \\, p"
},
{
"math_id": 18,
"text": "p \\, (1-p-q)"
},
{
"math_id": 19,
"text": "p^2"
},
{
"math_id": 20,
"text": "m"
},
{
"math_id": 21,
"text": "m = 1, 2, \\ldots"
},
{
"math_id": 22,
"text": "n = \\frac{m}{2}, \\frac{m}{2} + 1, \\frac{m}{2} + 2, \\ldots"
},
{
"math_id": 23,
"text": "n = \\frac{m+1}{2}, \\frac{m+1}{2} + 1, \\frac{m+1}{2} + 2, \\ldots"
},
{
"math_id": 24,
"text": "0 < p < 1"
},
{
"math_id": 25,
"text": "0 < q < 1"
},
{
"math_id": 26,
"text": "0 < p+q < 1"
},
{
"math_id": 27,
"text": "\\begin{align}\nP \\left( T_{{3}}=n \\right) &= {n-1\\choose n-2} {q}^{n-2} \\left( {p}^{2}+2\\,p \\, \\left( 1-p-q \\right) \\right)+\\\\\n&+{n-1\\choose n-3}{q}^{n-3} \\left( p \\, \\left( 1-p-q \\right) ^{2}+ \\left( 1-p-q \\right) ^{3} \\right)\n\\end{align}"
},
{
"math_id": 28,
"text": "\\begin{align}\nP \\left( T_{{4}}=n \\right) &= {n-1\\choose n-2} {q}^{n-2}{p}^{2}+\\\\\n&+{n-1\\choose n-3}{q}^{n-3} \\left( 2\\,{p}^{2} \\left( 1-p-q \\right) +3\\,p \\, \\left( 1-p-q \\right) ^{2} \\right)+\\\\\n&+{n-1\\choose n-4}{q}^{n-4} \\left( p \\, \\left( 1-p-q \\right) ^{3}+ \\left( 1-p-q \\right) ^{4} \\right)\n\\end{align}"
},
{
"math_id": 29,
"text": "P(T_{3}=n)"
},
{
"math_id": 30,
"text": "\\begin{align}\nP \\left( T_{{5}}=n \\right) &= {n-1\\choose n-3}{q}^{n-3} \\left( {p}^{3}+3\\,{p}^{2} \\left( 1-p-q \\right) \\right) +\\\\\n&+{n-1\\choose n-4}{q}^{n-4} \\left( 3\\,{p}^{2} \\left( 1-p-q \\right) ^{2}+4\\,p \\, \\left( 1-p-q \\right) ^{3} \\right) +\\\\\n&+{n-1\\choose n-5}{q}^{n-5} \\left( p \\, \\left( 1-p-q \\right) ^{4}+ \\left( 1-p-q \\right) ^{5} \\right)\n\\end{align}"
},
{
"math_id": 31,
"text": "P(T_{4}=n)"
},
{
"math_id": 32,
"text": "1-p"
},
{
"math_id": 33,
"text": "1-p-q"
},
{
"math_id": 34,
"text": "(E)"
},
{
"math_id": 35,
"text": "p = q"
}
]
| https://en.wikipedia.org/wiki?curid=74104164 |
74110097 | Kleene equality | Equality operator on partial functions
In mathematics, Kleene equality, or strong equality, (formula_0) is an equality operator on partial functions that states that, on a given argument, either both functions are undefined or both are defined and their values on that argument are equal.
For example, if we have partial functions formula_1 and formula_2, formula_3 means that for every formula_4:
Some authors use "quasi-equality", which is defined as follows:
formula_8
where the down arrow means that the term on the left side of it is defined.
Then it becomes possible to define the strong equality in the following way:
formula_9 | [
{
"math_id": 0,
"text": "\\simeq"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "f \\simeq g"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "f(x)"
},
{
"math_id": 6,
"text": "g(x)"
},
{
"math_id": 7,
"text": "f(x) = g(x)"
},
{
"math_id": 8,
"text": "\n(y_1 \\sim y_2):\\Leftrightarrow((y_1\\downarrow \\lor y_2\\downarrow)\\longrightarrow y_1=y_2),\n"
},
{
"math_id": 9,
"text": "\n(f \\simeq g):\\Leftrightarrow (\\forall x. (f(x)\\sim g(x))).\n"
}
]
| https://en.wikipedia.org/wiki?curid=74110097 |
7411694 | Aitken's delta-squared process | Numerical analysis series acceleration method
In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926. Its early form was known to Seki Kōwa (end of 17th century) and was found for rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.
Definition.
Given a sequence formula_0, one associates with this sequence the new sequence
formula_1
which can, with improved numerical stability, also be written as
formula_2
or equivalently as
formula_3
where
formula_4
and
formula_5
for formula_6.
Obviously, formula_7 is ill-defined if formula_8 contains a zero element, or equivalently, if the sequence of first differences has a repeating term.
From a theoretical point of view, if that occurs only for a finite number of indices, one could easily agree to consider the sequence formula_7 restricted to indices formula_9 with a sufficiently large formula_10. From a practical point of view, one does in general rather consider only the first few terms of the sequence, which usually provide the needed precision. Moreover, when numerically computing the sequence, one has to take care to stop the computation when rounding errors in the denominator become too large, where the Δ2 operation may cancel too many significant digits. (It would be better for numerical calculation to use formula_11 rather than formula_12.)
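A minimal Python sketch of the transformation, using the numerically more stable form noted above (the function name and the example sequence are illustrative choices):
def aitken(x):
    # Aitken's delta-squared transform of a sequence x; the result is two terms shorter.
    ax = []
    for n in range(len(x) - 2):
        d1 = x[n + 1] - x[n]
        d2 = (x[n + 2] - x[n + 1]) - (x[n + 1] - x[n])
        if d2 == 0:
            ax.append(x[n + 2])   # transform undefined here; fall back to the plain term
            continue
        ax.append(x[n] - d1 * d1 / d2)
    return ax
# Linearly convergent example with two geometric error terms: x_n = 1 + 0.5^n + 0.1^n.
x = [1 + 0.5 ** n + 0.1 ** n for n in range(12)]
ax = aitken(x)
print(x[8] - 1, ax[8] - 1)   # the transformed term is far closer to the limit 1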
Properties.
Aitken's delta-squared process is a method of acceleration of convergence, and a particular case of a nonlinear sequence transformation.
Convergence of formula_13 to limit formula_14 is called "linear" if there is some number "μ" ∈ (0, 1) for which
formula_15
This means that the distance between the sequence and its limit shrinks by nearly the same proportion on every step, and that rate of reduction becomes closer to being constant with every step. (This is also called "geometric convergence"; this form of convergence is common for power series.)
Aitken's method will accelerate the sequence formula_16 if formula_17
formula_18 is not a linear operator, but a constant term drops out, viz: formula_19 if formula_20 is a constant. This is clear from the expression of formula_21 in terms of the finite difference operator formula_22
Although the new process does not in general converge quadratically, it can be shown that for a fixed point process, that is, for an iterated function sequence formula_23 for some function formula_24 converging to a fixed point, the convergence is quadratic. In this case, the technique is known as Steffensen's method.
Empirically, the "A"-operation eliminates the "most important error term". One can check this by considering a sequence of the form formula_25, where formula_26:
The sequence formula_27 will then go to the limit like formula_28 goes to zero.
Geometrically, the graph of an exponential function formula_29 that satisfies formula_30, formula_31 and formula_32 has a horizontal asymptote at formula_33 (if formula_34).
One can also show that if formula_35 goes to its limit formula_20 at a rate strictly greater than 1, formula_27 does not have a better rate of convergence. (In practice, one rarely has e.g. quadratic convergence which would mean over 30 resp. 100 correct decimal places after 5 resp. 7 iterations (starting with 1 correct digit); usually no acceleration is needed in that case.)
In practice, formula_27 converges much faster to the limit than formula_35 does, as demonstrated by the example calculations below.
Usually, it is much cheaper to calculate formula_27 (involving only calculation of differences, one multiplication and one division) than to calculate many more terms of the sequence formula_35. Care must be taken, however, to avoid introducing errors due to insufficient precision when calculating the "differences" in the numerator and denominator of the expression.
Example calculations.
Example 1: The value of formula_36 can be approximated by assuming an initial value for formula_37 and iterating the following:
formula_38
Starting with formula_39
It is worth noting here that Aitken's method does not save two iteration steps; computation of the first three "Ax" values required the first five "x" values. Also, the second Ax value is decidedly inferior to the 4th x value, mostly due to the fact that Aitken's process assumes linear, rather than quadratic, convergence.
Example 2: The value of formula_40 may be calculated as an infinite sum:
formula_41
In this example, Aitken's method is applied to a sublinearly converging series, accelerating convergence considerably. It is still sublinear, but much faster than the original convergence: the first Ax value, whose computation required the first three x values, is closer to the limit than the eighth x value.
Example pseudocode for Aitken extrapolation.
The following is an example of using the Aitken extrapolation to help find the limit of the sequence formula_42 when given some initial formula_43 where the limit of this sequence is assumed to be a fixed point formula_44 (say formula_45). For instance, if the sequence is given by formula_46 with starting point formula_47 then the function will be formula_48 which has formula_49 as a fixed point (see Methods of computing square roots); it is this fixed point whose value will be approximated.
This pseudo code also computes the Aitken approximation to formula_50. The Aitken extrapolates will be denoted by codice_0. During the computation of the extrapolate, it is important to check if the denominator becomes too small, which could happen if we already have a large amount of accuracy; without this check, a large amount of error could be introduced by the division. This small number will be denoted by codice_1. Because the binary representation of the fixed point could be infinite (or at least too large to fit in the available memory), the calculation will stop once the approximation is within codice_2 of the true value.
%These choices depend on the problem being solved
x0 = 1 %The initial value
f(x) = (1/2)*(x + 2/x) %The function that finds the next element in the sequence
tolerance = 10^-10 %10 digit accuracy is desired
epsilon = 10^-16 %Do not divide by a number smaller than this
maxIterations = 20 %Do not allow the iterations to continue indefinitely
haveWeFoundSolution = false %Were we able to find the solution to within the desired tolerance? not yet
for i = 1 : maxIterations
x1 = f(x0)
x2 = f(x1)
if (x1 ~= x0)
lambda = absoluteValue((x2 - x1)/(x1 - x0)) %OPTIONAL: Computes an approximation of |f'(fixedPoint)|, which is denoted by lambda
end
denominator = (x2 - x1) - (x1 - x0);
if (absoluteValue(denominator) < epsilon) %To avoid greatly increasing error, do not divide by too small of a number
print('WARNING: denominator is too small')
break %Leave the loop
end
aitkenX = x2 - ( (x2 - x1)^2 )/denominator
if (absoluteValue(aitkenX - x2) < tolerance) %If the value is within tolerance
print("The fixed point is ", aitkenX)) %Display the result of the Aitken extrapolation
haveWeFoundSolution = true
break %Done, so leave the loop
end
x0 = aitkenX %Update x0 to start again
end
if (haveWeFoundSolution == false) %If we were not able to find a solution to within the desired tolerance
print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
print("The last computed extrapolate was ", aitkenX)
end | [
{
"math_id": 0,
"text": "X = {(x_n)}_{n\\in\\N}"
},
{
"math_id": 1,
"text": "A X = {\\left(\\frac{x_n\\,x_{n+2}-x_{n+1}^2}{x_n+x_{n+2}-2\\,x_{n+1}}\\right)}_{n\\in\\Z^*},"
},
{
"math_id": 2,
"text": " (A X)_n = x_n-\\frac{(\\Delta x_n)^2}{\\Delta^2 x_n},"
},
{
"math_id": 3,
"text": "(A X)_n = x_{n+2} - \\frac{(\\Delta x_{n+1})^2}{\\Delta^2 x_n} = x_{n+2} - \\frac{(x_{n+2}-x_{n+1})^2}{(x_{n+2}-x_{n+1})-(x_{n+1}-x_{n})}"
},
{
"math_id": 4,
"text": "\\Delta x_{n}={(x_{n+1}-x_{n})},\\ \\Delta x_{n+1}={(x_{n+2}-x_{n+1})},"
},
{
"math_id": 5,
"text": "\\Delta^2 x_n=x_n -2x_{n+1} + x_{n+2}=\\Delta x_{n+1}-\\Delta x_{n},\\ "
},
{
"math_id": 6,
"text": "n = 0, 1, 2, 3, \\dots "
},
{
"math_id": 7,
"text": " A X "
},
{
"math_id": 8,
"text": " \\Delta^2 x "
},
{
"math_id": 9,
"text": " n > n_0 "
},
{
"math_id": 10,
"text": " n_0 "
},
{
"math_id": 11,
"text": " \\Delta x_{n+1} - \\Delta x_{n}\\ = (x_{n+2}-x_{n+1})-(x_{n+1}-x_{n})\\ "
},
{
"math_id": 12,
"text": "x_n - 2x_{n+1} + x_{n+2}\\ "
},
{
"math_id": 13,
"text": "\\ \\{ x_n \\}_{n=1}^\\infty\\ "
},
{
"math_id": 14,
"text": "\\ \\ell\\ "
},
{
"math_id": 15,
"text": " \\lim_{n\\to \\infty} \\frac{|x_{n+1} - \\ell|}{|x_n - \\ell|} = \\mu\\ ."
},
{
"math_id": 16,
"text": "x_n"
},
{
"math_id": 17,
"text": "\\ \\lim_{n \\to \\infty}\\frac{(A x)_n-\\ell}{x_n-\\ell} = 0\\ ."
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": "\\ A[x-\\ell] = Ax - \\ell\\ ,"
},
{
"math_id": 20,
"text": "\\ell"
},
{
"math_id": 21,
"text": "\\ Ax\\ "
},
{
"math_id": 22,
"text": "\\ \\Delta\\ ."
},
{
"math_id": 23,
"text": "\\ x_{n+1} = f(x_n)\\ "
},
{
"math_id": 24,
"text": "\\ f\\ ,"
},
{
"math_id": 25,
"text": "x_n=\\ell+a^n+b^n"
},
{
"math_id": 26,
"text": "0 < b < a < 1"
},
{
"math_id": 27,
"text": "Ax"
},
{
"math_id": 28,
"text": "b^n"
},
{
"math_id": 29,
"text": "f(t)"
},
{
"math_id": 30,
"text": "f(n) = x_n"
},
{
"math_id": 31,
"text": "f(n+1) = x_{n+1}"
},
{
"math_id": 32,
"text": "f(n+2)=x_{n+2}"
},
{
"math_id": 33,
"text": "\\frac{x_{n} x_{n+2}-x_{n+1}^2}{x_{n}-2x_{n+1}+x_{n+2}}"
},
{
"math_id": 34,
"text": "x_{n} - 2x_{n+1} + x_{n+2} \\neq 0"
},
{
"math_id": 35,
"text": "x"
},
{
"math_id": 36,
"text": "\\sqrt{2} \\approx 1.4142136"
},
{
"math_id": 37,
"text": "a_0"
},
{
"math_id": 38,
"text": "a_{n+1} = \\frac{a_n + \\frac{2}{a_n}}{2}. "
},
{
"math_id": 39,
"text": "a_0 = 1:"
},
{
"math_id": 40,
"text": "\\frac{\\pi}{4}"
},
{
"math_id": 41,
"text": "\\frac{\\pi}{4} = \\sum_{n=0}^\\infty \\frac{(-1)^n}{2n+1} \\approx 0.785398"
},
{
"math_id": 42,
"text": "x_{n+1} = f(x_n)"
},
{
"math_id": 43,
"text": "x_0,"
},
{
"math_id": 44,
"text": "f"
},
{
"math_id": 45,
"text": "\\alpha = f(\\alpha)"
},
{
"math_id": 46,
"text": "x_{n+1} = \\frac{1}{2} \\left(x_n + \\frac{2}{x_n}\\right)"
},
{
"math_id": 47,
"text": "x_0 = 1,"
},
{
"math_id": 48,
"text": "f(x) := \\frac{1}{2}\\left(x + \\frac{2}{x}\\right),"
},
{
"math_id": 49,
"text": "\\alpha := \\sqrt{2}"
},
{
"math_id": 50,
"text": "f^{\\prime}(\\alpha)"
}
]
| https://en.wikipedia.org/wiki?curid=7411694 |
7411740 | Ars Magna (Cardano book) | 1545 text on mathematics by Gerolamo Cardano
The Ars Magna ("The Great Art", 1545) is an important Latin-language book on algebra written by Gerolamo Cardano. It was first published in 1545 under the title Artis Magnae, Sive de Regulis Algebraicis Liber Unus ("Book number one about The Great Art, or The Rules of Algebra"). There was a second edition in Cardano's lifetime, published in 1570. It is considered one of the three greatest scientific treatises of the early Renaissance, together with Copernicus' "De revolutionibus orbium coelestium" and Vesalius' "De humani corporis fabrica". The first editions of these three books were published within a two-year span (1543–1545).
History.
In 1535 Niccolò Fontana Tartaglia became famous for having solved cubics of the form "x"3 + "ax" = "b" (with "a","b" > 0). However, he chose to keep his method secret. In 1539, Cardano, then a lecturer in mathematics at the Piatti Foundation in Milan, published his first mathematical book, "Pratica Arithmeticæ et mensurandi singularis" ("The Practice of Arithmetic and Simple Mensuration"). That same year, he asked Tartaglia to explain to him his method for solving cubic equations. After some reluctance, Tartaglia did so, but he asked Cardano not to share the information until he published it. Cardano submerged himself in mathematics during the next several years working on how to extend Tartaglia's formula to other types of cubics. Furthermore, his student Lodovico Ferrari found a way of solving quartic equations, but Ferrari's method depended upon Tartaglia's, since it involved the use of an auxiliary cubic equation. Then Cardano became aware of the fact that Scipione del Ferro had discovered Tartaglia's formula before Tartaglia himself, a discovery that prompted him to publish these results.
Contents.
The book, which is divided into forty chapters, contains the first published algebraic solution to cubic and quartic equations. Cardano acknowledges that Tartaglia gave him the formula for solving a type of cubic equations and that the same formula had been discovered by Scipione del Ferro. He also acknowledges that it was Ferrari who found a way of solving quartic equations.
Since at the time negative numbers were not generally acknowledged, knowing how to solve cubics of the form "x"3 + "ax" = "b" did not mean knowing how to solve cubics of the form "x"3 = "ax" + "b" (with "a","b" > 0), for instance. Besides, Cardano also explains how to reduce equations of the form "x"3 + "ax"2 + "bx" + "c" = 0 to cubic equations without a quadratic term, but, again, he has to consider several cases. In all, Cardano was driven to the study of thirteen different types of cubic equations (chapters XI–XXIII).
In "Ars Magna" the concept of multiple root appears for the first time (chapter I). The first example that Cardano provides of a polynomial equation with multiple roots is "x"3 = 12"x" + 16, of which −2 is a double root.
"Ars Magna" also contains the first occurrence of complex numbers (chapter XXXVII). The problem mentioned by Cardano which leads to square roots of negative numbers is: find two numbers whose sum is equal to 10 and whose product is equal to 40. The answer is 5 + √−15 and 5 − √−15. Cardano called this "sophistic," because he saw no physical meaning to it, but boldly wrote "nevertheless we will operate" and formally calculated that their product does indeed equal 40. Cardano then says that this answer is "as subtle as it is useless".
It is a common misconception that Cardano introduced complex numbers in solving cubic equations. Since (in modern notation) Cardano's formula for a root of the polynomial "x"3 + "px" + "q" is
formula_0
square roots of negative numbers appear naturally in this context. However, "q"2/4 + "p"3/27 never happens to be negative in the specific cases in which Cardano applies the formula.
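In modern notation the formula is easy to evaluate; the following sketch (illustrative only, with an example cubic chosen to have the obvious root 2) handles the case in which the expression under the square root is non-negative:
from math import sqrt, copysign
def cardano_real_root(p, q):
    # One real root of the depressed cubic x^3 + p*x + q = 0, via the formula above,
    # valid when q^2/4 + p^3/27 >= 0 so that no complex intermediates appear.
    def cbrt(t):
        return copysign(abs(t) ** (1.0 / 3.0), t)   # real cube root, also for negative t
    d = sqrt(q * q / 4.0 + p ** 3 / 27.0)
    return cbrt(-q / 2.0 + d) + cbrt(-q / 2.0 - d)
print(cardano_real_root(6, -20))   # x^3 + 6x - 20 = 0 has the root x = 2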
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt[3]{-\\frac q2+\\sqrt{\\frac{q^2}{4}+\\frac{p^3}{27}}}+\\sqrt[3]{-\\frac q2-\\sqrt{\\frac{q^2}{4}+\\frac{p^3}{27}}},"
}
]
| https://en.wikipedia.org/wiki?curid=7411740 |
74122214 | Lexell's theorem | Characterizes spherical triangles with fixed base and area
In spherical geometry, Lexell's theorem holds that every spherical triangle with the same surface area on a fixed base has its apex on a small circle, called "Lexell's circle" or "Lexell's locus", passing through each of the two points antipodal to the two base vertices.
A "spherical triangle" is a shape on a sphere consisting of three vertices (corner points) connected by three sides, each of which is part of a "great circle" (the analog on the sphere of a straight line in the plane, for example the equator and meridians of a globe). Any of the sides of a spherical triangle can be considered the "base", and the opposite vertex is the corresponding "apex". Two points on a sphere are "antipodal" if they are diametrically opposite, as far apart as possible.
The theorem is named for Anders Johan Lexell, who presented a paper about it c. 1777 (published 1784) including both a trigonometric proof and a geometric one. Lexell's colleague Leonhard Euler wrote another pair of proofs in 1778 (published 1797), and a variety of proofs have been written since by Adrien-Marie Legendre (1800), Jakob Steiner (1827), Carl Friedrich Gauss (1841), Paul Serret (1855), and Joseph-Émile Barbier (1864), among others.
The theorem is the analog of propositions 37 and 39 in Book I of Euclid's "Elements", which prove that every planar triangle with the same area on a fixed base has its apex on a straight line parallel to the base. An analogous theorem can also be proven for hyperbolic triangles, for which the apex lies on a hypercycle.
Statement.
Given a fixed base formula_0 an arc of a great circle on a sphere, and two apex points formula_1 and formula_2 on the same side of great circle formula_0 Lexell's theorem holds that the surface area of the spherical triangle formula_3 is equal to that of formula_4 if and only if formula_2 lies on the small-circle arc formula_5 where formula_6 and formula_7 are the points antipodal to formula_8 and formula_9 respectively.
As one analog of the planar formula formula_10 for the area of a triangle, the spherical excess formula_11 of spherical triangle formula_4 can be computed in terms of the base formula_12 (the angular length of arc formula_13) and "height" formula_14 (the angular distance between the parallel small circles formula_15 and formula_16):
formula_17
This formula is based on consideration of a sphere of radius formula_18, on which arc length is called "angle measure" and surface area is called "spherical excess" or "solid angle measure". The angle measure of a complete great circle is formula_19 radians, and the spherical excess of a hemisphere (half-sphere) is formula_19 steradians, where formula_20 is the circle constant.
In the limit for triangles much smaller than the radius of the sphere, this reduces to the planar formula.
The small circles formula_15 and formula_16 each intersect the great circle formula_13 at an angle of formula_21
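The statement lends itself to a simple numerical check. The sketch below (illustrative only; the base and apex points are arbitrary choices) computes the spherical excess from the angle sum, as in Girard's theorem used in the proofs below, and shows that it does not change as the apex slides along the small circle through the two antipodal points:
import numpy as np
def unit(v):
    return v / np.linalg.norm(v)
def spherical_excess(A, B, C):
    # Sum of the interior angles minus pi (Girard's theorem), on the unit sphere.
    def angle_at(P, Q, R):
        u = unit(Q - np.dot(P, Q) * P)   # tangent direction at P toward Q
        v = unit(R - np.dot(P, R) * P)   # tangent direction at P toward R
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    return angle_at(A, B, C) + angle_at(B, C, A) + angle_at(C, A, B) - np.pi
A = unit(np.array([1.0, 0.0, 0.2]))   # base vertex
B = unit(np.array([0.0, 1.0, 0.2]))   # base vertex
C = unit(np.array([0.3, 0.3, 1.0]))   # apex
# Lexell circle: the small circle through the antipodes -A, -B and the apex C.
P1, P2, P3 = -A, -B, C
n = unit(np.cross(P2 - P1, P3 - P1))   # normal of the plane containing the circle
center = np.dot(n, P1) * n             # point of that plane closest to the sphere's centre
radius = np.sqrt(1.0 - np.dot(n, P1) ** 2)
u = unit(P1 - center)
w = np.cross(n, u)
theta_C = np.arctan2(np.dot(C - center, w), np.dot(C - center, u))
print(spherical_excess(A, B, C))
for dt in (-0.2, -0.1, 0.1, 0.2):      # nearby apexes on the same arc of the circle
    C2 = center + radius * (np.cos(theta_C + dt) * u + np.sin(theta_C + dt) * w)
    print(spherical_excess(A, B, C2))  # the same excess each time (Lexell's theorem)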
Proofs.
There are several ways to prove Lexell's theorem, each illuminating a different aspect of the relationships involved.
Isosceles triangles.
The main idea in Lexell's c. 1777 geometric proof – also adopted by Eugène Catalan (1843), Robert Allardice (1883), Jacques Hadamard (1901), Antoine Gob (1922), and Hiroshi Maehara (1999) – is to split the triangle formula_22 into three isosceles triangles with common apex at the circumcenter formula_23 and then chase angles to find the spherical excess formula_11 of triangle formula_24 In the figure, points formula_8 and formula_25 are on the far side of the sphere so that we can clearly see their antipodal points and all of Lexell's circle formula_26
Let the base angles of the isosceles triangles formula_27 (shaded red in the figure), formula_28 (blue), and formula_29 (purple) be respectively formula_30 formula_31 and formula_32 (In some cases formula_23 is outside formula_22; then one of the quantities formula_33 will be negative.) We can compute the internal angles of formula_4 (orange) in terms of these angles: formula_34 (the supplement of formula_35) and likewise formula_36 and finally formula_37
By Girard's theorem the spherical excess of formula_4 is
formula_38
If base formula_13 is fixed, for any third vertex formula_1 falling on the same arc of Lexell's circle, the point formula_23 and therefore the quantity formula_39 will not change, so the excess formula_11 of formula_40 which depends only on formula_41 will likewise be constant. And vice versa: if formula_11 remains constant when the point formula_1 is changed, then so must formula_39 be, and therefore formula_23 must be fixed, so formula_1 must remain on Lexell's circle.
Cyclic quadrilateral.
Jakob Steiner (1827) wrote a proof in similar style to Lexell's, also using Girard's theorem, but demonstrating the angle invariants in the triangle formula_22 by constructing a cyclic quadrilateral inside the Lexell circle, using the property that pairs of opposite angles in a spherical cyclic quadrilateral have the same sum.
Starting with a triangle formula_4, let formula_42 be the Lexell circle circumscribing formula_43 and let formula_44 be another point on formula_42 separated from formula_1 by the great circle formula_45 Let formula_46 formula_47 formula_48 formula_49
Because the quadrilateral formula_50 is cyclic, the sum of each pair of its opposite angles is equal, formula_51formula_52 or rearranged formula_53formula_54
By Girard's theorem the spherical excess formula_11 of formula_4 is
formula_55
The quantity formula_56 does not depend on the choice of formula_57 so is invariant when formula_1 is moved to another point on the same arc of formula_26 Therefore formula_11 is also invariant.
Conversely, if formula_1 is changed but formula_11 is invariant, then the opposite angles of the quadrilateral formula_50 will have the same sum, which implies formula_1 lies on the small circle formula_58
Spherical parallelograms.
Euler in 1778 proved Lexell's theorem analogously to Euclid's proof of "Elements" I.35 and I.37, as did Victor-Amédée Lebesgue independently in 1855, using spherical parallelograms – spherical quadrilaterals with congruent opposite sides, which have parallel small circles passing through opposite pairs of adjacent vertices and are in many ways analogous to Euclidean parallelograms. There is one complication compared to Euclid's proof, however: The four sides of a spherical parallelogram are the great-circle arcs through the vertices rather than the parallel small circles. Euclid's proof does not need to account for the small lens-shaped regions sandwiched between the great and small circles, which vanish in the planar case.
A lemma analogous to "Elements" I.35: two spherical parallelograms on the same base and between the same parallels have equal area.
"Proof": Let formula_59 and formula_60 be spherical parallelograms with the great circle formula_61 (the "midpoint circle") passing through the midpoints of sides formula_62 and formula_63 coinciding with the corresponding midpoint circle in formula_64 Let formula_65 be the intersection point between sides formula_66 and formula_67 Because the midpoint circle formula_61 is shared, the two top sides formula_68 and formula_69 lie on the same small circle formula_42 parallel to formula_61 and antipodal to a small circle formula_70 passing through formula_8 and formula_71
Two arcs of formula_42 are congruent, formula_72 thus the two curvilinear triangles formula_73 and formula_74 each bounded by formula_42 on the top side, are congruent. Each parallelogram is formed from one of these curvilinear triangles added to the triangle formula_75 and to one of the congruent lens-shaped regions between each top side and formula_76 with the curvilinear triangle formula_77 cut away. Therefore the parallelograms have the same area. (As in "Elements", the case where the parallelograms do not intersect on the sides is omitted, but can be proven by a similar argument.)
"Proof of Lexell's theorem": Given two spherical triangles formula_78 and formula_79 each with its apex on the same small circle formula_42 through points formula_6 and formula_80 construct new segments formula_68 and formula_69 congruent to formula_13 with vertices formula_81 and formula_82 on formula_26 The two quadrilaterals formula_59 and formula_60 are spherical parallelograms, each formed by pasting together the respective triangle and a congruent copy. By the lemma, the two parallelograms have the same area, so the original triangles must also have the same area.
"Proof of the converse": If two spherical triangles have the same area and the apex of the second is assumed to not lie on the Lexell circle of the first, then the line through one side of the second triangle can be intersected with the Lexell circle to form a new triangle which has a different area from the second triangle but the same area as the first triangle, a contradiction. This argument is the same as that found in "Elements" I.39.
Saccheri quadrilateral.
Another proof using the midpoint circle which is more visually apparent in a single picture is due to Carl Friedrich Gauss (1841), who constructs the Saccheri quadrilateral (a quadrilateral with two adjacent right angles and two other equal angles) formed between the side of the triangle and its perpendicular projection onto the midpoint circle formula_83 which has the same area as the triangle.
Let formula_61 be the great circle through the midpoints formula_85 of formula_86 and formula_87 of formula_88 and let formula_89 formula_90 and formula_91 be the perpendicular projections of the triangle vertices onto formula_92 The resulting pair of right triangles formula_93 and formula_94 (shaded red) have equal angles at formula_85 (vertical angles) and equal hypotenuses, so they are congruent; so are the triangles formula_95 and formula_96 (blue). Therefore, the area of triangle formula_4 is equal to the area of Saccheri quadrilateral formula_97 as each consists of one red triangle, one blue triangle, and the green quadrilateral formula_98 pasted together. (If formula_91 falls outside the arc formula_99 then either the red or blue triangles will have negative signed area.) Because the great circle formula_83 and therefore the quadrilateral formula_97 is the same for any choice of formula_1 lying on the Lexell circle formula_76 the area of the corresponding triangle formula_4 is constant.
Stereographic projection.
The stereographic projection maps the sphere to the plane. A designated great circle is mapped onto the "primitive circle" in the plane, and its poles are mapped to the origin (center of the primitive circle) and the point at infinity, respectively. Every circle on the sphere is mapped to a circle or straight line in the plane, with straight lines representing circles through the second pole. The stereographic projection is conformal, meaning it preserves angles.
To prove relationships about a general spherical triangle formula_40 without loss of generality vertex formula_8 can be taken as the point which projects to the origin. The sides of the spherical triangle then project to two straight segments and a circular arc. If the tangent lines to the circular side at the other two vertices intersect at point formula_100 a planar straight-sided quadrilateral formula_101 can be formed whose external angle at formula_102 is the spherical excess formula_103 of the spherical triangle. This is sometimes called the Cesàro method of spherical trigonometry, after crystallographer Giuseppe Cesàro who popularized it in two 1905 papers.
Paul Serret (in 1855, a half century before Cesàro), and independently Aleksander Simonič (2019), used Cesàro's method to prove Lexell's theorem. Let formula_104 be the center in the plane of the circular arc to which side formula_105 projects. Then formula_106 is a right kite, so the central angle formula_107 is equal to the external angle at formula_100 the triangle's spherical excess formula_108 Planar angle formula_109 is an inscribed angle subtending the same arc, so by the inscribed angle theorem has measure formula_110 This relationship is preserved for any choice of formula_1; therefore, the spherical excess of the triangle is constant whenever formula_1 remains on the Lexell circle formula_76 which projects to a line through formula_7 in the plane. (If the area of the triangle is greater than a half-hemisphere, a similar argument can be made, but the point formula_102 is no longer internal to the angle formula_111)
Perimeter of the polar triangle.
Every spherical triangle has a dual, its polar triangle; if triangle formula_112 (shaded purple) is the polar triangle of formula_4 (shaded orange) then the vertices formula_113 are the poles of the respective sides formula_114 and vice versa, the vertices formula_115 are the poles of the sides formula_116 The polar duality exchanges the sides (central angles) and external angles (dihedral angles) between the two triangles.
Because each side of the dual triangle is the supplement of an internal angle of the original triangle, the spherical excess formula_11 of formula_4 is a function of the perimeter formula_117 of the dual triangle formula_112:
formula_118
where the notation formula_119 means the angular length of the great-circle arc formula_120
In 1854 Joseph-Émile Barbier – and independently László Fejes Tóth (1953) – used the polar triangle to prove Lexell's theorem, by an argument essentially dual to the proof by isosceles triangles above, noting that under polar duality the Lexell circle formula_42 circumscribing formula_22 becomes an excircle formula_121 of formula_112 (incircle of a colunar triangle) externally tangent to side formula_122
If vertex formula_1 is moved along formula_76 the side formula_123 changes but always remains tangent to the same circle formula_124 Because the arcs from each vertex to either adjacent touch point of an incircle or excircle are congruent, formula_125 (blue segments) and formula_126 (red segments), the perimeter formula_117 is
formula_127
which remains constant, depending only on the circle formula_121 but not on the changing side formula_122 Conversely, if the point formula_1 moves off of formula_76 the associated excircle formula_121 will change in size, moving the points formula_128 and formula_129 both toward or both away from formula_130 and changing the perimeter formula_117 of formula_131 and thus changing formula_108
The locus of points formula_1 for which formula_11 is constant is therefore formula_26
Trigonometric proofs.
Both Lexell (c. 1777) and Euler (1778) included trigonometric proofs in their papers, and several later mathematicians have presented trigonometric proofs, including Adrien-Marie Legendre (1800), Louis Puissant (1842), Ignace-Louis-Alfred Le Cointe (1858), and Joseph-Alfred Serret (1862). Such proofs start from known triangle relations such as the spherical law of cosines or a formula for spherical excess, and then proceed by algebraic manipulation of trigonometric identities.
Opposite arcs of Lexell's circle.
The sphere is separated into two hemispheres by the great circle formula_0 and any Lexell circle through formula_6 and formula_7 is separated into two arcs, one in each hemisphere. If the point formula_2 is on the opposite arc from formula_57 then the areas of formula_4 and formula_3 will generally differ. However, if spherical surface area is interpreted to be signed, with sign determined by boundary orientation, then the areas of triangle formula_4 and formula_3 have opposite signs and differ by the area of a hemisphere.
Lexell suggested a more general framing. Given two distinct non-antipodal points formula_8 and formula_9 there are two great-circle arcs joining them: one shorter than a semicircle and the other longer. Given a triple formula_132 of points, typically formula_4 is interpreted to mean the area enclosed by the three shorter arcs joining each pair. However, if we allow choice of arc for each pair, then 8 distinct generalized spherical triangles can be made, some with self intersections, of which four might be considered to have the same base formula_133
These eight triangles do not all have the same surface area, but if area is interpreted to be signed, with sign determined by boundary orientation, then those which differ differ by the area of a hemisphere.
In this context, given four distinct, non-antipodal points formula_134 formula_9 formula_57 and formula_2 on a sphere, Lexell's theorem holds that the signed surface area of any generalized triangle formula_4 differs from that of any generalized triangle formula_3 by a whole number of hemispheres if and only if formula_135 formula_80 formula_57 and formula_2 are concyclic.
Special cases.
Lunar degeneracy.
As the apex formula_1 approaches either of the points antipodal to the base vertices – say formula_7 – along Lexell's circle formula_76 in the limit the triangle degenerates to a lune tangent to formula_42 at formula_7 and tangent to the antipodal small circle formula_70 at formula_9 and having the same excess formula_11 as any of the triangles with apex on the same arc of formula_26 As a degenerate triangle, it has a straight angle at formula_8 (i.e. formula_136 a half turn) and equal angles formula_137
As formula_1 approaches formula_7 from the opposite direction (along the other arc of Lexell's circle), in the limit the triangle degenerates to the co-hemispherical lune tangent to the Lexell circle at formula_7 with the opposite orientation and angles formula_138
Half-hemisphere area.
The area of a spherical triangle is equal to half a hemisphere (excess formula_139) if and only if the Lexell circle formula_140 is orthogonal to the great circle formula_0 that is if arc formula_141 is a diameter of circle formula_140 and arc formula_13 is a diameter of formula_142
In this case, letting formula_44 be the point diametrically opposed to formula_1 on the Lexell circle formula_140 then the four triangles formula_40 formula_143 formula_144 and formula_145 are congruent, and together form a spherical disphenoid formula_146 (the central projection of a disphenoid onto a concentric sphere). The eight points formula_147 are the vertices of a rectangular cuboid.
Related concepts and results.
Spherical parallelogram.
A "spherical parallelogram" is a spherical quadrilateral formula_148 whose opposite sides and opposite angles are congruent (formula_149 formula_150 formula_151 formula_152). It is in many ways analogous to a planar parallelogram. The two diagonals formula_86 and formula_153 bisect each-other and the figure has 2-fold rotational symmetry about the intersection point (so the diagonals each split the parallelogram into two congruent spherical triangles, formula_154 and formula_155); if the midpoints of either pair of opposite sides are connected by a great circle formula_61, the four vertices fall on two parallel small circles equidistant from it. More specifically, any vertex (say formula_44) of the spherical parallelogram lies at the intersection of the two Lexell circles (formula_156 and formula_157) passing through one of the adjacent vertices and the points antipodal to the other two vertices.
As with spherical triangles, spherical parallelograms with the same base and the apex vertices lying on the same Lexell circle have the same area; see above. Starting from any spherical triangle, a second congruent triangle can be formed via a (spherical) point reflection across the midpoint of any side. When combined, these two triangles form a spherical parallelogram with twice the area of the original triangle.
Sorlin's theorem (polar dual).
The polar dual to Lexell's theorem, sometimes called "Sorlin's theorem" after A. N. J. Sorlin who first proved it trigonometrically in 1825, holds that for a spherical trilateral formula_158 with sides on fixed great circles formula_159 (thus fixing the angle between them) and a fixed perimeter formula_160 (where formula_161 means the length of the triangle side formula_162), the envelope of the third side formula_12 is a small circle internally tangent to formula_159 and externally tangent to formula_163 the excircle to trilateral formula_164 Joseph-Émile Barbier later wrote a geometrical proof (1864) which he used to prove Lexell's theorem, by duality; see above.
This result also applies in Euclidean and hyperbolic geometry: Barbier's geometrical argument can be transplanted directly to the Euclidean or hyperbolic plane.
Foliation of the sphere.
Lexell's loci for any base formula_13 make a "foliation" of the sphere (decomposition into one-dimensional "leaves"). These loci are arcs of small circles with endpoints at formula_6 and formula_80 on which any intermediate point formula_1 is the apex of a triangle formula_165 of a fixed signed area. That area is twice the signed angle between the Lexell circle and the great circle formula_166 at either of the points formula_6 or formula_7; see above. In the figure, the Lexell circles are in green, except for those whose triangles' area is a multiple of a half hemisphere, which are black, with area labeled; see above.
These Lexell circles through formula_6 and formula_7 are the spherical analog of the family of Apollonian circles through two points in the plane.
Maximizing spherical triangle area subject to constraints.
In 1784 Nicolas Fuss posed and solved the problem of finding the triangle formula_4 of maximal area on a given base formula_13 with its apex formula_1 on a given great circle formula_167 Fuss used an argument involving infinitesimal variation of formula_57 but the solution is also a straightforward corollary of Lexell's theorem: the Lexell circle formula_140 through the apex must be tangent to formula_84 at formula_168
If formula_84 crosses the great circle through formula_13 at a point formula_23, then by the spherical analog of the tangent–secant theorem, the angular distance formula_169 to the desired point of tangency satisfies
formula_170
from which we can explicitly construct the point formula_1 on formula_84 such that formula_4 has maximum area.
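For instance, a minimal numerical sketch in Python (the two angular distances below are made-up inputs; the relation used is just the one stated above):

```python
import math

# Hypothetical angular distances (in radians) from the crossing point P to the
# antipodal points A* and B*, all lying on the great circle through A and B.
PA_star = 0.9
PB_star = 0.5

# tan^2(|PC|/2) = tan(|PA*|/2) * tan(|PB*|/2)
PC = 2.0 * math.atan(math.sqrt(math.tan(PA_star / 2) * math.tan(PB_star / 2)))
print(f"angular distance from P to the point of tangency C: {PC:.4f} rad")
```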
In 1786 Theodor von Schubert posed and solved the problem of finding the spherical triangles of maximum and minimum area of a given base and altitude (the spherical length of a perpendicular dropped from the apex to the great circle containing the base); spherical triangles with constant altitude have their apex on a common small circle (the "altitude circle") parallel to the great circle containing the base. Schubert solved this problem by a calculus-based trigonometric approach to show that the triangle of minimal area has its apex at the nearest intersection of the altitude circle and the perpendicular bisector of the base, and the triangle of maximal area has its apex at the far intersection. However, this theorem is also a straightforward corollary of Lexell's theorem: the Lexell circles through the points antipodal to the base vertices representing the smallest and largest triangle areas are those tangent to the altitude circle. In 2019 Vincent Alberge and Elena Frenkel solved the analogous problem in the hyperbolic plane.
Steiner's theorem on area bisectors.
In the Euclidean plane, a median of a triangle is the line segment connecting a vertex to the midpoint of the opposite side. The three medians of a triangle all intersect at its centroid. Each median bisects the triangle's area.
On the sphere, a median of a triangle can also be defined as the great-circle arc connecting a vertex to the midpoint of the opposite side. The three medians all intersect at a point, the central projection onto the sphere of the triangle's extrinsic centroid – that is, centroid of the flat triangle containing the three points if the sphere is embedded in 3-dimensional Euclidean space. However, on the sphere the great-circle arc through one vertex and a point on the opposite side which bisects the triangle's area is, in general, distinct from the corresponding median.
Jakob Steiner used Lexell's theorem to prove that these three area-bisecting arcs (which he called "equalizers") all intersect in a point, one possible alternative analog of the planar centroid in spherical geometry. (A different spherical analog of the centroid is the apex of three triangles of equal area whose bases are the sides of the original triangle, the point with formula_171 as its spherical area coordinates.)
Spherical area coordinates.
The barycentric coordinate system for points relative to a given triangle in affine space does not have a perfect analogy in spherical geometry; there is no single spherical coordinate system sharing all of its properties. One partial analogy is "spherical area coordinates" for a point formula_23 relative to a given spherical triangle formula_40
formula_172
where each quantity formula_173 is the signed spherical excess of the corresponding spherical triangle formula_174 These coordinates sum to formula_175 and using the same definition in the plane results in barycentric coordinates.
By Lexell's theorem, the locus of points with one coordinate constant is the corresponding Lexell circle. It is thus possible to find the point corresponding to a given triple of spherical area coordinates by intersecting two small circles.
Using their respective spherical area coordinates, any spherical triangle can be mapped to any other, or to any planar triangle, using corresponding barycentric coordinates in the plane. This can be used for polyhedral map projections; for the definition of discrete global grids; or for parametrizing triangulations of the sphere or texture mapping any triangular mesh topologically equivalent to a sphere.
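A computational sketch of these coordinates, assuming the points are given as unit 3-vectors and using the standard solid-angle formula for the signed spherical excess (due to Van Oosterom and Strackee), which is not part of this article:

```python
import numpy as np

def signed_excess(a, b, c):
    # Signed spherical excess (= signed area on the unit sphere) of triangle abc,
    # via the solid-angle formula tan(e/2) = a.(b x c) / (1 + a.b + b.c + c.a).
    num = np.dot(a, np.cross(b, c))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

def spherical_area_coordinates(p, a, b, c):
    e = signed_excess(a, b, c)
    return (signed_excess(p, b, c) / e,
            signed_excess(a, p, c) / e,
            signed_excess(a, b, p) / e)

# Example: the octant triangle and the point equidistant from its vertices.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])
P = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

coords = spherical_area_coordinates(P, A, B, C)
print(coords, "sum =", sum(coords))   # approximately (1/3, 1/3, 1/3), summing to 1
```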
Euclidean plane.
The analog of Lexell's theorem in the Euclidean plane comes from antiquity, and can be found in Book I of Euclid's "Elements", propositions 37 and 39, built on proposition 35. In the plane, Lexell's circle degenerates to a straight line (which could be called "Lexell's line") parallel to the base.
"Elements" I.35 holds that parallelograms with the same base whose top sides are colinear have equal area. "Proof": Let the two parallelograms be formula_59 and formula_176 with common base formula_13 and formula_177 formula_178 formula_179 and formula_82 on a common line parallel to the base, and let formula_65 be the intersection between formula_62 and formula_180 Then the two top sides are congruent formula_181 so, adding the intermediate segment to each, formula_182 Therefore the two triangles formula_73 and formula_183 have matching sides so are congruent. Now each of the parallelograms is formed from one of these triangles, added to the triangle formula_75 with the triangle formula_77 cut away, so therefore the two parallelograms formula_59 and formula_60 have equal area.
"Elements" I.37 holds that triangles with the same base and an apex on the same line parallel to the base have equal area. "Proof": Let triangles formula_78 and formula_79 each have its apex on the same line formula_42 parallel to the base formula_133 Construct new segments formula_68 and formula_69 congruent to formula_13 with vertices formula_81 and formula_82 on formula_26 The two quadrilaterals formula_59 and formula_60 are parallelograms, each formed by pasting together the respective triangle and a congruent copy. By I.35, the two parallelograms have the same area, so the original triangles must also have the same area.
"Elements" I.39 is the converse: two triangles of equal area on the same side of the same base have their apexes on a line parallel to the base. "Proof": If two triangles have the same base and same area and the apex of the second is assumed to not lie on the line parallel to the base (the "Lexell line") through the first, then the line through one side of the second triangle can be intersected with the Lexell line to form a new triangle which has a different area from the second triangle but the same area as the first triangle, a contradiction.
In the Euclidean plane, the area formula_11 of triangle formula_4 can be computed using any side length (the "base") and the distance between the line through the base and the parallel line through the apex (the corresponding "height"). Using point formula_1 as the apex, and multiplying both sides of the traditional identity by formula_184 to make the analogy to the spherical case more obvious, this is:
formula_185
The Euclidean theorem can be taken as a corollary of Lexell's theorem on the sphere. It is the limiting case as the curvature of the sphere approaches zero, i.e. for spherical triangles that are infinitesimal in proportion to the radius of the sphere.
Hyperbolic plane.
In the hyperbolic plane, given a triangle formula_40 the locus of a variable point formula_2 such that the triangle formula_3 has the same area as formula_4 is a hypercycle passing through the points antipodal to formula_8 and formula_9 which could be called "Lexell's hypercycle". Several proofs from the sphere have straightforward analogs in the hyperbolic plane, including a Gauss-style proof via a Saccheri quadrilateral by Barbarin (1902) and Frenkel & Su (2019), an Euler-style proof via hyperbolic parallelograms by Papadopoulos & Su (2017), and a Paul Serret-style proof via stereographic projection by Shvartsman (2007).
In spherical geometry, the "antipodal transformation" takes each point to its antipodal (diametrically opposite) point. For a sphere embedded in Euclidean space, this is a point reflection through the center of the sphere; for a sphere stereographically projected to the plane, it is an inversion across the primitive circle composed with a point reflection across the origin (or equivalently, an inversion in a circle of imaginary radius of the same magnitude as the radius of the primitive circle).
In planar hyperbolic geometry, there is a similar antipodal transformation, but any two antipodal points lie in opposite branches of a double hyperbolic plane. For a hyperboloid of two sheets embedded in Minkowski space of signature formula_186 known as the hyperboloid model, the antipodal transformation is a point reflection through the center of the hyperboloid which takes each point onto the opposite sheet; in the conformal half-plane model it is a reflection across the boundary line of ideal points taking each point into the opposite half-plane; in the conformal disk model it is an inversion across the boundary circle, taking each point in the disk to a point in its complement. As on the sphere, any generalized circle passing through a pair of antipodal points in hyperbolic geometry is a geodesic.
Analogous to the planar and spherical triangle area formulas, the hyperbolic area formula_11 of the triangle can be computed in terms of the base formula_12 (the hyperbolic length of arc formula_13) and "height" formula_14 (the hyperbolic distance between the parallel hypercycles formula_15 and formula_16):
formula_187
As in the spherical case, in the small-triangle limit this reduces to the planar formula.
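A quick numerical check of this limit (the base and height values are arbitrary small numbers; the spherical analogue, with tan in place of tanh, is the formula listed among this article's relations):

```python
import math

c, h = 0.1, 0.05   # small base and height

planar     = 0.5 * c * h                                        # epsilon = (1/2) c h
spherical  = 2.0 * math.asin(math.tan(c / 2) * math.tan(h / 2))
hyperbolic = 2.0 * math.asin(math.tanh(c / 2) * math.tanh(h / 2))

print(planar, spherical, hyperbolic)   # all three are approximately 0.0025
```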
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "AB,"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\triangle ABX"
},
{
"math_id": 4,
"text": "\\triangle ABC"
},
{
"math_id": 5,
"text": "B^* C A^*\\!,"
},
{
"math_id": 6,
"text": "A^*"
},
{
"math_id": 7,
"text": "B^*"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "B,"
},
{
"math_id": 10,
"text": "\\text{area} = \\tfrac12 \\, \\text{base} \\cdot \\text{height}"
},
{
"math_id": 11,
"text": "\\varepsilon"
},
{
"math_id": 12,
"text": "c"
},
{
"math_id": 13,
"text": "AB"
},
{
"math_id": 14,
"text": "h_c"
},
{
"math_id": 15,
"text": "A^* B^* C"
},
{
"math_id": 16,
"text": "A B C^*"
},
{
"math_id": 17,
"text": "\\sin \\tfrac12 \\varepsilon = \\tan \\tfrac12 c \\, \\tan\\tfrac12 h_c."
},
{
"math_id": 18,
"text": "1"
},
{
"math_id": 19,
"text": "2\\pi"
},
{
"math_id": 20,
"text": "\\pi"
},
{
"math_id": 21,
"text": "\\tfrac12 \\varepsilon."
},
{
"math_id": 22,
"text": "\\triangle A^* B^* C"
},
{
"math_id": 23,
"text": "P"
},
{
"math_id": 24,
"text": "\\triangle ABC."
},
{
"math_id": 25,
"text": "B"
},
{
"math_id": 26,
"text": "l."
},
{
"math_id": 27,
"text": "\\triangle B^* C P"
},
{
"math_id": 28,
"text": "\\triangle CA^* P"
},
{
"math_id": 29,
"text": "\\triangle A^* B^* P"
},
{
"math_id": 30,
"text": "\\alpha,"
},
{
"math_id": 31,
"text": "\\beta,"
},
{
"math_id": 32,
"text": "\\delta."
},
{
"math_id": 33,
"text": "\\alpha, \\beta, \\delta"
},
{
"math_id": 34,
"text": "\\angle A = \\pi - \\beta - \\delta"
},
{
"math_id": 35,
"text": "\\angle A^*"
},
{
"math_id": 36,
"text": "\\angle B = \\pi - \\alpha - \\delta,"
},
{
"math_id": 37,
"text": "\\angle C = \\alpha + \\beta."
},
{
"math_id": 38,
"text": "\\begin{align}\n\\varepsilon\n&= \\angle A + \\angle B + \\angle C - \\pi \\\\[3mu]\n&= (\\pi - \\beta - \\delta) + (\\pi - \\alpha - \\delta) + (\\alpha + \\beta) - \\pi \\\\[3mu]\n&= \\pi - 2\\delta.\n\\end{align}"
},
{
"math_id": 39,
"text": "\\delta"
},
{
"math_id": 40,
"text": "\\triangle ABC,"
},
{
"math_id": 41,
"text": "\\delta,"
},
{
"math_id": 42,
"text": "l"
},
{
"math_id": 43,
"text": "\\triangle A^* B^* C,"
},
{
"math_id": 44,
"text": "D"
},
{
"math_id": 45,
"text": "B^* A^*\\!."
},
{
"math_id": 46,
"text": "\\alpha_1 = \\angle C A^* B^*\\!,"
},
{
"math_id": 47,
"text": "\\beta_1 = \\angle A^* B^* C,"
},
{
"math_id": 48,
"text": "\\alpha_2 = \\angle B^* A^* D,"
},
{
"math_id": 49,
"text": "\\beta_2 = \\angle D A^* B^*\\!."
},
{
"math_id": 50,
"text": "\\square A^* D B^* C"
},
{
"math_id": 51,
"text": "\\angle C + \\angle D = {}\\!"
},
{
"math_id": 52,
"text": " \\alpha_1 + \\alpha_2 + \\beta_1 + \\beta_2,"
},
{
"math_id": 53,
"text": "\\alpha_1 + \\beta_1 - \\angle C = {}\\!"
},
{
"math_id": 54,
"text": "\\angle D - \\alpha_2 - \\beta_2."
},
{
"math_id": 55,
"text": "\\begin{align}\n\\varepsilon\n&= \\angle A + \\angle B + \\angle C - \\pi \\\\[3mu]\n&= (\\pi - \\alpha_1) + (\\pi - \\beta_1) + \\angle C - \\pi \\\\[3mu]\n&= \\pi - (\\alpha_1 + \\beta_1 - \\angle C) \\\\[3mu]\n&= \\pi - (\\angle D - \\alpha_2 - \\beta_2).\n\\end{align}"
},
{
"math_id": 56,
"text": "\\angle D - \\alpha_2 - \\beta_2"
},
{
"math_id": 57,
"text": "C,"
},
{
"math_id": 58,
"text": " A^* D B^*\\!."
},
{
"math_id": 59,
"text": "\\square ABC_1D_1"
},
{
"math_id": 60,
"text": "\\square ABC_2D_2"
},
{
"math_id": 61,
"text": "m"
},
{
"math_id": 62,
"text": "BC_1"
},
{
"math_id": 63,
"text": "AD_1"
},
{
"math_id": 64,
"text": "\\square ABC_2D_2."
},
{
"math_id": 65,
"text": "F"
},
{
"math_id": 66,
"text": "AD_2"
},
{
"math_id": 67,
"text": "BC_1."
},
{
"math_id": 68,
"text": "C_1D_1"
},
{
"math_id": 69,
"text": "C_2D_2"
},
{
"math_id": 70,
"text": "l^*"
},
{
"math_id": 71,
"text": "B."
},
{
"math_id": 72,
"text": "D_1D_2 \\cong C_1C_2,"
},
{
"math_id": 73,
"text": "\\triangle BC_1C_2"
},
{
"math_id": 74,
"text": "\\triangle AD_1D_2,"
},
{
"math_id": 75,
"text": "\\triangle ABF"
},
{
"math_id": 76,
"text": "l,"
},
{
"math_id": 77,
"text": "\\triangle D_2C_1F"
},
{
"math_id": 78,
"text": "\\triangle ABC_1"
},
{
"math_id": 79,
"text": "\\triangle ABC_2"
},
{
"math_id": 80,
"text": "B^*\\!,"
},
{
"math_id": 81,
"text": "D_1"
},
{
"math_id": 82,
"text": "D_2"
},
{
"math_id": 83,
"text": "m,"
},
{
"math_id": 84,
"text": "g"
},
{
"math_id": 85,
"text": "M_1"
},
{
"math_id": 86,
"text": "AC"
},
{
"math_id": 87,
"text": "M_2"
},
{
"math_id": 88,
"text": "BC,"
},
{
"math_id": 89,
"text": "A',"
},
{
"math_id": 90,
"text": "B',"
},
{
"math_id": 91,
"text": "C'"
},
{
"math_id": 92,
"text": "m."
},
{
"math_id": 93,
"text": "\\triangle AA'M_1"
},
{
"math_id": 94,
"text": "\\triangle CC'M_1"
},
{
"math_id": 95,
"text": "\\triangle BB'M_2"
},
{
"math_id": 96,
"text": "\\triangle CC'M_2"
},
{
"math_id": 97,
"text": "\\square ABB'A',"
},
{
"math_id": 98,
"text": "\\square ABM_2M_1"
},
{
"math_id": 99,
"text": "A'B',"
},
{
"math_id": 100,
"text": "E,"
},
{
"math_id": 101,
"text": "\\square ABEC"
},
{
"math_id": 102,
"text": "E"
},
{
"math_id": 103,
"text": "\\varepsilon = \\angle A + \\angle B + \\angle C - \\pi"
},
{
"math_id": 104,
"text": "O"
},
{
"math_id": 105,
"text": "BC"
},
{
"math_id": 106,
"text": "\\square OBEC"
},
{
"math_id": 107,
"text": "\\angle BOC"
},
{
"math_id": 108,
"text": "\\varepsilon."
},
{
"math_id": 109,
"text": "\\angle BB^*C"
},
{
"math_id": 110,
"text": "\\tfrac12\\varepsilon."
},
{
"math_id": 111,
"text": "\\angle BOC."
},
{
"math_id": 112,
"text": "\\triangle A'B'C'"
},
{
"math_id": 113,
"text": "A'\\!, B'\\!, C'"
},
{
"math_id": 114,
"text": "BC, CA, AB,"
},
{
"math_id": 115,
"text": "A, B, C"
},
{
"math_id": 116,
"text": "B'C'\\!, C'A'\\!, A'B'\\!."
},
{
"math_id": 117,
"text": "p'"
},
{
"math_id": 118,
"text": "\\begin{align}\n\\varepsilon\n&= \\angle A + \\angle B + \\angle C - \\pi \\\\[3mu]\n&= \\bigl(\\pi - |B'C'|\\bigr) + \\bigl(\\pi - |C'A'|\\bigr) + \\bigl(\\pi - |A'B'|\\bigr) - \\pi \\\\[3mu]\n&= 2\\pi - p',\n\\end{align}"
},
{
"math_id": 119,
"text": "|PQ|"
},
{
"math_id": 120,
"text": "PQ."
},
{
"math_id": 121,
"text": "l'"
},
{
"math_id": 122,
"text": "A'B'."
},
{
"math_id": 123,
"text": "A'B'"
},
{
"math_id": 124,
"text": "l'."
},
{
"math_id": 125,
"text": "A'T_B \\cong A'T_C"
},
{
"math_id": 126,
"text": "B'T_A \\cong B'T_C"
},
{
"math_id": 127,
"text": "\\begin{align}\np' &= |A'B'| + |B'C'| + |C'A'| \\\\[3mu]\n&= \\bigl(|A'T_C| + |B'T_C|\\bigr) + |C'B'| + |C'A'| \\\\[3mu]\n&= \\bigl(|C'A'| + |A'T_B|\\bigr) + \\bigl(|C'B'| + |B'T_A|\\bigr) \\\\[3mu]\n&= |C'T_B| + |C'T_A|,\n\\end{align}"
},
{
"math_id": 128,
"text": "T_A"
},
{
"math_id": 129,
"text": "T_B"
},
{
"math_id": 130,
"text": "C'^*"
},
{
"math_id": 131,
"text": "\\triangle A'B'C'\\!"
},
{
"math_id": 132,
"text": "A,B, C"
},
{
"math_id": 133,
"text": "AB."
},
{
"math_id": 134,
"text": "A,"
},
{
"math_id": 135,
"text": "A^*\\!,"
},
{
"math_id": 136,
"text": "\\angle A = \\pi,"
},
{
"math_id": 137,
"text": "B = B^* = \\tfrac12\\varepsilon."
},
{
"math_id": 138,
"text": "\\angle B = \\angle B^\\star = \\pi - \\tfrac12\\varepsilon."
},
{
"math_id": 139,
"text": "\\varepsilon = \\pi"
},
{
"math_id": 140,
"text": "A^*B^*C"
},
{
"math_id": 141,
"text": "A^*B^*"
},
{
"math_id": 142,
"text": "ABC^*\\!."
},
{
"math_id": 143,
"text": "\\triangle BAD,"
},
{
"math_id": 144,
"text": "\\triangle CDA,"
},
{
"math_id": 145,
"text": "\\triangle DCB"
},
{
"math_id": 146,
"text": "ABCD"
},
{
"math_id": 147,
"text": "AA^*BB^*CC^*DD^*"
},
{
"math_id": 148,
"text": "\\square ABCD"
},
{
"math_id": 149,
"text": "AB \\cong CD,"
},
{
"math_id": 150,
"text": "BC \\cong DA,"
},
{
"math_id": 151,
"text": "\\angle A = \\angle C,"
},
{
"math_id": 152,
"text": "\\angle B = \\angle D"
},
{
"math_id": 153,
"text": "BD"
},
{
"math_id": 154,
"text": "\\triangle ABC \\cong \\triangle CDA"
},
{
"math_id": 155,
"text": "\\triangle ABD \\cong \\triangle CDB"
},
{
"math_id": 156,
"text": "l_{cd}"
},
{
"math_id": 157,
"text": "l_{da}"
},
{
"math_id": 158,
"text": "\\triangle abc"
},
{
"math_id": 159,
"text": "a, b"
},
{
"math_id": 160,
"text": "p = |a| + |b| + |c|"
},
{
"math_id": 161,
"text": "|a|"
},
{
"math_id": 162,
"text": "a"
},
{
"math_id": 163,
"text": "c,"
},
{
"math_id": 164,
"text": "\\triangle abc."
},
{
"math_id": 165,
"text": "ABC"
},
{
"math_id": 166,
"text": "ABA^*B^*"
},
{
"math_id": 167,
"text": "g."
},
{
"math_id": 168,
"text": "C."
},
{
"math_id": 169,
"text": "PC"
},
{
"math_id": 170,
"text": "\\tan^2 \\tfrac12|PC| = \\tan \\tfrac12 |PA^*|\\,\\tan \\tfrac12 |PB^*|,"
},
{
"math_id": 171,
"text": "\\bigl(\\tfrac13,\\tfrac13,\\tfrac13\\bigr)"
},
{
"math_id": 172,
"text": "\\left(\n\\frac{\\varepsilon_{PBC}}{\\varepsilon_{ABC}},\n\\frac{\\varepsilon_{APC}}{\\varepsilon_{ABC}},\n\\frac{\\varepsilon_{ABP}}{\\varepsilon_{ABC}}\n\\right),"
},
{
"math_id": 173,
"text": "\\varepsilon_{QRS}"
},
{
"math_id": 174,
"text": "\\triangle QRS."
},
{
"math_id": 175,
"text": "1,"
},
{
"math_id": 176,
"text": "\\square ABC_2D_2,"
},
{
"math_id": 177,
"text": "C_1,"
},
{
"math_id": 178,
"text": "D_1,"
},
{
"math_id": 179,
"text": "C_2,"
},
{
"math_id": 180,
"text": "AD_2."
},
{
"math_id": 181,
"text": "C_1D_1 \\cong C_2D_2"
},
{
"math_id": 182,
"text": "C_1C_2 \\cong D_1D_2."
},
{
"math_id": 183,
"text": "\\triangle AD_1D_2"
},
{
"math_id": 184,
"text": "\\tfrac12"
},
{
"math_id": 185,
"text": "\\tfrac12 \\varepsilon = \\tfrac12 c\\,\\tfrac12 h_c."
},
{
"math_id": 186,
"text": "(-, +, +),"
},
{
"math_id": 187,
"text": "\\sin \\tfrac12 \\varepsilon = \\tanh\\tfrac12 c \\, \\tanh\\tfrac12 h_c."
}
]
| https://en.wikipedia.org/wiki?curid=74122214 |
74141 | Subtraction | One of the four basic arithmetic operations
Subtraction (which is signified by the minus sign −) is one of the four arithmetic operations along with addition, multiplication and division. Subtraction is an operation that represents removal of objects from a collection. For example, in the adjacent picture, there are 5 − 2 peaches—meaning 5 peaches with 2 taken away, resulting in a total of 3 peaches. Therefore, the "difference" of 5 and 2 is 3; that is, 5 − 2 = 3. While primarily associated with natural numbers in arithmetic, subtraction can also represent removing or decreasing physical and abstract quantities using different kinds of objects including negative numbers, fractions, irrational numbers, vectors, decimals, functions, and matrices.
In a sense, subtraction is the inverse of addition. That is, "c" = "a" − "b" if and only if "c" + "b" = "a". In words: the difference of two numbers is the number that gives the first one when added to the second one.
Subtraction follows several important patterns. It is anticommutative, meaning that changing the order changes the sign of the answer. It is also not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Because 0 is the additive identity, subtraction of it does not change a number. Subtraction also obeys predictable rules concerning related operations, such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers and beyond. General binary operations that follow these patterns are studied in abstract algebra.
In computability theory, because subtraction is not well-defined over the natural numbers, operations between numbers are instead defined using "truncated subtraction" or monus.
Notation and terminology.
Subtraction is usually written using the minus sign "−" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example,
formula_0 (pronounced as "two minus one equals one")
formula_1 (pronounced as "four minus two equals two")
formula_2 (pronounced as "six minus three equals three")
formula_3 (pronounced as "four minus six equals negative two")
There are also situations where subtraction is "understood", even though no symbol appears:
Formally, the number being subtracted is known as the subtrahend, while the number it is subtracted from is the minuend. The result is the difference. That is,
formula_4.
All of this terminology derives from Latin. "Subtraction" is an English word derived from the Latin verb "subtrahere", which in turn is a compound of "sub" "from under" and "trahere" "to pull". Thus, to subtract is to "draw from below", or to "take away". Using the gerundive suffix "-nd" results in "subtrahend", "thing to be subtracted". Likewise, from "minuere" "to reduce or diminish", one gets "minuend", which means "thing to be diminished".
Of integers and real numbers.
Integers.
Imagine a line segment of length "b" with the left end labeled "a" and the right end labeled "c".
Starting from "a", it takes "b" steps to the right to reach "c". This movement to the right is modeled mathematically by addition:
"a" + "b" = "c".
From "c", it takes "b" steps to the "left" to get back to "a". This movement to the left is modeled by subtraction:
"c" − "b" = "a".
Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, so 3 − 0 = 3. It takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended.
To subtract arbitrary natural numbers, one begins with a line containing every natural number (0, 1, 2, 3, 4, 5, 6, ...). From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line. The natural numbers are not a useful context for subtraction.
The solution is to consider the integer number line (..., −3, −2, −1, 0, 1, 2, 3, ...). This way, it takes 4 steps to the left from 3 to get to −1:
3 − 4 = −1.
Natural numbers.
Subtraction of natural numbers is not closed: the difference is not a natural number unless the minuend is greater than or equal to the subtrahend. For example, 26 cannot be subtracted from 11 to give a natural number. Such a case uses one of two approaches: either declare the subtraction of a larger number from a smaller one undefined (or truncate the result to zero, as with the monus operation mentioned above), or extend the number system to the integers so that the result is a negative number (11 − 26 = −15).
Real numbers.
The field of real numbers can be defined specifying only two binary operations, addition and multiplication, together with unary operations yielding additive and multiplicative inverses. The subtraction of a real number (the subtrahend) from another (the minuend) can then be defined as the addition of the minuend and the additive inverse of the subtrahend. For example, 3 − "π" = 3 + (−"π"). Alternatively, instead of requiring these unary operations, the binary operations of subtraction and division can be taken as basic.
Properties.
Anti-commutativity.
Subtraction is anti-commutative, meaning that if one reverses the terms in a difference left-to-right, the result is the negative of the original result. Symbolically, if "a" and "b" are any two numbers, then
"a" − "b" = −("b" − "a)".
Non-associativity.
Subtraction is non-associative, which comes up when one tries to define repeated subtraction. In general, the expression
""a" − "b" − "c""
can be defined to mean either ("a" − "b") − "c" or "a" − ("b" − "c"), but these two possibilities lead to different answers. To resolve this issue, one must establish an order of operations, with different orders yielding different results.
Predecessor.
In the context of integers, subtraction of one also plays a special role: for any integer "a", the integer ("a" − 1) is the largest integer less than "a", also known as the predecessor of "a".
Units of measurement.
When subtracting two numbers with units of measurement such as kilograms or pounds, they must have the same unit. In most cases, the difference will have the same unit as the original numbers.
Percentages.
Changes in percentages can be reported in at least two forms, percentage change and percentage point change. Percentage change represents the relative change between the two quantities as a percentage, while percentage point change is simply the number obtained by subtracting the two percentages.
As an example, suppose that 30% of widgets made in a factory are defective. Six months later, 20% of widgets are defective. The percentage change is (20% − 30%) / 30% = −1/3 ≈ −33.3%, while the percentage point change is −10 percentage points.
In computing.
The method of complements is a technique used to subtract one number from another using only the addition of positive numbers. This method was commonly used in mechanical calculators, and is still used in modern computers.
To subtract a binary number "y" (the subtrahend) from another number "x" (the minuend), the ones' complement of "y" is added to "x" and one is added to the sum. The leading digit "1" of the result is then discarded.
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained by inverting each bit (changing "0" to "1" and vice versa). And adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example:
01100100 (x, equals decimal 100)
- 00010110 (y, equals decimal 22)
becomes the sum:
01100100 (x)
+ 11101001 (ones' complement of y)
+ 1 (to get the two's complement)
101001110
Dropping the initial "1" gives the answer: 01001110 (equals decimal 78)
The teaching of subtraction in schools.
Methods used to teach subtraction to elementary school vary from country to country, and within a country, different methods are adopted at different times. In what is known in the United States as traditional mathematics, a specific process is taught to students at the end of the 1st year (or during the 2nd year) for use with multi-digit whole numbers, and is extended in either the fourth or fifth grade to include decimal representations of fractional numbers.
In America.
Almost all American schools currently teach a method of subtraction using borrowing or regrouping (the decomposition algorithm) and a system of markings called crutches. Although a method of borrowing had been known and published in textbooks previously, the use of crutches in American schools spread after William A. Brownell published a study—claiming that crutches were beneficial to students using this method. This system caught on rapidly, displacing the other methods of subtraction in use in America at that time.
In Europe.
Some European schools employ a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid memory), which vary by country.
Comparing the two main methods.
Both these methods break up the subtraction as a process of one digit subtractions by place value. Starting with a least significant digit, a subtraction of the subtrahend:
"s""j" "s""j"−1 ... "s"1
from the minuend
"m""k" "m""k"−1 ... "m"1,
where each "s""i" and "m""i" is a digit, proceeds by writing down "m"1 − "s"1, "m"2 − "s"2, and so forth, as long as "s""i" does not exceed "m""i". Otherwise, "m""i" is increased by 10 and some other digit is modified to correct for this increase. The American method corrects by attempting to decrease the minuend digit "m""i"+1 by one (or continuing the borrow leftwards until there is a non-zero digit from which to borrow). The European method corrects by increasing the subtrahend digit "s""i"+1 by one.
Example: 704 − 512.
The minuend is 704, the subtrahend is 512. The minuend digits are "m"3 = 7, "m"2 = 0 and "m"1 = 4. The subtrahend digits are "s"3 = 5, "s"2 = 1 and "s"1 = 2. Beginning at the one's place, 4 is not less than 2 so the difference 2 is written down in the result's one's place. In the ten's place, 0 is less than 1, so the 0 is increased by 10, and the difference with 1, which is 9, is written down in the ten's place. The American method corrects for the increase of ten by reducing the digit in the minuend's hundreds place by one. That is, the 7 is struck through and replaced by a 6. The subtraction then proceeds in the hundreds place, where 6 is not less than 5, so the difference is written down in the result's hundred's place. We are now done, the result is 192.
The Austrian method does not reduce the 7 to 6. Rather it increases the subtrahend hundreds digit by one. A small mark is made near or below this digit (depending on the school). Then the subtraction proceeds by asking what number when increased by 1, and 5 is added to it, makes 7. The answer is 1, and is written down in the result's hundreds place.
There is an additional subtlety in that the student always employs a mental subtraction table in the American method. The Austrian method often encourages the student to mentally use the addition table in reverse. In the example above, rather than adding 1 to 5, getting 6, and subtracting that from 7, the student is asked to consider what number, when increased by 1, and 5 is added to it, makes 7.
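The American borrowing procedure can also be written as an algorithm; here is a minimal Python sketch, assuming the minuend is at least as large as the subtrahend and both are non-negative integers:

```python
def subtract_with_borrowing(minuend, subtrahend):
    # Digit-by-digit subtraction from right to left with borrowing (the American
    # method).  Assumes minuend >= subtrahend >= 0.
    m = [int(d) for d in str(minuend)]
    s = [int(d) for d in str(subtrahend).zfill(len(m))]
    result = []
    for i in range(len(m) - 1, -1, -1):      # least significant digit first
        if m[i] < s[i]:
            m[i] += 10                       # increase the minuend digit by 10 ...
            j = i - 1
            while m[j] == 0:                 # ... borrowing leftwards past any zeros
                m[j] = 9
                j -= 1
            m[j] -= 1                        # reduce the digit borrowed from
        result.append(m[i] - s[i])
    return int("".join(str(d) for d in reversed(result)))

print(subtract_with_borrowing(704, 512))     # 192, as in the worked example
```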
Subtraction by hand.
Austrian method.
Example:
Subtraction from left to right.
Example:
American method.
In this method, each digit of the subtrahend is subtracted from the digit above it starting from right to left. If the top number is too small to subtract the bottom number from it, we add 10 to it; this 10 is "borrowed" from the top digit to the left, which we subtract 1 from. Then we move on to subtracting the next digit and borrowing as needed, until every digit has been subtracted. Example:
Trade first.
A variant of the American method where all borrowing is done before all subtraction.
Example:
Partial differences.
The partial differences method is different from other vertical subtraction methods because no borrowing or carrying takes place. In their place, one places plus or minus signs depending on whether the minuend is greater or smaller than the subtrahend. The sum of the partial differences is the total difference.
Example:
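A minimal Python sketch of the idea (the sample numbers 753 − 491 are chosen only for illustration):

```python
def partial_differences(minuend, subtrahend):
    # Subtract place by place without borrowing, keeping signed partial
    # differences, then add them up.
    m, s = str(minuend), str(subtrahend).zfill(len(str(minuend)))
    partials = [(int(dm) - int(ds)) * 10 ** (len(m) - 1 - i)
                for i, (dm, ds) in enumerate(zip(m, s))]
    return partials, sum(partials)

print(partial_differences(753, 491))   # ([300, -40, 2], 262)
```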
Nonvertical methods.
Counting up.
Instead of finding the difference digit by digit, one can count up the numbers between the subtrahend and the minuend.
Example:
1234 − 567 can be found by the following steps: count up from 567 to 570 (a step of 3), from 570 to 600 (a step of 30), from 600 to 1000 (a step of 400), and from 1000 to 1234 (a step of 234).
Add up the value from each step to get the total difference: 3 + 30 + 400 + 234 = 667.
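The counting-up steps can be generated mechanically, for example by always counting up to the next multiple of 10, then 100, and so on; a Python sketch (this particular stepping rule is one of several reasonable choices):

```python
def count_up(subtrahend, minuend):
    # Count up from the subtrahend: first to the next multiple of 10, then of 100,
    # and so on, capping each step at the minuend.  Assumes minuend >= subtrahend.
    steps, current, unit = [], subtrahend, 10
    while current < minuend:
        target = min(minuend, (current // unit + 1) * unit)
        steps.append(target - current)
        current = target
        unit *= 10
    return steps, sum(steps)

print(count_up(567, 1234))   # ([3, 30, 400, 234], 667)
```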
Breaking up the subtraction.
Another method that is useful for mental arithmetic is to split up the subtraction into small steps.
Example:
1234 − 567 can be solved in the following way: first subtract 500 (1234 − 500 = 734), then subtract 60 (734 − 60 = 674), and finally subtract 7 (674 − 7 = 667).
Same change.
The same change method uses the fact that adding or subtracting the same number from the minuend and subtrahend does not change the answer. One simply adds the amount needed to get zeros in the subtrahend.
Example:
"1234 − 567 =" can be solved as follows:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2 - 1 = 1 "
},
{
"math_id": 1,
"text": "4 - 2 = 2 "
},
{
"math_id": 2,
"text": "6 - 3 = 3 "
},
{
"math_id": 3,
"text": "4 - 6 = -2 "
},
{
"math_id": 4,
"text": " {\\rm minuend} - {\\rm subtrahend} = {\\rm difference} "
}
]
| https://en.wikipedia.org/wiki?curid=74141 |
74142621 | Ogawa integral | In stochastic calculus, the Ogawa integral, also called the non-causal stochastic integral, is a stochastic integral for non-adapted processes as integrands. The corresponding calculus is called non-causal calculus in order to distinguish it from the anticipating calculus of the Skorokhod integral. The term causality refers to the adaptation to the natural filtration of the integrator.
The integral was introduced by the Japanese mathematician Shigeyoshi Ogawa in 1979.
Ogawa integral.
Let
Further let formula_8 be the set of real-valued processes formula_9 that are formula_10-measurable and almost surely in formula_11, i.e.
formula_12
Ogawa integral.
Let formula_13 be a complete orthonormal basis of the Hilbert space formula_11.
A process formula_14 is called formula_15-integrable if the random series
formula_16
converges in probability and the corresponding sum is called the Ogawa integral with respect to the basis formula_17.
If formula_18 is formula_15-integrable for any complete orthonormal basis of formula_11 and the corresponding integrals share the same value then formula_18 is called universal Ogawa integrable (or u-integrable).
More generally, the Ogawa integral can be defined for any formula_19-process formula_20 (such as the fractional Brownian motion) as integrators
formula_21
as long as the integrals
formula_22
are well-defined.
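A simulation sketch of the defining series in Python, using the cosine orthonormal basis of L²([0,1]) and a deterministic integrand, for which the truncated series should reproduce the ordinary Wiener integral; the basis, integrand, and discretization are illustrative assumptions, not taken from the sources above:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_basis = 1.0, 2000, 50
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)[:-1]          # left endpoints of the grid cells
dW = rng.normal(0.0, np.sqrt(dt), n_steps)         # Brownian increments

def phi(n, s):
    # Cosine orthonormal basis of L^2([0,1]): phi_0 = 1, phi_n = sqrt(2) cos(n pi s)
    return np.ones_like(s) if n == 0 else np.sqrt(2.0) * np.cos(n * np.pi * s)

X = t   # deterministic integrand X_t = t

# Truncated Ogawa series: sum over n of (int X phi_n dt) * (int phi_n dW)
ogawa = sum(np.sum(X * phi(n, t)) * dt * np.sum(phi(n, t) * dW)
            for n in range(n_basis))

wiener = np.sum(X * dW)   # direct Riemann sum for the Wiener integral of X
print(f"truncated Ogawa series: {ogawa:.4f}   direct Wiener integral: {wiener:.4f}")
```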
Regularity of the orthonormal basis.
An important concept for the Ogawa integral is the "regularity" of an orthonormal basis. An orthonormal basis formula_13 is called "regular" if
formula_23
holds.
The following results on regularity are known: | [
{
"math_id": 0,
"text": "(\\Omega,\\mathcal{F},P)"
},
{
"math_id": 1,
"text": "W=(W_t)_{t\\in[0,T]}"
},
{
"math_id": 2,
"text": "T\\in\\mathbb{R}_+"
},
{
"math_id": 3,
"text": "\\mathcal{F}_t^W=\\sigma(W_s;0\\leq s \\leq t)\\subset \\mathcal{F}"
},
{
"math_id": 4,
"text": "\\mathbf{F}^W=\\{\\mathcal{F}_t^W, t\\geq 0\\}"
},
{
"math_id": 5,
"text": "\\mathcal{B}([0,T])"
},
{
"math_id": 6,
"text": "\\int f\\; dW_t"
},
{
"math_id": 7,
"text": "dt"
},
{
"math_id": 8,
"text": "\\mathbf{H}"
},
{
"math_id": 9,
"text": "X\\colon [0,T]\\times \\Omega \\to\\mathbb{R}"
},
{
"math_id": 10,
"text": "\\mathcal{B}([0,T])\\times \\mathcal{F}"
},
{
"math_id": 11,
"text": "L^2([0,T],dt)"
},
{
"math_id": 12,
"text": "P\\left(\\int_0^T |X(t,\\omega)|^2 \\, dt< \\infty\\right)=1."
},
{
"math_id": 13,
"text": "\\{\\varphi_n\\}_{n\\in \\mathbb{N}}"
},
{
"math_id": 14,
"text": "X\\in\\mathbf{H}"
},
{
"math_id": 15,
"text": "\\varphi"
},
{
"math_id": 16,
"text": "\\int_0^T X_t \\, d_\\varphi W_t:=\\sum_{n=1}^\\infty \\left(\\int_0^T X_t \\varphi_n(t) \\, dt\\right) \\int_0^T\\varphi_n(t) \\, dW_t"
},
{
"math_id": 17,
"text": "\\{\\varphi_n\\}"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "L^2(\\Omega,P)"
},
{
"math_id": 20,
"text": "Z_t"
},
{
"math_id": 21,
"text": "\\int_0^T X_t \\, d_\\varphi Z_t:=\\sum_{n=1}^\\infty \\left(\\int_0^T X_t \\varphi_n(t) \\, dt\\right) \\int_0^T\\varphi_n(t) \\, dZ_t"
},
{
"math_id": 22,
"text": "\\int_0^T\\varphi_n(t) \\, dZ_t"
},
{
"math_id": 23,
"text": "\\sup_n \\int_0^T \\left( \\sum_{i=1}^n \\varphi_i(t)\\int_0^t \\varphi_i(s) \\, ds\\right)^2 \\, dt<\\infty"
},
{
"math_id": 24,
"text": "L^2([0,1], dt)"
},
{
"math_id": 25,
"text": "\\mathbf{F}^W"
}
]
| https://en.wikipedia.org/wiki?curid=74142621 |
7414275 | Kopp's law | Kopp's law can refer to either of two relationships discovered by the German chemist Hermann Franz Moritz Kopp (1817–1892).
Kopp–Neumann law.
The Kopp–Neumann law, named for Kopp and Franz Ernst Neumann, is a common approach for determining the specific heat "C" (in J·kg−1·K−1) of compounds using the following equation:
formula_0
where "N" is the total number of compound constituents, and "Ci" and "fi" denote the specific heat and mass fraction of the "i"-th constituent. This law works surprisingly well at room-temperature conditions, but poorly at elevated temperatures. | [
{
"math_id": 0,
"text": "C = \\sum_{i=1}^N C_i f_i,"
}
]
| https://en.wikipedia.org/wiki?curid=7414275 |
74144071 | Dubins–Schwarz theorem | In the theory of martingales, the Dubins-Schwarz theorem (or Dambis-Dubins-Schwarz theorem) is a theorem that says all continuous local martingales and martingales are time-changed Brownian motions.
The theorem was proven in 1965 by Lester Dubins and Gideon E. Schwarz and independently in the same year by K. E. Dambis, a doctorial student of Eugene Dynkin.
Dubins-Schwarz theorem.
Let formula_0 denote the space of continuous local formula_1-martingales formula_2 with formula_3, and let formula_4 denote the quadratic variation of formula_2.
Statement.
Let formula_5 with formula_6, and define for all formula_7 the time-changes (i.e. stopping times)
formula_8
Then formula_9 is a formula_10-Brownian motion and formula_11. | [
{
"math_id": 0,
"text": "\\mathcal{M}_{0,\\operatorname{loc}}^{c}"
},
{
"math_id": 1,
"text": "\\mathcal{F}_t"
},
{
"math_id": 2,
"text": "M=(M_t)_{t\\geq 0}"
},
{
"math_id": 3,
"text": "M_0=0"
},
{
"math_id": 4,
"text": "\\langle M\\rangle"
},
{
"math_id": 5,
"text": "M\\in \\mathcal{M}_{0,\\operatorname{loc}}^{c}"
},
{
"math_id": 6,
"text": "\\langle M\\rangle_{\\infty}=\\infty"
},
{
"math_id": 7,
"text": "t\\geq 0"
},
{
"math_id": 8,
"text": "T_t=\\inf \\{s:\\langle M\\rangle_s>t\\}."
},
{
"math_id": 9,
"text": "B:=(B_t):=(M_{T_t})"
},
{
"math_id": 10,
"text": "\\mathcal{F}_{T_t}"
},
{
"math_id": 11,
"text": "(M_t)=(B_{\\langle M\\rangle_t})"
},
{
"math_id": 12,
"text": "B"
},
{
"math_id": 13,
"text": "\\mathcal{F}_{t}"
},
{
"math_id": 14,
"text": "(T_t)"
}
]
| https://en.wikipedia.org/wiki?curid=74144071 |
74151415 | Fresh variable | In formal reasoning, in particular in mathematical logic, computer algebra, and automated theorem proving, a fresh variable is a variable that did not occur in the context considered so far.
The concept is often used without explanation.
Fresh variables may be used to replace other variables, to eliminate variable shadowing or capture. For instance, in alpha-conversion, the processing of terms in the lambda calculus into equivalent terms with renamed variables, replacing variables with fresh variables can be helpful as a way to avoid accidentally capturing variables that should be free. Another use for fresh variables involves the development of loop invariants in formal program verification, where it is sometimes useful to replace constants by newly introduced fresh variables.
Example.
For example, in term rewriting, before applying a rule formula_0 to a given term formula_1, each variable in formula_0 should be replaced by a fresh one to avoid clashes with variables occurring in formula_1.
Given the rule
formula_2
and the term
formula_3,
attempting to find a matching substitution of the rule's left-hand side, formula_4, within formula_3 will fail, since formula_5 cannot match formula_6.
However, if the rule is replaced by a "fresh copy"
formula_7
before, matching will succeed with the answer substitution formula_8.
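A small Python sketch of this renaming-then-matching step (the term encoding and helper names are ad hoc choices for illustration):

```python
import itertools

# A variable is ('var', name); any other term is (symbol, arg1, arg2, ...).
fresh_counter = itertools.count(1)

def rename_fresh(term, mapping):
    # Replace every variable by a fresh one; `mapping` keeps repeated
    # occurrences of the same variable identified.
    if term[0] == 'var':
        if term[1] not in mapping:
            mapping[term[1]] = ('var', f'v{next(fresh_counter)}')
        return mapping[term[1]]
    return (term[0],) + tuple(rename_fresh(arg, mapping) for arg in term[1:])

def match(pattern, term, subst):
    # One-sided syntactic matching of `pattern` against `term`.
    if pattern[0] == 'var':
        if pattern in subst:
            return subst[pattern] == term
        subst[pattern] = term
        return True
    return (pattern[0] == term[0] and len(pattern) == len(term)
            and all(match(p, u, subst) for p, u in zip(pattern[1:], term[1:])))

# Left-hand side append(cons(x, y), z) and the term from the example above.
lhs  = ('append', ('cons', ('var', 'x'), ('var', 'y')), ('var', 'z'))
term = ('append', ('cons', ('var', 'x'), ('cons', ('var', 'y'), ('nil',))),
        ('cons', ('3',), ('nil',)))

fresh_lhs = rename_fresh(lhs, {})        # append(cons(v1, v2), v3)
subst = {}
print(match(fresh_lhs, term, subst))     # True
print(subst)                             # v1 -> x, v2 -> cons(y, nil), v3 -> cons(3, nil)
```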
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "l \\to r"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "append(cons(x,y),z) \\to cons(x,append(y,z))"
},
{
"math_id": 3,
"text": "append(cons(x,cons(y,nil)),cons(3,nil))"
},
{
"math_id": 4,
"text": "append(cons(x,y),z)"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "cons(y,nil)"
},
{
"math_id": 7,
"text": "append(cons(v_1,v_2),v_3) \\to cons(v_1,append(v_2,v_3))"
},
{
"math_id": 8,
"text": "\\{ v_2 \\mapsto x, \\; v_2 \\mapsto cons(y,nil), \\; v_3 \\mapsto cons(3,nil) \\}"
}
]
| https://en.wikipedia.org/wiki?curid=74151415 |
74151612 | Stein-Rosenberg theorem | The Stein-Rosenberg theorem, proved in 1948, states that under certain premises, the Jacobi method and the Gauss-Seidel method are either both convergent, or both divergent. If they are convergent, then the Gauss-Seidel is asymptotically faster than the Jacobi method.
Statement.
Let formula_0. Let formula_1 be the spectral radius of a matrix formula_2. Write formula_14 = D − L − U, where D is the diagonal part of formula_14 and −L and −U are its strictly lower and strictly upper triangular parts, and let formula_3 and formula_4 be the resulting iteration matrices of the Jacobi method and the Gauss-Seidel method, respectively.
Theorem: If formula_5 for formula_6 and formula_7 for formula_8, then one and only one of the following mutually exclusive relations is valid: formula_9; formula_10; formula_11; formula_12.
Proof and applications.
The proof uses the Perron-Frobenius theorem for non-negative matrices. Its proof can be found in Richard S. Varga's 1962 book "Matrix Iterative Analysis".
In the words of Richard Varga:
the Stein-Rosenberg theorem gives us our first comparison theorem for two different iterative methods. Interpreted in a more practical way, not only is the point Gauss-Seidel iterative method computationally more convenient to use (because of storage requirements) than the point Jacobi iterative matrix, but it is also asymptotically faster when the Jacobi matrix formula_13 is non-negative
Employing more hypotheses on the matrix formula_14, one can even give quantitative results. For example, under certain conditions one can state that the Gauss-Seidel method is twice as fast as the Jacobi iteration.
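A numerical illustration with NumPy (the sample matrix, a small discrete Laplacian, is an arbitrary choice that satisfies the sign hypotheses of the theorem):

```python
import numpy as np

n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # a_ii > 0, a_ij <= 0 for i != j

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                      # so that A = D - L - U
U = -np.triu(A, 1)

T_J = np.linalg.solve(D, L + U)          # Jacobi iteration matrix D^{-1}(L + U)
T_1 = np.linalg.solve(D - L, U)          # Gauss-Seidel iteration matrix (D - L)^{-1} U

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(f"rho(T_J) = {rho(T_J):.4f}")      # about 0.9595 for n = 10
print(f"rho(T_1) = {rho(T_1):.4f}")      # about 0.9206: rho(T_1) < rho(T_J) < 1
# Here rho(T_1) equals rho(T_J)**2, so Gauss-Seidel is asymptotically twice as fast.
```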
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A=(a_{ij})\\in\\mathbb{R}^{n\\times n}"
},
{
"math_id": 1,
"text": "\\rho(X)"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "T_J=D^{-1}(L+U)"
},
{
"math_id": 4,
"text": "T_1=(D-L)^{-1}U"
},
{
"math_id": 5,
"text": "a_{ij}\\le 0"
},
{
"math_id": 6,
"text": "i\\ne j"
},
{
"math_id": 7,
"text": "a_{ii} > 0"
},
{
"math_id": 8,
"text": "i=1,\\ldots,n"
},
{
"math_id": 9,
"text": "\\rho(T_J) = \\rho(T_1) = 0"
},
{
"math_id": 10,
"text": "0 < \\rho(T_1) < \\rho(T_J) < 1"
},
{
"math_id": 11,
"text": "1=\\rho(T_J)=\\rho(T_1)"
},
{
"math_id": 12,
"text": "1 < \\rho(T_J) < \\rho(T_1)"
},
{
"math_id": 13,
"text": "T_J"
},
{
"math_id": 14,
"text": "A"
}
]
| https://en.wikipedia.org/wiki?curid=74151612 |
74158384 | Short circuit ratio (electrical grid) | Term in electrical engineering
In an electrical grid, the short circuit ratio (or SCR) is the ratio of the short circuit apparent power (SCMVA) available under a line-line-line-ground (3LG) fault at the location in the grid where a generator is connected, to the power rating of the generator itself (GMW). Since the power that can be delivered by the grid varies by location, frequently a location is indicated, for example, at the point of interconnection (POI):
formula_0
SCR is used to quantify the system strength of the grid (its ability to deal with changes in active and reactive power injection and consumption). On a simplified level, a high SCR indicates that the particular generator represents a small portion of the power available at the point of its connection to the grid, and therefore the generator problems cannot affect the grid in a significant way. SCMVA is defined as a product of the voltage before the 3LG fault and the current that would flow after the fault (this worst-case combination will not happen in practice, but provides a useful estimation of the capacity of the circuit). SCMVA is also called a short circuit level (SCL), although sometimes the term SCL is used to designate just the short-circuit current.
Grid strength.
The term grid strength (also system strength) is used to describe the resiliency of the grid to the small changes in the vicinity of the grid location (“grid stiffness”). From the side of an electrical generator, the system strength is related to the changes of voltage the generator encounters on its terminals as the generator's current injection varies. Therefore, the quantification of the system strength can be done through finding the equivalent (Thévenin) electrical impedance of the system as observed from these terminals (the strength is inversely proportional to the resistance). SCR and its variations provide a convenient way to calculate this impedance under normal or contingency conditions (these estimates are not intended for the actual short-circuit state).
Strong grids provide a reliable reference for power sources to synchronize. In a very stiff system the voltage does not change with variations of the power injected by a particular generator, making its control simpler. In a traditional grid dominated by synchronous generators, a strong grid with SCR greater than 3.0 will have the desired voltage stability and active power reserves. A weak grid (with SCR values between 2.0 and 3.0) can exhibit voltage instability and control problems. A grid with SCR below 2.0 is "very weak".
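As a minimal sketch of how these thresholds might be applied in practice (the function and the example figures below are hypothetical, not taken from any standard):

```python
# Classify grid strength at a point of interconnection from the short-circuit
# apparent power (MVA) and the generator rating (MW), using the thresholds
# quoted above: strong > 3, weak 2-3, very weak < 2.
def classify_scr(scmva_poi: float, gmw: float) -> str:
    scr = scmva_poi / gmw
    if scr > 3.0:
        return f"SCR = {scr:.2f}: strong grid"
    if scr >= 2.0:
        return f"SCR = {scr:.2f}: weak grid"
    return f"SCR = {scr:.2f}: very weak grid"

print(classify_scr(scmva_poi=600.0, gmw=100.0))   # SCR = 6.00: strong grid
print(classify_scr(scmva_poi=180.0, gmw=100.0))   # SCR = 1.80: very weak grid
```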
Importance of overcurrent.
Grid strength is also important for its overcurrent capabilities that are essential for the power system operations. Lack of overcurrent capability (low SCR) in a weak grid creates a multitude of problems, including:
Presence of inverter-based resources.
Large penetration of the inverter-based resources (IBRs) reduced the short circuit level: a typical synchronous generator can deliver a significant overcurrent, 2-5 p.u., for a relatively long time (minutes), while the component limitations of the IBRs result in overcurrent limits of less than 2 p.u. (usually 1.1-1.2 p.u.).
The original SCR definition above was intended for a system with predominantly synchronous generation, so multiple alternative metrics, including "weighted short circuit ratio" (WSCR), "composite short circuit ratio" (CSCR), "equivalent circuit short circuit ratio" (ESCR), and "short circuit ratio with interaction factors" (SCRIF), have been proposed for the grids with multiple adjacent IBRs to avoid an overestimation of the grid strength (an IBR relies on grid strength to synchronize its operation and does not have much overcurrent capacity).
Henderson et al. argue that in case of IBRs the SCR and system strength are in fact decoupled and propose a new metric, "grid strength impedance".
Integrating renewable energy sources often raises concerns about the system's strength. The ability of different components in a power system to perform effectively depends on the system's strength, which measures the system variables' sensitivity to disturbances. The short circuit ratio (SCR) is an indicator of the strength of a network bus relative to the rated power of a device and is frequently used as a measure of system strength. A higher SCR value indicates a stronger system, meaning that the impact of disturbances on voltage and other variables will be minimized. A strong system is defined as having an SCR above three; weak systems have SCRs between two and three, and very weak systems have SCRs below two.
Power electronic applications often encounter issues related to SCR, particularly in renewable energy systems that use power converters to connect to power grids. When connecting HVDC/FACTs devices based on current source converters to weak AC systems, particular technologies must be employed to overcome SCR of less than three. For HVDC, voltage-source-based converters or capacitor-commutated converters are utilized in applications with SCR near one. Failing to use these technologies will require special studies to determine the impact and take measures to prevent or minimize the adverse effects, as low levels of SCR can cause problems such as high over-voltages, low-frequency resonances, and instability in control systems.
Wind farms are commonly linked to less robust network sections away from the main power consumption areas. Problems with voltage stability that arise from incorporating large-scale wind power into vulnerable systems are crucial issues that require attention. Some wind turbines have specific minimum system strength criteria. GE indicates that the standard parameters of their wind turbine model are appropriate for systems with a Short Circuit Ratio (SCR) of five or higher. However, if connecting to weaker systems, it is necessary to carry out further analysis to guarantee that the model parameters are adequately adjusted. Specifically designed control methods for wind turbines or dynamic reactive compensation devices, such as STATCOM, are required to ensure optimal performance.
Example.
An experience at ERCOT in the early 21st century provides a prime example of how a wind turbine's performance is affected by weak system strength. The wind power plant (WPP), linked to the ERCOT grid through two 69kV transmission lines, worked efficiently when the SCR was around 4 during normal operations. However, when one of the 69kV lines was disconnected, the SCR dropped to 2 or less, leading to unfavorable, poorly damped, or un-damped voltage oscillations that were documented by PMUs at the Point of Interconnection (POI) of the wind plant. After a thorough investigation, it was determined that the aggressive voltage control used by the WPP was not appropriate for a weak grid environment and was the primary cause of the oscillatory response. The oscillation occurred due to the low short circuit level detected by the wind generator voltage controller and the high voltage control gain. Compared to a normal grid with high SCR, the closed loop voltage control has a faster response under weak grid conditions. To replicate the oscillatory response, the event was simulated using a detailed dynamic model representing the WPP.
Impact on grid.
The SCR can be calculated for each point on an electrical grid. A point on a grid having a number of machines with an SCR above a number between 1 and 1.5 has less vulnerability to voltage instability; hence, such a grid is known as a strong grid or power system. A power system (grid) having a lower SCR has more vulnerability to grid voltage instability; hence such a grid or system is known as a weak grid or a weak power system.
Grid strength can be increased by installing synchronous condensers.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "SCR_{POI} = \\frac {SCMVA_{POI}} {GMW}"
}
]
| https://en.wikipedia.org/wiki?curid=74158384 |
74158460 | Short circuit ratio (synchronous generator) | In an electromechanical generator, the short circuit ratio is the ratio of field current required to produce rated armature voltage at the open circuit to the field current required to produce the rated armature current at short circuit. This ratio can also be expressed as an inverse of the saturated direct-axis synchronous reactance (in p.u.):
formula_0
Effects of SCR values.
Higher SCR requires a lower reactance formula_1, which in practice means a larger air gap.
Both high and low levels of SCR have their benefits:
Therefore, in practice the design of a generator is seeking an SCR that balances benefits and drawbacks for a particular application.
Effects of construction.
The larger the SCR, the smaller the alternator reactance (Xd) and inductance Ld. This is the result of larger air gaps in the generator design (as in hydro generators or salient-pole machines). The machine is loosely coupled to the grid, and its response will be slow. This increases the machine's stability while operating on the grid, but simultaneously increases its short circuit current delivery capability (higher short circuit current) and, subsequently, the machine size and cost. Typical values of SCR for hydro alternators may be in the range of 1 to 1.5.
Conversely, the smaller the SCR, the larger the alternator's reactance (Xd) and inductance Ld. This results from small air gaps in the machine design (as in turbo generators or cylindrical-rotor machines). Such machines are tightly coupled to the grid, and their response will be fast. This reduces the machine's stability while operating on the grid and reduces the short circuit current delivery capability (lower short circuit current), but leads to a smaller machine size and lower cost. Typical values of SCR for turbo alternators may be in the range of 0.45 to 0.9.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "SCR = \\frac 1 {X_S}"
},
{
"math_id": 1,
"text": "X_S"
}
]
| https://en.wikipedia.org/wiki?curid=74158460 |
7415899 | Sequential quadratic programming | Optimization algorithm
Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable, but not necessarily convex.
SQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem.
Algorithm basics.
Consider a nonlinear programming problem of the form:
formula_0
The Lagrangian for this problem is
formula_1
where formula_2 and formula_3 are Lagrange multipliers.
The standard Newton's Method searches for the solution formula_4 by iterating the following equation, where formula_5 denotes the Hessian matrix:
formula_6.
However, because the matrix formula_7 is generally singular (and therefore non-invertible), the Newton step formula_8 cannot be calculated directly. Instead the basic sequential quadratic programming algorithm defines an appropriate search direction formula_9 at an iterate formula_10, as a solution to the quadratic programming subproblem
formula_11
where the quadratic form is formed with the Hessian of the Lagrangian.
Note that the term formula_12 in the expression above may be left out for the minimization problem, since it is constant under the formula_13 operator.
In summary, the SQP algorithm starts by choosing the initial iterate formula_14, then calculating formula_15 and formula_16. The QP subproblem is then built and solved to find the Newton step direction formula_17, which is used to update the parent-problem iterate using formula_18. This process is repeated for formula_19 until the parent problem satisfies a convergence test.
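A minimal sketch of this basic iteration on an invented equality-constrained toy problem is given below (it omits inequality handling, globalization such as a line search or trust region, and quasi-Newton Hessian approximations, all of which a practical SQP code would include):

```python
import numpy as np

# Toy problem (hypothetical): minimize x1^4 + x2^2 subject to x1 + x2 - 1 = 0.
def grad_f(x): return np.array([4 * x[0]**3, 2 * x[1]])
def hess_L(x): return np.diag([12 * x[0]**2, 2.0])   # constraint is linear, so it adds nothing
def g(x):      return np.array([x[0] + x[1] - 1.0])
def jac_g(x):  return np.array([[1.0, 1.0]])

x = np.array([2.0, 2.0])
for k in range(25):
    H, A = hess_L(x), jac_g(x)
    # KKT system of the QP subproblem: [H A^T; A 0] [d; sigma_new] = [-grad f; -g]
    KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f(x), -g(x)])
    sol = np.linalg.solve(KKT, rhs)
    d = sol[:2]
    x = x + d
    if np.linalg.norm(d) < 1e-12:
        break

print(x, g(x))   # converges to a constrained stationary point on x1 + x2 = 1
```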
Practical implementations.
Practical implementations of the SQP algorithm are significantly more complex than its basic version above. To adapt SQP for real-world applications, the following challenges must be addressed:
To overcome these challenges, various strategies are typically employed:
These strategies can be combined in numerous ways, resulting in a diverse range of SQP methods.
Implementations.
SQP methods have been implemented in well known numerical environments such as MATLAB and GNU Octave. There also exist numerous software libraries, including open source:
and commercial
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{rl}\n\\min\\limits_{x} & f(x) \\\\\n\\mbox{subject to} & h(x) \\ge 0 \\\\\n & g(x) = 0.\n\\end{array}"
},
{
"math_id": 1,
"text": "\\mathcal{L}(x,\\lambda,\\sigma) = f(x) + \\lambda h(x) + \\sigma g(x),"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "\\nabla \\mathcal{L}(x,\\lambda,\\sigma) =0 "
},
{
"math_id": 5,
"text": " \\nabla^2_{xx}"
},
{
"math_id": 6,
"text": " \\begin{bmatrix}\n x_{k+1} \\\\\n \\lambda_{k+1} \\\\\n \\sigma_{k+1}\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n x_k \\\\\n \\lambda_k \\\\\n \\sigma_k\n \\end{bmatrix}\n -\n \\underbrace{\n \\begin{bmatrix}\n \\nabla^2_{xx} \\mathcal{L} & \\nabla h & \\nabla g \\\\\n \\nabla h^T & 0 & 0 \\\\\n \\nabla g^{T} & 0 & 0\n \\end{bmatrix}^{-1}\n }_{\\nabla^2 \\mathcal{L}}\n \\underbrace{\n \\begin{bmatrix}\n \\nabla f + \\lambda_k \\nabla h + \\sigma_k \\nabla g \\\\\n h \\\\\n g\n \\end{bmatrix}\n }_{\\nabla \\mathcal{L}}"
},
{
"math_id": 7,
"text": " \\nabla^2 \\mathcal{L}"
},
{
"math_id": 8,
"text": " d_k = \\left( \\nabla^2_{xx} \\mathcal{L} \\right)^{-1} \\nabla \\mathcal{L} "
},
{
"math_id": 9,
"text": "d_k"
},
{
"math_id": 10,
"text": "(x_k, \\lambda_k, \\sigma_k)"
},
{
"math_id": 11,
"text": "\\begin{array}{rl} \\min\\limits_{d} & f(x_k) + \\nabla f(x_k)^Td + \\tfrac{1}{2} d^T \\nabla^2_{xx} \\mathcal{L}(x_k,\\lambda_k,\\sigma_k) d \\\\\n\\mathrm{s.t.} & h(x_k) + \\nabla h(x_k)^Td \\ge 0 \\\\\n & g(x_k) + \\nabla g(x_k)^T d = 0. \\end{array}"
},
{
"math_id": 12,
"text": "f(x_k)"
},
{
"math_id": 13,
"text": "\\min\\limits_{d}"
},
{
"math_id": 14,
"text": "(x_0, \\lambda_0, \\sigma_0)"
},
{
"math_id": 15,
"text": " \\nabla^2 \\mathcal{L}(x_0, \\lambda_0, \\sigma_0)"
},
{
"math_id": 16,
"text": " \\nabla \\mathcal{L}(x_0, \\lambda_0, \\sigma_0)"
},
{
"math_id": 17,
"text": "d_0"
},
{
"math_id": 18,
"text": "\\left[ x_{k+1}, \\lambda_{k+1}, \\sigma_{k+1} \\right]^T = \n\\left[ x_{k}, \\lambda_{k}, \\sigma_{k} \\right]^T + d_k"
},
{
"math_id": 19,
"text": "k = 0, 1, 2, \\ldots"
}
]
| https://en.wikipedia.org/wiki?curid=7415899 |
7416843 | Peskin–Takeuchi parameter | In particle physics, the Peskin–Takeuchi parameters are a set of three measurable quantities, called "S", "T", and "U", that parameterize potential new physics contributions to electroweak radiative corrections. They are named after physicists Michael Peskin and Tatsu Takeuchi, who proposed the parameterization in 1990; proposals from two other groups (see References below) came almost simultaneously.
The Peskin–Takeuchi parameters are defined so that they are all equal to zero at a "reference point" in the Standard Model, with a particular value chosen for the (then unmeasured) Higgs boson mass. The parameters are then extracted from a global fit to the high-precision electroweak data from particle collider experiments (mostly the Z pole data from the CERN LEP collider) and atomic parity violation.
The measured values of the Peskin–Takeuchi parameters agree with the Standard Model. They can then be used to constrain models of new physics beyond the Standard Model. The Peskin–Takeuchi parameters are only sensitive to new physics that contributes to the oblique corrections, i.e., the vacuum polarization corrections to four-fermion scattering processes.
Definitions.
The Peskin–Takeuchi parameterization is based on the following assumptions about the nature of the new physics:
With these assumptions, the oblique corrections can be parameterized in terms of four vacuum polarization functions: the self-energies of the photon, Z boson, and W boson, and the mixing between the photon and the Z boson induced by loop diagrams.
Assumption number 3 above allows us to expand the vacuum polarization functions in powers of q2/M2, where M represents the heavy mass scale of the new interactions, and keep only the constant and linear terms in q2. We have,
formula_0
formula_1
formula_2
formula_3
where formula_4 denotes the derivative of the vacuum polarization function with respect to q2. The constant pieces of formula_5 and formula_6 are zero because of the renormalization conditions. We thus have six parameters to deal with. Three of these may be absorbed into the renormalization of the three input parameters of the electroweak theory, which are usually chosen to be the fine structure constant formula_7, as determined from quantum electrodynamic measurements (there is a significant running of α between the scale of the mass of the electron and the electroweak scale and this needs to be corrected for), the Fermi coupling constant GF, as determined from the muon decay which measures the weak current coupling strength at close to zero momentum transfer, and the Z boson mass MZ, leaving three left over which are measurable. This is because we are not able to determine which contribution comes from the Standard Model proper and which contribution comes from physics beyond the Standard Model (BSM) when measuring these three parameters. To us, the low energy processes could have equally well come from a pure Standard Model with redefined values of e, GF and MZ. These remaining three are the Peskin–Takeuchi parameters "S, T" and "U", and are defined as:
formula_8
formula_9
formula_10
where sw and cw are the sine and cosine of the weak mixing angle, respectively. The definitions are carefully chosen so that
References.
The following papers constitute the original proposals for the "S, T, U" parameters:
The first detailed global fits were presented in:
For a review, see: | [
{
"math_id": 0,
"text": "\\Pi_{\\gamma\\gamma}(q^2) = q^2 \\Pi_{\\gamma\\gamma}^{\\prime}(0) + ..."
},
{
"math_id": 1,
"text": "\\Pi_{Z \\gamma}(q^2) = q^2 \\Pi_{Z \\gamma}^{\\prime}(0) + ..."
},
{
"math_id": 2,
"text": "\\Pi_{ZZ}(q^2) = \\Pi_{ZZ}(0) + q^2 \\Pi_{ZZ}^{\\prime}(0) + ..."
},
{
"math_id": 3,
"text": "\\Pi_{WW}(q^2) = \\Pi_{WW}(0) + q^2 \\Pi_{WW}^{\\prime}(0) + ..."
},
{
"math_id": 4,
"text": "\\Pi^{\\prime}"
},
{
"math_id": 5,
"text": "\\Pi_{\\gamma\\gamma}"
},
{
"math_id": 6,
"text": "\\Pi_{Z \\gamma}"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "\\alpha S = 4 s_w^2 c_w^2 \\left[ \\Pi_{ZZ}^{\\prime}(0) - \\frac{c_w^2 - s_w^2}{s_w c_w} \\Pi_{Z \\gamma}^{\\prime}(0) - \\Pi_{\\gamma \\gamma}^{\\prime}(0) \\right]"
},
{
"math_id": 9,
"text": "\\alpha T = \\frac{\\Pi_{WW}(0)}{M_W^2} - \\frac{\\Pi_{ZZ}(0)}{M_Z^2}"
},
{
"math_id": 10,
"text": "\\alpha U = 4 s_w^2 \\left[ \\Pi_{WW}^{\\prime}(0) - c_w^2 \\Pi_{ZZ}^{\\prime}(0) - 2 s_w c_w \\Pi_{Z \\gamma}^{\\prime}(0) - s_w^2 \\Pi_{\\gamma\\gamma}^{\\prime}(0) \\right]"
},
{
"math_id": 11,
"text": "\\left|H^\\dagger D_\\mu H\\right|^2/\\Lambda^2"
},
{
"math_id": 12,
"text": "H^\\dagger W^{\\mu\\nu}B_{\\mu\\nu}H/\\Lambda^2"
},
{
"math_id": 13,
"text": "H^\\dagger B^{\\mu\\nu}B_{\\mu\\nu}H/\\Lambda^2"
},
{
"math_id": 14,
"text": "H^\\dagger W^{\\mu\\nu}W_{\\mu\\nu}H/\\Lambda^2"
},
{
"math_id": 15,
"text": "\\left(H^\\dagger W^{\\mu\\nu}H\\right)\\left(H^\\dagger W_{\\mu\\nu}H\\right)/\\Lambda^4"
}
]
| https://en.wikipedia.org/wiki?curid=7416843 |
74175376 | Type and cotype of a Banach space | In functional analysis, the type and cotype of a Banach space are a classification of Banach spaces through probability theory, measuring how far a Banach space is from a Hilbert space.
The starting point is the Pythagorean identity for orthogonal vectors formula_0 in Hilbert spaces
formula_1
This identity no longer holds in general Banach spaces; however, one can introduce a notion of orthogonality probabilistically with the help of Rademacher random variables. For this reason one also speaks of "Rademacher type" and "Rademacher cotype".
The notion of type and cotype was introduced by French mathematician Jean-Pierre Kahane.
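The probabilistic notion of orthogonality can be checked numerically: averaging over all Rademacher sign patterns, the squared norm of the signed sum equals the sum of squared norms for a Euclidean (Hilbert-space) norm, but not in general for other norms. A small sketch with arbitrarily chosen vectors:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(size=(4, 3))                              # four arbitrary vectors in R^3

def expected_norm_sq(xs, norm):
    n = len(xs)
    total = 0.0
    for signs in itertools.product([-1, 1], repeat=n):    # all 2^n Rademacher sign patterns
        s = sum(eps * x for eps, x in zip(signs, xs))
        total += norm(s) ** 2
    return total / 2 ** n

l2 = lambda v: np.linalg.norm(v, 2)
l1 = lambda v: np.linalg.norm(v, 1)

print(expected_norm_sq(xs, l2), sum(l2(x) ** 2 for x in xs))   # these two agree
print(expected_norm_sq(xs, l1), sum(l1(x) ** 2 for x in xs))   # these generally differ
```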
Definition.
Let formula_2 be a Banach space and let formula_3 be a sequence of independent Rademacher random variables, i.e. formula_4, so that formula_5 for formula_6 and formula_7.
Type.
formula_8 is of type formula_9 for formula_10 if there exist a finite constant formula_11 such that
formula_12
for all finite sequences formula_13. The sharpest constant formula_14 is called "type formula_9 constant" and denoted as formula_15.
Cotype.
formula_8 is of cotype formula_16 for formula_17 if there exist a finite constant formula_11 such that
formula_18
respectively
formula_19
for all finite sequences formula_13. The sharpest constant formula_14 is called "cotype formula_16 constant" and denoted as formula_20.
Remarks.
By taking the formula_9-th resp. formula_16-th root, one obtains the corresponding inequalities in terms of the Bochner formula_21 norm.
Properties.
If a Banach space: | [
{
"math_id": 0,
"text": "(e_k)_{k=1}^{n}"
},
{
"math_id": 1,
"text": "\\left\\|\\sum_{k=1}^n e_k \\right\\|^2 = \\sum_{k=1}^n \\left\\|e_k\\right\\|^2."
},
{
"math_id": 2,
"text": "(X,\\|\\cdot\\|)"
},
{
"math_id": 3,
"text": "(\\varepsilon_i)"
},
{
"math_id": 4,
"text": "P(\\varepsilon_i=-1)=P(\\varepsilon_i=1)=1/2"
},
{
"math_id": 5,
"text": "\\mathbb{E}[\\varepsilon_i\\varepsilon_m]=0"
},
{
"math_id": 6,
"text": "i\\neq m"
},
{
"math_id": 7,
"text": "\\operatorname{Var}[\\varepsilon_i]=1"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "p\\in [1,2]"
},
{
"math_id": 11,
"text": "C \\geq 1"
},
{
"math_id": 12,
"text": "\\mathbb{E}_{\\varepsilon}\\left[\\left\\|\\sum\\limits_{i=1}^n \\varepsilon_i x_i \\right\\|^p\\right]\\leq C^p\\left(\\sum\\limits_{i=1}^n \\|x_i\\|^p\\right)"
},
{
"math_id": 13,
"text": "(x_i)_{i=1}^n \\in X^{n}"
},
{
"math_id": 14,
"text": "C"
},
{
"math_id": 15,
"text": "T_p(X)"
},
{
"math_id": 16,
"text": "q"
},
{
"math_id": 17,
"text": "q\\in [2,\\infty]"
},
{
"math_id": 18,
"text": "\\mathbb{E}_{\\varepsilon}\\left[\\left\\|\\sum\\limits_{i=1}^n \\varepsilon_i x_i\\right\\|^q \\right]\\geq \\frac{1}{C^q}\\left(\\sum\\limits_{i=1}^n \\|x_i\\|^q\\right), \\quad\\text{if}\\; 2\\leq q <\\infty"
},
{
"math_id": 19,
"text": "\\mathbb{E}_{\\varepsilon}\\left[\\left\\|\\sum\\limits_{i=1}^n \\varepsilon_i x_i\\right\\| \\right]\\geq \\frac{1}{C}\\sup\\|x_i\\|, \\quad\\text{if}\\; q=\\infty"
},
{
"math_id": 20,
"text": "C_q(X)"
},
{
"math_id": 21,
"text": "L^p"
},
{
"math_id": 22,
"text": "1"
},
{
"math_id": 23,
"text": "2"
},
{
"math_id": 24,
"text": "p'\\in [1,p]"
},
{
"math_id": 25,
"text": "q'\\in [q,\\infty]"
},
{
"math_id": 26,
"text": "1<p\\leq 2"
},
{
"math_id": 27,
"text": "X^*"
},
{
"math_id": 28,
"text": "p^*"
},
{
"math_id": 29,
"text": "p^* :=(1-1/p)^{-1}"
},
{
"math_id": 30,
"text": "C_{p^*}(X^*)\\leq T_p(X)"
},
{
"math_id": 31,
"text": "L^1"
},
{
"math_id": 32,
"text": "L^2"
},
{
"math_id": 33,
"text": "p\\in [2,\\infty)"
},
{
"math_id": 34,
"text": "L^{\\infty}"
},
{
"math_id": 35,
"text": "\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=74175376 |
741759 | Chernoff bound | Exponentially decreasing bounds on tail distributions of random variables
In probability theory, a Chernoff bound is an exponentially decreasing upper bound on the tail of a random variable based on its moment generating function. The minimum of all such exponential bounds forms "the" Chernoff or Chernoff-Cramér bound, which may decay faster than exponential (e.g. sub-Gaussian). It is especially useful for sums of independent random variables, such as sums of Bernoulli random variables.
The bound is commonly named after Herman Chernoff who described the method in a 1952 paper, though Chernoff himself attributed it to Herman Rubin. In 1938 Harald Cramér had published an almost identical concept now known as Cramér's theorem.
It is a sharper bound than the first- or second-moment-based tail bounds such as Markov's inequality or Chebyshev's inequality, which only yield power-law bounds on tail decay. However, when applied to sums the Chernoff bound requires the random variables to be independent, a condition that is not required by either Markov's inequality or Chebyshev's inequality.
The Chernoff bound is related to the Bernstein inequalities. It is also used to prove Hoeffding's inequality, Bennett's inequality, and McDiarmid's inequality.
Generic Chernoff bounds.
The generic Chernoff bound for a random variable formula_0 is attained by applying Markov's inequality to formula_1 (which is why it is sometimes called the "exponential Markov" or "exponential moments" bound). For positive formula_2 this gives a bound on the right tail of formula_0 in terms of its moment-generating function formula_3:
formula_4
Since this bound holds for every positive formula_2, we may take the infimum:
formula_5
Performing the same analysis with negative formula_2 we get a similar bound on the left tail:
formula_6
and
formula_7
The quantity formula_8 can be expressed as the expectation value formula_9, or equivalently formula_10.
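As an illustration (a sketch with arbitrarily chosen parameters), the infimum over "t" can be evaluated numerically for a distribution with a known moment generating function and compared with the exact tail probability:

```python
import numpy as np
from scipy import stats, optimize

# Generic Chernoff bound for a Poisson variable with mean 5 at threshold a = 12.
lam, a = 5.0, 12.0
M = lambda t: np.exp(lam * (np.exp(t) - 1.0))        # Poisson moment generating function

res = optimize.minimize_scalar(lambda t: M(t) * np.exp(-t * a),
                               bounds=(1e-6, 10.0), method="bounded")
chernoff_bound = res.fun
exact_tail = stats.poisson.sf(a - 1, lam)            # P(X >= a)
print(f"Chernoff bound {chernoff_bound:.3e} vs exact tail {exact_tail:.3e}")
```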
Properties.
The exponential function is convex, so by Jensen's inequality formula_11. It follows that the bound on the right tail is greater than or equal to one when formula_12, and therefore trivial; similarly, the left bound is trivial for formula_13. We may therefore combine the two infima and define the two-sided Chernoff bound: formula_14 which provides an upper bound on the folded cumulative distribution function of formula_0 (folded at the mean, not the median).
The logarithm of the two-sided Chernoff bound is known as the rate function (or "Cramér transform") formula_15. It is equivalent to the Legendre–Fenchel transform or convex conjugate of the cumulant generating function formula_16, defined as: formula_17 The moment generating function is log-convex, so by a property of the convex conjugate, the Chernoff bound must be log-concave. The Chernoff bound attains its maximum at the mean, formula_18, and is invariant under translation: formula_19.
The Chernoff bound is exact if and only if formula_0 is a single concentrated mass (degenerate distribution). The bound is tight only at or beyond the extremes of a bounded random variable, where the infima are attained for infinite formula_2. For unbounded random variables the bound is nowhere tight, though it is asymptotically tight up to sub-exponential factors ("exponentially tight"). Individual moments can provide tighter bounds, at the cost of greater analytical complexity.
In practice, the exact Chernoff bound may be unwieldy or difficult to evaluate analytically, in which case a suitable upper bound on the moment (or cumulant) generating function may be used instead (e.g. a sub-parabolic CGF giving a sub-Gaussian Chernoff bound).
Lower bounds from the MGF.
Using only the moment generating function, a lower bound on the tail probabilities can be obtained by applying the Paley-Zygmund inequality to formula_1, yielding: formula_20 (a bound on the left tail is obtained for negative formula_2). Unlike the Chernoff bound, however, this result is not exponentially tight.
Theodosopoulos constructed a tight(er) MGF-based lower bound using an exponential tilting procedure.
For particular distributions (such as the binomial) lower bounds of the same exponential order as the Chernoff bound are often available.
Sums of independent random variables.
When X is the sum of n independent random variables "X"1, ..., "Xn", the moment generating function of X is the product of the individual moment generating functions, giving that:
\Pr (X \geq a) \leq \inf_{t > 0} e^{-ta} \prod_i \operatorname E \left[e^{t X_i} \right] \qquad (1)
and:
formula_21
Specific Chernoff bounds are attained by calculating the moment-generating function formula_22 for specific instances of the random variables formula_23.
When the random variables are also "identically distributed" (iid), the Chernoff bound for the sum reduces to a simple rescaling of the single-variable Chernoff bound. That is, the Chernoff bound for the "average" of "n" iid variables is equivalent to the "n"th power of the Chernoff bound on a single variable (see Cramér's theorem).
Sums of independent bounded random variables.
Chernoff bounds may also be applied to general sums of independent, bounded random variables, regardless of their distribution; this is known as Hoeffding's inequality. The proof follows a similar approach to the other Chernoff bounds, but applying Hoeffding's lemma to bound the moment generating functions (see Hoeffding's inequality).
Hoeffding's inequality. Suppose "X"1, ..., "Xn" are independent random variables taking values in [a,b]. Let X denote their sum and let "μ" = E["X"] denote the sum's expected value. Then for any formula_24,
formula_25
formula_26
Sums of independent Bernoulli random variables.
The bounds in the following sections for Bernoulli random variables are derived by using that, for a Bernoulli random variable formula_23 with probability "p" of being equal to 1,
formula_27
One can encounter many flavors of Chernoff bounds: the original "additive form" (which gives a bound on the absolute error) or the more practical "multiplicative form" (which bounds the error relative to the mean).
Multiplicative form (relative error).
Multiplicative Chernoff bound. Suppose "X"1, ..., "Xn" are independent random variables taking values in {0, 1}. Let X denote their sum and let "μ" = E["X"] denote the sum's expected value. Then for any "δ" > 0,
formula_28
A similar proof strategy can be used to show that for 0 < "δ" < 1
formula_29
The above formula is often unwieldy in practice, so the following looser but more convenient bounds are often used, which follow from the inequality formula_30 from the list of logarithmic inequalities:
formula_31
formula_32
formula_33
Notice that the bounds are trivial for formula_34.
In addition, based on the Taylor expansion for the Lambert W function,
formula_35
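A quick numerical comparison of the convenient two-sided bound with an exact binomial tail (a sketch; the parameters are arbitrary) might look as follows:

```python
import numpy as np
from scipy import stats

n, p, delta = 1000, 0.1, 0.3
mu = n * p
# Exact P(|X - mu| >= delta*mu) for X ~ Binomial(n, p) versus 2*exp(-delta^2*mu/3).
exact = stats.binom.cdf((1 - delta) * mu, n, p) + stats.binom.sf((1 + delta) * mu - 1, n, p)
bound = 2 * np.exp(-delta**2 * mu / 3)
print(f"exact two-sided tail {exact:.4f}, Chernoff bound {bound:.4f}")
```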
Additive form (absolute error).
The following theorem is due to Wassily Hoeffding and hence is called the Chernoff–Hoeffding theorem.
Chernoff–Hoeffding theorem. Suppose "X"1, ..., "Xn" are i.i.d. random variables, taking values in {0, 1}. Let "p" = E["X"1] and "ε" > 0.
formula_36
where
formula_37
is the Kullback–Leibler divergence between Bernoulli distributed random variables with parameters "x" and "y" respectively. If "p" ≥ 1/2, then formula_38, which means
formula_39
A simpler bound follows by relaxing the theorem using "D"("p" + "ε" || "p") ≥ 2"ε"², which follows from the convexity of "D"("p" + "ε" || "p") and the fact that
formula_40
This result is a special case of Hoeffding's inequality. Sometimes, the bounds
formula_41
which are stronger for "p" < 1/2, are also used.
Applications.
Chernoff bounds have very useful applications in set balancing and packet routing in sparse networks.
The set balancing problem arises while designing statistical experiments. Typically while designing a statistical experiment, given the features of each participant in the experiment, we need to know how to divide the participants into 2 disjoint groups such that each feature is roughly as balanced as possible between the two groups.
Chernoff bounds are also used to obtain tight bounds for permutation routing problems which reduce network congestion while routing packets in sparse networks.
Chernoff bounds are used in computational learning theory to prove that a learning algorithm is probably approximately correct, i.e. with high probability the algorithm has small error on a sufficiently large training data set.
Chernoff bounds can be effectively used to evaluate the "robustness level" of an application/algorithm by exploring its perturbation space with randomization.
The use of the Chernoff bound permits one to abandon the strong—and mostly unrealistic—small perturbation hypothesis (the perturbation magnitude is small). The robustness level can be, in turn, used either to validate or reject a specific algorithmic choice, a hardware implementation or the appropriateness of a solution whose structural parameters are affected by uncertainties.
A simple and common use of Chernoff bounds is for "boosting" of randomized algorithms. If one has an algorithm that outputs a guess that is the desired answer with probability "p" > 1/2, then one can get a higher success rate by running the algorithm formula_42 times and outputting a guess that is output by more than "n"/2 runs of the algorithm. (There cannot be more than one such guess.) Assuming that these algorithm runs are independent, the probability that more than "n"/2 of the guesses is correct is equal to the probability that the sum of independent Bernoulli random variables "Xk" that are 1 with probability "p" is more than "n"/2. This can be shown to be at least formula_43 via the multiplicative Chernoff bound (Corollary 13.3 in Sinclair's class notes, "μ" = "np"):
formula_44
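A sketch of this boosting argument, checking the guarantee empirically with arbitrarily chosen success probability and failure target, is given below:

```python
import numpy as np

rng = np.random.default_rng(0)
p, delta = 0.6, 0.01
n = int(np.ceil(np.log(1 / delta) * 2 * p / (p - 0.5) ** 2))    # number of runs

# Each boosted run succeeds if the majority of the n independent guesses is correct.
correct_counts = rng.binomial(n, p, size=100_000)
failure_rate = np.mean(correct_counts <= n / 2)
print(f"n = {n}, empirical failure rate {failure_rate:.5f} (target {delta})")
```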
Matrix Chernoff bound.
Rudolf Ahlswede and Andreas Winter introduced a Chernoff bound for matrix-valued random variables. The following version of the inequality can be found in the work of Tropp.
Let "M"1, ..., "Mt" be independent matrix valued random variables such that formula_45 and formula_46.
Let us denote by formula_47 the operator norm of the matrix formula_48. If formula_49 holds almost surely for all formula_50, then for every "ε" > 0
formula_51
Notice that in order to conclude that the deviation from 0 is bounded by "ε" with high probability, we need to choose a number of samples formula_52 proportional to the logarithm of formula_53. In general, unfortunately, a dependence on formula_54 is inevitable: take for example a diagonal random sign matrix of dimension formula_55. The operator norm of the sum of "t" independent samples is precisely the maximum deviation among "d" independent random walks of length "t". In order to achieve a fixed bound on the maximum deviation with constant probability, it is easy to see that "t" should grow logarithmically with "d" in this scenario.
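The diagonal random-sign example can be simulated directly; in the sketch below (dimensions chosen arbitrarily), the operator norm of the average is exactly the largest rescaled deviation among "d" independent ±1 random walks of length "t":

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 1000, 200
signs = rng.choice([-1.0, 1.0], size=(t, d))          # row i holds the diagonal of M_i
op_norm_of_mean = np.max(np.abs(signs.mean(axis=0)))  # operator norm of the averaged diagonal matrix
print(op_norm_of_mean, np.sqrt(2 * np.log(d) / t))    # comparable magnitudes
```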
The following theorem can be obtained by assuming "M" has low rank, in order to avoid the dependency on the dimensions.
Theorem without the dependency on the dimensions.
Let 0 < "ε" < 1 and "M" be a random symmetric real matrix with formula_56 and formula_57 almost surely. Assume that each element on the support of "M" has at most rank "r". Set
formula_58
If formula_59 holds almost surely, then
formula_60
where "M"1, ..., "Mt" are i.i.d. copies of "M".
Sampling variant.
The following variant of Chernoff's bound can be used to bound the probability that a majority in a population will become a minority in a sample, or vice versa.
Suppose there is a general population "A" and a sub-population "B" ⊆ "A". Mark the relative size of the sub-population (|"B"|/|"A"|) by "r".
Suppose we pick an integer "k" and a random sample "S" ⊂ "A" of size "k". Mark the relative size of the sub-population in the sample (|"B"∩"S"|/|"S"|) by "rS".
Then, for every fraction "d" ∈ [0,1]:
formula_61
In particular, if "B" is a majority in "A" (i.e. "r" > 0.5) we can bound the probability that "B" will remain majority in "S"("rS" > 0.5) by taking: "d" = 1 − 1/(2"r"):
formula_62
This bound is of course not tight at all. For example, when "r" = 0.5 we get a trivial bound Prob > 0.
Proofs.
Multiplicative form.
Following the conditions of the multiplicative Chernoff bound, let "X"1, ..., "Xn" be independent Bernoulli random variables, whose sum is "X", each having probability "pi" of being equal to 1. For a Bernoulli variable:
formula_63
So, using (1) with formula_64 for any formula_65 and where formula_66,
formula_67
If we simply set "t"
log(1 + "δ") so that "t" > 0 for "δ" > 0, we can substitute and find
formula_68
This proves the result desired.
Chernoff–Hoeffding theorem (additive form).
Let "q"
"p" + "ε". Taking "a"
"nq" in (1), we obtain:
formula_69
Now, knowing that Pr("Xi"
1)
"p", Pr("Xi"
0)
1 − "p", we have
formula_70
Therefore, we can easily compute the infimum, using calculus:
formula_71
Setting the equation to zero and solving, we have
formula_72
so that
formula_73
Thus,
formula_74
As "q"
"p" + "ε" > "p", we see that "t" > 0, so our bound is satisfied on t. Having solved for t, we can plug back into the equations above to find that
formula_75
We now have our desired result, that
formula_76
To complete the proof for the symmetric case, we simply define the random variable "Yi" = 1 − "Xi", apply the same proof, and plug it into our bound.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "e^{tX}"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "M(t) = \\operatorname E (e^{t X})"
},
{
"math_id": 4,
"text": "\\operatorname P \\left(X \\geq a \\right) = \\operatorname P \\left(e^{t X} \\geq e^{t a}\\right) \\leq M(t) e^{-t a} \\qquad (t > 0)"
},
{
"math_id": 5,
"text": "\\operatorname P \\left(X \\geq a\\right) \\leq \\inf_{t > 0} M(t) e^{-t a}"
},
{
"math_id": 6,
"text": "\\operatorname P \\left(X \\leq a \\right) = \\operatorname P \\left(e^{t X} \\geq e^{t a}\\right) \\leq M(t) e^{-t a} \\qquad (t < 0)"
},
{
"math_id": 7,
"text": "\\operatorname P \\left(X \\leq a\\right) \\leq \\inf_{t < 0} M(t) e^{-t a}"
},
{
"math_id": 8,
"text": "M(t) e^{-t a}"
},
{
"math_id": 9,
"text": "\\operatorname E (e^{t X}) e^{-t a}"
},
{
"math_id": 10,
"text": "\\operatorname E (e^{t (X-a)})"
},
{
"math_id": 11,
"text": "\\operatorname E (e^{t X}) \\ge e^{t \\operatorname E (X)}"
},
{
"math_id": 12,
"text": "a \\le \\operatorname E (X)"
},
{
"math_id": 13,
"text": "a \\ge \\operatorname E (X)"
},
{
"math_id": 14,
"text": "C(a) = \\inf_{t} M(t) e^{-t a} "
},
{
"math_id": 15,
"text": "I = -\\log C"
},
{
"math_id": 16,
"text": "K = \\log M"
},
{
"math_id": 17,
"text": "I(a) = \\sup_{t} at - K(t) "
},
{
"math_id": 18,
"text": "C(\\operatorname E(X))=1"
},
{
"math_id": 19,
"text": "C_{X+k}(a) = C_X(a - k) "
},
{
"math_id": 20,
"text": "\\operatorname P \\left(X > a\\right) \\geq \\sup_{t > 0 \\and M(t) \\geq e^{ta}} \\left( 1 - \\frac{e^{ta}}{M(t)} \\right)^2 \\frac{M(t)^2}{M(2t)}"
},
{
"math_id": 21,
"text": " \\Pr (X \\leq a) \\leq \\inf_{t < 0} e^{-ta} \\prod_i \\operatorname E \\left[e^{t X_i} \\right ]"
},
{
"math_id": 22,
"text": "\\operatorname E \\left[e^{-t\\cdot X_i} \\right ]"
},
{
"math_id": 23,
"text": "X_i"
},
{
"math_id": 24,
"text": "t>0"
},
{
"math_id": 25,
"text": "\\Pr (X \\le \\mu-t) < e^{-2t^2/(n(b-a)^2)},"
},
{
"math_id": 26,
"text": "\\Pr (X \\ge \\mu+t) < e^{-2t^2/(n(b-a)^2)}."
},
{
"math_id": 27,
"text": "\\operatorname E \\left[e^{t\\cdot X_i} \\right] = (1 - p) e^0 + p e^t = 1 + p (e^t -1) \\leq e^{p (e^t - 1)}."
},
{
"math_id": 28,
"text": "\\Pr ( X \\ge (1+\\delta)\\mu) \\leq \\left(\\frac{e^{\\delta}}{(1+\\delta)^{1+\\delta}}\\right)^\\mu."
},
{
"math_id": 29,
"text": "\\Pr(X \\le (1-\\delta)\\mu) \\leq \\left(\\frac{e^{-\\delta}}{(1-\\delta)^{1-\\delta}}\\right)^\\mu."
},
{
"math_id": 30,
"text": "\\textstyle\\frac{2\\delta}{2+\\delta} \\le \\log(1+\\delta)"
},
{
"math_id": 31,
"text": "\\Pr( X \\ge (1+\\delta)\\mu)\\le e^{-\\delta^2\\mu/(2+\\delta)}, \\qquad 0 \\le \\delta,"
},
{
"math_id": 32,
"text": "\\Pr( X \\le (1-\\delta)\\mu) \\le e^{-\\delta^2\\mu/2}, \\qquad 0 < \\delta < 1,"
},
{
"math_id": 33,
"text": "\\Pr( |X - \\mu| \\ge \\delta\\mu) \\le 2e^{-\\delta^2\\mu/3}, \\qquad 0 < \\delta < 1."
},
{
"math_id": 34,
"text": "\\delta = 0"
},
{
"math_id": 35,
"text": "\\Pr( X \\ge R)\\le 2^{-xR}, \\qquad x > 0, \\ R \\ge (2^x e -1)\\mu."
},
{
"math_id": 36,
"text": "\\begin{align}\n\\Pr \\left (\\frac{1}{n} \\sum X_i \\geq p + \\varepsilon \\right ) \\leq \\left (\\left (\\frac{p}{p + \\varepsilon}\\right )^{p+\\varepsilon} {\\left (\\frac{1 - p}{1-p- \\varepsilon}\\right )}^{1 - p- \\varepsilon}\\right )^n &= e^{-D(p+\\varepsilon\\parallel p) n} \\\\\n\\Pr \\left (\\frac{1}{n} \\sum X_i \\leq p - \\varepsilon \\right ) \\leq \\left (\\left (\\frac{p}{p - \\varepsilon}\\right )^{p-\\varepsilon} {\\left (\\frac{1 - p}{1-p+ \\varepsilon}\\right )}^{1 - p+ \\varepsilon}\\right )^n &= e^{-D(p-\\varepsilon\\parallel p) n}\n\\end{align}"
},
{
"math_id": 37,
"text": " D(x\\parallel y) = x \\ln \\frac{x}{y} + (1-x) \\ln \\left (\\frac{1-x}{1-y} \\right )"
},
{
"math_id": 38,
"text": "D(p+\\varepsilon\\parallel p)\\ge \\tfrac{\\varepsilon^2}{2p(1-p)}"
},
{
"math_id": 39,
"text": " \\Pr\\left ( \\frac{1}{n}\\sum X_i>p+x \\right ) \\leq \\exp \\left (-\\frac{x^2n}{2p(1-p)} \\right )."
},
{
"math_id": 40,
"text": "\\frac{d^2}{d\\varepsilon^2} D(p+\\varepsilon\\parallel p) = \\frac{1}{(p+\\varepsilon)(1-p-\\varepsilon) } \\geq 4 =\\frac{d^2}{d\\varepsilon^2}(2\\varepsilon^2)."
},
{
"math_id": 41,
"text": "\n\\begin{align}\nD( (1+x) p \\parallel p) \\geq \\frac{1}{4} x^2 p, & & & {-\\tfrac{1}{2}} \\leq x \\leq \\tfrac{1}{2},\\\\[6pt]\nD(x \\parallel y) \\geq \\frac{3(x-y)^2}{2(2y+x)}, \\\\[6pt]\nD(x \\parallel y) \\geq \\frac{(x-y)^2}{2y}, & & & x \\leq y,\\\\[6pt]\nD(x \\parallel y) \\geq \\frac{(x-y)^2}{2x}, & & & x \\geq y\n\\end{align}\n"
},
{
"math_id": 42,
"text": "n = \\log(1/\\delta) 2p/(p - 1/2)^2"
},
{
"math_id": 43,
"text": "1-\\delta"
},
{
"math_id": 44,
"text": "\\Pr\\left[X > {n \\over 2}\\right] \\ge 1 - e^{-n \\left(p - 1/2 \\right)^2/(2p)} \\geq 1-\\delta"
},
{
"math_id": 45,
"text": " M_i\\in \\mathbb{C}^{d_1 \\times d_2} "
},
{
"math_id": 46,
"text": " \\mathbb{E}[M_i]=0"
},
{
"math_id": 47,
"text": " \\lVert M \\rVert "
},
{
"math_id": 48,
"text": " M "
},
{
"math_id": 49,
"text": " \\lVert M_i \\rVert \\leq \\gamma "
},
{
"math_id": 50,
"text": " i\\in\\{1,\\ldots, t\\} "
},
{
"math_id": 51,
"text": "\\Pr\\left( \\left\\| \\frac{1}{t} \\sum_{i=1}^t M_i \\right\\| > \\varepsilon \\right) \\leq (d_1+d_2) \\exp \\left( -\\frac{3\\varepsilon^2 t}{8\\gamma^2} \\right)."
},
{
"math_id": 52,
"text": "t "
},
{
"math_id": 53,
"text": " d_1+d_2 "
},
{
"math_id": 54,
"text": " \\log(\\min(d_1,d_2)) "
},
{
"math_id": 55,
"text": "d\\times d "
},
{
"math_id": 56,
"text": "\\| \\operatorname E[M] \\| \\leq 1 "
},
{
"math_id": 57,
"text": "\\| M\\| \\leq \\gamma "
},
{
"math_id": 58,
"text": " t = \\Omega \\left( \\frac{\\gamma\\log (\\gamma/\\varepsilon^2)}{\\varepsilon^2} \\right)."
},
{
"math_id": 59,
"text": " r \\leq t "
},
{
"math_id": 60,
"text": "\\Pr\\left(\\left\\| \\frac{1}{t} \\sum_{i=1}^t M_i - \\operatorname E[M] \\right\\| > \\varepsilon \\right) \\leq \\frac{1}{\\mathbf{poly}(t)}"
},
{
"math_id": 61,
"text": "\\Pr\\left(r_S < (1-d)\\cdot r\\right) < \\exp\\left(-r\\cdot d^2 \\cdot \\frac k 2\\right)"
},
{
"math_id": 62,
"text": "\\Pr\\left(r_S > 0.5\\right) > 1 - \\exp\\left(-r\\cdot \\left(1 - \\frac{1}{2 r}\\right)^2 \\cdot \\frac k 2 \\right)"
},
{
"math_id": 63,
"text": "\\operatorname E \\left[e^{t\\cdot X_i} \\right] = (1 - p_i) e^0 + p_i e^t = 1 + p_i (e^t -1) \\leq e^{p_i (e^t - 1)}"
},
{
"math_id": 64,
"text": "a = (1+\\delta)\\mu"
},
{
"math_id": 65,
"text": "\\delta>0"
},
{
"math_id": 66,
"text": "\\mu = \\operatorname E[X] = \\textstyle\\sum_{i=1}^n p_i"
},
{
"math_id": 67,
"text": "\\begin{align}\n\\Pr (X > (1 + \\delta)\\mu) &\\le \\inf_{t \\geq 0} \\exp(-t(1+\\delta)\\mu)\\prod_{i=1}^n\\operatorname{E}[\\exp(tX_i)]\\\\[4pt]\n& \\leq \\inf_{t \\geq 0} \\exp\\Big(-t(1+\\delta)\\mu + \\sum_{i=1}^n p_i(e^t - 1)\\Big) \\\\[4pt]\n& = \\inf_{t \\geq 0} \\exp\\Big(-t(1+\\delta)\\mu + (e^t - 1)\\mu\\Big).\n\\end{align}"
},
{
"math_id": 68,
"text": "\\exp\\Big(-t(1+\\delta)\\mu + (e^t - 1)\\mu\\Big) = \\frac{\\exp((1+\\delta - 1)\\mu)}{(1+\\delta)^{(1+\\delta)\\mu}} = \\left[\\frac{e^\\delta}{(1+\\delta)^{(1+\\delta)}}\\right]^\\mu."
},
{
"math_id": 69,
"text": "\\Pr\\left ( \\frac{1}{n} \\sum X_i \\ge q\\right )\\le \\inf_{t>0} \\frac{E \\left[\\prod e^{t X_i}\\right]}{e^{tnq}} = \\inf_{t>0} \\left ( \\frac{ E\\left[e^{tX_i} \\right] }{e^{tq}}\\right )^n."
},
{
"math_id": 70,
"text": "\\left (\\frac{\\operatorname E\\left[e^{tX_i} \\right] }{e^{tq}}\\right )^n = \\left (\\frac{p e^t + (1-p)}{e^{tq} }\\right )^n = \\left ( pe^{(1-q)t} + (1-p)e^{-qt} \\right )^n."
},
{
"math_id": 71,
"text": "\\frac{d}{dt} \\left (pe^{(1-q)t} + (1-p)e^{-qt} \\right) = (1-q)pe^{(1-q)t}-q(1-p)e^{-qt}"
},
{
"math_id": 72,
"text": "\\begin{align}\n(1-q)pe^{(1-q)t} &= q(1-p)e^{-qt} \\\\\n(1-q)pe^{t} &= q(1-p)\n\\end{align}"
},
{
"math_id": 73,
"text": "e^t = \\frac{(1-p)q}{(1-q)p}."
},
{
"math_id": 74,
"text": "t = \\log\\left(\\frac{(1-p)q}{(1-q)p}\\right)."
},
{
"math_id": 75,
"text": "\\begin{align}\n\\log \\left (pe^{(1-q)t} + (1-p)e^{-qt} \\right ) &= \\log \\left ( e^{-qt}(1-p+pe^t) \\right ) \\\\\n&= \\log\\left (e^{-q \\log\\left(\\frac{(1-p)q}{(1-q)p}\\right)}\\right) + \\log\\left(1-p+pe^{\\log\\left(\\frac{1-p}{1-q}\\right)}e^{\\log\\frac{q}{p}}\\right ) \\\\\n&= -q\\log\\frac{1-p}{1-q} -q \\log\\frac{q}{p} + \\log\\left(1-p+ p\\left(\\frac{1-p}{1-q}\\right)\\frac{q}{p}\\right) \\\\\n&= -q\\log\\frac{1-p}{1-q} -q \\log\\frac{q}{p} + \\log\\left(\\frac{(1-p)(1-q)}{1-q}+\\frac{(1-p)q}{1-q}\\right) \\\\\n&= -q \\log\\frac{q}{p} + \\left ( -q\\log\\frac{1-p}{1-q} + \\log\\frac{1-p}{1-q} \\right ) \\\\\n&= -q\\log\\frac{q}{p} + (1-q)\\log\\frac{1-p}{1-q} \\\\\n&= -D(q \\parallel p).\n\\end{align}"
},
{
"math_id": 76,
"text": "\\Pr \\left (\\tfrac{1}{n}\\sum X_i \\ge p + \\varepsilon\\right ) \\le e^{-D(p+\\varepsilon\\parallel p) n}."
}
]
| https://en.wikipedia.org/wiki?curid=741759 |
7417940 | Acoustic transmission line | Acoustic waveguide used to transmit sound
An acoustic transmission line is the use of a long duct, which acts as an acoustic waveguide and is used to produce or transmit sound in an undistorted manner. Technically it is the acoustic analog of the electrical transmission line, typically conceived as a rigid-walled duct or tube, that is long and thin relative to the wavelength of sound present in it.
Examples of transmission line (TL) related technologies include the (mostly obsolete) speaking tube, which transmitted sound to a different location with minimal loss and distortion, wind instruments such as the pipe organ, woodwind and brass which can be modeled in part as transmission lines (although their design also involves generating sound, controlling its timbre, and coupling it efficiently to the open air), and transmission line based loudspeakers which use the same principle to produce accurate extended low bass frequencies and avoid distortion. The comparison between an acoustic duct and an electrical transmission line is useful in "lumped-element" modeling of acoustical systems, in which acoustic elements like volumes, tubes, pistons, and screens can be modeled as single elements in a circuit. With the substitution of pressure for voltage, and volume particle velocity for current, the equations are essentially the same. Electrical transmission lines can be used to describe acoustic tubes and ducts, provided the frequency of the waves in the tube is below the critical frequency, such that they are purely planar.
Design principles.
Phase inversion is achieved by selecting a length of line that is equal to the quarter wavelength of the target lowest frequency. The effect is illustrated in Fig. 1, which shows a hard boundary at one end (the speaker) and the open-ended line vent at the other. The phase relationship between the bass driver and vent is in phase in the pass band until the frequency approaches the quarter wavelength, when the relationship reaches 90 degrees as shown. However, by this time the vent is producing most of the output (Fig. 2). Because the line is operating over several octaves with the drive unit, cone excursion is reduced, providing higher SPL's and lower distortion levels, compared with reflex and infinite baffle designs.
The calculation of the length of the line required for a certain bass extension appears to be straightforward, based on a simple formula:
formula_0
where formula_1 is the sound frequency in hertz (Hz), formula_2 is the speed of sound in air at 20°C in meters/second, and formula_3 is the length of the transmission line in meters.
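For illustration, the formula can be evaluated directly; the optional velocity-reduction factor in the sketch below anticipates the effect of line stuffing discussed later and is included only as an assumption for demonstration:

```python
# Quarter-wave line length for a target cutoff frequency, with an optional factor
# for the reduced effective sound velocity in a stuffed line.
def line_length(f_hz: float, c: float = 344.0, velocity_reduction: float = 0.0) -> float:
    return c * (1.0 - velocity_reduction) / (4.0 * f_hz)

print(f"{line_length(20.0):.2f} m for a 20 Hz target, undamped")                     # ~4.30 m
print(f"{line_length(20.0, velocity_reduction=0.35):.2f} m with 35% slower sound")   # ~2.80 m
```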
The complex loading of the bass drive unit demands specific Thiele-Small driver parameters to realise the full benefits of a TL design. However, most drive units in the marketplace are developed for the more common reflex and infinite baffle designs and are usually not suitable for TL loading. High efficiency bass drivers with extended low frequency ability, are usually designed to be extremely light and flexible, having very compliant suspensions. Whilst performing well in a reflex design, these characteristics do not match the demands of a TL design. The drive unit is effectively coupled to a long column of air which has mass. This lowers the resonant frequency of the drive unit, negating the need for a highly compliant device. Furthermore, the column of air provides greater force on the driver itself than a driver opening onto a large volume of air (in simple terms it provides more resistance to the driver's attempt to move it), so to control the movement of air requires an extremely rigid cone, to avoid deformation and consequent distortion.
The introduction of the absorption materials reduces the velocity of sound through the line, as discovered by Bailey in his original work. Bradbury published his extensive tests to determine this effect in a paper in the Journal of the Audio Engineering Society (JAES) in 1976 and his results agreed that heavily damped lines could reduce the velocity of sound by as much as 50%, although 35% is typical in medium damped lines. Bradbury's tests were carried out using fibrous materials, typically longhaired wool and glass fibre. These kinds of materials, however, produce highly variable effects that are not consistently repeatable for production purposes. They are also liable to produce inconsistencies due to movement, climatic factors and effects over time. High-specification acoustic foams, developed by loudspeaker manufacturers such as PMC, with similar characteristics to longhaired wool, provide repeatable results for consistent production. The density of the polymer, the diameter of the pores and the sculptured profiling are all specified to provide the correct absorption for each speaker model. Quantity and position of the foam is critical to engineer a low-pass acoustic filter that provides adequate attenuation of the upper bass frequencies, whilst allowing an unimpeded path for the low bass frequencies.
Discovery and development.
The concept was termed "acoustical labyrinth" by Stromberg-Carlson Co. when used in their console radios beginning in 1936 (see Concert Grand 837G Ch= 837 Radio Stromberg-Carlson Australasia Pty | Radiomuseum). Benjamin Olney who worked for Stromberg-Carlson was the inventor of the Acoustical Labyrinth and wrote an article for the Journal of the Acoustic Society of America in October of 1936 entitled "A Method of Eliminating Cavity Resonance, Extending Low Frequency Response and Increasing Acoustic Damping in Cabinet Type Loudspeakers" see Stromberg-Carlson started manufacturing an Acoustic Labyrinth speaker enclosure meant for a 12" or 15" coaxial driver as early as 1952 as evident in an Audio Engineering article in July of 1952 (page 28) see and numerous ads in Hi-Fidelity Magazine in 1952 and thereafter. The Transmission line type of loudspeaker enclosure was proposed in October 1965 by Dr A.R. Bailey and A.H. Radford in "Wireless World" (p483-486) magazine. The article postulated that energy from the rear of a driver unit could be essentially absorbed, without damping the cone's motion or superimposing internal reflections and resonance, so Bailey and Radford reasoned that the rear wave could be channeled down a long pipe. If the acoustic energy was absorbed, it would not be available to excite resonances. A pipe of sufficient length could be tapered, and stuffed so that the energy loss was almost complete, minimizing output from the open end. No broad consensus on the ideal taper (expanding, uniform cross-section, or contracting) has been established.
Uses.
Loudspeaker design.
Acoustic transmission lines gained attention in their use within loudspeakers in the 1960s and 1970s. In 1965, A R Bailey's article in Wireless World, “A Non-resonant Loudspeaker Enclosure Design”, detailed a working Transmission Line, which was commercialized by John Wright and partners under the brand name IMF and later TDL, and were sold by audiophile Irving M. "Bud" Fried in the United States.
A transmission line is used in loudspeaker design, to reduce time, phase and resonance related distortions, and in many designs to gain exceptional bass extension to the lower end of human hearing, and in some cases the near-infrasonic (below 20 Hz). TDL's 1980s reference speaker range (now discontinued) contained models with frequency ranges of 20 Hz upwards, down to 7 Hz upwards, without needing a separate subwoofer. Irving M. Fried, an advocate of TL design, stated that:
"I believe that speakers should preserve the integrity of the signal waveform and the Audio Perfectionist Journal has presented a great deal of information about the importance of time domain performance in loudspeakers. I’m not the only one who appreciates time- and phase-accurate speakers but I have been virtually the only advocate to speak out in print in recent years. There’s a reason for that."
In practice, the duct is folded inside a conventional shaped cabinet, so that the open end of the duct appears as a vent on the speaker cabinet. There are many ways in which the duct can be folded and the line is often tapered in cross section to avoid parallel internal surfaces that encourage standing waves. Depending upon the drive unit and quantity – and various physical properties – of absorbent material, the amount of taper will be adjusted during the design process to tune the duct to remove irregularities in its response. The internal partitioning provides substantial bracing for the entire structure, reducing cabinet flexing and colouration. The inside faces of the duct or line, are treated with an absorbent material to provide the correct termination with frequency to load the drive unit as a TL. A theoretically perfect TL would absorb all frequencies entering the line from the rear of the drive unit but remains theoretical, as it would have to be infinitely long. The physical constraints of the real world, demand that the length of the line must often be less than 4 meters before the cabinet becomes too large for any practical applications, so not all the rear energy can be absorbed by the line. In a realized TL, only the upper bass is TL loaded in the true sense of the term (i.e. fully absorbed); the low bass is allowed to freely radiate from the vent in the cabinet. The line therefore effectively works as a low-pass filter, another crossover point in fact, achieved acoustically by the line and its absorbent filling. Below this “crossover point” the low bass is loaded by the column of air formed by the length of the line. The length is specified to reverse the phase of the rear output of the drive unit as it exits the vent. This energy combines with the output of the bass unit, extending its response and effectively creating a second driver.
Sound ducts as transmission lines.
A duct for sound propagation also behaves like a transmission line (e.g. air conditioning duct, car muffler, ...). Its length may be similar to the wavelength of the sound passing through it, but the dimensions of its cross-section are normally smaller than one quarter the wavelength.
Sound is introduced at one end of the tube by forcing the pressure across the whole cross-section to vary with time. An almost planar wavefront travels down the line at the speed of sound. When the wave reaches the end of the transmission line, behaviour depends on what is present at the end of the line. There are three possible scenarios:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ell = \\frac{344}{\\, 4 \\times f \\,~}"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "344"
},
{
"math_id": 3,
"text": "\\ell"
}
]
| https://en.wikipedia.org/wiki?curid=7417940 |
741823 | Optical filter | Filters which selectively transmit specific colors
An optical filter is a device that selectively transmits light of different wavelengths, usually implemented as a glass plane or plastic device in the optical path, which is either dyed in the bulk or has interference coatings. The optical properties of filters are completely described by their frequency response, which specifies how the magnitude and phase of each frequency component of an incoming signal is modified by the filter.
Filters mostly belong to one of two categories. The simplest, physically, is the absorptive filter; then there are interference or dichroic filters. Many optical filters are used for optical imaging and are manufactured to be transparent; some used for light sources can be translucent.
Optical filters selectively transmit light in a particular range of wavelengths, that is, colours, while absorbing the remainder. They can usually pass long wavelengths only (longpass), short wavelengths only (shortpass), or a band of wavelengths, blocking both longer and shorter wavelengths (bandpass). The passband may be narrower or wider; the transition or cutoff between maximal and minimal transmission can be sharp or gradual. There are filters with more complex transmission characteristics, for example with two peaks rather than a single band; these are usually older designs traditionally used for photography; filters with more regular characteristics are used for scientific and technical work.
Optical filters are commonly used in photography (where some special effect filters are occasionally used as well as absorptive filters), in many optical instruments, and to colour stage lighting. In astronomy optical filters are used to restrict light passed to the spectral band of interest, e.g., to study infrared radiation without visible light which would affect film or sensors and overwhelm the desired infrared. Optical filters are also essential in fluorescence applications such as fluorescence microscopy and fluorescence spectroscopy.
Photographic filters are a particular case of optical filters, and much of the material here applies. Photographic filters do not need the accurately controlled optical properties and precisely defined transmission curves of filters designed for scientific work, and sell in larger quantities at correspondingly lower prices than many laboratory filters. Some photographic effect filters, such as star effect filters, are not relevant to scientific work.
Measurement.
In general, a given optical filter transmits a certain percentage of the incoming light as the wavelength changes. This is measured by a spectrophotometer. As a linear material, the absorption for each wavelength is independent of the presence of other wavelengths. A very few materials are non-linear, and the transmittance depends on the intensity and the combination of wavelengths of the incident light. Transparent fluorescent materials can work as an optical filter, with an absorption spectrum, and also as a light source, with an emission spectrum.
Also in general, light which is not transmitted is absorbed; for intense light, that can cause significant heating of the filter. However, the optical term absorbance refers to the attenuation of the incident light, regardless of the mechanism by which it is attenuated. Some filters, like mirrors, interference filters, or metal meshes, reflect or scatter much of the non-transmitted light.
The (dimensionless) Optical Density of a filter at a particular wavelength of light is defined as formula_0 where T is the (dimensionless) transmittance of the filter at that wavelength.
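A minimal numeric sketch of the definition above (formula_0); the transmittance values are arbitrary examples.

```python
import math

def optical_density(transmittance):
    """Optical density OD = -log10(T) for a dimensionless transmittance 0 < T <= 1."""
    return -math.log10(transmittance)

def transmittance_from_od(od):
    """Inverse relation: T = 10 ** (-OD)."""
    return 10.0 ** (-od)

for T in (1.0, 0.5, 0.1, 0.01):
    print(f"T = {T:5.2f} -> OD = {optical_density(T):.2f}")
print(transmittance_from_od(3.0))  # a filter of OD 3 passes 0.1% of the light
```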
Absorptive.
Optical filtering was first done with liquid-filled, glass-walled cells; they are still used for special purposes. The widest range of color-selection is now available as colored-film filters, originally made from animal gelatin but now usually a thermoplastic such as acetate, acrylic, polycarbonate, or polyester depending upon the application. They were standardized for photographic use by Wratten in the early 20th century, and also by color gel manufacturers for theater use.
There are now many absorptive filters made from glass to which various inorganic or organic compounds have been added. Colored glass optical filters, although harder to make to precise transmittance specifications, are more durable and stable once manufactured.
Dichroic filter.
Alternately, dichroic filters (also called "reflective" or "thin film" or "interference" filters) can be made by coating a glass substrate with a series of optical coatings. Dichroic filters usually reflect the unwanted portion of the light and transmit the remainder.
Dichroic filters use the principle of interference. Their layers form a sequential series of reflective cavities that resonate with the desired wavelengths. Other wavelengths destructively cancel or reflect as the peaks and troughs of the waves overlap.
Dichroic filters are particularly suited for precise scientific work, since their exact colour range can be controlled by the thickness and sequence of the coatings. They are usually much more expensive and delicate than absorption filters.
They can be used in devices such as the dichroic prism of a camera to separate a beam of light into different coloured components.
The basic scientific instrument of this type is a Fabry–Pérot interferometer. It uses two mirrors to establish a resonating cavity. It passes wavelengths that are a multiple of the cavity's resonance frequency.
Etalons are another variation: transparent cubes or fibers whose polished ends form mirrors tuned to resonate with specific wavelengths. These are often used to separate channels in telecommunications networks that use wavelength division multiplexing on long-haul optic fibers.
Monochromatic.
Monochromatic filters only allow a narrow range of wavelengths (essentially a single colour) to pass.
Infrared.
The term "infrared filter" can be ambiguous, as it may be applied to filters to pass infrared (blocking other wavelengths) or to block infrared (only).
Infrared-passing filters are used to block visible light but pass infrared; they are used, for example, in infrared photography.
Infrared cut-off filters are designed to block or reflect infrared wavelengths but pass visible light. Mid-infrared filters are often used as heat-absorbing filters in devices with bright incandescent light bulbs (such as slide and overhead projectors) to prevent unwanted heating due to infrared radiation. There are also filters which are used in solid state video cameras to block IR due to the high sensitivity of many camera sensors to unwanted near-infrared light.
Ultraviolet.
Ultraviolet (UV) filters block ultraviolet radiation, but let visible light through. Because photographic film and digital sensors are sensitive to ultraviolet (which is abundant in skylight) but the human eye is not, such light would, if not filtered out, make photographs look different from the scene visible to people, for example making images of distant mountains appear unnaturally hazy. An ultraviolet-blocking filter renders images closer to the visual appearance of the scene.
As with infrared filters there is a potential ambiguity between UV-blocking and UV-passing filters; the latter are much less common, and more usually known explicitly as UV pass filters and UV bandpass filters.
Neutral density.
Neutral density (ND) filters have a constant attenuation across the range of visible wavelengths, and are used to reduce the intensity of light by reflecting or absorbing a portion of it. They are specified by the optical density (OD) of the filter, which is the negative of the common logarithm of the transmission coefficient. They are useful for making photographic exposures longer. A practical example is making a waterfall look blurry when it is photographed in bright light. Alternatively, the photographer might want to use a larger aperture (so as to limit the depth of field); adding an ND filter permits this. ND filters can be reflective (in which case they look like partially reflective mirrors) or absorptive (appearing grey or black).
Longpass.
A longpass (LP) Filter is an optical interference or coloured glass filter that attenuates shorter wavelengths and transmits (passes) longer wavelengths over the active range of the target spectrum (ultraviolet, visible, or infrared). Longpass filters, which can have a very sharp slope (referred to as edge filters), are described by the cut-on wavelength at 50 percent of peak transmission. In fluorescence microscopy, longpass filters are frequently utilized in dichroic mirrors and barrier (emission) filters. Use of the older term 'low pass' to describe longpass filters has become uncommon; filters are usually described in terms of wavelength rather than frequency, and a "low pass filter", without qualification, would be understood to be an electronic filter.
Band-pass.
Band-pass filters only transmit a certain wavelength band, and block others. The width of such a filter is expressed in the wavelength range it lets through and can be anything from much less than an Ångström to a few hundred nanometers. Such a filter can be made by combining an LP- and an SP filter.
Examples of band-pass filters are the Lyot filter and the Fabry–Pérot interferometer. Both of these filters can also be made tunable, such that the central wavelength can be chosen by the user. Band-pass filters are often used in astronomy when one wants to observe a certain process with specific associated spectral lines. The Dutch Open Telescope and Swedish Solar Telescope are examples where Lyot and Fabry–Pérot filters are being used.
Shortpass.
A shortpass (SP) Filter is an optical interference or coloured glass filter that attenuates longer wavelengths and transmits (passes) shorter wavelengths over the active range of the target spectrum (usually the ultraviolet and visible region). In fluorescence microscopy, shortpass filters are frequently employed in dichromatic mirrors and excitation filters.
Guided-mode resonance filters.
Guided-mode resonance filters are a relatively new class of filters, introduced around 1990. These filters are normally filters in reflection, that is, they are notch filters in transmission. They consist in their most basic form of a substrate waveguide and a subwavelength grating or 2D hole array. Such filters are normally transparent, but when a leaky guided mode of the waveguide is excited they become highly reflective (a record of over 99% experimentally) for a particular polarization, angular orientation, and wavelength range. The parameters of the filters are designed by proper choice of the grating parameters. The advantages of such filters are the few layers needed for ultra-narrow bandwidth filters (in contrast to dichroic filters), and the potential decoupling between spectral bandwidth and angular tolerance when more than one mode is excited.
Metal mesh filters.
Filters for sub-millimeter and near infrared wavelengths in astronomy are metal mesh grids that are stacked together to form LP, BP, and SP filters for these wavelengths.
Polarizer.
Another kind of optical filter is a polarizer or polarization filter, which blocks or transmits light according to its polarization. They are often made of materials such as Polaroid and are used for sunglasses and photography. Reflections, especially from water and wet road surfaces, are partially polarized, and polarized sunglasses will block some of this reflected light, allowing an angler to better view below the surface of the water and giving better vision for a driver. Light from a clear blue sky is also polarized, and adjustable filters are used in colour photography to darken the appearance of the sky without introducing colours to other objects, and in both colour and black-and-white photography to control specular reflections from objects and water. Much older than guided-mode resonance filters (described just above), the earliest polarizing filters (and some still made) used a fine mesh integrated in the lens.
Polarized filters are also used to view certain types of stereograms, so that each eye will see a distinct image from a single source.
Arc welding.
An arc source puts out visible, infrared and ultraviolet light that may be harmful to human eyes. Therefore, optical filters on welding helmets must meet ANSI Z87.1 (a safety glasses specification) in order to protect human vision.
Some examples of filters that would provide this kind of filtering would be earth elements embedded or coated on glass, but practically speaking it is not possible to do perfect filtering. A perfect filter would remove particular wavelengths and leave plenty of light so a worker can see what he/she is working on.
Wedge filter.
A wedge filter is an optical filter so constructed that its thickness varies continuously or in steps in the shape of a wedge. The filter is used to modify the intensity distribution in a radiation beam. It is also known as linearly variable filter (LVF). It is used in various optical sensors where wavelength separation is required e.g. in hyperspectral sensors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " -\\log_{10} T"
}
]
| https://en.wikipedia.org/wiki?curid=741823 |
7418540 | Constraint (mathematics) | Condition of an optimization problem which the solution must satisfy
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints—primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set.
Example.
The following is a simple optimization problem:
formula_0
subject to
formula_1
and
formula_2
where formula_3 denotes the vector ("x"1, "x"2).
In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions.
Without the constraints, the solution would be (0,0), where formula_4 has the lowest value. But this solution does not satisfy the constraints. The solution of the constrained optimization problem stated above is formula_5, which is the point with the smallest value of formula_4 that satisfies the two constraints.
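The small example above can be checked numerically; the sketch below is one possible way to do so, assuming SciPy is available (the solver choice and starting point are arbitrary), and it recovers the constrained minimum at (1, 1).

```python
from scipy.optimize import minimize

# Objective function f(x) = x1^2 + x2^4
objective = lambda x: x[0] ** 2 + x[1] ** 4

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 1.0},  # inequality constraint: x1 >= 1
    {"type": "eq",   "fun": lambda x: x[1] - 1.0},  # equality constraint:   x2 == 1
]

result = minimize(objective, x0=[2.0, 2.0], method="SLSQP", constraints=constraints)
print(result.x, result.fun)  # approximately [1. 1.] and 2.0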
Hard and soft constraints.
If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to as "hard constraints". However, in some problems, called flexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known as "soft constraints". Soft constraints arise in, for example, preference-based planning. In a MAX-CSP problem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints.
Global constraints.
Global constraints are constraints representing a specific relation on a number of variables, taken altogether. Some of them, such as the alldifferent constraint, can be rewritten as a conjunction of atomic constraints in a simpler language: the alldifferent constraint holds on "n" variables formula_6, and is satisfied if the variables take values which are pairwise different. It is semantically equivalent to the conjunction of inequalities formula_7. Other global constraints extend the expressivity of the constraint framework. In this case, they usually capture a typical structure of combinatorial problems. For instance, the regular constraint expresses that a sequence of variables is accepted by a deterministic finite automaton.
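As a small illustration of the decomposition just described, the sketch below checks an assignment against the alldifferent constraint both as a single global check and as the equivalent conjunction of pairwise inequalities; the assignment values are arbitrary.

```python
from itertools import combinations

def alldifferent(values):
    """Global form: satisfied when the variables take pairwise different values."""
    return len(set(values)) == len(values)

def alldifferent_decomposed(values):
    """Equivalent conjunction of atomic constraints x_i != x_j for all i < j."""
    return all(a != b for a, b in combinations(values, 2))

for assignment in [(3, 1, 4, 1), (3, 1, 4, 2)]:
    print(assignment, alldifferent(assignment), alldifferent_decomposed(assignment))
# (3, 1, 4, 1) violates the constraint (two variables equal 1); (3, 1, 4, 2) satisfies it.
```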
Global constraints are used to simplify the modeling of constraint satisfaction problems, to extend the expressivity of constraint languages, and also to improve the constraint resolution: indeed, by considering the variables altogether, infeasible situations can be seen earlier in the solving process. Many of the global constraints are referenced into an online catalog.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\min f(\\mathbf x) = x_1^2+x_2^4"
},
{
"math_id": 1,
"text": "x_1 \\ge 1"
},
{
"math_id": 2,
"text": "x_2 = 1,"
},
{
"math_id": 3,
"text": "\\mathbf x"
},
{
"math_id": 4,
"text": "f(\\mathbf x)"
},
{
"math_id": 5,
"text": "\\mathbf x = (1,1)"
},
{
"math_id": 6,
"text": "x_1... x_n"
},
{
"math_id": 7,
"text": "x_1 \\neq x_2, x_1 \\neq x_3..., x_2 \\neq x_3, x_2 \\neq x_4 ... x_{n-1} \\neq x_n"
}
]
| https://en.wikipedia.org/wiki?curid=7418540 |
741856 | Rotary variable differential transformer | Type of electrical transformer
A rotary variable differential transformer (RVDT) is a type of electrical transformer used for measuring angular displacement. The transformer has a rotor which can be turned by an external force. The transformer acts as an electromechanical transducer that outputs an alternating current (AC) voltage proportional to the angular displacement of its rotor shaft.
In operation, an alternating current (AC) voltage is applied to the transformer primary to energize the RVDT. When energized with a constant AC voltage, the transfer function (output voltage vs. shaft angular displacement) of any particular RVDT is linear (to within a specified tolerance) over a specified range of angular displacement.
RVDTs employ contactless, electromagnetic coupling, which provides long life and reliable, repeatable position sensing with high resolution, even under extreme operating conditions. Most RVDTs consist of a wound, laminated stator and a salient two-pole rotor. The stator, containing four slots, contains both the primary winding and the two secondary windings, which may be connected together in some cases. RVDTs offer advantages such as sturdiness, relatively low cost, small size, and low sensitivity to temperature, primary voltage and frequency variations.
Operation.
The two induced voltages of the secondary windings, formula_0 and formula_1, vary linearly with the mechanical angle of the rotor, θ:
formula_2
where formula_3 is the gain or sensitivity. The second voltage can be determined by:
formula_4
The difference formula_5 gives a proportional voltage:
formula_6
and the sum of the voltages is a constant:
formula_7
Consequently, the angular information output by a RVDT is independent of the input voltage, frequency and temperature.
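A small numeric sketch of the ratiometric relation formula_2 above; the gain and secondary voltages are made-up values chosen only to illustrate that the computed angle is insensitive to a common scaling of the two voltages.

```python
def rvdt_angle(v1, v2, gain):
    """Rotor angle from the two secondary voltages: theta = G * (V1 - V2) / (V1 + V2)."""
    return gain * (v1 - v2) / (v1 + v2)

G = 40.0           # hypothetical sensitivity (degrees per unit voltage ratio)
v1, v2 = 2.6, 2.2  # hypothetical secondary voltages, same units
print(rvdt_angle(v1, v2, G))              # about 3.33 degrees
print(rvdt_angle(1.1 * v1, 1.1 * v2, G))  # same angle: excitation drift cancels out
```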
Putting the above mathematical equations in some theoretical form, the working of RVDT can be explained as below.
Basic RVDT construction and operation is provided by rotating an iron-core bearing supported within a housed stator assembly. The housing is passivated stainless steel. The stator consists of a primary excitation coil and a pair of secondary output coils.
A fixed alternating current excitation is applied to the primary stator coil that is electromagnetically coupled to the secondary coils. This coupling is proportional to the angle of the input shaft. The output pair is structured so that one coil is in-phase with the excitation coil, and the second is 180° out-of-phase with the excitation coil.
When the rotor is in a position that directs the available flux equally in both the in-phase and out-of-phase coils, the output voltages cancel and result in a zero value signal. This is referred to as the electrical zero position or E.Z. When the rotor shaft is displaced from E.Z., the resulting output signals have a magnitude and phase relationship proportional to the direction of rotation.
Because RVDTs perform essentially like a transformer, excitation voltage changes will cause directly proportional changes to the output (transformation ratio). However, the ratio of output voltage to excitation voltage will remain constant. Since most RVDT signal conditioning systems measure signal as a function of the transformation ratio (TR), excitation voltage drift beyond 7.5% typically has no effect on sensor accuracy and strict voltage regulation is not typically necessary. Excitation frequency should be controlled within ±1% to maintain accuracy.
Although the RVDT can theoretically operate between ±45°, accuracy decreases quickly after ±35°. Thus, its operational limits lie mostly within ±30°, but some up to ±40°. Certain types can operate up to ±60°.
Varieties.
An RVDT can also be designed with two laminations, one containing the primary and the other, the secondaries. These types can operate on larger rotations.
A similar transformer is called the Rotary Variable Transformer and contains only one secondary winding giving only one voltage:
formula_8
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_1"
},
{
"math_id": 1,
"text": "V_2"
},
{
"math_id": 2,
"text": "\\theta\\ = G \\cdot\\ \\left( \\frac{V_1 - V_2}{V_1 + V_2} \\right)"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "V_2 = V_1 \\pm \\ G \\cdot\\ \\theta\\ "
},
{
"math_id": 5,
"text": "V_1 - V_2"
},
{
"math_id": 6,
"text": " \\Delta\\ V = 2 \\cdot\\ G \\cdot\\ \\theta\\ "
},
{
"math_id": 7,
"text": "C= \\sum\\ V = 2 \\cdot\\ V_0"
},
{
"math_id": 8,
"text": "V = G \\cdot\\ \\theta\\ "
}
]
| https://en.wikipedia.org/wiki?curid=741856 |
741875 | Combinatorial proof | In mathematics, the term combinatorial proof is often used to mean either of two types of mathematical proof: a proof by double counting, in which a combinatorial identity is established by counting the number of elements of some carefully chosen set in two different ways, or a bijective proof, in which two sets are shown to have the same number of elements by exhibiting a bijection between them.
The term "combinatorial proof" may also be used more broadly to refer to any kind of elementary proof in combinatorics. However, as writes in his review of (a book about combinatorial proofs), these two simple techniques are enough to prove many theorems in combinatorics and number theory.
Example.
An archetypal double counting proof is for the well known formula for the number formula_0 of "k"-combinations (i.e., subsets of size "k") of an "n"-element set:
formula_1
Here a direct bijective proof is not possible: because the right-hand side of the identity is a fraction, there is no set "obviously" counted by it (it even takes some thought to see that the denominator always evenly divides the numerator). However its numerator counts the Cartesian product of "k" finite sets of sizes "n", "n" − 1, ..., "n" − "k" + 1, while its denominator counts the permutations of a "k"-element set (the set most obviously counted by the denominator would be another Cartesian product "k" finite sets; if desired one could map permutations to that set by an explicit bijection). Now take "S" to be the set of sequences of "k" elements selected from our "n"-element set without repetition. On one hand, there is an easy bijection of "S" with the Cartesian product corresponding to the numerator formula_2, and on the other hand there is a bijection from the set "C" of pairs of a "k"-combination and a permutation "σ" of "k" to "S", by taking the elements of "C" in increasing order, and then permuting this sequence by "σ" to obtain an element of "S". The two ways of counting give the equation
formula_3
and after division by "k"! this leads to the stated formula for formula_0. In general, if the counting formula involves a division, a similar double counting argument (if it exists) gives the most straightforward combinatorial proof of the identity, but double counting arguments are not limited to situations where the formula is of this form.
Here is a simpler, more informal combinatorial proof of the same identity:
formula_4
Suppose that n people would like to enter a museum, but the museum only has room for "k" people. First choose which "k" people from among the "n" people will be allowed in. There are formula_0 ways to do this by definition. Now order the "k" people into a single-file line so that they may pay one at a time. There are "k"! ways to permute this set of size "k". Next, order the "n" − "k" people who must remain outside into a single-file line so that later they can enter one at a time, as the others leave. There are ("n" − "k")! ways to do this. But now we have ordered the entire group of n people, something which can be done in "n"! ways. So both sides count the number of ways to order the "n" people. Division yields the well-known formula for formula_0.
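The museum argument can be verified by brute force for small values; the sketch below counts the orderings directly and confirms formula_4 for one arbitrary choice of n and k.

```python
from itertools import permutations
from math import comb, factorial

n, k = 6, 2
# Left-hand side: choose who enters (C(n, k)), order them (k!), order the rest ((n - k)!).
lhs = comb(n, k) * factorial(k) * factorial(n - k)
# Right-hand side: the number of orderings of all n people.
rhs = sum(1 for _ in permutations(range(n)))
print(lhs, rhs, factorial(n))  # all three agree: 720 720 720
```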
The benefit of a combinatorial proof.
gives an example of a combinatorial enumeration problem (counting the number of sequences of "k" subsets "S"1, "S"2, ... "S""k", that can be formed from a set of "n" items such that the intersection of all the subsets is empty) with two different proofs for its solution. The first proof, which is not combinatorial, uses mathematical induction and generating functions to find that the number of sequences of this type is (2"k" −1)"n". The second proof is based on the observation that there are 2"k" −1 proper subsets of the set {1, 2, ..., "k"}, and (2"k" −1)"n" functions from the set {1, 2, ..., "n"} to the family of proper subsets of {1, 2, ..., "k"}. The sequences to be counted can be placed in one-to-one correspondence with these functions, where the function formed from a given sequence of subsets maps each element "i" to the set {"j" | "i" ∈ "S""j"}.
Stanley writes, “Not only is the above combinatorial proof much shorter than our previous proof, but also it makes the reason for the simple answer completely transparent. It is often the case, as occurred here, that the first proof to come to mind turns out to be laborious and inelegant, but that the final answer suggests a simple combinatorial proof.” Due both to their frequent greater elegance than non-combinatorial proofs and the greater insight they provide into the structures they describe, Stanley formulates a general principle that combinatorial proofs are to be preferred over other proofs, and lists as exercises many problems of finding combinatorial proofs for mathematical facts known to be true through other means.
The difference between bijective and double counting proofs.
Stanley does not clearly distinguish between bijective and double counting proofs, and gives examples of both kinds, but the difference between the two types of combinatorial proof can be seen in an example provided by Aigner and Ziegler, of proofs for Cayley's formula stating that there are "n""n" − 2 different trees that can be formed from a given set of "n" nodes. Aigner and Ziegler list four proofs of this theorem, the first of which is bijective and the last of which is a double counting argument. They also mention but do not describe the details of a fifth bijective proof.
The most natural way to find a bijective proof of this formula would be to find a bijection between "n"-node trees and some collection of objects that has "n""n" − 2 members, such as the sequences of "n" − 2 values each in the range from 1 to "n". Such a bijection can be obtained using the Prüfer sequence of each tree. Any tree can be uniquely encoded into a Prüfer sequence, and any Prüfer sequence can be uniquely decoded into a tree; these two results together provide a bijective proof of Cayley's formula.
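A sketch of the encoding just described: decoding every Prüfer sequence of length n − 2 yields each labeled tree exactly once, so the number of distinct trees equals n^(n − 2). The decoding routine below follows the standard algorithm and is written only for illustration, using vertex labels 0 to n − 1.

```python
from itertools import product

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence (length n - 2, labels 0..n-1) into a labeled tree's edge set."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(i for i in range(n) if degree[i] == 1)  # smallest available leaf
        edges.append((min(leaf, v), max(leaf, v)))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [i for i in range(n) if degree[i] == 1]  # the last two remaining leaves
    edges.append((u, w))
    return frozenset(edges)

n = 5
trees = {prufer_to_tree(seq, n) for seq in product(range(n), repeat=n - 2)}
print(len(trees), n ** (n - 2))  # 125 125: distinct sequences give distinct trees
```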
An alternative bijective proof, given by Aigner and Ziegler and credited by them to André Joyal, involves a bijection between, on the one hand, "n"-node trees with two designated nodes (that may be the same as each other), and on the other hand, "n"-node directed pseudoforests. If there are "Tn" "n"-node trees, then there are "n"2"Tn" trees with two designated nodes. And a pseudoforest may be determined by specifying, for each of its nodes, the endpoint of the edge extending outwards from that node; there are "n" possible choices for the endpoint of a single edge (allowing self-loops) and therefore "nn" possible pseudoforests. By finding a bijection between trees with two labeled nodes and pseudoforests, Joyal's proof shows that "Tn" = "n""n" − 2.
Finally, the fourth proof of Cayley's formula presented by Aigner and Ziegler is a double counting proof due to Jim Pitman. In this proof, Pitman considers the sequences of directed edges that may be added to an "n"-node empty graph to form from it a single rooted tree, and counts the number of such sequences in two different ways. By showing how to derive a sequence of this type by choosing a tree, a root for the tree, and an ordering for the edges in the tree, he shows that there are "Tnn"! possible sequences of this type. And by counting the number of ways in which a partial sequence can be extended by a single edge, he shows that there are "n""n" − 2"n"! possible sequences. Equating these two different formulas for the size of the same set of edge sequences and cancelling the common factor of "n"! leads to Cayley's formula. | [
{
"math_id": 0,
"text": "\\tbinom nk"
},
{
"math_id": 1,
"text": "\\binom nk=\\frac{n(n-1)\\cdots(n-k+1)}{k(k-1)\\cdots1}."
},
{
"math_id": 2,
"text": "n(n-1)\\cdots(n-k+1)"
},
{
"math_id": 3,
"text": "n(n-1)\\cdots(n-k+1)=\\binom nk k!,"
},
{
"math_id": 4,
"text": "\\binom nk k!(n-k)!=n!"
}
]
| https://en.wikipedia.org/wiki?curid=741875 |
74188206 | Bismuth oxyiodide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Bismuth oxyiodide is an inorganic compound, an oxyiodide of bismuth, with the chemical formula BiOI.
Preparation.
Bismuth oxyiodide can be obtained by reacting bismuth(III) oxide with hydroiodic acid:
formula_0
It can also be obtained by reacting bismuth nitrate pentahydrate and potassium iodide in ethylene glycol at 160 °C in a reactor. Alternatively, an aqueous solution of bismuth nitrate acidified with nitric acid is adjusted with sodium hydroxide, and potassium iodide is then added dropwise to obtain the reaction product; depending on the adjustment, oxyiodides of other proportions are also produced.
Properties.
Bismuth oxyiodide forms a brick red crystalline powder or copper-colored crystals. It is insoluble in water and ethanol, is only slightly attacked by water even when heated, and melts with decomposition when red hot. It has a tetragonal crystal structure (isotypical with bismuth oxychloride) with space group "P"4/"nmm" (No. 129).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Bi_2 O_3 + 2 \\ HI \\longrightarrow 2 \\ BiOI + H_2 O}"
}
]
| https://en.wikipedia.org/wiki?curid=74188206 |
7419216 | Iron butterfly (options strategy) | Type of advanced stock options trading strategy
In finance an iron butterfly, also known as the ironfly, is the name of an advanced, neutral-outlook, options trading strategy that involves buying and holding four different options at three different strike prices. It is a limited-risk, limited-profit trading strategy that is structured for a larger probability of earning smaller limited profit when the underlying stock is perceived to have a low volatility.
formula_0
It is known as an iron butterfly because it replicates the characteristics of a butterfly with a different combination of options (compare iron condor).
Short iron butterfly.
A short iron butterfly option strategy will attain maximum profit when the price of the underlying asset at expiration is equal to the strike price at which the call and put options are sold. The trader will then receive the net credit of entering the trade when the options all expire worthless.
A short iron butterfly option strategy consists of the following options: a long out-of-the-money put with strike price X − a, a short at-the-money put with strike price X, a short at-the-money call with strike price X, and a long out-of-the-money call with strike price X + a,
where X = the spot price (i.e. current market price of underlying) and a > 0.
Limited risk.
A short iron butterfly will attain maximum losses when the stock price falls at or below the lower strike price of the put purchased or rises to or above the higher strike of the call purchased. The difference in strike price between the calls (or puts), minus the premium received when entering the trade, is the maximum loss accepted.
The formula for calculating maximum loss is given below: maximum loss = (strike price of the long call − strike price of the short call) − net premium received, i.e. the width of one wing less the net credit.
Break even points.
Two break even points are produced with the iron butterfly strategy.
Using the following formulas, the break even points can be calculated: upper break even point = strike price of the short call + net premium received; lower break even point = strike price of the short put − net premium received.
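A sketch of the expiration payoff of a short iron butterfly, using made-up strike and premium values; it illustrates the maximum profit at the middle strike, the maximum loss at or beyond either wing, and the two break even points.

```python
def short_iron_butterfly_pnl(price, centre, wing, net_credit):
    """Per-share profit/loss at expiration for a short iron butterfly:
    short call and short put at `centre`, long call at centre + wing,
    long put at centre - wing, entered for a net credit."""
    short_call = -max(price - centre, 0.0)
    short_put = -max(centre - price, 0.0)
    long_call = max(price - (centre + wing), 0.0)
    long_put = max((centre - wing) - price, 0.0)
    return net_credit + short_call + short_put + long_call + long_put

X, a, credit = 100.0, 10.0, 4.0  # hypothetical middle strike, wing width, net credit
for s in (85, 96, 100, 104, 115):
    print(s, short_iron_butterfly_pnl(s, X, a, credit))
# Maximum profit (the 4.0 credit) occurs at the middle strike; maximum loss
# (wing width minus credit = -6.0) occurs at or beyond the wings; break evens at 100 +/- 4.
```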
Long iron butterfly (reverse iron butterfly).
A long iron butterfly option strategy will attain maximum profit when the price of the underlying asset at expiration is greater than the strike price set by the out-of-the-money put and less than the strike price set by the out-of-the-money call. The trader will then receive the difference between the options that expire in the money, while paying the premium on the options that expire out of the money.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mbox{ironfly} = \\Delta(\\mbox{butterfly strike price}) \\times (1+rt) - \\mbox{butterfly} "
}
]
| https://en.wikipedia.org/wiki?curid=7419216 |
74199361 | Pi is 3 | Misunderstanding in Japanese education
Pi is 3 is a misunderstanding that the Japanese public believed that, due to the revision of the Japanese Curriculum guideline in 2002, the approximate value of pi (π), which had previously been taught as 3.14, is now taught as 3 in arithmetic education. In fact, this is not true, and even after the revision, the approximate value of pi is still taught as 3.14.
In the Japanese Curriculum guideline published in December 1998 and implemented in 2002, regulations such as the limitation on the number of digits for decimal multiplication were changed. Although the new regulations did not change the fact that pi was to be calculated using "3.14," they added the statement that "3 is used for the purpose."
At that time, many people believed elementary school students were forced to calculate with pi as "3" by hand, and believed that this was a typical example of the negative effects of a relaxed education. This misunderstanding was not easily resolved.
Overview.
In the fall of 1999, a major cram school company launched a major campaign with advertisements that read "The formula for finding the area of a circle: radius x radius x 3!", "Calculate the quadrature of a circle with pi as approximately 3 instead of 3.14."
The mass media also picked up on this issue, and it wrongly became widely believed in society that "as a result of the relaxed education system, pi is now taught as 3".
As a criticism of the decline in scholastic achievement and the relaxed education system, "Pi is 3" was widely covered in weekly and monthly magazines and other mathematics-related journals.
Details.
"Use 3 if fits your purpose" guideline.
The Japanese Curriculum guideline was started as a guide only, but at some point, they came to be considered legally binding. In addition to this, there was also a so-called "restrictive provision" that "has to teach without excesses or deficiencies of what the guidelines say." This statement was carried over to the 1998 revision (implemented in 2002 for elementary schools.)
(1) c) of "section B: Quantity and Measurement" and (1) d) of "section C: Figures"in the contents, while 3.14 is used for pi, consideration should be given to the ability to process using 3 for the purposes. — Curriculum guideline published in 1989
"To process using 3 for the purpose" assumed for,
One interpretation is that they expect to develop the ability to make appropriate judgments and process the information according to the situation and application.
Issues with the 1998 Revised Guidelines.
As part of the so-called "relaxed education," the content of arithmetic learning in multiplication, division, and decimals was reduced, while calculators were allowed to be used from the arithmetic learning stage.
On the other hand, because relaxed education reduced the time for learning but not the areas of learning, many people believed that students were forced by the guideline to use 3 instead of 3.14 as the approximate value of pi in calculations. However, this was a misunderstanding.
In addition, the use of calculators, which was allowed from the 5th grade under the previous teaching guidelines, was allowed from the 4th grade and calculations using 3.14 were possible even with a calculator.
Limit the number of decimal digits.
In the 1998 Revised Curriculum guideline in Japan, the calculation of decimals in the fifth grade of elementary school is,
2. Details
A. Numbers and Calculations.
(3) Understand the meaning of multiplication and division of decimals and be able to use them appropriately.
(3c) The student should be able to think about how to multiply and divide decimals and be able to perform those calculations. Also, to understand the magnitude of the remainder.
(note 3c) The calculation of decimals to the nearest 1/10th of a place shall be handled.
Because of the above limitation, this led to the misunderstanding that 3 must be used as pi. But even if decimals were limited to 1/10th of a place, the pi used would still be 3.1, which does not define pi as 3.
The misunderstanding that "pi is approximately 3".
At that time, in the fall of 1999, a major cram school company released the following advertisement.
The formula for finding the area of a circle is radius x radius x 3? In 2002, fifth-grade students will be doing a quadrature calculation of circles with pi as "approximately 3" instead of 3.14. It's real.
They conducted a major campaign in the Tokyo metropolitan area, and the media covered it extensively.
This led to widespread public awareness of the misunderstanding that pi is now taught as 3 because of the relaxed education system.
Akito Arima, who promoted "relaxed education" as the Minister of Education at the time, repeatedly said, "I was stunned by that," and regretted "my failure to go around the country and explain it in detail."
Disappearance of the sentence.
In the second report of the Japanese Central Council for Education on February 23, 2003, the policy of emphasizing scholastic ability was formulated.
In December 2003, the Curriculum guideline was partially revised to remove the limitation that they must be taught without excesses or deficiencies and changed to a minimum standard that allows teaching in more detail than what is written in the Curriculum guideline, if necessary.
On February 15, 2008, the Japanese Ministry of Education released the new Curriculum guideline (effective in 2011 for elementary schools), the first after the complete revision of the Fundamental Law of Education, and the "restrictive provision" was eliminated.
As a result of the increase in the content of study, students have already learned how to calculate with decimals by the time they use pi; the section on pi now states only that "pi shall be 3.14", and the statement "3 is used for the purpose" has been deleted.
Social Impacts.
Impact on science and education professionals.
The issue of "pi is 3" was discussed in mathematics-related journals and various academic journals. This misunderstanding was not easily resolved, and there were many misunderstandings even among those involved in education.
In response to this situation, Masahiro Kaminaga, an associate professor of the Department of Electrical and Computer Engineering in Tohoku Gakuin University, confessed that he had been convinced that "relaxed education is a foolish reform that teaches pi as 3." And he said, "I usually said, 'Go and do your own research until you are satisfied,' but if teachers are like this, it must be a problem before educational reform."
Other issues with treating pi as approximately 3 were also pointed out, such as: "pi is an irrational number, so it is neither exactly 3 nor 3.14. Thus, while the former and the latter are essentially equivalent in learning the procedure, there is a clear difference in approximate accuracy," "if pi is calculated as 3, the perimeter is the same for the circle and the regular hexagon inscribed in it," and "for the circumference of a circle with a diameter of 10 cm, the error would be 1.4 cm."
It also points out the danger of adding ".14" in vain in terms of significant figures.
Impact on the public.
The misunderstanding that "they teach pi as 3 in elementary school" was seen in weekly and monthly magazines as well. This has resulted in a distrust of public school education.
In Nisio Isin's novel Zaregoto, a scene appeared in which the truncated decimal point is introduced as "the tragedy of 0.14".
On one TV program, five comedians presented a skit in which they used "Pi is OK at 3" as a key line.
The theme song of "Yutori-chan," an animation about Japan's "Yutori" generation, includes the lyrics "3.1415 pi is approximately 3."
The misunderstanding of teaching pi as 3 was also introduced by Akira Ikegami in a 2013 TV program.
Impact on University Entrance Examination Questions.
In 2003, in the sixth question of the first semester of science at the University of Tokyo, a question asking "Prove that pi is greater than 3.05" was included and it became famous as a question with a message opposing the government's stance of teaching pi as 3.
To solve this problem, it suffices to prove that the perimeter of the regular dodecagon inscribed in a circle of diameter 1 is greater than 3.05.
First, consider a circle C of diameter 1 and a regular dodecagon inscribed in circle C. Since the circumference of a circle of radius r is 2πr, the circumference l of circle C, whose radius is 1/2, is
formula_0.
Also, if the perimeter of the regular dodecagon inscribed in circle C is defined as L,
formula_1
formula_2.
Thus, the perimeter L of the regular dodecagon is greater than 3.05. Then
formula_3
formula_4.
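The bound can also be checked numerically; the short sketch below evaluates the perimeter of the inscribed regular dodecagon used in the proof.

```python
import math

# Perimeter of a regular dodecagon inscribed in a circle of diameter 1:
# L = 12 * sin(15 degrees) = 6 * sqrt(2 - sqrt(3))
L = 12 * math.sin(math.radians(15))
print(L, 6 * math.sqrt(2 - math.sqrt(3)))  # both approximately 3.1058
print(L > 3.05, L < math.pi)               # True True, i.e. 3.05 < L < pi
```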
References.
<templatestyles src="Reflist/styles.css" />
[top] | [
{
"math_id": 0,
"text": "l=2\\pi \\cdot \\frac{1}{2}=\\pi"
},
{
"math_id": 1,
"text": "L=12 \\sin 15^{\\circ} =12\\sqrt{\\frac{1- \\cos {30^{\\circ}} }{2}}=6\\sqrt{2-\\sqrt{3}}"
},
{
"math_id": 2,
"text": "L^2=36\\left(2-\\sqrt{3}\\right)>36(2-1.74)=9.36>9.3025=3.05^2"
},
{
"math_id": 3,
"text": "\\pi=l >L>3.05"
},
{
"math_id": 4,
"text": "\\pi>3.05"
}
]
| https://en.wikipedia.org/wiki?curid=74199361 |
74201328 | Six-dimensional holomorphic Chern–Simons theory | Complex three dimensional gauge theory
In mathematical physics, six-dimensional holomorphic Chern–Simons theory or sometimes holomorphic Chern–Simons theory is a gauge theory on a three-dimensional complex manifold. It is a complex analogue of Chern–Simons theory, named after Shiing-Shen Chern and James Simons who first studied Chern–Simons forms which appear in the action of Chern–Simons theory. The theory is referred to as six-dimensional as the underlying manifold of the theory is three-dimensional as a complex manifold, hence six-dimensional as a real manifold.
The theory has been used to study integrable systems through four-dimensional Chern–Simons theory, which can be viewed as a symmetry reduction of the six-dimensional theory. For this purpose, the underlying three-dimensional complex manifold is taken to be the three-dimensional complex projective space formula_0, viewed as twistor space.
Formulation.
The background manifold formula_1 on which the theory is defined is a complex manifold which has three complex dimensions and therefore six real dimensions. The theory is a gauge theory with gauge group a complex, simple Lie group formula_2 The field content is a partial connection formula_3.
The action is
formula_4
where
formula_5
where formula_6 is a holomorphic (3,0)-form and with formula_7 denoting a trace functional which as a bilinear form is proportional to the Killing form.
On twistor space P3.
Here formula_1 is fixed to be formula_0. For application to integrable theory, the three form formula_6 must be chosen to be meromorphic.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{P}^3"
},
{
"math_id": 1,
"text": "\\mathcal{W}"
},
{
"math_id": 2,
"text": "G."
},
{
"math_id": 3,
"text": "\\bar \\mathcal{A}"
},
{
"math_id": 4,
"text": "S_{\\mathrm{HCS}}[\\bar\\mathcal{A}] = \\frac{1}{2\\pi i} \\int_{\\mathcal W} \\Omega \\wedge \\mathrm{HCS}(\\bar \\mathcal{A})"
},
{
"math_id": 5,
"text": "\\mathrm{HCS}(\\bar \\mathcal{A}) = \\mathrm{tr}\\left(\\bar \\mathcal{A} \\wedge \\bar \\partial \\bar \\mathcal{A} + \\frac{2}{3} \\bar \\mathcal{A} \\wedge \\bar \\mathcal{A} \\wedge \\bar \\mathcal{A}\\right)"
},
{
"math_id": 6,
"text": "\\Omega"
},
{
"math_id": 7,
"text": "\\mathrm{tr}"
}
]
| https://en.wikipedia.org/wiki?curid=74201328 |
74204240 | Reactances of synchronous machines | The reactances of synchronous machines comprise a set of characteristic constants used in the theory of synchronous machines. Technically, these constants are specified in units of the electrical reactance (ohms), although they are typically expressed in the per-unit system and thus dimensionless. Since for practically all (except for the tiniest) machines the resistance of the coils is negligibly small in comparison to the reactance, the latter can be used instead of (complex) electrical impedance, simplifying the calculations.
Two reactions theory.
The air gap of the machines with a salient pole rotor is quite different along the pole axis (so called "direct axis") and in the orthogonal direction (so called "quadrature axis"). Andre Blondel in 1899 proposed in his paper "Empirical Theory of Synchronous Generators" the two reactions theory that divided the armature magnetomotive force (MMF) into two components: the direct axis component and the quadrature axis component. The direct axis component is aligned with the magnetic axis of the rotor, while the quadrature (or transverse) axis component is perpendicular to the direct axis. The relative strengths of these two components depend on the design of the machine and the operating conditions. Since the equations naturally split into direct and quadrature components, many reactances come in pairs, one for the direct axis (with the index d), one for the quadrature axis (with the index q). In the machines with a cylindrical rotor the air gap is uniform, the reactances along the d and q axes are equal, and d/q indices are frequently dropped.
States of the generator.
The flux linkages of the generator vary with its state. Three states are considered: the steady state, the transient state, and the sub-transient state.
The sub-transient and transient states are characterized by significantly smaller reactances.
List of reactances.
Das identifies the following reactances: the leakage reactance formula_0, the synchronous reactances formula_1 and formula_5 (written formula_2 when the direct and quadrature values coincide), the transient reactances formula_3 and formula_6, the sub-transient reactances formula_4 and formula_7, the negative-sequence reactance formula_8, the zero-sequence reactance formula_9, and the Potier reactance formula_10.
Synchronous reactances.
The synchronous reactances are exhibited by the armature in the steady-state operation of the machine. The three-phase system is viewed as a superposition of two: the direct one, where the maximum of the phase current is reached when the pole is oriented towards the winding, and the quadrature one, which is offset by 90°.
The per-phase reactance can be determined in a mental experiment where the rotor poles are perfectly aligned with a specific angle of the phase field in the armature (0° for formula_1, 90° for the formula_5). In this case, the reactance X will be related to the flux linkage formula_11 and the phase current I as formula_12, where formula_13 is the circular frequency. The conditions for this mental experiment are hard to recreate in practice, but: in the open-circuit condition the terminal voltage equals the induced electromotive force formula_14, while in the short-circuit condition the terminal voltage is zero and the armature current is limited only by the reactance.
Therefore, the direct synchronous reactance can be determined as a ratio of the voltage in open condition formula_15 to short-circuit current formula_16: formula_17. These current and voltage values can be obtained from the open-circuit saturation curve and the synchronous impedance curve.
The synchronous reactance is a sum of the leakage reactance formula_0 and the reactance of the armature itself (formula_18): formula_19.
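A small sketch of the calculation just described, using hypothetical per-unit test values (the numbers are illustrative only, not from any particular machine).

```python
def direct_synchronous_reactance(v_open, i_sc):
    """X_d as the ratio of the open-circuit voltage to the short-circuit current,
    both taken at the same field excitation."""
    return v_open / i_sc

v_open_pu = 1.00  # open-circuit terminal voltage, per unit
i_sc_pu = 0.55    # armature current with the terminals short-circuited, per unit
x_d = direct_synchronous_reactance(v_open_pu, i_sc_pu)
print(f"X_d = {x_d:.2f} pu")

x_leakage = 0.12  # hypothetical leakage reactance, per unit
print(f"X_a = {x_d - x_leakage:.2f} pu")  # armature reactance, since X_d = X_l + X_a
```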
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X_l"
},
{
"math_id": 1,
"text": "X_d"
},
{
"math_id": 2,
"text": "X_S"
},
{
"math_id": 3,
"text": "X'_d"
},
{
"math_id": 4,
"text": "X''_d"
},
{
"math_id": 5,
"text": "X_q"
},
{
"math_id": 6,
"text": "X'_q"
},
{
"math_id": 7,
"text": "X''_q"
},
{
"math_id": 8,
"text": "X_2"
},
{
"math_id": 9,
"text": "X_0"
},
{
"math_id": 10,
"text": "X_P"
},
{
"math_id": 11,
"text": "\\Psi"
},
{
"math_id": 12,
"text": "X = \\omega \\frac \\Psi I"
},
{
"math_id": 13,
"text": "\\omega"
},
{
"math_id": 14,
"text": "\\omega \\Psi"
},
{
"math_id": 15,
"text": "V_{OPEN}"
},
{
"math_id": 16,
"text": "I_{SC}"
},
{
"math_id": 17,
"text": "X_d = \\frac {V_{OPEN}} {I_{SC}}"
},
{
"math_id": 18,
"text": "X_a"
},
{
"math_id": 19,
"text": "X_d = X_l + X_a"
}
]
| https://en.wikipedia.org/wiki?curid=74204240 |
7420724 | Umesh Vazirani | Indian–American academic
Umesh Virkumar Vazirani is an Indian–American academic who is the Roger A. Strauch Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, and the director of the Berkeley Quantum Computation Center. His research interests lie primarily in quantum computing. He is also a co-author of a textbook on algorithms.
Biography.
Vazirani received a BS from MIT in 1981 and received his Ph.D. in 1986 from UC Berkeley under the supervision of Manuel Blum.
He is the brother of University of California, Irvine professor Vijay Vazirani.
Research.
Vazirani is one of the founders of the field of quantum computing. His 1993 paper with his student Ethan Bernstein on quantum complexity theory defined a model of quantum Turing machines which was amenable to complexity based analysis. This paper also gave an algorithm for the quantum Fourier transform, which was then used by Peter Shor within a year in his celebrated quantum algorithm for factoring integers.
With Charles Bennett, Ethan Bernstein, and Gilles Brassard, he showed that quantum computers cannot solve black-box search problems in fewer than formula_0 queries, where N is the number of elements to be searched. This result shows that the Grover search algorithm is optimal. It also shows that quantum computers cannot solve NP-complete problems in polynomial time using only the certifier.
Awards and honors.
In 2005, both Vazirani and his brother Vijay Vazirani were inducted as Fellows of the Association for Computing Machinery, Umesh for "contributions to theoretical computer science and quantum computation" and Vijay for his work on approximation algorithms. Vazirani was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators and related problems (jointly with Satish Rao and Sanjeev Arora). In 2018, he was elected to the National Academy of Sciences.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(\\sqrt{N})"
}
]
| https://en.wikipedia.org/wiki?curid=7420724 |
74209817 | Quantum walk search | In the context of quantum computing, the quantum walk search is a quantum algorithm for finding a marked node in a graph.
The concept of a quantum walk is inspired by classical random walks, in which a walker moves randomly through a graph or lattice. In a classical random walk, the position of the walker can be described using a probability distribution over the different nodes of the graph. In a quantum walk, on the other hand, the walker is represented by a quantum state, which can be in a superposition of several locations simultaneously.
Search algorithms based on quantum walks have the potential to find applications in various fields, including optimization, machine learning, cryptography, and network analysis. The efficiency and probability of success of a quantum walk search depend heavily on the structure of the search space. In general, quantum walk search algorithms offer an asymptotic quadratic speedup similar to that of Grover's algorithm.
One of the first works on the application of quantum walk to search problems was proposed by Neil Shenvi, Julia Kempe, and K. Birgitta Whaley.
Classical problem description.
Given a search space formula_0 and a subset formula_1 which contains the marked elements, a probabilistic search algorithm samples an element formula_2 uniformly at random at each step, until it finds a marked element from formula_3. If we define formula_4 as the fraction of marked elements, a procedure of that kind must be repeated formula_5 times to find a marked element.
If we have information about the structure of formula_0 we can model it as a graph formula_6, where every vertex formula_7 represents a sample from the search space with formula_8, while the edges represent the conditional probability to sample the next element starting from the current sample.
We perform a search by starting from a random vertex formula_9 and, if it does not belong to formula_3, we sample the next vertex formula_10 among the ones connected to formula_9. This procedure is known as random walk search. To have a probability close to formula_11 to find the marked node, we need to take asymptotically formula_12 steps on the graph, where the parameter formula_13 is the spectral gap associated to the stochastic matrix formula_14 of the graph.
To assess the computational cost of a random walk algorithm, one usually divides the procedure into three sub-phases such as Setup, Check, and Update, and analyses their cost.
The setup cost formula_15 refers to the initialization of the stationary distribution over the vertices of the graph.
The update cost formula_16 is the cost to simulate a transition on the graph according to the transition probability defined in formula_14.
The check cost formula_17 is the cost to verify if the current element belongs to the set formula_3.
The total cost of a random walk search algorithm is formula_18. The greedy version of the algorithm, where the check is performed after every step on the graph has a complexity of formula_19. The presence of the spectral gap term formula_13 in the cost formulation can be thought of as the minimum number of steps that the walker must perform to reach the stationary distribution. This quantity is also known as mixing time.
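For intuition, the sketch below simulates the greedy classical procedure on a small cycle graph with one marked vertex; the graph, its size and the number of trials are arbitrary choices made only to illustrate the update/check cost structure described above.

```python
import random

def random_walk_search(neighbours, marked, start, max_steps=100_000):
    """Greedy classical random walk search: check the current vertex and,
    if it is not marked, move to a uniformly random neighbour."""
    v, steps = start, 0
    while v not in marked and steps < max_steps:
        v = random.choice(neighbours[v])
        steps += 1
    return steps

n = 64
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # cycle graph C_n
marked = {0}
trials = [random_walk_search(neighbours, marked, random.randrange(n)) for _ in range(200)]
print(sum(trials) / len(trials))  # average number of update + check steps
```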
Algorithm description.
The quantum walk search algorithm was first proposed by Magniez et al., also known as MNRS algorithm, and is based on the quantum walk formulation proposed by Mario Szegedy. The walk is performed on the directed edges of the graph so to represent the quantum state associated with the search space we need two quantum registers formula_20, which correspond to the edge from formula_21 to formula_22. To easily understand how it works, the algorithm can be explained through its geometric interpretation. We first define formula_23 as the uniform superposition over the neighbours of formula_24. We additionally define the superposition over the marked and non-marked states, often referred to as the good and bad states, as
formula_25 and formula_26
where formula_3 is the set of marked elements. The uniform superposition over all the edges formula_16 can be viewed a combination of good and bad states.
formula_27 with formula_28.
The algorithm is composed of the following steps: first, prepare the uniform superposition formula_27 over all the edges; then repeat formula_30 times a reflection through formula_31 followed by a reflection through formula_29; finally, measure the first register and check whether the outcome is a marked vertex.
Since the way the algorithm finds a marked element is based on the amplitude amplification technique, the proof of correctness is similar to the one of Grover's algorithm (which can also be viewed as a special case of a quantum walk on a fully connected graph ). The two reflections through formula_29 and formula_31 exhibit the effect of moving the quantum state toward the good state. After formula_32 applications of the reflections the state can be written as formula_33, and by setting formula_34 we have that formula_35 which yields the good state with a high probability.
The first reflection has the effect of checking if the current vertex is marked and applying a phase shift equal to formula_36 if it is so. This is a common procedure in many quantum algorithms based on amplitude amplification and can be realized through a quantum oracle function that verifies the condition formula_37.
The second reflection is implemented with a quantum phase estimation over the walk operator formula_38 which must reflect the structure of the graph we are exploring. The walk operator can be defined as formula_39 where formula_40 and formula_41 are two reflections through the subspaces formula_42 and formula_43. Since the eigenvalues of formula_38 are of the form formula_44 and the operator has a unique eigenvalue equal to formula_11, corresponding to formula_45, given by formula_46, we can perform a phase estimation with precision formula_47 to find the unique eigenvalue. The precision of the reflection depends on the number of qubits used to estimate the phase.
With the same formalism used to estimate the cost of the classical random walk algorithm, the quantum costs can be summarised with: the quantum setup cost formula_15 to prepare the state formula_29, the quantum update cost formula_16 to apply the walk operator formula_38, and the quantum checking cost formula_17 to implement the reflection through formula_31.
The total cost of the quantum walk search is formula_48, which results in a quadratic speedup compared to the classical version. Compared to Grover's algorithm quantum walks become advantageous in the presence of large data structures associated with each quantum state, since in the first case they are entirely rebuilt at each iteration while in walks they are only partially updated in each step.
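To make the comparison concrete, the sketch below simply evaluates the classical and quantum cost expressions for some arbitrary example values of S, U, C, ε and δ.

```python
from math import sqrt

def classical_cost(S, U, C, eps, delta):
    # S + (1/eps) * ((1/delta) * U + C)
    return S + (1 / eps) * ((1 / delta) * U + C)

def quantum_walk_cost(S, U, C, eps, delta):
    # S + (1/sqrt(eps)) * ((1/sqrt(delta)) * U + C)
    return S + (1 / sqrt(eps)) * ((1 / sqrt(delta)) * U + C)

S, U, C = 1.0, 1.0, 1.0   # unit subroutine costs (illustrative)
eps, delta = 1e-4, 1e-2   # fraction of marked elements and spectral gap (illustrative)
print(classical_cost(S, U, C, eps, delta))     # about 1.0e6
print(quantum_walk_cost(S, U, C, eps, delta))  # about 1.1e3, a roughly quadratic saving
```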
Hypercube example.
This is an example of how to apply the quantum walk search on a hypercube graph.
Although in the original description Szegedy quantum walks are used, for this example we show the use of a coined quantum walk, as it is more intuitive to understand. In any case, the two formalizations turn out to be equivalent under specific assumptions.
The search space is a formula_49-hypercube with formula_50; it has formula_51 vertices and degree equal to formula_52. Each node formula_21 can be labeled with a binary string of formula_52 bits, and two nodes are connected by an edge if their Hamming distance is formula_11. To set up the quantum walk search we need a coin register of dimension formula_53, to encode all the possible directions which a walker can choose, and a vertex register of dimension formula_54 to represent the vertices.
The computational basis is formula_55 with formula_56.
The walk is performed by two operators: a coin operator, which acts on the coin register and puts the walker into a superposition of the possible directions, and a shift operator, which moves the walker to the adjacent vertex indicated by the coin register.
Thus, the walk operator is formula_57.
In the case of the hypercube graph, we can leverage the fact that the binary encoding of the vertices differ by only one bit for any couple of adjacent nodes to construct an efficient shift operator. The shift operator can be written as:
formula_58
where formula_59 is the formula_60-basis for the hypercube (if formula_50, the basis vectors are formula_61).
The algorithm works as follows:
The shift operator is a key factor in the implementation of an efficient quantum walk: while for certain families of graphs, such as toroids and lattices, the shift is known, for non-regular graphs the design of an effective shift operator is still an open challenge.
Applications.
The following applications are based on quantum walk on Johnson graph formula_63.
Given a function formula_64 defined on formula_65, it asks to find two distinct elements formula_66 such that formula_67, if such a pair exists.
Given three formula_68 matrices formula_69 and formula_17, the problem asks to verify whether formula_70 or otherwise find the indices formula_71 such that formula_72.
A triangle is a complete subgraph on three vertices of an undirected graph formula_73. Given the adjacency matrix of a graph, the problem asks to find a triangle if there is any.
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "M \\subseteq X"
},
{
"math_id": 2,
"text": "x \\in X"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "\\epsilon = |M|/|N| "
},
{
"math_id": 5,
"text": "O(1/\\epsilon)"
},
{
"math_id": 6,
"text": "G(V,E)"
},
{
"math_id": 7,
"text": "V=\\{v_{1},\\dots, v_{n}\\}"
},
{
"math_id": 8,
"text": "|X|=n"
},
{
"math_id": 9,
"text": "v_{1}"
},
{
"math_id": 10,
"text": "v_{2}"
},
{
"math_id": 11,
"text": "1"
},
{
"math_id": 12,
"text": "O(1/\\epsilon\\delta )"
},
{
"math_id": 13,
"text": "\\delta"
},
{
"math_id": 14,
"text": "P"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "U"
},
{
"math_id": 17,
"text": "C"
},
{
"math_id": 18,
"text": "S+\\frac{1}{\\epsilon}\\biggl(\\frac{1}{\\delta}U + C\\biggr)"
},
{
"math_id": 19,
"text": "S+\\frac{1}{\\epsilon \\delta}\\biggl(U+ C\\biggr)"
},
{
"math_id": 20,
"text": "|i\\rangle|j\\rangle"
},
{
"math_id": 21,
"text": "v_{i}"
},
{
"math_id": 22,
"text": "v_{j}"
},
{
"math_id": 23,
"text": "|p_{i}\\rangle = \\sum_{j} \\sqrt{P_{ij}} |j\\rangle"
},
{
"math_id": 24,
"text": "|i\\rangle"
},
{
"math_id": 25,
"text": "|G\\rangle=\\frac{1}{\\sqrt{|M|}}\\sum_{i \\in M}|i\\rangle |p_{i}\\rangle"
},
{
"math_id": 26,
"text": "|B\\rangle=\\frac{1}{\\sqrt{X-|M|}}\\sum_{i \\not\\in M}|i\\rangle |p_{i}\\rangle"
},
{
"math_id": 27,
"text": "|U\\rangle=\\frac{1}{\\sqrt{X}}\\sum_{i \\in X}|i\\rangle |p_{i}\\rangle = \\sin(\\theta)|G\\rangle +\n\\cos(\\theta)|B\\rangle"
},
{
"math_id": 28,
"text": "\\theta = \\arcsin(\\sqrt{\\epsilon})"
},
{
"math_id": 29,
"text": "|U\\rangle"
},
{
"math_id": 30,
"text": "O(1/\\sqrt{\\epsilon})"
},
{
"math_id": 31,
"text": "|B\\rangle"
},
{
"math_id": 32,
"text": "k"
},
{
"math_id": 33,
"text": " \\sin((2k+1)\\theta)|G\\rangle + \\cos((2k+1)\\theta)|B\\rangle"
},
{
"math_id": 34,
"text": "k\\thickapprox \\frac{\\pi}{4\\theta} = O(1/\\sqrt{\\epsilon}) "
},
{
"math_id": 35,
"text": "\\sin((2k+1)\\theta) \\thickapprox 1"
},
{
"math_id": 36,
"text": "-1"
},
{
"math_id": 37,
"text": "|i\\rangle \\in M"
},
{
"math_id": 38,
"text": "W"
},
{
"math_id": 39,
"text": "W=ref(\\mathcal{B})ref(\\mathcal{A})"
},
{
"math_id": 40,
"text": "ref(\\mathcal{B})"
},
{
"math_id": 41,
"text": " ref(\\mathcal{A})"
},
{
"math_id": 42,
"text": "\\mathcal{A}=span\\{|i\\rangle, |p_{i}\\rangle\\}"
},
{
"math_id": 43,
"text": "\\mathcal{B}=span\\{|p_{j}\\rangle, |j\\rangle\\}"
},
{
"math_id": 44,
"text": "e^{\\pm2i\\theta}"
},
{
"math_id": 45,
"text": "|U \\rangle"
},
{
"math_id": 46,
"text": "\\theta = 0"
},
{
"math_id": 47,
"text": "O(1/\\sqrt{\\delta})"
},
{
"math_id": 48,
"text": "S+\\frac{1}{\\sqrt{\\epsilon}}\\biggl(\\frac{1}{\\sqrt{\\delta}}U + C\\biggr)"
},
{
"math_id": 49,
"text": "n"
},
{
"math_id": 50,
"text": "n=4"
},
{
"math_id": 51,
"text": "|V|=2^{4}"
},
{
"math_id": 52,
"text": "4"
},
{
"math_id": 53,
"text": "\\mathcal{H}^{n}"
},
{
"math_id": 54,
"text": "\\mathcal{H}^{2^{n}}"
},
{
"math_id": 55,
"text": "|d\\rangle |v\\rangle "
},
{
"math_id": 56,
"text": "\\{d \\in D=\\{00,01,10,11\\},v \\in V=\\{0000,0001, \\dots ,1111\\}\\}"
},
{
"math_id": 57,
"text": "W=SC"
},
{
"math_id": 58,
"text": "S=\\sum_{d=1}^{n}\\sum_{v=1}^{n}|d\\rangle|v \\oplus e_{d} \\rangle \\langle d | \\langle v|"
},
{
"math_id": 59,
"text": "e_{d}"
},
{
"math_id": 60,
"text": "d"
},
{
"math_id": 61,
"text": "\\{0001,0010,0100,1000\\}"
},
{
"math_id": 62,
"text": "\\tilde{\\theta}=0"
},
{
"math_id": 63,
"text": "J(n,k)"
},
{
"math_id": 64,
"text": "f"
},
{
"math_id": 65,
"text": "\\{n\\}"
},
{
"math_id": 66,
"text": "i,j \\in \\{n\\}"
},
{
"math_id": 67,
"text": "f(i)=f(j)"
},
{
"math_id": 68,
"text": "n\\times n "
},
{
"math_id": 69,
"text": "A,B"
},
{
"math_id": 70,
"text": "AB=C"
},
{
"math_id": 71,
"text": "i,j"
},
{
"math_id": 72,
"text": "(AB)_{i,j} \\neq C_{i,j}"
},
{
"math_id": 73,
"text": "G"
}
]
| https://en.wikipedia.org/wiki?curid=74209817 |
74209853 | Thermodynamic modelling | Thermodynamic modelling is a set of different strategies that are used by engineers and scientists to develop models capable of evaluating different thermodynamic properties of a system. At each thermodynamic equilibrium state of a system, the thermodynamic properties of the system are specified. Generally, thermodynamic models are mathematical relations that relate different state properties to each other in order to eliminate the need of measuring all the properties of the system in different states.
The simplest thermodynamic models, also known as equations of state, can come from simple correlations that relate different thermodynamic properties using a linear or second-order polynomial function of temperature and pressure. They are generally fitted using the experimental data available for the specific property. This approach can result in limited predictive capability, and as a consequence the correlation can be adopted only in a limited operating range.
By contrast, more advanced thermodynamic models are built in a way that can predict the thermodynamic behavior of the system, even if the functional form of the model is not based on the real thermodynamic behaviour of the material. These types of models contain different parameters that are gradually refined for each specific model in order to enhance the accuracy of the evaluated thermodynamic properties.
Cubic model development.
Cubic equations of state refer to the group of thermodynamic models that can evaluate the specific volume of gas and liquid systems as a function of pressure and temperature. To develop a cubic model, first, it is essential to select a cubic functional form. The most famous functional forms of this category are Redlich-Kwong, Soave-Redlich-Kwong and Peng-Robinson. Although their initial form is empirically suggested, they are categorised as semi-empirical models as their parameters can be adjusted to fit the real experimental measurement data of the target system.
Pure component modelling.
When the development of a cubic model for a pure component is targeted, the purpose is to replicate the specific volume behaviour of the fluid in terms of temperature and pressure. At a given temperature, any cubic functional form results in two separate roots, which makes it possible to model the behaviour of both the vapour and liquid phases within a single model. Finding the roots of the cubic function is done by simulating the vapour-liquid equilibrium condition of the pure component, where the fugacity coefficients of the two phases are equal to each other.
So, in this case, the main aim can be limited to deriving the fugacity coefficients of the vapour and liquid phases from the cubic model and refining the adjustable parameters of the model such that the coefficients become equal to each other at different equilibrium pairs of temperature and pressure. As the equilibrium pressure and temperature are related to each other in the case of a pure component system, the functional form of cubic models is able to evaluate the specific volume of the system over a wide range of the temperature and pressure domain.
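As an illustration of how a cubic functional form yields separate liquid-like and vapour-like roots at a given temperature and pressure, the following Python sketch solves the Peng-Robinson model written as a cubic in the compressibility factor; the textbook Peng-Robinson coefficients are assumed and the propane-like critical constants and conditions are illustrative placeholders:
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_roots(T, P, Tc, Pc, omega):
    # Standard Peng-Robinson parameterisation (assumed textbook form)
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    # Peng-Robinson equation written as a cubic in the compressibility factor Z
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    Z = np.roots(coeffs)
    Z = np.sort(Z[np.abs(Z.imag) < 1e-9].real)
    return Z[Z > B]            # physically meaningful roots (liquid-like first, vapour-like last)

# Propane-like constants near saturation (illustrative values only)
Z = pr_roots(T=300.0, P=9.5e5, Tc=369.8, Pc=4.25e6, omega=0.152)
v = Z * R * 300.0 / 9.5e5      # molar volumes corresponding to each root
print(Z, v)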
Multi-component modelling.
Cubic model development for mixtures of more than one component is different as, according to the Gibbs phase rule, at each temperature level of a multi-component system, equilibrium states can exist at multiple pressure levels. Because of that, development of the thermodynamic model should be performed following different steps:
Mixing rules.
Mixing rules refer to different approaches that can be used to modify the cubic model in the case of multi-component mixtures. The simplest mixing rule is proposed by van der Waals and is called the "van der Waals one fluid (vdW1f)" mixing rule. As it can be understood from its name, this mixing rule is only used in case of modelling of a single phase (vapor phase). As a first step, to combine the model parameters for each binary combination of the mixture, the following equations are suggested:
formula_0
formula_1
where formula_2 and formula_3 are the parameters of the main target cubic model that was previously chosen. Then, all the possible binary combinations together with the concentration of each constituent in the mixture are used to define the final parameters for the mixture model as below:
formula_4
formula_5
In the case of using this mixing rule, apart from the two adjustable binary interaction parameters (BIPs) for each combination (formula_6 and formula_7), the other parameters are specified based on the pure component parameters and the concentration of the different constituents in the mixture. So, the model development in this case is limited to adjusting these two parameters such that the fugacity coefficients of the different phases become equal to each other at a certain temperature and pressure level. To overcome the limitation of predicting only single-phase behaviour when using this mixing rule, other advanced mixing rules have been developed. To predict the thermodynamic behaviour of the multi-component system in different phases, it is essential to build the energy function as a fundamental property of the system. Although this is mainly the case for the fundamental models, advanced mixing rules such as the Huron-Vidal mixing rule and the Wong-Sandler mixing rule have been developed to adjust the parameters of the cubic models to contain these fundamental properties. This is usually done by building a mathematical structure capable of calculating the excess Gibbs energy of the system. It is generally built by two widely used approaches, namely UNIFAC and the Non-Random Two-Liquid (NRTL) method. The choice of the proper mixing rule to be implemented for the target system can be made based on the inherent properties of the target system, such as the polarity of the different components, the reactivity of the system's constituents with respect to each other, etc.
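A short Python sketch of the vdW1f combination for a binary mixture (all numerical values are illustrative placeholders; the formula_7 correction to the co-volume is omitted here since the mixture co-volume above only requires the pure-component values):
import numpy as np

def vdw1f_mixing(a, b, y, k=None):
    # a, b: pure-component parameters a_ii, b_ii of the chosen cubic model;
    # y: mole fractions; k: matrix of binary interaction parameters k_ij
    a, b, y = (np.asarray(v, dtype=float) for v in (a, b, y))
    k = np.zeros((len(a), len(a))) if k is None else np.asarray(k, dtype=float)
    a_ij = np.sqrt(np.outer(a, a)) * (1.0 - k)   # combining rule for a
    a_mix = y @ a_ij @ y                         # sum_i sum_j y_i y_j a_ij
    b_mix = y @ b                                # sum_i y_i b_i
    return a_mix, b_mix

# Illustrative equimolar binary mixture: a in Pa m^6/mol^2, b in m^3/mol
a_mix, b_mix = vdw1f_mixing(a=[1.39, 2.44], b=[3.9e-5, 7.2e-5],
                            y=[0.5, 0.5], k=[[0.0, 0.02], [0.02, 0.0]])
print(a_mix, b_mix)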
Fundamental model development.
Fundamental models refer to a family of thermodynamic models that propose a mathematical form for one of the fundamental thermodynamic properties of the system, such as Gibbs free energy or Helmholtz free energy. The core idea behind this type of thermodynamic models is that, by constructing the fundamental property, it is possible to take advantage of thermodynamic relations that express different thermodynamic properties as the first or second-order derivatives of fundamental properties, with respect to pressure, temperature or density.
Helmholtz free energy models.
For the development of Helmholtz free energy models, the idea is to associate different parameters that resemble different inter-molecular forces between system species. As a result, these models are referred to as multi-parameter models. Steps to develop a Helmholtz free energy model can be summarized as:
Thermodynamic models criterions.
A thermodynamic model predicts different properties with a certain level of accuracy. In fact, based on the functional form of the thermodynamic model and the real behaviour of the system some properties can be predicted with high accuracy level, while the other ones could not be predicted accurately enough to comply with different industrial needs. In this regard several criterions should be taken into account for the proper choice of thermodynamic model to be practical based on the targeted application.
Applicability.
Although thermodynamic models are generally developed to predict thermodynamic properties over a wide range of temperatures and pressures, due to the lack of experimental data for different compounds over the full operational range, model accuracy varies when moving towards wider temperature and pressure ranges. When a model is targeted to be used in a specific application, the initial step is to identify the temperature and pressure at which the model is intended to be implemented. If the model is able to perform in the target operating window, the second step is to investigate whether the model can cover all the system constituents within the concentration ranges of interest. Fundamental models address this issue by covering the whole concentration range of the compounds they involve. However, this is not the case for ad-hoc cubic model developments, which may be considered only in a specific range of concentrations based on the application.
Robustness.
Thermodynamic models should be robust and reliable, providing consistent results across different conditions and applications. They should be able to handle non-ideal behaviour, phase transitions, and complex interactions without significant loss of accuracy. Although some models are capable of taking into account the possible reactions between the system constituents, this is not the case for other, simpler models that can predict the behaviour of the system only in a specific phase. So, it is essential to identify the typical behaviour of the fluid in the target application to select and develop a proper model. However, in most engineering applications, developing a model that is able to predict the thermodynamic properties of the system in different phases and critical regions, while taking into account the possible reactions between system constituents, is a necessity.
Accuracy.
Based on the foundation on which each thermodynamic model is built, the accuracy can vary not only for a specific property evaluated by different models but also for different properties predicted within a specific model itself. Cubic models are developed based on phase equilibrium, and as a result they can predict the phase equilibrium of pure and multi-component systems within an acceptable accuracy level provided the model is fine-tuned to the experimental data of interest. However, this family of models is not accurate enough in predicting density and specific heat capacity, the two main thermodynamic properties that are of importance in most industrial applications. In the latter case, some corrections have been suggested to enhance the accuracy of cubic models for different properties, such as the Peneloux translation for density prediction.
On the other hand, models that are developed based on fundamental properties such as Gibbs free energy or Helmholtz free energy are generally capable of predicting a wider range of properties. As these models have a large number of adjustable parameters that are fitted to experimental data for different properties, they are generally the most accurate.
Computational speed.
The model should be computationally efficient, especially for complex systems and large-scale simulations. The model's equations and algorithms should be designed to minimize computational time. This is especially important when transient processes are targeted, where thermodynamic properties change significantly over the transient time domain and computationally demanding models cannot satisfy industrial needs.
Availability.
In certain applications, it may be important to consider the acceptance and implementation of a specific thermodynamic model within the industry. Industrial standards and guidelines can provide insights into the preferred models for specific processes. However, not all thermodynamic models are widely available in commercial software packages. This is especially the case for the more complex fundamental models which, despite their robustness, are still not well accepted by industry due to their limited availability.
{
"math_id": 0,
"text": "a_{ij}=\\sqrt{a_{ii}a_{jj}} (1-k_{ij})"
},
{
"math_id": 1,
"text": "b_{ij}=\\frac {b_{ii}+b_{jj}} 2 (1-l_{ij})"
},
{
"math_id": 2,
"text": "a_{ij}"
},
{
"math_id": 3,
"text": "b_{ij}"
},
{
"math_id": 4,
"text": "a_{mix} = \\sum_i \\sum_j y_i y_j a_{ij}"
},
{
"math_id": 5,
"text": "b_{mix} = \\sum_i y_i b_i"
},
{
"math_id": 6,
"text": "k_{ij}"
},
{
"math_id": 7,
"text": "l_{ij}"
}
]
| https://en.wikipedia.org/wiki?curid=74209853 |
74210027 | T-MOS thermal sensor | TMOS is a type of thermal sensor consisting of a micromachined thermally isolated transistor fabricated using CMOS-SOI (Silicon on Insulator) MEMS (Micro electro-mechanical system) technology. It has been developed in the last decade by the Technion - Israel Institute of Technology. A thermal sensor is a device able to detect the thermal radiation emitted by an object located in the FOV (Field Of View) of the sensor. Infrared radiation (IR) striking the sensor produces a change in the temperature of the device that, as a consequence, generates an electric output signal proportional to the incident IR power. The sensor is able to measure the temperature of the radiating object thanks to the information contained in the impinging radiation, exploiting in this sense the Stefan–Boltzmann law. The TMOS detector has two important characteristics that make it different from others: it is an active and uncooled sensor.
Fabrication process.
A TMOS detector consists of a mosaic structure composed of several sub-pixels, which are electrically connected in parallel, in series or in a mixed combination, and are thermally isolated. In each sub-pixel the sensitive element is the TMOS sensor, which is suspended in vacuum, fabricated in CMOS-SOI technology and dry released. The mosaic structure includes: the pixel frame, the suspended transistor, which absorbs IR radiation and may also be embedded in an IR-absorbing membrane that determines the thermal capacitance of the sensor, and two folding arms that determine the sensor's thermal conductivity.
TMOS fabrication is based on built-in masks and dry bulk micromachining. In TMOS fabrication, a MEMS post-process is added to the standard CMOS-SOI technology used to produce MOS transistors; it is necessary to realize the folded arms and the suspension of the transistor. In the standard CMOS process there are several metallization layers. In TMOS production the upper ones, made of aluminum or copper, are used as built-in masks. Both metals are not affected by the fluorine plasma used to dry etch silicon and interlevel dielectrics. The use of built-in masks grants high alignment accuracy and resolution while reducing fabrication costs. The final step of the MEMS post-process is the metal mask removal. This step is performed using a standard wet etchant of aluminum or copper.
At present, 130 nm CMOS-SOI technology implemented on 8-inch wafers is used to produce TMOS sensors, employing wafer-level processing in standard CMOS facilities, allowing cost reduction and large production volumes.
Packaging.
To improve the sensor's performance and to protect it from the surrounding environment, especially from moisture, TMOS sensors are packaged under vacuum. The wafer-level production also enables wafer-level packaging, allowing the possibility to integrate optical windows and filters to improve their efficiency and widen their applicability.
The TMOS package contains two devices: one "active", which senses and is exposed to external radiation, and another one "blind", which is shielded from the outside through an aluminum mirror deposited on the package.
Operating principle.
The working principle of the TMOS sensor is that thermal IR radiation absorbed in the sensitive area heats up the TMOS, causing a variation in its temperature. The temperature change produces a current or a voltage output signal proportional to the absorbed radiation.
TMOS performance depends on the transistor operating region and configuration: a two-terminal, diode-like configuration, or a three-terminal configuration. The two-terminal configuration is characterized by a greater thermal isolation. On the other side, the three-terminal configuration has a higher internal voltage gain, given by the higher output resistivity.
The subthreshold region is the preferred one because it avoids self-heating effects and leads to higher sensitivity. Another reason to work in the subthreshold region is that the TMOS is an active device and so requires a bias; however, in this operating region the power consumption is lower than in the other ones.
From a circuit point of view, the produced TMOS signal can be modelled as a temperature-dependent current source formula_0 in parallel with the formula_1 generator of the small-signal equivalent circuit. The value of formula_0 is directly proportional to the drain-source current variation with respect to the TMOS operating temperature and to the temperature variation induced on the TMOS by the radiation absorbed from the target object. This temperature has a direct dependence on the absorbing efficiency, the incident radiation power and the thermal conductance of the sensor.
As mentioned in the previous section, the TMOS sensor package contains two devices, so the signal is read in a differential configuration. In this way the blind TMOS represents a reference relative to which the measurement is made. This configuration is useful because it allows rejection of the common-mode signal and reduces self-heating effects.
Responsivity.
The most important figure of merit of every kind of sensor is its responsivity. The responsivity is defined as the ratio between the output electrical parameter, either current or voltage, and the incident power on the detector. For a TMOS sensor working in the subthreshold region it is about 1.25 × 10⁷ V/W.
formula_2
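For example (illustrative numbers only), the output signal corresponding to a given incident IR power follows directly from this definition; a minimal Python sketch:
R_v = 1.25e7          # V/W, the subthreshold responsivity quoted above
P_in = 10e-9          # 10 nW of incident IR power (illustrative value)
V_out = R_v * P_in    # output voltage signal, here 0.125 V
print(V_out)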
TCC and TCV.
TMOS sensitivity depends on whether the device is working in current or voltage mode. In current mode a bias voltage is applied and the current increases by an increment, which is the signal current. In this case the sensitivity corresponds to the temperature coefficient of current (TCC), which is inversely proportional to the drain-source current and directly proportional to the derivative of the drain-source current with respect to the operating temperature. In contrast, in voltage mode, where a bias current is applied, the voltage decreases by an increment, which is the voltage signal. In voltage mode the sensitivity is the temperature coefficient of voltage (TCV), which is inversely proportional to the bias voltage and directly proportional to the derivative of the voltage with respect to temperature at the considered operating temperature. TCC values above 4%/K are achieved working in the subthreshold region.
Advantages.
TMOS thermal sensor presents several advantages compared to other thermal sensors such as thermopiles, bolometers, and also microbolometers, which have a very similar structure. Both thermopile and bolometer are passive detectors while microbolometers can also have an active structure, but the transistor used is a TFT (thin-film transistor).
The main advantages of using TMOS sensor are:
Disadvantages.
The main disadvantage is the limited sensitivity compared with cooled IR detectors. Quantum photon detectors, for example, reach higher sensitivity, but they need to work at cryogenic temperatures and so require a cooling system which consumes a lot of power.
Applications.
Thermal sensors may have many different applications. They respond to thermal IR radiation, so their main application is the production of thermal IR cameras. Other possible applications span different fields, from gas analysis, human detection for autonomous driving, presence detection, people counting and security systems, to thermal monitoring during fabrication processes.
Until now the main TMOS application has been as a high-sensitivity detector for motion and presence. When an object enters the FOV of the sensor there is a change in the radiation power that reaches the detector. This change causes a temperature variation with respect to the previous condition, and from this difference the presence or motion is detected. TMOS presence-detection commercial products are available.
The low power consumption typical of the TMOS sensor means that it can also be powered by a common ion battery, making it suitable for IoT, wearable devices, mobile phone integration, and smart homes.
The radiation emitted by the human body falls in the mid-infrared range, peaking around 12 μm, so one of the applications of thermal sensors is fever detection. The TMOS's high performance, in terms of high sensitivity and low power consumption, together with a low-cost fabrication process, makes it a promising candidate to implement a contactless thermometer.
{
"math_id": 0,
"text": "i_{sig}"
},
{
"math_id": 1,
"text": "g_mV_{gs}"
},
{
"math_id": 2,
"text": " R = {V_{out} \\over P_{in}}"
}
]
| https://en.wikipedia.org/wiki?curid=74210027 |
74210833 | Lumped parameter model for the cardiovascular system | A lumped parameter cardiovascular model is a zero-dimensional mathematical model used to describe the hemodynamics of the cardiovascular system. Given a set of parameters that have a physical meaning (e.g. resistances to blood flow), it allows to study the changes in blood pressures or flow rates throughout the cardiovascular system. Modifying the parameters, it is possible to study the effects of a specific disease. For example, arterial hypertension is modeled increasing the arterial resistances of the model.
The lumped parameter model is used to study the hemodynamics of a three-dimensional space (the cardiovascular system) by means of a zero-dimensional space that exploits the analogy between pipes and electrical circuits. The reduction from three to zero dimensions is performed by splitting the cardiovascular system into different compartments, each of them representing a specific component of the system, e.g. right atrium or systemic arteries. Each compartment is made up of simple circuital components, like resistances or capacitors, while the blood flux behaves like the current flowing through the circuit according to Kirchhoff's laws, under the action of the blood pressure (voltage drop).
The lumped parameter model consists of a system of ordinary differential equations that describes the evolution in time of the volumes of the heart chambers, and of the blood pressures and fluxes through the blood vessels.
Model description.
The lumped parameter model consists of a system of ordinary differential equations that adhere to the principles of conservation of mass and momentum. The model is obtained exploiting the electrical analogy where the current represents the blood flow, the voltage represents the pressure difference, the electric resistance plays the role of the vascular resistance (determined by the section and the length of the blood vessel), the capacitance plays the role of the vascular compliance (the ability of the vessel to distend and increase volume with increasing transmural pressure, that is the difference in pressure between the two sides of the vessel wall) and the inductance represents the blood inertia. Each heart chamber is modeled by means of the elastances that describe the contractility of the cardiac muscle and the unloaded volume, that is the blood volume contained in the chamber at zero pressure. The valves are modeled as diodes. The parameters of the model are the resistances, the capacitances, the inductances and the elastances. The unknowns of the system are the blood volumes inside each heart chamber, and the blood pressures and fluxes inside each compartment of the circulation. The system of ordinary differential equations is solved by means of a numerical method for temporal discretization, e.g., a Runge-Kutta method.
The cardiovascular system is split into different compartments:
Downstream of the left atrium and ventricle and right atrium and ventricle there are the four cardiac valves: mitral, aortic, tricuspid and pulmonary valves, respectively.
The splitting of the pulmonary and systemic circulation is not fixed; for example, if the interest of the study is in systemic capillaries, a compartment accounting for the systemic capillaries can be added to the lumped parameter model. Each compartment is described by a Windkessel circuit with a number of elements depending on the specific compartment. The ordinary differential equations of the model are derived from the Windkessel circuits and Kirchhoff's laws.
In what follows the focus will be on a specific lumped parameter model. The compartments considered are the four heart chambers, the systemic and pulmonary arteries and veins.
Heart chambers equations.
The parameters related to the four heart chambers are the passive and active elastances formula_0 and formula_1 (where the subscripts vary among formula_2 and formula_3 if the elastances refer to the right atrium or ventricle or the left atrium or ventricle, respectively) and the unloaded volumes formula_4. The dynamics of the heart chambers are described by the time-dependent elastance:
formula_5
where formula_6 is a periodic (with period of one heartbeat) time-dependent function ranging from formula_7 to formula_8 that accounts for the activation phases of the heart during a heartbeat. From the above equation, the passive elastance represents the minimum elastance of the heart chamber, whereas the sum of formula_0 and formula_1 represents its maximum elastance. The time-dependent elastance allows the computation of the pressure inside a specific heart chamber as follows:
formula_9
where formula_10 is the volume of blood contained in the heart chamber and the volumes for each chamber are the solutions to the following ordinary differential equations that account for inward and outward blood fluxes associated with the heart chamber:
formula_11
formula_12
formula_13
formula_14
where formula_15 and formula_16 are the fluxes through the mitral, aortic, tricuspid and pulmonary valves respectively and formula_17 and formula_18 are the fluxes through the pulmonary and systemic veins, respectively.
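A minimal Python sketch of the time-dependent elastance and the resulting chamber pressure; the activation shape, heartbeat period and parameter values are illustrative assumptions, not values prescribed by the model:
import numpy as np

def activation(t, T_hb=0.8, t_act=0.3):
    # Periodic activation f(t) in [0, 1]: a smooth bump lasting t_act seconds each beat
    tau = (t % T_hb) / t_act
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * tau)) if tau < 1.0 else 0.0

def chamber_pressure(t, V, EA=2.75, EB=0.08, V0=5.0):
    # p(t) = E(t) (V - V0) with E(t) = EB + EA f(t); units assumed mmHg/mL and mL
    E = EB + EA * activation(t)
    return E * (V - V0)

print(chamber_pressure(0.1, 60.0))   # pressure of a chamber holding 60 mL at t = 0.1 s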
Valves equations.
The valves are modeled as diodes and the blood fluxes across the valves depend on the pressure jumps between the upstream and downstream compartment:
formula_19
formula_20
where the pressure inside each heart chamber is defined in the previous section, formula_21 and formula_22 are the time-dependent pressures inside the systemic and pulmonary artery compartment and formula_23 is the flux across the valve depending on the pressure jump:
formula_24
where formula_25 and formula_26 are the resistances of the valves when they are open and closed respectively.
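A direct Python transcription of this diode-like law (the open/closed resistance values are illustrative placeholders and the sign convention follows the piecewise definition above):
def q_valve(dp, r_min=0.0075, r_max=75000.0):
    # Flux across a valve for a pressure jump dp: low-resistance branch for dp < 0,
    # high-resistance (leakage) branch otherwise, as in the piecewise law above.
    return dp / r_min if dp < 0.0 else dp / r_max

print(q_valve(-10.0), q_valve(10.0))   # low-resistance branch vs. high-resistance branch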
Circulation compartments equations.
Each compartment of the blood vessels is characterized by a combination of resistances, capacitances and inductances. For example, the arterial systemic circulation can be described by three parameters formula_27 and formula_28 that represent the arterial systemic resistance, capacitance and inductance. The ordinary differential equations that describes the systemic arterial circulation are:
formula_29
formula_30
where formula_31 is the blood flux across the systemic arterial compartment and formula_32 is the pressure inside the veins compartment.
Analogous equations with similar notation hold for the other compartments describing the blood circulation.
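A runnable Python sketch of the two systemic arterial equations above, driven by a prescribed aortic valve flow and a fixed venous pressure; every parameter value and the inflow profile are illustrative assumptions, not values from the model description:
import numpy as np
from scipy.integrate import solve_ivp

R_AR, C_AR, L_AR = 0.8, 1.2, 5e-3   # resistance, compliance, inertance (illustrative)
p_VEN, T_hb = 15.0, 0.8             # venous pressure (mmHg) and heartbeat period (s)

def q_av(t):
    # crude half-sine ejection profile during the first third of each heartbeat
    tau = t % T_hb
    return 300.0 * np.sin(np.pi * tau / (T_hb / 3.0)) if tau < T_hb / 3.0 else 0.0

def rhs(t, y):
    p_ar, q_ar = y
    dp_ar = (q_av(t) - q_ar) / C_AR                # C dp/dt = Q_AV - Q_AR
    dq_ar = (-R_AR * q_ar + p_ar - p_VEN) / L_AR   # L dQ/dt = -R Q + p_AR - p_VEN
    return [dp_ar, dq_ar]

sol = solve_ivp(rhs, [0.0, 10 * T_hb], [80.0, 0.0], max_step=1e-3)  # about 10 heartbeats
print(sol.y[0].min(), sol.y[0].max())   # rough diastolic / systolic arterial pressure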
Ordinary differential equation system.
Assembling the equations described above, the following system is obtained: formula_33 it holds
formula_34
with formula_35 the final time. The first two equations are related to the volumes of the left atrium and ventricle, respectively. The equations from the third to the sixth are related to the pressures and fluxes of the systemic arterial and venous systems. The last equations are related to the right heart and the pulmonary circulation in an analogous way. The system is completed with initial conditions for each of the unknowns.
From a mathematical point of view, the well-posedness of the problem is a consequence of the Cauchy–Lipschitz theorem, so its solution exists and it is unique. The solution of the system is approximated by means of a numerical method. The numerical simulation has to be computed for more than formula_36 heartbeats (the final time formula_35 depends on the number of heartbeats and the heart rate) to approach the limit cycle of the dynamical system, so that the solution behaves in a similar way to a periodic function emulating the periodicity of the cardiac cycle.
Further developments.
The model described above is a specific lumped parameter model. It can be easily modified adding or removing compartments or circuit components inside any compartment as needed. The equations that govern the new or the modified compartments are given by Kirchhoff's laws as before.
The cardiovascular lumped parameter models can be enhanced adding a lumped parameter model for the respiratory system. As for the cardiovascular system, the respiratory system is split into different compartments modeling, for example, the larynx, the pharynx or the trachea. Moreover, the cardiopulmonary model can be combined with a model for blood oxygenation to study, for example, the levels of blood saturation.
There are several lumped parameter models and the choice of the model depends on the purpose of the work or the research. Complex models can describe different dynamics, but the increase in complexity entails a larger computational cost to solve the system of differential equations.
Some of the 0-D compartments of the lumped parameter model could be substituted by formula_37-dimensional components (formula_38) to describe geometrically a specific component of the cardiovascular system (e.g., the 0-D compartment of the left ventricle can be substituted by a 3-D representation of it). As a consequence, the system of equations will include also partial differential equations to describe the dimensional components and it will entail a larger computational cost to be numerically solved.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "EA_{XX}"
},
{
"math_id": 1,
"text": "EB_{XX}"
},
{
"math_id": 2,
"text": "RA, RV, LA"
},
{
"math_id": 3,
"text": "LV"
},
{
"math_id": 4,
"text": "V0_{XX}"
},
{
"math_id": 5,
"text": "E_{XX}(t) = EB_{XX}+EA_{XX}f_{XX}(t)"
},
{
"math_id": 6,
"text": "f_{XX}(t)"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "1"
},
{
"math_id": 9,
"text": "p_{XX}(t) = E_{XX}(t)(V_{XX}(t)-V0_{XX})"
},
{
"math_id": 10,
"text": "V_{XX}(t)"
},
{
"math_id": 11,
"text": "\\frac{dV_{LA}(t)}{dt} = Q_{VEN}^{PUL}(t)-Q_{MV}(t)"
},
{
"math_id": 12,
"text": "\\frac{dV_{LV}(t)}{dt} = Q_{MV}(t)-Q_{AV}(t)"
},
{
"math_id": 13,
"text": "\\frac{dV_{RA}(t)}{dt} = Q_{VEN}^{SYS}(t)-Q_{TV}(t)"
},
{
"math_id": 14,
"text": "\\frac{dV_{RV}(t)}{dt} = Q_{TV}(t)-Q_{PV}(t)"
},
{
"math_id": 15,
"text": "Q_{MV}(t), Q_{AV}(t), Q_{TV}(t)"
},
{
"math_id": 16,
"text": "Q_{PV}(t)"
},
{
"math_id": 17,
"text": "Q_{VEN}^{PUL}(t)"
},
{
"math_id": 18,
"text": "Q_{VEN}^{SYS}(t)"
},
{
"math_id": 19,
"text": "Q_{MV}(t) = Q_{valve}(p_{LA}(t)-p_{LV}(t)) \\qquad Q_{AV}(t) = Q_{valve}(p_{LV}(t)-p_{AR}^{SYS}(t))"
},
{
"math_id": 20,
"text": "Q_{TV}(t) = Q_{valve}(p_{RA}(t)-p_{RV}(t)) \\qquad Q_{TV}(t) = Q_{valve}(p_{RV}(t)-p_{AR}^{PUL}(t))"
},
{
"math_id": 21,
"text": "p_{AR}^{SYS}(t)"
},
{
"math_id": 22,
"text": "p_{AR}^{PUL}(t)"
},
{
"math_id": 23,
"text": "Q_{valve}(\\Delta p)"
},
{
"math_id": 24,
"text": "Q_{valve} (\\Delta p) =\n\\begin{cases}\n\\frac{\\Delta p}{R_{min}} \\qquad & \\text{if } \\Delta p < 0\\\\\n\\frac{\\Delta p}{R_{max}} \\qquad & \\text{if } \\Delta p \\ge 0\n\\end{cases}"
},
{
"math_id": 25,
"text": "R_{min}"
},
{
"math_id": 26,
"text": "R_{max}"
},
{
"math_id": 27,
"text": "R_{AR}^{SYS}, C_{AR}^{SYS}"
},
{
"math_id": 28,
"text": "L_{AR}^{SYS}"
},
{
"math_id": 29,
"text": "C_{AR}^{SYS}\\frac{dp_{AR}^{SYS}}{dt} = Q_{AV}(t)-Q_{AR}^{SYS}(t)"
},
{
"math_id": 30,
"text": "L_{AR}^{SYS}\\frac{d Q_{AR}^{SYS}(t)}{dt} = -R_{AR}^{SYS}Q_{AR}^{SYS}(t)+p_{AR}^{SYS}(t)-p_{VEN}^{SYS}(t)"
},
{
"math_id": 31,
"text": "Q_{AR}^{SYS}(t)"
},
{
"math_id": 32,
"text": "p_{VEN}^{SYS}(t)"
},
{
"math_id": 33,
"text": "\\forall \\, t \\in [0,T]"
},
{
"math_id": 34,
"text": "\\begin{cases}\n\\frac{dV_{LA}(t)}{dt}=Q_{VEN}^{PUL}(t)-Q_{MV}(t)\\\\\n\\frac{dV_{LV}(t)}{dt}=Q_{MV}(t)-Q_{AV}(t)\\\\\n C_{AR}^{SYS}\\frac{dp_{AR}^{SYS}(t)}{dt}=Q_{AV}(t)-Q_{AR}^{SYS}(t)\\\\\nL_{AR}^{SYS}\\frac{d Q_{AR}^{SYS}(t)}{dt} = -R_{AR}^{SYS}Q_{AR}^{SYS}(t)+p_{AR}^{SYS}(t)-p_{VEN}^{SYS}(t) \\\\\nC_{VEN}^{SYS}\\frac{dp_{VEN}^{SYS}(t)}{dt}=Q_{AR}^{SYS}(t)-Q_{VEN}^{SYS}(t)\\\\\nL_{VEN}^{SYS}\\frac{dQ_{VEN}^{SYS}(t)}{dt}=-R_{VEN}^{SYS}Q_{VEN}^{SYS}(t)+p_{VEN}^{SYS}(t)-p_{RA}(t)\\\\\n\\frac{dV_{RA}(t)}{dt}=Q_{VEN}^{SYS}(t)-Q_{TV}(t)\\\\\n\\frac{dV_{RV}(t)}{dt}=Q_{TV}(t)-Q_{PV}(t)\\\\\n C_{AR}^{PUL}\\frac{dp_{AR}^{PUL}(t)}{dt}=Q_{PV}(t)-Q_{AR}^{PUL}(t)\\\\\nL_{AR}^{PUL}\\frac{dQ_{AR}^{PUL}(t)}{dt}=-R_{AR}^{PUL}Q_{AR}^{PUL}(t)+p_{AR}^{PUL}(t)-p_{VEN}^{PUL}(t)\\\\\nC_{VEN}^{PUL}\\frac{dp_{VEN}^{PUL}(t)}{dt}=Q_{AR}^{PUL}(t)-Q_{VEN}^{PUL}(t)\\\\\nL_{VEN}^{PUL}\\frac{dQ_{VEN}^{PUL}(t)}{dt}=-R_{VEN}^{PUL}Q_{VEN}^{PUL}(t)+p_{VEN}^{PUL}(t)-p_{LA} (t)\n\\end{cases}"
},
{
"math_id": 35,
"text": "T"
},
{
"math_id": 36,
"text": "10"
},
{
"math_id": 37,
"text": "d"
},
{
"math_id": 38,
"text": "d = 1,2,3"
}
]
| https://en.wikipedia.org/wiki?curid=74210833 |
74213354 | Huygens principle of double refraction | Optical principle
Huygens principle of double refraction, named after Dutch physicist Christiaan Huygens, explains the phenomenon of double refraction observed in uniaxial anisotropic material such as calcite. When unpolarized light propagates in such materials (along a direction different from the optical axis), it splits into two different rays, known as ordinary and extraordinary rays. The principle states that every point on the wavefront of birefringent material produces two types of wavefronts or wavelets: spherical wavefronts and ellipsoidal wavefronts. These secondary wavelets, originating from different points, interact and interfere with each other. As a result, the new wavefront is formed by the superposition of these wavelets.
History.
The systematic exploration of light polarization began during the 17th century. In 1669, Rasmus Bartholin made an observation of double refraction in a calcite crystal and documented it in a published work in 1670. Later, in 1690, Huygens identified polarization as a characteristic of light and provided a demonstration using two identical blocks of calcite placed in succession. Each crystal divided an incoming ray of light into two, which Huygens referred to as "regular" and "irregular" (in modern terminology: ordinary and extraordinary). However, if the two crystals were aligned in the same orientation, no further division of the light occurred.
Huygens–Fresnel principle.
While the Huygens' principle of double refraction explains the phenomenon of double refraction in an optically anisotropic medium, the Huygens–Fresnel principle pertains to the propagation of waves in an optically isotropic medium. According to the Huygens–Fresnel principle, each point on a wavefront can be considered a secondary point source of waves, so a new wavefront is formed after the secondary wavelets have traveled for a period equal to one vibration cycle. This new wavefront can be described as an envelope or tangent surface to these secondary wavelets. Understanding and forecasting the classical wave propagation of light is based on the Huygens-Fresnel principle.
Polarization of light.
Electric and magnetic fields that are mutually perpendicular and fluctuating give rise to the transverse electromagnetic wave known as light. Electric and magnetic fields are perpendicular to the propagation direction of the wave. For example, if the wave propagation is in the z-direction, both the electric field and the magnetic field lie in the xy-plane. The electric field points in a specific direction in space since it is a vector. The direction of an electromagnetic wave's electric field vector E is referred to as polarization. If the electric field oscillates in the x-direction, the polarization of the light will be linear, along the x-direction.
Plane wave equation of the light.
The electromagnetic wave equation's sinusoidal solution has the following form: formula_0 where
The wave vector is related to the angular frequency and speed of light "c" by formula_2
where "k" is the wavenumber (the magnitude of the wave vector) and "λ" is the wavelength.
Unpolarized light.
If we were able to observe a light wave originating from an ordinary source and directed toward us, such as the light emitted by an incandescent bulb, we would find that it consists of a mixture of light waves. These waves exhibit electric field components that fluctuate at a rapid pace, nearly matching the optical frequency itself, with a time scale of approximately 10⁻¹⁴ seconds. Consequently, the direction of oscillation of the electric field vector occurs in all possible planes perpendicular to the direction of the light beam. Unpolarized light is a type of light wave where the electric field vector oscillates in multiple planes. Light emitted by the sun, incandescent lamps, or candle flames is considered to be unpolarized.
Types of polarization.
The light wave polarization specifies the form and evolution of the electric field vector's direction at a particular point in space as a function of time (in the plane perpendicular to the propagation direction). There are three possible polarization states for light, depending on how the direction of the formula_3 vector evolves. The first is plane or linear polarization, the second is elliptical polarization, and the third is circular polarization.
The light may also be partially polarized in addition to these. The polarization of light cannot be determined by the human eye on its own. However, some animals and insects have vision that is sensitive to polarization.
Plane linear polarized light.
Light waves that exhibit oscillation in a single plane are referred to as plane-polarized light waves. In such waves, the electric field vector (E) oscillates exclusively within a single plane that is perpendicular to the direction of wave propagation. This type of wave is also called a linearly polarized wave since the orientation of the field vector at any given point in space and time lies along a line within a plane perpendicular to the wave's direction of propagation.
Isotropic and anisotropic materials.
Materials can be classified into two categories based on their isotropy. Materials that are isotropic have the same physical characteristics throughout. In other words, regardless of the direction in which they are measured, their characteristics, such as optical, electrical, and mechanical, stay constant. Gases, liquids, and amorphous solids like glass are instances of isotropic materials. On the other hand, anisotropic materials show various physical characteristics depending on the direction of measurement. Their characteristics are not constant throughout the substance. Crystal structure, molecule orientation, or the presence of preferred axes can all be causes of anisotropy. Crystals, certain polymers, calcite, and numerous minerals are typical examples of anisotropic materials. The physical characteristics of anisotropic materials, such as refractive index, electrical conductivity, and mechanical qualities, can differ depending on the direction of measurement.
Optical axis and types of anisotropic materials.
A frequent notion in the study of anisotropic materials, particularly in the context of optics, is the optical axis. It refers to a particular axis within the material along which certain optical characteristics remain unaltered. To put it another way, light that travels along the optical axis does not experience anisotropic behaviour in the transverse plane.
It is possible to further divide anisotropic materials into two categories: uniaxial anisotropic and biaxial anisotropic materials. One optical axis, also referred to as the extraordinary axis, exists in uniaxially anisotropic materials. In these materials, light propagating along the optical axis experiences the same effects independently of its polarization. The optical plane, also known as the plane of polarization, is perpendicular to the optical axis. Light exhibits birefringence within this plane, which means that the refractive index, and all the phenomena associated with it, depend on the polarization. A common effect that can be observed is the splitting of an incident ray into two rays when propagating in a birefringent medium. Due to the presence of two independent optical axes in biaxial anisotropic materials, light travelling in two different directions will experience different optical characteristics.
Positive and negative uniaxial material.
There are two types of uniaxial material depending on the values of the indices of refraction for the e-ray and o-ray. When the refractive index of the e-ray (ne) is larger than the refractive index of the o-ray (n0), the material is positive uniaxial. On the other hand, when the refractive index of the e-ray (ne) is less than the refractive index of the o-ray (n0), the material is negative uniaxial. Ice and quartz are examples of positive uniaxial materials. Calcite and tourmaline are examples of negative uniaxial materials.
Huygens' explanation of double refraction.
The ordinary ray (o-ray) has a spherical wavefront because the o-ray has a constant refractive index (n0), independent of the propagation direction inside the uniaxial material, and the same velocity in all directions. On the other hand, the extraordinary ray (e-ray) has an ellipsoidal wavefront due to its refractive index, which varies with the propagation direction within the uniaxial material, leading to different velocities in different directions. The two wavefronts come into contact at the points where they intersect with the optical axis.
When unpolarized light is incident on the birefringent material, the o-ray and e-ray will generate new wavefronts. The new wavefront for the o-ray will be tangent to the spherical wavelets, while the new wavefront for the e-ray will be tangent to the ellipsoidal wavelets. Each plane wavefront propagates straight ahead but with a different velocity: V0 for the o-ray and Ve for the e-ray. The direction of the k-vector is always perpendicular to the wavefronts and is calculated from Snell's law. For normal incidence, the o-ray and e-ray have the same k-vector direction. However, the Poynting vector, describing the direction of propagation of optical power, is different for the two rays. The power direction for each ray is determined by connecting the line from the imaginary source on the old wavefront to the intersection point between the new wavefront and the spherical or ellipsoidal wavefront. As a result, the o-ray and e-ray will propagate in different directions with different velocities inside the material. For the e-ray, the angle between the k-vector and the power direction is called the walk-off angle.
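A small Python sketch of the direction-dependent index seen by the e-ray and the corresponding walk-off, using approximate literature values for calcite; the index-ellipsoid relation and the ray-angle relation used below are standard results for uniaxial crystals and are assumptions here, not formulas stated in the text:
import math

n_o, n_e = 1.658, 1.486   # approximate ordinary/extraordinary indices of calcite

def n_eray(theta):
    # Effective e-ray index for a k-vector at angle theta to the optical axis:
    # 1/n(theta)^2 = cos^2(theta)/n_o^2 + sin^2(theta)/n_e^2
    return 1.0 / math.sqrt(math.cos(theta)**2 / n_o**2 + math.sin(theta)**2 / n_e**2)

def walkoff(theta):
    # Walk-off angle between the e-ray Poynting vector and its k-vector,
    # assuming tan(theta_ray) = (n_o/n_e)**2 * tan(theta)
    return math.atan((n_o / n_e)**2 * math.tan(theta)) - theta

theta = math.radians(45.0)
print(n_eray(theta), math.degrees(walkoff(theta)))   # about 1.57 and a walk-off of roughly 6 degrees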
When a light travels through the crystal, these two wave surfaces follow distinct paths within the crystal. Eventually, two refracted rays emerge as a result of this propagation. | [
{
"math_id": 0,
"text": "\\begin{align}\n\\mathbf{E} (\\mathbf{r}, t) &= \\mathbf{E}_0 \\cos(\\omega t - \\mathbf{k} \\cdot \\mathbf{r} + \\phi_0) \\\\\n\\mathbf{B} (\\mathbf{r}, t) &= \\mathbf{B}_0 \\cos(\\omega t - \\mathbf{k} \\cdot \\mathbf{r} + \\phi_0)\n\\end{align}"
},
{
"math_id": 1,
"text": " \\phi_0 "
},
{
"math_id": 2,
"text": " k = | \\mathbf{k} | = { \\omega \\over c } = { 2 \\pi \\over \\lambda } "
},
{
"math_id": 3,
"text": "\\mathbf{E}"
}
]
| https://en.wikipedia.org/wiki?curid=74213354 |
7421397 | Benjamin Graham formula | Formula for the valuation of growth stocks
The Benjamin Graham formula is a formula for the valuation of growth stocks.
It was proposed by investor and professor of Columbia University, Benjamin Graham - often referred to as the "father of value investing".
Published in his book, "The Intelligent Investor", Graham devised the formula for lay investors to help them with valuing growth stocks, in vogue at the time of the formula's publication.
Graham cautioned here that the formula was not appropriate for companies with a "below-par" debt position: "My advice to analysts would be to limit your appraisals to enterprises of investment quality, excluding from that category such as do not meet specific criteria of financial strength".
Formula calculation.
In Graham's words: "Our study of the various methods has led us to suggest a foreshortened and quite simple formula for the evaluation of growth stocks, which is intended to produce figures fairly close to those resulting from the more refined mathematical calculations."
The formula as described by Graham originally in the 1962 edition of "Security Analysis", and then again in the 1973 edition of "The Intelligent Investor", is as follows:
formula_0
formula_1 = the value expected from the growth formulas over the next 7 to 10 years
formula_2 = trailing twelve months earnings per share
formula_3 = P/E base for a no-growth company
formula_4 = reasonably expected 7 to 10 year growth rate
Revised formula.
Graham later revised his formula based on the belief that the greatest contributing factor to stock values (and prices) over the past decade had been interest rates. In 1974, he restated it as follows:
The Graham formula proposes to calculate a company’s intrinsic value formula_1 as:
formula_5
formula_1 = the value expected from the growth formulas over the next 7 to 10 years
formula_2 = the company’s last 12-month earnings per share
formula_3 = P/E base for a no-growth company
formula_4 = reasonably expected 7 to 10 year growth rate of EPS
formula_6 = the average yield of AAA corporate bonds in 1962 (Graham did not specify the duration of the bonds, though it has been asserted that he used 20 year AAA bonds as his benchmark for this variable)
formula_7 = the current yield on AAA corporate bonds.
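A minimal Python sketch of both versions of the formula (the input numbers are purely illustrative and are not investment advice):
def graham_value(eps, g):
    # Original formula: V* = EPS * (8.5 + 2 g)
    return eps * (8.5 + 2.0 * g)

def graham_value_revised(eps, g, y):
    # 1974 revision: scaled by 4.4 / Y, with Y the current AAA corporate bond yield (%)
    return eps * (8.5 + 2.0 * g) * 4.4 / y

print(graham_value(3.00, 7))               # EPS 3.00, expected growth 7 %: 67.5
print(graham_value_revised(3.00, 7, 5.5))  # same inputs with Y = 5.5 %: 54.0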
Application.
In "The Intelligent Investor", Graham was careful to include a footnote that this formula was not being recommended for use by investors — rather, it was to model the expected results of other growth formulas popular at the time.
However, a misconception arose that he was using this formula in his daily work due to a later reprinted edition's decision to move footnotes to the back of the book, where fewer readers searched for them. Readers who continued on in the chapter would have found Graham stating:
"Warning": This material is supplied for illustrative purposes only, and because of the inescapable necessity of security analysis to project the future growth rate for most companies studied. Let the reader not be mislead into thinking that such projections have any high degree of reliability, or, conversely, that future prices can be counted on to behave accordingly as the prophecies are realized, surpassed, or disappointed.
The movement of the footnote in the reprint has led to an assortment of advisers and investors recommending this formula (or revised versions of it) to the public at large — a practice that continues to this day. Benjamin Clark, the founder of the blog and investment service ModernGraham, acknowledges the footnote and argues that "I consider the footnote to be more of a reminder from Graham that the calculation of an intrinsic value is not an exact science and cannot be done with 100% certainty." Clark further explains that the formula "is to be used for estimating intrinsic value within a margin of safety which will accommodate the possibility of error in calculation."
Graham also cautioned that his calculations were not perfect, even in the time period for which it was published, noting in the 1973 edition of "The Intelligent Investor": "We should have added caution somewhat as follows: The valuations of expected high-growth stocks are necessarily on the low side, if we were to assume these growth rates will actually be realized." He continued on to point out that if a stock were to be assumed to grow forever, its value would be infinite.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V^* = \\mathrm{EPS} \\times (8.5 + 2g) "
},
{
"math_id": 1,
"text": "V^*"
},
{
"math_id": 2,
"text": "EPS"
},
{
"math_id": 3,
"text": "8.5"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "V^* = \\cfrac{\\mathrm{EPS} \\times (8.5 + 2g) \\times 4.4}{Y}"
},
{
"math_id": 6,
"text": "4.4"
},
{
"math_id": 7,
"text": "Y"
}
]
| https://en.wikipedia.org/wiki?curid=7421397 |
74216190 | Zhu algebra | Invariant of vertex algebra
In mathematics, the Zhu algebra and the closely related C2-algebra, introduced by Yongchang Zhu in his PhD thesis, are two associative algebras canonically constructed from a given vertex operator algebra. Many important representation theoretic properties of the vertex algebra are logically related to properties of its Zhu algebra or C2-algebra.
Definitions.
Let formula_0 be a graded vertex operator algebra with formula_1 and let formula_2 be the vertex operator associated to formula_3 Define formula_4 to be the subspace spanned by elements of the form formula_5 for formula_6 An element formula_7 is homogeneous with formula_8 if formula_9 There are two binary operations on formula_10 defined by formula_11 for homogeneous elements and extended linearly to all of formula_10. Define formula_12 to be the span of all elements formula_13.
The algebra formula_14 with the binary operation induced by formula_15 is an associative algebra called the "Zhu algebra" of formula_10.
The algebra formula_16 with multiplication formula_17 is called the "C2-algebra" of formula_10.
Associated variety.
Because the C2-algebra formula_19 is a commutative algebra, it may be studied using the language of algebraic geometry. The "associated scheme" formula_24 and "associated variety" formula_25 of formula_10 are defined to be formula_26 which are an affine scheme and an affine algebraic variety, respectively. Moreover, since formula_27 acts as a derivation on formula_19, there is an action of formula_28 on the associated scheme making "formula_24" a conical Poisson scheme and formula_25 a conical Poisson variety. In this language, C2-cofiniteness is equivalent to the property that formula_25 is a point.
Example: If formula_29 is the affine W-algebra associated to affine Lie algebra formula_30 at level formula_31 and nilpotent element formula_32 then formula_33is the Slodowy slice through formula_32.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = \\bigoplus_{n \\ge 0} V_{(n)}"
},
{
"math_id": 1,
"text": "V_{(0)} = \\mathbb{C}\\mathbf{1}"
},
{
"math_id": 2,
"text": "Y(a, z) = \\sum_{n \\in \\Z} a_n z^{-n-1}"
},
{
"math_id": 3,
"text": "a \\in V. "
},
{
"math_id": 4,
"text": "C_2(V)\\subset V"
},
{
"math_id": 5,
"text": "a_{-2} b"
},
{
"math_id": 6,
"text": "a,b \\in V. "
},
{
"math_id": 7,
"text": "a \\in V"
},
{
"math_id": 8,
"text": "\\operatorname{wt} a = n"
},
{
"math_id": 9,
"text": "a \\in V_{(n)}."
},
{
"math_id": 10,
"text": "V"
},
{
"math_id": 11,
"text": "a * b = \\sum_{i \\ge 0} \\binom{\\operatorname{wt} a}{i} a_{i-1}b, ~~~~~ a \\circ b = \\sum_{i \\ge 0} \\binom{\\operatorname{wt}a}{i} a_{i-2} b"
},
{
"math_id": 12,
"text": "O(V)\\subset V"
},
{
"math_id": 13,
"text": "a\\circ b"
},
{
"math_id": 14,
"text": "A(V) := V/O(V)"
},
{
"math_id": 15,
"text": "*"
},
{
"math_id": 16,
"text": "R_V := V/C_2(V)"
},
{
"math_id": 17,
"text": "a\\cdot b = a_{-1}b \\mod C_2(V)"
},
{
"math_id": 18,
"text": "\\{a,b\\} = a_{0}b\\mod C_2(V)"
},
{
"math_id": 19,
"text": "R_V"
},
{
"math_id": 20,
"text": "A(V) = \\bigcup_{p \\ge 0} A_p(V)"
},
{
"math_id": 21,
"text": "A_p(V) = \\operatorname{im}(\\oplus_{j = 0}^p V_p\\to A(V))"
},
{
"math_id": 22,
"text": "A_p(V) \\ast A_q(V) \\subset A_{p+q}(V)."
},
{
"math_id": 23,
"text": "R_V \\to \\operatorname{gr}(A(V))"
},
{
"math_id": 24,
"text": "\\widetilde{X}_V"
},
{
"math_id": 25,
"text": "X_V"
},
{
"math_id": 26,
"text": "\\widetilde{X}_V := \\operatorname{Spec}(R_V), ~~~ X_V := (\\widetilde{X}_V)_{\\mathrm{red}}"
},
{
"math_id": 27,
"text": "L(-1)"
},
{
"math_id": 28,
"text": "\\mathbb{C}^\\ast"
},
{
"math_id": 29,
"text": "W^k(\\widehat{\\mathfrak g}, f)"
},
{
"math_id": 30,
"text": "\\widehat{\\mathfrak g}"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "f"
},
{
"math_id": 33,
"text": "\\widetilde{X}_{W^k(\\widehat{\\mathfrak g}, f)} = \\mathcal{S}_f"
}
]
| https://en.wikipedia.org/wiki?curid=74216190 |
7422265 | Lehmer mean | Mathematical formula for deriving a mean
In mathematics, the Lehmer mean of a tuple formula_0 of positive real numbers, named after Derrick Henry Lehmer, is defined as:
formula_1
The weighted Lehmer mean with respect to a tuple formula_2 of positive weights is defined as:
formula_3
The Lehmer mean is an alternative to power means
for interpolating between minimum and maximum via arithmetic mean and harmonic mean.
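As a quick illustration (not part of the original article), the definition above translates directly into Haskell; the names lehmerMean and weightedLehmerMean are chosen here for the sketch. With positive inputs, lehmerMean 1 gives the arithmetic mean, lehmerMean 0 the harmonic mean, and lehmerMean 2 the contraharmonic mean.
-- Unweighted Lehmer mean of a list of positive numbers.
lehmerMean :: Floating a => a -> [a] -> a
lehmerMean p xs = sum (map (** p) xs) / sum (map (** (p - 1)) xs)

-- Weighted Lehmer mean with one weight per sample.
weightedLehmerMean :: Floating a => a -> [a] -> [a] -> a
weightedLehmerMean p ws xs =
  sum (zipWith (\w x -> w * x ** p) ws xs)
    / sum (zipWith (\w x -> w * x ** (p - 1)) ws xs)

-- lehmerMean 1 [1, 2, 6]   -- 3.0   (arithmetic mean)
-- lehmerMean 0 [1, 2, 6]   -- 1.8   (harmonic mean)
-- lehmerMean 2 [1, 2, 6]   -- ~4.56 (contraharmonic mean)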
Properties.
The derivative of formula_4 is non-negative
formula_5
thus this function is monotonic and the inequality
formula_6
holds.
The derivative of the weighted Lehmer mean is:
formula_7
Applications.
Signal processing.
Like a power mean, a Lehmer mean serves as a non-linear moving average which is shifted towards small signal values for small formula_19 and emphasizes big signal values for big formula_19. Given an efficient implementation of a moving arithmetic mean called smooth, you can implement a moving Lehmer mean according to the following Haskell code.
lehmerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a]
lehmerSmooth smooth p xs =
    zipWith (/)
            (smooth (map (**p) xs))
            (smooth (map (**(p-1)) xs))
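The choice of the moving arithmetic mean is left open; for a self-contained test, a naive k-point moving average can be plugged in (movingAvg is a made-up helper, not part of the article):
-- Naive k-point moving arithmetic mean, for illustration only.
movingAvg :: Fractional a => Int -> [a] -> [a]
movingAvg k xs
  | length xs < k = []
  | otherwise     = (sum (take k xs) / fromIntegral k) : movingAvg k (tail xs)

-- Contraharmonic (p = 2) moving Lehmer mean of a noisy signal:
-- lehmerSmooth (movingAvg 3) 2 [1, 2, 50, 2, 1, 2]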
Gonzalez and Woods call this a "contraharmonic mean filter" described for varying values of "p" (however, as above, the contraharmonic mean can refer to the specific case formula_20). Their convention is to substitute "p" with the order of the filter "Q":
formula_21
"Q"=0 is the arithmetic mean. Positive "Q" can reduce pepper noise and negative "Q" can reduce salt noise.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "L_p(\\mathbf{x}) = \\frac{\\sum_{k=1}^n x_k^p}{\\sum_{k=1}^n x_k^{p-1}}."
},
{
"math_id": 2,
"text": "w"
},
{
"math_id": 3,
"text": "L_{p,w}(\\mathbf{x}) = \\frac{\\sum_{k=1}^n w_k\\cdot x_k^p}{\\sum_{k=1}^n w_k\\cdot x_k^{p-1}}."
},
{
"math_id": 4,
"text": "p \\mapsto L_p(\\mathbf{x})"
},
{
"math_id": 5,
"text": "\n \\frac{\\partial}{\\partial p} L_p(\\mathbf{x}) =\n \\frac\n {\\left(\\sum_{j=1}^n \\sum_{k=j+1}^n\n \\left[x_j - x_k\\right] \\cdot \\left[\\ln(x_j) - \\ln(x_k)\\right] \\cdot \\left[x_j \\cdot x_k\\right]^{p-1}\\right)}\n {\\left(\\sum_{k=1}^n x_k^{p-1}\\right)^2},\n"
},
{
"math_id": 6,
"text": "p \\le q \\Longrightarrow L_p(\\mathbf{x}) \\le L_q(\\mathbf{x})"
},
{
"math_id": 7,
"text": "\n \\frac{\\partial L_{p,w}(\\mathbf{x})}{\\partial p} =\n \\frac{(\\sum w x^{p-1})(\\sum wx^p\\ln{x}) - (\\sum wx^p)(\\sum wx^{p-1}\\ln{x})}{(\\sum wx^{p-1})^2}\n"
},
{
"math_id": 8,
"text": "\\lim_{p \\to -\\infty} L_p(\\mathbf{x})"
},
{
"math_id": 9,
"text": "\\mathbf{x}"
},
{
"math_id": 10,
"text": "L_0(\\mathbf{x})"
},
{
"math_id": 11,
"text": "L_\\frac{1}{2}\\left((x_1, x_2)\\right)"
},
{
"math_id": 12,
"text": "x_1"
},
{
"math_id": 13,
"text": "x_2"
},
{
"math_id": 14,
"text": "L_1(\\mathbf{x})"
},
{
"math_id": 15,
"text": "L_2(\\mathbf{x})"
},
{
"math_id": 16,
"text": "\\lim_{p \\to \\infty} L_p(\\mathbf{x})"
},
{
"math_id": 17,
"text": "x_1,\\dots,x_k"
},
{
"math_id": 18,
"text": "L_p(\\mathbf{x}) = x_1\\cdot\\frac{k + \\left(\\frac{x_{k+1}}{x_1}\\right)^p + \\cdots + \\left(\\frac{x_n}{x_1}\\right)^p}{k + \\left(\\frac{x_{k+1}}{x_1}\\right)^{p-1} + \\cdots + \\left(\\frac{x_n}{x_1}\\right)^{p-1}}"
},
{
"math_id": 19,
"text": "p"
},
{
"math_id": 20,
"text": "p = 2"
},
{
"math_id": 21,
"text": "f(x) = \\frac{\\sum_{k=1}^n x_k^{Q+1}}{\\sum_{k=1}^n x_k^Q}."
}
]
| https://en.wikipedia.org/wiki?curid=7422265 |
742238 | Metre per second squared | SI derived unit of acceleration
<templatestyles src="Template:Infobox/styles-images.css" />
The metre per second squared is the unit of acceleration in the International System of Units (SI). As a derived unit, it is composed from the SI base units of length, the metre, and time, the second. Its symbol is written in several forms as m/s², m·s⁻² or m s⁻², formula_0, or less commonly, as (m/s)/s.
As acceleration, the unit is interpreted physically as change in velocity or speed per time interval, i.e. metre per second per second and is treated as a vector quantity.
Example.
When an object experiences a constant acceleration of one metre per second squared (1 m/s²) from a state of rest, it achieves the speed of 5 m/s after 5 seconds and 10 m/s after 10 seconds. The average acceleration "a" can be calculated by dividing the speed "v" (m/s) by the time "t" (s), so the average acceleration in the first example would be calculated: formula_1.
Related units.
Newton's second law states that force equals mass multiplied by acceleration.
The unit of force is the newton (N), and mass has the SI unit kilogram (kg). One newton equals one kilogram metre per second squared. Therefore, the unit metre per second squared is equivalent to newton per kilogram, N·kg⁻¹, or N/kg.
Thus, the Earth's gravitational field (near ground level) can be quoted as 9.8 metres per second squared, or the equivalent 9.8 N/kg.
Acceleration can be measured in ratios to gravity, such as g-force, and peak ground acceleration in earthquakes.
Unicode character.
The "metre per second squared" symbol is encoded by Unicode at code point . This is for compatibility with East Asian encodings and not intended to be used in new documents.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\tfrac\\operatorname{m}{\\operatorname{s}^2}}"
},
{
"math_id": 1,
"text": "a = \\frac{\\Delta v}{\\Delta t} = \\frac{5\\text{ m/s}}{5\\text{ s}} = 1\\text{ (m/s)/s} = 1\\text{ m/s}^2"
}
]
| https://en.wikipedia.org/wiki?curid=742238 |
742288 | Faraday's law of induction | Basic law of electromagnetism
Faraday's law of induction (or simply Faraday's law) is a law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf). This phenomenon, known as electromagnetic induction, is the fundamental operating principle of transformers, inductors, and many types of electric motors, generators and solenoids.
The Maxwell–Faraday equation (listed as one of Maxwell's equations) describes the fact that a spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field, while Faraday's law states that there is emf (electromotive force, defined as electromagnetic work done on a unit charge when it has traveled one round of a conductive loop) on a conductive loop when the magnetic flux through the surface enclosed by the loop varies in time.
Faraday's law had been discovered and one aspect of it (transformer emf) was formulated as the Maxwell–Faraday equation later. The equation of Faraday's law can be derived by the Maxwell–Faraday equation (describing transformer emf) and the Lorentz force (describing motional emf). The integral form of the Maxwell–Faraday equation describes only the transformer emf, while the equation of Faraday's law describes both the transformer emf and the motional emf.
History.
Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Faraday was the first to publish the results of his experiments.
Faraday's notebook on August 29, 1831 describes an experimental demonstration of electromagnetic induction (see figure) that wraps two wires around opposite sides of an iron ring (like a modern toroidal transformer). His assessment of newly-discovered properties of electromagnets suggested that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Indeed, a galvanometer's needle measured a transient current (which he called a "wave of electricity") on the right side's wire when he connected "or" disconnected the left side's wire to a battery. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. His notebook entry also noted that fewer wraps for the battery side resulted in a greater disturbance of the galvanometer's needle.
Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who in 1861–62 used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law even though it is different from the original version of Faraday's law, and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
Lenz's law, formulated by Emil Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced emf and current resulting from electromagnetic induction (elaborated upon in the examples below).
According to Albert Einstein, much of the groundwork for the discovery of his special relativity theory was laid by this law of induction, presented by Faraday in 1834.
Faraday's law.
The most widespread version of Faraday's law states:
<templatestyles src="Template:Blockquote/styles.css" />
Mathematical statement.
For a loop of wire in a magnetic field, the magnetic flux Φ"B" is defined for any surface Σ whose boundary is the given loop. Since the wire loop may be moving, we write Σ("t") for the surface. The magnetic flux is the surface integral:
formula_0
where dA is an element of area vector of the moving surface Σ("t"), B is the magnetic field, and B · dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.
When the flux changes—because B changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an emf, defined as the energy available from a unit charge that has traveled once around the wire loop. (Although some sources state the definition differently, this expression was chosen for compatibility with the equations of special relativity.) Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads.
Faraday's law states that the emf is also given by the rate of change of the magnetic flux:
formula_1
where formula_2 is the electromotive force (emf) and Φ"B" is the magnetic flux.
The direction of the electromotive force is given by Lenz's law.
The laws of induction of electric currents in mathematical form was established by Franz Ernst Neumann in 1845.
Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula.
It is possible to find out the direction of the electromotive force (emf) directly from Faraday's law, without invoking Lenz's law. A left-hand rule helps to do that, as follows:
For a tightly wound coil of wire, composed of N identical turns, each with the same Φ"B", Faraday's law of induction states that
formula_3
where N is the number of turns of wire and Φ"B" is the magnetic flux through a single loop.
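As a numerical illustration (ours, not from the article), the snippet below estimates the coil emf by a finite-difference version of this formula; for a uniform field normal to a planar loop, the flux through one turn is simply the field strength times the loop area.
-- Finite-difference estimate of the emf of an N-turn coil:
-- emf ~ -N * (phi2 - phi1) / dt, where phi is the flux through one turn.
coilEmf :: Double -> Double -> Double -> Double -> Double
coilEmf n phi1 phi2 dt = -n * (phi2 - phi1) / dt

-- Example: 100 turns, loop area 0.01 m^2, uniform field normal to the loop
-- ramping from 0.2 T to 0.5 T in 0.1 s (flux of one turn = B * A):
-- coilEmf 100 (0.2 * 0.01) (0.5 * 0.01) 0.1   -- -3.0 (volts)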
Maxwell–Faraday equation.
The Maxwell–Faraday equation states that a time-varying magnetic field always accompanies a spatially varying (also possibly time-varying), non-conservative electric field, and vice versa. The Maxwell–Faraday equation is
formula_4
(in SI units) where ∇ × is the curl operator and again E(r, "t") is the electric field and B(r, "t") is the magnetic field. These fields can generally be functions of position r and time t.
The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin–Stokes theorem, thereby reproducing Faraday's law:
formula_5
where, as indicated in the figure, Σ is a surface bounded by the closed contour ∂Σ, dl is an infinitesimal vector element of the contour ∂Σ, and dA is an infinitesimal vector element of surface Σ. Its direction is orthogonal to that surface patch, and its magnitude is the area of an infinitesimal patch of surface.
Both dl and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. For a planar surface Σ, a positive path element dl of curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ.
The line integral around ∂Σ is called circulation. A nonzero circulation of E is different from the behavior of the electric field generated by static charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem.
The integral equation is true for "any" path ∂Σ through space, and any surface Σ for which that path is a boundary.
If the surface Σ is not changing in time, the equation can be rewritten:
formula_6
The surface integral at the right-hand side is the explicit expression for the magnetic flux Φ"B" through Σ.
The electric vector field induced by a changing magnetic flux, the solenoidal component of the overall electric field, can be approximated in the non-relativistic limit by the volume integral equation
formula_7
Proof.
The four Maxwell's equations (including the Maxwell–Faraday equation), along with Lorentz force law, are a sufficient foundation to derive "everything" in classical electromagnetism. Therefore, it is possible to "prove" Faraday's law starting with these equations.
The starting point is the time-derivative of flux through an arbitrary surface Σ (that can be moved or deformed) in space:
formula_8
(by definition). This total time derivative can be evaluated and simplified with the help of the Maxwell–Faraday equation and some vector identities; the details are in the box below:
The result is:
formula_9
where ∂Σ is the boundary (loop) of the surface Σ, and vl is the velocity of a part of the boundary.
In the case of a conductive loop, emf (Electromotive Force) is the electromagnetic work done on a unit charge when it has traveled around the loop once, and this work is done by the Lorentz force. Therefore, emf is expressed as
formula_10
where formula_2 is emf and v is the unit charge velocity.
In a macroscopic view, for charges on a segment of the loop, v consists on average of two components; one is the velocity of the charge along the segment vt, and the other is the velocity of the segment vl (the loop is deformed or moved). vt does not contribute to the work done on the charge since the direction of vt is the same as the direction of formula_11. Mathematically,
formula_12
since formula_13 is perpendicular to formula_11, as formula_14 and formula_11 are along the same direction. Now we can see that, for the conductive loop, the emf equals the time derivative of the magnetic flux through the loop except for the sign. Therefore, we now reach the equation of Faraday's law (for the conductive loop) as
formula_15
where formula_16. Breaking this integral apart, formula_17 is the transformer emf (due to a time-varying magnetic field) and formula_18 is the motional emf (due to the magnetic Lorentz force on charges caused by the motion or deformation of the loop in the magnetic field).
Exceptions.
It is tempting to generalize Faraday's law to state: "If "∂Σ" is any arbitrary closed loop in space whatsoever, then the total time derivative of magnetic flux through "Σ" equals the emf around "∂Σ"." This statement, however, is not always true, and not merely for the obvious reason that emf is undefined in empty space when no conductor is present. As noted in the previous section, Faraday's law is not guaranteed to work unless the velocity of the abstract curve ∂Σ matches the actual velocity of the material conducting the electricity. The two examples illustrated below show that one often obtains incorrect results when the motion of ∂Σ is divorced from the motion of the material.
One can analyze examples like these by taking care that the path ∂Σ moves with the same velocity as the material. Alternatively, one can always correctly calculate the emf by combining Lorentz force law with the Maxwell–Faraday equation:
formula_19
where "it is very important to notice that (1) [v"m"] is the velocity of the conductor ... not the velocity of the path element dl and (2) in general, the partial derivative with respect to time cannot be moved outside the integral since the area is a function of time."
Faraday's law and relativity.
Two phenomena.
Faraday's law is a single equation describing two different phenomena: the "motional emf" generated by a magnetic force on a moving wire (see the Lorentz force), and the "transformer emf" generated by an electric force due to a changing magnetic field (described by the Maxwell–Faraday equation).
James Clerk Maxwell drew attention to this fact in his 1861 paper "On Physical Lines of Force". In the latter half of Part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena.
A reference to these two aspects of electromagnetic induction is made in some modern textbooks. As Richard Feynman states:
<templatestyles src="Template:Blockquote/styles.css" />Richard P. Feynman, "The Feynman Lectures on Physics"
Explanation based on four-dimensional formalism.
In the general case, explaining the appearance of the "motional emf" by the action of the magnetic force on the charges in the moving wire, or in a circuit whose area is changing, is unsatisfactory. In fact, the charges in the wire or in the circuit could be completely absent; would the electromagnetic induction effect then disappear? This situation is analyzed in the article, in which, when the integral equations of the electromagnetic field are written in a four-dimensional covariant form, the total time derivative of the magnetic flux through the circuit appears in Faraday's law instead of the partial time derivative. Thus, electromagnetic induction appears either when the magnetic field changes over time or when the area of the circuit changes. From the physical point of view, it is better to speak not of the induction emf, but of the induced electric field strength formula_20 that occurs in the circuit when the magnetic flux changes. In this case, the contribution to formula_21 from the change in the magnetic field is made through the term formula_22, where formula_23 is the vector potential. If the circuit area is changing while the magnetic field is constant, then some part of the circuit is inevitably moving, and the electric field formula_21 emerges in this part of the circuit in the comoving reference frame K′ as a result of the Lorentz transformation of the magnetic field formula_24 present in the stationary reference frame K, which passes through the circuit. The presence of the field formula_21 in K′ is considered a result of the induction effect in the moving circuit, regardless of whether charges are present in the circuit or not. In the conducting circuit, the field formula_21 causes motion of the charges. In the reference frame K, it appears as the emf of induction formula_25, whose gradient in the form formula_26, taken along the circuit, seems to generate the field formula_21.
Einstein's view.
Reflection on this apparent dichotomy was one of the principal paths that led Albert Einstein to develop special relativity:
<templatestyles src="Template:Blockquote/styles.css" />It is known that Maxwell's electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor.
The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated.
But if the magnet is stationary and the conductor in motion, no electric field arises in the neighbourhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise—assuming equality of relative motion in the two cases discussed—to electric currents of the same path and intensity as those produced by the electric forces in the former case.
Examples of this sort, together with unsuccessful attempts to discover any motion of the earth relative to the "light medium," suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Phi_B = \\iint_{\\Sigma(t)} \\mathbf{B}(t) \\cdot \\mathrm{d} \\mathbf{A}\\, , "
},
{
"math_id": 1,
"text": "\\mathcal{E} = -\\frac{\\mathrm{d}\\Phi_B}{\\mathrm{d}t}, "
},
{
"math_id": 2,
"text": "\\mathcal{E}"
},
{
"math_id": 3,
"text": " \\mathcal{E} = -N \\frac{\\mathrm{d}\\Phi_B}{\\mathrm{d}t} "
},
{
"math_id": 4,
"text": "\\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}} {\\partial t}"
},
{
"math_id": 5,
"text": " \\oint_{\\partial \\Sigma} \\mathbf{E} \\cdot \\mathrm{d}\\mathbf{l} = - \\int_\\Sigma \\frac{\\partial \\mathbf{B}}{\\partial t} \\cdot \\mathrm{d}\\mathbf{A} "
},
{
"math_id": 6,
"text": " \\oint_{\\partial \\Sigma} \\mathbf{E} \\cdot \\mathrm{d}\\mathbf{l} = - \\frac{\\mathrm{d}}{\\mathrm{d}t} \\int_{\\Sigma} \\mathbf{B} \\cdot \\mathrm{d}\\mathbf{A}. "
},
{
"math_id": 7,
"text": " \\mathbf E_s (\\mathbf r,t) \\approx -\\frac{1}{4\\pi}\\iiint_V \\ \\frac{\\left(\\frac{\\partial \\mathbf{B}(\\mathbf{r}',t)}{\\partial t} \\right) \\times \\left(\\mathbf{r}-\\mathbf{r}' \\right) }{|\\mathbf {r} - \\mathbf{r}'|^3} d^3\\mathbf{r'}"
},
{
"math_id": 8,
"text": "\\frac{\\mathrm{d}\\Phi_B}{\\mathrm{d}t} = \\frac{\\mathrm{d}}{\\mathrm{d}t}\\int_{\\Sigma(t)} \\mathbf{B}(t) \\cdot \\mathrm{d}\\mathbf{A}"
},
{
"math_id": 9,
"text": "\\frac{\\mathrm{d}\\Phi_B}{\\mathrm{d}t} = - \\oint_{\\partial \\Sigma} \\left( \\mathbf{E} + \\mathbf{v}_{\\mathbf{l}} \\times \\mathbf{B} \\right) \\cdot \\mathrm{d}\\mathbf{l}."
},
{
"math_id": 10,
"text": "\\mathcal{E} = \\oint \\left(\\mathbf{E} + \\mathbf{v}\\times\\mathbf{B}\\right) \\cdot \\mathrm{d}\\mathbf{l}"
},
{
"math_id": 11,
"text": "\\mathrm{d}\\mathbf{l}"
},
{
"math_id": 12,
"text": "(\\mathbf{v}\\times \\mathbf{B})\\cdot \\mathrm{d}\\mathbf{l} = ((\\mathbf{v}_t + \\mathbf{v}_l) \\times \\mathbf{B}) \\cdot \\mathrm{d}\\mathbf{l}=(\\mathbf{v}_t\\times \\mathbf{B}+\\mathbf{v}_l\\times \\mathbf{B})\\cdot \\mathrm{d}\\mathbf{l} = (\\mathbf{v}_l\\times \\mathbf{B})\\cdot \\mathrm{d}\\mathbf{l}"
},
{
"math_id": 13,
"text": "(\\mathbf{v}_t\\times \\mathbf{B})"
},
{
"math_id": 14,
"text": "\\mathbf{v}_t"
},
{
"math_id": 15,
"text": "\\frac{\\mathrm{d}\\Phi_B}{\\mathrm{d}t} = -\\mathcal{E}"
},
{
"math_id": 16,
"text": "\\mathcal{E} = \\oint \\left(\\mathbf{E} + \\mathbf{v}\\times\\mathbf{B}\\right) \\cdot \\mathrm{d}\\mathbf{l}"
},
{
"math_id": 17,
"text": "\\oint\\mathbf{E}\\cdot\\mathrm{d}\\mathbf{l}"
},
{
"math_id": 18,
"text": "\\oint \\left(\\mathbf{v}\\times\\mathbf{B}\\right) \\cdot \\mathrm{d}\\mathbf{l} = \\oint \\left(\\mathbf{v}_l\\times\\mathbf{B}\\right) \\cdot \\mathrm{d}\\mathbf{l}"
},
{
"math_id": 19,
"text": "\\mathcal{E} = \\int_{\\partial \\Sigma} (\\mathbf{E} + \\mathbf{v}_m \\times \\mathbf{B}) \\cdot \\mathrm{d}\\mathbf{l} = -\\int_\\Sigma \\frac{\\partial \\mathbf{B}}{\\partial t} \\cdot \\mathrm{d}\\Sigma + \\oint_{\\partial \\Sigma} (\\mathbf{v}_m\\times\\mathbf{B}) \\cdot \\mathrm{d}\\mathbf{l}"
},
{
"math_id": 20,
"text": " \\mathbf E = - \\nabla \\mathcal{E} - \\frac{ \\partial \\mathbf A}{ \\partial t}"
},
{
"math_id": 21,
"text": " \\mathbf E"
},
{
"math_id": 22,
"text": " - \\frac{ \\partial \\mathbf A}{ \\partial t}"
},
{
"math_id": 23,
"text": " \\mathbf A"
},
{
"math_id": 24,
"text": " \\mathbf B"
},
{
"math_id": 25,
"text": " \\mathcal{E} "
},
{
"math_id": 26,
"text": " - \\nabla \\mathcal{E} "
}
]
| https://en.wikipedia.org/wiki?curid=742288 |
742319 | Faraday's laws of electrolysis | Physical laws of electrochemistry
Faraday's laws of electrolysis are quantitative relationships based on the electrochemical research published by Michael Faraday in 1833.
First law.
Michael Faraday reported that the mass (m) of a substance deposited or liberated at an electrode is directly proportional to the charge (Q; SI units are ampere seconds or coulombs).
formula_0
Here, the constant of proportionality, Z, is called the electro-chemical equivalent (ECE) of the substance. Thus, the ECE can be defined as the mass of the substance deposited or liberated per unit charge.
Second law.
Faraday discovered that when the same amount of electric current is passed through different electrolytes connected in series, the masses of the substances deposited or liberated at the electrodes are directly proportional to their respective chemical equivalent/equivalent weight (E). This turns out to be the molar mass (M) divided by the valence (v)
formula_1
Derivation.
A monovalent ion requires 1 electron for discharge, a divalent ion requires 2 electrons for discharge and so on. Thus, if x electrons flow, formula_2 atoms are discharged.
Thus, the mass m discharged is
formula_3
where x is the number of electrons that flow, M is the molar mass of the substance, v is the valence of its ions, N_A is the Avogadro constant, Q = xe is the total electric charge, e is the elementary charge, and F = eN_A is the Faraday constant.
Mathematical form.
Faraday's laws can be summarized by
formula_4
where M is the molar mass of the substance (usually given in SI units of grams per mole) and v is the valency of the ions.
For Faraday's first law, M, F, v are constants; thus, the larger the value of Q, the larger m will be.
For Faraday's second law, Q, F, v are constants; thus, the larger the value of formula_5 (equivalent weight), the larger m will be.
In the simple case of constant-current electrolysis, "Q" = "It", leading to
formula_6
and then to
formula_7
where n is the amount of substance ("number of moles") liberated (formula_8), I is the constant current, t is the total time it is applied, F is the Faraday constant, and v is the valence of the ions.
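As a rough numerical sketch (ours, not from the article), the constant-current formula can be evaluated directly; for instance, plating copper (M about 63.5 g/mol, v = 2) with 2 A for one hour deposits roughly 2.37 g.
-- Mass in grams deposited by a constant current: m = I t M / (F v).
faradayConst :: Double
faradayConst = 96485.332            -- Faraday constant, C/mol

massDeposited :: Double -> Double -> Double -> Double -> Double
massDeposited current seconds molarMass valence =
  current * seconds * molarMass / (faradayConst * valence)

-- Example: copper (M ~ 63.5 g/mol, v = 2), 2 A for 3600 s:
-- massDeposited 2 3600 63.5 2   -- ~2.37 (grams)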
For the case of an alloy whose constituents have different valencies, we have
formula_9
where wi represents the mass fraction of the i-th element.
In the more complicated case of a variable electric current, the total charge Q is the electric current "I"("τ") integrated over time τ:
formula_10
Here t is the "total" electrolysis time. | [
{
"math_id": 0,
"text": "m \\propto Q \\quad \\implies \\quad \\frac{m}{Q} = Z"
},
{
"math_id": 1,
"text": "\\begin{align} \n& m \\propto E; \\quad E = \\frac{\\text{molar mass}}{\\text{valence}} = \\frac{M}{v} \\\\ \n& \\implies m_1 : m_2 : m_3 : \\ldots = E_1 : E_2 : E_3 : \\ldots \\\\\n& \\implies Z_1 Q : Z_2 Q : Z_3 Q : \\ldots = E_1 : E_2 : E_3 : \\ldots \\\\\n& \\implies Z_1 : Z_2 : Z_3 : \\ldots = E_1 : E_2 : E_3 : \\ldots \n\\end{align}"
},
{
"math_id": 2,
"text": " \\tfrac{x}{v} "
},
{
"math_id": 3,
"text": "\nm = \\frac{x M}{v N_{\\rm A}} = \\frac{Q M}{e N_{\\rm A} v} = \\frac{Q M}{vF}"
},
{
"math_id": 4,
"text": "Z = \\frac{m}{Q} = \\frac{1}{F}\\left(\\frac{M}{v}\\right) = \\frac{E}{F}"
},
{
"math_id": 5,
"text": "\\tfrac{M}{v}"
},
{
"math_id": 6,
"text": "m =\\frac{ItM}{Fv}"
},
{
"math_id": 7,
"text": "n =\\frac{It}{Fv}"
},
{
"math_id": 8,
"text": "n = \\tfrac m M"
},
{
"math_id": 9,
"text": "m = \\frac{It}{F \\times \\sum_{i} \\frac{w_i v_i}{M_i}}"
},
{
"math_id": 10,
"text": " Q = \\int_0^t I(\\tau) \\, d\\tau "
}
]
| https://en.wikipedia.org/wiki?curid=742319 |
7423263 | Control reconfiguration | Control reconfiguration is an active approach in control theory to achieve fault-tolerant control for dynamic systems. It is used when severe faults, such as actuator or sensor outages, cause a break-up of the control loop, which must be restructured to prevent failure at the system level. In addition to loop restructuring, the controller parameters must be adjusted to accommodate changed plant dynamics. Control reconfiguration is a building block toward increasing the dependability of systems under feedback control.
Reconfiguration problem.
Fault modelling.
The figure to the right shows a plant controlled by a controller in a standard control loop.
The nominal linear model of the plant is
formula_0
The plant subject to a fault (indicated by a red arrow in the figure) is modelled in general by
formula_1
where the subscript formula_2 indicates that the system is faulty. This approach models multiplicative faults by modified system matrices. Specifically, actuator faults are represented by the new input matrix formula_3, sensor faults are represented by the output map formula_4, and internal plant faults are represented by the system matrix formula_5.
The upper part of the figure shows a supervisory loop consisting of "fault detection and isolation" (FDI) and "reconfiguration" which changes the loop by
To this end, the vectors of inputs and outputs contain "all available signals", not just those used by the controller in fault-free operation.
Alternative scenarios can model faults as an additive external signal formula_8 influencing the state derivatives and outputs as follows:
formula_9
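To make the multiplicative fault model above concrete, here is a small list-based sketch (matrix representation, function names and sizes are ours): an explicit-Euler simulation step of the plant, and a total actuator outage modelled by zeroing the corresponding column of the input matrix.
type Vec = [Double]
type Mat = [[Double]]            -- row-major

-- Matrix-vector product.
mv :: Mat -> Vec -> Vec
mv m v = map (sum . zipWith (*) v) m

-- One explicit-Euler step of x' = A x + B u; pass the faulty matrices
-- A_f, B_f instead to simulate the faulty plant.
eulerStep :: Mat -> Mat -> Double -> Vec -> Vec -> Vec
eulerStep a b dt x u = zipWith (+) x (map (* dt) rhs)
  where rhs = zipWith (+) (mv a x) (mv b u)

-- Total outage of actuator j as a multiplicative fault: column j of B is zeroed.
actuatorOutage :: Int -> Mat -> Mat
actuatorOutage j = map (zipWith (\k bij -> if k == j then 0 else bij) [0 ..])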
Reconfiguration goals.
The goal of reconfiguration is to keep the reconfigured control-loop performance sufficient for preventing plant shutdown. The following goals are distinguished:
Internal stability of the reconfigured closed loop is usually the minimum requirement. The equilibrium recovery goal (also referred to as weak goal) refers to the steady-state output equilibrium which the reconfigured loop reaches after a given constant input. This equilibrium must equal the nominal equilibrium under the same input (as time tends to infinity). This goal ensures steady-state reference tracking after reconfiguration. The output trajectory recovery goal (also referred to as strong goal) is even stricter. It requires that the dynamic response to an input must equal the nominal response at all times. Further restrictions are imposed by the state trajectory recovery goal, which requires that the state trajectory be restored to the nominal case by the reconfiguration under any input.
Usually a combination of goals is pursued in practice, such as the equilibrium-recovery goal with stability.
The question whether or not these or similar goals can be reached for specific faults is addressed by reconfigurability analysis.
Reconfiguration approaches.
Fault hiding.
This paradigm aims at keeping the nominal controller in the loop. To this end, a reconfiguration block can be placed between the faulty plant and the nominal controller. Together with the faulty plant, it forms the reconfigured plant. The reconfiguration block has to fulfill the requirement that the behaviour of the reconfigured plant matches the behaviour of the nominal, that is, fault-free, plant.
Linear model following.
In linear model following, a formal feature of the nominal closed loop is attempted to be recovered. In the classical pseudo-inverse method, the closed loop system matrix formula_10 of a state-feedback control structure is used. The new controller formula_11 is found to approximate formula_12 in the sense of an induced matrix norm.
In perfect model following, a dynamic compensator is introduced to allow for the exact recovery of the complete loop behaviour under certain conditions.
In eigenstructure assignment, the nominal closed loop eigenvalues and eigenvectors (the eigenstructure) is recovered to the nominal case after a fault.
Optimisation-based control schemes.
Optimisation control schemes include: linear-quadratic regulator design (LQR), model predictive control (MPC) and eigenstructure assignment methods.
Probabilistic approaches.
Some probabilistic approaches have been developed.
Learning control.
There are learning automata, neural networks, etc.
Mathematical tools and frameworks.
The methods by which reconfiguration is achieved differ considerably. The following list gives an overview of mathematical approaches that are commonly used.
See also.
Prior to control reconfiguration, it must be at least determined whether a fault has occurred (fault detection) and if so, which components are affected (fault isolation). Preferably, a model of the faulty plant should be provided (fault identification). These questions are addressed by fault diagnosis methods.
Fault accommodation is another common approach to achieve fault tolerance. In contrast to control reconfiguration, accommodation is limited to internal controller changes. The sets of signals manipulated and measured by the controller are fixed, which means that the loop cannot be restructured. | [
{
"math_id": 0,
"text": "\\begin{cases}\\dot{\\mathbf{x}} & = \\mathbf{A}\\mathbf{x} + \\mathbf{B}\\mathbf{u}\\\\\n\\mathbf{y} & = \\mathbf{C}\\mathbf{x}\\end{cases}"
},
{
"math_id": 1,
"text": "\\begin{cases}\\dot{\\mathbf{x}}_f & = \\mathbf{A}_f\\mathbf{x}_f + \\mathbf{B}_f\\mathbf{u}\\\\\n\\mathbf{y}_f & = \\mathbf{C}_f\\mathbf{x}_f\\end{cases}"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "\\mathbf{B}_f"
},
{
"math_id": 4,
"text": "\\mathbf{C}_f"
},
{
"math_id": 5,
"text": "\\mathbf{A}_f"
},
{
"math_id": 6,
"text": "\\mathbf{u},\\mathbf{y}"
},
{
"math_id": 7,
"text": "\\mathbf{w}"
},
{
"math_id": 8,
"text": "\\mathbf{f}"
},
{
"math_id": 9,
"text": "\\begin{cases}\\dot{\\mathbf{x}}_f & = \\mathbf{A}\\mathbf{x}_f + \\mathbf{B}\\mathbf{u} + \\mathbf{E}\\mathbf{f}\\\\\n\\mathbf{y}_f & = \\mathbf{C}_f\\mathbf{x}_f + \\mathbf{F}\\mathbf{f}\\end{cases}"
},
{
"math_id": 10,
"text": "\\bar{\\mathbf{A}} = \\mathbf{A}-\\mathbf{B}\\mathbf{K}"
},
{
"math_id": 11,
"text": "\\mathbf{K}_f"
},
{
"math_id": 12,
"text": "\\bar{\\mathbf{A}}"
}
]
| https://en.wikipedia.org/wiki?curid=7423263 |