id | title | text | formulas | url
---|---|---|---|---|
1084655 | Von Staudt–Clausen theorem | Determines the fractional part of Bernoulli numbers
In number theory, the von Staudt–Clausen theorem is a result determining the fractional part of Bernoulli numbers, found independently by
Karl von Staudt (1840) and Thomas Clausen (1840).
Specifically, if n is a positive integer and we add 1/"p" to the Bernoulli number "B"2"n" for every prime p such that "p" − 1 divides 2"n", then we obtain an integer; that is,
formula_0
This fact immediately allows us to characterize the denominators of the non-zero Bernoulli numbers "B"2"n" as the product of all primes p such that "p" − 1 divides 2"n"; consequently, the denominators are square-free and divisible by 6.
These denominators are
6, 30, 42, 30, 66, 2730, 6, 510, 798, 330, 138, 2730, 6, 870, 14322, 510, 6, 1919190, 6, 13530, ... (sequence in the OEIS).
The sequence of integers formula_1 is
1, 1, 1, 1, 1, 1, 2, -6, 56, -528, 6193, -86579, 1425518, -27298230, ... (sequence in the OEIS).
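As an illustration (not part of the original article), the statement can be checked numerically for small "n"; the sketch below assumes SymPy is available for exact Bernoulli numbers and rational arithmetic.

```python
# Sketch (illustration only): verify that B_2n + sum of 1/p over primes p with (p-1) | 2n
# is an integer, and that the denominator of B_2n equals the product of those primes.
from math import prod
from sympy import Rational, bernoulli, primerange

def von_staudt_clausen(n):
    B = bernoulli(2 * n)                                             # exact Rational
    ps = [p for p in primerange(2, 2 * n + 2) if (2 * n) % (p - 1) == 0]
    s = B + sum(Rational(1, p) for p in ps)                          # should be an integer
    return s, prod(ps), B.q

for n in range(1, 11):
    s, predicted_denominator, actual_denominator = von_staudt_clausen(n)
    assert s.q == 1 and predicted_denominator == actual_denominator
    print(n, s, actual_denominator)   # integers 1, 1, 1, 1, 1, 1, 2, -6, 56, -528 and denominators 6, 30, 42, ...
```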
Proof.
A proof of the von Staudt–Clausen theorem follows from an explicit formula for the Bernoulli numbers:
formula_2
and as a corollary:
formula_3
where "S"("n","j") are the Stirling numbers of the second kind.
Furthermore the following lemmas are needed:
Let p be a prime number; then
1. If "p" – 1 divides 2"n", then
formula_4
2. If "p" – 1 does not divide 2"n", then
formula_5
Proof of (1) and (2): One has from Fermat's little theorem,
formula_6
for "m" = 1, 2, ..., "p" – 1.
If "p" – 1 divides 2"n", then one has
formula_7
for "m" = 1, 2, ..., "p" – 1. Thereafter, one has
formula_8
from which (1) follows immediately.
If "p" – 1 does not divide 2"n", then after Fermat's theorem one has
formula_9
If one lets ℘ = ⌊ 2"n" / ("p" – 1) ⌋, then after iteration one has
formula_10
for "m" = 1, 2, ..., "p" – 1 and 0 < 2"n" – &wp;("p" – 1) < "p" – 1.
Thereafter, one has
formula_11
Lemma (2) now follows from the above and the fact that "S"("n","j") = 0 for "j" > "n".
(3). It is easy to deduce that for "a" > 2 and "b" > 2, ab divides ("ab" – 1)!.
(4). Stirling numbers of the second kind are integers.
Now we are ready to prove the theorem.
If "j" + 1 is composite and "j" > 3, then from (3), "j" + 1 divides "j"!.
For "j" = 3,
formula_12
If "j" + 1 is prime, then we use (1) and (2), and if "j" + 1 is composite, then we use (3) and (4) to deduce
formula_13
where "I""n" is an integer, as desired. | [
{
"math_id": 0,
"text": " B_{2n} + \\sum_{(p-1)|2n} \\frac1p \\in \\Z . "
},
{
"math_id": 1,
"text": "B_{2n} + \\sum_{(p-1)|2n} \\frac1p"
},
{
"math_id": 2,
"text": " B_{2n}=\\sum_{j=0}^{2n}{\\frac{1}{j+1}}\\sum_{m=0}^{j}{(-1)^{m}{j\\choose m}m^{2n}} "
},
{
"math_id": 3,
"text": " B_{2n}=\\sum_{j=0}^{2n}{\\frac{j!}{j+1}}(-1)^jS(2n,j) "
},
{
"math_id": 4,
"text": " \\sum_{m=0}^{p-1}{(-1)^m{p-1\\choose m} m^{2n}}\\equiv{-1}\\pmod p. "
},
{
"math_id": 5,
"text": " \\sum_{m=0}^{p-1}{(-1)^m{p-1\\choose m} m^{2n}}\\equiv0\\pmod p. "
},
{
"math_id": 6,
"text": " m^{p-1} \\equiv 1 \\pmod{p} "
},
{
"math_id": 7,
"text": " m^{2n} \\equiv 1 \\pmod{p} "
},
{
"math_id": 8,
"text": " \\sum_{m=1}^{p-1} (-1)^m \\binom{p-1}{m} m^{2n} \\equiv \\sum_{m=1}^{p-1} (-1)^m \\binom{p-1}{m} \\pmod{p},"
},
{
"math_id": 9,
"text": " m^{2n} \\equiv m^{2n-(p-1)} \\pmod{p}. "
},
{
"math_id": 10,
"text": " m^{2n} \\equiv m^{2n-\\wp(p-1)} \\pmod{p} "
},
{
"math_id": 11,
"text": " \\sum_{m=0}^{p-1} (-1)^m \\binom{p-1}{m} m^{2n} \\equiv \\sum_{m=0}^{p-1} (-1)^m \\binom{p-1}{m} m^{2n-\\wp(p-1)} \\pmod{p}. "
},
{
"math_id": 12,
"text": " \\sum_{m=0}^{3} (-1)^m \\binom{3}{m} m^{2n} = 3 \\cdot 2^{2n} - 3^{2n} - 3 \\equiv 0 \\pmod{4}. "
},
{
"math_id": 13,
"text": " B_{2n} = I_n - \\sum_{(p-1)|2n} \\frac{1}{p}, "
}
] | https://en.wikipedia.org/wiki?curid=1084655 |
1084681 | Agoh–Giuga conjecture | In number theory the Agoh–Giuga conjecture on the Bernoulli numbers "B""k" postulates that "p" is a prime number if and only if
formula_0
It is named after Takashi Agoh and Giuseppe Giuga.
Equivalent formulation.
The conjecture as stated above is due to Takashi Agoh (1990); an equivalent formulation is due to Giuseppe Giuga, from 1950, to the effect that "p" is prime if and only if
formula_1
which may also be written as
formula_2
It is trivial to show that "p" being prime is sufficient for the second equivalence to hold, since if "p" is prime, Fermat's little theorem states that
formula_3
for formula_4, and the equivalence follows, since formula_5
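As an illustration (not part of the original article), Giuga's formulation can be checked by brute force for small "n"; the sketch below uses only the Python standard library.

```python
# Sketch (illustration only): brute-force check that the congruence
# 1^(n-1) + 2^(n-1) + ... + (n-1)^(n-1) ≡ -1 (mod n) holds exactly for the primes
# below a small bound. sympy.isprime could replace the helper.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def giuga_sum_mod(n):
    return sum(pow(i, n - 1, n) for i in range(1, n)) % n

for n in range(2, 2000):
    satisfies = giuga_sum_mod(n) == n - 1          # i.e. congruent to -1 (mod n)
    assert satisfies == is_prime(n), n             # no counterexample in this range
print("congruence matches primality for every n below 2000")
```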
Status.
The statement is still a conjecture since it has not yet been proven that if a number "n" is not prime (that is, "n" is composite), then the formula does not hold. It has been shown that a composite number "n" satisfies the formula if and only if it is both a Carmichael number and a Giuga number, and that if such a number exists, it has at least 13,800 digits (Borwein, Borwein, Borwein, Girgensohn 1996). Finally, Laerte Sorini showed in a 2001 work that a possible counterexample should be a number "n" greater than 10^36067, which represents the limit suggested by Bedocchi for the proof technique specified by Giuga for his own conjecture.
Relation to Wilson's theorem.
The Agoh–Giuga conjecture bears a similarity to Wilson's theorem, which has been proven to be true. Wilson's theorem states that a number "p" is prime if and only if
formula_6
which may also be written as
formula_7
For an odd prime p we have
formula_8
and for p=2 we have
formula_9
So, the truth of the Agoh–Giuga conjecture combined with Wilson's theorem would give: a number "p" is prime if and only if
formula_10
and
formula_11
| [
{
"math_id": 0,
"text": "pB_{p-1} \\equiv -1 \\pmod p."
},
{
"math_id": 1,
"text": "1^{p-1}+2^{p-1}+ \\cdots +(p-1)^{p-1} \\equiv -1 \\pmod p"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^{p-1} i^{p-1} \\equiv -1 \\pmod p."
},
{
"math_id": 3,
"text": "a^{p-1} \\equiv 1 \\pmod p"
},
{
"math_id": 4,
"text": "a = 1,2,\\dots,p-1"
},
{
"math_id": 5,
"text": "p-1 \\equiv -1 \\pmod p."
},
{
"math_id": 6,
"text": "(p-1)! \\equiv -1 \\pmod p,"
},
{
"math_id": 7,
"text": "\\prod_{i=1}^{p-1} i \\equiv -1 \\pmod p."
},
{
"math_id": 8,
"text": "\\prod_{i=1}^{p-1} i^{p-1} \\equiv (-1)^{p-1} \\equiv 1 \\pmod p,"
},
{
"math_id": 9,
"text": "\\prod_{i=1}^{p-1} i^{p-1} \\equiv (-1)^{p-1} \\equiv 1 \\pmod p."
},
{
"math_id": 10,
"text": "\\sum_{i=1}^{p-1} i^{p-1} \\equiv -1 \\pmod p"
},
{
"math_id": 11,
"text": "\\prod_{i=1}^{p-1} i^{p-1} \\equiv 1 \\pmod p."
}
] | https://en.wikipedia.org/wiki?curid=1084681 |
10849414 | Lever rule | Formula for determining the mole or mass fraction of phases in a binary phase diagram
In chemistry, the lever rule is a formula used to determine the mole fraction ("xi") or the mass fraction ("wi") of each phase of a binary equilibrium phase diagram. It can be used to determine the fraction of liquid and solid phases for a given binary composition and temperature that lies between the liquidus and solidus lines.
In an alloy or a mixture with two phases, α and β, which themselves contain two elements, A and B, the lever rule states that the mass fraction of the α phase is
formula_0
where formula_1 is the mass fraction of element B in the α phase, formula_2 is the mass fraction of element B in the β phase, and formula_3 is the mass fraction of element B in the overall alloy,
all at some fixed temperature or pressure.
Derivation.
Suppose an alloy at an equilibrium temperature "T" consists of formula_3 mass fraction of element B. Suppose also that at temperature "T" the alloy consists of two phases, α and β, for which the α consists of formula_1, and β consists of formula_2. Let the mass of the α phase in the alloy be formula_4 so that the mass of the β phase is formula_5, where formula_6 is the total mass of the alloy.
By definition, then, the mass of element B in the α phase is formula_7, while the mass of element B in the β phase is formula_8. Together these two quantities sum to the total mass of element B in the alloy, which is given by formula_9. Therefore,
formula_10
By rearranging, one finds that
formula_11
This final fraction is the mass fraction of the α phase in the alloy.
Calculations.
Binary phase diagrams.
Before any calculations can be made, a "tie line" is drawn on the phase diagram to determine the mass fraction of each element; on the phase diagram to the right it is line segment LS. This tie line is drawn horizontally at the composition's temperature from one phase to another (here the liquid to the solid). The mass fraction of element B at the liquidus is given by "w"Bl (represented as "w"l in this diagram) and the mass fraction of element B at the solidus is given by "w"Bs (represented as "w"s in this diagram). The mass fraction of solid and liquid can then be calculated using the following lever rule equations:
formula_12
formula_13
where "w"B is the mass fraction of element B for the given composition (represented as "w"o in this diagram).
The numerator of each equation is the distance between the original composition of interest and the composition at the opposite end of the "lever arm": that is, if you want the mass fraction of solid, take the difference between the liquid composition and the original composition. The denominator is the overall length of the arm, i.e. the difference between the solid and liquid compositions. If it is difficult to see why this is so, try visualising what happens as "w"o approaches "w"l: the mass fraction of the liquid phase then approaches one.
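A minimal numerical sketch of these equations (not part of the original article); the compositions used are made-up values chosen purely for illustration.

```python
# Sketch (illustration only): mass fractions of solid and liquid phases from the lever rule.

def lever_rule(w_B, w_B_solid, w_B_liquid):
    """Return (mass fraction of solid, mass fraction of liquid) for overall composition w_B."""
    w_solid = (w_B - w_B_liquid) / (w_B_solid - w_B_liquid)
    w_liquid = (w_B_solid - w_B) / (w_B_solid - w_B_liquid)
    return w_solid, w_liquid

# Example: overall composition 40 wt% B, tie line ending at 30 wt% B (liquidus)
# and 60 wt% B (solidus).
ws, wl = lever_rule(0.40, 0.60, 0.30)
print(f"solid fraction = {ws:.3f}, liquid fraction = {wl:.3f}")  # 0.333, 0.667
assert abs(ws + wl - 1.0) < 1e-12
```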
Eutectic phase diagrams.
There is now more than one two-phase region. The tie line is drawn from the solid alpha phase to the liquid, and by dropping a vertical line down at each of its end points, the composition of each phase (the mass fraction of the element on the x-axis) is read directly off the graph. The same equations can be used to find the mass fraction of the alloy in each of the phases, i.e. "w"l is the mass fraction of the whole sample in the liquid phase.
| [
{
"math_id": 0,
"text": "w^\\alpha = \\frac{w_{\\rm B}-w_{\\rm B}^\\beta}{w_{\\rm B}^\\alpha-w_{\\rm B}^\\beta}"
},
{
"math_id": 1,
"text": "w_{\\rm B}^\\alpha"
},
{
"math_id": 2,
"text": "w_{\\rm B}^\\beta"
},
{
"math_id": 3,
"text": "w_{\\rm B}"
},
{
"math_id": 4,
"text": "m^\\alpha"
},
{
"math_id": 5,
"text": "m^\\beta = m - m^\\alpha"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "m_{\\rm B}^\\alpha = w_{\\rm B}^\\alpha m^\\alpha"
},
{
"math_id": 8,
"text": "m_{\\rm B}^\\beta = w_{\\rm B}^\\beta \\left(m -m^\\alpha\\right)"
},
{
"math_id": 9,
"text": "m_{\\rm B} = w_{\\rm B}m"
},
{
"math_id": 10,
"text": " w_{\\rm B}m = m_{\\rm B} = m_{\\rm B}^\\alpha + m_{\\rm B}^\\beta = w_{\\rm B}^\\alpha m^\\alpha + w_{\\rm B}^\\beta \\left(m - m^\\alpha\\right)"
},
{
"math_id": 11,
"text": "w^\\alpha \\equiv \\frac{m^\\alpha}{m} = \\frac{ w_{\\rm B}-w_{\\rm B}^{\\beta} }{ w_{\\rm B}^{\\alpha}-w_{\\rm B}^{\\beta} }"
},
{
"math_id": 12,
"text": "w^{\\rm s} = \\frac{w_{\\rm B} - w_{\\rm B}^{\\rm l}}{w_{\\rm B}^{\\rm s} - w_{\\rm B}^{\\rm l}}"
},
{
"math_id": 13,
"text": "w^{\\rm l} = \\frac{w_{\\rm B}^{\\rm s} - w_{\\rm B}}{w_{\\rm B}^{\\rm s} - w_{\\rm B}^{\\rm l}}"
}
] | https://en.wikipedia.org/wiki?curid=10849414 |
1085053 | Bateman–Horn conjecture | In number theory, the Bateman–Horn conjecture is a statement concerning the frequency of prime numbers among the values of a system of polynomials, named after mathematicians Paul T. Bateman and Roger A. Horn who proposed it in 1962. It provides a vast generalization of such conjectures as the Hardy and Littlewood conjecture on the density of twin primes or their conjecture on primes of the form "n"2 + 1; it is also a strengthening of Schinzel's hypothesis H.
Definition.
The Bateman–Horn conjecture provides a conjectured density for the positive integers at which a given set of polynomials all have prime values. For a set of "m" distinct irreducible polynomials "ƒ"1, ..., "ƒ""m" with integer coefficients, an obvious necessary condition for the polynomials to simultaneously generate prime values infinitely often is that they satisfy Bunyakovsky's property, that there does not exist a prime number "p" that divides their product "f"("n") for every positive integer "n". For, if there were such a prime "p", having all values of the polynomials simultaneously prime for a given "n" would imply that at least one of them must be equal to "p", which can only happen for finitely many values of "n" (otherwise one of the polynomials would have infinitely many roots), whereas the conjecture asks for conditions under which the values are simultaneously prime for infinitely many "n".
An integer "n" is prime-generating for the given system of polynomials if every polynomial "ƒi"("n") produces a prime number when given "n" as its argument. If "P"("x") is the number of prime-generating integers among the positive integers less than "x", then the Bateman–Horn conjecture states that
formula_0
where "D" is the product of the degrees of the polynomials and where "C" is the product over primes "p"
formula_1
with formula_2 the number of solutions to
formula_3
Bunyakovsky's property implies formula_4 for all primes "p",
so each factor in the infinite product "C" is positive.
Intuitively one then naturally expects that the constant "C" is itself positive, and with some work this can be proved.
Negative numbers.
As stated above, the conjecture is not true: the single polynomial "ƒ"1("x") = −"x" produces only negative numbers when given a positive argument, so the fraction of prime numbers among its values is always zero. There are two equally valid ways of refining the conjecture to avoid this difficulty:
It is reasonable to allow negative numbers to count as primes as a step towards formulating more general conjectures that apply to other systems of numbers than the integers, but at the same time it is easy
to just negate the polynomials if necessary to reduce to the case where the leading coefficients are positive.
Examples.
If the system of polynomials consists of the single polynomial "ƒ"1("x") = "x", then the values "n" for which "ƒ"1("n") is prime are themselves the prime numbers, and the conjecture becomes a restatement of the prime number theorem.
If the system of polynomials consists of the two polynomials "ƒ"1("x") = "x" and "ƒ"2("x") = "x" + 2, then the values of "n" for which both "ƒ"1("n") and "ƒ"2("n") are prime are just the smaller of the two primes in every pair of twin primes. In this case, the Bateman–Horn conjecture reduces to the Hardy–Littlewood conjecture on the density of twin primes, according to which the number of twin prime pairs less than "x" is
formula_5
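As an illustration (not part of the original article), the sketch below compares an actual twin-prime count with the prediction obtained from the integral form of the conjecture for "ƒ"1("x") = "x", "ƒ"2("x") = "x" + 2, for which "D" = 1, "m" = 2, and "C" works out to twice the Hardy–Littlewood product over odd primes; SymPy is assumed for prime generation.

```python
# Sketch (illustration only): compare the actual twin-prime count pi_2(x) with the
# Bateman-Horn / Hardy-Littlewood prediction C * integral_2^x dt/(log t)^2, where
# C = 2 * product over odd primes of p(p-2)/(p-1)^2. The integral is approximated
# by a simple midpoint sum.
from math import log
from sympy import isprime, primerange

x = 100_000
twin_count = sum(1 for p in primerange(2, x) if isprime(p + 2))

C = 2.0
for p in primerange(3, x):
    C *= p * (p - 2) / (p - 1) ** 2        # partial product; converges quickly

integral, t, step = 0.0, 2.0, 0.5
while t < x:
    integral += step / log(t + step / 2) ** 2
    t += step

print(f"pi_2({x}) = {twin_count}, predicted ~ {C * integral:.0f}")
```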
Analogue for polynomials over a finite field.
When the integers are replaced by the polynomial ring "F"["u"] for a finite field "F", one can ask how often a finite set of polynomials "f""i"("x") in "F"["u"]["x"] simultaneously takes irreducible values in "F"["u"] when we substitute for "x" elements of "F"["u"]. Well-known analogies between integers and "F"["u"] suggest an analogue of the Bateman–Horn conjecture over "F"["u"], but the analogue is wrong. For example, data suggest that the polynomial
formula_6
in "F"3["u"]["x"] takes (asymptotically) the expected number of irreducible values when "x" runs over polynomials in "F"3["u"] of odd degree, but it appears to take (asymptotically) twice as many irreducible values as expected when "x" runs over polynomials of degree that is 2 mod 4, while it (provably) takes "no" irreducible values at all when "x" runs over nonconstant polynomials with degree that is a multiple of 4. An analogue of the Bateman–Horn conjecture over "F"["u"] which fits numerical data uses an additional factor in the asymptotics which depends on the value of "d" mod 4, where "d" is the degree of the polynomials in "F"["u"] over which "x" is sampled. | [
{
"math_id": 0,
"text": "P(x) \\sim \\frac{C}{D} \\int_2^x \\frac{dt}{(\\log t)^m},\\,"
},
{
"math_id": 1,
"text": "C = \\prod_p \\frac{1-N(p)/p}{(1-1/p)^m}\\ "
},
{
"math_id": 2,
"text": "N(p)"
},
{
"math_id": 3,
"text": "f(n) \\equiv 0 \\pmod p.\\ "
},
{
"math_id": 4,
"text": "N(p) < p"
},
{
"math_id": 5,
"text": "\\pi_2(x) \\sim 2 \\prod_{p\\ge 3} \\frac{p(p-2)}{(p-1)^2}\\frac{x}{(\\log x)^2 } \\approx 1.32 \\frac {x}{(\\log x)^2}."
},
{
"math_id": 6,
"text": "x^3 + u\\,"
}
] | https://en.wikipedia.org/wiki?curid=1085053 |
10851309 | Acidity function | Measure of acidity
An acidity function is a measure of the acidity of a medium or solvent system, usually expressed in terms of its ability to donate protons to (or accept protons from) a solute (Brønsted acidity). The pH scale is by far the most commonly used acidity function, and is ideal for dilute aqueous solutions. Other acidity functions have been proposed for different environments, most notably the Hammett acidity function, "H"0, for superacid media and its modified version "H"− for superbasic media. The term acidity function is also used for measurements made on basic systems, and the term basicity function is uncommon.
Hammett-type acidity functions are defined in terms of a buffered medium containing a weak base B and its conjugate acid BH+:
formula_0
where p"K"a is the dissociation constant of BH+. They were originally measured by using nitroanilines as weak bases or acid-base indicators and by measuring the concentrations of the protonated and unprotonated forms with UV-visible spectroscopy. Other spectroscopic methods, such as NMR, may also be used. The function "H"− is defined similarly for strong bases:
formula_1
Here BH is a weak acid used as an acid-base indicator, and B− is its conjugate base.
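A minimal sketch of how "H"0 is evaluated in practice from the measured indicator ratio (not part of the original article; the p"K"a and concentrations are made-up illustrative numbers).

```python
# Sketch (illustration only): evaluating the Hammett acidity function H_0 from the
# measured ratio of an indicator base B to its conjugate acid BH+.
from math import log10

def hammett_h0(pKa_BHplus, c_B, c_BHplus):
    return pKa_BHplus + log10(c_B / c_BHplus)

# e.g. an indicator with pKa(BH+) = -3.0 observed to be 90% protonated in the medium:
print(hammett_h0(-3.0, c_B=0.10, c_BHplus=0.90))   # about -3.95
```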
Comparison of acidity functions with aqueous acidity.
In dilute aqueous solution, the predominant acid species is the hydrated hydrogen ion H3O+ (or more accurately [H(OH2)n]+). In this case "H"0 and "H"− are equivalent to pH values determined by the buffer equation or Henderson-Hasselbalch equation.
However, an "H"0 value of −21 (a 25% solution of SbF5 in HSO3F) does not imply a hydrogen ion concentration of 1021 mol/dm3: such a "solution" would have a density more than a hundred times greater than a neutron star. Rather, "H"0 = −21 implies that the reactivity (protonating power) of the solvated hydrogen ions is 1021 times greater than the reactivity of the hydrated hydrogen ions in an aqueous solution of pH 0. The actual reactive species are different in the two cases, but both can be considered to be sources of H+, i.e. Brønsted acids. <br>The hydrogen ion H+ "never" exists on its own in a condensed phase, as it is always solvated to a certain extent. The high negative value of "H"0 in SbF5/HSO3F mixtures indicates that the solvation of the hydrogen ion is much weaker in this solvent system than in water. Other way of expressing the same phenomenon is to say that SbF5·FSO3H is a much stronger proton donor than H3O+. | [
{
"math_id": 0,
"text": "H_0 = {\\rm p}K_{\\rm a} + \\log {{c_{\\rm B}}\\over{c_{\\rm BH^+}}}"
},
{
"math_id": 1,
"text": "H_- = {\\rm p}K_{\\rm a} + \\log {{c_{\\rm B^-}}\\over{c_{\\rm BH}}}"
}
] | https://en.wikipedia.org/wiki?curid=10851309 |
1085174 | Silver chloride | Chemical compound with the formula AgCl
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Silver chloride is an inorganic chemical compound with the chemical formula AgCl. This white crystalline solid is well known for its low solubility in water and its sensitivity to light. Upon illumination or heating, silver chloride converts to silver (and chlorine), which is signaled by grey to black or purplish coloration in some samples. AgCl occurs naturally as the mineral chlorargyrite.
It is produced by a metathesis reaction for use in photography and in pH meters as electrodes.
Preparation.
Silver chloride is unusual in that, unlike most chloride salts, it has very low solubility. It is easily synthesized by metathesis: combining an aqueous solution of silver nitrate (which is soluble) with a soluble chloride salt, such as sodium chloride (which is used industrially as a method of producing AgCl), or cobalt(II) chloride. The silver chloride that forms will precipitate immediately.46
AgNO3 + NaCl → AgCl↓ + NaNO3
2 AgNO3 + CoCl2 → 2 AgCl↓ + Co(NO3)2
It can also be produced by the reaction of silver metal and aqua regia; however, the insolubility of silver chloride decelerates the reaction. Silver chloride is also a by-product of the Miller process, where silver metal is reacted with chlorine gas at elevated temperatures.21
History.
Silver chloride has been known since ancient times. Ancient Egyptians produced it as a method of refining silver, which was done by roasting silver ores with salt to produce silver chloride, which was subsequently decomposed to silver and chlorine.19 However, it was later identified as a distinct compound of silver in 1565 by Georg Fabricius. Silver chloride, historically known as "luna cornea" (which could be translated as "horn silver" as the moon was an alchemic codename for silver), has also been an intermediate in other historical silver refining processes. One such example is the Augustin process developed in 1843, wherein copper ore containing small amounts of silver is roasted in chloridizing conditions and the silver chloride produced is leached by brine, where it is more soluble.32
Silver-based photographic films were first made in 1727 by Johann Heinrich Schulze with silver nitrate. However, he was not successful in making permanent images, as they faded away. Later in 1816, the use of silver chloride was introduced into photography by Nicéphore Niépce.38–39
Structure.
The solid adopts the "fcc" NaCl structure, in which each Ag+ ion is surrounded by an octahedron of six chloride ligands. AgF and AgBr crystallize similarly. However, the crystallography depends on the condition of crystallization, primarily free silver ion concentration, as is shown in the picture to the left (greyish tint and metallic lustre are due to partially reduced silver).
Above 7.5 GPa, silver chloride transitions into a monoclinic KOH phase. Then at 11 GPa, it undergoes another phase change to an orthorhombic TlI phase.
Reactions.
AgCl dissolves in solutions containing ligands such as chloride, cyanide, triphenylphosphine, thiosulfate, thiocyanate and ammonia. Silver chloride reacts with these ligands according to the following illustrative equations:25–33
AgCl (s) + 2 CN− (aq) → [Ag(CN)2]− (aq) + Cl− (aq)
AgCl (s) + 2 S2O3^2− (aq) → [Ag(S2O3)2]^3− (aq) + Cl− (aq)
AgCl (s) + 2 NH3 (aq) → [Ag(NH3)2]+ (aq) + Cl− (aq)
Of these reactions used to leach silver chloride from silver ores, cyanidation is the most commonly used. Cyanidation produces the soluble dicyanoargentate complex, which is later turned back to silver by reduction.26
Silver chloride does not react with nitric acid, but instead reacts with sulfuric acid to produce silver sulfate. Then the sulfate is protonated in the presence of sulfuric acid to bisulfate, which can be reversed by dilution. This reaction is used to separate silver from other platinum group metals.42
Most complexes derived from AgCl are two-, three-, and, in rare cases, four-coordinate, adopting linear, trigonal planar, and tetrahedral coordination geometries, respectively.
3 AgCl (s) + Na3AsO3 (aq) → Ag3AsO3 (s) + 3 NaCl (aq)
3 AgCl (s) + Na3AsO4 (aq) → Ag3AsO4 (s) + 3 NaCl (aq)
These two reactions are particularly important in the qualitative analysis of AgCl in labs, as AgCl is white, which changes to Ag3AsO3 (silver arsenite) which is yellow, or Ag3AsO4 (silver arsenate) which is reddish brown.
Chemistry.
In one of the most famous reactions in chemistry, the addition of colorless aqueous silver nitrate to an equally colorless solution of sodium chloride produces an opaque white precipitate of AgCl:
Ag+ (aq) + Cl− (aq) → AgCl (s)
This conversion is a common test for the presence of chloride in solution. Due to its conspicuousness, it is easily used in titration, which gives the typical case of argentometry.
The solubility product, "K"sp, for AgCl in water is 1.77 x 10−10 at room temperature, which indicates that only 1.9 mg (that is, formula_0) of AgCl will dissolve per liter of water. The chloride content of an aqueous solution can be determined quantitatively by weighing the precipitated AgCl, which conveniently is non-hygroscopic since AgCl is one of the few transition metal chlorides that are insoluble in water.
Interfering ions for this test are bromide and iodide, as well as a variety of ligands (see silver halide).
For AgBr and AgI, the "K"sp values are 5.2 x 10−13 and 8.3 x 10−17, respectively. Silver bromide (slightly yellowish white) and silver iodide (bright yellow) are also significantly more photosensitive than is AgCl.46
AgCl quickly darkens on exposure to light by disintegrating into elemental chlorine and metallic silver. This reaction is used in photography and film and is the following:
Cl− + "hν" → Cl + e− (excitation of the chloride ion, which gives up its extra electron into the conduction band)
Ag+ + e− → Ag (liberation of a silver ion, which gains an electron to become a silver atom)
The process is not reversible because the silver atom liberated is typically found at a crystal defect or an impurity site so that the electron's energy is lowered enough that it is "trapped".
Uses.
Silver chloride electrode.
Silver chloride is a constituent of the silver chloride electrode which is a common reference electrode in electrochemistry. The electrode functions as a reversible redox electrode and the equilibrium is between the solid silver metal and silver chloride in a chloride solution of a given concentration. It is usually the internal reference electrode in pH meters and it is often used as a reference in reduction potential measurements. As an example of the latter, the silver chloride electrode is the most commonly used reference electrode for testing cathodic protection corrosion control systems in seawater environments.
Photography.
Silver chloride and silver nitrate have been used in photography since it began, and are well known for their light sensitivity. It was also a vital part of the Daguerreotype sensitization where silver plates were fumed with chlorine to produce a thin layer of silver chloride. Another famous process that used silver chloride was the gelatin silver process where embedded silver chloride crystals in gelatin were used to produce images. However, with advances in color photography, these methods of black-and-white photography have dwindled. Even though color photography uses silver chloride, it only works as a mediator for transforming light into organic image dyes.
Other photographic uses include making photographic paper, since it reacts with photons to form latent images via photoreduction; and in photochromic lenses, taking advantage of its reversible conversion to Ag metal. Unlike photography, where the photoreduction is irreversible, the glass prevents the electron from being 'trapped'. These photochromic lenses are used primarily in sunglasses.
Antimicrobial agent.
Silver chloride nanoparticles are widely sold commercially as an antimicrobial agent. The antimicrobial activity of silver chloride depends on the particle size, but are usually below 100 nm. In general, silver chloride is antimicrobial against various bacteria, such as E. coli.
Silver chloride nanoparticles for use as a microbial agent can be produced by a metathesis reaction between aqueous silver and chloride ions or can be biogenically synthesized by fungi and plants.
Other uses.
Silver chloride's low solubility makes it a useful addition to pottery glazes for the production of "Inglaze lustre".
Silver chloride has been used as an antidote for mercury poisoning, assisting in the elimination of mercury.
Other uses of AgCl include:
Natural occurrence.
Silver chloride occurs naturally as chlorargyrite in the arid and oxidized zones in silver deposits. If some of the chloride ions are replaced by bromide or iodide ions, the words bromian and iodian are added before the name, respectively. This mineral is a source of silver and is leached by cyanidation, where it will produce the soluble [Ag(CN)2]– complex.26
Safety.
According to the ECHA, silver chloride may damage the unborn child, is very toxic to aquatic life with long lasting effects and may be corrosive to metals.
| [
{
"math_id": 0,
"text": "\\sqrt{1.77\\times 10^{-10}} \\ \\mathrm{mol}"
}
] | https://en.wikipedia.org/wiki?curid=1085174 |
1085176 | Isogeny | In mathematics, particularly in algebraic geometry, an isogeny is a morphism of algebraic groups (also known as group varieties) that is surjective and has a finite kernel.
If the groups are abelian varieties, then any morphism "f" : "A" → "B" of the underlying algebraic varieties which is surjective with finite fibres is automatically an isogeny, provided that "f"(1"A") = 1"B". Such an isogeny "f" then provides a group homomorphism between the groups of "k"-valued points of "A" and "B", for any field "k" over which "f" is defined.
The terms "isogeny" and "isogenous" come from the Greek word ισογενη-ς, meaning "equal in kind or nature". The term "isogeny" was introduced by Weil; before this, the term "isomorphism" was somewhat confusingly used for what is now called an isogeny.
Degree of isogeny.
Let "f" : "A" → "B" be isogeny between two algebraic groups.
This mapping induce pullback mapping "f*" : "K(B)" → "K(A)" between their rational function fields. Since mapping is nontrivial, it is a field embedding and formula_0 is a subfield of "K(A)". The degree of extension "K(A)"\im "f*" is called degree of isogeny:
formula_1
Properties of degree:
If formula_2 and formula_3 are isogenies, then their composite satisfies formula_4.
If formula_5 (the characteristic of the base field does not divide the degree), then the isogeny is separable and formula_6, i.e. the degree equals the number of elements of the kernel.
Case of abelian varieties.
For abelian varieties, such as elliptic curves, this notion can also be formulated as follows:
Let "E"1 and "E"2 be abelian varieties of the same dimension over a field "k". An isogeny between "E"1 and "E"2 is a dense morphism "f" : "E"1 → "E"2 of varieties that preserves basepoints (i.e. "f" maps the identity point on "E"1 to that on "E"2).
This is equivalent to the above notion, as every dense morphism between two abelian varieties of the same dimension is automatically surjective with finite fibres, and if it preserves identities then it is a homomorphism of groups.
Two abelian varieties "E"1 and "E"2 are called isogenous if there is an isogeny "E"1 → "E"2. This can be shown to be an equivalence relation; in the case of elliptic curves, symmetry is due to the existence of the dual isogeny. As above, every isogeny induces homomorphisms of the groups of the k-valued points of the abelian varieties.
| [
{
"math_id": 0,
"text": "im f^*"
},
{
"math_id": 1,
"text": " \\deg f := [K(A): im f^*]"
},
{
"math_id": 2,
"text": "f:X \\rightarrow Y"
},
{
"math_id": 3,
"text": "g:Y \\rightarrow Z"
},
{
"math_id": 4,
"text": "\\deg (g\\circ f)=\\deg g \\cdot \\deg f "
},
{
"math_id": 5,
"text": "char \\; K \\nmid \\deg f"
},
{
"math_id": 6,
"text": "\\deg f = |ker\\; f|"
}
] | https://en.wikipedia.org/wiki?curid=1085176 |
10852103 | Acetyl-CoA synthetase | InterPro Family
Acetyl-CoA synthetase (ACS) or Acetate—CoA ligase is an enzyme (EC 6.2.1.1) involved in metabolism of acetate. It is in the ligase class of enzymes, meaning that it catalyzes the formation of a new chemical bond between two large molecules.
Reaction.
The two molecules joined that make up acetyl-CoA are acetate and coenzyme A (CoA). The complete reaction with all the substrates and products included is:
ATP + Acetate + CoA → AMP + Pyrophosphate + Acetyl-CoA
Once acetyl-CoA is formed it can be used in the TCA cycle in aerobic respiration to produce energy and electron carriers. This is an alternate method to starting the cycle, as the more common way is producing acetyl-CoA from pyruvate through the pyruvate dehydrogenase complex. The enzyme's activity takes place in the mitochondrial matrix so that the products are in the proper place to be used in the following metabolic steps. Acetyl Co-A can also be used in fatty acid synthesis, and a common function of the synthetase is to produce acetyl Co-A for this purpose.
The reaction catalyzed by acetyl-CoA synthetase takes place in two steps. First, AMP must be bound by the enzyme to cause a conformational change in the active site, which allows the reaction to take place. The active site is referred to as the A-cluster. A crucial lysine residue must be present in the active site to catalyze the first reaction where Co-A is bound. Co-A then rotates in the active site into the position where acetate can covalently bind to CoA. The covalent bond is formed between the sulfur atom in Co-A and the central carbon atom of acetate.
The ACS1 form of acetyl-CoA synthetase is encoded by the gene facA, which is activated by acetate and deactivated by glucose.
Structure.
The three-dimensional structure of the asymmetric ACS (RCSB PDB ID number: 1PG3) reveals that it is composed of two subunits. Each subunit is then composed primarily of two domains. The larger N-terminal domain is composed of 517 residues, while the smaller C-terminal domain is composed of 130 residues. Each subunit has an active site where the ligands are held. The crystallized structure of ACS was determined with CoA and adenosine-5′-propylphosphate bound to the enzyme. The reason for using adenosine-5′-propylphosphate is that it is an ATP-competitive inhibitor which prevents any conformational changes to the enzyme. The adenine ring of AMP/ATP is held in a hydrophobic pocket created by residues Ile (512) and Trp (413).
The source for the crystallized structure is the organism Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720). The gene for ACS was then transfected into Escherichia coli BL21(DE3) for expression. During chromatography in the process to isolate the enzyme, the subunits came out individually and the total structure was determined separately. The method used to determine the structure was X-ray diffraction with a resolution of 2.3 angstroms. The unit cell values and angles are provided in the following table:
Function.
The role of the ACS enzyme is to combine acetate and coenzyme A to form acetyl-CoA; however, its significance is much broader. The best-known function of the product of this enzymatic reaction is the use of acetyl-CoA in the TCA cycle as well as in the production of fatty acids. This enzyme is vital to the action of histone acetylation as well as gene regulation. The effect this acetylation has is far-reaching in mammals. It has been shown that downregulation of the acs gene in the hippocampal region of mice results in lower levels of histone acetylation, but also impairs the long-term spatial memory of the animal. This result points to a link between cellular metabolism, gene regulation and cognitive function. This enzyme has been shown to be an interesting biomarker for the presence of tumors in colorectal carcinomas. When the gene is present, the cells are able to take in acetate as a food source to convert it to acetyl-CoA during stressed conditions. In cases of advanced carcinoma tumors, the genes for this enzyme were downregulated and indicated a poor 5-year survival rate. Expression of the enzyme has also been linked to the development of metastatic tumor nodes, leading to a poor survival rate in patients with renal cell carcinomas.
Regulation.
The activity of the enzyme is controlled in several ways. The essential lysine residue in the active site plays an important role in regulation of activity. The lysine molecule can be deacetylated by another class of enzyme called sirtuins. In mammals, the cytoplasmic-nuclear synthetase (AceCS1) is activated by SIRT1 while the mitochondrial synthetase (AceCS2) is activated by SIRT3. This action increases activity of this enzyme. The exact location of the lysine residue varies between species, occurring at Lys-642 in humans, but is always present in the active site of the enzyme.
Since there is an essential allosteric change that occurs with the binding of an AMP molecule, the presence of AMP can contribute to regulation of the enzyme. Concentration of AMP must be high enough so that it can bind in the allosteric binding site and allow the other substrates to enter the active site. Also, copper ions deactivate acetyl Co-A synthetase by occupying the proximal site of the A-cluster active site, which prevents the enzyme from accepting a methyl group to participate in the Wood-Ljungdahl Pathway.
The presence of all the reactants in the proper concentration is also needed for proper functioning as in all enzymes.
Acetyl-CoA synthetase is also produced when it is needed for fatty acid synthesis, but, under normal conditions, the gene is inactive and has certain transcriptional factors that activate transcription when necessary.
In addition to sirtuins, protein deacetylase (AcuC) also can modify acetyl-CoA synthetase at a lysine residue. However, unlike sirtuins, AcuC does not require NAD+ as a cosubstrate.
Role in gene expression.
While acetyl-CoA synthetase's activity is usually associated with metabolic pathways, the enzyme also participates in gene expression. In yeast, acetyl-CoA synthetase delivers acetyl-CoA to histone acetyltransferases for histone acetylation. Without correct acetylation, DNA cannot condense into chromatin properly, which inevitably results in transcriptional errors.
Industrial application.
By taking advantage of the pathways which use acetyl-CoA as a substrate, engineered products can be obtained which have potential to be consumer products. By overexpressing the acs gene, and using acetate as a feedstock, the production of fatty acids (FAs) may be increased. The use of acetate as a feed stock is uncommon, as acetate is a normal waste product of "E. coli" metabolism and is toxic at high levels to the organism. By adapting the E. coli to use acetate as a feedstock, these microbes were able to survive and produce their engineered products. These fatty acids could then be used as a biofuel after being separated from the media, requiring further processing (transesterification) to yield usable biodiesel fuel. Original adaptation protocol for inducing high levels of acetate uptake was innovated in 1959 as a means to induce starvation mechanisms in "E. coli".
Intracellular.
formula_0
formula_1
Acetyl-CoA from the breakdown of sugars in glycolysis has been used to build fatty acids. However, the difference comes in the fact that the Keasling strain is able to synthesize its own ethanol and to process (by transesterification) the fatty acid further to create stable fatty acid ethyl esters (FAEEs), removing the need for further processing prior to obtaining a usable fuel product for diesel engines.
formula_2
"Acetyl CoA used in the production of both ethanol and fatty acids"
formula_3
Trans-esterification.
formula_4
Preliminary studies have been conducted in which the combination of the two methods described above resulted in the production of FAEEs using acetate as the only carbon source. The production levels of all the methods mentioned are not yet up to those required for large-scale applications.
| [
{
"math_id": 0,
"text": "Acetate \\Longrightarrow Acetyl-CoA"
},
{
"math_id": 1,
"text": "Acetyl-CoA \\Longrightarrow FAs"
},
{
"math_id": 2,
"text": "glucose \\Longrightarrow Acetyl-CoA"
},
{
"math_id": 3,
"text": "Acetyl-CoA \\Longrightarrow fatty \\; acid+ethanol"
},
{
"math_id": 4,
"text": "fatty \\; acid + ethanol \\Longrightarrow FAEE"
}
] | https://en.wikipedia.org/wiki?curid=10852103 |
1085343 | Semi-empirical mass formula | Formula to approximate nuclear mass based on nucleon counts
In nuclear physics, the semi-empirical mass formula (SEMF) (sometimes also called the Weizsäcker formula, Bethe–Weizsäcker formula, or Bethe–Weizsäcker mass formula to distinguish it from the Bethe–Weizsäcker process) is used to approximate the mass of an atomic nucleus from its number of protons and neutrons. As the name suggests, it is based partly on theory and partly on empirical measurements. The formula represents the liquid-drop model proposed by George Gamow, which can account for most of the terms in the formula and gives rough estimates for the values of the coefficients. It was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today.
The formula gives a good approximation for atomic masses and thereby other effects. However, it fails to explain the existence of lines of greater binding energy at certain numbers of protons and neutrons. These numbers, known as magic numbers, are the foundation of the nuclear shell model.
Liquid-drop model.
The liquid-drop model was first proposed by George Gamow and further developed by Niels Bohr, John Archibald Wheeler and Lise Meitner. It treats the nucleus as a drop of incompressible fluid of very high density, held together by the nuclear force (a residual effect of the strong force); there is a similarity to the structure of a spherical liquid drop. While a crude model, the liquid-drop model accounts for the spherical shape of most nuclei and makes a rough prediction of binding energy.
The corresponding mass formula is defined purely in terms of the numbers of protons and neutrons it contains. The original Weizsäcker formula defines five terms: a volume term, a surface term, a Coulomb term, an asymmetry term, and a pairing term, each of which is discussed in its own section below.
Formula.
The mass of an atomic nucleus, for formula_0 neutrons, formula_1 protons, and therefore formula_2 nucleons, is given by
formula_3
where formula_4 and formula_5 are the rest mass of a proton and a neutron respectively, and formula_6 is the binding energy of the nucleus. The semi-empirical mass formula states the binding energy is
formula_7
The formula_8 term is either zero or formula_9, depending on the parity of formula_0 and formula_1, where formula_10 for some exponent formula_11. Note that as formula_2, the numerator of the formula_12 term can be rewritten as formula_13.
Each of the terms in this formula has a theoretical basis. The coefficients formula_14, formula_15, formula_16, formula_12, and formula_17 are determined empirically; while they may be derived from experiment, they are typically derived from least-squares fit to contemporary data. While typically expressed by its basic five terms, further terms exist to explain additional phenomena. Akin to how changing a polynomial fit will change its coefficients, the interplay between these coefficients as new phenomena are introduced is complex; some terms influence each other, whereas the formula_17 term is largely independent.
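A minimal sketch of evaluating the formula numerically (not part of the original article). The coefficient values used are one commonly quoted least-squares set and are given here only as illustrative assumptions; any fitted set from the literature can be substituted.

```python
# Sketch (illustration only): evaluating the semi-empirical mass formula. The coefficients
# below (in MeV, with k_P = -1/2) are assumed purely for illustration; fitted values vary.
a_V, a_S, a_C, a_A, a_P = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(N, Z):
    A = N + Z
    if A % 2:                # odd A: no pairing contribution
        delta = 0.0
    else:                    # even A: +delta_0 for even Z, N; -delta_0 for odd Z, N
        delta = a_P / A**0.5 if Z % 2 == 0 else -a_P / A**0.5
    return (a_V * A
            - a_S * A ** (2 / 3)
            - a_C * Z * (Z - 1) / A ** (1 / 3)
            - a_A * (N - Z) ** 2 / A
            + delta)

# Example: iron-56 (Z = 26, N = 30); the measured value is about 8.79 MeV per nucleon.
E = binding_energy(30, 26)
print(f"E_B = {E:.1f} MeV, E_B/A = {E / 56:.2f} MeV per nucleon")
```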
Volume term.
The term formula_18 is known as the "volume term". The volume of the nucleus is proportional to "A", so this term is proportional to the volume, hence the name.
The basis for this term is the strong nuclear force. The strong force affects both protons and neutrons, and as expected, this term is independent of "Z". Because the number of pairs that can be taken from "A" particles is formula_19, one might expect a term proportional to formula_20. However, the strong force has a very limited range, and a given nucleon may only interact strongly with its nearest neighbors and next nearest neighbors. Therefore, the number of pairs of particles that actually interact is roughly proportional to "A", giving the volume term its form.
The coefficient formula_14 is smaller than the binding energy possessed by the nucleons with respect to their neighbors (formula_21), which is of order of 40 MeV. This is because the larger the number of nucleons in the nucleus, the larger their kinetic energy is, due to the Pauli exclusion principle. If one treats the nucleus as a Fermi ball of formula_22 nucleons, with equal numbers of protons and neutrons, then the total kinetic energy is formula_23, with formula_24 the Fermi energy, which is estimated as 38 MeV. Thus the expected value of formula_14 in this model is formula_25 not far from the measured value.
Surface term.
The term formula_26 is known as the "surface term". This term, also based on the strong force, is a correction to the volume term.
The volume term suggests that each nucleon interacts with a constant number of nucleons, independent of "A". While this is very nearly true for nucleons deep within the nucleus, those nucleons on the surface of the nucleus have fewer nearest neighbors, justifying this correction. This can also be thought of as a surface-tension term, and indeed a similar mechanism creates surface tension in liquids.
If the volume of the nucleus is proportional to "A", then the radius should be proportional to formula_27 and the surface area to formula_28. This explains why the surface term is proportional to formula_28. It can also be deduced that formula_15 should have a similar order of magnitude to formula_14.
Coulomb term.
The term formula_29 or formula_30 is known as the "Coulomb" or "electrostatic term".
The basis for this term is the electrostatic repulsion between protons. To a very rough approximation, the nucleus can be considered a sphere of uniform charge density. The potential energy of such a charge distribution can be shown to be
formula_31
where "Q" is the total charge, and "R" is the radius of the sphere. The value of formula_16 can be approximately calculated by using this equation to calculate the potential energy, using an empirical nuclear radius of formula_32 and "Q" = "Ze". However, because electrostatic repulsion will only exist for more than one proton, formula_33 becomes formula_34:
formula_35
where now the electrostatic Coulomb constant formula_16 is
formula_36
Using the fine-structure constant, we can rewrite the value of formula_16 as
formula_37
where formula_38 is the fine-structure constant, and formula_39 is the radius of a nucleus, giving formula_40 to be approximately 1.25 femtometers. formula_41 is the proton reduced Compton wavelength, and formula_4 is the proton mass. This gives formula_16 an approximate theoretical value of 0.691 MeV, not far from the measured value.
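As a numerical check (not part of the original article), using "e"2/(4π"ε"0) ≈ 1.44 MeV·fm and "r"0 ≈ 1.25 fm:

```latex
% Numerical check (illustration, not from the article):
\[
  a_\text{C} = \frac{3}{5}\,\frac{e^2}{4\pi\varepsilon_0 r_0}
             \approx \frac{3}{5}\cdot\frac{1.44~\mathrm{MeV\,fm}}{1.25~\mathrm{fm}}
             \approx 0.69~\mathrm{MeV},
\]
% consistent with the approximate theoretical value of 0.691 MeV quoted above.
```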
Asymmetry term.
The term formula_42 is known as the "asymmetry term" (or "Pauli term").
The theoretical justification for this term is more complex. The Pauli exclusion principle states that no two identical fermions can occupy exactly the same quantum state in an atom. At a given energy level, there are only finitely many quantum states available for particles. What this means in the nucleus is that as more particles are "added", these particles must occupy higher energy levels, increasing the total energy of the nucleus (and decreasing the binding energy). Note that this effect is not based on any of the fundamental forces (gravitational, electromagnetic, etc.), only the Pauli exclusion principle.
Protons and neutrons, being distinct types of particles, occupy different quantum states. One can think of two different "pools" of states – one for protons and one for neutrons. Now, for example, if there are significantly more neutrons than protons in a nucleus, some of the neutrons will be higher in energy than the available states in the proton pool. If we could move some particles from the neutron pool to the proton pool, in other words, change some neutrons into protons, we would significantly decrease the energy. The imbalance between the number of protons and neutrons causes the energy to be higher than it needs to be, "for a given number of nucleons". This is the basis for the asymmetry term.
The actual form of the asymmetry term can again be derived by modeling the nucleus as a Fermi ball of protons and neutrons. Its total kinetic energy is
formula_43
where formula_44 and formula_45 are the Fermi energies of the protons and neutrons. Since these are proportional to formula_46 and formula_47 respectively, one gets
formula_48 for some constant "C".
The leading terms in the expansion in the difference formula_49 are then
formula_50
At the zeroth order in the expansion the kinetic energy is just the overall Fermi energy formula_51 multiplied by formula_52. Thus we get
formula_53
The first term contributes to the volume term in the semi-empirical mass formula, and the second term is minus the asymmetry term (remember, the kinetic energy contributes to the total binding energy with a "negative" sign).
formula_24 is 38 MeV, so calculating formula_12 from the equation above, we get only half the measured value. The discrepancy is explained by our model not being accurate: nucleons in fact interact with each other and are not spread evenly across the nucleus. For example, in the shell model, a proton and a neutron with overlapping wavefunctions will have a greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons and neutrons to have the same quantum numbers (other than isospin), and thus increase the energy cost of asymmetry between them.
One can also understand the asymmetry term intuitively as follows. It should be dependent on the absolute difference formula_54, and the form formula_55 is simple and differentiable, which is important for certain applications of the formula. In addition, small differences between "Z" and "N" do not have a high energy cost. The "A" in the denominator reflects the fact that a given difference formula_54 is less significant for larger values of "A".
Pairing term.
The term formula_56 is known as the "pairing term" (possibly also known as the pairwise interaction). This term captures the effect of spin coupling. It is given by
formula_57
where formula_58 is found empirically to have a value of about 1000 keV, slowly decreasing with mass number "A". The binding energy may be increased by converting one of the odd protons or neutrons into a neutron or proton, so the odd nucleon can form a pair with its odd neighbour, forming an even "Z", "N". The pairs have overlapping wave functions and sit very close together with a bond stronger than any other configuration. When the pairing term is substituted into the binding energy equation, for even "Z", "N", the pairing term adds binding energy, and for odd "Z", "N" the pairing term removes binding energy.
The dependence on mass number is commonly parametrized as
formula_59
The value of the exponent "k"P is determined from experimental binding-energy data. In the past its value was often assumed to be −3/4, but modern experimental data indicate that a value of −1/2 is nearer the mark:
formula_60 or formula_61
Due to the Pauli exclusion principle the nucleus would have a lower energy if the number of protons with spin up were equal to the number of protons with spin down. This is also true for neutrons. Only if both "Z" and "N" are even, can both protons and neutrons have equal numbers of spin-up and spin-down particles. This is a similar effect to the asymmetry term.
The factor formula_62 is not easily explained theoretically. The Fermi-ball calculation we have used above, based on the liquid-drop model but neglecting interactions, will give an formula_63 dependence, as in the asymmetry term. This means that the actual effect for large nuclei will be larger than expected by that model. This should be explained by the interactions between nucleons. For example, in the shell model, two protons with the same quantum numbers (other than spin) will have completely overlapping wavefunctions and will thus have greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons to form pairs of opposite spin. The same is true for neutrons.
Calculating coefficients.
The coefficients are calculated by fitting to experimentally measured masses of nuclei. Their values can vary depending on how they are fitted to the data and which unit is used to express the mass. Several examples are as shown below.
The formula does not consider the internal shell structure of the nucleus.
The semi-empirical mass formula therefore provides a good fit to heavier nuclei, and a poor fit to very light nuclei, especially 4He. For light nuclei, it is usually better to use a model that takes this shell structure into account.
Examples of consequences of the formula.
By maximizing "E"b("A", "Z") with respect to "Z", one would find the best neutron–proton ratio "N"/"Z" for a given atomic weight "A". We get
formula_64
This is roughly 1 for light nuclei, but for heavy nuclei the ratio grows in good agreement with experiment.
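A minimal numerical sketch of this ratio (not part of the original article), using the same illustrative coefficient values assumed in the earlier sketch:

```python
# Sketch (illustration only): the neutron-to-proton ratio maximizing the binding energy,
# N/Z ~ 1 + (a_C / (2 a_A)) A^(2/3). The coefficient values (a_C = 0.714 MeV,
# a_A = 23.2 MeV) are assumed for illustration, as in the earlier sketch.
a_C, a_A = 0.714, 23.2

for A in (16, 56, 140, 238):
    print(A, round(1 + a_C / (2 * a_A) * A ** (2 / 3), 2))
# roughly 1.10 for A = 16, rising to about 1.59 for A = 238 (uranium-238 has N/Z ~ 1.59)
```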
By substituting the above value of "Z" back into "E"b, one obtains the binding energy as a function of the atomic weight, "E"b("A").
Maximizing "E"b("A")/"A" with respect to "A" gives the nucleus which is most strongly bound, i.e. most stable. The value we get is "A" = 63 (copper), close to the measured values of "A" = 62 (nickel) and "A" = 58 (iron).
The liquid-drop model also allows the computation of fission barriers for nuclei, which determine the stability of a nucleus against spontaneous fission. It was originally speculated that elements beyond atomic number 104 could not exist, as they would undergo fission with very short half-lives, though this formula did not consider stabilizing effects of closed nuclear shells. A modified formula considering shell effects reproduces known data and the predicted island of stability (in which fission barriers and half-lives are expected to increase, reaching a maximum at the shell closures), though also suggests a possible limit to existence of superheavy nuclei beyond "Z" = 120 and "N" = 184.
| [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "Z"
},
{
"math_id": 2,
"text": "A = N + Z"
},
{
"math_id": 3,
"text": "m = Z m_\\text{p} + N m_\\text{n} - \\frac{E_\\text{B}(N,Z)}{c^2},"
},
{
"math_id": 4,
"text": "m_\\text{p}"
},
{
"math_id": 5,
"text": "m_\\text{n}"
},
{
"math_id": 6,
"text": "E_\\text{B}"
},
{
"math_id": 7,
"text": "E_\\text{B} = a_\\text{V} A - a_\\text{S} A^{2/3} - a_\\text{C} \\frac{Z(Z - 1)}{A^{1/3}} - a_\\text{A} \\frac{(N - Z)^2} A \\pm \\delta(N, Z)."
},
{
"math_id": 8,
"text": "\\delta(N, Z)"
},
{
"math_id": 9,
"text": "\\pm\\delta_0"
},
{
"math_id": 10,
"text": "\\delta_0 = {a_\\text{P}}{A^{k_\\text{P}}}"
},
{
"math_id": 11,
"text": "k_\\text{P}"
},
{
"math_id": 12,
"text": "a_\\text{A}"
},
{
"math_id": 13,
"text": "(A - 2Z)^2"
},
{
"math_id": 14,
"text": "a_\\text{V}"
},
{
"math_id": 15,
"text": "a_\\text{S}"
},
{
"math_id": 16,
"text": "a_\\text{C}"
},
{
"math_id": 17,
"text": "a_\\text{P}"
},
{
"math_id": 18,
"text": "a_\\text{V} A"
},
{
"math_id": 19,
"text": "A(A - 1)/2"
},
{
"math_id": 20,
"text": "A^2"
},
{
"math_id": 21,
"text": "E_\\text{b}"
},
{
"math_id": 22,
"text": "A"
},
{
"math_id": 23,
"text": "\\tfrac{3}{5} A \\varepsilon_\\text{F}"
},
{
"math_id": 24,
"text": "\\varepsilon_\\text{F}"
},
{
"math_id": 25,
"text": "E_\\text{b} - \\tfrac{3}{5} \\varepsilon_\\text{F} \\sim 17~\\mathrm{MeV},"
},
{
"math_id": 26,
"text": "a_\\text{S} A^{2/3}"
},
{
"math_id": 27,
"text": "A^{1/3}"
},
{
"math_id": 28,
"text": "A^{2/3}"
},
{
"math_id": 29,
"text": "a_\\text{C} \\frac{Z(Z - 1)}{A^{1/3}}"
},
{
"math_id": 30,
"text": "a_\\text{C} \\frac{Z^2}{A^{1/3}}"
},
{
"math_id": 31,
"text": "E = \\frac{3}{5} \\frac{1}{4 \\pi \\varepsilon_0} \\frac{Q^2}{R},"
},
{
"math_id": 32,
"text": "R \\approx r_0 A^{\\frac{1}{3}}"
},
{
"math_id": 33,
"text": "Z^2"
},
{
"math_id": 34,
"text": "Z(Z - 1)"
},
{
"math_id": 35,
"text": "E = \\frac{3}{5} \\frac{1}{4 \\pi \\varepsilon_0} \\frac{Q^2}{R} = \\frac{3}{5} \\frac{1}{4 \\pi \\varepsilon_0} \\frac{(Ze)^2}{r_0 A^{1/3}} = \\frac{3 e^2 Z^2}{20 \\pi \\varepsilon_0 r_0 A^{1/3}} \\approx \\frac{3 e^2 Z(Z - 1)}{20 \\pi \\varepsilon_0 r_0 A^{1/3}} = a_\\text{C} \\frac{Z(Z - 1)}{A^{1/3}},"
},
{
"math_id": 36,
"text": "a_\\text{C} = \\frac{3 e^2}{20 \\pi \\varepsilon_0 r_0}."
},
{
"math_id": 37,
"text": "a_\\text{C} = \\frac{3}{5} \\frac{\\hbar c \\alpha}{r_0} = \\frac{3}{5} \\frac{R_\\text{P}}{r_0} \\alpha m_\\text{p} c^2,"
},
{
"math_id": 38,
"text": "\\alpha"
},
{
"math_id": 39,
"text": "r_0 A^{1/3}"
},
{
"math_id": 40,
"text": "r_0"
},
{
"math_id": 41,
"text": "R_\\text{P}"
},
{
"math_id": 42,
"text": "a_\\text{A} \\frac{(N - Z)^2}{A}"
},
{
"math_id": 43,
"text": "E_\\text{k} = \\frac{3}{5} (Z \\varepsilon_\\text{F,p} + N \\varepsilon_\\text{F,n}),"
},
{
"math_id": 44,
"text": "\\varepsilon_\\text{F,p}"
},
{
"math_id": 45,
"text": "\\varepsilon_\\text{F,n}"
},
{
"math_id": 46,
"text": "Z^{2/3}"
},
{
"math_id": 47,
"text": "N^{2/3}"
},
{
"math_id": 48,
"text": "E_\\text{k} = C (Z^{5/3} + N^{5/3})"
},
{
"math_id": 49,
"text": "N - Z"
},
{
"math_id": 50,
"text": "E_\\text{k} = \\frac{C}{2^{2/3}} \\left(A^{5/3} + \\frac{5}{9} \\frac{(N - Z)^2}{A^{1/3}}\\right) + O\\big((N - Z)^4\\big)."
},
{
"math_id": 51,
"text": "\\varepsilon_\\text{F} \\equiv \\varepsilon_\\text{F,p} = \\varepsilon_\\text{F,n}"
},
{
"math_id": 52,
"text": "\\tfrac{3}{5} A"
},
{
"math_id": 53,
"text": "E_\\text{k} = \\frac{3}{5} \\varepsilon_\\text{F} A + \\frac{1}{3} \\varepsilon_\\text{F} \\frac{(N - Z)^2}{A} + O\\big((N - Z)^4\\big)."
},
{
"math_id": 54,
"text": "|N - Z|"
},
{
"math_id": 55,
"text": "(N - Z)^2"
},
{
"math_id": 56,
"text": "\\delta(A, Z)"
},
{
"math_id": 57,
"text": "\\delta(A, Z) = \\begin{cases}\n +\\delta_0 & \\text{for even } Z, N ~(\\text{even } A), \\\\\n 0 & \\text{for odd } A, \\\\\n -\\delta_0 & \\text{for odd } Z, N ~(\\text{even } A),\n\\end{cases}"
},
{
"math_id": 58,
"text": "\\delta_0"
},
{
"math_id": 59,
"text": "\\delta_0 = a_\\text{P} A^{k_\\text{P}}."
},
{
"math_id": 60,
"text": "\\delta_0 = a_\\text{P} A^{-1/2}"
},
{
"math_id": 61,
"text": "\\delta_0 = a_\\text{P} A^{-3/4}."
},
{
"math_id": 62,
"text": "A^{k_\\text{P}}"
},
{
"math_id": 63,
"text": "A^{-1}"
},
{
"math_id": 64,
"text": "N/Z \\approx 1 + \\frac{a_\\text{C}}{2a_\\text{A}} A^{2/3}."
}
] | https://en.wikipedia.org/wiki?curid=1085343 |
10854684 | Karnaugh map | Graphical method to simplify Boolean expressions
The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 "logical diagram" (also known as a Marquand diagram), but now with a focus on its utility for switching circuits. Veitch charts are also known as Marquand–Veitch diagrams or, rarely, Svoboda charts, and Karnaugh maps as Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code, and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table. These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression gives the complement of the function (if F is the function, its complement is F'). Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.
Example.
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D and their inverses.
Construction.
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
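A short Python sketch makes the Gray-code property concrete: consecutive row/column labels (including the wrap-around from the last label back to the first) differ in exactly one bit, which is what the grouping rules below rely on.

```python
# Sketch: 2-bit Gray code used for the K-map row/column labels, checking
# that consecutive labels (including the wrap-around) differ in one bit.
def gray(n_bits):
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

labels = gray(2)                                  # [0b00, 0b01, 0b11, 0b10]
for a, b in zip(labels, labels[1:] + labels[:1]):
    assert bin(a ^ b).count("1") == 1
print([format(x, "02b") for x in labels])         # ['00', '01', '11', '10']
```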
Grouping.
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — a canonical form — for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2x2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD̄ would mean the cells where A is true and D is false (that is, D̄ is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, AD̄ can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as is B̄D̄, which includes the four corners.
Solution.
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping, "A" maintains the same state (1) in all four cells and "C" maintains the same state (0), while "B" and "D" change.
Thus the first minterm in the Boolean sum-of-products expression is AC̄.
For the green grouping, "A" and "B" maintain the same state, while "C" and "D" change. "B" is 0 and has to be negated before it can be included. The second term is therefore AB̄. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term BCD̄.
The solutions of each grouping are combined: the normal form of the circuit is formula_4.
Thus the Karnaugh map has guided a simplification of
formula_5
It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
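The simplification can also be checked mechanically. The Python sketch below brute-forces the truth table, assuming the minterm index is read as the binary number ABCD (A as the most significant bit), and confirms that AC̄ + AB̄ + BCD̄ is 1 exactly on the minterms {6, 8, 9, 10, 11, 12, 13, 14} of formula_0.

```python
from itertools import product

# Sketch: brute-force truth-table check that the grouped expression
# A*~C + A*~B + B*C*~D reproduces the original minterm list, with the
# index read as the binary number ABCD (A = most significant bit).
minterms = {6, 8, 9, 10, 11, 12, 13, 14}

def simplified(A, B, C, D):
    return (A and not C) or (A and not B) or (B and C and not D)

for A, B, C, D in product((0, 1), repeat=4):
    index = 8 * A + 4 * B + 2 * C + D
    assert bool(simplified(A, B, C, D)) == (index in minterms)
print("simplified SOP matches the truth table")
```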
Inverse.
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
This yields the inverse:
formula_6
Through the use of De Morgan's laws, the product of sums can be determined:
formula_7
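The product-of-sums form can be verified the same way; under the same ABCD indexing assumption, the sketch below checks that (A + B)(A + C)(B̄ + C̄ + D̄) reproduces the original truth table.

```python
from itertools import product

# Sketch: check that the product-of-sums form obtained via De Morgan,
# (A+B)(A+C)(~B+~C+~D), is 1 exactly on the same minterms as before.
minterms = {6, 8, 9, 10, 11, 12, 13, 14}

def pos(A, B, C, D):
    return (A or B) and (A or C) and ((not B) or (not C) or (not D))

for A, B, C, D in product((0, 1), repeat=4):
    assert bool(pos(A, B, C, D)) == (8 * A + 4 * B + 2 * C + D in minterms)
print("product-of-sums form matches")
```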
Don't cares.
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of "f"(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
formula_8
Note that the first term is just A, not AC̄. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
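A quick brute-force check, again assuming ABCD indexing, confirms that A + BCD̄ agrees with the original specification on every input except the unconstrained don't-care cell 15.

```python
from itertools import product

# Sketch: with minterm 15 treated as a "don't care", verify that the
# reduced expression A + B*C*~D agrees with the specification on every
# input that is actually constrained.
ones, dont_care = {6, 8, 9, 10, 11, 12, 13, 14}, {15}

def reduced(A, B, C, D):
    return A or (B and C and not D)

for A, B, C, D in product((0, 1), repeat=4):
    index = 8 * A + 4 * B + 2 * C + D
    if index not in dont_care:
        assert bool(reduced(A, B, C, D)) == (index in ones)
print("reduced expression consistent with all specified entries")
```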
The inverse case is simplified as follows:
formula_9
Through the use of De Morgan's laws, the product of sums can be determined:
formula_10
Race hazards.
Elimination.
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, "adjacent" has a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of formula_11 would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often needed to assure race-free dynamic performance.
Similarly, an additional term of formula_12 must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression for "f", but with a new factor of formula_13.
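The redundancy of the consensus term can be confirmed directly: adding AD̄ to the minimal sum leaves the static truth table unchanged, as the short sketch below shows.

```python
from itertools import product

# Sketch: the consensus term A*~D is statically redundant -- adding it to
# the minimal sum does not change the truth table; it only bridges the
# adjacent groupings to suppress the potential race hazard.
def f(A, B, C, D):
    return (A and not C) or (A and not B) or (B and C and not D)

def f_hazard_free(A, B, C, D):
    return f(A, B, C, D) or (A and not D)

assert all(bool(f(*v)) == bool(f_hazard_free(*v))
           for v in product((0, 1), repeat=4))
print("consensus term leaves the static logic unchanged")
```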
2-variable map examples.
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of formula_14 and the race-hazard-free ("see previous section") minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must have a size that is a power of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-Maps for four variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
Related graphical methods.
Related graphical minimization methods include:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(A, B, C, D) = \\sum_{}m_i, i \\in \\{6, 8, 9, 10, 11, 12, 13, 14\\}"
},
{
"math_id": 1,
"text": "m_i"
},
{
"math_id": 2,
"text": "f(A, B, C, D) = \\prod_{}M_i, i \\in \\{0, 1, 2, 3, 4, 5, 7, 15\\}"
},
{
"math_id": 3,
"text": "M_i"
},
{
"math_id": 4,
"text": "A\\overline{C} + A\\overline{B} + BC\\overline{D}"
},
{
"math_id": 5,
"text": "\\begin{align}\n f(A, B, C, D) = {} &\\overline{A}BC\\overline{D} + A\\overline{B}\\,\\overline{C}\\,\\overline{D} + A\\overline{B}\\,\\overline{C}D + A\\overline{B}C\\overline{D} + {}\\\\\n &A\\overline{B}CD + AB\\overline{C}\\,\\overline{D} + AB\\overline{C}D + ABC\\overline{D}\\\\\n = {} &A\\overline{C} + A\\overline{B} + BC\\overline{D}\n\\end{align}"
},
{
"math_id": 6,
"text": "\\overline{f(A,B,C,D)} = \\overline{A}\\,\\overline{B} + \\overline{A}\\,\\overline{C} + BCD"
},
{
"math_id": 7,
"text": "\\begin{align}\n f(A,B,C,D) &= \\overline{\\overline{f(A,B,C,D)}} \\\\\n &= \\overline{\\overline{A}\\,\\overline{B} + \\overline{A}\\,\\overline{C} + BCD} \\\\\n &= \\left(\\overline{\\overline{A}\\,\\overline{B}}\\right) \\left(\\overline{\\overline{A}\\,\\overline{C}}\\right) \\left(\\overline{BCD}\\right) \\\\\n &= \\left(A + B\\right)\\left(A + C\\right)\\left(\\overline{B} + \\overline{C} + \\overline{D}\\right)\n\\end{align}"
},
{
"math_id": 8,
"text": "f(A,B,C,D) = A + BC\\overline{D}"
},
{
"math_id": 9,
"text": "\\overline{f(A,B,C,D)} = \\overline{A}\\,\\overline{B} + \\overline{A}\\,\\overline{C} + \\overline{A}D"
},
{
"math_id": 10,
"text": "\\begin{align}\n f(A,B,C,D) &= \\overline{\\overline{f(A,B,C,D)}} \\\\\n &= \\overline{\\overline{A}\\,\\overline{B} + \\overline{A}\\,\\overline{C} + \\overline{A}\\,D} \\\\\n &= \\left(\\overline{\\overline{A}\\,\\overline{B}}\\right) \\left(\\overline{\\overline{A}\\,\\overline{C}}\\right) \\left(\\overline{\\overline{A}\\,D}\\right) \\\\\n &= \\left(A + B\\right)\\left(A + C\\right)\\left(A +\\overline{D}\\right)\n\\end{align}"
},
{
"math_id": 11,
"text": "A\\overline{D}"
},
{
"math_id": 12,
"text": "\\overline{A}D"
},
{
"math_id": 13,
"text": "\\left(A + \\overline{D}\\right)"
},
{
"math_id": 14,
"text": "\\sum m()"
}
] | https://en.wikipedia.org/wiki?curid=10854684 |
1085606 | Hypervalent molecule | Molecule containing main group elements with more than eight valence electrons
In chemistry, a hypervalent molecule (the phenomenon is sometimes colloquially known as expanded octet) is a molecule that contains one or more main group elements apparently bearing more than eight electrons in their valence shells. Phosphorus pentachloride (), sulfur hexafluoride (), chlorine trifluoride (), the chlorite () ion in chlorous acid and the triiodide () ion are examples of hypervalent molecules.
Definitions and nomenclature.
Hypervalent molecules were first formally defined by Jeremy I. Musher in 1969 as molecules having central atoms of group 15–18 in any valence other than the lowest (i.e. 3, 2, 1, 0 for Groups 15, 16, 17, 18 respectively, based on the octet rule).
Several specific classes of hypervalent molecules exist:
N-X-L notation.
N-X-L nomenclature, introduced collaboratively by the research groups of Martin, Arduengo, and Kochi in 1980, is often used to classify hypervalent compounds of main group elements, where:
Examples of N-X-L nomenclature include:
History and controversy.
The debate over the nature and classification of hypervalent molecules goes back to Gilbert N. Lewis and Irving Langmuir and the debate over the nature of the chemical bond in the 1920s. Lewis maintained the importance of the two-center two-electron (2c-2e) bond in describing hypervalence, thus using expanded octets to account for such molecules. Using the language of orbital hybridization, the bonds of molecules like PF5 and SF6 were said to be constructed from sp3dn orbitals on the central atom. Langmuir, on the other hand, upheld the dominance of the octet rule and preferred the use of ionic bonds to account for hypervalence without violating the rule (e.g. "SF42+ 2F−" for SF6).
In the late 1920s and 1930s, Sugden argued for the existence of a two-center one-electron (2c-1e) bond and thus rationalized bonding in hypervalent molecules without the need for expanded octets or ionic bond character; this was poorly accepted at the time. In the 1940s and 1950s, Rundle and Pimentel popularized the idea of the three-center four-electron bond, which is essentially the same concept which Sugden attempted to advance decades earlier; the three-center four-electron bond can be alternatively viewed as consisting of two collinear two-center one-electron bonds, with the remaining two nonbonding electrons localized to the ligands.
The attempt to actually prepare hypervalent organic molecules began with Hermann Staudinger and Georg Wittig in the first half of the twentieth century, who sought to challenge the extant valence theory and successfully prepare nitrogen and phosphorus-centered hypervalent molecules. The theoretical basis for hypervalency was not delineated until J.I. Musher's work in 1969.
In 1990, Magnusson published a seminal work definitively excluding the significance of d-orbital hybridization in the bonding of hypervalent compounds of second-row elements. This had long been a point of contention and confusion in describing these molecules using molecular orbital theory. Part of the confusion here originates from the fact that one must include d-functions in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result), and the contribution of the d-function to the molecular wavefunction is large. These facts were historically interpreted to mean that d-orbitals must be involved in bonding. However, Magnusson concludes in his work that d-orbital involvement is not implicated in hypervalency.
Nevertheless, a 2013 study showed that although the Pimentel ionic model best accounts for the bonding of hypervalent species, the energetic contribution of an expanded octet structure is also not null. In this modern valence bond theory study of the bonding of xenon difluoride, it was found that ionic structures account for about 81% of the overall wavefunction, of which 70% arises from ionic structures employing only the p orbital on xenon while 11% arises from ionic structures employing an formula_0hybrid on xenon. The contribution of a formally hypervalent structure employing an orbital of sp3d hybridization on xenon accounts for 11% of the wavefunction, with a diradical contribution making up the remaining 8%. The 11% sp3d contribution results in a net stabilization of the molecule by mol−1, a minor but significant fraction of the total bond energy ( mol−1). Other studies have similarly found minor but non-negligible energetic contributions from expanded octet structures in SF6 (17%) and XeF6 (14%).
Despite the lack of chemical realism, the IUPAC recommends the drawing of expanded octet structures for functional groups like sulfones and phosphoranes, in order to avoid the drawing of a large number of formal charges or partial single bonds.
Hypervalent hydrides.
A special type of hypervalent molecules is hypervalent hydrides. Most known hypervalent molecules contain substituents more electronegative than their central atoms. Hypervalent hydrides are of special interest because hydrogen is usually less electronegative than the central atom. A number of computational studies have been performed on chalcogen hydrides and pnictogen hydrides. Recently, a new computational study has shown that most hypervalent halogen hydrides XHn can exist. It is suggested that IH3 and IH5 are stable enough to be observable or, possibly, even isolable.
Criticism.
Both the term and concept of hypervalency still fall under criticism. In 1984, in response to this general controversy, Paul von Ragué Schleyer proposed the replacement of 'hypervalency' with use of the term hypercoordination because this term does not imply any mode of chemical bonding and the question could thus be avoided altogether.
The concept itself has been criticized by Ronald Gillespie who, based on an analysis of electron localization functions, wrote in 2002 that "as there is no fundamental difference between the bonds in hypervalent and non-hypervalent (Lewis octet) molecules there is no reason to continue to use the term hypervalent."
For hypercoordinated molecules with electronegative ligands such as PF5, it has been demonstrated that the ligands can pull away enough electron density from the central atom so that its net content is again 8 electrons or fewer. Consistent with this alternative view is the finding that hypercoordinated molecules based on fluorine ligands, for example PF5, do not have hydride counterparts; e.g., phosphorane (PH5) is unknown.
The ionic model holds up well in thermochemical calculations. It predicts favorable exothermic formation of PF4+F− from phosphorus trifluoride PF3 and fluorine F2 whereas a similar reaction forming PH4+H− is not favorable.
Alternative definition.
Durrant has proposed an alternative definition of hypervalency, based on the analysis of atomic charge maps obtained from atoms in molecules theory. This approach defines a parameter called the valence electron equivalent, γ, as “the formal shared electron count at a given atom, obtained by any combination of valid ionic and covalent resonance forms that reproduces the observed charge distribution”. For any particular atom X, if the value of γ(X) is greater than 8, that atom is hypervalent. Using this alternative definition, many species such as PCl5, SO42-, and XeF4, that are hypervalent by Musher's definition, are reclassified as hypercoordinate but not hypervalent, due to strongly ionic bonding that draws electrons away from the central atom. On the other hand, some compounds that are normally written with ionic bonds in order to conform to the octet rule, such as ozone O3, nitrous oxide NNO, and trimethylamine N-oxide (CH3)3NO, are found to be genuinely hypervalent. Examples of γ calculations for phosphate PO43− (γ(P) = 2.6, non-hypervalent) and orthonitrate NO43− (γ(N) = 8.5, hypervalent) are shown below.
Bonding in hypervalent molecules.
Early considerations of the geometry of hypervalent molecules returned familiar arrangements that were well explained by the VSEPR model for atomic bonding. Accordingly, AB5 and AB6 type molecules would possess a trigonal bi-pyramidal and octahedral geometry, respectively. However, in order to account for the observed bond angles, bond lengths and apparent violation of the Lewis octet rule, several alternative models have been proposed.
In the 1950s an expanded valence shell treatment of hypervalent bonding was adduced to explain the molecular architecture, where the central atom of penta- and hexacoordinated molecules would utilize d AOs in addition to s and p AOs. However, advances in the study of "ab initio" calculations have revealed that the contribution of d-orbitals to hypervalent bonding is too small to describe the bonding properties, and this description is now regarded as much less important. It was shown that in the case of hexacoordinated SF6, d-orbitals are not involved in S-F bond formation, but charge transfer between the sulfur and fluorine atoms and the apposite resonance structures were able to account for the hypervalency (See below).
Additional modifications to the octet rule have been attempted to involve ionic characteristics in hypervalent bonding. As one of these modifications, in 1951, the concept of the 3-center 4-electron (3c-4e) bond, which described hypervalent bonding with a qualitative molecular orbital, was proposed. The 3c-4e bond is described as three molecular orbitals given by the combination of a p atomic orbital on the central atom and an atomic orbital from each of the two ligands on opposite sides of the central atom. Only one of the two pairs of electrons is occupying a molecular orbital that involves bonding to the central atom, the second pair being non-bonding and occupying a molecular orbital composed of only atomic orbitals from the two ligands. This model in which the octet rule is preserved was also advocated by Musher.
Molecular orbital theory.
A complete description of hypervalent molecules arises from consideration of molecular orbital theory through quantum mechanical methods. In an LCAO treatment of, for example, sulfur hexafluoride, taking a basis set of the one sulfur 3s-orbital, the three sulfur 3p-orbitals, and six octahedral-geometry symmetry-adapted linear combinations (SALCs) of fluorine orbitals, a total of ten molecular orbitals is obtained (four fully occupied bonding MOs of the lowest energy, two fully occupied intermediate-energy non-bonding MOs, and four vacant antibonding MOs with the highest energy), providing room for all 12 valence electrons. This is a stable configuration only for S"X"6 molecules containing electronegative ligand atoms like fluorine, which explains why SH6 is not a stable molecule. In the bonding model, the two non-bonding MOs (1eg) are localized equally on all six fluorine atoms.
Valence bond theory.
For hypervalent compounds in which the ligands are more electronegative than the central, hypervalent atom, resonance structures can be drawn with no more than four covalent electron pair bonds and completed with ionic bonds to obey the octet rule. For example, in phosphorus pentafluoride (PF5), 5 resonance structures can be generated, each with four covalent bonds and one ionic bond, with greater weight given to the structures placing ionic character in the axial bonds, thus satisfying the octet rule and explaining both the observed trigonal bipyramidal molecular geometry and the fact that the axial bond length (158 pm) is longer than the equatorial (154 pm).
For a hexacoordinate molecule such as sulfur hexafluoride, each of the six bonds is the same length. The rationalization described above can be applied to generate 15 resonance structures each with four covalent bonds and two ionic bonds, such that the ionic character is distributed equally across each of the sulfur-fluorine bonds.
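One way to read these counts is purely combinatorial: if every resonance structure keeps four covalent bonds and makes the remaining bonds ionic, the number of structures is the number of ways to choose which bonds are ionic. The two-line Python check below illustrates the counting only, not the underlying quantum chemistry.

```python
from math import comb

# Sketch of the counting behind the resonance pictures described above:
# choose which bonds are ionic -- one of five for PF5, two of six for SF6.
print(comb(5, 1))   # 5  resonance structures for PF5
print(comb(6, 2))   # 15 resonance structures for SF6
```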
Spin-coupled valence bond theory has been applied to diazomethane and the resulting orbital analysis was interpreted in terms of a chemical structure in which the central nitrogen has five covalent bonds;
This led the authors to the interesting conclusion that "Contrary to what we were all taught as undergraduates, the nitrogen atom does indeed form five covalent linkages and the availability or otherwise of d-orbitals has nothing to do with this state of affairs."
Structure, reactivity, and kinetics.
Structure.
Hexacoordinated phosphorus.
Hexacoordinate phosphorus molecules involving nitrogen, oxygen, or sulfur ligands provide examples of Lewis acid-Lewis base hexacoordination. For the two similar complexes shown below, the length of the C–P bond increases with decreasing length of the N–P bond; the strength of the C–P bond decreases with increasing strength of the N–P Lewis acid–Lewis base interaction.
Pentacoordinated silicon.
This trend is also generally true of pentacoordinated main-group elements with one or more lone-pair-containing ligand, including the oxygen-pentacoordinated silicon examples shown below.
The Si-halogen bonds range from close to the expected van der Waals value in A (a weak bond) almost to the expected covalent single bond value in C (a strong bond).
Reactivity.
Silicon.
Corriu and coworkers performed early work characterizing reactions thought to proceed through a hypervalent transition state. Measurements of the reaction rates of hydrolysis of tetravalent chlorosilanes incubated with catalytic amounts of water returned a rate that is first order in chlorosilane and second order in water. This indicated that two water molecules interacted with the silane during hydrolysis and from this a binucleophilic reaction mechanism was proposed. Corriu and coworkers then measured the rates of hydrolysis in the presence of nucleophilic catalyst HMPT, DMSO or DMF. It was shown that the rate of hydrolysis was again first order in chlorosilane, first order in catalyst and now first order in water. Appropriately, the rates of hydrolysis also exhibited a dependence on the magnitude of charge on the oxygen of the nucleophile.
Taken together this led the group to propose a reaction mechanism in which there is a pre-rate determining nucleophilic attack of the tetracoordinated silane by the nucleophile (or water) in which a hypervalent pentacoordinated silane is formed. This is followed by a nucleophilic attack of the intermediate by water in a rate determining step leading to hexacoordinated species that quickly decomposes giving the hydroxysilane.
Silane hydrolysis was further investigated by Holmes and coworkers in which tetracoordinated Mes2SiF2 (Mes = mesityl) and pentacoordinated Mes2SiF3- were reacted with two equivalents of water. Following twenty-four hours, almost no hydrolysis of the tetracoordinated silane was observed, while the pentacoordinated silane was completely hydrolyzed after fifteen minutes. Additionally, X-ray diffraction data collected for the tetraethylammonium salts of the fluorosilanes showed the formation of hydrogen bisilonate lattice supporting a hexacoordinated intermediate from which HF2- is quickly displaced leading to the hydroxylated product. This reaction and crystallographic data support the mechanism proposed by Corriu "et al.".
The apparent increased reactivity of hypervalent molecules, contrasted with tetravalent analogues, has also been observed for Grignard reactions. The Corriu group measured Grignard reaction half-times by NMR for related 18-crown-6 potassium salts of a variety of tetra- and pentacoordinated fluorosilanes in the presence of catalytic amounts of nucleophile.
Though the half reaction method is imprecise, the magnitudinal differences in reactions rates allowed for a proposed reaction scheme wherein, a pre-rate determining attack of the tetravalent silane by the nucleophile results in an equilibrium between the neutral tetracoordinated species and the anionic pentavalent compound. This is followed by nucleophilic coordination by two Grignard reagents as normally seen, forming a hexacoordinated transition state and yielding the expected product.
The mechanistic implications of this are extended to a hexacoordinated silicon species that is thought to be active as a transition state in some reactions. The reaction of allyl- or crotyl-trifluorosilanes with aldehydes and ketones only proceeds with fluoride activation to give a pentacoordinated silicon. This intermediate then acts as a Lewis acid to coordinate with the carbonyl oxygen atom. The further weakening of the silicon–carbon bond as the silicon becomes hexacoordinate helps drive this reaction.
Phosphorus.
Similar reactivity has also been observed for other hypervalent structures such as the miscellany of phosphorus compounds, for which hexacoordinated transition states have been proposed.
Hydrolysis of phosphoranes and oxyphosphoranes has been studied and shown to be second order in water. Bel'skii "et al." have proposed a pre-rate-determining nucleophilic attack by water resulting in an equilibrium between the penta- and hexacoordinated phosphorus species, which is followed by a proton transfer involving the second water molecule in a rate-determining ring-opening step, leading to the hydroxylated product.
Alcoholysis of pentacoordinated phosphorus compounds, such as trimethoxyphospholene with benzyl alcohol, has also been postulated to occur through a similar octahedral transition state, as in hydrolysis, but without ring opening.
It can be understood from these experiments that the increased reactivity observed for hypervalent molecules, contrasted with analogous nonhypervalent compounds, can be attributed to the congruence of these species to the hypercoordinated activated states normally formed during the course of the reaction.
Ab initio calculations.
The enhanced reactivity at pentacoordinated silicon is not fully understood. Corriu and coworkers suggested that greater electropositive character at the pentavalent silicon atom may be responsible for its increased reactivity. Preliminary ab initio calculations supported this hypothesis to some degree, but used a small basis set.
A software program for ab initio calculations, Gaussian 86, was used by Dieters and coworkers to compare tetracoordinated silicon and phosphorus to their pentacoordinate analogues. This ab initio approach is used as a supplement to determine why reactivity improves in nucleophilic reactions with pentacoordinated compounds. For silicon, the 6-31+G* basis set was used because of its pentacoordinated anionic character and for phosphorus, the 6-31G* basis set was used.
Pentacoordinated compounds should theoretically be less electrophilic than tetracoordinated analogues due to steric hindrance and greater electron density from the ligands, yet experimentally show greater reactivity with nucleophiles than their tetracoordinated analogues. Advanced ab initio calculations were performed on series of tetracoordinated and pentacoordinated species to further understand this reactivity phenomenon. Each series varied by degree of fluorination. Bond lengths and charge densities are shown as functions of how many hydride ligands are on the central atoms. For every new hydride, there is one less fluoride.
For silicon and phosphorus bond lengths, charge densities, and Mulliken bond overlap, populations were calculated for tetra and pentacoordinated species by this ab initio approach. Addition of a fluoride ion to tetracoordinated silicon shows an overall average increase of 0.1 electron charge, which is considered insignificant. In general, bond lengths in trigonal bipyramidal pentacoordinate species are longer than those in tetracoordinate analogues. Si-F bonds and Si-H bonds both increase in length upon pentacoordination and related effects are seen in phosphorus species, but to a lesser degree. The reason for the greater magnitude in bond length change for silicon species over phosphorus species is the increased effective nuclear charge at phosphorus. Therefore, silicon is concluded to be more loosely bound to its ligands.
In addition Dieters and coworkers show an inverse correlation between bond length and bond overlap for all series. Pentacoordinated species are concluded to be more reactive because of their looser bonds as trigonal-bipyramidal structures.
By calculating the energies for the addition and removal of a fluoride ion in various silicon and phosphorus species, several trends were found. In particular, the tetracoordinated species have much higher energy requirements for ligand removal than do pentacoordinated species. Further, silicon species have lower energy requirements for ligand removal than do phosphorus species, which is an indication of weaker bonds in silicon.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{sd}_{z^2}"
}
] | https://en.wikipedia.org/wiki?curid=1085606 |
10858909 | Extensions of symmetric operators | Operation on self-adjoint operators
In functional analysis, one is interested in extensions of symmetric operators acting on a Hilbert space. Of particular importance is the existence, and sometimes explicit constructions, of self-adjoint extensions. This problem arises, for example, when one needs to specify domains of self-adjointness for formal expressions of observables in quantum mechanics. Other applications of solutions to this problem can be seen in various moment problems.
This article discusses a few related problems of this type. The unifying theme is that each problem has an operator-theoretic characterization which gives a corresponding parametrization of solutions. More specifically, finding self-adjoint extensions, with various requirements, of symmetric operators is equivalent to finding unitary extensions of suitable partial isometries.
Symmetric operators.
Let formula_0 be a Hilbert space. A linear operator formula_1 acting on formula_0 with dense domain formula_2 is symmetric if
formula_3
If formula_4, the Hellinger-Toeplitz theorem says that formula_1 is a bounded operator, in which case formula_1 is self-adjoint and the extension problem is trivial. In general, a symmetric operator is self-adjoint if the domain of its adjoint, formula_5, lies in formula_2.
When dealing with unbounded operators, it is often desirable to be able to assume that the operator in question is closed. In the present context, it is a convenient fact that every symmetric operator formula_1 is
closable. That is, formula_1 has the smallest closed extension, called the "closure" of formula_1. This can
be shown by invoking the symmetric assumption and Riesz representation theorem. Since formula_1 and its closure have the same closed extensions, it can always be assumed that the symmetric operator of interest is closed.
In the next section, a symmetric operator will be assumed to be densely defined and closed.
Self-adjoint extensions of symmetric operators.
If an operator formula_1 on the Hilbert space formula_0 is symmetric, when does it have self-adjoint extensions? An operator that has a unique self-adjoint extension is said to be essentially self-adjoint; equivalently, an operator is essentially self-adjoint if its closure (the operator whose graph is the closure of the graph of formula_1) is self-adjoint. In general, a symmetric operator could have many self-adjoint extensions or none at all. Thus, we would like a classification of its self-adjoint extensions.
The first basic criterion for essential self-adjointness is the following:
<templatestyles src="Math_theorem/styles.css" />
Theorem — If formula_1 is a symmetric operator on formula_0, then formula_1 is essentially self-adjoint if and only if the range of the operators formula_6 and formula_7 are dense in formula_0.
Equivalently, formula_1 is essentially self-adjoint if and only if the operators formula_8 have trivial kernels. That is to say, formula_1 "fails to be" self-adjoint if and only if formula_9 has an eigenvector with complex eigenvalues formula_10.
Another way of looking at the issue is provided by the Cayley transform of a self-adjoint operator and the deficiency indices.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Suppose formula_1 is a symmetric operator. Then there is a unique densely defined linear operator
formula_11
such that
formula_12
formula_13 is isometric on its domain. Moreover, formula_14 is dense in formula_0.
Conversely, given any densely defined operator formula_15 which is isometric on its (not necessarily closed) domain and such that formula_16 is dense, then there is a (unique) densely defined symmetric operator
formula_17
such that
formula_18
The mappings formula_19 and formula_20 are inverses of each other, i.e., formula_21.
The mapping formula_22 is called the Cayley transform. It associates a partially defined isometry to any symmetric densely defined operator. Note that the mappings formula_19 and formula_20 are monotone: This means that if formula_23 is a symmetric operator that extends the densely defined symmetric operator formula_1, then formula_24 extends formula_13, and similarly for formula_20.
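At the level of scalar spectral values, the Cayley transform and its inverse can be checked with a few lines of Python: for a real number a, w = (a − i)/(a + i) has modulus 1, and s(w) = i(1 + w)/(1 − w) recovers a, mirroring the operator identity formula_21. This is only a scalar sanity check, not a substitute for the operator-theoretic statement.

```python
import random

# Sketch at the level of scalar "spectral values": for real a, the map
# w = (a - i)/(a + i) lands on the unit circle, and s(w) = i(1 + w)/(1 - w)
# recovers a -- mirroring S(W(A)) = A for the operators above.
def W(a):
    return (a - 1j) / (a + 1j)

def S(w):
    return 1j * (1 + w) / (1 - w)

for _ in range(5):
    a = random.uniform(-10, 10)
    w = W(a)
    assert abs(abs(w) - 1) < 1e-12        # isometric: |w| = 1
    assert abs(S(w) - a) < 1e-9           # inverse map recovers a
print("scalar Cayley transform round-trips")
```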
<templatestyles src="Math_theorem/styles.css" />
Theorem — A necessary and sufficient condition for formula_1 to be self-adjoint is that its Cayley transform formula_13 be unitary on formula_0.
This immediately gives us a necessary and sufficient condition for formula_1 to have a self-adjoint extension, as follows:
<templatestyles src="Math_theorem/styles.css" />
Theorem — A necessary and sufficient condition for formula_1 to have a self-adjoint extension is that formula_13 have a unitary extension to formula_0.
A partially defined isometric operator formula_25 on a Hilbert space formula_0 has a unique isometric extension to the norm closure of formula_26. A partially defined isometric operator with closed domain is called a partial isometry.
Define the deficiency subspaces of "A" by
formula_27
In this language, the description of the self-adjoint extension problem given by the theorem can be restated as follows: a symmetric operator formula_1 has self-adjoint extensions if and only if the deficiency subspaces formula_28 and formula_29 have the same dimension.
The deficiency indices of a partial isometry formula_25 are defined as the dimension of the orthogonal complements of the domain and range:
formula_30
<templatestyles src="Math_theorem/styles.css" />
Theorem — A partial isometry formula_25 has a unitary extension if and only if the deficiency indices are identical. Moreover, formula_25 has a "unique" unitary extension if and only if the deficiency indices are both zero.
We see that there is a bijection between symmetric extensions of an operator and isometric extensions of its Cayley transform. The symmetric extension is self-adjoint if and only if the corresponding isometric extension is unitary.
A symmetric operator has a unique self-adjoint extension if and only if both its deficiency indices are zero. Such an operator is said to be essentially self-adjoint. Symmetric operators which are not essentially self-adjoint may still have a canonical self-adjoint extension. Such is the case for "non-negative" symmetric operators (or more generally, operators which are bounded below). These operators always have a canonically defined Friedrichs extension and for these operators we can define a canonical functional calculus. Many operators that occur in analysis are bounded below (such as the negative of the Laplacian operator), so the issue of essential self-adjointness for these operators is less critical.
Suppose formula_1 is symmetric densely defined. Then any symmetric extension of formula_1 is a restriction of formula_9. Indeed, formula_31 and formula_23 symmetric yields formula_32 by applying the definition of formula_5. This notion leads to the von Neumann formulae:
<templatestyles src="Math_theorem/styles.css" />
Theorem — Suppose formula_1 is a densely defined symmetric operator, with domain formula_2. Let
formula_33
be any pair of its deficiency subspaces. Then
formula_34
and
formula_35
where the decomposition is orthogonal relative to the graph inner product of formula_5:
formula_36
Example.
Consider the Hilbert space formula_37. On the subspace of absolutely continuous function that vanish on the boundary, define the operator formula_1 by
formula_38
Integration by parts shows formula_1 is symmetric. Its adjoint formula_9 is the same operator with formula_5 being the absolutely continuous functions with no boundary condition. We will see that extending "A" amounts to modifying the boundary conditions, thereby enlarging formula_2 and reducing formula_5, until the two coincide.
Direct calculation shows that formula_39 and formula_40 are one-dimensional subspaces given by
formula_41
where formula_42 is a normalizing constant. The self-adjoint extensions formula_43 of formula_1 are parametrized by the circle group formula_44. For each unitary transformation formula_45 defined by
formula_46
there corresponds an extension formula_43 with domain
formula_47
If formula_48, then formula_49 is absolutely continuous and
formula_50
Conversely, if formula_49 is absolutely continuous and formula_51 for some formula_52, then formula_49 lies in the above domain.
The self-adjoint operators formula_43 are instances of the momentum operator in quantum mechanics.
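A numerical sketch can illustrate this family of extensions. Assuming the boundary condition is written as f(0) = γf(1) with γ = e^{iθ} on the unit circle (the form appearing in the converse statement above), the plane waves e^{−i(θ+2πn)x} satisfy the boundary condition and are eigenfunctions of i d/dx with the real eigenvalues θ + 2πn, as expected for a self-adjoint momentum operator. The Python check below verifies this with a simple finite-difference derivative; the specific value of θ is an arbitrary choice for illustration.

```python
import numpy as np

# Sketch: for the boundary condition f(0) = gamma*f(1), gamma = exp(i*theta),
# the plane waves f_n(x) = exp(-i*(theta + 2*pi*n)*x) obey the boundary
# condition and i*f_n' = (theta + 2*pi*n)*f_n, a *real* eigenvalue.
theta = 0.7
gamma = np.exp(1j * theta)
x = np.linspace(0.0, 1.0, 2001)

for n in (-1, 0, 1, 2):
    lam = theta + 2 * np.pi * n
    f = np.exp(-1j * lam * x)
    assert abs(f[0] - gamma * f[-1]) < 1e-12        # boundary condition holds
    Af = 1j * np.gradient(f, x)                     # numerical i*d/dx
    # away from the endpoints the finite difference matches lam*f
    assert np.allclose(Af[5:-5], lam * f[5:-5], atol=1e-3)
print("twisted-periodic plane waves are eigenfunctions with real eigenvalues")
```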
Self-adjoint extension on a larger space.
Every partial isometry can be extended, on a possibly larger space, to a unitary operator. Consequently, every symmetric operator has a self-adjoint extension, on a possibly larger space.
Positive symmetric operators.
A symmetric operator formula_1 is called positive if
formula_53
It is known that for every such formula_1, one has formula_54. Therefore, every positive symmetric operator has self-adjoint extensions. The more interesting question in this direction is whether formula_1 has positive self-adjoint extensions.
For two positive operators formula_1 and formula_23, we put formula_55 if
formula_56
in the sense of bounded operators.
Structure of 2 × 2 matrix contractions.
While the extension problem for general symmetric operators is essentially that of extending partial isometries to unitaries, for positive symmetric operators the question becomes one of extending contractions: by "filling out" certain unknown entries of a 2 × 2 self-adjoint contraction, we obtain the positive self-adjoint extensions of a positive symmetric operator.
Before stating the relevant result, we first fix some terminology. For a contraction formula_57, acting on formula_0, we define its defect operators by
formula_58
The defect spaces of formula_57 are
formula_59
The defect operators indicate the non-unitarity of formula_57, while the defect spaces ensure uniqueness in some parameterizations.
Using this machinery, one can explicitly describe the structure of general matrix contractions. We will only need the 2 × 2 case. Every 2 × 2 contraction formula_57 can be uniquely expressed as
formula_60
where each formula_61 is a contraction.
Extensions of positive symmetric operators.
The Cayley transform for general symmetric operators can be adapted to this special case. For every non-negative number formula_62,
formula_63
This suggests we assign to every positive symmetric operator formula_1 a contraction
formula_64
defined by
formula_65
which have matrix representation
formula_66
It is easily verified that the formula_67 entry, formula_68 projected onto formula_69, is self-adjoint. The operator formula_1 can be written as
formula_70
with formula_71. If formula_72 is a contraction that extends formula_68 and its projection onto its domain is self-adjoint, then it is clear that its inverse Cayley transform
formula_73
defined on formula_74 is a positive symmetric extension of formula_1. The symmetric property follows from its projection onto its own domain being self-adjoint and positivity follows from contractivity. The converse is also true: given a positive symmetric extension of formula_1, its Cayley transform is a contraction satisfying the stated "partial" self-adjoint property.
The unitarity criterion of the Cayley transform is replaced by self-adjointness for positive operators.
Therefore, finding self-adjoint extension for a positive symmetric operator becomes a "matrix completion problem". Specifically, we need to embed the column contraction formula_68 into a 2 × 2 self-adjoint contraction. This can always be done and the structure of such contractions gives a parametrization of all possible extensions.
By the preceding subsection, all self-adjoint extensions of formula_68 takes the form
formula_75
So the self-adjoint positive extensions of formula_1 are in bijective correspondence with the self-adjoint contractions formula_76 on the defect space formula_77 of formula_78. The contractions formula_79 and formula_80 give rise to positive extensions formula_81 and formula_82 respectively. These are the "smallest" and "largest" positive extensions of formula_1 in the sense that
formula_83
for any positive self-adjoint extension formula_23 of formula_1. The operator formula_84 is the Friedrichs extension of formula_1 and formula_81 is the von Neumann-Krein extension of formula_1.
Similar results can be obtained for accretive operators.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "\\operatorname{dom}(A)"
},
{
"math_id": 3,
"text": "\\langle Ax, y\\rangle = \\langle x, A y\\rangle, \\quad \\forall x,y\\in\\operatorname{dom}(A)."
},
{
"math_id": 4,
"text": "\\operatorname{dom}(A) = H"
},
{
"math_id": 5,
"text": "\\operatorname{dom}(A^*)"
},
{
"math_id": 6,
"text": "A-i"
},
{
"math_id": 7,
"text": "A+i"
},
{
"math_id": 8,
"text": "A^* \\pm i"
},
{
"math_id": 9,
"text": "A^*"
},
{
"math_id": 10,
"text": "\\pm i"
},
{
"math_id": 11,
"text": "W(A) : \\operatorname{ran}(A + i) \\to \\operatorname{ran}(A - i)"
},
{
"math_id": 12,
"text": " W(A)(Ax + ix) = Ax - ix, \\quad x \\in \\operatorname{dom}(A). "
},
{
"math_id": 13,
"text": "W(A)"
},
{
"math_id": 14,
"text": "\\operatorname{ran}(1-W(A))"
},
{
"math_id": 15,
"text": "U"
},
{
"math_id": 16,
"text": "1-U"
},
{
"math_id": 17,
"text": " S(U) : \\operatorname{ran}(1 - U) \\to \\operatorname{ran}(1 + U)"
},
{
"math_id": 18,
"text": " S(U)(x - Ux) = i(x + U x), \\quad x \\in \\operatorname{dom}(U)."
},
{
"math_id": 19,
"text": "W"
},
{
"math_id": 20,
"text": "S"
},
{
"math_id": 21,
"text": "S(W(A))=A"
},
{
"math_id": 22,
"text": "A \\mapsto W(A)"
},
{
"math_id": 23,
"text": "B"
},
{
"math_id": 24,
"text": "W(B)"
},
{
"math_id": 25,
"text": "V"
},
{
"math_id": 26,
"text": "\\operatorname{dom}(V)"
},
{
"math_id": 27,
"text": "\\begin{align}\nK_+ &= \\operatorname{ran}(A+i)^{\\perp}\\\\\nK_- &= \\operatorname{ran}(A-i)^{\\perp}\n\\end{align}"
},
{
"math_id": 28,
"text": "K_{+}"
},
{
"math_id": 29,
"text": "K_{-}"
},
{
"math_id": 30,
"text": "\\begin{align}\n n_+(V) &= \\dim \\operatorname{dom}(V)^\\perp \\\\\n n_-(V) &= \\dim \\operatorname{ran}(V)^\\perp\n\\end{align}"
},
{
"math_id": 31,
"text": "A\\subseteq B"
},
{
"math_id": 32,
"text": "B \\subseteq A^*"
},
{
"math_id": 33,
"text": " N_\\pm = \\operatorname{ran}(A \\pm i)^\\perp,"
},
{
"math_id": 34,
"text": " N_\\pm = \\operatorname{ker}(A^* \\mp i),"
},
{
"math_id": 35,
"text": " \\operatorname{dom}\\left(A^*\\right) = \\operatorname{dom}\\left(\\overline{A}\\right) \\oplus N_+ \\oplus N_-,"
},
{
"math_id": 36,
"text": "\\langle \\xi \\mid \\eta \\rangle_\\text{graph} = \\langle \\xi \\mid \\eta \\rangle + \\left\\langle A^* \\xi \\mid A^* \\eta \\right\\rangle ."
},
{
"math_id": 37,
"text": "L^2([0,1])"
},
{
"math_id": 38,
"text": "A f = i \\frac{d}{dx} f."
},
{
"math_id": 39,
"text": "K_+"
},
{
"math_id": 40,
"text": "K_-"
},
{
"math_id": 41,
"text": "\\begin{align}\nK_+ &= \\operatorname{span} \\{\\phi_+ = c \\cdot e^x \\}\\\\\nK_- &= \\operatorname{span}\\{ \\phi_- = c \\cdot e^{-x} \\}\n\\end{align}"
},
{
"math_id": 42,
"text": "c"
},
{
"math_id": 43,
"text": "A_\\alpha"
},
{
"math_id": 44,
"text": "\\mathbb T = \\{\\alpha \\in \\mathbb C : |\\alpha| = 1 \\}"
},
{
"math_id": 45,
"text": "U_\\alpha : K_- \\to K_+"
},
{
"math_id": 46,
"text": "U_\\alpha (\\phi_-) =\\alpha \\phi_+"
},
{
"math_id": 47,
"text": " \\operatorname{dom}(A_{\\alpha}) = \\{ f + \\beta (\\alpha \\phi_{-} - \\phi_+) | f \\in \\operatorname{dom}(A) , \\; \\beta \\in \\mathbb{C} \\}."
},
{
"math_id": 48,
"text": "f \\in \\operatorname{dom}(A_\\alpha)"
},
{
"math_id": 49,
"text": "f"
},
{
"math_id": 50,
"text": "\\left|\\frac{f(0)}{f(1)}\\right| = \\left|\\frac{e\\alpha -1}{\\alpha - e}\\right| = 1."
},
{
"math_id": 51,
"text": "f(0)=\\gamma f(1)"
},
{
"math_id": 52,
"text": "\\gamma \\in \\mathbb{T}"
},
{
"math_id": 53,
"text": "\\langle A x, x\\rangle\\ge 0, \\quad \\forall x\\in \\operatorname{dom}(A)."
},
{
"math_id": 54,
"text": "\\operatorname{dim}K_+ = \\operatorname{dim}K_-"
},
{
"math_id": 55,
"text": "A\\leq B"
},
{
"math_id": 56,
"text": "(A + 1)^{-1} \\ge (B + 1)^{-1}"
},
{
"math_id": 57,
"text": "\\Gamma"
},
{
"math_id": 58,
"text": "\\begin{align} \n&D_{ \\Gamma }\\; = (1 - \\Gamma^*\\Gamma )^{\\frac{1}{2}}\\\\\n&D_{\\Gamma^*} = (1 - \\Gamma \\Gamma^*)^{\\frac{1}{2}}\n\\end{align}"
},
{
"math_id": 59,
"text": "\\begin{align} \n&\\mathcal{D}_{\\Gamma}\\; = \\operatorname{ran}( D_{\\Gamma} )\\\\ \n&\\mathcal{D}_{\\Gamma^*} = \\operatorname{ran}( D_{\\Gamma^*})\n\\end{align}"
},
{
"math_id": 60,
"text": "\n\\Gamma =\n\\begin{bmatrix}\n\\Gamma_1 & D_{\\Gamma_1 ^*} \\Gamma_2\\\\\n\\Gamma_3 D_{\\Gamma_1} & - \\Gamma_3 \\Gamma_1^* \\Gamma_2 + D_{\\Gamma_3 ^*} \\Gamma_4 D_{\\Gamma_2}\n\\end{bmatrix}\n"
},
{
"math_id": 61,
"text": "\\Gamma_i"
},
{
"math_id": 62,
"text": "a"
},
{
"math_id": 63,
"text": "\\left|\\frac{a-1}{a+1}\\right| \\le 1."
},
{
"math_id": 64,
"text": "C_A : \\operatorname{ran}(A + 1) \\rightarrow \\operatorname{ran}(A-1) \\subset H "
},
{
"math_id": 65,
"text": "C_A (A+1)x = (A-1)x. \\quad \\mbox{i.e.} \\quad C_A = (A-1)(A+1)^{-1}.\\,"
},
{
"math_id": 66,
"text": "\nC_A =\n\\begin{bmatrix}\n\\Gamma_1 \\\\\n\\Gamma_3 D_{\\Gamma_1}\n\\end{bmatrix}\n: \\operatorname{ran}(A+1) \\rightarrow\n\\begin{matrix}\n\\operatorname{ran}(A+1) \\\\\n\\oplus \\\\\n\\operatorname{ran}(A+1)^{\\perp}\n\\end{matrix}.\n"
},
{
"math_id": 67,
"text": "\\Gamma_1"
},
{
"math_id": 68,
"text": "C_A"
},
{
"math_id": 69,
"text": "\\operatorname{ran}(A+1)=\\operatorname{dom}(C_A)"
},
{
"math_id": 70,
"text": "A = (1+ C_A)(1 - C_A)^{-1} \\,"
},
{
"math_id": 71,
"text": "\\operatorname{dom}(A)=\\operatorname{ran}(C_A -1)"
},
{
"math_id": 72,
"text": "\\tilde{C}"
},
{
"math_id": 73,
"text": "\\tilde{A} = ( 1 + \\tilde{C} ) ( 1 - \\tilde{C} )^{-1} "
},
{
"math_id": 74,
"text": "\\operatorname{ran}( 1 - \\tilde{C})"
},
{
"math_id": 75,
"text": "\n\\tilde{C}(\\Gamma_4) =\n\\begin{bmatrix}\n\\Gamma_1 & D_{\\Gamma_1} \\Gamma_3 ^* \\\\\n\\Gamma_3 D_{\\Gamma_1} & - \\Gamma_3 \\Gamma_1 \\Gamma_3^* + D_{\\Gamma_3^*} \\Gamma_4 D_{\\Gamma_3^*}\n\\end{bmatrix}.\n"
},
{
"math_id": 76,
"text": "\\Gamma_4"
},
{
"math_id": 77,
"text": "\\mathcal{D}_{\\Gamma_3^*}"
},
{
"math_id": 78,
"text": "\\Gamma_3"
},
{
"math_id": 79,
"text": "\\tilde{C}(-1)"
},
{
"math_id": 80,
"text": "\\tilde{C}(1)"
},
{
"math_id": 81,
"text": "A_0"
},
{
"math_id": 82,
"text": "A_{\\infty}"
},
{
"math_id": 83,
"text": "A_0 \\leq B \\leq A_{\\infty}"
},
{
"math_id": 84,
"text": "A_\\infty"
}
] | https://en.wikipedia.org/wiki?curid=10858909 |
10861304 | Gosset–Elte figures | In geometry, the Gosset–Elte figures, named by Coxeter after Thorold Gosset and E. L. Elte, are a group of uniform polytopes which are not regular, generated by a Wythoff construction with mirrors all related by order-2 and order-3 dihedral angles. They can be seen as "one-end-ringed" Coxeter–Dynkin diagrams.
The Coxeter symbol for these figures has the form "k""i,j", where each letter represents a length of order-3 branches on a Coxeter–Dynkin diagram with a single ring on the end node of a "k" length sequence of branches. The vertex figure of "k""i,j" is ("k" − 1)"i,j", and each of its facets are represented by subtracting one from one of the nonzero subscripts, i.e. "k""i" − 1,"j" and "k""i","j" − 1.
Rectified simplices are included in the list as limiting cases with "k"=0. Similarly "0""i,j,k" represents a bifurcated graph with a central node ringed.
History.
Coxeter named these figures as "k""i,j" (or "k""ij") in shorthand and gave credit of their discovery to Gosset and Elte:
Elte's enumeration included all the "k""ij" polytopes except for the "1""42" which has 3 types of 6-faces.
The set of figures extends into honeycombs of the (2,2,2), (3,3,1), and (5,2,1) families in 6-, 7-, and 8-dimensional Euclidean spaces respectively. Gosset's list included the "5""21" honeycomb as the only semiregular one in his definition.
Definition.
The polytopes and honeycombs in this family can be seen within ADE classification.
A finite polytope "k""ij" exists if
formula_0
or equal for Euclidean honeycombs, and less for hyperbolic honeycombs.
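The criterion is easy to evaluate directly. The Python sketch below classifies a few familiar symbols; the triples are written as (k, i, j) for illustration only, and no attempt is made to deduplicate equivalent symbols.

```python
from fractions import Fraction

# Sketch: apply the stated criterion to a few symbols k_ij.
# sum > 1 -> finite polytope, sum = 1 -> Euclidean honeycomb,
# sum < 1 -> hyperbolic honeycomb.
def classify(k, i, j):
    s = Fraction(1, k + 1) + Fraction(1, i + 1) + Fraction(1, j + 1)
    return "finite" if s > 1 else "Euclidean" if s == 1 else "hyperbolic"

for k, i, j in [(2, 2, 1), (3, 2, 1), (4, 2, 1),   # 2_21, 3_21, 4_21
                (1, 2, 2), (2, 2, 2), (3, 3, 1),   # 1_22, 2_22, 3_31
                (5, 2, 1), (6, 2, 1)]:             # 5_21, 6_21
    print(f"{k}_{i}{j}: {classify(k, i, j)}")
```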
The Coxeter group [3i,j,k] can generate up to 3 unique uniform Gosset–Elte figures with Coxeter–Dynkin diagrams with one end node ringed. By Coxeter's notation, each figure is represented by kij to mean the end-node on the "k"-length sequence is ringed.
The simplex family can be seen as a limiting case with "k"=0, and all rectified (single-ring) Coxeter–Dynkin diagrams.
A-family [3"n"] (rectified simplices).
The family of "n"-simplices contain Gosset–Elte figures of the form 0ij as all rectified forms of the "n"-simplex ("i" + "j" = "n" − 1).
They are listed below, along with their Coxeter–Dynkin diagram, with each dimensional family drawn as a graphic orthogonal projection in the plane of the Petrie polygon of the regular simplex.
D-family [3"n"−3,1,1] demihypercube.
Each Dn group has two Gosset–Elte figures, the "n"-demihypercube as 1k1, and an alternated form of the "n"-orthoplex, k11, constructed with alternating simplex facets. Rectified "n"-demihypercubes, a lower symmetry form of a birectified "n"-cube, can also be represented as 0k11.
"E""n" family [3"n"−4,2,1].
Each En group from 4 to 8 has two or three Gosset–Elte figures, represented by one of the end-nodes ringed:k21, 1k2, 2k1. A rectified 1k2 series can also be represented as 0k21.
Euclidean and hyperbolic honeycombs.
There are three Euclidean (affine) Coxeter groups in dimensions 6, 7, and 8:
There are three hyperbolic (paracompact) Coxeter groups in dimensions 7, 8, and 9:
As a generalization, more order-3 branches can also be expressed in this symbol. The 4-dimensional affine Coxeter group, formula_1, [31,1,1,1], has four order-3 branches, and can express one honeycomb, 1111, which represents a lower symmetry form of the 16-cell honeycomb, and its rectification, 01111, the rectified 16-cell honeycomb. The 5-dimensional hyperbolic Coxeter group, formula_2, [31,1,1,1,1], has five order-3 branches, and can express one honeycomb, 11111, and its rectification, 011111.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{i+1}+\\frac{1}{j+1}+\\frac{1}{k+1}>1"
},
{
"math_id": 1,
"text": "{\\tilde{Q}}_4"
},
{
"math_id": 2,
"text": "{\\bar{L}}_4"
}
] | https://en.wikipedia.org/wiki?curid=10861304 |
1086619 | Riemannian submanifold | A Riemannian submanifold formula_2 of a Riemannian manifold formula_3 is a submanifold formula_2 of formula_3 equipped with the Riemannian metric inherited from formula_3.
Specifically, if formula_4 is a Riemannian manifold (with or without boundary) and formula_5 is an immersed submanifold or an embedded submanifold (with or without boundary), the pullback formula_6 of formula_7 is a Riemannian metric on formula_2, and formula_8 is said to be a Riemannian submanifold of formula_4. On the other hand, if formula_2 already has a Riemannian metric formula_9, then the immersion (or embedding) formula_5 is called an isometric immersion (or isometric embedding) if formula_10. Hence isometric immersions and isometric embeddings are Riemannian submanifolds.
For example, the n-sphere formula_11 is an embedded Riemannian submanifold of formula_1 via the inclusion map formula_12 that takes a point in formula_0 to the corresponding point in the superset formula_1. The induced metric on formula_0 is called the round metric.
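As a concrete illustration, the round metric can be computed symbolically as the pullback of the Euclidean metric under the standard spherical-coordinate parametrization of the 2-sphere; the sketch below uses SymPy, and the choice of coordinates is only one convenient chart, not part of the definition above.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# One chart of the inclusion S^2 -> R^3 in spherical coordinates
incl = sp.Matrix([sp.sin(theta) * sp.cos(phi),
                  sp.sin(theta) * sp.sin(phi),
                  sp.cos(theta)])

J = incl.jacobian([theta, phi])      # differential of the inclusion map

# Pullback of the Euclidean metric g = dx^2 + dy^2 + dz^2 is J^T J
induced = sp.simplify(J.T * J)
print(induced)                       # Matrix([[1, 0], [0, sin(theta)**2]])
```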
References.
| [
{
"math_id": 0,
"text": "S^n"
},
{
"math_id": 1,
"text": "\\mathbb R^{n+1}"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "(M,g)"
},
{
"math_id": 5,
"text": "i : N \\to M"
},
{
"math_id": 6,
"text": "i^* g"
},
{
"math_id": 7,
"text": "g"
},
{
"math_id": 8,
"text": "(N, i^*g)"
},
{
"math_id": 9,
"text": "\\tilde g"
},
{
"math_id": 10,
"text": "\\tilde g = i^* g"
},
{
"math_id": 11,
"text": "S^n = \\{ x \\in \\mathbb R^{n+1} : \\lVert x \\rVert = 1 \\}"
},
{
"math_id": 12,
"text": "S^n \\hookrightarrow \\mathbb R^{n+1}"
}
] | https://en.wikipedia.org/wiki?curid=1086619 |
1086656 | Taut submanifold | In mathematics, a (compact) taut submanifold "N" of a space form "M" is a compact submanifold with the property that for every formula_0 the distance function
formula_1
is a perfect Morse function.
If "N" is not compact, one needs to consider the restriction of the formula_2 to any of their sublevel sets. | [
{
"math_id": 0,
"text": "q\\in M"
},
{
"math_id": 1,
"text": "L_q:N\\to\\mathbf R,\\qquad L_q(x) = \\operatorname{dist}(x,q)^2"
},
{
"math_id": 2,
"text": "L_q"
}
] | https://en.wikipedia.org/wiki?curid=1086656 |
10867794 | Synchronization of chaos | Synchronization of chaos is a phenomenon that may occur when two or more dissipative chaotic systems are coupled.
Because of the exponential divergence of the nearby trajectories of chaotic systems, having two chaotic systems evolving in synchrony might appear surprising. However, synchronization of coupled or driven chaotic oscillators is a phenomenon well established experimentally and reasonably well-understood theoretically.
The stability of synchronization for coupled systems can be analyzed using the master stability function. Synchronization of chaos is a rich phenomenon and a multi-disciplinary subject with a broad range of applications.
Synchronization may present a variety of forms depending on the nature of the interacting systems, the type of coupling, and the proximity between the systems.
Identical synchronization.
This type of synchronization is also known as complete synchronization. It can be observed for identical chaotic systems.
The systems are said to be completely synchronized when there is a set of initial conditions so that the systems eventually
evolve identically in time. In the simplest case of two diffusively coupled identical systems, the dynamics is described by
formula_0
formula_1
where formula_2 is the vector field modeling the isolated chaotic dynamics and formula_3 is the coupling parameter.
The regime formula_4 defines an invariant subspace of the coupled system; if this subspace formula_4 is
locally attractive, then the coupled system exhibits identical synchronization.
If the coupling vanishes the oscillators are decoupled, and the chaotic behavior leads to a divergence of nearby trajectories. Complete synchronization
occurs due to the interaction if the coupling parameter is large enough that the divergence of trajectories of the interacting systems due to chaos is suppressed by the diffusive coupling. To find the critical coupling strength, we study the behavior of the difference formula_5. Assuming that formula_6 is
small we can expand the vector field in series and obtain a linear differential equation - by neglecting the Taylor remainder - governing the behavior of the difference
formula_7
where formula_8 denotes the Jacobian of the vector field along the solution. If formula_9 then we obtain
formula_10
and since the dynamics is chaotic we have formula_11,
where formula_12 denotes the maximum Lyapunov exponent of the isolated system. Now using the ansatz formula_13
we pass from the equation for formula_14 to the equation for formula_15. Therefore, we obtain
formula_16
This yields a critical coupling strength formula_17; for all formula_18 the system exhibits complete synchronization.
The existence of a critical coupling strength is related to the chaotic nature of the isolated dynamics.
In general, this reasoning leads to the correct critical coupling value for synchronization. However, in some cases one might
observe loss of synchronization for coupling strengths larger than the critical value. This occurs because the nonlinear terms
neglected in the derivation of the critical coupling value can play an important role and destroy the exponential bound for the
behavior of the difference. It is, however, possible to give a rigorous treatment of this problem and obtain a critical value so that the
nonlinearities will not affect the stability.
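A minimal numerical illustration of this threshold is sketched below, assuming two identical Lorenz systems with the standard parameters and diffusive coupling in every component; the often-quoted maximal Lyapunov exponent of roughly 0.9 for these parameters, and hence the estimate formula_17 ≈ 0.45, is taken as given rather than computed.

```python
import numpy as np

def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def final_separation(alpha, t_end=50.0, dt=0.001):
    """Integrate x' = F(x) + a(y - x), y' = F(y) + a(x - y) with a simple
    Euler scheme and return |x - y| at the final time."""
    u = np.array([1.0, 1.0, 1.0])
    v = np.array([-3.0, 2.0, 20.0])          # different initial conditions
    for _ in range(int(t_end / dt)):
        du = lorenz(u) + alpha * (v - u)
        dv = lorenz(v) + alpha * (u - v)
        u, v = u + dt * du, v + dt * dv
    return np.linalg.norm(u - v)

for alpha in (0.0, 0.2, 2.0):
    print(alpha, final_separation(alpha))
# Couplings well above the linear estimate (about 0.45 here) drive the
# separation to essentially zero; weaker couplings do not.
```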
Generalized synchronization.
This type of synchronization occurs mainly when the coupled chaotic oscillators are different, although it has also been reported between identical oscillators. Given the dynamical variables formula_19 and formula_20 that determine the state of the oscillators, generalized synchronization occurs when there is a functional, formula_21, such that, after a transitory evolution from appropriate initial conditions, one has formula_22. This means that the dynamical state of one of the oscillators is completely determined by the state of the other. When the oscillators are mutually coupled this functional has to be invertible; if there is a drive-response configuration, the drive determines the evolution of the response, and Φ does not need to be invertible. Identical synchronization is the particular case of generalized synchronization when formula_21 is the identity.
Phase synchronization.
Phase synchronization occurs when the coupled chaotic oscillators keep their phase difference bounded while their amplitudes remain uncorrelated.
This phenomenon occurs even if the oscillators are not identical. Observation of phase synchronization requires a previous definition of the phase of a chaotic oscillator. In many practical cases, it is possible to find a plane in phase space in which the projection of the trajectories of the oscillator follows a rotation around a well-defined center. If this is the case, the phase is defined by the angle, φ(t), described by the segment joining the center of rotation and the projection of the trajectory point onto the plane. In other cases it is still possible to define a phase by means of techniques provided by the theory of signal processing, such as the Hilbert transform. In any case, if φ1(t) and φ2(t) denote the phases of the two coupled oscillators, synchronization of the phase is given by the relation nφ1(t)=mφ2(t) with m and n whole numbers.
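The Hilbert-transform definition of the phase is easy to apply in practice. The following sketch extracts an instantaneous phase from a sampled signal using SciPy; the test signal here is a synthetic stand-in for a measured chaotic time series. Phase synchronization can then be checked by testing whether nφ1(t) − mφ2(t) remains bounded.

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 10.0, 5000)
s = np.cos(2 * np.pi * 1.3 * t) * (1.0 + 0.3 * np.sin(2 * np.pi * 0.2 * t))

analytic = hilbert(s)                    # analytic signal s + i H[s]
phase = np.unwrap(np.angle(analytic))    # instantaneous phase phi(t)
amplitude = np.abs(analytic)             # instantaneous amplitude
```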
Anticipated and lag synchronization.
In these cases, the synchronized state is characterized by a time interval τ such that the dynamical variables of the oscillators, formula_19 and formula_23, are related by formula_24; this means that the dynamics of one of the oscillators follows, or anticipates, the dynamics of the other. Anticipated synchronization may occur between chaotic oscillators whose dynamics is described by delay differential equations, coupled in a drive-response configuration. In this case, the response anticipates the dynamics of the drive. Lag synchronization may occur when the strength of the coupling between phase-synchronized oscillators is increased.
Amplitude envelope synchronization.
This is a mild form of synchronization that may appear between two weakly coupled chaotic oscillators. In this case, there is no correlation between phases nor amplitudes; instead, the oscillations of the two systems develop a periodic envelope that has the same frequency in the two systems.
The frequency of this envelope has the same order of magnitude as the difference between the average frequencies of oscillation of the two chaotic oscillators. Often, amplitude envelope synchronization precedes phase synchronization in the sense that when the strength of the coupling between two amplitude envelope synchronized oscillators is increased, phase synchronization develops.
All these forms of synchronization share the property of asymptotic stability. This means that once the synchronized state has been reached, the effect of a small perturbation that destroys synchronization is rapidly damped, and synchronization is recovered again. Mathematically, asymptotic stability is characterized by a positive Lyapunov exponent of the system composed of the two oscillators, which becomes negative when chaotic synchronization is achieved.
Some chaotic systems allow even stronger control of chaos, and both synchronization of chaos and control of chaos constitute parts of what's known as "cybernetical physics".
Notes.
| [
{
"math_id": 0,
"text": " x^{\\prime} = F(x) + \\alpha(y-x)"
},
{
"math_id": 1,
"text": " y^{\\prime} = F(y) + \\alpha(x-y)"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "x(t) = y(t)"
},
{
"math_id": 5,
"text": " v = x - y"
},
{
"math_id": 6,
"text": " v "
},
{
"math_id": 7,
"text": " v^{\\prime} = DF(x(t)) v - 2 \\alpha v "
},
{
"math_id": 8,
"text": " DF(x(t)) "
},
{
"math_id": 9,
"text": " \\alpha = 0"
},
{
"math_id": 10,
"text": " u^{\\prime} = DF(x(t)) u, "
},
{
"math_id": 11,
"text": " \\| u(t) \\| \\le \\| u(0) \\| e^{\\lambda t}"
},
{
"math_id": 12,
"text": " \\lambda "
},
{
"math_id": 13,
"text": " v = u e^{-2 \\alpha t} "
},
{
"math_id": 14,
"text": "v"
},
{
"math_id": 15,
"text": "u"
},
{
"math_id": 16,
"text": " \\|v(t) \\| \\le \\|u(0)\\| e^{(-2 \\alpha + \\lambda) t} "
},
{
"math_id": 17,
"text": " \\alpha_c = \\lambda/2"
},
{
"math_id": 18,
"text": " \\alpha > \\alpha_c"
},
{
"math_id": 19,
"text": "(x_1,x_2,...x_n)"
},
{
"math_id": 20,
"text": "(y_1,y_2,...y_m)"
},
{
"math_id": 21,
"text": "\\phi"
},
{
"math_id": 22,
"text": "[y_1(t),y_2(t),...,y_m(t)=\\phi[x_1(t),x_2(t),...,x_n(t)]"
},
{
"math_id": 23,
"text": "(x'_1,x'_2,...x'_n)"
},
{
"math_id": 24,
"text": "x'_i(t)=x_i(t+\\tau)"
}
] | https://en.wikipedia.org/wiki?curid=10867794 |
10868511 | Markovian arrival process | In queueing theory, a discipline within the mathematical theory of probability, a Markovian arrival process (MAP or MArP) is a mathematical model for the time between job arrivals to a system. The simplest such process is a Poisson process where the time between each arrival is exponentially distributed.
The processes were first suggested by Marcel F. Neuts in 1979.
Definition.
A Markov arrival process is defined by two matrices, "D"0 and "D"1 where elements of "D"0 represent hidden transitions and elements of "D"1 observable transitions. The block matrix "Q" below is a transition rate matrix for a continuous-time Markov chain.
formula_0
The simplest example is a Poisson process, where "D"0 = −"λ" and "D"1 = "λ": there is only one possible transition, it is observable, and it occurs at rate "λ". For "Q" to be a valid transition rate matrix, the following restrictions apply to the "D""i"
formula_1
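The definition translates directly into a simulation: the chain sits in a phase, waits an exponential time with the rate given by the diagonal of "D"0, and then jumps; jumps drawn from "D"1 are recorded as arrivals. The sketch below is a minimal illustration, and the 2-phase matrices are hypothetical values chosen only to satisfy the restrictions above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_map(D0, D1, n_arrivals=10, state=0):
    """Return the first n_arrivals arrival epochs of the MAP (D0, D1)."""
    D0, D1 = np.asarray(D0, float), np.asarray(D1, float)
    n = D0.shape[0]
    t, arrivals = 0.0, []
    while len(arrivals) < n_arrivals:
        rate = -D0[state, state]                 # total exit rate of the phase
        t += rng.exponential(1.0 / rate)
        weights = np.concatenate([D0[state], D1[state]])
        weights[state] = 0.0                     # drop the diagonal entry of D0
        idx = rng.choice(weights.size, p=weights / weights.sum())
        if idx >= n:                             # transition taken from D1: an arrival
            arrivals.append(t)
        state = idx % n
    return arrivals

# Hypothetical 2-phase example; rows of D0 + D1 sum to zero as required.
D0 = [[-3.0, 1.0], [0.5, -4.0]]
D1 = [[1.5, 0.5], [1.0, 2.5]]
print(simulate_map(D0, D1))
```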
Special cases.
Phase-type renewal process.
The phase-type renewal process is a Markov arrival process with phase-type distributed sojourn between arrivals. For example, if an arrival process has an interarrival time distribution PHformula_2 with an exit vector denoted formula_3, the arrival process has generator matrix,
formula_4
Generalizations.
Batch Markov arrival process.
The batch Markovian arrival process ("BMAP") is a generalisation of the Markovian arrival process by allowing more than one arrival at a time. The homogeneous case has rate matrix,
formula_5
An arrival of size formula_6 occurs every time a transition occurs in the sub-matrix formula_7. The sub-matrices formula_7 have elements formula_8, each the rate of a Poisson process, such that
formula_9
formula_10
formula_11
and
formula_12
Markov-modulated Poisson process.
The Markov-modulated Poisson process, or MMPP, is a MAP in which "m" Poisson processes are switched between by an underlying continuous-time Markov chain. If each of the "m" Poisson processes has rate "λ""i" and the modulating continuous-time Markov chain has "m" × "m" transition rate matrix "R", then the MAP representation is
formula_13
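For example, the two matrices of a small MMPP can be assembled directly from the rates and the modulating generator; the numerical values below are arbitrary illustrations.

```python
import numpy as np

lam = np.array([2.0, 10.0])            # Poisson rates of the two phases
R = np.array([[-0.1, 0.1],
              [0.3, -0.3]])            # generator of the modulating chain

D1 = np.diag(lam)
D0 = R - D1

assert np.allclose((D0 + D1).sum(axis=1), 0.0)   # rows of D0 + D1 sum to zero
```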
Fitting.
A MAP can be fitted using an expectation–maximization algorithm.
References.
| [
{
"math_id": 0,
"text": "\nQ=\\left[\\begin{matrix}\nD_{0}&D_{1}&0&0&\\dots\\\\\n0&D_{0}&D_{1}&0&\\dots\\\\\n0&0&D_{0}&D_{1}&\\dots\\\\\n\\vdots & \\vdots & \\ddots & \\ddots & \\ddots\n\\end{matrix}\\right]\\; ."
},
{
"math_id": 1,
"text": "\\begin{align}\n0\\leq [D_{1}]_{i,j}&<\\infty \\\\\n0\\leq [D_{0}]_{i,j}&<\\infty \\quad i\\neq j \\\\\n\\, [D_{0}]_{i,i}&<0 \\\\\n(D_{0}+D_{1})\\boldsymbol{1} &= \\boldsymbol{0}\n\\end{align}"
},
{
"math_id": 2,
"text": "(\\boldsymbol{\\alpha},S)"
},
{
"math_id": 3,
"text": "\\boldsymbol{S}^{0}=-S\\boldsymbol{1}"
},
{
"math_id": 4,
"text": "\nQ=\\left[\\begin{matrix}\nS&\\boldsymbol{S}^{0}\\boldsymbol{\\alpha}&0&0&\\dots\\\\\n0&S&\\boldsymbol{S}^{0}\\boldsymbol{\\alpha}&0&\\dots\\\\\n0&0&S&\\boldsymbol{S}^{0}\\boldsymbol{\\alpha}&\\dots\\\\\n\\vdots&\\vdots&\\ddots&\\ddots&\\ddots\\\\\n\\end{matrix}\\right]\n"
},
{
"math_id": 5,
"text": "\nQ=\\left[\\begin{matrix}\nD_{0}&D_{1}&D_{2}&D_{3}&\\dots\\\\\n0&D_{0}&D_{1}&D_{2}&\\dots\\\\\n0&0&D_{0}&D_{1}&\\dots\\\\\n\\vdots & \\vdots & \\ddots & \\ddots & \\ddots\n\\end{matrix}\\right]\\; ."
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "D_{k}"
},
{
"math_id": 8,
"text": "\\lambda_{i,j}"
},
{
"math_id": 9,
"text": "\n0\\leq [D_{k}]_{i,j}<\\infty\\;\\;\\;\\; 1\\leq k\n"
},
{
"math_id": 10,
"text": "\n0\\leq [D_{0}]_{i,j}<\\infty\\;\\;\\;\\; i\\neq j\n"
},
{
"math_id": 11,
"text": "\n[D_{0}]_{i,i}<0\\; \n"
},
{
"math_id": 12,
"text": "\n\\sum^{\\infty}_{k=0}D_{k}\\boldsymbol{1}=\\boldsymbol{0}\n"
},
{
"math_id": 13,
"text": "\\begin{align}\nD_{1} &= \\operatorname{diag}\\{\\lambda_{1},\\dots,\\lambda_{m}\\}\\\\\nD_{0} &=R-D_1.\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10868511 |
1087009 | Space form | In mathematics, a space form is a complete Riemannian manifold "M" of constant sectional curvature "K". The three most fundamental examples are Euclidean "n"-space, the "n"-dimensional sphere, and hyperbolic space, although a space form need not be simply connected.
Reduction to generalized crystallography.
The Killing–Hopf theorem of Riemannian geometry states that the universal cover of an "n"-dimensional space form formula_0 with curvature formula_1 is isometric to formula_2, hyperbolic space, with curvature formula_3 is isometric to formula_4, Euclidean "n"-space, and with curvature formula_5 is isometric to formula_6, the n-dimensional sphere of points distance 1 from the origin in formula_7.
By rescaling the Riemannian metric on formula_2, we may create a space formula_8 of constant curvature formula_9 for any formula_10. Similarly, by rescaling the Riemannian metric on formula_6, we may create a space formula_8 of constant curvature formula_9 for any formula_11. Thus the universal cover of a space form formula_12 with constant curvature formula_9 is isometric to formula_8.
This reduces the problem of studying space forms to studying discrete groups of isometries formula_13 of formula_8 which act properly discontinuously. Note that the fundamental group of formula_12, formula_14, will be isomorphic to formula_13. Groups acting in this manner on formula_4 are called crystallographic groups. Groups acting in this manner on formula_15 and formula_16 are called Fuchsian groups and Kleinian groups, respectively. | [
{
"math_id": 0,
"text": "M^n"
},
{
"math_id": 1,
"text": "K = -1"
},
{
"math_id": 2,
"text": "H^n"
},
{
"math_id": 3,
"text": "K = 0"
},
{
"math_id": 4,
"text": "R^n"
},
{
"math_id": 5,
"text": "K = +1"
},
{
"math_id": 6,
"text": "S^n"
},
{
"math_id": 7,
"text": "R^{n+1}"
},
{
"math_id": 8,
"text": "M_K"
},
{
"math_id": 9,
"text": "K"
},
{
"math_id": 10,
"text": "K < 0"
},
{
"math_id": 11,
"text": "K > 0"
},
{
"math_id": 12,
"text": "M"
},
{
"math_id": 13,
"text": "\\Gamma"
},
{
"math_id": 14,
"text": "\\pi_1(M)"
},
{
"math_id": 15,
"text": "H^2"
},
{
"math_id": 16,
"text": "H^3"
}
] | https://en.wikipedia.org/wiki?curid=1087009 |
10871335 | Gammatone filter | Linear filter
A gammatone filter is a linear filter described by an impulse response that is the product of a gamma distribution and sinusoidal tone. It is a widely used model of auditory filters in the auditory system.
A gammatone response was originally proposed in 1972 as a description of revcor functions measured in the cochlear nucleus of cats.
The gammatone impulse response is given by
formula_0
where
formula_1 (in Hz) is the center frequency,
formula_2 (in radians) is the phase of the carrier,
formula_3 is the amplitude,
formula_4 is the filter's order,
formula_5 (in Hz) is the filter's bandwidth, and
formula_6 (in seconds) is time.
This time-domain impulse response is a sinusoid (a pure tone) with an amplitude envelope which is a scaled gamma distribution function.
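Sampling the impulse response is straightforward; the sketch below evaluates the formula in NumPy for one illustrative choice of parameters (a fourth-order filter at 1 kHz with a 125 Hz bandwidth, values chosen only for illustration).

```python
import numpy as np

def gammatone_ir(t, f, b, n=4, a=1.0, phi=0.0):
    """a * t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t + phi), evaluated for t >= 0."""
    t = np.asarray(t, dtype=float)
    return a * t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t + phi)

fs = 16000                                   # sample rate in Hz (illustrative)
t = np.arange(0, 0.05, 1.0 / fs)             # 50 ms of samples
g = gammatone_ir(t, f=1000.0, b=125.0, n=4)  # one filter of a possible filterbank
```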
Gammatone filterbank cepstral coefficients (GFCCs) are auditory features that have been used first in the speech domain, and later in the field of underwater target recognition.
A bank of gammatone filters is used as an improvement on the triangular filters conventionally used in mel scale filterbanks and MFCC features.
Different ways of motivating the gammatone filter for auditory processing have been presented by
Johannesma,
Patterson et al.,
Hewitt and Meddis,
and Lindeberg and Friberg.
Variations.
Variations and improvements of the gammatone model of auditory filtering include the complex gammatone filter, the gammachirp filter, the all-pole and one-zero gammatone filters, the two-sided gammatone filter, and filter-cascade models, and various level-dependent and dynamically nonlinear versions of these. Lindeberg and Friberg define a new family of generalized gammatone filters.
References.
| [
{
"math_id": 0,
"text": "\ng(t) = at^{n-1} e^{-2\\pi bt} \\cos(2\\pi ft + \\phi), \\,\n"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=10871335 |
10874374 | Dynkin index | In mathematics, the Dynkin index formula_0 of finite-dimensional highest-weight representations of a compact simple Lie algebra formula_1 relates their trace forms via
formula_2
In the particular case where formula_3 is the highest root, so that formula_4 is the adjoint representation, the Dynkin index formula_5 is equal to the dual Coxeter number.
Here formula_6 denotes the trace form on the representation formula_7. By Schur's lemma, since the trace forms are all invariant forms, they are related by constants, so the index is well-defined.
Since the trace forms are bilinear forms, we can take traces to obtain
formula_8
where the Weyl vector
formula_9
is equal to half of the sum of all the positive roots of formula_1. The expression formula_10 is the value of the quadratic Casimir in the representation formula_4. | [
{
"math_id": 0,
"text": "I({\\lambda})"
},
{
"math_id": 1,
"text": "\\mathfrak g"
},
{
"math_id": 2,
"text": " \\frac{\\text{Tr}_{V_\\lambda}}{\\text{Tr}_{V_\\mu}}= \\frac{I(\\lambda)}{I(\\mu)}."
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "V_\\lambda"
},
{
"math_id": 5,
"text": "I(\\lambda)"
},
{
"math_id": 6,
"text": "\\text{Tr}_V"
},
{
"math_id": 7,
"text": "\\rho: \\mathfrak{g} \\rightarrow \\text{End}(V)"
},
{
"math_id": 8,
"text": "I(\\lambda)=\\frac{\\dim V_\\lambda}{2\\dim\\mathfrak g}(\\lambda, \\lambda +2\\rho)"
},
{
"math_id": 9,
"text": "\\rho=\\frac{1}{2}\\sum_{\\alpha\\in \\Delta^+} \\alpha"
},
{
"math_id": 10,
"text": "(\\lambda, \\lambda +2\\rho)"
}
] | https://en.wikipedia.org/wiki?curid=10874374 |
10874531 | Pert | Pert or PERT may refer to:
See also.
This page lists articles associated with the title Pert. | [
{
"math_id": 0,
"text": "P e ^ {rt}"
}
] | https://en.wikipedia.org/wiki?curid=10874531 |
1087483 | Projectionless C*-algebra | In mathematics, a projectionless C*-algebra is a C*-algebra with no nontrivial projections. For a unital C*-algebra, the projections 0 and 1 are trivial, while for a non-unital C*-algebra only 0 is considered trivial. The problem of whether simple infinite-dimensional C*-algebras with this property exist was posed in 1958 by Irving Kaplansky, and the first example of one was published in 1981 by Bruce Blackadar. For a commutative C*-algebra, being projectionless is equivalent to its spectrum being connected. Due to this, being projectionless can be considered a noncommutative analogue of a connected space.
Dimension drop algebras.
Let formula_0 be the class consisting of the C*-algebras formula_1 for each formula_2, and let formula_3 be the class of all C*-algebras of the form
formula_4,
where formula_5 are integers, and where formula_6 belong to formula_7.
Every C*-algebra A in formula_3 is projectionless; moreover, its only projection is 0.
References.
| [
{
"math_id": 0,
"text": "\\mathcal{B}_0"
},
{
"math_id": 1,
"text": "C_0(\\mathbb{R}), C_0(\\mathbb{R}^2), D_n, SD_n"
},
{
"math_id": 2,
"text": "n \\geq 2"
},
{
"math_id": 3,
"text": "\\mathcal{B}"
},
{
"math_id": 4,
"text": "M_{k_1}(B_1) \\oplus M_{k_2}(B_2) \\oplus ... \\oplus M_{k_r}(B_r) "
},
{
"math_id": 5,
"text": "r, k_1, ..., k_r "
},
{
"math_id": 6,
"text": "B_1, ..., B_r "
},
{
"math_id": 7,
"text": "\\mathcal{B}_0 "
}
] | https://en.wikipedia.org/wiki?curid=1087483 |
10875031 | Periodic continued fraction | In mathematics, an infinite periodic continued fraction is a continued fraction that can be placed in the form
formula_0
where the initial block ["a""0", "a"1..."a""k"] of "k"+1 partial denominators is followed by a block ["a""k"+1, "a""k"+2..."a""k"+"m"] of "m" partial denominators that repeats "ad infinitum". For example, formula_1 can be expanded to the periodic continued fraction [1;2,2,2,...].
This article considers only the case of periodic regular continued fractions. In other words, the remainder of this article assumes that all the partial denominators "a""i" ("i" ≥ 1) are positive integers. The general case, where the partial denominators "a""i" are arbitrary real or complex numbers, is treated in the article convergence problem.
Purely periodic and periodic fractions.
Since all the partial numerators in a regular continued fraction are equal to unity we can adopt a shorthand notation in which the continued fraction shown above is written as
formula_2
where, in the second line, a vinculum marks the repeating block. Some textbooks use the notation
formula_3
where the repeating block is indicated by dots over its first and last terms.
If the initial non-repeating block is not present – that is, if k = -1, a0 = am and
formula_4
the regular continued fraction "x" is said to be "purely periodic". For example, the regular continued fraction [1; 1, 1, 1, ...] of the golden ratio φ is purely periodic, while the regular continued fraction [1; 2, 2, 2, ...] of formula_1 is periodic, but not purely periodic.
As unimodular matrices.
Periodic continued fractions are in one-to-one correspondence with the real quadratic irrationals. The correspondence is explicitly provided by Minkowski's question-mark function. That article also reviews tools that make it easy to work with such continued fractions. Consider first the purely periodic part
formula_5
This can, in fact, be written as
formula_6
with the formula_7 being integers, and satisfying formula_8 Explicit values can be obtained by writing
formula_9
which is termed a "shift", so that
formula_10
and similarly a reflection, given by
formula_11
so that formula_12. Both of these matrices are unimodular, and arbitrary products of them remain unimodular. Then, given formula_13 as above, the corresponding matrix is of the form
formula_14
and one has
formula_15
as the explicit form. As all of the matrix entries are integers, this matrix belongs to the modular group formula_16
Relation to quadratic irrationals.
A quadratic irrational number is an irrational real root of the quadratic equation
formula_17
where the coefficients "a", "b", and "c" are integers, and the discriminant, "b"2 − 4"ac", is greater than zero. By the quadratic formula, every quadratic irrational can be written in the form
formula_18
where "P", "D", and "Q" are integers, "D" > 0 is not a perfect square (but not necessarily square-free), and "Q" divides the quantity "P"2 − "D" (for example (6+√8)/4). Such a quadratic irrational may also be written in another form with a square-root of a square-free number (for example (3+√2)/2) as explained for quadratic irrationals.
By considering the complete quotients of periodic continued fractions, Euler was able to prove that if "x" is a regular periodic continued fraction, then "x" is a quadratic irrational number. The proof is straightforward. From the fraction itself, one can construct the quadratic equation with integral coefficients that "x" must satisfy.
Lagrange proved the converse of Euler's theorem: if "x" is a quadratic irrational, then the regular continued fraction expansion of "x" is periodic. Given a quadratic irrational "x" one can construct "m" different quadratic equations, each with the same discriminant, that relate the successive complete quotients of the regular continued fraction expansion of "x" to one another. Since there are only finitely many of these equations (the coefficients are bounded), the complete quotients (and also the partial denominators) in the regular continued fraction that represents "x" must eventually repeat.
Reduced surds.
The quadratic surd formula_19 is said to be "reduced" if formula_20 and its conjugate formula_21
satisfies the inequalities formula_22. For instance, the golden ratio formula_23 is a reduced surd because it is greater than one and its conjugate formula_24 is greater than −1 and less than zero. On the other hand, the square root of two formula_25 is greater than one but is not a reduced surd because its conjugate formula_26 is less than −1.
Galois proved that the regular continued fraction which represents a quadratic surd ζ is purely periodic if and only if ζ is a reduced surd. In fact, Galois showed more than this. He also proved that if ζ is a reduced quadratic surd and η is its conjugate, then the continued fractions for ζ and for (−1/η) are both purely periodic, and the repeating block in one of those continued fractions is the mirror image of the repeating block in the other. In symbols we have
formula_27
where ζ is any reduced quadratic surd, and η is its conjugate.
From these two theorems of Galois a result already known to Lagrange can be deduced. If "r" > 1 is a rational number that is not a perfect square, then
formula_28
In particular, if "n" is any non-square positive integer, the regular continued fraction expansion of √"n" contains a repeating block of length "m", in which the first "m" − 1 partial denominators form a palindromic string.
Length of the repeating block.
By analyzing the sequence of combinations
formula_29
that can possibly arise when ζ = ("P" + √"D")/"Q" is expanded as a regular continued fraction, Lagrange showed that the largest partial denominator "a""i" in the expansion is less than 2√"D", and that the length of the repeating block is less than 2"D".
More recently, sharper arguments based on the divisor function have shown that the length of the repeating block for a quadratic surd of discriminant "D" is on the order of formula_30
Canonical form and repetend.
The following iterative algorithm can be used to obtain the continued fraction expansion in canonical form ("S" is any natural number that is not a perfect square):
formula_31
formula_32
formula_33
formula_34
formula_35
formula_36
Notice that "m"n, "d"n, and "a"n are always integers.
The algorithm terminates when this triplet is the same as one encountered before.
The algorithm can also terminate on ai when ai = 2 a0, which is easier to implement.
The expansion will repeat from then on. The sequence ["a"0; "a"1, "a"2, "a"3, ...] is the continued fraction expansion:
formula_37
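A direct transcription of this iteration into Python (a minimal sketch, using the termination test "a""i" = 2"a"0 mentioned above):

```python
from math import isqrt

def sqrt_cf(S):
    """Continued fraction of sqrt(S) for a non-square positive integer S:
    returns (a0, period) with sqrt(S) = [a0; period repeated]."""
    a0 = isqrt(S)
    if a0 * a0 == S:
        raise ValueError("S must not be a perfect square")
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:
        m = d * a - m
        d = (S - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return a0, period

print(sqrt_cf(114))   # (10, [1, 2, 10, 2, 1, 20])
print(sqrt_cf(2))     # (1, [2])
```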
Example.
To obtain √114 as a continued fraction, begin with "m"0 = 0; "d"0 = 1; and "a"0 = 10 (since 10² = 100 ≤ 114 and 11² = 121 > 114, 10 is chosen).
formula_38
formula_39
formula_40
formula_41
So, "m"1 = 10; "d"1 = 14; and "a"1 = 1.
formula_42
Next, "m"2 = 4; "d"2 = 7; and "a"2 = 2.
formula_43
formula_44
formula_45
formula_46
formula_47
Now, loop back to the second equation above.
Consequently, the simple continued fraction for the square root of 114 is
formula_48 (sequence in the OEIS)
√114 is approximately 10.67707 82520. After one expansion of the repetend, the continued fraction yields the rational fraction formula_49 whose decimal value is approx. 10.67707 80856, a relative error of
0.0000016% or 1.6 parts in 100,000,000.
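This value can be reproduced by running the standard convergent recurrence over one full repetend:

```python
from fractions import Fraction

def convergents(terms):
    """Yield the convergents h_n/k_n of [a0; a1, a2, ...]."""
    h_prev, h = 1, terms[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in terms[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)

last = list(convergents([10, 1, 2, 10, 2, 1, 20]))[-1]
print(last, float(last))   # 21194/1985 = 10.677078085642...
```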
Generalized continued fraction.
A more rapid method is to evaluate its generalized continued fraction. From the formula derived there:
formula_50
and the fact that 114 is 2/3 of the way between 10² = 100 and 11² = 121 results in
formula_51
which is simply the aforementioned [10;1,2, 10,2,1, 20,1,2] evaluated at every third term. Combining pairs of fractions produces
formula_52
which is now formula_53 evaluated at the third term and every six terms thereafter.
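Evaluating this generalized continued fraction numerically from the bottom up shows the rapid convergence; the depth used in the sketch below is an arbitrary choice.

```python
def sqrt_gcf(x, y, depth=6):
    """Evaluate x + y/(2x + y/(2x + ...)) with the given number of levels."""
    value = 2.0 * x
    for _ in range(depth - 1):
        value = 2.0 * x + y / value
    return x + y / value

# sqrt(114) = sqrt(1026)/3 with 1026 = 32**2 + 2
print(sqrt_gcf(32, 2) / 3)   # 10.6770782520...
print(114 ** 0.5)            # 10.6770782520...
```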
Notes.
References.
| [
{
"math_id": 0,
"text": "\nx = a_0 + \\cfrac{1}{a_1 + \\cfrac{1}{a_2 + \\cfrac{1}{\\quad\\ddots\\quad a_k + \\cfrac{1}{a_{k+1} + \\cfrac{\\ddots}{\\quad\\ddots\\quad a_{k+m-1} + \\cfrac{1}{a_{k+m} + \\cfrac{1}{a_{k+1} + \\cfrac{1}{a_{k+2} + {\\ddots}}}}}}}}}\n"
},
{
"math_id": 1,
"text": "\\sqrt2"
},
{
"math_id": 2,
"text": "\n\\begin{align}\nx& = [a_0;a_1,a_2,\\dots,a_k,a_{k+1},a_{k+2},\\dots,a_{k+m},a_{k+1},a_{k+2},\\dots,a_{k+m},\\dots]\\\\\n& = [a_0;a_1,a_2,\\dots,a_k,\\overline{a_{k+1},a_{k+2},\\dots,a_{k+m}}]\n\\end{align}\n"
},
{
"math_id": 3,
"text": "\n\\begin{align}\nx& = [a_0;a_1,a_2,\\dots,a_k,\\dot a_{k+1},a_{k+2},\\dots,\\dot a_{k+m}]\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\nx = [\\overline{a_0;a_1,a_2,\\dots,a_{m-1}}],\n"
},
{
"math_id": 5,
"text": " \nx = [0;\\overline{a_1,a_2,\\dots,a_m}],\n"
},
{
"math_id": 6,
"text": "x = \\frac{\\alpha x+\\beta}{\\gamma x+\\delta}"
},
{
"math_id": 7,
"text": "\\alpha,\\beta,\\gamma,\\delta"
},
{
"math_id": 8,
"text": "\\alpha \\delta-\\beta \\gamma=1."
},
{
"math_id": 9,
"text": "S = \\begin{pmatrix} 1 & 0\\\\ 1 & 1\\end{pmatrix}"
},
{
"math_id": 10,
"text": "S^n = \\begin{pmatrix} 1 & 0\\\\ n & 1\\end{pmatrix}"
},
{
"math_id": 11,
"text": "T\\mapsto \\begin{pmatrix} -1 & 1\\\\ 0 & 1\\end{pmatrix}"
},
{
"math_id": 12,
"text": "T^2=I"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "S^{a_1}TS^{a_2}T\\cdots TS^{a_m} = \\begin{pmatrix} \\alpha & \\beta\\\\ \\gamma & \\delta\\end{pmatrix}"
},
{
"math_id": 15,
"text": "x = [0;\\overline{a_1,a_2,\\dots,a_m}] = \\frac{\\alpha x+\\beta}{\\gamma x+\\delta}"
},
{
"math_id": 16,
"text": "SL(2,\\mathbb{Z})."
},
{
"math_id": 17,
"text": "\nax^2 + bx + c = 0\n"
},
{
"math_id": 18,
"text": "\n\\zeta = \\frac{P+\\sqrt{D}}{Q}\n"
},
{
"math_id": 19,
"text": "\\zeta = \\frac{P + \\sqrt{D}}{Q}"
},
{
"math_id": 20,
"text": "\\zeta > 1"
},
{
"math_id": 21,
"text": "\\eta = \\frac{P - \\sqrt{D}}{Q}"
},
{
"math_id": 22,
"text": "-1 < \\eta < 0"
},
{
"math_id": 23,
"text": "\\phi = (1 + \\sqrt{5})/2 = 1.618033..."
},
{
"math_id": 24,
"text": "(1 - \\sqrt{5})/2 = -0.618033..."
},
{
"math_id": 25,
"text": "\\sqrt{2} = (0 + \\sqrt{8})/2"
},
{
"math_id": 26,
"text": "-\\sqrt{2} = (0 - \\sqrt{8})/2"
},
{
"math_id": 27,
"text": "\n\\begin{align}\n\\zeta& = [\\overline{a_0;a_1,a_2,\\dots,a_{m-1}}]\\\\[3pt]\n\\frac{-1}{\\eta}& = [\\overline{a_{m-1};a_{m-2},a_{m-3},\\dots,a_0}]\\,\n\\end{align}\n"
},
{
"math_id": 28,
"text": "\n\\sqrt{r} = [a_0;\\overline{a_1,a_2,\\dots,a_2,a_1,2a_0}].\n"
},
{
"math_id": 29,
"text": "\n\\frac{P_n + \\sqrt{D}}{Q_n}\n"
},
{
"math_id": 30,
"text": "\n\\mathcal{O}(\\sqrt{D}\\ln{D}).\n"
},
{
"math_id": 31,
"text": "m_0=0\\,\\!"
},
{
"math_id": 32,
"text": "d_0=1\\,\\!"
},
{
"math_id": 33,
"text": "a_0=\\left\\lfloor\\sqrt{S}\\right\\rfloor\\,\\!"
},
{
"math_id": 34,
"text": "m_{n+1}=d_na_n-m_n\\,\\!"
},
{
"math_id": 35,
"text": "d_{n+1}=\\frac{S-m_{n+1}^2}{d_n}\\,\\!"
},
{
"math_id": 36,
"text": "a_{n+1} =\\left\\lfloor\\frac{\\sqrt{S}+m_{n+1}}{d_{n+1}}\\right\\rfloor =\\left\\lfloor\\frac{a_0+m_{n+1}}{d_{n+1}}\\right\\rfloor\\!."
},
{
"math_id": 37,
"text": "\\sqrt{S} = a_0 + \\cfrac{1}{a_1 + \\cfrac{1}{a_2 + \\cfrac{1}{a_3+\\,\\ddots}}} "
},
{
"math_id": 38,
"text": "\n\\begin{align}\n\\sqrt{114} & = \\frac{\\sqrt{114}+0}{1} = 10+\\frac{\\sqrt{114}-10}{1} = 10+\\frac{(\\sqrt{114}-10)(\\sqrt{114}+10)}{\\sqrt{114}+10} \\\\\n& = 10+\\frac{114-100}{\\sqrt{114}+10} = 10+\\frac{1}{\\frac{\\sqrt{114}+10}{14}}.\n\\end{align}\n"
},
{
"math_id": 39,
"text": "m_{1} = d_{0} \\cdot a_{0} - m_{0} = 1 \\cdot 10 - 0 = 10 \\,."
},
{
"math_id": 40,
"text": "d_{1} = \\frac{S-m_{1}^2}{d_0} = \\frac{114-10^2}{1} = 14 \\,."
},
{
"math_id": 41,
"text": "a_{1} = \\left\\lfloor \\frac{a_0+m_{1}}{d_{1}} \\right\\rfloor = \\left\\lfloor \\frac{10+10}{14} \\right\\rfloor = \\left\\lfloor \\frac{20}{14} \\right\\rfloor = 1 \\,."
},
{
"math_id": 42,
"text": "\n\\frac{\\sqrt{114}+10}{14} = 1+\\frac{\\sqrt{114}-4}{14} = 1+\\frac{114-16}{14(\\sqrt{114}+4)} = 1+\\frac{1}{\\frac{\\sqrt{114}+4}{7}}.\n"
},
{
"math_id": 43,
"text": "\n\\frac{\\sqrt{114}+4}{7} = 2+\\frac{\\sqrt{114}-10}{7} = 2+\\frac{14}{7(\\sqrt{114}+10)} = 2+\\frac{1}{\\frac{\\sqrt{114}+10}{2}}.\n"
},
{
"math_id": 44,
"text": "\\frac{\\sqrt{114}+10}{2}=10+\\frac{\\sqrt{114}-10}{2}=10+\\frac{14}{2(\\sqrt{114}+10)} = 10+\\frac{1}{\\frac{\\sqrt{114}+10}{7}}."
},
{
"math_id": 45,
"text": "\\frac{\\sqrt{114}+10}{7}=2+\\frac{\\sqrt{114}-4}{7}=2+\\frac{98}{7(\\sqrt{114}+4)} = 2+\\frac{1}{\\frac{\\sqrt{114}+4}{14}}."
},
{
"math_id": 46,
"text": "\\frac{\\sqrt{114}+4}{14}=1+\\frac{\\sqrt{114}-10}{14}=1+\\frac{14}{14(\\sqrt{114}+10)} = 1+\\frac{1}{\\frac{\\sqrt{114}+10}{1}}."
},
{
"math_id": 47,
"text": "\\frac{\\sqrt{114}+10}{1}=20+\\frac{\\sqrt{114}-10}{1}=20+\\frac{14}{\\sqrt{114}+10} = 20+\\frac{1}{\\frac{\\sqrt{114}+10}{14}}."
},
{
"math_id": 48,
"text": "\\sqrt{114} = [10;\\overline{1,2,10,2,1,20}].\\,"
},
{
"math_id": 49,
"text": "\\frac{21194}{1985}"
},
{
"math_id": 50,
"text": "\n\\begin{align}\n\\sqrt{z} = \\sqrt{x^2+y} &= x+\\cfrac{y} {2x+\\cfrac{y} {2x+\\cfrac{y} {2x+\\ddots}}} \\\\\n &= x+\\cfrac{2x \\cdot y}{2(2z-y)-y-\\cfrac{y^2} {2(2z-y)-\\cfrac{y^2} {2(2z-y)-\\ddots}}}\n\\end{align}\n"
},
{
"math_id": 51,
"text": "\n\\begin{align}\n\\sqrt{114} = \\cfrac{\\sqrt{1026}}{3} = \\cfrac{\\sqrt{32^2+2}}{3} &= \\cfrac{32}{3}+\\cfrac{2/3} {64+\\cfrac{2} {64+\\cfrac{2} {64+\\cfrac{2} {64+\\ddots}}}} \\\\\n&= \\cfrac{32}{3}+\\cfrac{2} {192+\\cfrac{18} {192+\\cfrac{18} {192+\\ddots}}},\n\\end{align}\n"
},
{
"math_id": 52,
"text": "\n\\begin{align}\n\\sqrt{114} = \\cfrac{\\sqrt{32^2+2}}{3} &= \\cfrac{32}{3}+\\cfrac{64/3} {2050-1-\\cfrac{1} {2050-\\cfrac{1} {2050-\\ddots}}} \\\\\n &= \\cfrac{32}{3}+\\cfrac{64}{6150-3-\\cfrac{9} {6150-\\cfrac{9} {6150-\\ddots}}},\n\\end{align}\n"
},
{
"math_id": 53,
"text": "[10;1,2, \\overline{10,2,1,20,1,2}]"
}
] | https://en.wikipedia.org/wiki?curid=10875031 |
10875676 | Particle physics in cosmology | Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology. Particle physics must be taken into account in cosmological models of the early universe, when the average energy density was very high. The processes of particle pair production, scattering and decay influence the cosmology.
As a rough approximation, a particle scattering or decay process is important at a particular cosmological epoch if its time scale is shorter than or similar to the time scale of the universe's expansion. The latter quantity is formula_0 where formula_1 is the time-dependent Hubble parameter. This is roughly equal to the age of the universe at that time.
For example, the pion has a mean lifetime to decay of about 26 nanoseconds. This means that particle physics processes involving pion decay can be neglected until roughly that much time has passed since the Big Bang.
Cosmological observations of phenomena such as the cosmic microwave background and the cosmic abundance of elements, together with the predictions of the Standard Model of particle physics, place constraints on the physical conditions in the early universe. The success of the Standard Model at explaining these observations support its validity under conditions beyond those which can be produced in a laboratory. Conversely, phenomena discovered through cosmological observations, such as dark matter and baryon asymmetry, suggest the presence of physics that goes beyond the Standard Model. | [
{
"math_id": 0,
"text": "\\frac{1}{H}"
},
{
"math_id": 1,
"text": "H"
}
] | https://en.wikipedia.org/wiki?curid=10875676 |
10875756 | Lie's third theorem | In the mathematics of Lie theory, Lie's third theorem states that every finite-dimensional Lie algebra formula_0 over the real numbers is associated to a Lie group "formula_1". The theorem is part of the Lie group–Lie algebra correspondence.
Historically, the third theorem referred to a different but related result. The two preceding theorems of Sophus Lie, restated in modern language, relate to the infinitesimal transformations of a group action on a smooth manifold. The third theorem on the list stated the Jacobi identity for the infinitesimal transformations of a local Lie group. Conversely, in the presence of a Lie algebra of vector fields, integration gives a "local" Lie group action. The result now known as the third theorem provides an intrinsic and global converse to the original theorem.
Historical notes.
The equivalence between the category of simply connected real Lie groups and finite-dimensional real Lie algebras is usually called (in the literature of the second half of 20th century) Cartan's or the Cartan-Lie theorem as it was proved by Élie Cartan. Sophus Lie had previously proved the infinitesimal version: local solvability of the Maurer-Cartan equation, or the equivalence between the category of finite-dimensional Lie algebras and the category of local Lie groups.
Lie listed his results as three direct and three converse theorems. The infinitesimal variant of Cartan's theorem was essentially Lie's third converse theorem. In an influential book Jean-Pierre Serre called it the third theorem of Lie. The name is historically somewhat misleading, but often used in connection to generalizations.
Serre provided two proofs in his book: one based on Ado's theorem and another recounting the proof by Élie Cartan.
Proofs.
There are several proofs of Lie's third theorem, each of them employing different algebraic and/or geometric techniques.
Algebraic proof.
The classical proof is straightforward but relies on Ado's theorem, whose proof is algebraic and highly non-trivial. Ado's theorem states that any finite-dimensional Lie algebra can be represented by matrices. As a consequence, integrating such algebra of matrices via the matrix exponential yields a Lie group integrating the original Lie algebra.
Cohomological proof.
A more geometric proof is due to Élie Cartan and was published by Willem van Est. This proof uses induction on the dimension of the center and it involves the Chevalley-Eilenberg complex.
Geometric proof.
A different geometric proof was discovered in 2000 by Duistermaat and Kolk. Unlike the previous ones, it is a constructive proof: the integrating Lie group is built as the quotient of the (infinite-dimensional) Banach Lie group of paths on the Lie algebra by a suitable subgroup. This proof was influential for Lie theory since it paved the way to the generalisation of Lie's third theorem to Lie groupoids and Lie algebroids.
References.
| [
{
"math_id": 0,
"text": "\\mathfrak{g}"
},
{
"math_id": 1,
"text": "G"
}
] | https://en.wikipedia.org/wiki?curid=10875756 |
1087590 | Sound energy density | Sound energy density or sound density is the sound energy per unit volume. The SI unit of sound energy density is the pascal (Pa), which is 1 kg⋅m⁻¹⋅s⁻² in SI base units, or 1 joule per cubic metre (J/m³).
Mathematical definition.
Sound energy density, denoted "w", is defined by
formula_0
where "p" is the sound pressure, "v" is the particle velocity in the direction of propagation, and "c" is the speed of sound.
The terms instantaneous energy density, maximum energy density, and peak energy density have meanings analogous to the related terms used for sound pressure. In speaking of average energy density, it is necessary to distinguish between the space average (at a given instant) and the time average (at a given point).
Sound energy density level.
The sound energy density level gives the sound energy density of a sound incidence relative to the reference level of 1 pPa (= 10⁻¹² Pa). It is a logarithmic measure of the ratio of two sound energy densities. The unit of the sound energy density level is the decibel (dB), a non-SI unit accepted for use with the SI.
The sound energy density level, "L"("E"), for a given sound energy density, "E"1, in pascals, is
formula_1,
where "E"0 is the standard reference sound energy density
formula_2 . | [
{
"math_id": 0,
"text": "w = \\frac{p v}{c}"
},
{
"math_id": 1,
"text": "\nL(E) = 10\\, \\log_{10}\\left(\\frac{E_1}{E_0}\\right) ~ \\text{dB}\n"
},
{
"math_id": 2,
"text": "\nE_0 = 10^{-12}\\ \\mathrm{Pa} \n"
}
] | https://en.wikipedia.org/wiki?curid=1087590 |
1087818 | Frobenius normal form | Canonical form of matrices over a field
In linear algebra, the Frobenius normal form or rational canonical form of a square matrix "A" with entries in a field "F" is a canonical form for matrices obtained by conjugation by invertible matrices over "F". The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for "A" (i.e., spanned by some vector and its repeated images under "A"). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix "B" is similar to "A" if and only if it has the same rational canonical form as "A". Since this form can be found without any operations that might change when extending the field "F" (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius.
Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined over "F", but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of "F". This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.
Motivation.
When trying to find out whether two square matrices "A" and "B" are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, there are various drawbacks this has as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally "A" and "B" might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks.
But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based on instead using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector "v" and all its images by repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups) and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking "v" and its successive images as long as they are linearly independent. The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, which notion is analogous to that of the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector "v" generating the subspace.
A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However it is possible that cyclic subspaces do allow a decomposition as direct sum of smaller cyclic subspaces (essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden to exclude trivial cyclic subspaces of dimension 0). The resulting list of polynomials are called the invariant factors of (the "K"["X"]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix "A" is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of "A"; two matrices are similar if and only if they have the same rational canonical form.
Example.
Consider the following matrix A, over Q:
formula_0
"A" has minimal polynomial formula_1, so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is formula_2, which is a multiple of the minimal polynomial by a factor formula_3. There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors will have this property, and in this case the first standard basis vector formula_4 does so: the vectors formula_5 for formula_6 are linearly independent and span a cyclic subspace with minimal polynomial formula_7. There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by vectors formula_8 and formula_9 is an example. In fact one has formula_10, so the complementary subspace is a cyclic subspace generated by formula_11; it has minimal polynomial formula_3. Since formula_7 is the minimal polynomial of the whole space, it is clear that formula_3 must divide formula_7 (and it is easily checked that it does), and we have found the invariant factors formula_3 and formula_1 of "A". Then the rational canonical form of "A" is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely
formula_12
A basis on which this form is attained is formed by the vectors formula_13 above, followed by formula_5 for formula_6; explicitly this means that for
formula_14,
one has formula_15
General case and theory.
Fix a base field "F" and a finite-dimensional vector space "V" over "F". Given a polynomial "P" ∈ "F"["X"], there is associated to it a companion matrix "C""P" whose characteristic polynomial and minimal polynomial are both equal to "P".
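This property of the companion matrix is easy to verify symbolically. The sketch below builds "C""P" with ones on the subdiagonal and minus the coefficients of "P" in the last column (the convention used in the example above) and checks that its characteristic polynomial is "P"; the sample polynomial is arbitrary, and the coinciding minimal polynomial is taken from the statement above rather than checked here.

```python
import sympy as sp

x = sp.symbols('x')

def companion(p):
    """Companion matrix C_P of a monic polynomial p(x)."""
    c = sp.Poly(p, x).all_coeffs()[::-1]     # [c0, c1, ..., c_{n-1}, 1]
    n = len(c) - 1
    C = sp.zeros(n, n)
    for i in range(n):
        if i > 0:
            C[i, i - 1] = 1                  # ones on the subdiagonal
        C[i, n - 1] = -c[i]                  # last column: minus the coefficients
    return C

p = x**4 + 2*x**3 - x + 7                    # arbitrary monic example
C = companion(p)
assert sp.expand(C.charpoly(x).as_expr() - p) == 0
```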
Theorem: Let "V" be a finite-dimensional vector space over a field "F", and "A" a square matrix over "F". Then "V" (viewed as an "F"["X"]-module with the action of "X" given by "A") admits a "F"["X"]-module isomorphism
"V" ≅ "F"["X"]/"f"1 ⊕ … ⊕ "F"["X"]/"f""k"
where the "f""i" ∈ "F"["X"] may be taken to be monic polynomials of positive degree (so they are non-units in "F"["X"]) that satisfy the relations
"f"1 | "f"2 | … | "f""k"
where "a | b" is notation for ""a" divides "b""; with these conditions the list of polynomials "f""i" is unique.
"Sketch of Proof": Apply the structure theorem for finitely generated modules over a principal ideal domain to "V", viewing it as an "F"["X"]-module. The structure theorem provides a decomposition into cyclic factors, each of which is a quotient of "F"["X"] by a proper ideal; the zero ideal cannot be present since the resulting free module would be infinite-dimensional as "F" vector space, while "V" is finite-dimensional. For the polynomials "f""i" one then takes the unique monic generators of the respective ideals, and since the structure theorem ensures containment of every ideal in the preceding ideal, one obtains the divisibility conditions for the "f""i". See [DF] for details.
Given an arbitrary square matrix, the elementary divisors used in the construction of the Jordan normal form do not exist over "F"["X"], so the invariant factors "f""i" as given above must be used instead. The last of these factors "f""k" is then the minimal polynomial, which all the
invariant factors therefore divide, and the product of the invariant factors gives the characteristic polynomial. Note that this implies that the minimal polynomial divides the characteristic polynomial (which is essentially the Cayley-Hamilton theorem), and that every irreducible factor of the characteristic polynomial also divides the minimal polynomial (possibly with lower multiplicity).
For each invariant factor "f""i" one takes its companion matrix "C""f""i", and the block diagonal matrix formed from these blocks yields the rational canonical form of "A". When the minimal polynomial is identical to the characteristic polynomial (the case "k" = 1), the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to "A", and these invariant factors are independent of basis, it follows that two square matrices "A" and "B" are similar if and only if they have the same rational canonical form.
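Concretely, once the invariant factors are known, the Frobenius normal form is just the block-diagonal assembly of their companion matrices. The sketch below, reusing the same companion-matrix convention, rebuilds the 8 × 8 form from the invariant factors of the worked example above.

```python
import sympy as sp

x = sp.symbols('x')

def companion(p):
    """Companion matrix of a monic polynomial p(x)."""
    c = sp.Poly(p, x).all_coeffs()[::-1]
    n = len(c) - 1
    C = sp.zeros(n, n)
    for i in range(n):
        if i > 0:
            C[i, i - 1] = 1
        C[i, n - 1] = -c[i]
    return C

invariant_factors = [x**2 - x - 1,
                     x**6 - 4*x**4 - 2*x**3 + 4*x**2 + 4*x + 1]
F = sp.diag(*[companion(f) for f in invariant_factors])
print(F)   # reproduces the block diagonal matrix C displayed in the example
```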
A rational normal form generalizing the Jordan normal form.
The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground field "F". This implies that it is invariant when "F" is replaced by a different field (as long as it contains the entries of the original matrix "A"). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if "A" is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.
There is another way to define a normal form, that, like the Frobenius normal form, is always defined over the same field "F" as "A", but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over "F", and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form is sometimes called the generalized Jordan normal form, or primary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the "distinct" irreducible factors "P" of the characteristic polynomial (as stated by the lemme des noyaux), where the characteristic polynomial of each summand is a power of the corresponding "P". These summands can be further decomposed, non-canonically, as a direct sum of cyclic "F"["x"]-modules (like is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of "P". The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form called "generalized Jordan block" in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form
formula_16
where "C" is the companion matrix of the irreducible polynomial "P", and "U" is a matrix whose sole nonzero entry is a 1 in the upper right hand corner. For the case of a linear irreducible factor "P"
"x" − "λ", these blocks are reduced to single entries "C"
"λ" and "U"
1 and, one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector "v" (one that is not annihilated by "P""k"−1("A") where the minimal polynomial of the cyclic module is "P""k"), and taking as basis
formula_17
where "d"
deg("P").
References.
| [
{
"math_id": 0,
"text": "\\scriptstyle A=\\begin{pmatrix}\n -1& 3&-1& 0&-2& 0& 0&-2 \\\\\n -1&-1& 1& 1&-2&-1& 0&-1 \\\\\n -2&-6& 4& 3&-8&-4&-2& 1 \\\\\n -1& 8&-3&-1& 5& 2& 3&-3 \\\\\n 0& 0& 0& 0& 0& 0& 0& 1 \\\\\n 0& 0& 0& 0&-1& 0& 0& 0 \\\\\n 1& 0& 0& 0& 2& 0& 0& 0 \\\\\n 0& 0& 0& 0& 4& 0& 1& 0 \\end{pmatrix}."
},
{
"math_id": 1,
"text": "\\mu=X^6-4X^4-2X^3+4X^2+4X+1"
},
{
"math_id": 2,
"text": "\\chi=X^8-X^7-5X^6+2X^5+10X^4+2X^3-7X^2-5X-1"
},
{
"math_id": 3,
"text": "X^2-X-1"
},
{
"math_id": 4,
"text": "e_1"
},
{
"math_id": 5,
"text": "A^k(e_1)"
},
{
"math_id": 6,
"text": "k=0,1,\\ldots,5"
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "v=(3,4,8,0,-1,0,2,-1)^\\top"
},
{
"math_id": 9,
"text": "w=(5,4,5,9,-1,1,1,-2)^\\top"
},
{
"math_id": 10,
"text": "A\\cdot v=w"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "\\scriptstyle C=\\begin{pmatrix}\n 0& 1& 0& 0& 0& 0& 0& 0 \\\\\n 1& 1& 0& 0& 0& 0& 0& 0 \\\\\n 0& 0& 0& 0& 0& 0& 0&-1 \\\\\n 0& 0& 1& 0& 0& 0& 0&-4 \\\\\n 0& 0& 0& 1& 0& 0& 0&-4 \\\\\n 0& 0& 0& 0& 1& 0& 0& 2 \\\\\n 0& 0& 0& 0& 0& 1& 0& 4 \\\\\n 0& 0& 0& 0& 0& 0& 1& 0 \\end{pmatrix}."
},
{
"math_id": 13,
"text": "v,w"
},
{
"math_id": 14,
"text": "\\scriptstyle P=\\begin{pmatrix}\n 3& 5& 1&-1& 0& 0& -4& 0\\\\\n 4& 4& 0&-1&-1&-2& -3&-5\\\\\n 8& 5& 0&-2&-5&-2&-11&-6\\\\\n 0& 9& 0&-1& 3&-2& 0& 0\\\\\n -1&-1& 0& 0& 0& 1& -1& 4\\\\\n 0& 1& 0& 0& 0& 0& -1& 1\\\\\n 2& 1& 0& 1&-1& 0& 2&-6\\\\\n -1&-2& 0& 0& 1&-1& 4&-2 \\end{pmatrix}"
},
{
"math_id": 15,
"text": "A=PCP^{-1}."
},
{
"math_id": 16,
"text": "\\scriptstyle\\begin{pmatrix}C&0&\\cdots&0\\\\U&C&\\cdots&0\\\\\\vdots&\\ddots&\\ddots&\\vdots\\\\0&\\cdots&U&C\\end{pmatrix}"
},
{
"math_id": 17,
"text": "v,A(v),A^2(v),\\ldots,A^{d-1}(v), ~\n P(A)(v), A(P(A)(v)),\\ldots,A^{d-1}(P(A)(v)), ~\n P^2(A)(v),\\ldots, ~\n P^{k-1}(A)(v),\\ldots,A^{d-1}(P^{k-1}(A)(v))"
}
] | https://en.wikipedia.org/wiki?curid=1087818 |
108824 | Unijunction transistor | Type of transistor
A unijunction transistor (UJT) is a three-lead electronic semiconductor device with only one junction. It acts exclusively as an electrically controlled switch.
The UJT is not used as a linear amplifier. It is used in free-running oscillators, synchronized or triggered oscillators, and pulse generation circuits at low to moderate frequencies (hundreds of kilohertz). It is widely used in the triggering circuits for silicon controlled rectifiers. In the 1960s, the low cost per unit, combined with its unique negative-resistance characteristic, warranted its use in a wide variety of applications like oscillators, pulse generators, saw-tooth generators, triggering circuits, phase control, timing circuits, and voltage- or current-regulated supplies. The original unijunction transistor types are now considered obsolete, but a later multi-layer device, the programmable unijunction transistor, is still widely available.
Types.
There are three types of unijunction transistor:
Applications.
Unijunction transistor circuits were popular in hobbyist electronics circuits in the 1960s and 1970s because they allowed simple oscillators to be built using just one active device. For example, they were used for relaxation oscillators in variable-rate strobe lights.
Later, as integrated circuits became more popular, oscillators such as the 555 timer IC became more commonly used.
In addition to its use as the active device in relaxation oscillators, one of the most important applications of UJTs or PUTs is to trigger thyristors (silicon controlled rectifiers (SCR), TRIACs, etc.). A DC voltage can be used to control a UJT or PUT circuit such that the "on-period" increases with an increase in the DC control voltage. This application is important for large AC current control.
UJTs can also be used to measure magnetic flux. The Hall effect modulates the voltage at the PN junction. This affects the frequency of UJT relaxation oscillators. This only works with UJTs. PUTs do not exhibit this phenomenon.
Construction.
The UJT has three terminals: an emitter (E) and two bases (B1 and B2), and so it is sometimes known as a "double-base diode". The base is formed by a lightly doped n-type bar of silicon. Two ohmic contacts B1 and B2 are attached at its ends. The emitter is of heavily-doped p-type material. The single PN junction between the emitter and the base gives the device its name. The resistance between B1 and B2 when the emitter is open-circuit is called "interbase resistance". The emitter junction is usually located closer to base-2 (B2) than base-1 (B1) so that the device is not symmetrical, because a symmetrical unit does not provide optimum electrical characteristics for most of the applications.
If no potential difference exists between its emitter and either of its base leads, there is an extremely small current from B1 to B2. On the other hand, if an adequately large voltage relative to its base leads, known as the "trigger voltage", is applied to its emitter, then a very large current from its emitter joins the current from B1 to B2, which creates a larger B2 output current.
The schematic diagram symbol for a unijunction transistor represents the emitter lead with an arrow, showing the direction of conventional current when the emitter-base junction is conducting a current. A complementary UJT uses a p-type base and an n-type emitter, and operates the same as the n-type base device but with all voltage polarities reversed.
The structure of a UJT is similar to that of an N-channel JFET, but p-type (gate) material surrounds the N-type (channel) material in a JFET, and the gate surface is larger than the emitter junction of UJT. A UJT is operated with the emitter junction forward-biased while the JFET is normally operated with the gate junction reverse-biased. The UJT is a current-controlled negative resistance device.
Device operation.
The device has a unique characteristic in that when it is triggered, its emitter current increases regeneratively until it is restricted by the emitter power supply. It exhibits a negative resistance characteristic and so it can be employed as an oscillator.
The UJT is biased with a positive voltage between the two bases. This causes a potential drop along the length of the device. When the emitter voltage is driven approximately one diode voltage above the voltage at the point where the P diffusion (emitter) is, current will begin to flow from the emitter into the base region. Because the base region is very lightly doped, the additional current (actually charges in the base region) causes conductivity modulation, which reduces the resistance of the portion of the base between the emitter junction and the B1 terminal. This reduction in resistance means that the emitter junction is more forward biased, and so even more current is injected. Overall, the effect is a negative resistance at the emitter terminal. This is what makes the UJT useful, especially in simple oscillator circuits.
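As a concrete illustration of how this negative-resistance behaviour is exploited, the following Python sketch estimates the firing voltage and period of a basic UJT relaxation oscillator from the usual textbook relations; the component values and the intrinsic standoff ratio (formula_0) are made-up example numbers, not taken from any particular device.

```python
import math

# Assumed textbook relations for a UJT relaxation oscillator (a sketch, not a
# simulation of any specific device):
#   peak (firing) voltage:  V_P = eta * V_BB + V_D
#   oscillation period:     T  ~  R * C * ln(1 / (1 - eta))
# where eta is the intrinsic standoff ratio R_B1 / (R_B1 + R_B2).
eta  = 0.6      # intrinsic standoff ratio (typically about 0.5 to 0.8)
V_BB = 12.0     # interbase supply voltage, volts
V_D  = 0.6      # emitter diode drop, volts
R    = 47e3     # timing resistor, ohms
C    = 100e-9   # timing capacitor, farads

V_P = eta * V_BB + V_D
T = R * C * math.log(1.0 / (1.0 - eta))
print("peak voltage V_P ~ %.2f V" % V_P)
print("period T ~ %.2f ms (about %.0f Hz)" % (T * 1e3, 1.0 / T))
```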
Invention.
The unijunction transistor was invented as a byproduct of research on germanium tetrode transistors at General Electric. It was patented in 1953. Commercially, silicon devices were manufactured. A common part number is 2N2646.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta"
}
] | https://en.wikipedia.org/wiki?curid=108824 |
1088270 | Frobenius endomorphism | In a ring with prime characteristic p, the map raising elements to the pth power
In commutative algebra and field theory, the Frobenius endomorphism (after Ferdinand Georg Frobenius) is a special endomorphism of commutative rings with prime characteristic p, an important class that includes finite fields. The endomorphism maps every element to its p-th power. In certain contexts it is an automorphism, but this is not true in general.
Definition.
Let R be a commutative ring with prime characteristic p (an integral domain of positive characteristic always has prime characteristic, for example). The Frobenius endomorphism "F" is defined by
formula_0
for all "r" in "R". It respects the multiplication of "R":
formula_1
and "F"(1) is 1 as well. Moreover, it also respects the addition of R. The expression ("r" + "s")"p" can be expanded using the binomial theorem. Because p is prime, it divides "p"! but not any "q"! for "q" < "p"; it therefore will divide the numerator, but not the denominator, of the explicit formula of the binomial coefficients
formula_2
if 1 ≤ "k" ≤ "p" − 1. Therefore, the coefficients of all the terms except "r""p" and "s""p" are divisible by p, and hence they vanish. Thus
formula_3
This shows that "F" is a ring homomorphism.
If "φ" : "R" → "S" is a homomorphism of rings of characteristic p, then
formula_4
If "FR" and "FS" are the Frobenius endomorphisms of R and S, then this can be rewritten as:
formula_5
This means that the Frobenius endomorphism is a natural transformation from the identity functor on the category of characteristic p rings to itself.
If the ring R is a ring with no nilpotent elements, then the Frobenius endomorphism is injective: "F"("r") = 0 means "r""p" = 0, which by definition means that r is nilpotent of order at most p. In fact, this is necessary and sufficient, because if r is any nilpotent, then one of its powers will be nilpotent of order at most p. In particular, if R is a field then the Frobenius endomorphism is injective.
The Frobenius morphism is not necessarily surjective, even when R is a field. For example, let "K" = F"p"("t") be the field obtained from the finite field of p elements by adjoining a single transcendental element; equivalently, K is the field of rational functions with coefficients in F"p". Then the image of F does not contain t. If it did, then there would be a rational function "q"("t")/"r"("t") whose p-th power "q"("t")"p"/"r"("t")"p" would equal t. But the degree of this p-th power (the difference between the degrees of its numerator and denominator) is "p" deg("q") − "p" deg("r"), which is a multiple of p. In particular, it can't be 1, which is the degree of t. This is a contradiction; so t is not in the image of F.
A field K is called "perfect" if either it is of characteristic zero or it is of positive characteristic and its Frobenius endomorphism is an automorphism. For example, all finite fields are perfect.
Fixed points of the Frobenius endomorphism.
Consider the finite field F"p". By Fermat's little theorem, every element x of F"p" satisfies "x""p"
"x". Equivalently, it is a root of the polynomial "X""p" − "X". The elements of F"p" therefore determine p roots of this equation, and because this equation has degree p it has no more than p roots over any extension. In particular, if K is an algebraic extension of F"p" (such as the algebraic closure or another finite field), then F"p" is the fixed field of the Frobenius automorphism of K.
Let R be a ring of characteristic "p" > 0. If R is an integral domain, then by the same reasoning, the fixed points of Frobenius are the elements of the prime field. However, if R is not a domain, then "X""p" − "X" may have more than p roots; for example, this happens if "R"
F"p" × F"p".
A similar property is enjoyed on the finite field formula_6 by the "n"th iterate of the Frobenius automorphism: Every element of formula_6 is a root of formula_7, so if K is an algebraic extension of formula_6 and F is the Frobenius automorphism of K, then the fixed field of "F""n" is formula_6. If "R" is a domain that is an formula_6-algebra, then the fixed points of the "n"th iterate of Frobenius are the elements of the image of formula_6.
Iterating the Frobenius map gives a sequence of elements in R:
formula_8
This sequence of iterates is used in defining the Frobenius closure and the tight closure of an ideal.
As a generator of Galois groups.
The Galois group of an extension of finite fields is generated by an iterate of the Frobenius automorphism. First, consider the case where the ground field is the prime field F"p". Let F"q" be the finite field of q elements, where "q" = "p""n". The Frobenius automorphism F of F"q" fixes the prime field F"p", so it is an element of the Galois group Gal(F"q"/F"p"). In fact, since formula_9 is cyclic with "q" − 1 elements,
we know that the Galois group is cyclic and F is a generator. The order of F is n because "F""j" acts on an element x by sending it to "x""pj", and formula_10 can only have formula_11 many roots, since we are in a field. Every automorphism of F"q" is a power of F, and the generators are the powers "F""i" with i coprime to n.
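The following Python sketch (ours; the realization of F9 as F3["i"] with "i"2 = −1 is a choice made here for illustration) exhibits the Frobenius automorphism of a small finite field: its fixed points are exactly the prime field, and it has order "n" = 2 in the Galois group.

```python
# F_9 realized as F_3[i] with i^2 = -1 (x^2 + 1 is irreducible over F_3).
# Elements are pairs (a, b) standing for a + b*i; Frobenius is x -> x^3.
p = 3

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(x, n):
    result = (1, 0)
    for _ in range(n):
        result = mul(result, x)
    return result

field = [(a, b) for a in range(p) for b in range(p)]
frobenius = {x: power(x, p) for x in field}

print([x for x in field if frobenius[x] == x])           # the prime field: (0,0), (1,0), (2,0)
print(all(frobenius[frobenius[x]] == x for x in field))  # True: F has order 2 = [F_9 : F_3]
```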
Now consider the finite field F"q""f" as an extension of F"q", where "q"
"p""n" as above. If "n" > 1, then the Frobenius automorphism F of F"q""f" does not fix the ground field F"q", but its nth iterate "F""n" does. The Galois group Gal(F"q""f" /F"q") is cyclic of order f and is generated by "F""n". It is the subgroup of Gal(F"q""f" /F"p") generated by "F""n". The generators of Gal(F"q""f" /F"q") are the powers "F""ni" where i is coprime to f.
The Frobenius automorphism is not a generator of the absolute Galois group
formula_12
because this Galois group is isomorphic to the profinite integers
formula_13
which are not cyclic. However, because the Frobenius automorphism is a generator of the Galois group of every finite extension of F"q", it is a generator of every finite quotient of the absolute Galois group. Consequently, it is a topological generator in the usual Krull topology on the absolute Galois group.
Frobenius for schemes.
There are several different ways to define the Frobenius morphism for a scheme. The most fundamental is the absolute Frobenius morphism. However, the absolute Frobenius morphism behaves poorly in the relative situation because it pays no attention to the base scheme. There are several different ways of adapting the Frobenius morphism to the relative situation, each of which is useful in certain situations.
The absolute Frobenius morphism.
Suppose that X is a scheme of characteristic "p" > 0. Choose an open affine subset "U" = Spec "A" of X. The ring A is an F"p"-algebra, so it admits a Frobenius endomorphism. If V is an open affine subset of U, then by the naturality of Frobenius, the Frobenius morphism on U, when restricted to V, is the Frobenius morphism on V. Consequently, the Frobenius morphism glues to give an endomorphism of X. This endomorphism is called the absolute Frobenius morphism of X, denoted "FX". By definition, it is a homeomorphism of X with itself. The absolute Frobenius morphism is a natural transformation from the identity functor on the category of F"p"-schemes to itself.
If X is an S-scheme and the Frobenius morphism of S is the identity, then the absolute Frobenius morphism is a morphism of S-schemes. In general, however, it is not. For example, consider the ring formula_14. Let X and S both equal Spec "A" with the structure map "X" → "S" being the identity. The Frobenius morphism on A sends a to "a""p". It is not a morphism of formula_15-algebras. If it were, then multiplying by an element b in formula_15 would commute with applying the Frobenius endomorphism. But this is not true because:
formula_16
The former is the action of b in the formula_15-algebra structure that A begins with, and the latter is the action of formula_15 induced by Frobenius. Consequently, the Frobenius morphism on Spec "A" is not a morphism of formula_15-schemes.
The absolute Frobenius morphism is a purely inseparable morphism of degree p. Its differential is zero. It preserves products, meaning that for any two schemes X and Y, "F""X"×"Y" = "FX" × "FY".
Restriction and extension of scalars by Frobenius.
Suppose that "φ" : "X" → "S" is the structure morphism for an S-scheme X. The base scheme S has a Frobenius morphism "F""S". Composing φ with "F""S" results in an S-scheme "X""F" called the restriction of scalars by Frobenius. The restriction of scalars is actually a functor, because an S-morphism "X" → "Y" induces an S-morphism "XF" → "YF".
For example, consider a ring "A" of characteristic "p" > 0 and a finitely presented algebra over "A":
formula_17
The action of "A" on "R" is given by:
formula_18
where α is a multi-index. Let "X" = Spec "R". Then "XF" is the affine scheme Spec "R", but its structure morphism Spec "R" → Spec "A", and hence the action of "A" on "R", is different:
formula_19
Because restriction of scalars by Frobenius is simply composition, many properties of X are inherited by "X""F" under appropriate hypotheses on the Frobenius morphism. For example, if X and "S""F" are both finite type, then so is "X""F".
The extension of scalars by Frobenius is defined to be:
formula_20
The projection onto the S factor makes "X"("p") an S-scheme. If S is not clear from the context, then "X"("p") is denoted by "X"("p"/"S"). Like restriction of scalars, extension of scalars is a functor: An S-morphism "X" → "Y" determines an S-morphism "X"("p") → "Y"("p").
As before, consider a ring "A" and a finitely presented algebra "R" over "A", and again let "X"
Spec "R". Then:
formula_21
A global section of "X"("p") is of the form:
formula_22
where "α" is a multi-index and every "a""iα" and "b""i" is an element of "A". The action of an element "c" of "A" on this section is:
formula_23
Consequently, "X"("p") is isomorphic to:
formula_24
where, if:
formula_25
then:
formula_26
A similar description holds for arbitrary "A"-algebras "R".
Because extension of scalars is base change, it preserves limits and coproducts. This implies in particular that if X has an algebraic structure defined in terms of finite limits (such as being a group scheme), then so does "X"("p"). Furthermore, being a base change means that extension of scalars preserves properties such as being of finite type, finite presentation, separated, affine, and so on.
Extension of scalars is well-behaved with respect to base change: Given a morphism "S"′ → "S", there is a natural isomorphism:
formula_27
Relative Frobenius.
Let "X" be an "S"-scheme with structure morphism "φ". The relative Frobenius morphism of "X" is the morphism:
formula_28
defined by the universal property of the pullback "X"("p"):
formula_29
Because the absolute Frobenius morphism is natural, the relative Frobenius morphism is a morphism of S-schemes.
Consider, for example, the "A"-algebra:
formula_17
We have:
formula_30
The relative Frobenius morphism is the homomorphism "R"("p") → "R" defined by:
formula_31
Relative Frobenius is compatible with base change in the sense that, under the natural isomorphism of "X"("p"/"S") ×"S" "S"′ and ("X" ×"S" "S"′)("p"/"S"′), we have:
formula_32
Relative Frobenius is a universal homeomorphism. If "X" → "S" is an open immersion, then it is the identity. If "X" → "S" is a closed immersion determined by an ideal sheaf "I" of "OS", then "X"("p") is determined by the ideal sheaf "I""p" and relative Frobenius is the augmentation map "OS"/"I""p" → "OS"/"I".
"X" is unramified over S if and only if "F""X"/"S" is unramified and if and only if "F""X"/"S" is a monomorphism. "X" is étale over S if and only if "F""X"/"S" is étale and if and only if "F""X"/"S" is an isomorphism.
Arithmetic Frobenius.
The arithmetic Frobenius morphism of an S-scheme X is a morphism:
formula_33
defined by:
formula_34
That is, it is the base change of "F""S" by 1"X".
Again, if:
formula_35
formula_36
then the arithmetic Frobenius is the homomorphism:
formula_37
If we rewrite "R"("p") as:
formula_38
then this homomorphism is:
formula_39
Geometric Frobenius.
Assume that the absolute Frobenius morphism of S is invertible with inverse formula_40. Let formula_41 denote the S-scheme formula_42. Then there is an extension of scalars of X by formula_40:
formula_43
If:
formula_35
then extending scalars by formula_40 gives:
formula_44
If:
formula_25
then we write:
formula_45
and then there is an isomorphism:
formula_46
The geometric Frobenius morphism of an S-scheme X is a morphism:
formula_47
defined by:
formula_48
It is the base change of formula_40 by 1"X".
Continuing our example of "A" and "R" above, geometric Frobenius is defined to be:
formula_49
After rewriting "R"(1/"p") in terms of formula_50, geometric Frobenius is:
formula_51
Arithmetic and geometric Frobenius as Galois actions.
Suppose that the Frobenius morphism of S is an isomorphism. Then it generates a subgroup of the automorphism group of S. If "S" = Spec "k" is the spectrum of a finite field, then its automorphism group is the Galois group of the field over the prime field, and the Frobenius morphism and its inverse are both generators of the automorphism group. In addition, "X"("p") and "X"(1/"p") may be identified with X. The arithmetic and geometric Frobenius morphisms are then endomorphisms of X, and so they lead to an action of the Galois group of "k" on "X".
Consider the set of "K"-points "X"("K"). This set comes with a Galois action: Each such point "x" corresponds to a homomorphism "OX" → "K" from the structure sheaf to "K", which factors via "k(x)", the residue field at "x", and the action of Frobenius on "x" is the application of the Frobenius morphism to the residue field. This Galois action agrees with the action of arithmetic Frobenius: The composite morphism
formula_52
is the same as the composite morphism:
formula_53
by the definition of the arithmetic Frobenius. Consequently, arithmetic Frobenius explicitly exhibits the action of the Galois group on points as an endomorphism of "X".
Frobenius for local fields.
Given an unramified finite extension "L/K" of local fields, there is a concept of Frobenius endomorphism that induces the Frobenius endomorphism in the corresponding extension of residue fields.
Suppose "L/K" is an unramified extension of local fields, with ring of integers "OK" of K such that the residue field, the integers of K modulo their unique maximal ideal φ, is a finite field of order q, where q is a power of a prime. If Φ is a prime of L lying over φ, that "L/K" is unramified means by definition that the integers of L modulo Φ, the residue field of L, will be a finite field of order "q""f" extending the residue field of K where f is the degree of "L"/"K". We may define the Frobenius map for elements of the ring of integers "OL" of L as an automorphism "s"Φ of L such that
formula_54
Frobenius for global fields.
In algebraic number theory, Frobenius elements are defined for extensions "L"/"K" of global fields that are finite Galois extensions for prime ideals Φ of L that are unramified in "L"/"K". Since the extension is unramified the decomposition group of Φ is the Galois group of the extension of residue fields. The Frobenius element then can be defined for elements of the ring of integers of L as in the local case, by
formula_55
where q is the order of the residue field "OK"/(Φ ∩ "OK").
Lifts of the Frobenius are in correspondence with p-derivations.
Examples.
The polynomial
"x"5 − "x" − 1
has discriminant
19 × 151,
and so is unramified at the prime 3; it is also irreducible mod 3. Hence adjoining a root ρ of it to the field of 3-adic numbers Q3 gives an unramified extension Q3("ρ") of Q3. We may find the image of ρ under the Frobenius map by locating the root nearest to "ρ"3, which we may do by Newton's method. We obtain an element of the ring of integers Z3["ρ"] in this way; this is a polynomial of degree four in ρ with coefficients in the 3-adic integers Z3. Modulo 38 this polynomial is
formula_56.
This is algebraic over Q and is the correct global Frobenius image in terms of the embedding of Q into Q3; moreover, the coefficients are algebraic and the result can be expressed algebraically. However, they are of degree 120, the order of the Galois group, illustrating the fact that explicit computations are much more easily accomplished if p-adic results will suffice.
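The numerical facts used here are easy to double-check; a short sympy sketch (the choice of library is ours, not the article's):

```python
from sympy import symbols, Poly, discriminant, factorint

x = symbols('x')
f = x**5 - x - 1

d = discriminant(f, x)
print(d, factorint(d))                          # 2869 {19: 1, 151: 1} -> 3 is unramified
print(Poly(f, x, modulus=3).is_irreducible)     # True: f remains irreducible over F_3
```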
If "L/K" is an abelian extension of global fields, we get a much stronger congruence since it depends only on the prime φ in the base field K. For an example, consider the extension Q("β") of Q obtained by adjoining a root β satisfying
formula_57
to Q. This extension is cyclic of order five, with roots
formula_58
for integer n. It has roots that are Chebyshev polynomials of β:
"β"2 − 2, "β"3 − 3"β", "β"5 − 5"β"3 + 5"β"
give the result of the Frobenius map for the primes 2, 3 and 5, and so on for larger primes not equal to 11 or of the form 22"n" + 1 (which split). It is immediately apparent how the Frobenius map gives a result equal mod p to the p-th power of the root β.
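One can confirm symbolically (our own sympy check) that these Chebyshev-type expressions really do send roots of the minimal polynomial to roots, which is what being the Frobenius image requires.

```python
from sympy import symbols, rem, expand

x = symbols('x')
f = x**5 + x**4 - 4*x**3 - 3*x**2 + 3*x + 1   # minimal polynomial of beta = 2*cos(2*pi/11)

# beta -> beta^2 - 2 and beta -> beta^3 - 3*beta map roots of f to roots of f,
# so f divides f(x^2 - 2) and f(x^3 - 3x); both remainders below are 0.
print(rem(expand(f.subs(x, x**2 - 2)), f, x))
print(rem(expand(f.subs(x, x**3 - 3*x)), f, x))
```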
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(r) = r^p"
},
{
"math_id": 1,
"text": "F(rs) = (rs)^p = r^ps^p = F(r)F(s),"
},
{
"math_id": 2,
"text": "\\frac{p!}{k! (p-k)!},"
},
{
"math_id": 3,
"text": "F(r + s) = (r + s)^p = r^p + s^p = F(r) + F(s)."
},
{
"math_id": 4,
"text": "\\varphi(x^p) = \\varphi(x)^p."
},
{
"math_id": 5,
"text": "\\varphi \\circ F_R = F_S \\circ \\varphi."
},
{
"math_id": 6,
"text": "\\mathbf{F}_{p^n}"
},
{
"math_id": 7,
"text": "X^{p^n} - X"
},
{
"math_id": 8,
"text": "x, x^p, x^{p^2}, x^{p^3}, \\ldots."
},
{
"math_id": 9,
"text": "\\mathbf{F}_q^{\\times}"
},
{
"math_id": 10,
"text": "x^{p^j} = x"
},
{
"math_id": 11,
"text": "p^j"
},
{
"math_id": 12,
"text": "\\operatorname{Gal} \\left (\\overline{\\mathbf{F}_q}/\\mathbf{F}_q \\right ),"
},
{
"math_id": 13,
"text": "\\widehat{\\mathbf{Z}} = \\varprojlim_n \\mathbf{Z}/n\\mathbf{Z},"
},
{
"math_id": 14,
"text": "A = \\mathbf{F}_{p^2}"
},
{
"math_id": 15,
"text": "\\mathbf{F}_{p^2}"
},
{
"math_id": 16,
"text": "b \\cdot a = ba \\neq F(b) \\cdot a = b^p a."
},
{
"math_id": 17,
"text": "R = A[X_1, \\ldots, X_n] / (f_1, \\ldots, f_m)."
},
{
"math_id": 18,
"text": "c \\cdot \\sum a_\\alpha X^\\alpha = \\sum c a_\\alpha X^\\alpha,"
},
{
"math_id": 19,
"text": "c \\cdot \\sum a_\\alpha X^\\alpha = \\sum F(c) a_\\alpha X^\\alpha = \\sum c^p a_\\alpha X^\\alpha."
},
{
"math_id": 20,
"text": "X^{(p)} = X \\times_S S_F."
},
{
"math_id": 21,
"text": "X^{(p)} = \\operatorname{Spec} R \\otimes_A A_F."
},
{
"math_id": 22,
"text": "\\sum_i \\left(\\sum_\\alpha a_{i\\alpha} X^\\alpha\\right) \\otimes b_i = \\sum_i \\sum_\\alpha X^\\alpha \\otimes a_{i\\alpha}^p b_i,"
},
{
"math_id": 23,
"text": "c \\cdot \\sum_i \\left (\\sum_\\alpha a_{i\\alpha} X^\\alpha\\right) \\otimes b_i = \\sum_i \\left(\\sum_\\alpha a_{i\\alpha} X^\\alpha\\right) \\otimes b_i c."
},
{
"math_id": 24,
"text": "\\operatorname{Spec} A[X_1, \\ldots, X_n] / \\left (f_1^{(p)}, \\ldots, f_m^{(p)} \\right ),"
},
{
"math_id": 25,
"text": "f_j = \\sum_\\beta f_{j\\beta} X^\\beta,"
},
{
"math_id": 26,
"text": "f_j^{(p)} = \\sum_\\beta f_{j\\beta}^p X^\\beta."
},
{
"math_id": 27,
"text": "X^{(p/S)} \\times_S S' \\cong (X \\times_S S')^{(p/S')}."
},
{
"math_id": 28,
"text": "F_{X/S} : X \\to X^{(p)}"
},
{
"math_id": 29,
"text": "F_{X/S} = (F_X, \\varphi)."
},
{
"math_id": 30,
"text": "R^{(p)} = A[X_1, \\ldots, X_n] / (f_1^{(p)}, \\ldots, f_m^{(p)})."
},
{
"math_id": 31,
"text": "\\sum_i \\sum_\\alpha X^\\alpha \\otimes a_{i\\alpha} \\mapsto \\sum_i \\sum_\\alpha a_{i\\alpha}X^{p\\alpha}."
},
{
"math_id": 32,
"text": "F_{X / S} \\times 1_{S'} = F_{X \\times_S S' / S'}."
},
{
"math_id": 33,
"text": "F^a_{X/S} : X^{(p)} \\to X \\times_S S \\cong X"
},
{
"math_id": 34,
"text": "F^a_{X/S} = 1_X \\times F_S."
},
{
"math_id": 35,
"text": "R = A[X_1, \\ldots, X_n] / (f_1, \\ldots, f_m),"
},
{
"math_id": 36,
"text": "R^{(p)} = A[X_1, \\ldots, X_n] / (f_1, \\ldots, f_m) \\otimes_A A_F,"
},
{
"math_id": 37,
"text": "\\sum_i \\left(\\sum_\\alpha a_{i\\alpha} X^\\alpha\\right) \\otimes b_i \\mapsto \\sum_i \\sum_\\alpha a_{i\\alpha} b_i^p X^\\alpha."
},
{
"math_id": 38,
"text": "R^{(p)} = A[X_1, \\ldots, X_n] / \\left (f_1^{(p)}, \\ldots, f_m^{(p)} \\right ),"
},
{
"math_id": 39,
"text": "\\sum a_\\alpha X^\\alpha \\mapsto \\sum a_\\alpha^p X^\\alpha."
},
{
"math_id": 40,
"text": "F_S^{-1}"
},
{
"math_id": 41,
"text": "S_{F^{-1}}"
},
{
"math_id": 42,
"text": "F_S^{-1} : S \\to S"
},
{
"math_id": 43,
"text": "X^{(1/p)} = X \\times_S S_{F^{-1}}."
},
{
"math_id": 44,
"text": "R^{(1/p)} = A[X_1, \\ldots, X_n] / (f_1, \\ldots, f_m) \\otimes_A A_{F^{-1}}."
},
{
"math_id": 45,
"text": "f_j^{(1/p)} = \\sum_\\beta f_{j\\beta}^{1/p} X^\\beta,"
},
{
"math_id": 46,
"text": "R^{(1/p)} \\cong A[X_1, \\ldots, X_n] / (f_1^{(1/p)}, \\ldots, f_m^{(1/p)})."
},
{
"math_id": 47,
"text": "F^g_{X/S} : X^{(1/p)} \\to X \\times_S S \\cong X"
},
{
"math_id": 48,
"text": "F^g_{X/S} = 1_X \\times F_S^{-1}."
},
{
"math_id": 49,
"text": "\\sum_i \\left(\\sum_\\alpha a_{i\\alpha} X^\\alpha\\right) \\otimes b_i \\mapsto \\sum_i \\sum_\\alpha a_{i\\alpha} b_i^{1/p} X^\\alpha."
},
{
"math_id": 50,
"text": "\\{f_j^{(1/p)}\\}"
},
{
"math_id": 51,
"text": "\\sum a_\\alpha X^\\alpha \\mapsto \\sum a_\\alpha^{1/p} X^\\alpha."
},
{
"math_id": 52,
"text": "\\mathcal{O}_X \\to k(x) \\xrightarrow{\\overset{}F} k(x)"
},
{
"math_id": 53,
"text": "\\mathcal{O}_X \\xrightarrow{\\overset{}F^a_{X/S}} \\mathcal{O}_X \\to k(x)"
},
{
"math_id": 54,
"text": "s_\\Phi(x) \\equiv x^q \\mod \\Phi."
},
{
"math_id": 55,
"text": "s_\\Phi(x) \\equiv x^q \\mod \\Phi,"
},
{
"math_id": 56,
"text": "\\rho^3 + 3(460+183\\rho-354\\rho^2-979\\rho^3-575\\rho^4)"
},
{
"math_id": 57,
"text": "\\beta^5+\\beta^4-4\\beta^3-3\\beta^2+3\\beta+1=0"
},
{
"math_id": 58,
"text": "2 \\cos \\tfrac{2 \\pi n}{11}"
}
] | https://en.wikipedia.org/wiki?curid=1088270 |
1088331 | Radical of an algebraic group | The radical of an algebraic group is the identity component of its maximal normal solvable subgroup.
For example, the radical of the general linear group formula_0 (for a field "K") is the subgroup consisting of scalar matrices, i.e. matrices formula_1 with formula_2 and formula_3 for formula_4.
An algebraic group is called semisimple if its radical is trivial, i.e., consists of the identity element only. The group formula_5 is semisimple, for example.
The subgroup of unipotent elements in the radical is called the unipotent radical; it serves to define reductive groups. | [
{
"math_id": 0,
"text": "\\operatorname{GL}_n(K)"
},
{
"math_id": 1,
"text": "(a_{ij})"
},
{
"math_id": 2,
"text": "a_{11} = \\dots = a_{nn}"
},
{
"math_id": 3,
"text": "a_{ij}=0"
},
{
"math_id": 4,
"text": "i \\ne j"
},
{
"math_id": 5,
"text": "\\operatorname{SL}_n(K)"
}
] | https://en.wikipedia.org/wiki?curid=1088331 |
1088386 | Reductive group | Concept in mathematics
In mathematics, a reductive group is a type of linear algebraic group over a field. One definition is that a connected linear algebraic group "G" over a perfect field is reductive if it has a representation that has a finite kernel and is a direct sum of irreducible representations. Reductive groups include some of the most important groups in mathematics, such as the general linear group "GL"("n") of invertible matrices, the special orthogonal group "SO"("n"), and the symplectic group "Sp"(2"n"). Simple algebraic groups and (more generally) semisimple algebraic groups are reductive.
Claude Chevalley showed that the classification of reductive groups is the same over any algebraically closed field. In particular, the simple algebraic groups are classified by Dynkin diagrams, as in the theory of compact Lie groups or complex semisimple Lie algebras. Reductive groups over an arbitrary field are harder to classify, but for many fields such as the real numbers R or a number field, the classification is well understood. The classification of finite simple groups says that most finite simple groups arise as the group "G"("k") of "k"-rational points of a simple algebraic group "G" over a finite field "k", or as minor variants of that construction.
Reductive groups have a rich representation theory in various contexts. First, one can study the representations of a reductive group "G" over a field "k" as an algebraic group, which are actions of "G" on "k"-vector spaces. But also, one can study the complex representations of the group "G"("k") when "k" is a finite field, or the infinite-dimensional unitary representations of a real reductive group, or the automorphic representations of an adelic algebraic group. The structure theory of reductive groups is used in all these areas.
Definitions.
A linear algebraic group over a field "k" is defined as a smooth closed subgroup scheme of "GL"("n") over "k", for some positive integer "n". Equivalently, a linear algebraic group over "k" is a smooth affine group scheme over "k".
With the unipotent radical.
A connected linear algebraic group formula_0 over an algebraically closed field is called semisimple if every smooth connected solvable normal subgroup of formula_0 is trivial. More generally, a connected linear algebraic group formula_0 over an algebraically closed field is called reductive if the largest smooth connected unipotent normal subgroup of formula_0 is trivial. This normal subgroup is called the unipotent radical and is denoted formula_1. (Some authors do not require reductive groups to be connected.) A group formula_0 over an arbitrary field "k" is called semisimple or reductive if the base change formula_2 is semisimple or reductive, where formula_3 is an algebraic closure of "k". (This is equivalent to the definition of reductive groups in the introduction when "k" is perfect.) Any torus over "k", such as the multiplicative group "G""m", is reductive.
With representation theory.
Over fields of characteristic zero, another equivalent definition of a reductive group is a connected group formula_0 admitting a faithful semisimple representation that remains semisimple over its algebraic closure formula_4.
Simple reductive groups.
A linear algebraic group "G" over a field "k" is called simple (or "k"-simple) if it is semisimple, nontrivial, and every smooth connected normal subgroup of "G" over "k" is trivial or equal to "G". (Some authors call this property "almost simple".) This differs slightly from the terminology for abstract groups, in that a simple algebraic group may have nontrivial center (although the center must be finite). For example, for any integer "n" at least 2 and any field "k", the group "SL"("n") over "k" is simple, and its center is the group scheme μ"n" of "n"th roots of unity.
A central isogeny of reductive groups is a surjective homomorphism with kernel a finite central subgroup scheme. Every reductive group over a field admits a central isogeny from the product of a torus and some simple groups. For example, over any field "k",
formula_5
It is slightly awkward that the definition of a reductive group over a field involves passage to the algebraic closure. For a perfect field "k", that can be avoided: a linear algebraic group "G" over "k" is reductive if and only if every smooth connected unipotent normal "k"-subgroup of "G" is trivial. For an arbitrary field, the latter property defines a pseudo-reductive group, which is somewhat more general.
Split-reductive groups.
A reductive group "G" over a field "k" is called split if it contains a split maximal torus "T" over "k" (that is, a split torus in "G" whose base change to formula_3 is a maximal torus in formula_2). It is equivalent to say that "T" is a split torus in "G" that is maximal among all "k"-tori in "G". These kinds of groups are useful because their classification can be described through combinatorical data called root data.
Examples.
GL"n" and SL"n".
A fundamental example of a reductive group is the general linear group formula_6 of invertible "n" × "n" matrices over a field "k", for a natural number "n". In particular, the multiplicative group "G""m" is the group "GL"(1), and so its group "G""m"("k") of "k"-rational points is the group "k"* of nonzero elements of "k" under multiplication. Another reductive group is the special linear group "SL"("n") over a field "k", the subgroup of matrices with determinant 1. In fact, "SL"("n") is a simple algebraic group for "n" at least 2.
O("n"), SO("n"), and Sp("n").
An important simple group is the symplectic group "Sp"(2"n") over a field "k", the subgroup of "GL"(2"n") that preserves a nondegenerate alternating bilinear form on the vector space "k"2"n". Likewise, the orthogonal group "O"("q") is the subgroup of the general linear group that preserves a nondegenerate quadratic form "q" on a vector space over a field "k". The algebraic group "O"("q") has two connected components, and its identity component "SO"("q") is reductive, in fact simple for "q" of dimension "n" at least 3. (For "k" of characteristic 2 and "n" odd, the group scheme "O"("q") is in fact connected but not smooth over "k". The simple group "SO"("q") can always be defined as the maximal smooth connected subgroup of "O"("q") over "k".) When "k" is algebraically closed, any two (nondegenerate) quadratic forms of the same dimension are isomorphic, and so it is reasonable to call this group "SO"("n"). For a general field "k", different quadratic forms of dimension "n" can yield non-isomorphic simple groups "SO"("q") over "k", although they all have the same base change to the algebraic closure formula_3.
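As a small symbolic check of these definitions (our own, using sympy), one can verify the classical coincidence that "Sp"(2) = "SL"(2): a 2 × 2 matrix preserves the standard alternating form exactly when its determinant is 1.

```python
from sympy import symbols, Matrix, simplify

a, b, c, d = symbols('a b c d')
M = Matrix([[a, b], [c, d]])
J = Matrix([[0, 1], [-1, 0]])   # the standard nondegenerate alternating form on k^2

# M^T J M equals det(M) * J, so M preserves the form if and only if det(M) = 1.
print(simplify(M.T * J * M - M.det() * J))   # the zero matrix
```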
Tori.
The group formula_7 and products of it are called algebraic tori. They are examples of reductive groups since they embed in formula_6 through the diagonal, and from this representation, their unipotent radical is trivial. For example, formula_8 embeds in formula_9 via the map formula_10
Non-examples.
Any nontrivial smooth connected unipotent group, such as the additive group "G""a", is not reductive, since its unipotent radical is the whole group. Likewise, the Borel subgroup of upper-triangular matrices in "GL"("n") for "n" ≥ 2 is not reductive, since its unipotent radical is the nontrivial subgroup of upper-triangular matrices with all diagonal entries equal to 1.
Associated reductive group.
Note that the normality of the unipotent radical formula_1 implies that the quotient group formula_15 is reductive. For example, formula_16
Other characterizations of reductive groups.
Every compact connected Lie group has a complexification, which is a complex reductive algebraic group. In fact, this construction gives a one-to-one correspondence between compact connected Lie groups and complex reductive groups, up to isomorphism. For a compact Lie group "K" with complexification "G", the inclusion from "K" into the complex reductive group "G"(C) is a homotopy equivalence, with respect to the classical topology on "G"(C). For example, the inclusion from the unitary group "U"("n") to "GL"("n",C) is a homotopy equivalence.
For a reductive group "G" over a field of characteristic zero, all finite-dimensional representations of "G" (as an algebraic group) are completely reducible, that is, they are direct sums of irreducible representations. That is the source of the name "reductive". Note, however, that complete reducibility fails for reductive groups in positive characteristic (apart from tori). In more detail: an affine group scheme "G" of finite type over a field "k" is called linearly reductive if its finite-dimensional representations are completely reducible. For "k" of characteristic zero, "G" is linearly reductive if and only if the identity component "G"o of "G" is reductive. For "k" of characteristic "p">0, however, Masayoshi Nagata showed that "G" is linearly reductive if and only if "G"o is of multiplicative type and "G"/"G"o has order prime to "p".
Roots.
The classification of reductive algebraic groups is in terms of the associated root system, as in the theories of complex semisimple Lie algebras or compact Lie groups. Here is the way roots appear for reductive groups.
Let "G" be a split reductive group over a field "k", and let "T" be a split maximal torus in "G"; so "T" is isomorphic to ("G""m")"n" for some "n", with "n" called the rank of "G". Every representation of "T" (as an algebraic group) is a direct sum of 1-dimensional representations. A weight for "G" means an isomorphism class of 1-dimensional representations of "T", or equivalently a homomorphism "T" → "G""m". The weights form a group "X"("T") under tensor product of representations, with "X"("T") isomorphic to the product of "n" copies of the integers, Z"n".
The adjoint representation is the action of "G" by conjugation on its Lie algebra formula_17. A root of "G" means a nonzero weight that occurs in the action of "T" ⊂ "G" on formula_17. The subspace of formula_17 corresponding to each root is 1-dimensional, and the subspace of formula_17 fixed by "T" is exactly the Lie algebra formula_18 of "T". Therefore, the Lie algebra of "G" decomposes into formula_18 together with 1-dimensional subspaces indexed by the set Φ of roots:
formula_19
For example, when "G" is the group "GL"("n"), its Lie algebra formula_20 is the vector space of all "n" × "n" matrices over "k". Let "T" be the subgroup of diagonal matrices in "G". Then the root-space decomposition expresses formula_20 as the direct sum of the diagonal matrices and the 1-dimensional subspaces indexed by the off-diagonal positions ("i", "j"). Writing "L"1...,"L""n" for the standard basis for the weight lattice "X"("T") ≅ Z"n", the roots are the elements "L""i" − "L""j" for all "i" ≠ "j" from 1 to "n".
The roots of a semisimple group form a root system; this is a combinatorial structure which can be completely classified. More generally, the roots of a reductive group form a root datum, a slight variation. The Weyl group of a reductive group "G" means the quotient group of the normalizer of a maximal torus by the torus, "W" = "N""G"("T")/"T". The Weyl group is in fact a finite group generated by reflections. For example, for the group "GL"("n") (or "SL"("n")), the Weyl group is the symmetric group "S""n".
There are finitely many Borel subgroups containing a given maximal torus, and they are permuted simply transitively by the Weyl group (acting by conjugation). A choice of Borel subgroup determines a set of positive roots Φ+ ⊂ Φ, with the property that Φ is the disjoint union of Φ+ and −Φ+. Explicitly, the Lie algebra of "B" is the direct sum of the Lie algebra of "T" and the positive root spaces:
formula_21
For example, if "B" is the Borel subgroup of upper-triangular matrices in "GL"("n"), then this is the obvious decomposition of the subspace formula_22 of upper-triangular matrices in formula_20. The positive roots are "L""i" − "L""j" for 1 ≤ "i" < "j" ≤ "n".
A simple root means a positive root that is not a sum of two other positive roots. Write Δ for the set of simple roots. The number "r" of simple roots is equal to the rank of the commutator subgroup of "G", called the semisimple rank of "G" (which is simply the rank of "G" if "G" is semisimple). For example, the simple roots for "GL"("n") (or "SL"("n")) are "L""i" − "L""i"+1 for 1 ≤ "i" ≤ "n" − 1.
Root systems are classified by the corresponding Dynkin diagram, which is a finite graph (with some edges directed or multiple). The set of vertices of the Dynkin diagram is the set of simple roots. In short, the Dynkin diagram describes the angles between the simple roots and their relative lengths, with respect to a Weyl group-invariant inner product on the weight lattice. The connected Dynkin diagrams (corresponding to simple groups) are those of types A"n", B"n", C"n", D"n", E6, E7, E8, F4, and G2, as listed in the classification below.
For a split reductive group "G" over a field "k", an important point is that a root α determines not just a 1-dimensional subspace of the Lie algebra of "G", but also a copy of the additive group "G"a in "G" with the given Lie algebra, called a root subgroup "U"α. The root subgroup is the unique copy of the additive group in "G" which is normalized by "T" and which has the given Lie algebra. The whole group "G" is generated (as an algebraic group) by "T" and the root subgroups, while the Borel subgroup "B" is generated by "T" and the positive root subgroups. In fact, a split semisimple group "G" is generated by the root subgroups alone.
Parabolic subgroups.
For a split reductive group "G" over a field "k", the smooth connected subgroups of "G" that contain a given Borel subgroup "B" of "G" are in one-to-one correspondence with the subsets of the set Δ of simple roots (or equivalently, the subsets of the set of vertices of the Dynkin diagram). Let "r" be the order of Δ, the semisimple rank of "G". Every parabolic subgroup of "G" is conjugate to a subgroup containing "B" by some element of "G"("k"). As a result, there are exactly 2"r" conjugacy classes of parabolic subgroups in "G" over "k". Explicitly, the parabolic subgroup corresponding to a given subset "S" of Δ is the group generated by "B" together with the root subgroups "U"−α for α in "S". For example, the parabolic subgroups of "GL"("n") that contain the Borel subgroup "B" above are the groups of invertible matrices with zero entries below a given set of squares along the diagonal, such as:
formula_23
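The count of 2"r" conjugacy classes can be made concrete for "GL"("n") with a few lines of Python (ours): the standard parabolic subgroups containing the upper-triangular Borel subgroup correspond to the compositions of "n", that is, to the possible block sizes along the diagonal.

```python
from itertools import combinations

def block_size_patterns(n):
    """Block sizes of the standard parabolic subgroups of GL(n) containing the
    Borel subgroup of upper-triangular matrices: one pattern per subset of the
    n-1 possible cut points, giving 2^(n-1) parabolics in total."""
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            pts = (0,) + cuts + (n,)
            yield [pts[k + 1] - pts[k] for k in range(len(pts) - 1)]

for sizes in block_size_patterns(3):
    print(sizes)    # [3], [1, 2], [2, 1], [1, 1, 1] -- 2^2 = 4 for GL(3)
```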
By definition, a parabolic subgroup "P" of a reductive group "G" over a field "k" is a smooth "k"-subgroup such that the quotient variety "G"/"P" is proper over "k", or equivalently projective over "k". Thus the classification of parabolic subgroups amounts to a classification of the projective homogeneous varieties for "G" (with smooth stabilizer group; that is no restriction for "k" of characteristic zero). For "GL"("n"), these are the flag varieties, parametrizing sequences of linear subspaces of given dimensions "a"1...,"a""i" contained in a fixed vector space "V" of dimension "n":
formula_24
For the orthogonal group or the symplectic group, the projective homogeneous varieties have a similar description as varieties of isotropic flags with respect to a given quadratic form or symplectic form. For any reductive group "G" with a Borel subgroup "B", "G"/"B" is called the flag variety or flag manifold of "G".
Classification of split reductive groups.
Chevalley showed in 1958 that the reductive groups over any algebraically closed field are classified up to isomorphism by root data. In particular, the semisimple groups over an algebraically closed field are classified up to central isogenies by their Dynkin diagram, and the simple groups correspond to the connected diagrams. Thus there are simple groups of types A"n", B"n", C"n", D"n", E6, E7, E8, F4, G2. This result is essentially identical to the classifications of compact Lie groups or complex semisimple Lie algebras, by Wilhelm Killing and Élie Cartan in the 1880s and 1890s. In particular, the dimensions, centers, and other properties of the simple algebraic groups can be read from the list of simple Lie groups. It is remarkable that the classification of reductive groups is independent of the characteristic. For comparison, there are many more simple Lie algebras in positive characteristic than in characteristic zero.
The exceptional groups "G" of type G2 and E6 had been constructed earlier, at least in the form of the abstract group "G"("k"), by L. E. Dickson. For example, the group "G"2 is the automorphism group of an octonion algebra over "k". By contrast, the Chevalley groups of type F4, E7, E8 over a field of positive characteristic were completely new.
More generally, the classification of "split" reductive groups is the same over any field. A semisimple group "G" over a field "k" is called simply connected if every central isogeny from a semisimple group to "G" is an isomorphism. (For "G" semisimple over the complex numbers, being simply connected in this sense is equivalent to "G"(C) being simply connected in the classical topology.) Chevalley's classification gives that, over any field "k", there is a unique simply connected split semisimple group "G" with a given Dynkin diagram, with simple groups corresponding to the connected diagrams. At the other extreme, a semisimple group is of adjoint type if its center is trivial. The split semisimple groups over "k" with given Dynkin diagram are exactly the groups "G"/"A", where "G" is the simply connected group and "A" is a "k"-subgroup scheme of the center of "G".
For example, the simply connected split simple groups over a field "k" corresponding to the "classical" Dynkin diagrams are as follows:
formula_25
formula_26
The outer automorphism group of a split reductive group "G" over a field "k" is isomorphic to the automorphism group of the root datum of "G". Moreover, the automorphism group of "G" splits as a semidirect product:
formula_27
where "Z" is the center of "G". For a split semisimple simply connected group "G" over a field, the outer automorphism group of "G" has a simpler description: it is the automorphism group of the Dynkin diagram of "G".
Reductive group schemes.
A group scheme "G" over a scheme "S" is called reductive if the morphism "G" → "S" is smooth and affine, and every geometric fiber formula_2 is reductive. (For a point "p" in "S", the corresponding geometric fiber means the base change of "G" to an algebraic closure formula_3 of the residue field of "p".) Extending Chevalley's work, Michel Demazure and Grothendieck showed that split reductive group schemes over any nonempty scheme "S" are classified by root data. This statement includes the existence of Chevalley groups as group schemes over Z, and it says that every split reductive group over a scheme "S" is isomorphic to the base change of a Chevalley group from Z to "S".
Real reductive groups.
In the context of Lie groups rather than algebraic groups, a real reductive group is a Lie group "G" such that there is a linear algebraic group "L" over R whose identity component (in the Zariski topology) is reductive, and a homomorphism "G" → "L"(R) whose kernel is finite and whose image is open in "L"(R) (in the classical topology). It is also standard to assume that the image of the adjoint representation Ad("G") is contained in Int("g"C) = Ad("L"0(C)) (which is automatic for "G" connected).
In particular, every connected semisimple Lie group (meaning that its Lie algebra is semisimple) is reductive. Also, the Lie group R is reductive in this sense, since it can be viewed as the identity component of "GL"(1,R) ≅ R*. The problem of classifying the real reductive groups largely reduces to classifying the simple Lie groups. These are classified by their Satake diagram; or one can just refer to the list of simple Lie groups (up to finite coverings).
Useful theories of admissible representations and unitary representations have been developed for real reductive groups in this generality. The main differences between this definition and the definition of a reductive algebraic group have to do with the fact that an algebraic group "G" over R may be connected as an algebraic group while the Lie group "G"(R) is not connected, and likewise for simply connected groups.
For example, the projective linear group "PGL"(2) is connected as an algebraic group over any field, but its group of real points "PGL"(2,R) has two connected components. The identity component of "PGL"(2,R) (sometimes called "PSL"(2,R)) is a real reductive group that cannot be viewed as an algebraic group. Similarly, "SL"(2) is simply connected as an algebraic group over any field, but the Lie group "SL"(2,R) has fundamental group isomorphic to the integers Z, and so "SL"(2,R) has nontrivial covering spaces. By definition, all finite coverings of "SL"(2,R) (such as the metaplectic group) are real reductive groups. On the other hand, the universal cover of "SL"(2,R) is not a real reductive group, even though its Lie algebra is reductive, that is, the product of a semisimple Lie algebra and an abelian Lie algebra.
For a connected real reductive group "G", the quotient manifold "G"/"K" of "G" by a maximal compact subgroup "K" is a symmetric space of non-compact type. In fact, every symmetric space of non-compact type arises this way. These are central examples in Riemannian geometry of manifolds with nonpositive sectional curvature. For example, "SL"(2,R)/"SO"(2) is the hyperbolic plane, and "SL"(2,C)/"SU"(2) is hyperbolic 3-space.
For a reductive group "G" over a field "k" that is complete with respect to a discrete valuation (such as the p-adic numbers Q"p"), the affine building "X" of "G" plays the role of the symmetric space. Namely, "X" is a simplicial complex with an action of "G"("k"), and "G"("k") preserves a CAT(0) metric on "X", the analog of a metric with nonpositive curvature. The dimension of the affine building is the "k"-rank of "G". For example, the building of "SL"(2,Q"p") is a tree.
Representations of reductive groups.
For a split reductive group "G" over a field "k", the irreducible representations of "G" (as an algebraic group) are parametrized by the dominant weights, which are defined as the intersection of the weight lattice "X"("T") ≅ Z"n" with a convex cone (a Weyl chamber) in R"n". In particular, this parametrization is independent of the characteristic of "k". In more detail, fix a split maximal torus and a Borel subgroup, "T" ⊂ "B" ⊂ "G". Then "B" is the semidirect product of "T" with a smooth connected unipotent subgroup "U". Define a highest weight vector in a representation "V" of "G" over "k" to be a nonzero vector "v" such that "B" maps the line spanned by "v" into itself. Then "B" acts on that line through its quotient group "T", by some element λ of the weight lattice "X"("T"). Chevalley showed that every irreducible representation of "G" has a unique highest weight vector up to scalars; the corresponding "highest weight" λ is dominant; and every dominant weight λ is the highest weight of a unique irreducible representation "L"(λ) of "G", up to isomorphism.
There remains the problem of describing the irreducible representation with given highest weight. For "k" of characteristic zero, there are essentially complete answers. For a dominant weight λ, define the Schur module ∇(λ) as the "k"-vector space of sections of the "G"-equivariant line bundle on the flag manifold "G"/"B" associated to λ; this is a representation of "G". For "k" of characteristic zero, the Borel–Weil theorem says that the irreducible representation "L"(λ) is isomorphic to the Schur module ∇(λ). Furthermore, the Weyl character formula gives the character (and in particular the dimension) of this representation.
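In characteristic zero the dimension promised by the Weyl character formula is easy to compute; here is a small Python sketch (ours) of the Weyl dimension formula specialized to "GL"("n").

```python
from math import prod
from itertools import combinations

def weyl_dimension_gl(weights):
    """Dimension of the irreducible representation of GL(n) with dominant highest
    weight (lambda_1 >= ... >= lambda_n), via the Weyl dimension formula
    prod_{i<j} (lambda_i - lambda_j + j - i) / (j - i)."""
    n = len(weights)
    pairs = list(combinations(range(n), 2))
    num = prod(weights[i] - weights[j] + j - i for i, j in pairs)
    den = prod(j - i for i, j in pairs)
    return num // den

print(weyl_dimension_gl((1, 0, 0)))   # 3:  the standard representation of GL(3)
print(weyl_dimension_gl((2, 1, 0)))   # 8:  the adjoint representation of SL(3)
print(weyl_dimension_gl((3, 0, 0)))   # 10: the third symmetric power of the standard representation
```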
For a split reductive group "G" over a field "k" of positive characteristic, the situation is far more subtle, because representations of "G" are typically not direct sums of irreducibles. For a dominant weight λ, the irreducible representation "L"(λ) is the unique simple submodule (the socle) of the Schur module ∇(λ), but it need not be equal to the Schur module. The dimension and character of the Schur module are given by the Weyl character formula (as in characteristic zero), by George Kempf. The dimensions and characters of the irreducible representations "L"(λ) are in general unknown, although a large body of theory has been developed to analyze these representations. One important result is that the dimension and character of "L"(λ) are known when the characteristic "p" of "k" is much bigger than the Coxeter number of "G", by Henning Andersen, Jens Jantzen, and Wolfgang Soergel (proving Lusztig's conjecture in that case). Their character formula for "p" large is based on the Kazhdan–Lusztig polynomials, which are combinatorially complex. For any prime "p", Simon Riche and Geordie Williamson conjectured the irreducible characters of a reductive group in terms of the "p"-Kazhdan-Lusztig polynomials, which are even more complex, but at least are computable.
Non-split reductive groups.
As discussed above, the classification of split reductive groups is the same over any field. By contrast, the classification of arbitrary reductive groups can be hard, depending on the base field. Some examples among the classical groups are the special orthogonal groups "SO"("q") of nondegenerate quadratic forms "q" over "k" and the groups "SL"(1,"A") of elements of reduced norm 1 in central simple algebras "A" over "k".
As a result, the problem of classifying reductive groups over "k" essentially includes the problem of classifying all quadratic forms over "k" or all central simple algebras over "k". These problems are easy for "k" algebraically closed, and they are understood for some other fields such as number fields, but for arbitrary fields there are many open questions.
A reductive group over a field "k" is called isotropic if it has "k"-rank greater than 0 (that is, if it contains a nontrivial split torus), and otherwise anisotropic. For a semisimple group "G" over a field "k", the following conditions are equivalent:
For "k" perfect, it is also equivalent to say that "G"("k") contains a unipotent element other than 1.
For a connected linear algebraic group "G" over a local field "k" of characteristic zero (such as the real numbers), the group "G"("k") is compact in the classical topology (based on the topology of "k") if and only if "G" is reductive and anisotropic. Example: the orthogonal group "SO"("p","q") over R has real rank min("p","q"), and so it is anisotropic if and only if "p" or "q" is zero.
A reductive group "G" over a field "k" is called quasi-split if it contains a Borel subgroup over "k". A split reductive group is quasi-split. If "G" is quasi-split over "k", then any two Borel subgroups of "G" are conjugate by some element of "G"("k"). Example: the orthogonal group "SO"("p","q") over R is split if and only if |"p"−"q"| ≤ 1, and it is quasi-split if and only if |"p"−"q"| ≤ 2.
Structure of semisimple groups as abstract groups.
For a simply connected split semisimple group "G" over a field "k", Robert Steinberg gave an explicit presentation of the abstract group "G"("k"). It is generated by copies of the additive group of "k" indexed by the roots of "G" (the root subgroups), with relations determined by the Dynkin diagram of "G".
For a simply connected split semisimple group "G" over a perfect field "k", Steinberg also determined the automorphism group of the abstract group "G"("k"). Every automorphism is the product of an inner automorphism, a diagonal automorphism (meaning conjugation by a suitable formula_3-point of a maximal torus), a graph automorphism (corresponding to an automorphism of the Dynkin diagram), and a field automorphism (coming from an automorphism of the field "k").
For a "k"-simple algebraic group "G", Tits's simplicity theorem says that the abstract group "G"("k") is close to being simple, under mild assumptions. Namely, suppose that "G" is isotropic over "k", and suppose that the field "k" has at least 4 elements. Let "G"("k")+ be the subgroup of the abstract group "G"("k") generated by "k"-points of copies of the additive group "G""a" over "k" contained in "G". (By the assumption that "G" is isotropic over "k", the group "G"("k")+ is nontrivial, and even Zariski dense in "G" if "k" is infinite.) Then the quotient group of "G"("k")+ by its center is simple (as an abstract group). The proof uses Jacques Tits's machinery of BN-pairs.
The exceptions for fields of order 2 or 3 are well understood. For "k" = F2, Tits's simplicity theorem remains valid except when "G" is split of type "A"1, "B"2, or "G"2, or non-split (that is, unitary) of type "A"2. For "k" = F3, the theorem holds except for "G" of type "A"1.
For a "k"-simple group "G", in order to understand the whole group "G"("k"), one can consider the Whitehead group "W"("k","G")="G"("k")/"G"("k")+. For "G" simply connected and quasi-split, the Whitehead group is trivial, and so the whole group "G"("k") is simple modulo its center. More generally, the Kneser–Tits problem asks for which isotropic "k"-simple groups the Whitehead group is trivial. In all known examples, "W"("k","G") is abelian.
For an anisotropic "k"-simple group "G", the abstract group "G"("k") can be far from simple. For example, let "D" be a division algebra with center a "p"-adic field "k". Suppose that the dimension of "D" over "k" is finite and greater than 1. Then "G" = "SL"(1,"D") is an anisotropic "k"-simple group. As mentioned above, "G"("k") is compact in the classical topology. Since it is also totally disconnected, "G"("k") is a profinite group (but not finite). As a result, "G"("k") contains infinitely many normal subgroups of finite index.
Lattices and arithmetic groups.
Let "G" be a linear algebraic group over the rational numbers Q. Then "G" can be extended to an affine group scheme "G" over Z, and this determines an abstract group "G"(Z). An arithmetic group means any subgroup of "G"(Q) that is commensurable with "G"(Z). (Arithmeticity of a subgroup of "G"(Q) is independent of the choice of Z-structure.) For example, "SL"("n",Z) is an arithmetic subgroup of "SL"("n",Q).
For a Lie group "G", a lattice in "G" means a discrete subgroup Γ of "G" such that the manifold "G"/Γ has finite volume (with respect to a "G"-invariant measure). For example, a discrete subgroup Γ is a lattice if "G"/Γ is compact. The Margulis arithmeticity theorem says, in particular: for a simple Lie group "G" of real rank at least 2, every lattice in "G" is an arithmetic group.
The Galois action on the Dynkin diagram.
In seeking to classify reductive groups which need not be split, one step is the Tits index, which reduces the problem to the case of anisotropic groups. This reduction generalizes several fundamental theorems in algebra. For example, Witt's decomposition theorem says that a nondegenerate quadratic form over a field is determined up to isomorphism by its Witt index together with its anisotropic kernel. Likewise, the Artin–Wedderburn theorem reduces the classification of central simple algebras over a field to the case of division algebras. Generalizing these results, Tits showed that a reductive group over a field "k" is determined up to isomorphism by its Tits index together with its anisotropic kernel, an associated anisotropic semisimple "k"-group.
For a reductive group "G" over a field "k", the absolute Galois group Gal("k""s"/"k") acts (continuously) on the "absolute" Dynkin diagram of "G", that is, the Dynkin diagram of "G" over a separable closure "k"s (which is also the Dynkin diagram of "G" over an algebraic closure formula_29). The Tits index of "G" consists of the root datum of "G""k""s", the Galois action on its Dynkin diagram, and a Galois-invariant subset of the vertices of the Dynkin diagram. Traditionally, the Tits index is drawn by circling the Galois orbits in the given subset.
There is a full classification of quasi-split groups in these terms. Namely, for each action of the absolute Galois group of a field "k" on a Dynkin diagram, there is a unique simply connected semisimple quasi-split group "H" over "k" with the given action. (For a quasi-split group, every Galois orbit in the Dynkin diagram is circled.) Moreover, any other simply connected semisimple group "G" over "k" with the given action is an inner form of the quasi-split group "H", meaning that "G" is the group associated to an element of the Galois cohomology set "H"1("k","H"/"Z"), where "Z" is the center of "H". In other words, "G" is the twist of "H" associated to some "H"/"Z"-torsor over "k", as discussed in the next section.
Example: Let "q" be a nondegenerate quadratic form of even dimension 2"n" over a field "k" of characteristic not 2, with "n" ≥ 5. (These restrictions can be avoided.) Let "G" be the simple group "SO"("q") over "k". The absolute Dynkin diagram of "G" is of type D"n", and so its automorphism group is of order 2, switching the two "legs" of the D"n" diagram. The action of the absolute Galois group of "k" on the Dynkin diagram is trivial if and only if the signed discriminant "d" of "q" in "k"*/("k"*)2 is trivial. If "d" is nontrivial, then it is encoded in the Galois action on the Dynkin diagram: the index-2 subgroup of the Galois group that acts as the identity is formula_30. The group "G" is split if and only if "q" has Witt index "n", the maximum possible, and "G" is quasi-split if and only if "q" has Witt index at least "n" − 1.
Torsors and the Hasse principle.
A torsor for an affine group scheme "G" over a field "k" means an affine scheme "X" over "k" with an action of "G" such that formula_31 is isomorphic to formula_2 with the action of formula_2 on itself by left translation. A torsor can also be viewed as a principal G-bundle over "k" with respect to the fppf topology on "k", or the étale topology if "G" is smooth over "k". The pointed set of isomorphism classes of "G"-torsors over "k" is called "H"1("k","G"), in the language of Galois cohomology.
Torsors arise whenever one seeks to classify forms of a given algebraic object "Y" over a field "k", meaning objects "X" over "k" which become isomorphic to "Y" over the algebraic closure of "k". Namely, such forms (up to isomorphism) are in one-to-one correspondence with the set "H"1("k",Aut("Y")). For example, (nondegenerate) quadratic forms of dimension "n" over "k" are classified by "H"1("k","O"("n")), and central simple algebras of degree "n" over "k" are classified by "H"1("k","PGL"("n")). Also, "k"-forms of a given algebraic group "G" (sometimes called "twists" of "G") are classified by "H"1("k",Aut("G")). These problems motivate the systematic study of "G"-torsors, especially for reductive groups "G".
When possible, one hopes to classify "G"-torsors using cohomological invariants, which are invariants taking values in Galois cohomology with "abelian" coefficient groups "M", "H""a"("k","M"). In this direction, Steinberg proved Serre's "Conjecture I": for a connected linear algebraic group "G" over a perfect field of cohomological dimension at most 1, "H"1("k","G") = 1. (The case of a finite field was known earlier, as Lang's theorem.) It follows, for example, that every reductive group over a finite field is quasi-split.
Serre's Conjecture II predicts that for a simply connected semisimple group "G" over a field of cohomological dimension at most 2, "H"1("k","G") = 1. The conjecture is known for a totally imaginary number field (which has cohomological dimension 2). More generally, for any number field "k", Martin Kneser, Günter Harder and Vladimir Chernousov (1989) proved the Hasse principle: for a simply connected semisimple group "G" over "k", the map
formula_32
is bijective. Here "v" runs over all places of "k", and "k""v" is the corresponding local field (possibly R or C). Moreover, the pointed set "H"1("k""v","G") is trivial for every non-archimedean local field "k""v", and so only the real places of "k" matter. The analogous result for a global field "k" of positive characteristic was proved earlier by Harder (1975): for every simply connected semisimple group "G" over "k", "H"1("k","G") is trivial (since "k" has no real places).
In the slightly different case of an adjoint group "G" over a number field "k", the Hasse principle holds in a weaker form: the natural map
formula_32
is injective. For "G" = "PGL"("n"), this amounts to the Albert–Brauer–Hasse–Noether theorem, saying that a central simple algebra over a number field is determined by its local invariants.
Building on the Hasse principle, the classification of semisimple groups over number fields is well understood. For example, there are exactly three Q-forms of the exceptional group E8, corresponding to the three real forms of E8.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "R_u(G)"
},
{
"math_id": 2,
"text": "G_{\\overline k}"
},
{
"math_id": 3,
"text": "\\overline k"
},
{
"math_id": 4,
"text": "k^{al}"
},
{
"math_id": 5,
"text": "GL(n)\\cong (G_m\\times SL(n))/\\mu_n."
},
{
"math_id": 6,
"text": "\\text{GL}_n"
},
{
"math_id": 7,
"text": "\\mathbb{G}_m"
},
{
"math_id": 8,
"text": "\\mathbb{G}_m\\times \\mathbb {G}_m"
},
{
"math_id": 9,
"text": "\\text{GL}_2"
},
{
"math_id": 10,
"text": "(a_1,a_2) \\mapsto \\begin{bmatrix}\na_1 & 0 \\\\\n0 & a_2\n\\end{bmatrix}."
},
{
"math_id": 11,
"text": "\\mathbb{G}_a"
},
{
"math_id": 12,
"text": "B_n"
},
{
"math_id": 13,
"text": "\\mathbb{U}_n"
},
{
"math_id": 14,
"text": "1"
},
{
"math_id": 15,
"text": "G/R_u(G)"
},
{
"math_id": 16,
"text": "B_n/(R_u(B_n)) \\cong \\prod^n_{i=1} \\mathbb{G}_m."
},
{
"math_id": 17,
"text": "\\mathfrak g"
},
{
"math_id": 18,
"text": "\\mathfrak t"
},
{
"math_id": 19,
"text": "{\\mathfrak g} = {\\mathfrak t}\\oplus \\bigoplus_{\\alpha\\in\\Phi} {\\mathfrak g}_{\\alpha}."
},
{
"math_id": 20,
"text": "{\\mathfrak gl}(n)"
},
{
"math_id": 21,
"text": "{\\mathfrak b}={\\mathfrak t}\\oplus \\bigoplus_{\\alpha\\in\\Phi^{+}} {\\mathfrak g}_{\\alpha}."
},
{
"math_id": 22,
"text": "\\mathfrak b"
},
{
"math_id": 23,
"text": "\\left \\{ \\begin{bmatrix}\n * & * & * & *\\\\\n * & * & * & *\\\\\n 0 & 0 & * & *\\\\\n 0 & 0 & 0 & *\n\\end{bmatrix} \\right \\}"
},
{
"math_id": 24,
"text": "0\\subset S_{a_1}\\subset \\cdots \\subset S_{a_i}\\subset V."
},
{
"math_id": 25,
"text": "q(x_1,\\ldots,x_{2n+1})=x_1x_2+x_3x_4+\\cdots+x_{2n-1}x_{2n}+x_{2n+1}^2;"
},
{
"math_id": 26,
"text": "q(x_1,\\ldots,x_{2n})=x_1x_2+x_3x_4+\\cdots+x_{2n-1}x_{2n}."
},
{
"math_id": 27,
"text": "\\operatorname{Aut}(G)\\cong \\operatorname{Out}(G)\\ltimes (G/Z)(k),"
},
{
"math_id": 28,
"text": "\\lfloor n/2\\rfloor"
},
{
"math_id": 29,
"text": "{\\overline k}"
},
{
"math_id": 30,
"text": "\\operatorname{Gal}(k_s/k(\\sqrt{d}))\\subset \\operatorname{Gal}(k_s/k)"
},
{
"math_id": 31,
"text": "X_{\\overline k}"
},
{
"math_id": 32,
"text": "H^1(k,G)\\to \\prod_{v} H^1(k_v,G)"
}
] | https://en.wikipedia.org/wiki?curid=1088386 |
1088425 | Mahler measure | Measure of polynomial height
In mathematics, the Mahler measure formula_0 of a polynomial formula_1 with complex coefficients is defined as
formula_2
where formula_1 factorizes over the complex numbers formula_3 as
formula_4
The Mahler measure can be viewed as a kind of height function. Using Jensen's formula, it can be proved that this measure is also equal to the geometric mean of formula_5 for formula_6 on the unit circle (i.e., formula_7):
formula_8
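As a numerical illustration of the equivalence of the two definitions, the following Python sketch (using NumPy and SciPy; the sample polynomial z^2 − z − 1 is an arbitrary choice) computes the Mahler measure once from the roots and once from the integral over the unit circle:

import numpy as np
from scipy.integrate import quad

coeffs = [1, -1, -1]   # p(z) = z^2 - z - 1, listed from the leading coefficient down

# Definition via the roots: M(p) = |a| * product of max(1, |alpha_i|).
roots = np.roots(coeffs)
M_roots = abs(coeffs[0]) * np.prod(np.maximum(1.0, np.abs(roots)))

# Definition via Jensen's formula: geometric mean of |p| on the unit circle.
def log_abs_p(theta):
    return np.log(abs(np.polyval(coeffs, np.exp(2j * np.pi * theta))))

M_integral = np.exp(quad(log_abs_p, 0.0, 1.0)[0])

print(M_roots, M_integral)   # both approximately 1.618, the golden ratio

The common value is the golden ratio, consistent with the statement below that the Mahler measure of a Pisot number is the number itself.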
By extension, the Mahler measure of an algebraic number formula_9 is defined as the Mahler measure of the minimal polynomial of formula_9 over formula_10. In particular, if formula_9 is a Pisot number or a Salem number, then its Mahler measure is simply formula_9.
The Mahler measure is named after the German-born Australian mathematician Kurt Mahler.
Higher-dimensional Mahler measure.
The Mahler measure formula_0 of a multi-variable polynomial formula_21 is defined similarly by the formula
formula_22
It inherits the above three properties of the Mahler measure for a one-variable polynomial.
The multi-variable Mahler measure has been shown, in some cases, to be related to special values
of zeta-functions and formula_23-functions. For example, in 1981, Smyth proved the formulas
formula_24
where formula_25 is a Dirichlet L-function, and
formula_26
where formula_27 is the Riemann zeta function. Here formula_28 is called the "logarithmic Mahler measure".
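Smyth's first formula can be checked numerically. The following Python sketch (a rough illustration only; the grid size and the truncation of the L-series are arbitrary choices) approximates the logarithmic Mahler measure of 1 + x + y by averaging over a grid on the 2-torus and compares it with (3√3/4π)·L(χ−3, 2):

import numpy as np

# Left-hand side: average of log|1 + x + y| over the 2-torus, on a midpoint grid
# (midpoints avoid hitting the zero locus exactly).
n = 2000
theta = (np.arange(n) + 0.5) / n
x = np.exp(2j * np.pi * theta)
lhs = np.mean(np.log(np.abs(1 + x[:, None] + x[None, :])))

# Right-hand side: (3*sqrt(3)/(4*pi)) * L(chi_{-3}, 2), with the Dirichlet series
# summed directly; chi_{-3}(k) is 1, -1, 0 for k = 1, 2, 0 mod 3.
k = np.arange(1, 200000)
chi = np.where(k % 3 == 1, 1, np.where(k % 3 == 2, -1, 0))
rhs = 3 * np.sqrt(3) / (4 * np.pi) * np.sum(chi / k.astype(float) ** 2)

print(lhs, rhs)   # both close to 0.3231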
Some results by Lawton and Boyd.
From the definition, the Mahler measure is viewed as the integrated values of polynomials over the torus (also see Lehmer's conjecture). If formula_15 vanishes on the torus formula_29, then the convergence of the integral defining formula_0 is not obvious, but it is known that formula_0 does converge and is equal to a limit of one-variable Mahler measures, which had been conjectured by Boyd.
This is formulated as follows: Let formula_30 denote the integers and define formula_31 . If formula_32 is a polynomial in formula_33 variables and formula_34 define the polynomial formula_35 of one variable by
formula_36
and define formula_37 by
formula_38
where formula_39.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Lawton) — Let formula_32 be a polynomial in "N" variables with complex coefficients. Then the following limit is valid (even if the condition that formula_40 is relaxed):
formula_41
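The theorem can be observed numerically in the two-variable case Q = 1 + x + y. The following Python sketch (specialised to N = 2, with arbitrarily chosen exponent vectors r) builds the one-variable polynomials Q_r and computes their logarithmic Mahler measures from the roots; the values drift toward m(1 + x + y) ≈ 0.3231 as q(r) grows:

import numpy as np

def log_mahler_Qr(coeff_dict, r):
    """log M(Q_r) for Q_r(z) = Q(z^r1, z^r2), with Q given as {(a, b): coefficient}."""
    degree = max(a * r[0] + b * r[1] for (a, b) in coeff_dict)
    c = np.zeros(degree + 1)
    for (a, b), v in coeff_dict.items():
        c[a * r[0] + b * r[1]] += v      # the monomial x^a y^b becomes z^(a*r1 + b*r2)
    roots = np.roots(c[::-1])            # np.roots expects the leading coefficient first
    return np.log(abs(c[-1]) * np.prod(np.maximum(1.0, np.abs(roots))))

Q = {(0, 0): 1, (1, 0): 1, (0, 1): 1}    # Q(x, y) = 1 + x + y
for r in [(1, 2), (3, 7), (10, 31), (50, 151)]:
    print(r, log_mahler_Qr(Q, r))        # q(r) = 2, 7, 31, 151 respectively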
Boyd's proposal.
Boyd provided more general statements than the above theorem. He pointed out that the classical Kronecker's theorem, which characterizes monic polynomials with integer coefficients all of whose roots are inside the unit disk, can be regarded as characterizing those polynomials of one variable whose measure is exactly 1, and that this result extends to polynomials in several variables.
Define an "extended cyclotomic polynomial" to be a polynomial of the form
formula_42
where formula_43 is the "m"-th cyclotomic polynomial, the formula_44 are integers, and the formula_45 are chosen minimally so that formula_46 is a polynomial in the formula_47. Let formula_48 be the set of polynomials that are products of monomials formula_49 and extended cyclotomic polynomials.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Boyd) — Let formula_50 be a polynomial with integer coefficients. Then formula_51 if and only if formula_52 is an element of formula_48.
This led Boyd to consider the set of values
formula_53
and the union formula_54. He made the far-reaching conjecture that the set of formula_55 is a closed subset of formula_56. An immediate consequence of this conjecture would be the truth of Lehmer's conjecture, albeit without an explicit lower bound. As Smyth's result suggests that formula_57 , Boyd further conjectures that
formula_58
Mahler measure and entropy.
An action formula_59 of formula_60 by automorphisms of a compact metrizable abelian group may be associated via duality to any countable module formula_33 over the ring formula_61. The topological entropy (which is equal to the measure-theoretic entropy) of this action, formula_62, is given by a Mahler measure (or is infinite). In the case of a cyclic module formula_63 for a non-zero polynomial formula_50 the formula proved by Lind, Schmidt, and Ward gives formula_64, the logarithmic Mahler measure of formula_52. In the general case, the entropy of the action is expressed as a sum of logarithmic Mahler measures over the generators of the principal associated prime ideals of the module. As pointed out earlier by Lind in the case formula_65 of a single compact group automorphism, this means that the set of possible values of the entropy of such actions is either all of formula_66 or a countable set depending on the solution to Lehmer's problem. Lind also showed that the infinite-dimensional torus formula_67 either has ergodic automorphisms of finite positive entropy or only has automorphisms of infinite entropy depending on the solution to Lehmer's problem.
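In the case n = 1, the cyclic module for a monic integer polynomial F corresponds to a toral automorphism whose entropy is log M(F), the sum of log|λ| over the roots λ of F outside the unit circle. A minimal Python sketch (using Lehmer's degree-10 polynomial purely as an illustrative choice, since its measure is the smallest known value greater than 1):

import numpy as np

# Lehmer's polynomial x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1,
# whose Mahler measure ~1.17628 is the smallest known value greater than 1.
F = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]   # coefficients, leading term first

roots = np.roots(F)                           # eigenvalues of the companion matrix of F
entropy = np.sum(np.log(np.abs(roots[np.abs(roots) > 1])))
print(entropy)                                # log M(F), approximately 0.16236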
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M(p)"
},
{
"math_id": 1,
"text": "p(z) "
},
{
"math_id": 2,
"text": "M(p) = |a|\\prod_{|\\alpha_i| \\ge 1} |\\alpha_i| = |a| \\prod_{i=1}^n \\max\\{1,|\\alpha_i|\\},"
},
{
"math_id": 3,
"text": "\\mathbb{C}"
},
{
"math_id": 4,
"text": "p(z) = a(z-\\alpha_1)(z-\\alpha_2)\\cdots(z-\\alpha_n)."
},
{
"math_id": 5,
"text": "|p(z)| "
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": "|z| = 1"
},
{
"math_id": 8,
"text": "M(p) = \\exp\\left(\\int_{0}^{1} \\ln(|p(e^{2\\pi i\\theta})|)\\, d\\theta \\right)."
},
{
"math_id": 9,
"text": "\\alpha"
},
{
"math_id": 10,
"text": "\\mathbb{Q}"
},
{
"math_id": 11,
"text": "\\forall p, q, \\,\\, M(p \\cdot q) = M(p) \\cdot M(q)."
},
{
"math_id": 12,
"text": "M(p) = \\lim_{\\tau \\to 0} \\|p\\|_{\\tau}"
},
{
"math_id": 13,
"text": "\\, \\|p\\|_\\tau =\\left(\\int_0^{1} |p(e^{2\\pi i\\theta})|^\\tau d\\theta \\right)^{1/\\tau} "
},
{
"math_id": 14,
"text": "L_\\tau"
},
{
"math_id": 15,
"text": "p"
},
{
"math_id": 16,
"text": "M(p) = 1"
},
{
"math_id": 17,
"text": "p(z) = z,"
},
{
"math_id": 18,
"text": "\\mu>1"
},
{
"math_id": 19,
"text": "M(p)=1"
},
{
"math_id": 20,
"text": "M(p)>\\mu"
},
{
"math_id": 21,
"text": "p(x_1,\\ldots,x_n) \\in \\mathbb{C}[x_1,\\ldots,x_n]"
},
{
"math_id": 22,
"text": "M(p) = \\exp\\left(\\int_0^{1} \\int_0^{1} \\cdots \\int_0^{1} \\log \\Bigl( \\bigl |p(e^{2\\pi i\\theta_1}, e^{2\\pi i\\theta_2}, \\ldots, e^{2\\pi i\\theta_n}) \\bigr| \\Bigr) \\, d\\theta_1\\, d\\theta_2\\cdots d\\theta_n \\right)."
},
{
"math_id": 23,
"text": "L"
},
{
"math_id": 24,
"text": " m(1+x+y)=\\frac{3\\sqrt{3}}{4\\pi}L(\\chi_{-3},2)"
},
{
"math_id": 25,
"text": "L(\\chi_{-3},s)"
},
{
"math_id": 26,
"text": " m(1+x+y+z)=\\frac{7}{2\\pi^2}\\zeta(3),"
},
{
"math_id": 27,
"text": "\\zeta"
},
{
"math_id": 28,
"text": " m(P)=\\log M(P)"
},
{
"math_id": 29,
"text": "(S^1)^n"
},
{
"math_id": 30,
"text": "\\mathbb{Z}"
},
{
"math_id": 31,
"text": "\\mathbb{Z}^N_+=\\{r=(r_1,\\dots,r_N)\\in\\mathbb{Z}^N:r_j\\ge0\\ \\text{for}\\ 1\\le j\\le N\\}"
},
{
"math_id": 32,
"text": "Q(z_1,\\dots,z_N)"
},
{
"math_id": 33,
"text": "N"
},
{
"math_id": 34,
"text": "r=(r_1,\\dots,r_N)\\in\\mathbb{Z}^N_+"
},
{
"math_id": 35,
"text": "Q_r(z)"
},
{
"math_id": 36,
"text": "Q_r(z):=Q(z^{r_1},\\dots,z^{r_N})"
},
{
"math_id": 37,
"text": "q(r)"
},
{
"math_id": 38,
"text": "q(r) := \\min \\left\\{H(s):s=(s_1,\\dots,s_N)\\in\\mathbb{Z}^N, s\\ne(0,\\dots,0)~\\text{and}~\\sum^N_{j=1} s_j r_j = 0 \\right\\}"
},
{
"math_id": 39,
"text": "H(s)=\\max\\{|s_j|:1\\le j\\le N\\}"
},
{
"math_id": 40,
"text": "r_i \\geq 0"
},
{
"math_id": 41,
"text": " \\lim_{q(r)\\to\\infty}M(Q_r)=M(Q)"
},
{
"math_id": 42,
"text": "\\Psi(z)=z_1^{b_1} \\dots z_n^{b_n}\\Phi_m(z_1^{v_1}\\dots z_n^{v_n}),"
},
{
"math_id": 43,
"text": "\\Phi_m(z)"
},
{
"math_id": 44,
"text": "v_i"
},
{
"math_id": 45,
"text": "b_i = \\max(0, -v_i\\deg\\Phi_m)"
},
{
"math_id": 46,
"text": "\\Psi(z)"
},
{
"math_id": 47,
"text": "z_i"
},
{
"math_id": 48,
"text": "K_n"
},
{
"math_id": 49,
"text": "\\pm z_1^{c_1}\\dots z_n^{c_n}"
},
{
"math_id": 50,
"text": " F(z_1,\\dots,z_n)\\in\\mathbb{Z}[z_1,\\ldots,z_n]"
},
{
"math_id": 51,
"text": "M(F)=1"
},
{
"math_id": 52,
"text": "F"
},
{
"math_id": 53,
"text": "L_n:=\\bigl\\{m(P(z_1,\\dots,z_n)):P\\in\\mathbb{Z}[z_1,\\dots,z_n]\\bigr\\},"
},
{
"math_id": 54,
"text": "{L}_\\infty = \\bigcup^\\infty_{n=1}L_n"
},
{
"math_id": 55,
"text": "{L}_\\infty"
},
{
"math_id": 56,
"text": "\\mathbb R"
},
{
"math_id": 57,
"text": "L_1\\subsetneqq L_2"
},
{
"math_id": 58,
"text": "L_1\\subsetneqq L_2\\subsetneqq L_3\\subsetneqq\\ \\cdots ."
},
{
"math_id": 59,
"text": "\\alpha_M"
},
{
"math_id": 60,
"text": "\\mathbb{Z}^n"
},
{
"math_id": 61,
"text": "R=\\mathbb{Z}[z_1^{\\pm1},\\dots,z_n^{\\pm1}]"
},
{
"math_id": 62,
"text": "h(\\alpha_N)"
},
{
"math_id": 63,
"text": "M=R/\\langle F\\rangle"
},
{
"math_id": 64,
"text": "h(\\alpha_N)=\\log M(F)"
},
{
"math_id": 65,
"text": "n=1"
},
{
"math_id": 66,
"text": "[0,\\infty]"
},
{
"math_id": 67,
"text": "\\mathbb{T}^{\\infty}"
}
] | https://en.wikipedia.org/wiki?curid=1088425 |
108848 | Hebron, Connecticut | Town in Connecticut, United States
Hebron is a town in Tolland County, Connecticut, United States. The town is part of the Capitol Planning Region. The population was 9,098 at the 2020 census. Hebron was incorporated May 26, 1708. In 2010, Hebron was rated #6 in Top Towns in Connecticut with population between 6,500 and 10,000, according to "Connecticut Magazine".
The villages of Hebron Center, Gilead, and Amston are located within Hebron. Amston has its own ZIP Code and post office. The remnants of two long-abandoned communities, Grayville and Gay City, are also located in Hebron. The site of the latter is now Gay City State Park.
History.
The town of Hebron was settled in 1704, and incorporated on May 26, 1708, within Hartford County from Non-County Area 1 of the Connecticut Colony. The diamond shape of the town seal has its origins in the diamond figure brand, formula_0, required on all horses kept in Hebron by a May 1710 act of the Colonial Assembly.
Hebron became a town in Windham County upon its formation on May 12, 1726. It became a town in Tolland County upon its formation from part of Windham County on October 13, 1785. On October 13, 1803, the town of Marlborough, Hartford County was created from parts of the towns of Colchester (New London County), Glastonbury (Hartford County), and Hebron.
Geography.
According to the United States Census Bureau, the town has a total area of , of which is land and (0.97%) is water.
Demographics.
<templatestyles src="US Census population/styles.css"/>
As of the census of 2000, there were 8,610 people, 2,993 households, and 2,466 families residing in the town. The population density was . There were 3,110 housing units at an average density of . The racial makeup of the town was 97.69% White, 0.58% African American, 0.13% Native American, 0.56% Asian, 0.03% Pacific Islander, 0.20% from other races, and 0.81% from two or more races. Hispanic or Latino of any race were 1.07% of the population.
There were 2,993 households, out of which 45.1% had children under the age of 18 living with them, 74.4% were married couples living together, 5.9% had a female householder with no husband present, and 17.6% were non-families. 13.4% of all households were made up of individuals, and 4.2% had someone living alone who was 65 years of age or older. The average household size was 2.88 and the average family size was 3.19.
In the town, the population was spread out, with 30.0% under the age of 18, 4.6% from 18 to 24, 33.7% from 25 to 44, 25.7% from 45 to 64, and 6.0% who were 65 years of age or older. The median age was 37 years. For every 100 females, there were 100.1 males. For every 100 females age 18 and over, there were 97.2 males.
The median income for a household in the town was $115,980. Males had a median income of $62,109 versus $52,237 for females. The per capita income for the town was $39,775. About 0.3% of families and 1.0% of the population were below the poverty line, including 0.2% of those under age 18 and 3.2% of those age 65 or over.
Arts and culture.
Annual cultural events.
A major commercial attraction is the annual Hebron Harvest Fair, which features bingo, fried foods, rides, arts & crafts, pig races, tractor pulls, and prizes for the best pies and the biggest pumpkins. The event occurs every September and draws not only the people of Hebron but also many tourists visiting the town.
Parks and recreation.
Hebron's most popular year-round recreation area is Gay City State Park, Connecticut's fourth-largest state park. There is a 5-mile perimeter trail and an extensive network of cross trails that run throughout the park. All are suitable for woodland hiking and trail biking. Gay City also has a pond where swimming is available in season, as well as fishing, picnic areas, cross-country skiing, and snowshoeing.
In addition, Hebron has several town parks and ballfields, and the Town Recreation Department has organized sports and other activities throughout the year. The rails-to-trails Airline Trail State Park goes through Hebron, with several access points for walkers, bikers and horseback riders.
Education.
The town hosts RHAM High School, the regional middle and high school serving Hebron and the two adjacent towns of Marlborough and Andover.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\stackrel{\\bigwedge}{\\vee}"
}
] | https://en.wikipedia.org/wiki?curid=108848 |
10886610 | Trivialism | Logical theory
Trivialism is the logical theory that all statements (also known as propositions) are true and that all contradictions of the form "p and not p" (e.g. the ball is red and not red) are true. In accordance with this, a trivialist is a person who believes everything is true.
In classical logic, trivialism is in direct violation of Aristotle's law of noncontradiction. In philosophy, trivialism is considered by some to be the complete opposite of skepticism. Paraconsistent logics may use "the law of non-triviality" to abstain from trivialism in logical practices that involve true contradictions.
Theoretical arguments and anecdotes have been offered for trivialism to contrast it with theories such as modal realism, dialetheism and paraconsistent logics.
Overview.
Etymology.
"Trivialism", as a term, is derived from the Latin word "trivialis," meaning commonplace, in turn derived from the "trivium", the three introductory educational topics (grammar, logic, and rhetoric) expected to be learned by all freemen. In logic, from this meaning, a "trivial" theory is something regarded as defective in the face of a complex phenomenon that needs to be completely represented. Thus, literally, the trivialist theory is something expressed in the simplest possible way.
Theory.
In symbolic logic, trivialism may be expressed as the following:
<templatestyles src="Block indent/styles.css"/>formula_0
The above would be read as "given any proposition, it is a true proposition" through universal quantification (∀).
A claim of trivialism may always apply its fundamental truth, otherwise known as a truth predicate:
<templatestyles src="Block indent/styles.css"/>formula_1
The above would be read as "a proposition if and only if a true proposition", meaning that all propositions are believed to be inherently proven as true. Without consistent use of this concept, a claim of advocating trivialism may not be seen as genuine and complete trivialism; to claim that a proposition is true but to deny that it is probably true may be considered inconsistent with the assumed theory.
Taxonomy of trivialisms.
Luis Estrada-González in "Models of Possibilism and Trivialism" lists four types of trivialism through the concept of possible worlds, with a "world" being a possibility and "the actual world" being reality. It is theorized a trivialist simply designates a value to all propositions in equivalence to seeing all propositions and their negations as true. This taxonomy is used to demonstrate the different strengths and plausibility of trivialism in this context:
Arguments against trivialism.
The consensus among the majority of philosophers is descriptively a denial of trivialism, termed non-trivialism or anti-trivialism. This is because trivialism is held to be unable to produce a sound argument (by way of the principle of explosion) and to be an absurdity (reductio ad absurdum).
Aristotle.
Aristotle's law of noncontradiction and other arguments are considered to be against trivialism. Luis Estrada-González in "Models of Possibilism and Trivialism" has interpreted Aristotle's "Metaphysics Book IV" as such: "A family of arguments between 1008a26 and 1007b12 of the form 'If trivialism is right, then X is the case, but if X is the case then all things are one. But it is impossible that all things are one, so trivialism is impossible.' ... these Aristotelian considerations are the seeds of virtually all subsequent suspicions against trivialism: Trivialism has to be rejected because it identifies what should not be identified, and is undesirable from a logical point of view because it identifies what is not identical, namely, truth and falsehood."
Priest.
Graham Priest considers trivialism untenable: "a substantial case can be made for dialetheism; belief in [trivialism], though, would appear to be grounds for certifiable insanity".
He formulated the "law of non-triviality" as a replacement for the law of non-contradiction in paraconsistent logic and dialetheism.
Arguments for trivialism.
There are theoretical arguments for trivialism argued from the position of a devil's advocate:
Argument from possibilism.
Paul Kabay has argued for trivialism in "On the Plenitude of Truth" from the following:
<templatestyles src="Template:Blockquote/styles.css" />
Above, possibilism (modal realism; related to possible worlds) is the oft-debated theory that every proposition is possible. With this assumed to be true, trivialism can be assumed to be true as well according to Kabay.
Paradoxes.
The liar's paradox, Curry's paradox, and the principle of explosion all can be asserted as valid and not required to be resolved and used to defend trivialism.
Philosophical implications.
Comparison to skepticism.
In "On the Plenitude of Truth", Paul Kabay compares trivialism to schools of philosophical skepticism—such as Pyrrhonism—that seek to attain a form of ataraxia, or state of imperturbability, and purports that the figurative trivialist inherently attains this state. This is claimed to be justified by the figurative trivialist seeing every state of affairs as true, even in a state of anxiety. Once universally accepted as true, the trivialist is free from any further anxieties regarding whether any state of affairs is true.
Kabay compares the Pyrrhonian skeptic to the figurative trivialist and claims that as the skeptic reportedly attains a state of imperturbability through a suspension of belief, the trivialist may attain such a state through an abundance of belief.
In this case—and according to independent claims by Graham Priest—trivialism is considered the complete opposite of skepticism. However, insofar as the trivialist affirms all states of affairs as universally true, the Pyrrhonist neither affirms nor denies the truth (or falsity) of such affairs.
Impossibility of action.
It is asserted by both Priest and Kabay that it is impossible for a trivialist to truly choose and thus act. Priest argues this by the following in "Doubt Truth to Be a Liar": "One cannot intend to act in such a way as to bring about some state of affairs, "s", if one believes "s" already to hold. Conversely, if one acts with the purpose of bringing "s" about, one cannot believe that "s" already obtains." Because they suspend determination upon striking equipollence between claims, Pyrrhonists have also remained subject to apraxia charges.
Advocates.
Paul Kabay, an Australian philosopher, in his book "A Defense of Trivialism" has argued that various philosophers in history have held views resembling trivialism, although he stops short of calling them trivialists. He mentions various pre-Socratic Greek philosophers as philosophers holding views resembling trivialism. He mentions that Aristotle in his book "Metaphysics" appears to suggest that Heraclitus and Anaxagoras advocated trivialism. He quotes Anaxagoras as saying that all things are one. Kabay also suggests Heraclitus' ideas are similar to trivialism because Heraclitus believed in a union of opposites, shown in such quotes as "the way up and down is the same". Kabay also mentions a fifteenth century Roman Catholic cardinal, Nicholas of Cusa, stating that what Cusa wrote in "De Docta Ignorantia" is interpreted as stating that God contained every fact, which Kabay argues would result in trivialism, but Kabay admits that mainstream Cusa scholars would not agree with interpreting Cusa as a trivialist. Kabay also mentions Spinoza as a philosopher whose views resemble trivialism. Kabay argues Spinoza was a trivialist because Spinoza believed everything was made of one substance which had infinite attributes. Kabay also mentions Hegel as a philosopher whose views resemble trivialism, quoting Hegel as stating in "The Science of Logic" "everything is inherently contradictory."
Azzouni.
Jody Azzouni is a purported advocate of trivialism in his article "The Strengthened Liar" by claiming that natural language is trivial and inconsistent through the existence of the liar paradox ("This sentence is false"), and claiming that natural language has developed without central direction. Azzouni implies that every sentence in any natural language is true. "According to Azzouni, natural language is trivial, that is to say, every sentence in natural language is true...And, of course, trivialism follows straightforwardly from the triviality of natural language: after all, 'trivialism is true' is a sentence in natural language."
Anaxagoras.
The Greek philosopher Anaxagoras is suggested as a possible trivialist by Graham Priest in his 2005 book "Doubt Truth to Be a Liar". Priest writes, "He held that, at least at one time, everything was all mixed up so that no predicate applied to any one thing more than a contrary predicate."
Anti-trivialism.
Luis Estrada-González in "Models of Possibilism and Trivialism" lists eight types of anti-trivialism (or non-trivialism) through the use of possible worlds:
(AT0) Actualist minimal anti-trivialism: In the actual world, some propositions do not have a value of true or false.
(AT1) Actualist absolute anti-trivialism: In the actual world, all propositions do not have a value of true or false.
(AT2) Minimal anti-trivialism: In some worlds, some propositions do not have a value of true or false.
(AT3) Pointed anti-trivialism (or minimal logical nihilism): In some worlds, every proposition does not have a value of true or false.
(AT4) Distributed anti-trivialism: In every world, some propositions do not have a value of true or false.
(AT5) Strong anti-trivialism: Some propositions do not have a value of true or false in every world.
(AT6) Super anti-trivialism (or moderate logical nihilism): All propositions do not have a value of true or false at some world.
(AT7) Absolute anti-trivialism (or maximal logical nihilism): All propositions do not have a value of true or false in every world.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\forall pTp "
},
{
"math_id": 1,
"text": "p \\leftrightarrow Tp"
}
] | https://en.wikipedia.org/wiki?curid=10886610 |
1088787 | Identity component | Concept in group theory
In mathematics, specifically group theory, the identity component of a group "G" (also known as its unity component) refers to several closely related notions of the largest connected subgroup of "G" containing the identity element.
In point set topology, the identity component of a topological group "G" is the connected component "G"0 of "G" that contains the identity element of the group. The identity path component of a topological group "G" is the path component of "G" that contains the identity element of the group.
In algebraic geometry, the identity component of an algebraic group "G" over a field "k" is the identity component of the underlying topological space. The identity component of a group scheme "G" over a base scheme "S" is, roughly speaking, the group scheme "G"0 whose fiber over the point "s" of "S" is the connected component "(Gs)0" of the fiber "Gs", an algebraic group.
Properties.
The identity component "G"0 of a topological or algebraic group "G" is a closed normal subgroup of "G". It is closed since components are always closed. It is a subgroup since multiplication and inversion in a topological or algebraic group are continuous maps by definition. Moreover, for any continuous automorphism "a" of "G" we have
"a"("G"0) = "G"0.
Thus, "G"0 is a characteristic subgroup of "G", so it is normal.
The identity component "G"0 of a topological group "G" need not be open in "G". In fact, we may have "G"0 = {"e"}, in which case "G" is totally disconnected. However, the identity component of a locally path-connected space (for instance a Lie group) is always open, since it contains a path-connected neighbourhood of {"e"}; and therefore is a clopen set.
The identity path component of a topological group may in general be smaller than the identity component (since path connectedness is a stronger condition than connectedness), but these agree if "G" is locally path-connected.
Component group.
The quotient group "G"/"G"0 is called the group of components or component group of "G". Its elements are just the connected components of "G". The component group "G"/"G"0 is a discrete group if and only if "G"0 is open. If "G" is an algebraic group of finite type, such as an affine algebraic group, then "G"/"G"0 is actually a finite group.
One may similarly define the path component group as the group of path components (quotient of "G" by the identity path component), and in general the component group is a quotient of the path component group, but if "G" is locally path connected these groups agree. The path component group can also be characterized as the zeroth homotopy group, formula_0
Examples.
An algebraic group "G" over a topological field "K" admits two natural topologies, the Zariski topology and the topology inherited from "K". The identity component of "G" often changes depending on the topology. For instance, the general linear group GL"n"(R) is connected as an algebraic group but has two path components as a Lie group, the matrices of positive determinant and the matrices of negative determinant. Any connected algebraic group over a non-Archimedean local field "K" is totally disconnected in the "K"-topology and thus has trivial identity component in that topology.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi_0(G,e)."
}
] | https://en.wikipedia.org/wiki?curid=1088787 |
10890 | Fundamental interaction | Most basic type of physical force
In physics, the fundamental interactions or fundamental forces are the interactions that do not appear to be reducible to more basic interactions. There are four fundamental interactions known to exist:
The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at minuscule, subatomic distances and govern nuclear interactions inside atoms.
Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. It is possible, however, that the fifth force is a combination of the prior four forces in the form of a scalar field; such as the Higgs field.
Each of the known fundamental interactions can be described mathematically as a "field". The gravitational force is attributed to the curvature of spacetime, described by Einstein's general theory of relativity. The other three are discrete quantum fields, and their interactions are mediated by elementary particles described by the Standard Model of particle physics.
Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons, such as protons and neutrons. As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei. The weak interaction is carried by particles called W and Z bosons, and also acts on the nucleus of atoms, mediating radioactive decay. The electromagnetic force, carried by the photon, creates electric and magnetic fields, which are responsible for the attraction between orbital electrons and atomic nuclei which holds atoms together, as well as chemical bonding and electromagnetic waves, including visible light, and forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and is responsible for holding together the large scale structures in the universe, such as planets, stars, and galaxies.
Many theoretical physicists believe these fundamental forces to be related and to become unified into a single force at very high energies on a minuscule scale, the Planck scale, but particle accelerators cannot produce the enormous energies required to experimentally probe this. Devising a common theoretical framework that would explain the relation between the forces in a single theory is perhaps the greatest goal of today's theoretical physicists. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg, for which they received the 1979 Nobel Prize in physics. Some physicists seek to unite the electroweak and strong fields within what is called a Grand Unified Theory (GUT). An even bigger challenge is to find a way to quantize the gravitational field, resulting in a theory of quantum gravity (QG) which would unite gravity in a common theoretical framework with the other three forces. Some theories, notably string theory, seek both QG and GUT within one framework, unifying all four fundamental interactions along with mass generation within a theory of everything (ToE).
History.
Classical theory.
In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects while their states and relations unfold at a constant pace everywhere, thus absolute space and time. Observing that all objects bearing mass approach one another at a constant rate, but collide with an impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. As conventionally interpreted, Newton's theory of motion modelled a "central force" without a communicating medium. Thus Newton's theory violated the tradition, going back to Descartes, that there should be no action at a distance. Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a "field" filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one.
In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum. If his electromagnetic field theory held true in all inertial frames of reference, this would contradict Newton's theory of motion, which relied on Galilean relativity. If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether—presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.)
Standard Model.
The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles, whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity, altogether relativistic quantum field theory (QFT). Force particles, called gauge bosons—"force carriers" or "messenger particles" of underlying fields—interact with matter particles, called fermions. Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. Atoms interact, form molecules, and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED).
The force carriers of the weak interaction are the massive W and Z bosons. Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang, the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism, the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon, traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory.
Beyond the Standard Model, some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle, and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by the modelling behaviour of its hypothetical force carrier, the graviton and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory, although to model matter particles, it added SUSY to force particles—and so, strictly speaking, became superstring theory. Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory. Theories beyond the Standard Model remain highly speculative, lacking great experimental support.
Overview of the fundamental interactions.
In the conceptual model of fundamental interactions, matter consists of fermions, which carry properties called charges and spin ±1⁄2 (intrinsic angular momentum ±ħ⁄2, where ħ is the reduced Planck constant). They attract or repel each other by exchanging bosons.
The interaction of any pair of fermions in perturbation theory can then be modelled thus:
Two fermions go in → "interaction" by boson exchange → two changed fermions go out.
The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from +1⁄2 to −1⁄2 (or vice versa) during such an exchange (in units of the reduced Planck constant). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces. In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force".
According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of:
Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research.
The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges. The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples.
Interactions.
Gravity.
"Gravitation" is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate.
Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out.
Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters on the large-scale structure of the universe.
The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes and, being only attractive, it retards the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high.
Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution, Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime.
Merging general relativity and quantum mechanics (or quantum field theory) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton.
Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity. These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible.
Proposed extra dimensions could explain why the gravitational force is so weak.
Electroweak interaction.
Electromagnetism and weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above unification energy, on the order of 100 GeV, they would merge into a single electroweak force.
The electroweak theory is very important for modern cosmology, particularly on how the universe evolved. This is because shortly after the Big Bang, when the temperature was still above approximately 10¹⁵ K, the electromagnetic force and the weak force were still merged as a combined electroweak force.
For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979.
Electromagnetism.
Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other.
Electromagnetism has an infinite range like gravity, but is vastly stronger than it, and therefore describes several macroscopic phenomena of everyday experience such as friction, rainbows, lightning, and all human-made devices using electric current, such as television, lasers, and computers. Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements, including all chemical bonding.
In a four kilogram (~1 gallon) jug of water, there is
formula_0
of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of
formula_1
This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity but cancel out so that for large bodies gravity dominates.
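The order of magnitude of these figures can be reproduced with a few lines of arithmetic. A minimal Python sketch (using rounded physical constants; the exact numbers in the formulas above may differ slightly):

N_A = 6.022e23      # Avogadro constant, 1/mol
e = 1.602e-19       # elementary charge, C
k = 8.988e9         # Coulomb constant, N*m^2/C^2

moles_of_water = 4000.0 / 18.0          # 4 kg of H2O at ~18 g/mol
electrons = moles_of_water * 10 * N_A   # 10 electrons per water molecule
Q = electrons * e                       # total electron charge, about 2.1e8 C

F = k * Q**2 / 1.0**2                   # repulsion between two such charges 1 m apart
print(Q, F)                             # roughly 2e8 C and 4e26 N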
Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus, is the classical theory of electromagnetism, suitable for most technological purposes.
The constant speed of light in vacuum (customarily denoted with a lowercase letter "c") can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein's 1905 theory of special relativity, however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space.
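The derivation referred to here identifies the wave speed in Maxwell's equations as 1/√(ε0μ0). A minimal Python check (with rounded SI values for the vacuum permeability and permittivity):

import math

mu_0 = 4 * math.pi * 1e-7         # vacuum permeability, H/m (classical defined value)
epsilon_0 = 8.8541878128e-12      # vacuum permittivity, F/m
print(1 / math.sqrt(mu_0 * epsilon_0))   # about 2.998e8 m/s, the speed of light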
In another work that departed from classical electro-magnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, completed this theory, which is now called quantum electrodynamics, the revised theory of electromagnetism. Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory, that is necessary for everyday electronic devices such as transistors to function.
Weak interaction.
The "weak interaction" or "weak nuclear force" is responsible for some nuclear phenomena such as beta decay. Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction — this discovery was the first step toward the unified theory known as the Standard Model. In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction that does not conserve parity; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT.
Strong interaction.
The "strong interaction", or "strong nuclear force", is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10−15 metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows.
After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion, a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a volume whose diameter is about 10⁻¹⁵ m, much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV.
The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed. Most notably:
While each of these approaches offered insights, no approach led directly to a fundamental theory.
Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1961. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu, who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined.
In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross, Frank Wilczek, and David Politzer discovered that this theory had the property of asymptotic freedom, allowing them to make contact with experimental evidence. They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales. The discovery of asymptotic freedom led most physicists to accept QCD since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined: the strong force increases indefinitely with distance, trapping quarks inside the hadrons.
Assuming that quarks are confined, Mikhail Shifman, Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions.
QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances.
Higgs interaction.
Conventionally, the Higgs interaction is not counted among the four fundamental forces.
Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism, Yukawa terms remain of the form
formula_2,
with Yukawa coupling formula_3, particle mass formula_4 (in eV), and Higgs vacuum expectation value . Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the form
formula_5,
with Higgs mass . Because the reduced Compton wavelength of the Higgs boson is so small (, comparable to the W and Z bosons), this potential has an effective range of a few attometers. Between two electrons, it begins roughly 10¹¹ times weaker than the weak interaction, and grows exponentially weaker at non-zero distances.
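As a rough, order-of-magnitude illustration of why this range is so short, the following Python sketch computes the reduced Compton wavelength of the Higgs boson; the Higgs mass of about 125 GeV and the conversion factor ħc ≈ 197.327 MeV·fm are assumed values, not figures taken from this article.
# Rough estimate of the range of the Higgs-mediated interaction
# (its reduced Compton wavelength), assuming m_H ~ 125 GeV.
hbar_c_mev_fm = 197.327        # hbar*c in MeV*fm (assumed conversion factor)
m_higgs_mev = 125.0e3          # assumed Higgs mass, ~125 GeV expressed in MeV
reduced_compton_fm = hbar_c_mev_fm / m_higgs_mev   # in femtometres
print(f"{reduced_compton_fm:.2e} fm = {reduced_compton_fm * 1e3:.2f} am")
# ~1.6e-3 fm, i.e. a few attometres, consistent with the stated effective range.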
Beyond the Standard Model.
Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification.
Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces (this was, for example, verified at the Large Electron–Positron Collider in 1991 for supersymmetric theories).
Theories of everything, which integrate GUTs with a quantum gravity theory, face a greater barrier, because no quantum gravity theories, which include string theory, loop quantum gravity, and twistor theory, have secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that time-space itself may have a quantum aspect to it.
Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles acquire their masses only through supersymmetry breaking effects and these particles, known as moduli, can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy), giving rise to a need to explain a nonzero cosmological constant, and possibly to other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violations, dark matter, and dark flow.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 4000 \\ \\mbox{g}\\,\\rm{H}_2 \\rm{O} \\cdot \\frac{1 \\ \\mbox{mol}\\,\\rm{H}_2 \\rm{O}}{18 \\ \\mbox{g}\\,H_2 O} \\cdot \\frac{10 \\ \\mbox{mol}\\,e^{-}}{1 \\ \\mbox{mol}\\,H_2 O} \\cdot \\frac{96,000 \\ \\mbox{C}\\,}{1 \\ \\mbox{mol}\\,e^{-}} = 2.1 \\times 10^{8} C \\ \\, \\ "
},
{
"math_id": 1,
"text": " {1 \\over 4\\pi\\varepsilon_0}\\frac{(2.1 \\times 10^{8} \\mathrm{C})^2}{(1 m)^2} = 4.1 \\times 10^{26} \\mathrm{N}."
},
{
"math_id": 2,
"text": "\\frac{\\lambda_i}{\\sqrt 2} \\bar{\\psi} \\phi' \\psi = \\frac{m_i}{\\nu} \\bar{\\psi} \\phi' \\psi"
},
{
"math_id": 3,
"text": "\\lambda_i"
},
{
"math_id": 4,
"text": "m_i"
},
{
"math_id": 5,
"text": "V(r) = - \\frac{m_i m_j}{m_{\\rm H}^2} \\frac{1}{4\\pi r} e^{-m_{\\rm H}\\, c\\, r/\\hbar}"
}
] | https://en.wikipedia.org/wiki?curid=10890 |
1089079 | Miller index | Notation system for crystal lattice planes
Miller indices form a notation system in crystallography for lattice planes in crystal (Bravais) lattices.
In particular, a family of lattice planes of a given (direct) Bravais lattice is determined by three integers "h", "k", and "ℓ", the "Miller indices". They are written ("hkℓ"), and denote the family of (parallel) lattice planes (of the given Bravais lattice) orthogonal to formula_0, where formula_1 are the basis or primitive translation vectors of the reciprocal lattice for the given Bravais lattice. (Note that the plane is not always orthogonal to the linear combination of direct or original lattice vectors formula_2 because the direct lattice vectors need not be mutually orthogonal.) This is based on the fact that a reciprocal lattice vector formula_3 (the vector indicating a reciprocal lattice point from the reciprocal lattice origin) is the wavevector of a plane wave in the Fourier series of a spatial function (e.g., electronic density function) whose periodicity follows the original Bravais lattice, so wavefronts of the plane wave are coincident with parallel lattice planes of the original lattice. Since a measured scattering vector in X-ray crystallography, formula_4 with formula_5 as the outgoing (scattered from a crystal lattice) X-ray wavevector and formula_6 as the incoming (toward the crystal lattice) X-ray wavevector, is equal to a reciprocal lattice vector formula_3 as stated by the Laue equations, the measured scattered X-ray peak at each measured scattering vector formula_7 is marked by "Miller indices". By convention, negative integers are written with a bar, as in 3̄ for −3. The integers are usually written in lowest terms, i.e. their greatest common divisor should be 1. Miller indices are also used to designate reflections in X-ray crystallography. In this case the integers are not necessarily in lowest terms, and can be thought of as corresponding to planes spaced such that the reflections from adjacent planes would have a phase difference of exactly one wavelength (2π), regardless of whether there are atoms on all these planes or not.
There are also several related notations:
In the context of crystal "directions" (not planes), the corresponding notations are:
Note, for Laue–Bragg interferences
Miller indices were introduced in 1839 by the British mineralogist William Hallowes Miller, although an almost identical system ("Weiss parameters") had already been used by German mineralogist Christian Samuel Weiss since 1817. The method was also historically known as the Millerian system, and the indices as Millerian, although this is now rare.
The Miller indices are defined with respect to any choice of unit cell and not only with respect to primitive basis vectors, as is sometimes stated.
Definition.
There are two equivalent ways to define the meaning of the Miller indices: via a point in the reciprocal lattice, or as the inverse intercepts along the lattice vectors. Both definitions are given below. In either case, one needs to choose the three lattice vectors a1, a2, and a3 that define the unit cell (note that the conventional unit cell may be larger than the primitive cell of the Bravais lattice, as the examples below illustrate). Given these, the three primitive reciprocal lattice vectors are also determined (denoted b1, b2, and b3).
Then, given the three Miller indices formula_14 denotes planes orthogonal to the reciprocal lattice vector:
formula_15
That is, ("hkℓ") simply indicates a normal to the planes in the basis of the primitive reciprocal lattice vectors. Because the coordinates are integers, this normal is itself always a reciprocal lattice vector. The requirement of lowest terms means that it is the "shortest" reciprocal lattice vector in the given direction.
Equivalently, ("hkℓ") denotes a plane that intercepts the three points a1/"h", a2/"k", and a3/"ℓ", or some multiple thereof. That is, the Miller indices are proportional to the "inverses" of the intercepts of the plane, in the basis of the lattice vectors. If one of the indices is zero, it means that the planes do not intersect that axis (the intercept is "at infinity").
Considering only ("hkℓ") planes intersecting one or more lattice points (the "lattice planes"), the perpendicular distance "d" between adjacent lattice planes is related to the (shortest) reciprocal lattice vector orthogonal to the planes by the formula: formula_16.
The related notation [hkℓ] denotes the "direction":
formula_17
That is, it uses the direct lattice basis instead of the reciprocal lattice. Note that [hkℓ] is "not" generally normal to the ("hkℓ") planes, except in a cubic lattice as described below.
Case of cubic structures.
For the special case of simple cubic crystals, the lattice vectors are orthogonal and of equal length (usually denoted "a"), as are those of the reciprocal lattice. Thus, in this common case, the Miller indices ("hkℓ") and ["hkℓ"] both simply denote normals/directions in Cartesian coordinates.
For cubic crystals with lattice constant "a", the spacing "d" between adjacent ("hkℓ") lattice planes is (from above)
formula_18.
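This relation is easy to evaluate numerically; the following short Python sketch (the function name and the example lattice constant, roughly that of nickel, are illustrative assumptions) computes the spacing of a few low-index planes:
from math import sqrt
def cubic_d_spacing(a, h, k, l):
    """Interplanar spacing d = a / sqrt(h^2 + k^2 + l^2) for a cubic lattice."""
    return a / sqrt(h**2 + k**2 + l**2)
a = 0.3524  # assumed lattice constant in nm
for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(hkl, round(cubic_d_spacing(a, *hkl), 4), "nm")
# (1,0,0) -> a, (1,1,0) -> a/sqrt(2), (1,1,1) -> a/sqrt(3)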
Because of the symmetry of cubic crystals, it is possible to change the place and sign of the integers and have equivalent directions and planes:
For face-centered cubic and body-centered cubic lattices, the primitive lattice vectors are not orthogonal. However, in these cases the Miller indices are conventionally defined relative to the lattice vectors of the cubic supercell and hence are again simply the Cartesian directions.
Case of hexagonal and rhombohedral structures.
With hexagonal and rhombohedral lattice systems, it is possible to use the Bravais–Miller system, which uses four indices ("h" "k" "i" "ℓ") that obey the constraint
"h" + "k" + "i" = 0.
Here "h", "k" and "ℓ" are identical to the corresponding Miller indices, and "i" is a redundant index.
This four-index scheme for labeling planes in a hexagonal lattice makes permutation symmetries apparent. For example, the similarity between (110) ≡ (112̄0) and (12̄0) ≡ (12̄10) is more obvious when the redundant index is shown.
In the figure at right, the (001) plane has a 3-fold symmetry: it remains unchanged by a rotation of 1/3 (2π/3 rad, 120°). The [100], [010] and the [1̄1̄0] directions are really similar. If "S" is the intercept of the plane with the [1̄1̄0] axis, then
"i" = 1/"S".
There are also "ad hoc" schemes (e.g. in the transmission electron microscopy literature) for indexing hexagonal "lattice vectors" (rather than reciprocal lattice vectors or planes) with four indices. However they don't operate by similarly adding a redundant index to the regular three-index set.
For example, the reciprocal lattice vector ("hkℓ") as suggested above can be written in terms of reciprocal lattice vectors as formula_19. For hexagonal crystals this may be expressed in terms of direct-lattice basis-vectors a1, a2 and a3 as
formula_20
Hence zone indices of the direction perpendicular to plane ("hkℓ") are, in suitably normalized triplet form, simply formula_21. When "four indices" are used for the zone normal to plane ("hkℓ"), however, the literature often uses formula_22 instead. Thus as you can see, four-index zone indices in square or angle brackets sometimes mix a single direct-lattice index on the right with reciprocal-lattice indices (normally in round or curly brackets) on the left.
And, note that for hexagonal interplanar distances, they take the form
formula_23
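A similar sketch for the hexagonal case, which also appends the redundant Bravais–Miller index i = −(h + k); the numerical values of a and c are assumptions chosen only for illustration:
from math import sqrt
def hexagonal_d_spacing(a, c, h, k, l):
    """d_{hkl} for a hexagonal lattice, following the formula above."""
    return a / sqrt(4.0 / 3.0 * (h**2 + k**2 + h * k) + (a / c) ** 2 * l**2)
def bravais_miller(h, k, l):
    """Convert three-index (hkl) to four-index (h k i l) with i = -(h + k)."""
    return (h, k, -(h + k), l)
a, c = 0.2951, 0.4683  # assumed values in nm
print(bravais_miller(1, 1, 0))                       # -> (1, 1, -2, 0)
print(round(hexagonal_d_spacing(a, c, 1, 0, 1), 4))  # spacing of the (101) planes in nm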
Crystallographic planes and directions.
Crystallographic directions are lines linking nodes (atoms, ions or molecules) of a crystal. Similarly, crystallographic planes are "planes" linking nodes. Some directions and planes have a higher density of nodes; these dense planes have an influence on the behavior of the crystal:
For all these reasons, it is important to determine the planes and thus to have a notation system.
Integer versus irrational Miller indices: Lattice planes and quasicrystals.
Ordinarily, Miller indices are always integers by definition, and this constraint is physically significant. To understand this, suppose that we allow a plane ("abc") where the Miller "indices" "a", "b" and "c" (defined as above) are not necessarily integers.
If "a", "b" and "c" have rational ratios, then the same family of planes can be written in terms of integer indices ("hkℓ") by scaling "a", "b" and "c" appropriately: divide by the largest of the three numbers, and then multiply by the least common denominator. Thus, integer Miller indices implicitly include indices with all rational ratios. The reason why planes where the components (in the reciprocal-lattice basis) have rational ratios are of special interest is that these are the lattice planes: they are the only planes whose intersections with the crystal are 2d-periodic.
For a plane (abc) where "a", "b" and "c" have irrational ratios, on the other hand, the intersection of the plane with the crystal is "not" periodic. It forms an aperiodic pattern known as a quasicrystal. This construction corresponds precisely to the standard "cut-and-project" method of defining a quasicrystal, using a plane with irrational-ratio Miller indices. (Although many quasicrystals, such as the Penrose tiling, are formed by "cuts" of periodic lattices in more than three dimensions, involving the intersection of more than one such hyperplane.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{g}_{hk\\ell} = h\\mathbf{b}_1 + k\\mathbf{b}_2 + \\ell\\mathbf{b}_3 "
},
{
"math_id": 1,
"text": "\\mathbf{b}_i"
},
{
"math_id": 2,
"text": "h\\mathbf{a}_1 + k\\mathbf{a}_2 + \\ell\\mathbf{a}_3 "
},
{
"math_id": 3,
"text": "\\mathbf{g}"
},
{
"math_id": 4,
"text": "\\Delta\\mathbf{k}= \\mathbf{k}_{\\mathrm{out}} - \\mathbf{k}_{\\mathrm{in}}"
},
{
"math_id": 5,
"text": "\\mathbf{k}_{\\mathrm{out}}"
},
{
"math_id": 6,
"text": "\\mathbf{k}_{\\mathrm{in}}"
},
{
"math_id": 7,
"text": "\\Delta\\mathbf{k}"
},
{
"math_id": 8,
"text": " \\{hk\\ell\\} "
},
{
"math_id": 9,
"text": " (hk\\ell) "
},
{
"math_id": 10,
"text": " [hk\\ell],"
},
{
"math_id": 11,
"text": " \\langle hk\\ell\\rangle "
},
{
"math_id": 12,
"text": " [hk\\ell] "
},
{
"math_id": 13,
"text": " hk\\ell "
},
{
"math_id": 14,
"text": " h, k, \\ell, (hk\\ell) "
},
{
"math_id": 15,
"text": " \\mathbf{g}_{hk\\ell} = h \\mathbf{b}_1 + k \\mathbf{b}_2 + \\ell \\mathbf{b}_3 ."
},
{
"math_id": 16,
"text": "d = 2\\pi / |\\mathbf{g}_{h k \\ell}|"
},
{
"math_id": 17,
"text": "h \\mathbf{a}_1 + k \\mathbf{a}_2 + \\ell \\mathbf{a}_3 ."
},
{
"math_id": 18,
"text": "d_{hk \\ell}= \\frac {a} { \\sqrt{h^2 + k^2 + \\ell ^2} }"
},
{
"math_id": 19,
"text": "h\\mathbf{b}_1 + k\\mathbf{b}_2 + \\ell\\mathbf{b}_3 "
},
{
"math_id": 20,
"text": "h\\mathbf{b}_1 + k\\mathbf{b}_2 + \\ell \\mathbf{b}_3 = \\frac{2}{3 a^2}(2 h + k)\\mathbf{a}_1 + \\frac{2}{3 a^2}(h+2k)\\mathbf{a}_2 + \\frac{1}{c^2} (\\ell) \\mathbf{a}_3."
},
{
"math_id": 21,
"text": "[2h+k,h+2k,\\ell(3/2)(a/c)^2]"
},
{
"math_id": 22,
"text": "[h,k,-h-k,\\ell(3/2)(a/c)^2]"
},
{
"math_id": 23,
"text": "\nd_{hk\\ell} = \\frac{a}{\\sqrt{\\tfrac{4}{3}\\left(h^2+k^2+hk \\right)+\\tfrac{a^2}{c^2}\\ell^2}}\n"
}
] | https://en.wikipedia.org/wiki?curid=1089079 |
1089161 | Euler–Tricomi equation | In mathematics, the Euler–Tricomi equation is a linear partial differential equation useful in the study of transonic flow. It is named after mathematicians Leonhard Euler and Francesco Giacomo Tricomi.
formula_0
It is elliptic in the half plane "x" > 0, parabolic at "x" = 0 and hyperbolic in the half plane "x" < 0.
Its characteristics are
formula_1
which have the integral
formula_2
where "C" is a constant of integration. The characteristics thus comprise two families of semicubical parabolas, with cusps on the line "x" = 0, the curves lying on the right hand side of the "y"-axis.
Particular solutions.
A general expression for particular solutions to the Euler–Tricomi equations is:
formula_3
where
formula_4
formula_5
formula_6
formula_7
formula_8
These can be linearly combined to form further solutions such as:
for "k = 0":
formula_9
for "k = 1":
formula_10
etc.
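These particular solutions can be verified symbolically; a small SymPy sketch checking the "k" = 0 and "k" = 1 combinations against the equation:
import sympy as sp
x, y, A, B, C, D = sp.symbols("x y A B C D")
def residual(u):
    # u_xx + x*u_yy should vanish identically for a solution
    return sp.simplify(sp.diff(u, x, 2) + x * sp.diff(u, y, 2))
u0 = A + B * x + C * y + D * x * y
u1 = (A * (y**2 / 2 - x**3 / 6) + B * (x * y**2 / 2 - x**4 / 12)
      + C * (y**3 / 6 - x**3 * y / 6) + D * (x * y**3 / 6 - x**4 * y / 12))
print(residual(u0), residual(u1))  # both print 0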
The Euler–Tricomi equation is a limiting form of Chaplygin's equation. | [
{
"math_id": 0,
"text": "\nu_{xx}+xu_{yy}=0. \\,\n"
},
{
"math_id": 1,
"text": " x\\,dx^2+dy^2=0, \\, "
},
{
"math_id": 2,
"text": " y\\pm\\frac{2}{3}x^{3/2}=C,"
},
{
"math_id": 3,
"text": " u_{k,p,q}=\\sum_{i=0}^k(-1)^i\\frac{x^{m_i}y^{n_i}}{c_i} \\, "
},
{
"math_id": 4,
"text": " k \\in \\mathbb{N} "
},
{
"math_id": 5,
"text": " p, q \\in \\{0,1\\} "
},
{
"math_id": 6,
"text": " m_i = 3i+p "
},
{
"math_id": 7,
"text": " n_i = 2(k-i)+q "
},
{
"math_id": 8,
"text": " c_i = m_i!!! \\cdot (m_i-1)!!! \\cdot n_i!! \\cdot (n_i-1)!!"
},
{
"math_id": 9,
"text": " u=A + Bx + Cy + Dxy \\, "
},
{
"math_id": 10,
"text": " u=A(\\tfrac{1}{2}y^2 - \\tfrac{1}{6}x^3) + B(\\tfrac{1}{2}xy^2 - \\tfrac{1}{12}x^4) + C(\\tfrac{1}{6}y^3 - \\tfrac{1}{6}x^3y) + D(\\tfrac{1}{6}xy^3 - \\tfrac{1}{12}x^4y) \\, "
}
] | https://en.wikipedia.org/wiki?curid=1089161 |
1089172 | Chaplygin's equation | In gas dynamics, Chaplygin's equation, named after Sergei Alekseevich Chaplygin (1902), is a partial differential equation useful in the study of transonic flow. It is
formula_0
Here, formula_1 is the speed of sound, determined by the equation of state of the fluid and conservation of energy. For polytropic gases, we have formula_2, where formula_3 is the specific heat ratio and formula_4 is the stagnation enthalpy, in which case Chaplygin's equation reduces to
formula_5
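The algebra behind this reduction can be checked symbolically; a short SymPy sketch, assuming only the polytropic relation quoted above:
import sympy as sp
v, h0, gamma = sp.symbols("v h_0 gamma", positive=True)
c2 = (gamma - 1) * (h0 - v**2 / 2)                 # polytropic relation for c^2
general_coeff = v**2 / (1 - v**2 / c2)             # coefficient in the general equation
reduced_coeff = v**2 * (2 * h0 - v**2) / (2 * h0 - (gamma + 1) * v**2 / (gamma - 1))
print(sp.simplify(general_coeff - reduced_coeff))  # prints 0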
The Bernoulli equation (see the derivation below) states that maximum velocity occurs when specific enthalpy is at the smallest value possible; one can take the specific enthalpy to be zero, corresponding to absolute zero temperature, as the reference value, in which case formula_6 is the square of the maximum attainable velocity. The particular integrals of the above equation can be expressed in terms of hypergeometric functions.
Derivation.
For two-dimensional potential flow, the continuity equation and the Euler equations (in fact, the compressible Bernoulli's equation due to irrotationality) in Cartesian coordinates formula_7 involving the variables fluid velocity formula_8, specific enthalpy formula_9 and density formula_10 are
formula_11
with the equation of state formula_12 acting as third equation. Here formula_13 is the stagnation enthalpy, formula_14 is the magnitude of the velocity vector and formula_15 is the entropy. For isentropic flow, density can be expressed as a function only of enthalpy formula_16, which in turn using Bernoulli's equation can be written as formula_17.
Since the flow is irrotational, a velocity potential formula_18 exists and its differential is simply formula_19. Instead of treating formula_20 and formula_21 as dependent variables, we use a coordinate transform such that formula_22 and formula_23 become new dependent variables. Similarly the velocity potential is replaced by a new function (Legendre transformation)
formula_24
such that its differential is formula_25, therefore
formula_26
Introducing another coordinate transformation for the independent variables from formula_8 to formula_27 according to the relation formula_28 and formula_29, where formula_30 is the magnitude of the velocity vector and formula_31 is the angle that the velocity vector makes with the formula_32-axis, the dependent variables become
formula_33
The continuity equation in the new coordinates become
formula_34
For isentropic flow, formula_35, where formula_36 is the speed of sound. Using Bernoulli's equation we find
formula_37
where formula_1. Hence, we have
formula_38
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\frac{\\partial^2 \\Phi}{\\partial \\theta^2} +\n\\frac{v^2}{1-v^2/c^2}\\frac{\\partial^2 \\Phi}{\\partial v^2}+v \\frac{\\partial \\Phi}{\\partial v}=0."
},
{
"math_id": 1,
"text": "c=c(v)"
},
{
"math_id": 2,
"text": "c^2/(\\gamma-1) = h_0- v^2/2"
},
{
"math_id": 3,
"text": "\\gamma"
},
{
"math_id": 4,
"text": "h_0"
},
{
"math_id": 5,
"text": "\n\\frac{\\partial^2 \\Phi}{\\partial \\theta^2} +\nv^2\\frac{2h_0-v^2}{2h_0-(\\gamma+1)v^2/(\\gamma-1)}\\frac{\\partial^2 \\Phi}{\\partial v^2}+v \\frac{\\partial \\Phi}{\\partial v}=0."
},
{
"math_id": 6,
"text": "2h_0"
},
{
"math_id": 7,
"text": "(x,y)"
},
{
"math_id": 8,
"text": "(v_x,v_y)"
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "\\rho"
},
{
"math_id": 11,
"text": "\n\\begin{align}\n\\frac{\\partial }{\\partial x}(\\rho v_x) + \\frac{\\partial }{\\partial y}(\\rho v_y) &=0,\\\\\nh + \\frac{1}{2}v^2 &= h_o.\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\\rho=\\rho(s,h)"
},
{
"math_id": 13,
"text": "h_o"
},
{
"math_id": 14,
"text": "v^2 = v_x^2 + v_y^2"
},
{
"math_id": 15,
"text": "s"
},
{
"math_id": 16,
"text": "\\rho=\\rho(h)"
},
{
"math_id": 17,
"text": "\\rho=\\rho(v)"
},
{
"math_id": 18,
"text": "\\phi"
},
{
"math_id": 19,
"text": "d\\phi = v_x dx + v_y dy"
},
{
"math_id": 20,
"text": "v_x=v_x(x,y)"
},
{
"math_id": 21,
"text": "v_y=v_y(x,y)"
},
{
"math_id": 22,
"text": "x=x(v_x,v_y)"
},
{
"math_id": 23,
"text": "y=y(v_x,v_y)"
},
{
"math_id": 24,
"text": "\\Phi = xv_x + yv_y - \\phi"
},
{
"math_id": 25,
"text": "d\\Phi = xdv_x + y dv_y"
},
{
"math_id": 26,
"text": "x = \\frac{\\partial \\Phi}{\\partial v_x}, \\quad y = \\frac{\\partial \\Phi}{\\partial v_y}."
},
{
"math_id": 27,
"text": "(v,\\theta)"
},
{
"math_id": 28,
"text": "v_x = v\\cos\\theta"
},
{
"math_id": 29,
"text": "v_y = v\\sin\\theta"
},
{
"math_id": 30,
"text": "v"
},
{
"math_id": 31,
"text": "\\theta"
},
{
"math_id": 32,
"text": "v_x"
},
{
"math_id": 33,
"text": "\n\\begin{align}\nx &= \\cos\\theta \\frac{\\partial \\Phi}{\\partial v}-\\frac{\\sin\\theta}{v}\\frac{\\partial \\Phi}{\\partial \\theta},\\\\\ny &= \\sin\\theta \\frac{\\partial \\Phi}{\\partial v}+\\frac{\\cos\\theta}{v}\\frac{\\partial \\Phi}{\\partial \\theta},\\\\\n\\phi & = -\\Phi + v\\frac{\\partial \\Phi}{\\partial v}.\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\\frac{d(\\rho v)}{dv} \\left(\\frac{\\partial \\Phi}{\\partial v} + \\frac{1}{v} \\frac{\\partial^2 \\Phi}{\\partial \\theta^2}\\right) + \\rho v \\frac{\\partial^2 \\Phi}{\\partial v^2} =0."
},
{
"math_id": 35,
"text": "dh=\\rho^{-1}c^2 d\\rho"
},
{
"math_id": 36,
"text": "c"
},
{
"math_id": 37,
"text": "\\frac{d(\\rho v)}{d v} = \\rho \\left(1-\\frac{v^2}{c^2}\\right)"
},
{
"math_id": 38,
"text": "\n\\frac{\\partial^2 \\Phi}{\\partial \\theta^2} +\n\\frac{v^2}{1-\\frac{v^2}{c^2}}\\frac{\\partial^2 \\Phi}{\\partial v^2}+v \\frac{\\partial \\Phi}{\\partial v}=0."
}
] | https://en.wikipedia.org/wiki?curid=1089172 |
1089270 | Random sample consensus | Statistical method
<templatestyles src="Machine learning/styles.css"/>
Random sample consensus (RANSAC) is an iterative method to estimate parameters of a mathematical model from a set of observed data that contains outliers, when outliers are to be accorded no influence on the values of the estimates. Therefore, it also can be interpreted as an outlier detection method. It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles at SRI International in 1981. They used RANSAC to solve the Location Determination Problem (LDP), where the goal is to determine the points in the space that project onto an image into a set of landmarks with known locations.
RANSAC uses repeated random sub-sampling. A basic assumption is that the data consists of "inliers", i.e., data whose distribution can be explained by some set of model parameters, though may be subject to noise, and "outliers" which are data that do not fit the model. The outliers can come, for example, from extreme values of the noise or from erroneous measurements or incorrect hypotheses about the interpretation of data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure which can estimate the parameters of a model that optimally explains or fits this data.
Example.
A simple example is fitting a line in two dimensions to a set of observations. Assuming that this set contains both "inliers", i.e., points which approximately can be fitted to a line, and "outliers", points which cannot be fitted to this line, a simple least squares method for line fitting will generally produce a line with a bad fit to the data including inliers and outliers. The reason is that it is optimally fitted to all points, including the outliers. RANSAC, on the other hand, attempts to exclude the outliers and find a linear model that only uses the inliers in its calculation. This is done by fitting linear models to several random samplings of the data and returning the model that has the best fit to a subset of the data. Since the inliers tend to be more linearly related than a random mixture of inliers and outliers, a random subset that consists entirely of inliers will have the best model fit. In practice, there is no guarantee that a subset of inliers will be randomly sampled, and the probability of the algorithm succeeding depends on the proportion of inliers in the data as well as the choice of several algorithm parameters.
Overview.
The RANSAC algorithm is a learning technique to estimate parameters of a model by random sampling of observed data. Given a dataset whose data elements contain both inliers and outliers, RANSAC uses the voting scheme to find the optimal fitting result. Data elements in the dataset are used to vote for one or multiple models. The implementation of this voting scheme is based on two assumptions: that the noisy features will not vote consistently for any single model (few outliers) and there are enough features to agree on a good model (few missing data). The RANSAC algorithm is essentially composed of two steps that are iteratively repeated:
The set of inliers obtained for the fitting model is called the "consensus set". The RANSAC algorithm will iteratively repeat the above two steps until the obtained consensus set in certain iteration has enough inliers.
The input to the RANSAC algorithm is a set of observed data values, a model to fit to the observations, and some confidence parameters defining outliers. In more detail than the aforementioned RANSAC algorithm overview, RANSAC achieves its goal by repeating the following steps:
To converge to a sufficiently good model parameter set, this procedure is repeated a fixed number of times, each time producing either the rejection of a model because too few points are a part of the consensus set, or a refined model with a consensus set size larger than the previous consensus set.
Pseudocode.
The generic RANSAC algorithm works as the following pseudocode:
Given:
data – A set of observations.
model – A model to explain the observed data points.
n – The minimum number of data points required to estimate the model parameters.
k – The maximum number of iterations allowed in the algorithm.
t – A threshold value to determine data points that are fit well by the model (inlier).
d – The number of close data points (inliers) required to assert that the model fits well to the data.
Return:
bestFit – The model parameters which may best fit the data (or null if no good model is found).
iterations = 0
bestFit = null
bestErr = something really large // This parameter is used to sharpen the model parameters to the best data fitting as iterations go on.
while "iterations" < "k" do
maybeInliers := n randomly selected values from data
maybeModel := model parameters fitted to maybeInliers
confirmedInliers := empty set
for every point in data do
if point fits maybeModel with an error smaller than t then
add point to confirmedInliers
end if
end for
if the number of elements in confirmedInliers is > d then
// This implies that we may have found a good model.
// Now test how good it is.
betterModel := model parameters fitted to all the points in confirmedInliers
thisErr := a measure of how well betterModel fits these points
if thisErr < bestErr then
bestFit := betterModel
bestErr := thisErr
end if
end if
increment iterations
end while
return bestFit
Example code.
A Python implementation mirroring the pseudocode. This also defines a codice_0 based on least squares, applies codice_1 to a 2D regression problem, and visualizes the outcome:
from copy import copy
import numpy as np
from numpy.random import default_rng
rng = default_rng()
class RANSAC:
    def __init__(self, n=10, k=100, t=0.05, d=10, model=None, loss=None, metric=None):
        self.n = n              # `n`: Minimum number of data points to estimate parameters
        self.k = k              # `k`: Maximum iterations allowed
        self.t = t              # `t`: Threshold value to determine if points are fit well
        self.d = d              # `d`: Number of close data points required to assert model fits well
        self.model = model      # `model`: class implementing `fit` and `predict`
        self.loss = loss        # `loss`: function of `y_true` and `y_pred` that returns a vector
        self.metric = metric    # `metric`: function of `y_true` and `y_pred` that returns a float
        self.best_fit = None
        self.best_error = np.inf

    def fit(self, X, y):
        for _ in range(self.k):
            ids = rng.permutation(X.shape[0])

            maybe_inliers = ids[: self.n]
            maybe_model = copy(self.model).fit(X[maybe_inliers], y[maybe_inliers])

            thresholded = (
                self.loss(y[ids][self.n :], maybe_model.predict(X[ids][self.n :]))
                < self.t
            )
            inlier_ids = ids[self.n :][np.flatnonzero(thresholded).flatten()]

            if inlier_ids.size > self.d:
                inlier_points = np.hstack([maybe_inliers, inlier_ids])
                better_model = copy(self.model).fit(X[inlier_points], y[inlier_points])

                this_error = self.metric(
                    y[inlier_points], better_model.predict(X[inlier_points])
                )

                if this_error < self.best_error:
                    self.best_error = this_error
                    self.best_fit = better_model

        return self

    def predict(self, X):
        return self.best_fit.predict(X)
def square_error_loss(y_true, y_pred):
    return (y_true - y_pred) ** 2


def mean_square_error(y_true, y_pred):
    return np.sum(square_error_loss(y_true, y_pred)) / y_true.shape[0]


class LinearRegressor:
    def __init__(self):
        self.params = None

    def fit(self, X: np.ndarray, y: np.ndarray):
        r, _ = X.shape
        X = np.hstack([np.ones((r, 1)), X])
        self.params = np.linalg.inv(X.T @ X) @ X.T @ y
        return self

    def predict(self, X: np.ndarray):
        r, _ = X.shape
        X = np.hstack([np.ones((r, 1)), X])
        return X @ self.params
if __name__ == "__main__":
regressor = RANSAC(model=LinearRegressor(), loss=square_error_loss, metric=mean_square_error)
X = np.array([-0.848,-0.800,-0.704,-0.632,-0.488,-0.472,-0.368,-0.336,-0.280,-0.200,-0.00800,-0.0840,0.0240,0.100,0.124,0.148,0.232,0.236,0.324,0.356,0.368,0.440,0.512,0.548,0.660,0.640,0.712,0.752,0.776,0.880,0.920,0.944,-0.108,-0.168,-0.720,-0.784,-0.224,-0.604,-0.740,-0.0440,0.388,-0.0200,0.752,0.416,-0.0800,-0.348,0.988,0.776,0.680,0.880,-0.816,-0.424,-0.932,0.272,-0.556,-0.568,-0.600,-0.716,-0.796,-0.880,-0.972,-0.916,0.816,0.892,0.956,0.980,0.988,0.992,0.00400]).reshape(-1,1)
y = np.array([-0.917,-0.833,-0.801,-0.665,-0.605,-0.545,-0.509,-0.433,-0.397,-0.281,-0.205,-0.169,-0.0531,-0.0651,0.0349,0.0829,0.0589,0.175,0.179,0.191,0.259,0.287,0.359,0.395,0.483,0.539,0.543,0.603,0.667,0.679,0.751,0.803,-0.265,-0.341,0.111,-0.113,0.547,0.791,0.551,0.347,0.975,0.943,-0.249,-0.769,-0.625,-0.861,-0.749,-0.945,-0.493,0.163,-0.469,0.0669,0.891,0.623,-0.609,-0.677,-0.721,-0.745,-0.885,-0.897,-0.969,-0.949,0.707,0.783,0.859,0.979,0.811,0.891,-0.137]).reshape(-1,1)
regressor.fit(X, y)
import matplotlib.pyplot as plt
plt.style.use("seaborn-darkgrid")
fig, ax = plt.subplots(1, 1)
ax.set_box_aspect(1)
plt.scatter(X, y)
line = np.linspace(-1, 1, num=100).reshape(-1, 1)
plt.plot(line, regressor.predict(line), c="peru")
plt.show()
Parameters.
The threshold value to determine when a data point fits a model (t), and the number of inliers (data points fitted to the model within "t") required to assert that the model fits well to data (d) are determined based on specific requirements of the application and the dataset, and possibly based on experimental evaluation. The number of iterations (k), however, can be roughly determined as a function of the desired probability of success (p) as shown below.
Let p be the desired probability that the RANSAC algorithm provides at least one useful result after running. In extreme (for simplifying the derivation), RANSAC returns a successful result if in some iteration it selects only inliers from the input data set when it chooses n points from the data set from which the model parameters are estimated. (In other words, all the selected n data points are inliers of the model estimated by these points). Let formula_0 be the probability of choosing an inlier each time a single data point is selected, that is roughly,
formula_0 = number of inliers in data / number of points in data
A common case is that formula_0 is not well known beforehand because of an unknown number of inliers in data before running the RANSAC algorithm, but some rough value can be given. With a given rough value of formula_0 and roughly assuming that the n points needed for estimating a model are selected independently (It is a rough assumption because each data point selection reduces the number of data point candidates to choose in the next selection in reality), formula_1 is the probability that all "n" points are inliers and formula_2 is the probability that at least one of the n points is an outlier, a case which implies that a bad model will be estimated from this point set. That probability to the power of k (the number of iterations in running the algorithm) is the probability that the algorithm never selects a set of n points which all are inliers, and this is the same as formula_3 (the probability that the algorithm does not result in a successful model estimation) in extreme. Consequently,
formula_4
which, after taking the logarithm of both sides, leads to
formula_5
This result assumes that the n data points are selected independently, that is, a point which has been selected once is replaced and can be selected again in the same iteration. This is often not a reasonable approach and the derived value for k should be taken as an upper limit in the case that the points are selected without replacement. For example, in the case of finding a line which fits the data set illustrated in the above figure, the RANSAC algorithm typically chooses two points in each iteration and computes codice_2 as the line between the points and it is then critical that the two points are distinct.
To gain additional confidence, the standard deviation or multiples thereof can be added to k. The standard deviation of k is defined as
formula_6
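As an illustration, the following sketch evaluates k and its standard deviation for an assumed inlier ratio and a two-point (line-fitting) model; the numbers are only an example:
from math import log, sqrt
def ransac_iterations(p, w, n):
    """Iterations k for success probability p, inlier ratio w, sample size n, plus SD(k)."""
    k = log(1 - p) / log(1 - w**n)
    sd = sqrt(1 - w**n) / w**n
    return k, sd
k, sd = ransac_iterations(p=0.99, w=0.6, n=2)   # assumed example values
print(round(k, 1), round(sd, 1))                # about 10.3 iterations, SD about 2.2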
Advantages and disadvantages.
An advantage of RANSAC is its ability to do robust estimation of the model parameters, i.e., it can estimate the parameters with a high degree of accuracy even when a significant number of outliers are present in the data set. A disadvantage of RANSAC is that there is no upper bound on the time it takes to compute these parameters (except exhaustion). When the number of iterations computed is limited the solution obtained may not be optimal, and it may not even be one that fits the data in a good way. In this way RANSAC offers a trade-off; by computing a greater number of iterations the probability of a reasonable model being produced is increased. Moreover, RANSAC is not always able to find the optimal set even for moderately contaminated sets and it usually performs badly when the number of inliers is less than 50%. Optimal RANSAC was proposed to handle both these problems and is capable of finding the optimal set for heavily contaminated sets, even for an inlier ratio under 5%. Another disadvantage of RANSAC is that it requires the setting of problem-specific thresholds.
RANSAC can only estimate one model for a particular data set. As for any one-model approach when two (or more) model instances exist, RANSAC may fail to find either one. The Hough transform is one alternative robust estimation technique that may be useful when more than one model instance is present. Another approach for multi model fitting is known as PEARL, which combines model sampling from data points as in RANSAC with iterative re-estimation of inliers and the multi-model fitting being formulated as an optimization problem with a global energy function describing the quality of the overall solution.
Applications.
The RANSAC algorithm is often used in computer vision, e.g., to simultaneously solve the correspondence problem and estimate the fundamental matrix related to a pair of stereo cameras; see also: Structure from motion, scale-invariant feature transform, image stitching, rigid motion segmentation.
Development and improvements.
Since 1981 RANSAC has become a fundamental tool in the computer vision and image processing community. In 2006, for the 25th anniversary of the algorithm, a workshop was organized at the International Conference on Computer Vision and Pattern Recognition (CVPR) to summarize the most recent contributions and variations to the original algorithm, mostly meant to improve the speed of the algorithm, the robustness and accuracy of the estimated solution and to decrease the dependency from user defined constants.
RANSAC can be sensitive to the choice of the correct noise threshold that defines which data points fit a model instantiated with a certain set of parameters. If such a threshold is too large, then all the hypotheses tend to be ranked equally (good). On the other hand, when the noise threshold is too small, the estimated parameters tend to be unstable (i.e. by simply adding or removing a datum to the set of inliers, the estimate of the parameters may fluctuate). To partially compensate for this undesirable effect, Torr et al. proposed two modifications of RANSAC called MSAC (M-estimator SAmple and Consensus) and MLESAC (Maximum Likelihood Estimation SAmple and Consensus). The main idea is to evaluate the quality of the consensus set (i.e. the data that fit a model and a certain set of parameters) calculating its likelihood (whereas in the original formulation by Fischler and Bolles the rank was the cardinality of such set). An extension to MLESAC which takes into account the prior probabilities associated to the input dataset is proposed by Tordoff. The resulting algorithm is dubbed Guided-MLESAC. Along similar lines, Chum proposed to guide the sampling procedure if some a priori information regarding the input data is known, i.e. whether a datum is likely to be an inlier or an outlier. The proposed approach is called PROSAC, PROgressive SAmple Consensus.
Chum et al. also proposed a randomized version of RANSAC called R-RANSAC to reduce the computational burden to identify a good consensus set. The basic idea is to initially evaluate the goodness of the currently instantiated model using only a reduced set of points instead of the entire dataset. A sound strategy will tell with high confidence when it is the case to evaluate the fitting of the entire dataset or when the model can be readily discarded. It is reasonable to think that the impact of this approach is more relevant in cases where the percentage of inliers is large. The type of strategy proposed by Chum et al. is called preemption scheme. Nistér proposed a paradigm called Preemptive RANSAC that allows real time robust estimation of the structure of a scene and of the motion of the camera. The core idea of the approach consists in generating a fixed number of hypotheses so that the comparison happens with respect to the quality of the generated hypothesis rather than against some absolute quality metric.
Other researchers tried to cope with difficult situations where the noise scale is not known and/or multiple model instances are present. The first problem has been tackled in the work by Wang and Suter. Toldo et al. represent each datum with the characteristic function of the set of random models that fit the point. Then multiple models are revealed as clusters which group the points supporting the same model. The clustering algorithm, called J-linkage, does not require prior specification of the number of models, nor does it necessitate manual parameter tuning.
RANSAC has also been tailored for recursive state estimation applications, where the input measurements are corrupted by outliers and Kalman filter approaches, which rely on a Gaussian distribution of the measurement error, are doomed to fail. Such an approach is dubbed KALMANSAC.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "w"
},
{
"math_id": 1,
"text": "w^{n}"
},
{
"math_id": 2,
"text": "1 - w^{n}"
},
{
"math_id": 3,
"text": "1 - p"
},
{
"math_id": 4,
"text": "\n1 - p = (1 - w^n)^k \\,\n"
},
{
"math_id": 5,
"text": "\nk = \\frac{\\log(1 - p)}{\\log(1 - w^n)}\n"
},
{
"math_id": 6,
"text": " \\operatorname{SD}(k) = \\frac{\\sqrt{1 - w^n}}{w^n}"
}
] | https://en.wikipedia.org/wiki?curid=1089270 |
1089282 | Ultrafast monochromator | An ultrafast monochromator is a monochromator that preserves the duration of an ultrashort pulse (in the femtosecond, or lower, time-scale). Monochromators are devices that select for a particular wavelength, typically using a diffraction grating to disperse the light and a slit to select the desired wavelength; however, a diffraction grating introduces path delays that measurably lengthen the duration of an ultrashort pulse. An ultrafast monochromator uses a second diffraction grating to compensate time delays introduced to the pulse by the first grating and other dispersive optical elements.
Diffraction grating.
Diffraction gratings are constructed such that the angle of the incident ray, "θi", is related to the angle of the "m"th outgoing ray, "θm", by the expression
formula_0.
Two rays diffracted by adjacent grooves will differ in path length by a distance "mλ". The total difference between the longest and shortest path within a beam is computed by multiplying "mλ" by the total number of grooves illuminated.
For instance, a beam of width 10 mm illuminating a grating with 1200 grooves/mm uses 12,000 grooves. At a wavelength of 10 nm, the first order diffracted beam, "m" = 1, will have a path length variation across the beam of 120 μm. This corresponds to a time difference in the arrival of 400 femtoseconds. This is often negligible for picosecond pulses but not for those of femtosecond duration.
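The arithmetic of this example is easy to reproduce; a minimal Python sketch:
# Path-length and time spread for the example above (first diffraction order).
c = 2.998e8                       # speed of light in m/s
grooves = 10e-3 * 1200e3          # 10 mm beam width x 1200 grooves/mm = 12,000 grooves
wavelength = 10e-9                # 10 nm
m = 1                             # diffraction order
path_spread = grooves * m * wavelength    # ~1.2e-4 m, i.e. 120 micrometres
time_spread = path_spread / c             # ~4e-13 s, i.e. about 400 femtoseconds
print(path_spread, time_spread)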
Applications.
A major application is the extraction, without time-broadening, of a single high-order harmonic pulse out of the many generated by an ultrafast laser pulse interacting with a gas target.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " m \\lambda = d ( \\sin{\\theta_i} - \\sin{\\theta_m})"
}
] | https://en.wikipedia.org/wiki?curid=1089282 |
1089311 | Modular lattice | In the branch of mathematics called order theory, a modular lattice is a lattice that satisfies the following self-dual condition,
where "x", "a", "b" are arbitrary elements in the lattice, ≤ is the partial order, and ∨ and ∧ (called join and meet respectively) are the operations of the lattice. This phrasing emphasizes an interpretation in terms of projection onto the sublattice ["a", "b"], a fact known as the diamond isomorphism theorem. An alternative but equivalent condition stated as an equation (see below) emphasizes that modular lattices form a variety in the sense of universal algebra.
Modular lattices arise naturally in algebra and in many other areas of mathematics. In these scenarios, modularity is an abstraction of the 2nd Isomorphism Theorem. For example, the subspaces of a vector space (and more generally the submodules of a module over a ring) form a modular lattice.
In a not necessarily modular lattice, there may still be elements b for which the modular law holds in connection with arbitrary elements x and a (for "a" ≤ "b"). Such an element is called a right modular element. Even more generally, the modular law may hold for any a and a fixed pair ("x", "b"). Such a pair is called a modular pair, and there are various generalizations of modularity related to this notion and to semimodularity.
Modular lattices are sometimes called Dedekind lattices after Richard Dedekind, who discovered the modular identity in several motivating examples.
Introduction.
The modular law can be seen as a restricted associative law that connects the two lattice operations similarly to the way in which the associative law λ(μ"x") = (λμ)"x" for vector spaces connects multiplication in the field and scalar multiplication.
The restriction "a" ≤ "b" is clearly necessary, since it follows from "a" ∨ ("x" ∧ "b") = ("a" ∨ "x") ∧ "b". In other words, no lattice with more than one element satisfies the unrestricted consequent of the modular law.
It is easy to see that "a" ≤ "b" implies "a" ∨ ("x" ∧ "b") ≤ ("a" ∨ "x") ∧ "b" in every lattice. Therefore, the modular law can also be stated as the reverse inequality: "a" ≤ "b" implies ("a" ∨ "x") ∧ "b" ≤ "a" ∨ ("x" ∧ "b").
The modular law can be expressed as an equation that is required to hold unconditionally. Since "a" ≤ "b" implies "a" = "a" ∧ "b" and since "a" ∧ "b" ≤ "b", replace "a" with "a" ∧ "b" in the defining equation of the modular law to obtain the modular identity:
("a" ∧ "b") ∨ ("x" ∧ "b") = [("a" ∧ "b") ∨ "x"] ∧ "b".
This shows that, using terminology from universal algebra, the modular lattices form a subvariety of the variety of lattices. Therefore, all homomorphic images, sublattices and direct products of modular lattices are again modular.
Examples.
The lattice of submodules of a module over a ring is modular. As a special case, the lattice of subgroups of an abelian group is modular.
The lattice of normal subgroups of a group is modular. But in general the lattice of all subgroups of a group is not modular. For an example, the lattice of subgroups of the dihedral group of order 8 is not modular.
The smallest non-modular lattice is the "pentagon" lattice "N"5 consisting of five elements 0, 1, "x", "a", "b" such that 0 < "x" < "b" < 1, 0 < "a" < 1, and "a" is not comparable to "x" or to "b". For this lattice,
"x" ∨ ("a" ∧ "b") = "x" ∨ 0 = "x" < "b" = 1 ∧ "b" = ("x" ∨ "a") ∧ "b"
holds, contradicting the modular law. Every non-modular lattice contains a copy of "N"5 as a sublattice.
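Because "N"5 is finite, the failure of the modular law can be checked exhaustively; a brute-force Python sketch (the encoding of the order relation is an illustrative choice):
from itertools import product
# Pentagon N5: 0 < x < b < 1, 0 < a < 1, with a incomparable to x and b.
elements = ["0", "x", "b", "a", "1"]
below = {"0": {"0"}, "x": {"0", "x"}, "b": {"0", "x", "b"},
         "a": {"0", "a"}, "1": {"0", "x", "b", "a", "1"}}
def leq(s, t):
    return s in below[t]
def join(s, t):   # least upper bound
    return min((u for u in elements if leq(s, u) and leq(t, u)), key=lambda u: len(below[u]))
def meet(s, t):   # greatest lower bound
    return max((d for d in elements if leq(d, s) and leq(d, t)), key=lambda d: len(below[d]))
violations = [(a, b, x) for a, b, x in product(elements, repeat=3)
              if leq(a, b) and join(a, meet(x, b)) != meet(join(a, x), b)]
print(violations)   # non-empty; it contains ('x', 'b', 'a'), the counterexample above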
Properties.
Every distributive lattice is modular.
R. P. Dilworth proved that, in every finite modular lattice, the number of join-irreducible elements equals the number of meet-irreducible elements. More generally, for every k, the number of elements of the lattice that cover exactly k other elements equals the number that are covered by exactly k other elements.
A useful property to show that a lattice is not modular is as follows:
A lattice G is modular if and only if, for any "a", "b", "c" ∈ "G",
formula_0
Sketch of proof: Let G be modular, and let the premise of the implication hold. Then using absorption and modular identity:
"c" = ("c"∧"b") ∨ "c" = ("a"∧"b") ∨ "c" = "a" ∧ ("b"∨"c") = "a" ∧ ("b"∨"a") = "a"
For the other direction, let the implication of the theorem hold in G. Let "a","b","c" be any elements in G, such that "c" ≤ "a". Let "x" = ("a"∧"b") ∨ "c", "y" = "a" ∧ ("b"∨"c"). From the modular inequality immediately follows that "x" ≤ "y". If we show that "x"∧"b" = "y"∧"b", "x"∨"b" = "y"∨"b", then using the assumption "x" = "y" must hold. The rest of the proof is routine manipulation with infima, suprema and inequalities.
Diamond isomorphism theorem.
For any two elements "a","b" of a modular lattice, one can consider the intervals ["a" ∧ "b", "b"] and ["a", "a" ∨ "b"]. They are connected by order-preserving maps
φ: ["a" ∧ "b", "b"] → ["a", "a" ∨ "b"] and
ψ: ["a", "a" ∨ "b"] → ["a" ∧ "b", "b"]
that are defined by φ("x") = "x" ∨ "a" and ψ("y") = "y" ∧ "b".
The composition ψφ is an order-preserving map from the interval ["a" ∧ "b", "b"] to itself which also satisfies the inequality ψ(φ("x")) = ("x" ∨ "a") ∧ "b" ≥ "x". The example shows that this inequality can be strict in general. In a modular lattice, however, equality holds. Since the dual of a modular lattice is again modular, φψ is also the identity on ["a", "a" ∨ "b"], and therefore the two maps φ and ψ are isomorphisms between these two intervals. This result is sometimes called the diamond isomorphism theorem for modular lattices. A lattice is modular if and only if the diamond isomorphism theorem holds for every pair of elements.
The diamond isomorphism theorem for modular lattices is analogous to the second isomorphism theorem in algebra, and it is a generalization of the lattice theorem.
Modular pairs and related notions.
In any lattice, a modular pair is a pair ("a, b") of elements such that for all "x" satisfying "a" ∧ "b" ≤ "x" ≤ "b", we have ("x" ∨ "a") ∧ "b" = "x", i.e. if one half of the diamond isomorphism theorem holds for the pair. An element "b" of a lattice is called a right modular element if ("a, b") is a modular pair for all elements "a", and an element "a" is called a left modular element if ("a, b") is a modular pair for all elements "b".
A lattice with the property that if ("a, b") is a modular pair, then ("b, a") is also a modular pair is called an M-symmetric lattice. Thus, in an M-symmetric lattice, every right modular element is also left modular, and vice-versa. Since a lattice is modular if and only if all pairs of elements are modular, clearly every modular lattice is M-symmetric. In the lattice "N"5 described above, the pair ("b, a") is modular, but the pair ("a, b") is not. Therefore, "N"5 is not M-symmetric. The centred hexagon lattice "S"7 is M-symmetric but not modular. Since "N"5 is a sublattice of "S"7, it follows that the M-symmetric lattices do not form a subvariety of the variety of lattices.
M-symmetry is not a self-dual notion. A dual modular pair is a pair which is modular in the dual lattice, and a lattice is called dually M-symmetric or M*-symmetric if its dual is M-symmetric. It can be shown that a finite lattice is modular if and only if it is M-symmetric and M*-symmetric. The same equivalence holds for infinite lattices which satisfy the ascending chain condition (or the descending chain condition).
Several less important notions are also closely related. A lattice is cross-symmetric if for every modular pair ("a, b") the pair ("b, a") is dually modular. Cross-symmetry implies M-symmetry but not M*-symmetry. Therefore, cross-symmetry is not equivalent to dual cross-symmetry. A lattice with a least element 0 is ⊥-symmetric if for every modular pair ("a, b") satisfying "a" ∧ "b" = 0 the pair ("b, a") is also modular.
History.
The definition of modularity is due to Richard Dedekind, who published most of the relevant papers after his retirement.
In a paper published in 1894 he studied lattices, which he called "dual groups" as part of his "algebra of modules" and observed that ideals satisfy what we now call the modular law. He also observed that for lattices in general, the modular law is equivalent to its dual.
In another paper in 1897, Dedekind studied the lattice of divisors with gcd and lcm as operations, so that the lattice order is given by divisibility.
In a digression he introduced and studied lattices formally in a general context. He observed that the lattice of submodules of a module satisfies the modular identity. He called such lattices "dual groups of module type". He also proved that the modular identity and its dual are equivalent.
In the same paper, Dedekind also investigated the following stronger form of the modular identity, which is also self-dual:
("x" ∧ "b") ∨ ("a" ∧ "b") = ["x" ∨ "a"] ∧ "b".
He called lattices that satisfy this identity "dual groups of ideal type". In modern literature, they are more commonly referred to as distributive lattices. He gave examples of a lattice that is not modular and of a modular lattice that is not of ideal type.
A paper published by Dedekind in 1900 had lattices as its central topic: He described the free modular lattice generated by three elements, a lattice with 28 elements (see picture). | [
{
"math_id": 0,
"text": "\\Big((c\\leq a)\\text{ and }(a\\wedge b=c\\wedge b)\\text{ and }(a\\vee b=c\\vee b)\\Big)\\Rightarrow(a=c)"
}
] | https://en.wikipedia.org/wiki?curid=1089311 |
10896234 | Normal-gamma distribution | Family of continuous probability distributions
In probability theory and statistics, the normal-gamma distribution (or Gaussian-gamma distribution) is a bivariate four-parameter family of continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and precision.
Definition.
For a pair of random variables, ("X","T"), suppose that the conditional distribution of "X" given "T" is given by
formula_0
meaning that the conditional distribution is a normal distribution with mean formula_1 and precision formula_2 — equivalently, with variance formula_3
Suppose also that the marginal distribution of "T" is given by
formula_4
where this means that "T" has a gamma distribution. Here "λ", "α" and "β" are parameters of the joint distribution.
Then ("X","T") has a normal-gamma distribution, and this is denoted by
formula_5
Properties.
Probability density function.
The joint probability density function of ("X","T") is
formula_6
Marginal distributions.
By construction, the marginal distribution of formula_7 is a gamma distribution, and the conditional distribution of formula_8 given formula_7 is a Gaussian distribution. The marginal distribution of formula_8 is a three-parameter non-standardized Student's t-distribution with parameters formula_9.
Exponential family.
The normal-gamma distribution is a four-parameter exponential family with natural parameters formula_10 and natural statistics formula_11.
Moments of the natural statistics.
The following moments can be easily computed using the moment generating function of the sufficient statistic:
formula_12
where formula_13 is the digamma function,
formula_14
Scaling.
If formula_15 then for any formula_16 is distributed as formula_17
Posterior distribution of the parameters.
Assume that "x" is distributed according to a normal distribution with unknown mean formula_18 and precision formula_7.
formula_19
and that the prior distribution on formula_18 and formula_7, formula_20, has a normal-gamma distribution
formula_21
for which the density π satisfies
formula_22
Suppose
formula_23
i.e., the components of formula_24 are conditionally independent given formula_25, and the conditional distribution of each of them given formula_26 is normal with expected value formula_18 and variance formula_27 The posterior distribution of formula_18 and formula_7 given this dataset formula_28 can be determined analytically by Bayes' theorem; explicitly,
formula_29
where formula_30 is the likelihood of the parameters given the data.
Since the data are i.i.d., the likelihood of the entire dataset is equal to the product of the likelihoods of the individual data samples:
formula_31
This expression can be simplified as follows:
formula_32
where formula_33 is the mean of the data samples and formula_34 is the sample variance.
The posterior distribution of the parameters is proportional to the prior times the likelihood.
formula_35
The final exponential term is simplified by completing the square.
formula_36
On inserting this back into the expression above,
formula_37
This final expression is in exactly the same form as a Normal-Gamma distribution, i.e.,
formula_38
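Because the posterior stays in the same family, the update amounts to recomputing the four hyperparameters. A minimal Python sketch of the update implied by the expression above (the function name is illustrative):

```python
def normal_gamma_posterior(data, mu0, lam0, alpha0, beta0):
    """Conjugate NormalGamma update for i.i.d. normal data with unknown mean and precision."""
    n = len(data)
    xbar = sum(data) / n
    s = sum((xi - xbar) ** 2 for xi in data) / n   # sample variance as defined in the text
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = lam0 + n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * (n * s + lam0 * n * (xbar - mu0) ** 2 / (lam0 + n))
    return mu_n, lam_n, alpha_n, beta_n
```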
Interpretation of parameters.
The interpretation of parameters in terms of pseudo-observations is as follows:
As a consequence, if one has a prior mean of formula_44 from formula_45 samples and a prior precision of formula_46 from formula_47 samples, the prior distribution over formula_48 and formula_49 is
formula_50
and after observing formula_43 samples with mean formula_18 and variance formula_51, the posterior probability is
formula_52
Note that in some programming languages, such as Matlab, the gamma distribution is implemented with the inverse definition of formula_53, so the fourth argument of the Normal-Gamma distribution is formula_54.
Generating normal-gamma random variates.
Generation of random variates is straightforward: first, sample formula_7 from a gamma distribution with shape parameter formula_55 and rate parameter formula_53; then, sample formula_8 from a normal distribution with mean formula_18 and variance formula_56.
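A minimal Python sketch of this two-step procedure, using only the standard library (the function name is illustrative; note that random.gammavariate takes a scale parameter, hence the 1/β):

```python
import math
import random

def sample_normal_gamma(mu, lam, alpha, beta):
    """Draw one (x, tau) pair from NormalGamma(mu, lam, alpha, beta)."""
    tau = random.gammavariate(alpha, 1.0 / beta)       # shape alpha, rate beta (scale 1/beta)
    x = random.gauss(mu, 1.0 / math.sqrt(lam * tau))   # standard deviation 1/sqrt(lam*tau)
    return x, tau
```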
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " X\\mid T \\sim N(\\mu,1 /(\\lambda T)) \\,\\! , "
},
{
"math_id": 1,
"text": " \\mu"
},
{
"math_id": 2,
"text": " \\lambda T "
},
{
"math_id": 3,
"text": " 1 / (\\lambda T) . "
},
{
"math_id": 4,
"text": "T \\mid \\alpha, \\beta \\sim \\operatorname{Gamma}(\\alpha,\\beta),"
},
{
"math_id": 5,
"text": " (X,T) \\sim \\operatorname{NormalGamma}(\\mu,\\lambda,\\alpha,\\beta).\n"
},
{
"math_id": 6,
"text": "f(x,\\tau\\mid\\mu,\\lambda,\\alpha,\\beta) = \\frac{\\beta^\\alpha \\sqrt{\\lambda}}{\\Gamma(\\alpha)\\sqrt{2\\pi}} \\, \\tau^{\\alpha-\\frac{1}{2}}\\,e^{-\\beta\\tau}\\exp\\left( -\\frac{ \\lambda \\tau (x- \\mu)^2}{2}\\right)"
},
{
"math_id": 7,
"text": "\\tau"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "(\\nu, \\mu, \\sigma^2)=(2\\alpha, \\mu, \\beta/(\\lambda\\alpha))"
},
{
"math_id": 10,
"text": "\\alpha-1/2, -\\beta-\\lambda\\mu^2/2, \\lambda\\mu, -\\lambda/2"
},
{
"math_id": 11,
"text": "\\ln\\tau, \\tau, \\tau x, \\tau x^2"
},
{
"math_id": 12,
"text": "\\operatorname{E}(\\ln T)=\\psi\\left(\\alpha\\right) - \\ln\\beta,"
},
{
"math_id": 13,
"text": "\\psi\\left(\\alpha\\right)"
},
{
"math_id": 14,
"text": "\n\\begin{align}\n\\operatorname{E}(T) & =\\frac{\\alpha}{\\beta}, \\\\[5pt]\n\\operatorname{E}(TX) & =\\mu \\frac{\\alpha}{\\beta}, \\\\[5pt]\n\\operatorname{E}(TX^2) & =\\frac{1}{\\lambda} + \\mu^2 \\frac{\\alpha}{\\beta}.\n\\end{align}\n"
},
{
"math_id": 15,
"text": " (X,T) \\sim \\mathrm{NormalGamma}(\\mu,\\lambda,\\alpha,\\beta), "
},
{
"math_id": 16,
"text": "b>0, (bX,bT)"
},
{
"math_id": 17,
"text": "{\\rm NormalGamma}(b\\mu, \\lambda/ b^3, \\alpha, \\beta/ b )."
},
{
"math_id": 18,
"text": "\\mu"
},
{
"math_id": 19,
"text": " x \\sim \\mathcal{N}(\\mu, \\tau^{-1}) "
},
{
"math_id": 20,
"text": "(\\mu,\\tau)"
},
{
"math_id": 21,
"text": "\n(\\mu,\\tau) \\sim \\text{NormalGamma}(\\mu_0,\\lambda_0,\\alpha_0,\\beta_0) ,\n"
},
{
"math_id": 22,
"text": " \n\\pi(\\mu,\\tau) \\propto \\tau^{\\alpha_0-\\frac{1}{2}}\\,\\exp[-\\beta_0\\tau]\\,\\exp\\left[ -\\frac{\\lambda_0\\tau(\\mu-\\mu_0)^2} 2 \\right].\n"
},
{
"math_id": 23,
"text": "\nx_1,\\ldots,x_n \\mid \\mu,\\tau \\sim \\operatorname{{i.}{i.}{d.}} \\operatorname N\\left( \\mu, \\tau^{-1} \\right),\n"
},
{
"math_id": 24,
"text": "\\mathbf X = (x_1,\\ldots,x_n)"
},
{
"math_id": 25,
"text": "\\mu,\\tau"
},
{
"math_id": 26,
"text": " \\mu,\\tau"
},
{
"math_id": 27,
"text": " 1 / \\tau. "
},
{
"math_id": 28,
"text": " \\mathbb X"
},
{
"math_id": 29,
"text": "\\mathbf{P}(\\tau,\\mu \\mid \\mathbf{X}) \\propto \\mathbf{L}(\\mathbf{X} \\mid \\tau,\\mu) \\pi(\\tau,\\mu),"
},
{
"math_id": 30,
"text": "\\mathbf{L}"
},
{
"math_id": 31,
"text": "\n\\mathbf{L}(\\mathbf{X} \\mid \\tau, \\mu) = \\prod_{i=1}^n \\mathbf{L}(x_i \\mid \\tau, \\mu).\n"
},
{
"math_id": 32,
"text": "\n\\begin{align}\n\\mathbf{L}(\\mathbf{X} \\mid \\tau, \\mu) & \\propto \\prod_{i=1}^n \\tau^{1/2} \\exp\\left[\\frac{-\\tau}{2}(x_i-\\mu)^2\\right] \\\\[5pt]\n& \\propto \\tau^{n/2} \\exp\\left[\\frac{-\\tau}{2}\\sum_{i=1}^n(x_i-\\mu)^2\\right] \\\\[5pt]\n& \\propto \\tau^{n/2} \\exp\\left[\\frac{-\\tau}{2} \\sum_{i=1}^n(x_i-\\bar{x} +\\bar{x} -\\mu)^2 \\right] \\\\[5pt]\n& \\propto \\tau^{n/2} \\exp\\left[\\frac{-\\tau} 2 \\sum_{i=1}^n \\left((x_i-\\bar{x})^2 + (\\bar{x} -\\mu)^2 \\right)\\right] \\\\[5pt]\n& \\propto \\tau^{n/2} \\exp\\left[\\frac{-\\tau}{2}\\left(n s + n(\\bar{x} -\\mu)^2\\right)\\right],\n\\end{align}\n"
},
{
"math_id": 33,
"text": "\\bar{x}= \\frac{1}{n}\\sum_{i=1}^n x_i"
},
{
"math_id": 34,
"text": "s= \\frac{1}{n} \\sum_{i=1}^n(x_i-\\bar{x})^2"
},
{
"math_id": 35,
"text": "\n\\begin{align}\n\\mathbf{P}(\\tau, \\mu \\mid \\mathbf{X}) &\\propto \\mathbf{L}(\\mathbf{X} \\mid \\tau,\\mu) \\pi(\\tau,\\mu) \\\\\n&\\propto \\tau^{n/2} \\exp \\left[ \\frac{-\\tau}{2}\\left(n s + n(\\bar{x} -\\mu)^2\\right) \\right] \n \\tau^{\\alpha_0-\\frac{1}{2}}\\,\\exp[{-\\beta_0\\tau}]\\,\\exp\\left[-\\frac{\\lambda_0\\tau(\\mu-\\mu_0)^2}{2}\\right] \\\\ \n &\\propto \\tau^{\\frac{n}{2} + \\alpha_0 - \\frac{1}{2}}\\exp\\left[-\\tau \\left( \\frac{1}{2} n s + \\beta_0 \\right) \\right] \\exp\\left[- \\frac{\\tau}{2}\\left(\\lambda_0(\\mu-\\mu_0)^2 + n(\\bar{x} -\\mu)^2\\right)\\right] \n\\end{align}\n"
},
{
"math_id": 36,
"text": "\n\\begin{align}\n\\lambda_0(\\mu-\\mu_0)^2 + n(\\bar{x} -\\mu)^2&=\\lambda_0 \\mu^2 - 2 \\lambda_0 \\mu \\mu_0 + \\lambda_0 \\mu_0^2 + n \\mu^2 - 2 n \\bar{x} \\mu + n \\bar{x}^2 \\\\\n&= (\\lambda_0 + n) \\mu^2 - 2(\\lambda_0 \\mu_0 + n \\bar{x}) \\mu + \\lambda_0 \\mu_0^2 +n \\bar{x}^2 \\\\\n&= (\\lambda_0 + n)( \\mu^2 - 2 \\frac{\\lambda_0 \\mu_0 + n \\bar{x}}{\\lambda_0 + n} \\mu ) + \\lambda_0 \\mu_0^2 +n \\bar{x}^2 \\\\\n&= (\\lambda_0 + n)\\left(\\mu - \\frac{\\lambda_0 \\mu_0 + n \\bar{x}}{\\lambda_0 + n} \\right) ^2 + \\lambda_0 \\mu_0^2 +n \\bar{x}^2 - \\frac{\\left(\\lambda_0 \\mu_0 +n \\bar{x}\\right)^2} {\\lambda_0 + n} \\\\\n&= (\\lambda_0 + n)\\left(\\mu - \\frac{\\lambda_0 \\mu_0 + n \\bar{x}}{\\lambda_0 + n} \\right) ^2 + \\frac{\\lambda_0 n (\\bar{x} - \\mu_0 )^2}{\\lambda_0 +n}\n\\end{align}\n"
},
{
"math_id": 37,
"text": "\n\\begin{align}\n\\mathbf{P}(\\tau, \\mu \\mid \\mathbf{X}) & \\propto \\tau^{\\frac{n}{2} + \\alpha_0 - \\frac{1}{2}} \\exp \\left[-\\tau \\left( \\frac{1}{2} n s + \\beta_0 \\right) \\right] \\exp \\left[- \\frac{\\tau}{2} \\left( \\left(\\lambda_0 + n \\right) \\left(\\mu- \\frac{\\lambda_0 \\mu_0 + n \\bar{x}}{\\lambda_0 + n} \\right)^2 + \\frac{\\lambda_0 n (\\bar{x} - \\mu_0 )^2}{\\lambda_0 +n} \\right) \\right]\\\\\n& \\propto \\tau^{\\frac{n}{2} + \\alpha_0 - \\frac{1}{2}} \\exp \\left[-\\tau \\left( \\frac{1}{2} n s + \\beta_0 + \\frac{\\lambda_0 n (\\bar{x} - \\mu_0 )^2}{2(\\lambda_0 +n)} \\right) \\right] \\exp \\left[- \\frac{\\tau}{2} \\left(\\lambda_0 + n \\right) \\left(\\mu- \\frac{\\lambda_0 \\mu_0 + n \\bar{x}}{\\lambda_0 + n} \\right)^2 \\right]\n\\end{align}\n"
},
{
"math_id": 38,
"text": "\n\\mathbf{P}(\\tau, \\mu \\mid \\mathbf{X}) = \\text{NormalGamma}\\left(\\frac{\\lambda_0 \\mu_0 + n \\bar{x}}{\\lambda_0 + n}, \\lambda_0 + n, \\alpha_0+\\frac{n}{2}, \\beta_0+ \\frac{1}{2}\\left(n s + \\frac{\\lambda_0 n (\\bar{x} - \\mu_0 )^2}{\\lambda_0 +n} \\right) \\right)\n"
},
{
"math_id": 39,
"text": "2\\alpha"
},
{
"math_id": 40,
"text": "\\frac{\\beta}{\\alpha}"
},
{
"math_id": 41,
"text": "2\\beta"
},
{
"math_id": 42,
"text": "\\lambda_{0}"
},
{
"math_id": 43,
"text": "n"
},
{
"math_id": 44,
"text": "\\mu_0"
},
{
"math_id": 45,
"text": " n_\\mu "
},
{
"math_id": 46,
"text": " \\tau_0 "
},
{
"math_id": 47,
"text": "n_\\tau"
},
{
"math_id": 48,
"text": " \\mu "
},
{
"math_id": 49,
"text": " \\tau "
},
{
"math_id": 50,
"text": "\n\\mathbf{P}(\\tau,\\mu \\mid \\mathbf{X}) = \\operatorname{NormalGamma} \\left(\\mu_0, n_\\mu , \\frac{n_\\tau}{2}, \\frac{n_\\tau}{2 \\tau_0}\\right)\n"
},
{
"math_id": 51,
"text": "s"
},
{
"math_id": 52,
"text": "\n\\mathbf{P}(\\tau,\\mu \\mid \\mathbf{X}) = \\text{NormalGamma}\\left( \\frac{n_\\mu \\mu_0 + n \\mu}{n_\\mu +n}, n_\\mu +n ,\\frac{1}{2}(n_\\tau+n), \\frac{1}{2}\\left(\\frac{n_\\tau}{\\tau_0} + n s + \\frac{n_\\mu n (\\mu-\\mu_0)^2}{n_\\mu+n}\\right) \\right)\n"
},
{
"math_id": 53,
"text": "\\beta"
},
{
"math_id": 54,
"text": " 2 \\tau_0 /n_\\tau"
},
{
"math_id": 55,
"text": "\\alpha"
},
{
"math_id": 56,
"text": "1/(\\lambda \\tau)"
}
] | https://en.wikipedia.org/wiki?curid=10896234 |
10896748 | Myron L. Bender | American biochemist
Myron Lee Bender (1924–1988) was born in St. Louis, Missouri. He obtained his B.S. (1944) and his Ph.D. (1948) from Purdue University. The latter was under the direction of Henry B. Hass. After postdoctoral research under Paul D. Bartlett (Harvard University) and Frank H. Westheimer (University of Chicago), he spent one year as a faculty member at the University of Connecticut. Thereafter, he became a professor of chemistry at the Illinois Institute of Technology in 1951 and moved to Northwestern University in 1960. He worked primarily in the study of reaction mechanisms and the biochemistry of enzyme action. Myron L. Bender demonstrated the two-step mechanism of catalysis for serine proteases, nucleophilic catalysis in ester hydrolysis and intramolecular catalysis in water. He also showed that cyclodextrin can be used to investigate catalysis of organic reactions within the scope of host–guest chemistry. Finally, he and others reported on the synthesis of an organic compound as a model of an acylchymotrypsin intermediate.
During his career, Myron L. Bender was an active member of the Chicago Section of the American Chemical Society. He was elected a Fellow of Merton College, Oxford University, and to the National Academy of Sciences, the latter in 1968. He received an honorary degree from Purdue University in 1969. He was the recipient of the Midwest Award of the American Chemical Society in 1972.
Professor Bender retired from Northwestern in 1988. Both he and his wife, Muriel S. Bender, died that year.
Research.
Research papers.
Bender's initial work concerned mechanisms of chemical reactions, and although this continued through his career he became increasingly interested in enzyme mechanisms, especially that of α-chymotrypsin. Later he broadened his interest to encompass other enzymes, such as acetylcholinesterase and carboxypeptidase.
Bender pioneered the use of "p"-nitrophenyl acetate as a model substrate for studying proteolysis, as it is particularly convenient in spectroscopic experiments. He likewise used imidazole as a model catalyst for shedding light on enzyme action.
He also studied artificial enzymes, starting with modified subtilisin in which a serine residue was replaced by cysteine (replacing an ester group with a thiol). Polgar and Bender laid stress on the fact that the modified enzyme was catalytically active,
whereas Koshland and Neet, who made essentially the same observation the same year, drew the opposite conclusion, that despite replacing the group with one in principle more reactive, the modified enzyme was less effective as a catalyst than the unmodified enzyme. Philipp and Bender later did a detailed study of the catalytic differences between native subtilisin and thiolsubtilisin. Bender also studied other artificial enzymes, such as cycloamyloses, that were not simply modified natural enzymes.
Bender may have been the first to recognize that the specificity constant (formula_0, the ratio of catalytic constant to Michaelis constant) provides the best measure of enzyme specificity, and to use the term "specificity constant" for it, as later recommended by the IUBMB. Philipp and Bender proposed that this specificity constant is the same as the second-order rate constant for enzyme-substrate binding for the most active substrates.
Reviews.
Bender authored or co-authored several reviews, for example summarizing several years' work on α-chymotrypsin, and proteolytic enzymes in general.
Books.
Bender's books primarily concerned catalysis, especially catalysis by enzymes
and its underlying chemistry, and also cyclodextrin chemistry.
Bender Distinguished Summer Lecturers.
The series of Myron L. Bender & Muriel S. Bender Distinguished Summer Lectures in Organic Chemistry was established in 1989 and hosted by the Department of Chemistry at Northwestern University. The scientists who have given these lectures include
Julius Rebek (1990),
JoAnne Stubbe (1992),
Peter B. Dervan (1993),
Marye Anne Fox (1994),
Richard Lerner (1995),
Eric Jacobsen (1997),
Larry E. Overman (1998),
Ronald Breslow (1999),
Jean Fréchet (2000),
Dale Boger (2001),
François Diederich (2004),
Christopher T. Walsh (2008),
Stephen L. Buchwald (2009),
Paul Wender (2010), and
Kendall Houk (2011).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k_\\mathrm{cat}/K_\\mathrm{m}"
}
] | https://en.wikipedia.org/wiki?curid=10896748 |
10897286 | Hans B. Pacejka | Hans Bastiaan Pacejka (12 September 1934 – 17 September 2017) was an expert in vehicle system dynamics and particularly in tire dynamics, fields in which his works are now standard references. He was Professor emeritus at Delft University of Technology in Delft, Netherlands.
Magic Formula tire models.
Pacejka developed a series of tire design models during his career. They were named the "Magic Formula" because there is no particular physical basis for the structure of the equations chosen, but they fit a wide variety of tire constructions and operating conditions. Each tire is characterized by 10–20 coefficients for each important force that it can produce at the contact patch, typically lateral and longitudinal force, and self-aligning torque, as the best fit between experimental data and the model. These coefficients are then used to generate equations showing how much force is generated for a given vertical load on the tire, camber angle and slip angle.
The Pacejka tire models are widely used in professional vehicle dynamics simulations, and racing car games, as they are reasonably accurate, easy to program, and solve quickly. A problem with Pacejka's model is that when implemented into computer code, it doesn't work for low speeds (from around the pit-entry speed), because a velocity term in the denominator makes the formula diverge. An alternative to Pacejka tire models are brush tire models, which can be analytically derived, although empirical curve fitting is still required for good correlation, and they tend to be less accurate than the MF models.
Solving a model based on the Magic Formula at high frequency can also be a problem, depending on how the input of the Pacejka curve is computed. The slip velocity (the difference between the velocity of the car and the velocity of the tire at the contact point) changes very quickly, and the model becomes a stiff system (a system whose eigenvalues differ greatly in magnitude), which may require a special solver.
The general form of the Magic Formula, given by Pacejka, is:
formula_0
where "B", "C", "D" and "E" represent fitting constants and "y" is a force or moment resulting from a slip parameter "x". The formula may be translated away from the origin of the "x"–"y" axes. The Magic Model became the basis for many variants.
Professional activities.
Pacejka was a co-founder in 1972 and editor-in-chief of "Vehicle System Dynamics–International Journal of Vehicle Mechanics and Mobility" until 1989. At the time of the founding of the journal, Pacejka had been an associate professor at Delft University, specializing in vehicle dynamics. His 1966 doctoral thesis addressed the "wheel shimmy problem". He published approximately 90 academic papers and was advisor to 15 PhD and 170 M.Sc. graduate students.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y = D \\cdot \\sin \\left\\{ C \\cdot \\arctan \\left[ Bx - E \\cdot (Bx-\\arctan\\left(Bx)\\right)\\right]\\right\\} \\,"
}
] | https://en.wikipedia.org/wiki?curid=10897286 |
10897780 | Mersenne's laws | Laws describing the frequency of oscillation of a stretched string
Mersenne's laws are laws describing the frequency of oscillation of a stretched string or monochord, useful in musical tuning and musical instrument construction.
Overview.
The equation was first proposed by French mathematician and music theorist Marin Mersenne in his 1636 work "Harmonie universelle". Mersenne's laws govern the construction and operation of string instruments, such as pianos and harps, which must accommodate the total tension force required to keep the strings at the proper pitch. Lower strings are thicker, thus having a greater mass per length. They typically have lower tension. Guitars are a familiar exception to this: string tensions are similar, for playability, so lower string pitch is largely achieved with increased mass per length. Higher-pitched strings typically are thinner, have higher tension, and may be shorter. "This result does not differ substantially from Galileo's, yet it is rightly known as Mersenne's law," because Mersenne physically proved their truth through experiments (while Galileo considered their proof impossible). "Mersenne investigated and refined these relationships by experiment but did not himself originate them". Though his theories are correct, his measurements are not very exact, and his calculations were greatly improved by Joseph Sauveur (1653–1716) through the use of acoustic beats and metronomes.
Equations.
The natural frequency is:
formula_0 (equation 26)
formula_1 (equation 27)
formula_2 (equation 28)
Thus, for example, all other properties of the string being equal, to make the note one octave higher (2/1) one would need either to halve its length (1/2), to increase the tension by a factor of four (4), or to decrease its mass per length to one quarter of its value (1/4).
These laws are derived from Mersenne's equation 22:
formula_3
The formula for the fundamental frequency is:
formula_4
where "f" is the frequency, "L" is the length, "F" is the force and "μ" is the mass per length.
Similar laws were not developed for pipes and wind instruments at the same time since Mersenne's laws predate the conception of wind instrument pitch being dependent on longitudinal waves rather than "percussion".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " f_0 \\propto \\tfrac{1}{L}."
},
{
"math_id": 1,
"text": " f_0 \\propto \\sqrt{F}."
},
{
"math_id": 2,
"text": " f_0 \\propto \\frac{1}{\\sqrt{\\mu}}."
},
{
"math_id": 3,
"text": " f_0 = \\frac{\\nu}{\\lambda} = \\frac{1}{2L}\\sqrt{\\frac{F}{\\mu}}."
},
{
"math_id": 4,
"text": " f_0=\\frac{1}{2L}\\sqrt{\\frac{F}{\\mu}}, "
}
] | https://en.wikipedia.org/wiki?curid=10897780 |
1090018 | Newton's cradle | Device that demonstrates conservation of momentum and energy by a series of swinging spheres
Newton's cradle is a device, usually made of metal, that demonstrates the principles of conservation of momentum and conservation of energy in physics with swinging spheres. When one sphere at the end is lifted and released, it strikes the stationary spheres, compressing them and thereby transmitting a pressure wave through the stationary spheres, which creates a force that pushes the last sphere upward. The last sphere swings back and strikes the stationary spheres, repeating the effect in the opposite direction. The device is named after 17th-century English scientist Sir Isaac Newton and was designed by French scientist Edme Mariotte. It is also known as Newton's pendulum, Newton's balls, Newton's rocker or executive ball clicker (since the device makes a click each time the balls collide, which they do repeatedly in a steady rhythm).
Operation.
When one of the end balls ("the first") is pulled sideways, the attached string makes it follow an upward arc. When the ball is let go, it strikes the second ball and comes to nearly a dead stop. The ball on the opposite side acquires most of the velocity of the first ball and swings in an arc almost as high as the release height of the first ball. This shows that the last ball receives most of the energy and momentum of the first ball. The impact produces a sonic wave that propagates through the intermediate balls. Any efficiently elastic material such as steel does this, as long as the kinetic energy is temporarily stored as potential energy in the compression of the material rather than being lost as heat. This is similar to bouncing one coin off a line of touching coins by striking it with another coin, which happens even if the first struck coin is constrained by pressing on its center such that it cannot move.
There are slight movements in all the balls after the initial strike, but the last ball receives most of the initial energy from the impact of the first ball. When two (or three) balls are dropped, the two (or three) balls on the opposite side swing out. Some say that this behavior demonstrates the conservation of momentum and kinetic energy in elastic collisions. However, if the colliding balls behave as described above with the same mass possessing the same velocity before and after the collisions, then any function of mass and velocity is conserved in such an event. Thus, this first-level explanation is a true, but not a complete description of the motion.
Physics explanation.
Newton's cradle can be modeled fairly accurately with simple mathematical equations with the assumption that the balls always collide in pairs. If one ball strikes four stationary balls that are already touching, these simple equations cannot explain the resulting movements in all five balls, which are not due to friction losses. For example, in a real Newton's cradle the fourth ball has some movement and the first ball has a slight reverse movement. All the animations in this article show idealized action (simple solution) that only occurs if the balls are "not" touching initially and only collide in pairs.
Simple solution.
The conservation of momentum (mass × velocity) and kinetic energy (1/2 × mass × velocity2) can be used to find the resulting velocities for two colliding perfectly elastic objects. These two equations are used to determine the resulting velocities of the two objects. For the case of two balls constrained to a straight path by the strings in the cradle, the velocities are a single number instead of a 3D vector for 3D space, so the math requires only two equations to solve for two unknowns. When the two objects have the same mass, the solution is simple: the moving object stops relative to the stationary one and the stationary one picks up all the other's initial velocity. This assumes perfectly elastic objects, so there is no need to account for heat and sound energy losses.
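The standard solution of those two conservation equations for two bodies constrained to a straight path can be written down in closed form. A minimal Python sketch (the function name is illustrative):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of two perfectly elastic bodies colliding head-on,
    obtained from conservation of momentum and kinetic energy."""
    v1_after = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal masses: the moving ball stops and the stationary ball takes over its velocity.
# print(elastic_collision_1d(0.1, 1.0, 0.1, 0.0))   # -> (0.0, 1.0)
```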
Steel does not compress much, but its elasticity is very efficient, so it does not cause much waste heat. The simple effect from two same-mass efficiently elastic colliding objects constrained to a straight path is the basis of the effect seen in the cradle and gives an approximate solution to all its activities.
For a sequence of same-mass elastic objects constrained to a straight path, the effect continues to each successive object. For example, when two balls are dropped to strike three stationary balls in a cradle, there is an unnoticed but crucial small distance between the two dropped balls, and the action is as follows: the first moving ball that strikes the first stationary ball (the second ball striking the third ball) transfers all of its momentum to the third ball and stops. The third ball then transfers the momentum to the fourth ball and stops, and then the fourth to the fifth ball.
Right behind this sequence, the second moving ball is transferring its momentum to the first moving ball that just stopped, and the sequence repeats immediately and imperceptibly behind the first sequence, ejecting the fourth ball right behind the fifth ball with the same small separation that was between the two initial striking balls. If they are simply touching when they strike the third ball, precision requires the more complete solution below.
Other examples of this effect.
The effect of the last ball ejecting with a velocity nearly equal to the first ball can be seen in sliding a coin on a table into a line of identical coins, as long as the striking coin and its twin targets are in a straight line. The effect can similarly be seen in billiard balls. The effect can also be seen when a sharp and strong pressure wave strikes a dense homogeneous material immersed in a less-dense medium. If the identical atoms, molecules, or larger-scale sub-volumes of the dense homogeneous material are at least partially elastically connected to each other by electrostatic forces, they can act as a sequence of colliding identical elastic balls.
The surrounding atoms, molecules, or sub-volumes experiencing the pressure wave act to constrain each other similarly to how the string constrains the cradle's balls to a straight line. As a medical example, lithotripsy shock waves can be sent through the skin and tissue without harm to burst kidney stones. The side of the stones opposite to the incoming pressure wave bursts, not the side receiving the initial strike. In the Indian game carrom, a striker stops after hitting a stationary playing piece, transferring all of its momentum into the piece that was hit.
When the simple solution applies.
For the simple solution to precisely predict the action, no pair in the midst of colliding may touch the third ball, because the presence of the third ball effectively makes the struck ball appear more massive. Applying the two conservation equations to solve the final velocities of three or more balls in a single collision results in many possible solutions, so these two principles are not enough to determine resulting action.
Even when there is a small initial separation, a third ball may become involved in the collision if the initial separation is not large enough. When this occurs, the complete solution method described below must be used.
Small steel balls work well because they remain efficiently elastic with little heat loss under strong strikes and do not compress much (up to about 30 μm in a small Newton's cradle). The small, stiff compressions mean they occur rapidly, less than 200 microseconds, so steel balls are more likely to complete a collision before touching a nearby third ball. Softer elastic balls require a larger separation to maximize the effect from pair-wise collisions.
More complete solution.
A cradle that best follows the simple solution needs to have an initial separation between the balls that measures at least twice the amount that any one ball compresses, but most do not. This section describes the action when the initial separation is not enough and in subsequent collisions that involve more than two balls even when there is an initial separation. This solution simplifies to the simple solution when only two balls touch during a collision. It applies to all perfectly elastic identical balls that have no energy losses due to friction and can be approximated by materials such as steel, glass, plastic, and rubber.
For two balls colliding, only the two equations for conservation of momentum and energy are needed to solve the two unknown resulting velocities. For three or more simultaneously colliding elastic balls, the relative compressibilities of the colliding surfaces are the additional variables that determine the outcome. For example, five balls have four colliding points and scaling (dividing) three of them by the fourth gives the three extra variables needed to solve for all five post-collision velocities.
Newtonian, Lagrangian, Hamiltonian, and stationary action are the different ways of mathematically expressing classical mechanics. They describe the same physics but must be solved by different methods. All enforce the conservation of energy and momentum. Newton's law has been used in research papers. It is applied to each ball and the sum of forces is made equal to zero. So there are five equations, one for each ball—and five unknowns, one for each velocity. If the balls are identical, the absolute compressibility of the surfaces becomes irrelevant, because it can be divided out of both sides of all five equations, producing zero.
Determining the velocities for the case of one ball striking four initially touching balls is found by modeling the balls as weights with non-traditional springs on their colliding surfaces. Most materials, like steel, that are efficiently elastic approximately follow Hooke's force law for springs, formula_0, but because the area of contact for a sphere increases as the force increases, colliding elastic balls follow Hertz's adjustment to Hooke's law, formula_1. This and Newton's law for motion (formula_2) are applied to each ball, giving five simple but interdependent differential equations that can be solved numerically.
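A minimal Python sketch of this weights-and-nonlinear-springs model is shown below. The mass, contact stiffness, time span, and solver settings are illustrative choices rather than values from the text, and the model ignores gravity and the strings, so it is only a sketch of the numerical approach:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 5            # number of balls
m = 0.1          # mass of each ball in kg (illustrative)
k = 1.0e10       # Hertzian contact stiffness in N/m^1.5 (illustrative, steel-like)

def rhs(t, y):
    x, v = y[:N], y[N:]
    a = np.zeros(N)
    for i in range(N - 1):
        compression = x[i] - x[i + 1]      # positive when the pair of surfaces is compressed
        if compression > 0:
            f = k * compression ** 1.5     # Hertz's adjustment to Hooke's law
            a[i] -= f / m
            a[i + 1] += f / m
    return np.concatenate([v, a])

# Balls initially just touching (zero gaps); the first ball arrives at 1 m/s.
y0 = np.concatenate([np.zeros(N), np.array([1.0, 0.0, 0.0, 0.0, 0.0])])
sol = solve_ivp(rhs, (0.0, 1e-3), y0, max_step=1e-7)
print(sol.y[N:, -1])   # final velocities: most of the velocity ends up in the last ball
```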
When the fifth ball begins accelerating, it is receiving momentum and energy from the third and fourth balls through the spring action of their compressed surfaces. For identical elastic balls of any type with initially touching balls, the action is the same for the first strike, except the time to complete a collision increases in softer materials. Forty to fifty percent of the kinetic energy of the initial ball from a single-ball strike is stored in the ball surfaces as potential energy for most of the collision process. Of the initial velocity, 13% is imparted to the fourth ball (which can be seen as a 3.3-degree movement if the fifth ball moves out 25 degrees) and there is a slight reverse velocity in the first three balls, the first ball having the largest at −7% of the initial velocity. This separates the balls, but they come back together just before the fifth ball returns. This is due to the pendulum phenomenon of different small angle disturbances having approximately the same time to return to the center.
The Hertzian differential equations predict that if two balls strike three, the fifth and fourth balls will leave with velocities of 1.14 and 0.80 times the initial velocity. This is 2.03 times more kinetic energy in the fifth ball than the fourth ball, which means the fifth ball would swing twice as high in the vertical direction as the fourth ball. But in a real Newton's cradle, the fourth ball swings out as far as the fifth ball. To explain the difference between theory and experiment, the two striking balls must have at least ≈ 10 μm separation (given steel, 100 g, and 1 m/s). This shows that in the common case of steel balls, unnoticed separations can be important and must be included in the Hertzian differential equations, or the simple solution gives a more accurate result.
Effect of pressure waves.
The forces in the Hertzian solution above were assumed to propagate in the balls immediately, which is not the case. Sudden changes in the force between the atoms of material build up to form a pressure wave. Pressure waves (sound) in steel travel about 5 cm in 10 microseconds, which is about 10 times faster than the time between the first ball striking and the last ball being ejected. The pressure waves reflect back and forth through all five balls about ten times, although dispersing to less of a wavefront with more reflections. This is fast enough for the Hertzian solution to not require a substantial modification to adjust for the delay in force propagation through the balls. In less-rigid but still very elastic balls such as rubber, the propagation speed is slower, but the duration of collisions is longer, so the Hertzian solution still applies. The error introduced by the limited speed of the force propagation biases the Hertzian solution towards the simple solution because the collisions are not affected as much by the inertia of the balls that are further away.
Identically shaped balls help the pressure waves converge on the contact point of the last ball: at the initial strike point one pressure wave goes forward to the other balls while another goes backward to reflect off the opposite side of the first ball, and then it follows the first wave, being exactly one ball's diameter behind. The two waves meet up at the last contact point because the first wave reflects off the opposite side of the last ball and it meets up at the last contact point with the second wave. Then they reverberate back and forth like this about 10 times until the first ball stops connecting with the second ball. Then the reverberations reflect off the contact point between the second and third balls, but still converge at the last contact point, until the last ball is ejected—but it is less of a wavefront with each reflection.
Effect of different types of balls.
Using different types of material does not change the action as long as the material is efficiently elastic. The size of the spheres does not change the results unless the increased weight exceeds the elastic limit of the material. If the solid balls are too large, energy is being lost as heat, because the elastic limit increases with the radius raised to the power 1.5, but the energy which had to be absorbed and released increases as the cube of the radius. Making the contact surfaces flatter can overcome this to an extent by distributing the compression to a larger amount of material but it can introduce an alignment problem. Steel is better than most materials because it allows the simple solution to apply more often in collisions after the first strike, its elastic range for storing energy remains good despite the higher energy caused by its weight, and the higher weight decreases the effect of air resistance.
Uses.
The most common application is that of a desktop executive toy. Another use is as an educational physics demonstration, as an example of conservation of momentum and conservation of energy.
History.
The principle demonstrated by the device, the law of impacts between bodies, was first demonstrated by the French physicist Abbé Mariotte in the 17th century. His work on the topic was first presented to the French Academy of Sciences in 1671; it was published in 1673 as "Traité de la percussion ou choc des corps" ("Treatise on percussion or shock of bodies").
Newton acknowledged Mariotte's work, along with Wren, Wallis and Huygens as the pioneers of experiments on the collisions of pendulum balls, in his "Principia".
Christiaan Huygens used pendulums to study collisions. His work, "De Motu Corporum ex Percussione" (On the Motion of Bodies by Collision) published posthumously in 1703, contains a version of Newton's first law and discusses the collision of suspended bodies including two bodies of equal mass with the motion of the moving body being transferred to the one at rest.
There is much confusion over the origins of the modern Newton's cradle. Marius J. Morin has been credited as being the first to name and make this popular executive toy. However, in early 1967, an English actor, Simon Prebble, coined the name "Newton's cradle" (now used generically) for the wooden version manufactured by his company, Scientific Demonstrations Ltd. After some initial resistance from retailers, they were first sold by Harrods of London, thus creating the start of an enduring market for executive toys. Later a very successful chrome design for the Carnaby Street store Gear was created by the sculptor and future film director Richard Loncraine.
The largest cradle device in the world was designed by "MythBusters" and consisted of five one-ton concrete and steel rebar-filled buoys suspended from a steel truss. The buoys also had a steel plate inserted in between their two halves to act as a "contact point" for transferring the energy; this cradle device did not function well because concrete is not elastic so most of the energy was lost to a heat buildup in the concrete. A smaller-scale version constructed by them consists of five chrome steel ball bearings, each weighing , and is nearly as efficient as a desktop model.
The cradle device with the largest-diameter collision balls on public display was visible for more than a year in Milwaukee, Wisconsin, at the retail store American Science and Surplus (see photo). Each ball was an inflatable exercise ball in diameter (encased in steel rings), and was supported from the ceiling using extremely strong magnets. It was dismantled in early August 2010 due to maintenance concerns.
In popular culture.
Newton's cradle appears in some films, often as a trope on the desk of a lead villain such as Paul Newman's role in "The Hudsucker Proxy", Magneto in "X-Men", and the Kryptonians in "Superman II". It was used to represent the unyielding position of the NFL towards head injuries in "Concussion." It has also been used as a relaxing diversion on the desk of lead intelligent/anxious/sensitive characters such as Henry Winkler's role in "Night Shift", Dustin Hoffman's role in "Straw Dogs", and Gwyneth Paltrow's role in "Iron Man 2". It was featured more prominently as a series of clay pots in "Rosencrantz and Guildenstern Are Dead", and as a row of 1968 Eero Aarnio bubble chairs with scantily clad women in them in "Gamer". In "Storks", Hunter, the CEO of Cornerstore, has one not with balls, but with little birds. Newton's cradle is an item in Nintendo's "Animal Crossing" where it is referred to as "executive toy". In 2017, an episode of the "Omnibus" podcast, featuring "Jeopardy!" champion Ken Jennings and musician John Roderick, focused on the history of Newton's cradle. Newton's cradle is also featured on the desk of Deputy White House Communications Director Sam Seaborn in "The West Wing". In the "Futurama" episode "The Day the Earth Stood Stupid", professor Hubert Farnsworth is shown with his head in a Newton's cradle and saying he's a genius as Philip J. Fry walks by.
Progressive rock band Dream Theater uses the cradle as imagery in album art of their 2005 release "Octavarium". Rock band Jefferson Airplane used the cradle on the 1968 album "Crown of Creation" as a rhythm device to create polyrhythms on an instrumental track.
References.
<templatestyles src="Reflist/styles.css" />
Literature.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "F = k \\cdot x"
},
{
"math_id": 1,
"text": "F = k \\cdot x^{1.5}"
},
{
"math_id": 2,
"text": "F = m \\cdot a"
}
] | https://en.wikipedia.org/wiki?curid=1090018 |
10900720 | Electronic circuit simulation | Electronic circuit simulation uses mathematical models to replicate the behavior of an actual electronic device or circuit.
Simulation software allows for the modeling of circuit operation and is an invaluable analysis tool. Due to its highly accurate modeling capability, many colleges and universities use this type of software for the teaching of electronics technician and electronics engineering programs. Electronics simulation software engages its users by integrating them into the learning experience. These kinds of interactions actively engage learners to analyze, synthesize, organize, and evaluate content and result in learners constructing their own knowledge.
Simulating a circuit’s behavior before actually building it can greatly improve design efficiency by making faulty designs known as such, and providing insight into the behavior of electronic circuit designs. In particular, for integrated circuits, the tooling (photomasks) is expensive, breadboards are impractical, and probing the behavior of internal signals is extremely difficult. Therefore, almost all IC design relies heavily on simulation. The most well known analog simulator is SPICE. Probably the best known digital simulators are those based on Verilog and VHDL.
Some electronics simulators integrate a schematic editor, a simulation engine, and an on-screen waveform display (see Figure 1), allowing designers to rapidly modify a simulated circuit and see what effect the changes have on the output. They also typically contain extensive model and device libraries. These models typically include IC specific transistor models such as BSIM, generic components such as resistors, capacitors, inductors and transformers, user defined models (such as controlled current and voltage sources, or models in Verilog-A or VHDL-AMS). Printed circuit board (PCB) design requires specific models as well, such as transmission lines for the traces and IBIS models for driving and receiving electronics.
Types.
While there are strictly analog electronics circuit simulators, popular simulators often include both analog and event-driven digital simulation capabilities, and are known as mixed-mode or mixed-signal simulators if they can simulate both simultaneously. An entire mixed signal analysis can be driven from one integrated schematic. All the digital models in mixed-mode simulators provide accurate specification of propagation time and rise/fall time delays.
The event driven algorithm provided by mixed-mode simulators is general-purpose and supports non-digital types of data. For example, elements can use real or integer values to simulate DSP functions or sampled data filters. Because the event driven algorithm is faster than the standard SPICE matrix solution, simulation time is greatly reduced for circuits that use event driven models in place of analog models.
Mixed-mode simulation is handled on three levels: (a) with primitive digital elements that use timing models and the built-in 12 or 16 state digital logic simulator, (b) with subcircuit models that use the actual transistor topology of the integrated circuit, and finally, (c) with in-line Boolean logic expressions.
Exact representations are used mainly in the analysis of transmission line and signal integrity problems where a close inspection of an IC’s I/O characteristics is needed. Boolean logic expressions are delay-less functions that are used to provide efficient logic signal processing in an analog environment. These two modeling techniques use SPICE to solve a problem while the third method, digital primitives, uses mixed mode capability. Each of these methods has its merits and target applications. In fact, many simulations (particularly those which use A/D technology) call for the combination of all three approaches. No one approach alone is sufficient.
Another type of simulation, used mainly for power electronics, is based on piecewise linear algorithms. These algorithms use an analog (linear) simulation until a power electronic switch changes its state. At this time a new analog model is calculated to be used for the next simulation period. This methodology significantly enhances both simulation speed and stability.
Complexities.
Process variations occur when the design is fabricated and circuit simulators often do not take these variations into account. These variations can be small, but taken together, they can change the output of a chip significantly.
Temperature variation can also be modeled to simulate the circuit's performance through temperature ranges.
Simulation from admittance matrix.
A common method of simulating linear circuit systems is with admittance matrices, or Y matrices. The technique involves modeling the individual linear components as an N port admittance matrix, inserting the component Y matrix into the circuit's nodal admittance matrix, installing port terminations at nodes that contain ports, eliminating nodes without ports through Kron reduction, converting the final Y matrix to an S or Z matrix as needed, and extracting desired measurements from the Y, Z, and/or S matrix.
Simple Chebyshev filter example.
A fifth order, 50 ohm Chebyshev filter with 1dB of pass band ripple and a cutoff frequency of 1GHz, designed using the Chebyshev Cauer topology and subsequent impedance and frequency scaling, produces the elements shown in the table and Micro-cap schematic below.
Modeling the 2 port Y parameters.
The table above provides a list of ideal elements to model along with the node attachments to simulate. Next, each non-port element must be converted into a 2x2 Y parameter model for each frequency to be simulated. For this example, a frequency of 1GHz is selected.
Elements connected to node 0, the ground node, do not need their respective Y12 or Y21 calculated, and are shown as "n/a" in the table.
Inserting the 2 port Y parameters into the nodal admittance matrix.
It should be remembered that while ideal inductor and capacitor models consist of very simple 2x2 models where Y11 = Y22 = -Y12 = -Y21, most real world elements cannot be modeled so simply. With transmission lines and real world inductor and capacitor models, for example, Y11 != -Y12, and for some more complex passive asymmetric elements Y11 != Y22. For many active linear devices, such as operational amplifiers, Y12 != Y21. Therefore, the example in this section uses independent Y11, Y12, Y21, and Y22 to illustrate the simulation process that applies to more complex real world devices.
Each element Y parameter is inserted into the nodal admittance matrix by summing it into the entries for the nodes the element is attached to, following the rules below. Y11 is summed into the diagonal entry of the element's first node.
If the second node is not 0, that is, not a ground: Y22 is summed into the diagonal entry of the second node, and Y12 and Y21 are summed into the corresponding off-diagonal entries between the first and second nodes.
The table below shows the Chebyshev element 2x2 Y parameters summed in at the appropriate locations.
Nodal admittance matrix numerical entries.
To simulate the filter at 1GHz, or any frequency, the element Y parameters must be converted to numerical entries using Y parameter models appropriate for the element installed. For ideal inductors and capacitors, the well known Y11 = Y22 = -Y12 = -Y21 = formula_0 for inductors and Y11 = Y22 = -Y12 = -Y21 = formula_1 for capacitors are sufficient. The numerical conversions are shown in the table below.
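A minimal Python sketch of the stamping rules and of these two ideal admittance models (function names and the commented element value are illustrative):

```python
import numpy as np

def stamp_two_port(Y, n1, n2, y11, y12, y21, y22):
    """Sum a 2x2 element Y matrix into the nodal admittance matrix Y.

    Nodes are numbered from 1 as in the text; node 0 is ground, and entries
    that would land on the ground node are simply dropped.
    """
    if n1 > 0:
        Y[n1 - 1, n1 - 1] += y11
    if n2 > 0:
        Y[n2 - 1, n2 - 1] += y22
    if n1 > 0 and n2 > 0:
        Y[n1 - 1, n2 - 1] += y12
        Y[n2 - 1, n1 - 1] += y21

def ideal_inductor_y(f, L):
    y = 1.0 / (2j * np.pi * f * L)   # admittance of an ideal inductor
    return y, -y, -y, y              # y11, y12, y21, y22

def ideal_capacitor_y(f, C):
    y = 2j * np.pi * f * C           # admittance of an ideal capacitor
    return y, -y, -y, y

# Example stamp at 1 GHz (element value and node numbers are illustrative):
# Y = np.zeros((4, 4), dtype=complex)
# stamp_two_port(Y, 1, 2, *ideal_inductor_y(1e9, 12e-9))
```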
Removing internal nodes.
Since ports are only attached to node 1 and node 4, nodes 2 and 3 need to be removed through Kron reduction. The table below shows the reduced Y parameter matrix of the Chebyshev filter example simulation after nodes 2 and 3 are eliminated. The nodes of the reduced table are renumbered to 1 and 2.
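A minimal Python sketch of the Kron reduction step (the function name is illustrative):

```python
import numpy as np

def kron_reduce(Y, keep):
    """Eliminate every node whose (0-based) index is not listed in `keep`."""
    drop = [i for i in range(Y.shape[0]) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Ykd = Y[np.ix_(keep, drop)]
    Ydk = Y[np.ix_(drop, keep)]
    Ydd = Y[np.ix_(drop, drop)]
    return Ykk - Ykd @ np.linalg.inv(Ydd) @ Ydk

# For the example: keep nodes 1 and 4 (indices 0 and 3), eliminating nodes 2 and 3.
# Y_reduced = kron_reduce(Y, keep=[0, 3])
```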
Converting to an S parameter matrix.
Since the Chebyshev frequency response is observed from the S parameter matrix, namely |S12|, the next step is to convert the Y parameter matrix to an S parameter matrix, using well known Y matrix to S matrix conversions with the port impedance as the characteristic impedance (or characteristic admittance) for each node.
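A minimal Python sketch of the Y-to-S conversion for equal real port impedances (the function name is illustrative):

```python
import numpy as np

def y_to_s(Y, z0=50.0):
    """Convert an N-port Y matrix to S parameters with the same real Z0 at every port."""
    I = np.eye(Y.shape[0], dtype=complex)
    return (I - z0 * Y) @ np.linalg.inv(I + z0 * Y)

# |S12| in dB, e.g. for checking the -1 dB point at the cutoff frequency:
# s = y_to_s(Y_reduced)
# s12_db = 20.0 * np.log10(abs(s[0, 1]))
```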
Simulated S parameters also allow for useful post simulation processing for things such as group delay and phase delay.
S parameter magnitudes.
Since the Chebyshev frequency response is expected to be observable in |S12| as a 1dB equi-ripple response from 0 to 1GHz, the complex S parameter entries need to be converted to their respective magnitudes, using the standard formula_2.
Check the results.
It may be useful to do some quick validity checks at this point. Since the example Chebyshev filter design requirement is for -1dB attenuation at the cutoff frequency of 1GHz, |S12| at 1 GHz is expected to be -1dB. Furthermore, since all simulation elements are lossless, the well known relation |S11|^2 + |S12|^2 = 1 applies at all frequencies, including 1GHz.
Full frequency simulation.
The final validity test for the example is to simulate the Chebyshev filter frequency response through the full useful range, which will be taken to be 100 MHz to 5 GHz for this case. This range should permit viewing of the equi-ripple |S12| of the pass band between 0 and -1 dB, the somewhat steep stop band |S12| roll-off above 1GHz, and the equi-ripple |S11| at the expected peak values of 20log10(0.4535...) = -6.86825 dB.
Since all simulation outputs conform to the expected results, the Chebyshev filter example simulation is confirmed to be correct.
Simulating unterminated nodes.
Since S parameters require terminations on all nodes being simulated, simulating the S parameter value for unterminated nodes, such as the internal nodes of a network, is technically unsupported. However, placing a resistive termination on an unterminated node that is large enough not to introduce any significant error makes the node effectively terminated and is sufficient to simulate it accurately. For example, the two internal nodes that were eliminated above could alternatively have had a 1e+09 ohm port attached to them, so instead of using Kron reduction to eliminate the nodes, the nodes could be accurately simulated with excessively large resistive ports.
Simulating zero resistance sources.
If the input source to the network is an ideal voltage source with no resistance, the example above may be made to work by including a port resistance small enough not to introduce any error of significance. For example, a port with a resistance of 1e-09 ohms in a network that is terminated elsewhere by 50 ohms would model an ideal source with sufficient accuracy.
Simulating the transfer function.
Since the example above simulates S parameters, another conversion is necessary to obtain the transfer function from S parameters. The conversion is formula_3.
See also.
<templatestyles src="Div col/styles.css"/>
Concepts:
HDL:
Lists:
Software: | [
{
"math_id": 0,
"text": "j2\\pi fL"
},
{
"math_id": 1,
"text": "-j/(2\\pi fC)"
},
{
"math_id": 2,
"text": "|S_{ij}| = \\sqrt{S_{ij\\text{ real}}^2 + S_{ij\\text{ imag}}^2}"
},
{
"math_id": 3,
"text": "\\frac{V_i}{V_j} = \\frac{S_{ij}}{2}\\sqrt{\\frac{R_j}{R_i}}, \\text{ } i\\neq j"
}
] | https://en.wikipedia.org/wiki?curid=10900720 |
10902 | Force | Influence that can change motion of an object
<templatestyles src="Hlist/styles.css"/>
A force is an influence that can cause an object to change its velocity unless counterbalanced by other forces. The concept of force makes the everyday notion of pushing or pulling mathematically precise. Because the magnitude and direction of a force are both important, force is a vector quantity. The SI unit of force is the newton (N), and force is often represented by the symbol F.
Force plays an important role in classical mechanics. The concept of force is central to all three of Newton's laws of motion. Types of forces often encountered in classical mechanics include elastic, frictional, contact or "normal" forces, and gravitational. The rotational version of force is torque, which produces changes in the rotational speed of an object. In an extended body, each part often applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. In equilibrium these stresses cause no acceleration of the body as the forces balance one another. If these are not in equilibrium they can cause deformation of solid materials, or flow in fluids.
In modern physics, which includes relativity and quantum mechanics, the laws governing motion are revised to rely on fundamental interactions as the ultimate origin of force. However, the understanding of force provided by classical mechanics is useful for practical purposes.
Development of the concept.
Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part, this was due to an incomplete understanding of the sometimes non-obvious force of friction and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for over two hundred years.
By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction.
Pre-Newtonian concepts.
Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids.
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place when on the ground, and that they stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. An archer causes the arrow to move at the start of the flight, and it then sails through the air even though no discernible efficient cause acts upon it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation requires a continuous medium such as air to sustain the motion.
Though Aristotelian physics was criticized as early as the 6th century, its shortcomings would not be corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction. Galileo's idea that force is needed to change motion rather than to sustain it, further improved upon by Isaac Beeckman, René Descartes, and Pierre Gassendi, became a key principle of Newtonian physics.
In the early 17th century, before Newton's "Principia", the term "force" (Latin: "vis") was applied to many physical and non-physical phenomena, e.g., for an acceleration of a point. The product of a point mass and the square of its velocity was named "vis viva" (live force) by Leibniz. The modern concept of force corresponds to Newton's "vis motrix" (accelerating force).
Newtonian mechanics.
Sir Isaac Newton described the motion of all objects using the concepts of inertia and force. In 1687, Newton published his magnum opus, "Philosophiæ Naturalis Principia Mathematica". In this work Newton set out three laws of motion that have dominated the way forces are described in physics to this day. The precise ways in which Newton's laws are expressed have evolved in step with new mathematical approaches.
First law.
Newton's first law of motion states that the natural behavior of an object at rest is to continue being at rest, and the natural behavior of an object moving at constant speed in a straight line is to continue moving at that constant speed along that straight line. The latter follows from the former because of the principle that the laws of physics are the same for all inertial observers, i.e., all observers who do not feel themselves to be in motion. An observer moving in tandem with an object will see it as being at rest. So, its natural behavior will be to remain at rest with respect to that observer, which means that an observer who sees it moving at constant speed in a straight line will see it continuing to do so.
Second law.
According to the first law, motion at constant speed in a straight line does not need a cause. It is "change" in motion that requires a cause, and Newton's second law gives the quantitative relationship between force and change of motion.
Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.
A modern statement of Newton's second law is a vector equation:
formula_0
where formula_1 is the momentum of the system, and formula_2 is the net (vector sum) force.399 If a body is in equilibrium, there is zero "net" force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an "unbalanced" force acting on an object it will result in the object's momentum changing over time.
In common engineering applications the mass in a system remains constant, allowing a simple algebraic form for the second law. By the definition of momentum,
formula_3
where "m" is the mass and formula_4 is the velocity. If Newton's second law is applied to a system of constant mass, "m" may be moved outside the derivative operator. The equation then becomes
formula_5
By substituting the definition of acceleration, the algebraic version of Newton's second law is derived:
formula_6
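The constant-mass form lends itself to direct computation. The following Python sketch (the mass, force, and time step are arbitrary illustrative values, not taken from the article) computes the acceleration from the net force and advances the velocity and position with simple Euler steps:

```python
# Minimal sketch of F = m*a for a constant-mass body (arbitrary example values).
mass = 2.0          # kg
net_force = 10.0    # N, applied along a single axis
dt = 0.1            # s, integration time step

acceleration = net_force / mass   # a = F / m, from the algebraic second law

velocity, position = 0.0, 0.0
for _ in range(10):               # advance 1 s in ten Euler steps
    velocity += acceleration * dt
    position += velocity * dt

print(f"a = {acceleration} m/s^2, v(1 s) = {velocity:.2f} m/s, x(1 s) = {position:.2f} m")
```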
Third law.
Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if formula_7 is the force of body 1 on body 2 and formula_8 that of body 2 on body 1, then
formula_9
This law is sometimes referred to as the "action-reaction law", with formula_10 called the "action" and formula_11 the "reaction".
Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are "interactions" between different bodies, and thus that there is no such thing as a unidirectional force or a force that acts on only one body.
In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero:
formula_12
More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.
Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved in any closed system. In a system of two particles, if formula_13 is the momentum of object 1 and formula_14 the momentum of object 2, then
formula_15
Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.ch.12
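The bookkeeping behind momentum conservation can be checked numerically. In the sketch below, two point masses interact only through an equal-and-opposite internal force; all numerical values are invented for illustration, and the internal force is held constant purely for simplicity:

```python
# Two bodies exchanging equal and opposite forces; total momentum is conserved.
m1, m2 = 1.0, 3.0            # kg (illustrative values)
v1, v2 = 4.0, -1.0           # m/s, initial velocities along one axis
dt, steps = 0.01, 1000

for _ in range(steps):
    f12 = -5.0               # N, force of body 2 on body 1 (constant for simplicity)
    f21 = -f12               # Newton's third law: equal magnitude, opposite direction
    v1 += (f12 / m1) * dt
    v2 += (f21 / m2) * dt

total_momentum = m1 * v1 + m2 * v2
print(f"total momentum after interaction: {total_momentum:.6f} kg*m/s")  # unchanged: 1.0
```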
Defining "force".
Some textbooks use Newton's second law as a "definition" of force. However, for the equation formula_16 for a constant mass formula_17 to then have any predictive content, it must be combined with further information. Moreover, inferring that a force is present because a body is accelerating is only valid in an inertial frame of reference. The question of which aspects of Newton's laws to take as definitions and which to regard as holding physical content has been answered in various ways, which ultimately do not affect how the theory is used in practice. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll.
Combining forces.
Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous.
Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction. When two forces act on a point particle, the resulting force, the "resultant" (also called the "net force"), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram. The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action.ch.12
Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force.
As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions. This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right angles to the other two.ch.12
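Resolution into orthogonal components and the parallelogram rule can be illustrated with a short calculation. The magnitudes and angles below are arbitrary; the sketch resolves two coplanar forces into east and north components, sums the components, and recovers the magnitude and direction of the resultant:

```python
import math

# Two forces given as (magnitude in N, direction in degrees measured from east).
forces = [(10.0, 30.0), (6.0, 120.0)]   # illustrative values

# Resolve each force into orthogonal (east, north) components and add them.
fx = sum(mag * math.cos(math.radians(ang)) for mag, ang in forces)
fy = sum(mag * math.sin(math.radians(ang)) for mag, ang in forces)

resultant_magnitude = math.hypot(fx, fy)
resultant_direction = math.degrees(math.atan2(fy, fx))

print(f"components: ({fx:.2f}, {fy:.2f}) N")
print(f"resultant: {resultant_magnitude:.2f} N at {resultant_direction:.2f} degrees")
```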
Equilibrium.
When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium. Hence, equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. A body is in "static equilibrium" with respect to a frame of reference if it is at rest and not accelerating, whereas a body in "dynamic equilibrium" is moving at a constant speed in a straight line, i.e., moving but not accelerating. What one observer sees as static equilibrium, another can see as dynamic equilibrium and vice versa.
Static.
Static equilibrium was understood well before the invention of classical mechanics. Objects that are not accelerating have zero net force acting on them.
The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration.
Pushing against an object that rests on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force "exactly" balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object.
A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion.ch.12
Dynamic.
Dynamic equilibrium was first described by Galileo who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" cannot exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. When this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.
Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. When kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.ch.12
Examples of forces in classical mechanics.
Some forces are consequences of the fundamental ones. In such situations, idealized models can be used to gain physical insight; for example, a solid object is often treated as a rigid body.
Gravitational force or Gravity.
What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as formula_19 and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth. This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of formula_17 will experience a force:
formula_20
For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.ch.12
Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's laws of planetary motion.
Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body. Combining these ideas gives a formula that relates the mass (formula_21) and the radius (formula_22) of the Earth to the gravitational acceleration:
formula_23
where the vector direction is given by formula_24, the unit vector directed outward from the center of the Earth.
In this equation, a dimensional constant formula_25 is used to describe the relative strength of gravity. This constant has come to be known as the Newtonian constant of gravitation, though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of formula_25 using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing formula_25 could allow one to solve for the Earth's mass given the above equation. Newton realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's law of gravitation states that the force on a spherical object of mass formula_26 due to the gravitational pull of mass formula_27 is
formula_28
where formula_29 is the distance between the two objects' centers of mass and formula_24 is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.
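As a rough numerical illustration of the law, the following sketch evaluates the magnitude of the gravitational attraction between the Earth and the Moon, using commonly quoted approximate values for the two masses and their mean separation (the figures are rounded and only indicative):

```python
# Magnitude of the gravitational force between two spherical bodies, F = G*m1*m2/r^2.
G = 6.674e-11          # N*m^2/kg^2, Newtonian constant of gravitation
m_earth = 5.972e24     # kg (approximate)
m_moon = 7.348e22      # kg (approximate)
r = 3.844e8            # m, mean Earth-Moon distance (approximate)

force = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {force:.3e} N")   # on the order of 2e20 N
```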
This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.
Electromagnetic.
The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges. The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement.
Subsequent mathematicians and physicists found the construct of the "electric field" to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force. Thus the electric field anywhere in space is defined as
formula_30
where formula_31 is the magnitude of the hypothetical test charge. Similarly, the idea of the "magnetic field" was introduced to express how magnets can influence one another at a distance. The Lorentz force law gives the force upon a body with charge formula_31 due to electric and magnetic fields:
formula_32
where formula_2 is the electromagnetic force, formula_33 is the electric field at the body's location, formula_34 is the magnetic field, and formula_4 is the velocity of the particle. The magnetic contribution to the Lorentz force is the cross product of the velocity vector with the magnetic field.
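The Lorentz force law translates directly into code. The sketch below uses NumPy for the cross product; the field strengths and the particle velocity are made-up illustrative values:

```python
import numpy as np

# Lorentz force F = q(E + v x B) for a charged particle (illustrative numbers).
q = 1.602e-19                     # C, elementary charge
E = np.array([0.0, 0.0, 1.0e3])   # V/m, electric field
B = np.array([0.0, 0.5, 0.0])     # T, magnetic field
v = np.array([2.0e5, 0.0, 0.0])   # m/s, particle velocity

F = q * (E + np.cross(v, B))
print("Lorentz force (N):", F)
```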
The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs. These "Maxwell's equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.
Normal.
When objects are in contact, the force directly between them is called the normal force, the component of the total force in the system exerted normal to the interface between the objects. The normal force is closely related to Newton's third law. The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.ch.12
Friction.
Friction is a force that opposes relative motion of two bodies. At the macroscopic scale, the frictional force is directly related to the normal force at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction.
The static friction force (formula_35) will exactly oppose forces applied to an object parallel to a surface up to the limit specified by the coefficient of static friction (formula_36) multiplied by the normal force (formula_37). In other words, the magnitude of the static friction force satisfies the inequality:
formula_38
The kinetic friction force (formula_39) is typically independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals:
formula_40
where formula_41 is the coefficient of kinetic friction. The coefficient of kinetic friction is normally less than the coefficient of static friction.
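The two regimes combine into a simple decision rule: while the applied force stays within the static limit, friction cancels it exactly and the object does not move; beyond that limit the object slides and feels kinetic friction. The sketch below implements this rule; the coefficients, mass, and applied pushes are invented for illustration:

```python
def friction_force(applied, normal, mu_s, mu_k):
    """Return (friction force opposing the applied force, whether the object slides)."""
    static_limit = mu_s * normal
    if abs(applied) <= static_limit:
        return -applied, False                 # static friction exactly balances the push
    direction = 1.0 if applied > 0 else -1.0
    return -mu_k * normal * direction, True    # kinetic friction opposes the sliding

normal_force = 10.0 * 9.81                     # N, weight of a 10 kg block on a level surface
for push in (20.0, 60.0):                      # N, two illustrative applied forces
    f, slides = friction_force(push, normal_force, mu_s=0.5, mu_k=0.3)
    print(f"push {push:.0f} N -> friction {f:.1f} N, sliding: {slides}")
```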
Tension.
Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and do not stretch. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action–reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object. By connecting the same string multiple times to the same object through the use of a configuration that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. Such machines allow a mechanical advantage for a corresponding increase in the length of displaced string needed to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.ch.12
Spring.
A simple elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position. This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If formula_42 is the displacement, the force exerted by an ideal spring equals:
formula_43
where formula_44 is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.ch.12
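Hooke's law is equally direct to evaluate; the spring constant and displacements in the sketch below are arbitrary illustrative values:

```python
# Hooke's law: restoring force F = -k * dx for an ideal spring.
k = 250.0                                   # N/m, spring constant (illustrative)
for dx in (-0.04, 0.0, 0.02, 0.10):         # m, displacement from equilibrium
    force = -k * dx
    print(f"displacement {dx:+.2f} m -> restoring force {force:+.1f} N")
```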
Centripetal.
For an object in uniform circular motion, the net force acting on the object equals:
formula_45
where formula_17 is the mass of the object, formula_18 is the velocity of the object and formula_29 is the distance to the center of the circular path and formula_46 is the unit vector pointing in the radial direction outwards from the center. This means that the net force felt by the object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. More generally, the net force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.ch.12
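The required inward force follows immediately from the mass, speed, and radius. In the sketch below the values describe a hypothetical car rounding a curve and are chosen only for illustration:

```python
# Centripetal force magnitude |F| = m * v^2 / r for uniform circular motion.
m = 1200.0    # kg, e.g. a car (illustrative)
v = 20.0      # m/s, constant speed
r = 50.0      # m, radius of the curve

f_centripetal = m * v**2 / r
print(f"required inward force: {f_centripetal:.0f} N")   # 9600 N, directed toward the center
```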
Continuum mechanics.
Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. In real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where the lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows:
formula_47
where formula_48 is the volume of the object in the fluid and formula_49 is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.ch.12
A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction:
formula_50
where formula_51 is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and formula_4 is the velocity of the object.
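Because this drag grows with speed while the weight of a falling body does not, the body approaches a terminal velocity at which the two balance. The sketch below integrates the equation of motion with simple Euler steps and compares the late-time speed with the analytic value; the mass and drag constant are arbitrary illustrative values:

```python
# Falling body with Stokes drag: m*dv/dt = m*g - b*v (illustrative parameters).
m, g, b = 0.05, 9.81, 0.12      # kg, m/s^2, kg/s
dt, v = 0.001, 0.0

for _ in range(20000):          # integrate 20 s with simple Euler steps
    v += (g - (b / m) * v) * dt

print(f"simulated speed after 20 s: {v:.3f} m/s")
print(f"analytic terminal velocity m*g/b: {m * g / b:.3f} m/s")
```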
More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as
formula_52
where formula_53 is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations) including also tensile stresses and compressions.
Fictitious.
There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force. These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.ch.12 Because these forces are not genuine they are also referred to as "pseudo forces".
In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry.
Concepts derived from force.
Rotation and torque.
Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force formula_2 is defined relative to an arbitrary reference point as the cross product:
formula_54
where formula_55 is the position vector of the force application point relative to the reference point.
Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's first law of motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's second law of motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:
formula_56
where formula_57 is the moment of inertia of the body and formula_58 is its angular acceleration.
This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation.
Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque:
formula_59
where formula_60 is the angular momentum of the particle.
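The defining cross product is straightforward to evaluate numerically; the lever arm and force below are illustrative values, and NumPy supplies the cross product:

```python
import numpy as np

# Torque about the origin: tau = r x F.
r = np.array([0.5, 0.0, 0.0])     # m, position of the application point (illustrative)
F = np.array([0.0, 20.0, 0.0])    # N, applied force (illustrative)

tau = np.cross(r, F)
print("torque (N*m):", tau)       # [0, 0, 10]: 10 N*m about the z-axis
```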
Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques, and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.
Yank.
The yank is defined as the rate of change of force
formula_61
The term is used in biomechanical analysis, athletic assessment and robotic control. The second ("tug"), third ("snatch"), fourth ("shake"), and higher derivatives are rarely used.
Kinematic integrals.
Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:
formula_62
which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem).
Similarly, integrating with respect to position gives a definition for the work done by a force:
formula_63
which is equivalent to changes in kinetic energy (yielding the work-energy theorem).
Power "P" is the rate of change d"W"/d"t" of the work "W", as the trajectory is extended by a position change formula_64 in a time interval d"t":
formula_65
so
formula_66
with formula_67 the velocity.
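Both theorems can be checked by approximating the integrals numerically. In the sketch below a body of constant mass starts from rest under an arbitrarily chosen time-varying force; up to the discretization error of the simple Euler scheme, the accumulated impulse matches the change in momentum and the accumulated work matches the kinetic energy:

```python
# Numerically verify the impulse-momentum and work-energy theorems for 1-D motion.
m, dt = 2.0, 1e-4
t, x, v = 0.0, 0.0, 0.0
impulse, work = 0.0, 0.0

while t < 3.0:                      # integrate for 3 s
    F = 4.0 + 2.0 * t               # N, an arbitrary time-varying force
    a = F / m
    impulse += F * dt               # J = integral of F dt
    work += F * v * dt              # W = integral of F dx = integral of F*v dt
    v += a * dt
    x += v * dt
    t += dt

print(f"impulse {impulse:.3f} N*s  vs  momentum change {m * v:.3f} kg*m/s")
print(f"work    {work:.3f} J      vs  kinetic energy  {0.5 * m * v**2:.3f} J")
```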
Potential energy.
Instead of a force, often the mathematically related concept of a potential energy field is used. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field formula_68 is defined as that field whose gradient is equal and opposite to the force produced at every point:
formula_69
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.ch.12
Conservation.
A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space, and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.ch.12
Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector formula_55 emanating from spherically symmetric potentials. Examples of this follow:
For gravity:
formula_70
where formula_25 is the gravitational constant, and formula_71 is the mass of object "n".
For electrostatic forces:
formula_72
where formula_73 is electric permittivity of free space, and formula_74 is the electric charge of object "n".
For spring forces:
formula_75
where formula_44 is the spring constant.ch.12
For certain physical scenarios, it is impossible to model forces as being due to a simple gradient of potentials. This is often due to a macroscopic statistical average of microstates. For example, static friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. For any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces is the net result of the gradients of microscopic potentials.ch.12
The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second law of thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.ch.12
Units.
The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg·m·s−2. The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g·cm·s−2. A newton is thus equal to 100,000 dynes.
The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s−2. The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force. An alternative unit of force in a different foot–pound–second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared.
The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass. The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s−2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated, though it is still sometimes used for expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings and engine output torque.
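The relationships among these units reduce to fixed conversion factors, which the short sketch below evaluates from the standard defined values of the pound and of standard gravity:

```python
# Express one pound-force and one kilogram-force in newtons and dynes.
g0 = 9.80665                    # m/s^2, standard gravity (defined value)
lb = 0.45359237                 # kg, international avoirdupois pound (defined value)

newton_per_lbf = lb * g0        # ~4.448 N
newton_per_kgf = 1.0 * g0       # 9.80665 N
dyne_per_newton = 1.0e5         # 1 N = 100,000 dyn

print(f"1 lbf = {newton_per_lbf:.5f} N = {newton_per_lbf * dyne_per_newton:.0f} dyn")
print(f"1 kgf = {newton_per_kgf:.5f} N")
```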
See also "Ton-force".
Revisions of the force concept.
At the beginning of the 20th century, new physical ideas emerged to explain experimental results in astronomical and submicroscopic realms. As discussed below, relativity alters the definition of momentum and quantum mechanics reuses the concept of "force" in microscopic contexts where Newton's laws do not apply directly.
Special theory of relativity.
In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law,
formula_76
remains valid because it is a mathematical definition. But for momentum to be conserved at relativistic relative velocity, formula_18, momentum must be redefined as:
formula_77
where formula_78 is the rest mass and formula_79 the speed of light.
The expression relating force and acceleration for a particle with constant non-zero rest mass formula_17 moving in the formula_80 direction at velocity formula_18 is:
formula_81
where
formula_82
is called the Lorentz factor. The Lorentz factor increases steeply as the relative velocity approaches the speed of light. Consequently, greater and greater force must be applied to produce the same acceleration at extreme velocity. The relative velocity cannot reach formula_79.§15–8
If formula_18 is very small compared to formula_79, then formula_83 is very close to 1 and
formula_84
is a close approximation. Even for use in relativity, one can restore the form of
formula_85
through the use of four-vectors. This relation is correct in relativity when formula_86 is the four-force, formula_17 is the invariant mass, and formula_87 is the four-acceleration.
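The growth of the Lorentz factor, and with it the force needed for a given acceleration, can be tabulated directly. The sketch below compares relativistic and Newtonian momentum for a few speeds; the 1 kg rest mass is arbitrary:

```python
import math

# Lorentz factor and relativistic vs. Newtonian momentum for a 1 kg rest mass.
c = 299_792_458.0                     # m/s, speed of light
m0 = 1.0                              # kg, rest mass (illustrative)

for fraction in (0.1, 0.5, 0.9, 0.99):
    v = fraction * c
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    p_rel = gamma * m0 * v            # relativistic momentum
    p_newton = m0 * v                 # Newtonian momentum
    print(f"v = {fraction:4.2f} c: gamma = {gamma:6.3f}, p_rel/p_Newton = {p_rel / p_newton:.3f}")
```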
The "general" theory of relativity incorporates a more radical departure from the Newtonian way of thinking about force, specifically gravitational force. This reimagining of the nature of gravity is described more fully below.
Quantum mechanics.
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object "has", one considers what result might "appear" when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
In quantum mechanics, interactions are typically described in terms of energy rather than force. The Ehrenfest theorem provides a connection between quantum expectation values and the classical concept of force, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law, with a force defined as the negative derivative of the potential energy. However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance.
Quantum mechanics also introduces two new constraints that interact with forces at the submicroscopic scale and which are especially important for atoms. Despite the strong attraction of the nucleus, the uncertainty principle limits the minimum extent of an electron probability distribution and the Pauli exclusion principle prevents electrons from sharing the same probability distribution. This gives rise to an emergent pressure known as degeneracy pressure. The dynamic equilibrium between the degeneracy pressure and the attractive electromagnetic force gives atoms, molecules, liquids, and solids stability.
Quantum field theory.
In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".
While sophisticated mathematical descriptions are needed to predict, in full detail, the result of such interactions, there is a conceptually simple way to describe them through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex. The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.
Fundamental interactions.
All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.
The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons. This Standard Model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation.
Gravitational.
Newton's law of gravitation is an example of "action at a distance": one body, like the Sun, exerts an influence upon any other body, like the Earth, no matter how far apart they are. Moreover, this action at a distance is "instantaneous." According to Newton's theory, the one body shifting position changes the gravitational pulls felt by all other bodies, all at the same instant of time. Albert Einstein recognized that this was inconsistent with special relativity and its prediction that influences cannot travel faster than the speed of light. So, he sought a new theory of gravitation that would be relativistically consistent. Mercury's orbit did not match that predicted by Newton's law of gravitation. Some astrophysicists predicted the existence of an undiscovered planet (Vulcan) that could explain the discrepancies. When Einstein formulated his theory of general relativity (GR) he focused on Mercury's problematic orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's theory of gravity had been shown to be inexact.
Since then, general relativity has been acknowledged as the theory that best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved spacetime – defined as the shortest spacetime path between two spacetime events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of spacetime can be observed and the force is inferred from the object's curved path. Thus, the straight line path in spacetime is seen as a curved line in space, and it is called the "ballistic trajectory" of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its spacetime trajectory is almost a straight line, slightly curved (with the radius of curvature of the order of a few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".
Electromagnetic.
Maxwell's equations and the set of techniques built around them adequately describe a wide range of physics involving force in electricity and magnetism. This classical theory already includes relativity effects. Understanding quantized electromagnetic interactions between elementary particles requires quantum electrodynamics (or QED). In QED, photons are fundamental exchange particles, describing all interactions relating to electromagnetism including the electromagnetic force.
Strong nuclear.
There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force is the force responsible for the structural integrity of atomic nuclei, and gains its name from its ability to overpower the electromagnetic repulsion between protons.
The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD). The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The strong force only acts "directly" upon elementary particles. A residual is observed between hadrons (notably, the nucleons in atomic nuclei), known as the nuclear force. Here the strong force acts indirectly, transmitted as gluons that form part of the virtual pi and rho mesons, the classical transmitters of the nuclear force. The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.232
Weak nuclear.
Unique among the fundamental interactions, the weak nuclear force creates no bound states. The weak force is due to the exchange of the heavy W and Z bosons. Since the weak force is mediated by two types of bosons, it can be divided into two types of interaction or "vertices" — charged current, involving the electrically charged W+ and W− bosons, and neutral current, involving electrically neutral Z0 bosons. The most familiar effect of weak interaction is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. This is a type of charged-current interaction. The word "weak" derives from the fact that the field strength is some 1013 times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 1015 kelvins. Such temperatures occurred in the plasma collisions in the early moments of the Big Bang.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{F} = \\frac{\\mathrm{d}\\mathbf{p}}{\\mathrm{d}t},"
},
{
"math_id": 1,
"text": " \\mathbf{p}"
},
{
"math_id": 2,
"text": " \\mathbf{F}"
},
{
"math_id": 3,
"text": "\\mathbf{F} = \\frac{\\mathrm{d}\\mathbf{p}}{\\mathrm{d}t} = \\frac{\\mathrm{d}\\left(m\\mathbf{v}\\right)}{\\mathrm{d}t},"
},
{
"math_id": 4,
"text": " \\mathbf{v}"
},
{
"math_id": 5,
"text": "\\mathbf{F} = m\\frac{\\mathrm{d}\\mathbf{v}}{\\mathrm{d}t}."
},
{
"math_id": 6,
"text": "\\mathbf{F} =m\\mathbf{a}."
},
{
"math_id": 7,
"text": "\\mathbf{F}_{1,2}"
},
{
"math_id": 8,
"text": "\\mathbf{F}_{2,1}"
},
{
"math_id": 9,
"text": "\\mathbf{F}_{1,2}=-\\mathbf{F}_{2,1}."
},
{
"math_id": 10,
"text": " \\mathbf{F}_{1,2}"
},
{
"math_id": 11,
"text": " -\\mathbf{F}_{2,1}"
},
{
"math_id": 12,
"text": "\\mathbf{F}_{1,2}+\\mathbf{F}_{2,1}=0."
},
{
"math_id": 13,
"text": " \\mathbf{p}_1"
},
{
"math_id": 14,
"text": " \\mathbf{p}_{2}"
},
{
"math_id": 15,
"text": "\\frac{\\mathrm{d}\\mathbf{p}_1}{\\mathrm{d}t} + \\frac{\\mathrm{d}\\mathbf{p}_2}{\\mathrm{d}t}= \\mathbf{F}_{1,2} + \\mathbf{F}_{2,1} = 0 ."
},
{
"math_id": 16,
"text": "\\mathbf{F} = m\\mathbf{a}"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "v"
},
{
"math_id": 19,
"text": " \\mathbf{g}"
},
{
"math_id": 20,
"text": "\\mathbf{F} = m\\mathbf{g}."
},
{
"math_id": 21,
"text": " m_\\oplus"
},
{
"math_id": 22,
"text": " R_\\oplus"
},
{
"math_id": 23,
"text": "\\mathbf{g}=-\\frac{Gm_\\oplus}{{R_\\oplus}^2} \\hat\\mathbf{r},"
},
{
"math_id": 24,
"text": "\\hat\\mathbf{r}"
},
{
"math_id": 25,
"text": "G"
},
{
"math_id": 26,
"text": "m_1"
},
{
"math_id": 27,
"text": "m_2"
},
{
"math_id": 28,
"text": "\\mathbf{F}=-\\frac{Gm_{1}m_{2}}{r^2} \\hat\\mathbf{r},"
},
{
"math_id": 29,
"text": "r"
},
{
"math_id": 30,
"text": "\\mathbf{E} = {\\mathbf{F} \\over{q}},"
},
{
"math_id": 31,
"text": "q"
},
{
"math_id": 32,
"text": "\\mathbf{F} = q\\left(\\mathbf{E} + \\mathbf{v} \\times \\mathbf{B}\\right),"
},
{
"math_id": 33,
"text": " \\mathbf{E}"
},
{
"math_id": 34,
"text": "\\mathbf{B}"
},
{
"math_id": 35,
"text": "\\mathbf F_{\\mathrm{sf}}"
},
{
"math_id": 36,
"text": "\\mu_{\\mathrm{sf}}"
},
{
"math_id": 37,
"text": "\\mathbf F_\\text{N}"
},
{
"math_id": 38,
"text": "0 \\le\\mathbf F_{\\mathrm{sf}} \\le \\mu_{\\mathrm{sf}} \\mathbf F_\\mathrm{N}."
},
{
"math_id": 39,
"text": "F_{\\mathrm{kf}}"
},
{
"math_id": 40,
"text": "\\mathbf F_{\\mathrm{kf}} = \\mu_{\\mathrm{kf}} \\mathbf F_\\mathrm{N},"
},
{
"math_id": 41,
"text": "\\mu_{\\mathrm{kf}}"
},
{
"math_id": 42,
"text": "\\Delta x"
},
{
"math_id": 43,
"text": "\\mathbf{F}=-k \\Delta \\mathbf{x},"
},
{
"math_id": 44,
"text": "k"
},
{
"math_id": 45,
"text": "\\mathbf{F} = - \\frac{mv^2}{r}\\hat\\mathbf{r},"
},
{
"math_id": 46,
"text": " \\hat\\mathbf{r}"
},
{
"math_id": 47,
"text": "\\frac{\\mathbf{F}}{V} = - \\mathbf{\\nabla} P,"
},
{
"math_id": 48,
"text": "V"
},
{
"math_id": 49,
"text": "P"
},
{
"math_id": 50,
"text": "\\mathbf{F}_\\mathrm{d} = - b \\mathbf{v}, "
},
{
"math_id": 51,
"text": "b"
},
{
"math_id": 52,
"text": "\\sigma = \\frac{F}{A},"
},
{
"math_id": 53,
"text": "A"
},
{
"math_id": 54,
"text": "\\boldsymbol\\tau = \\mathbf{r} \\times \\mathbf{F},"
},
{
"math_id": 55,
"text": " \\mathbf{r}"
},
{
"math_id": 56,
"text": "\\boldsymbol\\tau = I\\boldsymbol\\alpha,"
},
{
"math_id": 57,
"text": "I"
},
{
"math_id": 58,
"text": " \\boldsymbol\\alpha"
},
{
"math_id": 59,
"text": "\\boldsymbol\\tau = \\frac{\\mathrm{d}\\mathbf{L}}{\\mathrm{dt}},"
},
{
"math_id": 60,
"text": " \\mathbf{L}"
},
{
"math_id": 61,
"text": "\\mathbf Y = \\frac{\\mathrm d\\mathbf F}{\\mathrm dt}"
},
{
"math_id": 62,
"text": "\\mathbf{J}=\\int_{t_1}^{t_2}{\\mathbf{F} \\, \\mathrm{d}t},"
},
{
"math_id": 63,
"text": "W= \\int_{\\mathbf{x}_1}^{\\mathbf{x}_2} {\\mathbf{F} \\cdot {\\mathrm{d}\\mathbf{x}}},"
},
{
"math_id": 64,
"text": " d\\mathbf{x}"
},
{
"math_id": 65,
"text": " \\mathrm{d}W = \\frac{\\mathrm{d}W}{\\mathrm{d}\\mathbf{x}} \\cdot \\mathrm{d}\\mathbf{x} = \\mathbf{F} \\cdot \\mathrm{d}\\mathbf{x},"
},
{
"math_id": 66,
"text": "P = \\frac{\\mathrm{d}W}{\\mathrm{d}t} = \\frac{\\mathrm{d}W}{\\mathrm{d}\\mathbf{x}} \\cdot \\frac{\\mathrm{d}\\mathbf{x}}{\\mathrm{d}t} = \\mathbf{F} \\cdot \\mathbf{v},\n"
},
{
"math_id": 67,
"text": "\\mathbf{v} = \\mathrm{d}\\mathbf{x}/\\mathrm{d}t"
},
{
"math_id": 68,
"text": "U(\\mathbf{r})"
},
{
"math_id": 69,
"text": "\\mathbf{F}=-\\mathbf{\\nabla} U."
},
{
"math_id": 70,
"text": "\\mathbf{F}_\\text{g} = - \\frac{G m_1 m_2}{r^2} \\hat\\mathbf{r},"
},
{
"math_id": 71,
"text": "m_n"
},
{
"math_id": 72,
"text": "\\mathbf{F}_\\text{e} = \\frac{q_1 q_2}{4 \\pi \\varepsilon_{0} r^2} \\hat\\mathbf{r},"
},
{
"math_id": 73,
"text": "\\varepsilon_{0}"
},
{
"math_id": 74,
"text": "q_n"
},
{
"math_id": 75,
"text": "\\mathbf{F}_\\text{s} = -kr\\hat\\mathbf{r},"
},
{
"math_id": 76,
"text": "\\mathbf{F} = \\frac{\\mathrm{d}\\mathbf{p}}{\\mathrm{d}t},"
},
{
"math_id": 77,
"text": " \\mathbf{p} = \\frac{m_0\\mathbf{v}}{\\sqrt{1 - v^2/c^2}} ,"
},
{
"math_id": 78,
"text": "m_0"
},
{
"math_id": 79,
"text": "c"
},
{
"math_id": 80,
"text": "x"
},
{
"math_id": 81,
"text": "\\mathbf{F} = \\left(\\gamma^3 m a_x, \\gamma m a_y, \\gamma m a_z\\right),"
},
{
"math_id": 82,
"text": " \\gamma = \\frac{1}{\\sqrt{1 - v^2/c^2}}."
},
{
"math_id": 83,
"text": "\\gamma"
},
{
"math_id": 84,
"text": "\\mathbf F = m\\mathbf a"
},
{
"math_id": 85,
"text": "F^\\mu = mA^\\mu "
},
{
"math_id": 86,
"text": "F^\\mu"
},
{
"math_id": 87,
"text": "A^\\mu"
}
] | https://en.wikipedia.org/wiki?curid=10902 |
1090253 | Biorthogonal system | In mathematics, a biorthogonal system is a pair of indexed families of vectors
formula_0
such that
formula_1
where formula_2 and formula_3 form a pair of topological vector spaces that are in duality, formula_4 is a bilinear mapping and formula_5 is the Kronecker delta.
An example is the pair of sets of respectively left and right eigenvectors of a matrix, indexed by eigenvalue, if the eigenvalues are distinct.
A biorthogonal system in which formula_6 and formula_7 is an orthonormal system.
Projection.
Related to a biorthogonal system is the projection
formula_8
where formula_9 its image is the linear span of formula_10, and its kernel is formula_11
Construction.
Given a possibly non-orthogonal set of vectors formula_12 and formula_13, the related projection is
formula_14
where formula_15 is the matrix with entries formula_16
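The construction can be carried out numerically with NumPy. In the sketch below the two vector families are arbitrary, chosen only so that the pairing matrix formula_15 is invertible; the code builds a dual family by applying the inverse pairing matrix, checks the biorthogonality relation, and verifies that the resulting operator is an idempotent projection fixing the span of the first family (this is the standard matrix form of an oblique projection, offered as an illustration of the construction rather than a literal transcription of the formula above):

```python
import numpy as np

# Two non-orthogonal families of vectors in R^3, stored as matrix columns
# (values are arbitrary, chosen so the pairing matrix is invertible).
U = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])          # columns u_1, u_2
V = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])          # columns v_1, v_2

M = V.T @ U                         # pairing matrix with entries <v_i, u_j>
U_dual = U @ np.linalg.inv(M)       # dual family: <v_i, u~_j> = delta_ij

print(np.round(V.T @ U_dual, 10))   # prints the identity matrix (biorthogonality)

P = U_dual @ V.T                    # the related projection onto span{u_i}
print(np.allclose(P @ P, P))        # True: P is idempotent
print(np.allclose(P @ U, U))        # True: P fixes the u_i
```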
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tilde v_i \\text{ in } E \\text{ and } \\tilde u_i \\text{ in } F"
},
{
"math_id": 1,
"text": "\\left\\langle\\tilde v_i , \\tilde u_j\\right\\rangle = \\delta_{i,j},"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "\\langle \\,\\cdot, \\cdot\\, \\rangle"
},
{
"math_id": 5,
"text": "\\delta_{i,j}"
},
{
"math_id": 6,
"text": "E = F"
},
{
"math_id": 7,
"text": "\\tilde v_i = \\tilde u_i"
},
{
"math_id": 8,
"text": "P := \\sum_{i \\in I} \\tilde u_i \\otimes \\tilde v_i,"
},
{
"math_id": 9,
"text": "(u \\otimes v) (x) := u \\langle v, x \\rangle;"
},
{
"math_id": 10,
"text": "\\left\\{\\tilde u_i: i \\in I\\right\\},"
},
{
"math_id": 11,
"text": "\\left\\{\\left\\langle \\tilde v_i, \\cdot \\right\\rangle = 0 : i \\in I\\right\\}."
},
{
"math_id": 12,
"text": "\\mathbf{u} = \\left(u_i\\right)"
},
{
"math_id": 13,
"text": "\\mathbf{v} = \\left(v_i\\right)"
},
{
"math_id": 14,
"text": "P = \\sum_{i,j} u_i \\left(\\langle\\mathbf{v}, \\mathbf{u}\\rangle^{-1}\\right)_{j,i} \\otimes v_j,"
},
{
"math_id": 15,
"text": " \\langle\\mathbf{v},\\mathbf{u}\\rangle "
},
{
"math_id": 16,
"text": "\\left(\\langle\\mathbf{v}, \\mathbf{u}\\rangle\\right)_{i,j} = \\left\\langle v_i, u_j\\right\\rangle."
},
{
"math_id": 17,
"text": "\\tilde u_i := (I - P) u_i,"
},
{
"math_id": 18,
"text": "\\tilde v_i := (I - P)^* v_i"
}
] | https://en.wikipedia.org/wiki?curid=1090253 |
10904266 | Coinduction | In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects.
Coinduction is the mathematical dual to structural induction. Coinductively defined data types are known as codata and are typically infinite data structures, such as streams.
As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification.
To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result.
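As a concrete, if informal, illustration, infinite streams can be produced corecursively in Python with generators, whose lazy evaluation guarantees that only the observed prefix of the codata is ever computed; the function names below are invented for the example:

```python
from itertools import islice

def nats(n=0):
    """Corecursively produce the infinite stream n, n+1, n+2, ..."""
    while True:
        yield n          # 'observe' the head ...
        n += 1           # ... and defer the (infinite) tail

def zip_with(f, xs, ys):
    """Combine two infinite streams element-wise, again lazily."""
    for x, y in zip(xs, ys):
        yield f(x, y)

# Only the first few elements are ever forced, even though the streams are infinite.
evens = zip_with(lambda a, b: a + b, nats(), nats())
print(list(islice(evens, 6)))    # [0, 2, 4, 6, 8, 10]
```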
In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc." Experimental implementations of co-LP are available from the University of Texas at Dallas and in Logtalk and SWI-Prolog.
Description.
A concise statement of both the "principle of induction" and the "principle of coinduction" is given below. While this article is not primarily concerned with "induction", it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required.
Preliminaries.
Let formula_0 be a set and formula_1 be a monotone function formula_2, that is:
formula_3
Unless otherwise stated, formula_1 will be assumed to be monotone.
"X" is "F-closed" if formula_4
"X" is "F-consistent" if formula_5
"X" is a fixed point if formula_6
These terms can be intuitively understood in the following way. Suppose that formula_7 is a set of assertions, and formula_8 is the operation that yields the consequences of formula_7. Then formula_7 is "F-closed" when you cannot conclude any more than you have already asserted, while formula_7 is "F-consistent" when all of your assertions are supported by other assertions (i.e. there are no "non-"F"-logical assumptions").
The Knaster–Tarski theorem tells us that the "least fixed-point" of formula_1 (denoted formula_9) is given by the intersection of all "F-closed" sets, while the "greatest fixed-point" (denoted formula_10) is given by the union of all "F-consistent" sets. We can now state the principles of induction and coinduction.
"Principle of induction": If formula_7 is "F-closed", then formula_11
"Principle of coinduction": If formula_7 is "F-consistent", then formula_12
Discussion.
The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way. Suppose you wish to prove a property of formula_9. By the "principle of induction", it suffices to exhibit an "F-closed" set formula_7 for which the property holds. Dually, suppose you wish to show that formula_13. Then it suffices to exhibit an "F-consistent" set that formula_14 is known to be a member of.
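These preliminaries can be made concrete on a finite lattice, where iterating a monotone function from the bottom element reaches formula_9 and iterating from the top element reaches formula_10. The following self-contained Haskell sketch does this for one made-up monotone formula_1 on the subsets of a small universe: the element 5 "supports itself", so the singleton containing it is "F-consistent" and the "principle of coinduction" places 5 in formula_10, even though 5 is not in formula_9. The particular function and all names are illustrative only.
import Data.List (nub, sort)
-- Sets over a small finite universe, represented as sorted lists of Ints.
type Set = [Int]
universe :: Set
universe = [0 .. 5]
normalize :: [Int] -> Set
normalize = sort . nub
-- A made-up monotone function: 0 is always included, successors of members
-- up to 4 are included, and 5 is included only if it is already a member.
f :: Set -> Set
f x = normalize ([0] ++ [n + 1 | n <- x, n + 1 <= 4] ++ [5 | 5 `elem` x])
-- On a finite lattice, iterating a monotone function from the bottom element
-- reaches the least fixed point, and from the top element the greatest one.
iterateToFix :: Set -> Set
iterateToFix x = let x' = f x in if x' == x then x else iterateToFix x'
main :: IO ()
main = do
  print (iterateToFix [])       -- least fixed point: [0,1,2,3,4]
  print (iterateToFix universe) -- greatest fixed point: [0,1,2,3,4,5]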
Examples.
Defining a set of datatypes.
Consider the following grammar of datatypes:
formula_15
That is, the set of types includes the "bottom type" formula_16, the "top type" formula_17, and (non-homogenous) lists. These types can be identified with strings over the alphabet formula_18. Let formula_19 denote all (possibly infinite) strings over formula_20. Consider the function formula_21:
formula_22
In this context, formula_23 means "the concatenation of string formula_14, the symbol formula_24, and string formula_25." We should now define our set of datatypes as a fixpoint of formula_1, but it matters whether we take the "least" or "greatest" fixpoint.
Suppose we take formula_9 as our set of datatypes. Using the "principle of induction", we can prove the following claim:
All datatypes in formula_9 are "finite"
To arrive at this conclusion, consider the set of all finite strings over formula_20. Clearly formula_1 cannot produce an infinite string, so it turns out this set is "F-closed" and the conclusion follows.
Now suppose that we take formula_10 as our set of datatypes. We would like to use the "principle of coinduction" to prove the following claim:
The type formula_26
Here formula_27 denotes the infinite list consisting of all formula_16. To use the "principle of coinduction", consider the set:
formula_28
This set turns out to be "F-consistent", and therefore formula_29. This depends on the suspicious statement that
formula_30
The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions from formula_31. Intuitively, the argument is similar to the argument that formula_32 (see Repeating decimal).
Coinductive datatypes in programming languages.
Consider the following definition of a stream:
data Stream a = S a (Stream a)
-- Stream "destructors"
head (S a astream) = a
tail (S a astream) = astream
This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or place an element in front of it to get another stream.
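Because Haskell is lazily evaluated, such corecursive definitions are productive and can be observed element by element. The following self-contained sketch repeats the declaration (hiding the Prelude's own head and tail) and adds the illustrative helpers ones, nats and takeS, which are not part of the definition above.
import Prelude hiding (head, tail)
data Stream a = S a (Stream a)
head :: Stream a -> a
head (S a _) = a
tail :: Stream a -> Stream a
tail (S _ as) = as
-- Corecursive definitions: each is productive because the constructor is lazy.
ones :: Stream Integer
ones = S 1 ones
nats :: Stream Integer
nats = go 0 where go n = S n (go (n + 1))
-- Observe a finite prefix of an infinite stream.
takeS :: Int -> Stream a -> [a]
takeS 0 _        = []
takeS n (S a as) = a : takeS (n - 1) as
main :: IO ()
main = do
  print (takeS 5 ones)      -- [1,1,1,1,1]
  print (takeS 5 nats)      -- [0,1,2,3,4]
  print (head (tail nats))  -- 1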
Relationship with "F"-coalgebras.
Source:
Consider the endofunctor formula_1 in the category of sets:
formula_33
formula_34
The "final F-coalgebra" formula_10 has the following morphism associated with it:
formula_35
This induces another coalgebra formula_36 with associated morphism formula_37. Because formula_10 is "final", there is a unique morphism
formula_38
such that
formula_39
The composition formula_40 induces another "F"-coalgebra homomorphism formula_41. Since formula_10 is final, this homomorphism is unique and therefore formula_42. Altogether we have:
formula_43
formula_44
This witnesses the isomorphism formula_45, which in categorical terms indicates that formula_10 is a fixpoint of formula_1 and justifies the notation.
Stream as a final coalgebra.
We will show that the Stream type defined above is the final coalgebra of the functor formula_33. Consider the following implementations:
out astream = (head astream, tail astream)
out' (a, astream) = S a astream
These are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details.
Relationship with mathematical induction.
We will demonstrate how the "principle of induction" subsumes mathematical induction.
Let formula_46 be some property of natural numbers. We will take the following definition of mathematical induction:
formula_47
Now consider the function formula_48:
formula_49
It should not be difficult to see that formula_50. Therefore, by the "principle of induction", if we wish to prove some property formula_46 of formula_51, it suffices to show that formula_46 is "F-closed". In detail, we require:
formula_52
That is,
formula_53
This is precisely "mathematical induction" as stated.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "2^U \\rightarrow 2^U"
},
{
"math_id": 3,
"text": " X \\subseteq Y \\Rightarrow F(X) \\subseteq F(Y) "
},
{
"math_id": 4,
"text": "F(X) \\subseteq X "
},
{
"math_id": 5,
"text": "X \\subseteq F(X) "
},
{
"math_id": 6,
"text": "X = F(X) "
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "F(X)"
},
{
"math_id": 9,
"text": "\\mu F"
},
{
"math_id": 10,
"text": "\\nu F"
},
{
"math_id": 11,
"text": "\\mu F \\subseteq X"
},
{
"math_id": 12,
"text": "X \\subseteq \\nu F"
},
{
"math_id": 13,
"text": "x \\in \\nu F"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": " T = \\bot \\;|\\;\\top \\;|\\; T \\times T "
},
{
"math_id": 16,
"text": "\\bot"
},
{
"math_id": 17,
"text": "\\top"
},
{
"math_id": 18,
"text": "\\Sigma = \\{\\bot, \\top, \\times\\}"
},
{
"math_id": 19,
"text": "\\Sigma^{\\leq \\omega}"
},
{
"math_id": 20,
"text": "\\Sigma"
},
{
"math_id": 21,
"text": "F: 2^{\\Sigma^{\\leq \\omega}} \\rightarrow 2^{\\Sigma^{\\leq \\omega}}"
},
{
"math_id": 22,
"text": " F(X) = \\{\\bot, \\top\\} \\cup \\{ x \\times y : x,y \\in X \\} "
},
{
"math_id": 23,
"text": "x \\times y"
},
{
"math_id": 24,
"text": "\\times"
},
{
"math_id": 25,
"text": "y"
},
{
"math_id": 26,
"text": "\\bot \\times \\bot \\times \\cdots \\in \\nu F"
},
{
"math_id": 27,
"text": "\\bot \\times \\bot \\times \\cdots "
},
{
"math_id": 28,
"text": " \\{\\bot \\times \\bot \\times \\cdots \\} "
},
{
"math_id": 29,
"text": " \\bot \\times \\bot \\times \\cdots \\in \\nu F "
},
{
"math_id": 30,
"text": " \\bot \\times \\bot \\times \\cdots = (\\bot \\times \\bot \\times \\cdots) \\times (\\bot \\times \\bot \\times \\cdots) "
},
{
"math_id": 31,
"text": "\\mathbb{N} \\rightarrow \\Sigma"
},
{
"math_id": 32,
"text": "0.\\bar{0}1 = 0"
},
{
"math_id": 33,
"text": "F(x) = A \\times x"
},
{
"math_id": 34,
"text": "F(f) = \\langle \\mathrm{id}_A, f \\rangle "
},
{
"math_id": 35,
"text": " \\mathrm{out}: \\nu F \\rightarrow F(\\nu F) = A \\times \\nu F "
},
{
"math_id": 36,
"text": "F(\\nu F)"
},
{
"math_id": 37,
"text": "F(\\mathrm{out})"
},
{
"math_id": 38,
"text": " \\overline{F(\\mathrm{out})}: F(\\nu F) \\rightarrow \\nu F "
},
{
"math_id": 39,
"text": " \\mathrm{out} \\circ \\overline{F(\\mathrm{out})} = F\\left(\\overline{F(\\mathrm{out})}\\right) \\circ F(\\mathrm{out}) \n= F\\left(\\overline{F(\\mathrm{out})} \\circ \\mathrm{out}\\right)"
},
{
"math_id": 40,
"text": "\\overline{F(\\mathrm{out})} \\circ \\mathrm{out}"
},
{
"math_id": 41,
"text": "\\nu F \\rightarrow \\nu F"
},
{
"math_id": 42,
"text": "\\mathrm{id}_{\\nu F}"
},
{
"math_id": 43,
"text": " \\overline{F(\\mathrm{out})} \\circ \\mathrm{out} = \\mathrm{id}_{\\nu F} "
},
{
"math_id": 44,
"text": " \\mathrm{out} \\circ \\overline{F(\\mathrm{out})} = F\\left(\\overline{F(\\mathrm{out})}\\right) \\circ \\mathrm{out}) = \\mathrm{id}_{F(\\nu F)} "
},
{
"math_id": 45,
"text": "\\nu F \\simeq F(\\nu F)"
},
{
"math_id": 46,
"text": "P"
},
{
"math_id": 47,
"text": "0 \\in P \\and (n \\in P \\Rightarrow n+1 \\in P) \\Rightarrow P=\\mathbb{N}"
},
{
"math_id": 48,
"text": "F: 2^{\\mathbb{N}} \\rightarrow 2^{\\mathbb{N}}"
},
{
"math_id": 49,
"text": "F(X) = \\{0\\} \\cup \\{x + 1 : x \\in X \\}"
},
{
"math_id": 50,
"text": "\\mu F = \\mathbb{N}"
},
{
"math_id": 51,
"text": "\\mathbb{N}"
},
{
"math_id": 52,
"text": "F(P) \\subseteq P"
},
{
"math_id": 53,
"text": "\\{0\\} \\cup \\{x + 1 : x \\in P \\} \\subseteq P "
}
] | https://en.wikipedia.org/wiki?curid=10904266 |
10906098 | Exothermic welding | Using pyrotechnic metal to join two metal pieces together
Exothermic welding, also known as exothermic bonding, thermite welding (TW), and thermit welding, is a welding process that employs molten metal to permanently join the conductors. The process employs an exothermic reaction of a thermite composition to heat the metal, and requires no external source of heat or current. The chemical reaction that produces the heat is an aluminothermic reaction between aluminium powder and a metal oxide.
Overview.
In exothermic welding, aluminium dust reduces the oxide of another metal, most commonly iron oxide, because aluminium is highly reactive. Iron(III) oxide is commonly used:
formula_0
The products are aluminium oxide, free elemental iron, and a large amount of heat. The reactants are commonly powdered and mixed with a binder to keep the material solid and prevent separation.
Commonly the reacting composition is five parts iron oxide red (rust) powder and three parts aluminium powder by weight, ignited at high temperatures. A strongly exothermic (heat-generating) reaction occurs that via reduction and oxidation produces a white hot mass of molten iron and a slag of refractory aluminium oxide. The molten iron is the actual welding material; the aluminium oxide is much less dense than the liquid iron and so floats to the top of the reaction, so the set-up for welding must take into account that the actual molten metal is at the bottom of the crucible and covered by floating slag.
Other metal oxides can be used, such as chromium oxide, to generate the given metal in its elemental form. Copper thermite, using copper oxide, is used for creating electric joints:
formula_1
Thermite welding is widely used to weld railway rails. One of the first railroads to evaluate the use of thermite welding was the Delaware and Hudson Railroad in the United States in 1935. The weld quality of chemically pure thermite is low due to the low heat penetration into the joining metals and the very low carbon and alloy content in the nearly pure molten iron. To obtain sound railroad welds, the ends of the rails being thermite welded are preheated with a torch to an orange heat, to ensure the molten steel is not chilled during the pour.
Because the thermite reaction yields relatively pure iron, not the much stronger steel, some small pellets or rods of high-carbon alloying metal are included in the thermite mix; these alloying materials melt from the heat of the thermite reaction and mix into the weld metal. The alloying beads composition will vary, according to the rail alloy being welded.
The reaction reaches very high temperatures, depending on the metal oxide used. The reactants are usually supplied in the form of powders, with the reaction triggered using a spark from a flint lighter. The activation energy for this reaction is very high however, and initiation requires either the use of a "booster" material such as powdered magnesium metal or a very hot flame source. The aluminium oxide slag that it produces is discarded.
When welding copper conductors, the process employs a semi-permanent graphite crucible mould, in which the molten copper, produced by the reaction, flows through the mould and over and around the conductors to be welded, forming an electrically conductive weld between them. When the copper cools, the mould is either broken off or left in place. Alternatively, hand-held graphite crucibles can be used. The advantages of these crucibles include portability, lower cost (because they can be reused), and flexibility, especially in field applications.
Properties.
An exothermic weld has higher mechanical strength than other forms of weld, and excellent corrosion resistance. It is also highly stable when subject to repeated short-circuit pulses, and does not suffer from increased electrical resistance over the lifetime of the installation. However, the process is costly relative to other welding processes, requires a supply of replaceable moulds, suffers from a lack of repeatability, and can be impeded by wet conditions or bad weather (when performed outdoors).
Applications.
Exothermic welding is usually used for welding copper conductors but is suitable for welding a wide range of metals, including stainless steel, cast iron, common steel, brass, bronze, and Monel. It is especially useful for joining dissimilar metals. The process is marketed under a variety of names such as AIWeld, American Rail Weld, AmiableWeld, Ardo Weld, ERICO Cadweld, FurseWeld, Harger Ultrashot, Quikweld, StaticWeld, Techweld, Tectoweld, TerraWeld, Thermoweld and Ultraweld.
Because of the good electrical conductivity and high stability in the face of short-circuit pulses, exothermic welds are one of the options specified by §250.7 of the United States National Electrical Code for grounding conductors and bonding jumpers. It is the preferred method of bonding, and indeed it is the only acceptable means of bonding copper to galvanized cable. The NEC does not require such exothermically welded connections to be listed or labelled, but some engineering specifications require that completed exothermic welds be examined using X-ray equipment.
Rail welding.
History.
Modern thermite rail welding was first developed by Hans Goldschmidt in the mid-1890s as another application of the thermite reaction, which he had initially been exploring as a means of producing high-purity chromium and manganese. The first rail line was welded using the process in Essen, Germany in 1899, and thermite-welded rails gained popularity as they had the advantage of greater reliability with the additional wear placed on rails by new electric and high speed rail systems. Some of the earliest adopters of the process were the cities of Dresden, Leeds, and Singapore. In 1904 Goldschmidt established his eponymous Goldschmidt Thermit Company (known by that name today) in New York City to bring the practice to railways in North America.
In 1904, George E. Pellissier, an engineering student at Worcester Polytechnic Institute who had been following Goldschmidt's work, reached out to the new company as well as the Holyoke Street Railway in Massachusetts. Pellissier oversaw the first installation of track in the United States using this process on August 8, 1904, and went on to improve upon it further for both the railway and Goldschmidt's company as an engineer and superintendent, including early developments in continuous welded rail processes that allowed the entirety of each rail to be joined rather than the foot and web alone. Although not all rail welds are completed using the thermite process, it still remains a standard operating procedure throughout the world.
Process.
Typically, the ends of the rails are cleaned, aligned flat and true, and spaced apart . This gap between rail ends for welding is to ensure consistent results in the pouring of the molten steel into the weld mold. In the event of a welding failure, the rail ends can be cropped to a gap, removing the melted and damaged rail ends, and a new weld attempted with a special mould and larger thermite charge. A two or three piece hardened sand mould is clamped around the rail ends, and a torch of suitable heat capacity is used to preheat the ends of the rail and the interior of the mould.
The proper amount of thermite with alloying metal is placed in a refractory crucible, and when the rails have reached a sufficient temperature, the thermite is ignited and allowed to react to completion (allowing time for any alloying metal to fully melt and mix, yielding the desired molten steel or alloy). The reaction crucible is then tapped at the bottom. Modern crucibles have a self-tapping thimble in the pouring nozzle. The molten steel flows into the mould, fusing with the rail ends and forming the weld.
The slag, being lighter than the steel, flows last from the crucible and overflows the mould into a steel catch basin, to be disposed of after cooling. The entire setup is allowed to cool. The mould is removed and the weld is cleaned by hot chiselling and grinding to produce a smooth joint. Typical time from start of the work until a train can run over the rail is approximately 45 minutes to more than an hour, depending on the rail size and ambient temperature. In any case, the rail steel must be cooled to less than before it can sustain the weight of rail locomotives.
When a thermite process is used for track circuits – the bonding of wires to the rails with a copper alloy, a graphite mould is used. The graphite mould is reusable many times, because the copper alloy is not as hot as the steel alloys used in rail welding. In signal bonding, the volume of molten copper is quite small, approximately and the mould is lightly clamped to the side of the rail, also holding a signal wire in place. In rail welding, the weld charge can weigh up to .
The hardened sand mould is heavy and bulky, must be securely clamped in a very specific position and is then subjected to intense heat for several minutes before firing the charge. When rail is welded into long strings, the longitudinal expansion and contraction of the steel must be taken into account. British practice sometimes uses a sliding joint of some sort at the end of long runs of continuously welded rail, to allow some movement. However, by using heavy concrete sleepers and an extra amount of ballast at the sleeper ends, the track, which is prestressed according to the ambient temperature at the time of its installation, develops compressive stress in hot ambient temperatures or tensile stress in cold ambient temperatures, and its strong attachment to the heavy sleepers prevents sun kink (buckling) or other deformation.
Current practice is to use welded rails throughout on high speed lines, and expansion joints are kept to a minimum, often only to protect junctions and crossings from excessive stress. American practice appears to be very similar, a straightforward physical restraint of the rail. The rail is prestressed, or considered "stress neutral" at some particular ambient temperature. This "neutral" temperature will vary according to local climate conditions, taking into account lowest winter and warmest summer temperatures.
The rail is physically secured to the ties or sleepers with rail anchors, or anti-creepers. If the track ballast is good and clean and the ties are in good condition, and the track geometry is good, then the welded rail will withstand ambient temperature swings normal to the region.
Remote welding.
"Remote exothermic welding" is a type of exothermic welding process for joining two electrical conductors from a distance. The process reduces the inherent risks associated with exothermic welding and is used in installations that require a welding operator to permanently join conductors a safe distance from the superheated copper alloy.
The process incorporates either an igniter for use with standard graphite molds or a consumable sealed drop-in weld metal cartridge, semi-permanent graphite crucible mold, and an ignition source that tethers to the cartridge with a cable that provides the safe remote ignition.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Fe_2O_3 + 2 \\ Al \\longrightarrow 2 \\ Fe + Al_2O_3}"
},
{
"math_id": 1,
"text": "\\mathrm{3 \\ Cu_2O + 2 Al \\longrightarrow 6 \\ Cu + Al_2O_3}"
}
] | https://en.wikipedia.org/wiki?curid=10906098 |
10906395 | Neutron supermirror | A neutron supermirror is a highly polished, layered material used to reflect neutron beams. Supermirrors are a special case of multi-layer neutron reflectors with varying layer thicknesses.
The first neutron supermirror concept was proposed by Ferenc Mezei, inspired by earlier work with X-rays.
Supermirrors are produced by depositing alternating layers of strongly contrasting substances, such as nickel and titanium, on a smooth substrate. A single layer of high refractive index material (e.g. nickel) exhibits total external reflection at small grazing angles up to a critical angle formula_0. For nickel with natural isotopic abundances, formula_0 in degrees is approximately formula_1 where formula_2 is the neutron wavelength in Angstrom units.
A mirror with a larger effective critical angle can be made by exploiting the diffraction (with non-zero losses) that occurs from stacked multilayers. The critical angle of total reflection, in degrees, becomes approximately formula_3, where formula_4 is the "m-value" relative to natural nickel. formula_4 values in the range of 1–3 are common; for specific high-divergence applications (e.g. focusing optics near the source, choppers, or experimental areas), supermirrors with formula_4 up to 6 are readily available.
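The approximation can be evaluated directly; the short Haskell sketch below does so for 4 Å neutrons with formula_4 equal to 1 and 3. The function name and the example values are illustrative only.
-- Critical angle in degrees, using the approximation theta_c = 0.1 * lambda[Angstrom] * m quoted above.
criticalAngleDeg :: Double -> Double -> Double
criticalAngleDeg m lambdaAngstrom = 0.1 * lambdaAngstrom * m
main :: IO ()
main = do
  print (criticalAngleDeg 1 4.0)  -- natural nickel: about 0.4 degrees
  print (criticalAngleDeg 3 4.0)  -- m = 3 supermirror: about 1.2 degrees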
Nickel has a positive neutron scattering length and titanium has a negative scattering length, and in both elements the absorption cross section is small, which makes Ni-Ti the most efficient layer combination for neutron supermirrors. The number of Ni-Ti layers needed increases rapidly as formula_5, with formula_6 in the range 2–4, which affects the cost. This has a strong bearing on the economic strategy of neutron instrument design.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_c"
},
{
"math_id": 1,
"text": "0.1 \\cdot \\lambda"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "0.1 \\cdot \\lambda \\cdot m"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "\\propto m^z"
},
{
"math_id": 6,
"text": "z"
}
] | https://en.wikipedia.org/wiki?curid=10906395 |
1090710 | Wallace Clement Sabine | American acoustic physicist (1868–1919)
Wallace Clement Sabine (June 13, 1868 – January 10, 1919) was an American physicist who founded the field of architectural acoustics. Sabine was the architectural acoustician of Boston's Symphony Hall, widely considered one of the two or three best concert halls in the world for its acoustics.
Early life.
Wallace Clement Sabine was born on June 13, 1868, in Richwood, Ohio, of Dutch, English, French, and Scottish descent. He graduated with a Bachelor of Arts from Ohio State University in 1886 at the age of 18. He then attended Harvard University and graduated with a Master of Arts in 1888.
His sister was Annie W. S. Siebert.
Career.
After graduating, Sabine became an assistant professor of physics at Harvard in 1889. He became an instructor in 1890 and a member of the faculty in 1892. In 1895, he became an assistant professor and in 1905, he was promoted to professor of physics. In October 1906, he became dean of the Lawrence Scientific School, succeeding Nathaniel Shaler.
Sabine's career is the story of the birth of the field of modern architectural acoustics. In 1895, acoustically improving the Fogg Lecture Hall, part of the recently constructed Fogg Art Museum, was considered an impossible task by the senior staff of the physics department at Harvard. (The original Fogg Museum was designed by Richard Morris Hunt and constructed in 1893. After the completion of the present Fogg Museum the building was repurposed for academic use and renamed Hunt Hall in 1935.) The assignment was passed down until it landed on the shoulders of a young physics professor, Sabine. Although considered a popular lecturer by the students, Sabine had never received his PhD and did not have any particular background dealing with sound.
Sabine tackled the problem by trying to determine what made the Fogg Lecture Hall different from other, acoustically acceptable facilities. In particular, the Sanders Theater was considered acoustically excellent. For the next several years, Sabine and his assistants spent each night moving materials between the two lecture halls and testing the acoustics. On some nights they would borrow hundreds of seat cushions from the Sanders Theater. Using an organ pipe and a stopwatch, Sabine performed thousands of careful measurements (though inaccurate by present standards) of the time required for different frequencies of sounds to decay to inaudibility in the presence of the different materials. He tested reverberation time with several different types of Oriental rugs inside Fogg Lecture Hall, and with various numbers of people occupying its seats, and found that the body of an average person decreased reverberation time by about as much as six seat cushions. Once the measurements were taken and before morning arrived, everything was quickly replaced in both lecture halls, in order to be ready for classes the next day.
Sabine was able to determine, through the experiments, that a definitive relationship exists between the quality of the acoustics, the size of the chamber, and the amount of absorption surface present. He formally defined the reverberation time, which is still the most important characteristic currently in use for gauging the acoustical quality of a room, as number of seconds required for the intensity of the sound to drop from the starting level, by an amount of 60 dB (decibels).
His formula is
formula_0
where
T = the reverberation time (in seconds)
V = the room volume (in cubic metres)
A = the effective absorption area (in square metres)
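As a simple illustration, the short Haskell sketch below evaluates the formula for two made-up rooms; the volumes and absorption areas are chosen only so that the results fall near the reverberation times discussed below and do not describe any real hall.
-- Sabine reverberation time T = 0.161 s/m * V / A, with V in cubic metres and A in square metres.
reverbTime :: Double -> Double -> Double
reverbTime volume absorption = 0.161 * volume / absorption
main :: IO ()
main = do
  print (reverbTime 12000 950)  -- about 2.0 s, a concert-hall-like value (illustrative numbers)
  print (reverbTime 1500 260)   -- about 0.9 s, closer to a lecture-hall value (illustrative numbers)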
By studying various rooms judged acoustically optimal for their intended uses, Sabine determined that acoustically appropriate concert halls had reverberation times of 2-2.25 seconds (with shorter reverberation times, a music hall seems too "dry" to the listener), while optimal lecture hall acoustics featured reverberation times of slightly under 1 second. Regarding the Fogg Museum lecture room, Sabine noted that a spoken word remained audible for about 5.5 seconds, or about an additional 12-15 words if the speaker continued talking. Listeners thus contended with a very high degree of resonance and echo. Sabine's work was continued by his cousin Paul Earls Sabine at the Riverbank Laboratories from 1919.
Using what he discovered, Sabine deployed sound absorbing materials throughout the Fogg Lecture Hall to cut its reverberation time and reduce the "echo effect." This accomplishment cemented Wallace Sabine's career, and led to his hiring as the acoustical consultant for Boston's Symphony Hall, the first concert hall to be designed using quantitative acoustics. His acoustic design was successful and Symphony Hall is generally considered one of the best symphony halls in the world.
The unit of sound absorption, the Sabin, was named in his honor.
Personal life.
Sabine had a wife and two daughters.
Death.
Sabine died on January 11, 1919, at his home in Boston, Massachusetts.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nT=\\frac{V}{A} \\cdot 0.161\\,\\mathrm{s\\,m^{-1}}\n"
}
] | https://en.wikipedia.org/wiki?curid=1090710 |
10907456 | Contraction (operator theory) | Bounded operators with sub-unit norm
In operator theory, a bounded operator "T": "X" → "Y" between normed vector spaces "X" and "Y" is said to be a contraction if its operator norm ||"T" || ≤ 1. This notion is a special case of the concept of a contraction mapping, but every bounded operator becomes a contraction after suitable scaling. The analysis of contractions provides insight into the structure of operators, or a family of operators. The theory of contractions on Hilbert space is largely due to Béla Szőkefalvi-Nagy and Ciprian Foias.
Contractions on a Hilbert space.
If "T" is a contraction acting on a Hilbert space formula_0, the following basic objects associated with "T" can be defined.
The defect operators of "T" are the operators "DT" = (1 − "T*T")½ and "DT*" = (1 − "TT*")½. The square root is the positive semidefinite one given by the spectral theorem. The defect spaces formula_1 and formula_2 are the closures of the ranges Ran("DT") and Ran("DT*") respectively. The positive operator "DT" induces an inner product on formula_0. The inner product space can be identified naturally with Ran("DT"). A similar statement holds for formula_2.
The defect indices of "T" are the pair
formula_3
The defect operators and the defect indices are a measure of the non-unitarity of "T".
A contraction "T" on a Hilbert space can be canonically decomposed into an orthogonal direct sum
formula_4
where "U" is a unitary operator and Γ is "completely non-unitary" in the sense that it has no non-zero reducing subspaces on which its restriction is unitary. If "U" = 0, "T" is said to be a completely non-unitary contraction. A special case of this decomposition is the Wold decomposition for an isometry, where Γ is a proper isometry.
Contractions on Hilbert spaces can be viewed as the operator analogs of cos θ and are called operator angles in some contexts. The explicit description of contractions leads to (operator-)parametrizations of positive and unitary matrices.
Dilation theorem for contractions.
Sz.-Nagy's dilation theorem, proved in 1953, states that for any contraction "T" on a Hilbert space "H", there is a unitary operator "U" on a larger Hilbert space "K" ⊇ "H" such that if "P" is the orthogonal projection of "K" onto "H" then "T""n" = "P" "U""n" "P" for all "n" > 0. The operator "U" is called a dilation of "T" and is uniquely determined if "U" is minimal, i.e. "K" is the smallest closed subspace invariant under "U" and "U"* containing "H".
In fact define
formula_5
the orthogonal direct sum of countably many copies of "H".
Let "V" be the isometry on formula_6 defined by
formula_7
Let
formula_8
Define a unitary "W" on formula_9 by
formula_10
"W" is then a unitary dilation of "T" with "H" considered as the first component of formula_11.
The minimal dilation "U" is obtained by taking the restriction of "W" to the closed subspace generated by powers of "W" applied to "H".
Dilation theorem for contraction semigroups.
There is an alternative proof of Sz.-Nagy's dilation theorem, which allows significant generalization.
Let "G" be a group, "U"("g") a unitary representation of "G" on a Hilbert space "K" and "P" an orthogonal projection onto a closed subspace "H" = "PK" of "K".
The operator-valued function
formula_12
with values in operators on "K" satisfies the positive-definiteness condition
formula_13
where
formula_14
Moreover,
formula_15
Conversely, every operator-valued positive-definite function arises in this way. Recall that every (continuous) scalar-valued positive-definite function on a topological group induces an inner product and group representation φ("g") = 〈"Ug v", "v"〉 where "Ug" is a (strongly continuous) unitary representation (see Bochner's theorem). Replacing "v", a rank-1 projection, by a general projection gives the operator-valued statement. In fact the construction is identical; this is sketched below.
Let formula_6 be the space of functions on "G" of finite support with values in "H" with inner product
formula_16
"G" acts unitarily on formula_6 by
formula_17
Moreover, "H" can be identified with a closed subspace of formula_6 using the isometric embedding
sending "v" in "H" to "f""v" with
formula_18
If "P" is the projection of formula_6 onto "H", then
formula_19
using the above identification.
When "G" is a separable topological group, Φ is continuous in the strong (or weak) operator topology if and only if "U" is.
In this case functions supported on a countable dense subgroup of "G" are dense in formula_6, so that formula_6 is separable.
When "G" = Z any contraction operator "T" defines such a function Φ through
formula_20
for "n" > 0. The above construction then yields a minimal unitary dilation.
The same method can be applied to prove a second dilation theorem of Sz.-Nagy for a one-parameter strongly continuous contraction semigroup "T"("t") ("t" ≥ 0) on a Hilbert space "H". The corresponding result for one-parameter semigroups of isometries had been proved earlier.
The theorem states that there is a larger Hilbert space "K" containing "H" and a unitary representation "U"("t") of R such that
formula_21
and the translates "U"("t")"H" generate "K".
In fact "T"("t") defines a continuous operator-valued positove-definite function Φ on R through
formula_22
for "t" > 0. Φ is positive-definite on cyclic subgroups of R, by the argument for Z, and hence on R itself by continuity.
The previous construction yields a minimal unitary representation "U"("t") and projection "P".
The Hille-Yosida theorem assigns a closed unbounded operator "A" to every contractive one-parameter semigroup "T"("t") through
formula_23
where the domain of "A" consists of all ξ for which this limit exists.
"A" is called the generator of the semigroup and satisfies
formula_24
on its domain. When "A" is a self-adjoint operator
formula_25
in the sense of the spectral theorem and this notation is used more generally in semigroup theory.
The cogenerator of the semigroup is the contraction defined by
formula_26
"A" can be recovered from "T" using the formula
formula_27
In particular a dilation of "T" on "K" ⊃ "H" immediately gives a dilation of the semigroup.
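For a one-dimensional (scalar) semigroup the two Cayley-transform formulas above can be checked directly. The following Haskell sketch uses illustrative values only: it takes a scalar generator with non-positive real part, confirms that the resulting cogenerator has modulus at most one, and recovers the generator from it.
import Data.Complex
cogenerator :: Complex Double -> Complex Double
cogenerator a = (a + 1) / (a - 1)   -- T = (A + I)(A - I)^(-1), scalar case
generator :: Complex Double -> Complex Double
generator t = (t + 1) / (t - 1)     -- A = (T + I)(T - I)^(-1), scalar case
main :: IO ()
main = do
  let a = (-2) :+ 3                 -- illustrative generator with Re a <= 0
      t = cogenerator a
  print (magnitude t)               -- at most 1: the cogenerator is a contraction
  print (generator t)               -- recovers a, up to rounding
  let s = 0.7
  print (exp (s * generator t), exp (s * a))  -- both equal the scalar semigroup value e^{sa}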
Functional calculus.
Let "T" be totally non-unitary contraction on "H". Then the minimal unitary dilation "U" of "T" on "K" ⊃ "H" is unitarily equivalent to a direct sum of copies the bilateral shift operator, i.e. multiplication by "z" on L2("S"1).
If "P" is the orthogonal projection onto "H" then for "f" in L∞ = L∞("S"1) it follows that the operator "f"("T") can be defined
by
formula_28
Let H∞ be the space of bounded holomorphic functions on the unit disk "D". Any such function has boundary values in L∞ and is uniquely determined by these, so that there is an embedding H∞ ⊂ L∞.
For "f" in H∞, "f"("T") can be defined
without reference to the unitary dilation.
In fact if
formula_29
for |"z"| < 1, then for "r" < 1
formula_30
is holomorphic on |"z"| < 1/"r".
In that case "f""r"("T") is defined by the holomorphic functional calculus and "f" ("T" ) can be defined by
formula_31
The map sending "f" to "f"("T") defines an algebra homomorphism of H∞ into bounded operators on "H". Moreover, if
formula_32
then
formula_33
This map has the following continuity property: if a uniformly bounded sequence "f""n" tends almost everywhere to "f", then "f""n"("T") tends to "f"("T") in the strong operator topology.
For "t" ≥ 0, let "e""t" be the inner function
formula_34
If "T" is the cogenerator of a one-parameter semigroup of completely non-unitary contractions "T"("t"), then
formula_35
and
formula_36
C0 contractions.
A completely non-unitary contraction "T" is said to belong to the class C0 if and only if "f"("T") = 0 for some non-zero
"f" in H∞. In this case the set of such "f" forms an ideal in H∞. It has the form φ ⋅ H∞ where "g"
is an inner function, i.e. such that |φ| = 1 on "S"1: φ is uniquely determined up to multiplication by a complex number of modulus 1 and is called the minimal function of "T". It has properties analogous to the minimal polynomial of a matrix.
The minimal function φ admits a canonical factorization
formula_37
where |"c"|=1, "B"("z") is a Blaschke product
formula_38
with
formula_39
and "P"("z") is holomorphic with non-negative real part in "D". By the Herglotz representation theorem,
formula_40
for some non-negative finite measure μ on the circle: in this case, if non-zero, μ must be singular with respect to Lebesgue measure. In the above decomposition of φ, either of the two factors can be absent.
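The inner-function property |φ| = 1 on "S"1 can be checked numerically for a single Blaschke factor. In the Haskell sketch below the zero λ and the sample points on the unit circle are chosen arbitrarily for illustration.
import Data.Complex
-- One Blaschke factor b(z) = (|lambda|/lambda) * (lambda - z) / (1 - conj(lambda) * z).
blaschkeFactor :: Complex Double -> Complex Double -> Complex Double
blaschkeFactor lam z =
  ((magnitude lam :+ 0) / lam) * (lam - z) / (1 - conjugate lam * z)
main :: IO ()
main = do
  let lam = 0.5 :+ 0.3              -- illustrative zero inside the unit disk
  print (blaschkeFactor lam lam)    -- vanishes at its zero
  mapM_ (\theta -> print (magnitude (blaschkeFactor lam (cis theta))))
        [0.0, 1.0, 2.0, 3.0]        -- modulus 1 on the unit circle, up to rounding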
The minimal function φ determines the spectrum of "T". Within the unit disk, the spectral values are the zeros of φ. There are at most countably many such λi, all eigenvalues of "T", the zeros of "B"("z"). A point of the unit circle does not lie in the spectrum of "T" if and only if φ has a holomorphic continuation to a neighborhood of that point.
φ reduces to a Blaschke product exactly when "H" equals the closure of the direct sum (not necessarily orthogonal) of the generalized eigenspaces
formula_41
Quasi-similarity.
Two contractions "T"1 and "T"2 are said to be quasi-similar when there are bounded operators "A", "B" with trivial kernel and dense range such that
formula_42
The following properties of a contraction "T" are preserved under quasi-similarity:
Two quasi-similar C0 contractions have the same minimal function and hence the same spectrum.
The classification theorem for C0 contractions states that two multiplicity free C0 contractions are quasi-similar if and only if they have the same minimal function (up to a scalar multiple).
A model for multiplicity free C0 contractions with minimal function φ is given by taking
formula_43
where H2 is the Hardy space of the circle and letting "T" be multiplication by "z".
Such operators are called Jordan blocks and denoted "S"(φ).
As a generalization of Beurling's theorem, the commutant of such an operator consists exactly of operators ψ("T") with ψ in H∞, i.e. multiplication operators on "H"2 corresponding to functions in H∞.
A C0 contraction operator "T" is multiplicity free if and only if it is quasi-similar to a Jordan block (necessarily the one corresponding to its minimal function).
Examples.
If a contraction "T" is quasi-similar to an operator "S" satisfying
formula_44
with the λi's distinct, of modulus less than 1, such that
formula_45
and ("e""i") is an orthonormal basis, then "S", and hence "T", is C0 and multiplicity free. Hence "H" is the closure of direct sum of the λi-eigenspaces of "T", each having multiplicity one. This can also be seen directly using the definition of quasi-similarity.
Classification theorem for C0 contractions: "Every C0 contraction is canonically quasi-similar to a direct sum of Jordan blocks."
In fact every C0 contraction is quasi-similar to a unique operator of the form
formula_46
where the φ"n" are uniquely determined inner functions, with φ"1" the minimal function of "S" and hence "T".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{H}"
},
{
"math_id": 1,
"text": "\\mathcal{D}_T"
},
{
"math_id": 2,
"text": "\\mathcal{D}_{T*}"
},
{
"math_id": 3,
"text": "(\\dim\\mathcal{D}_T, \\dim\\mathcal{D}_{T^*})."
},
{
"math_id": 4,
"text": "T = \\Gamma \\oplus U"
},
{
"math_id": 5,
"text": "\\displaystyle{\\mathcal{H}=H\\oplus H\\oplus H \\oplus \\cdots ,}"
},
{
"math_id": 6,
"text": "\\mathcal H"
},
{
"math_id": 7,
"text": "\\displaystyle{V(\\xi_1,\\xi_2,\\xi_3,\\dots)=(T\\xi_1, \\sqrt{I-T^*T}\\xi_1,\\xi_2,\\xi_3,\\dots).}"
},
{
"math_id": 8,
"text": "\\displaystyle{\\mathcal{K}=\\mathcal{H} \\oplus \\mathcal{H}.}"
},
{
"math_id": 9,
"text": "\\mathcal K"
},
{
"math_id": 10,
"text": "\\displaystyle{W(x,y)=(Vx+(I-VV^*)y,-V^*y).}"
},
{
"math_id": 11,
"text": "\\mathcal{H}\\subset \\mathcal{K}"
},
{
"math_id": 12,
"text": "\\displaystyle{\\Phi(g)=PU(g)P,}"
},
{
"math_id": 13,
"text": " \\sum \\lambda_i\\overline{\\lambda_j} \\Phi(g_j^{-1}g_i) = PT^*TP\\ge 0,"
},
{
"math_id": 14,
"text": "\\displaystyle{T=\\sum \\lambda_i U(g_i).}"
},
{
"math_id": 15,
"text": "\\displaystyle{\\Phi(1)=P.}"
},
{
"math_id": 16,
"text": "\\displaystyle{(f_1,f_2)=\\sum_{g,h} (\\Phi(h^{-1}g)f_1(g),f_2(h)).}"
},
{
"math_id": 17,
"text": "\\displaystyle{U(g)f(x)=f(g^{-1}x).}"
},
{
"math_id": 18,
"text": "f_v(g)=\\delta_{g,1} v. \\, "
},
{
"math_id": 19,
"text": "\\displaystyle{PU(g)P=\\Phi(g),}"
},
{
"math_id": 20,
"text": "\\displaystyle \\Phi(0)=I, \\,\\,\\, \\Phi(n)=T^n,\\,\\,\\, \\Phi(-n)=(T^*)^n, "
},
{
"math_id": 21,
"text": "\\displaystyle{T(t)=PU(t)P}"
},
{
"math_id": 22,
"text": "\\displaystyle{\\Phi(0)=I, \\,\\,\\, \\Phi(t)=T(t),\\,\\,\\, \\Phi(-t)= T(t)^*,}"
},
{
"math_id": 23,
"text": "\\displaystyle{A\\xi=\\lim_{t\\downarrow 0} {1\\over t}(T(t)-I)\\xi,}"
},
{
"math_id": 24,
"text": " \\displaystyle{-\\Re (A\\xi,\\xi)\\ge 0}"
},
{
"math_id": 25,
"text": "\\displaystyle{T(t)=e^{At},}"
},
{
"math_id": 26,
"text": " \\displaystyle{T=(A+I)(A-I)^{-1}.}"
},
{
"math_id": 27,
"text": "\\displaystyle{A=(T+I)(T-I)^{-1}.}"
},
{
"math_id": 28,
"text": "\\displaystyle{f(T)\\xi=Pf(U)\\xi.}"
},
{
"math_id": 29,
"text": "\\displaystyle{f(z)=\\sum_{n\\ge 0} a_n z^n}"
},
{
"math_id": 30,
"text": "\\displaystyle{f_r(z))=\\sum_{n\\ge 0} r^n a_n z^n}"
},
{
"math_id": 31,
"text": "\\displaystyle{f(T)\\xi=\\lim_{r\\rightarrow 1} f_r(T)\\xi.}"
},
{
"math_id": 32,
"text": "\\displaystyle{f^\\sim(z)=\\sum_{n\\ge 0} a_n \\overline{z}^n,}"
},
{
"math_id": 33,
"text": "\\displaystyle{f^\\sim(T)=f(T^*)^*.}"
},
{
"math_id": 34,
"text": "\\displaystyle{e_t(z)=\\exp t{z+1\\over z-1}.}"
},
{
"math_id": 35,
"text": "\\displaystyle{T(t)=e_t(T)}"
},
{
"math_id": 36,
"text": "\\displaystyle{T={1\\over 2}I -{1\\over 2}\\int_0^\\infty e^{-t}T(t)\\, dt.}"
},
{
"math_id": 37,
"text": "\\displaystyle{\\varphi(z) = c B(z) e^{-P(z)},}"
},
{
"math_id": 38,
"text": "\\displaystyle{B(z)=\\prod \\left[{|\\lambda_i|\\over \\lambda_i} {\\lambda_i -z \\over 1-\\overline{\\lambda}_i }\\right]^{m_i},}"
},
{
"math_id": 39,
"text": "\\displaystyle{\\sum m_i(1-|\\lambda_i|) <\\infty,}"
},
{
"math_id": 40,
"text": "\\displaystyle{P(z) =\\int_0^{2\\pi} {1 + e^{-i\\theta}z\\over 1 -e^{-i\\theta}z} \\, d\\mu(\\theta)}"
},
{
"math_id": 41,
"text": "\\displaystyle{H_i=\\{\\xi:(T-\\lambda_i I)^{m_i} \\xi=0\\}.}"
},
{
"math_id": 42,
"text": "\\displaystyle{AT_1=T_2A,\\,\\,\\, BT_2=T_1B.}"
},
{
"math_id": 43,
"text": " \\displaystyle{H=H^2\\ominus \\varphi H^2,}"
},
{
"math_id": 44,
"text": "\\displaystyle{Se_i=\\lambda_i e_i}"
},
{
"math_id": 45,
"text": "\\displaystyle{\\sum (1-|\\lambda_i|) < 1}"
},
{
"math_id": 46,
"text": "\\displaystyle{S=S(\\varphi_1)\\oplus S(\\varphi_1\\varphi_2)\\oplus S(\\varphi_1\\varphi_2\\varphi_3) \\oplus \\cdots }"
}
] | https://en.wikipedia.org/wiki?curid=10907456 |
1090930 | DFT matrix | Discrete fourier transform expressed as a matrix
In applied mathematics, a DFT matrix is an expression of a discrete Fourier transform (DFT) as a transformation matrix, which can be applied to a signal through matrix multiplication.
Definition.
An "N"-point DFT is expressed as the multiplication formula_0, where formula_1 is the original input signal, formula_2 is the "N"-by-"N" square DFT matrix, and formula_3 is the DFT of the signal.
The transformation matrix formula_2 can be defined as formula_4, or equivalently:
formula_5,
where formula_6 is a primitive "N"th root of unity in which formula_7. We can avoid writing large exponents for formula_8 using the fact that for any exponent formula_1 we have the identity formula_9 This is the Vandermonde matrix for the roots of unity, up to the normalization factor. Note that the normalization factor in front of the sum ( formula_10 ) and the sign of the exponent in ω are merely conventions, and differ in some treatments. All of the following discussion applies regardless of the convention, with at most minor adjustments. The only important thing is that the forward and inverse transforms have opposite-sign exponents, and that the product of their normalization factors be 1/"N". However, the formula_10 choice here makes the resulting DFT matrix unitary, which is convenient in many circumstances.
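As an illustration of the definition (and not of a fast Fourier transform), the following Haskell sketch builds the "N"-by-"N" matrix with the unitary formula_10 scaling and applies it to a short example signal; the helper names and the sample values are arbitrary.
import Data.Complex
-- N-point DFT matrix with the unitary 1/sqrt(N) normalization used above.
dftMatrix :: Int -> [[Complex Double]]
dftMatrix n =
  [ [ scale * omega ^^ (j * k) | k <- [0 .. n - 1] ] | j <- [0 .. n - 1] ]
  where
    scale = (1 / sqrt (fromIntegral n)) :+ 0
    omega = cis (-2 * pi / fromIntegral n)   -- primitive N-th root of unity
-- Matrix-vector product: X = W x.
apply :: [[Complex Double]] -> [Complex Double] -> [Complex Double]
apply w x = [ sum (zipWith (*) row x) | row <- w ]
main :: IO ()
main = do
  let w = dftMatrix 4
      x = [1, 2, 3, 4] :: [Complex Double]   -- illustrative input signal
  mapM_ print (apply w x)                    -- its unitary 4-point DFT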
Fast Fourier transform algorithms utilize the symmetries of the matrix to reduce the time of multiplying a vector by this matrix, from the usual formula_11. Similar techniques can be applied for multiplications by matrices such as Hadamard matrix and the Walsh matrix.
Examples.
Two-point.
The two-point DFT is a simple case, in which the first entry is the DC (sum) and the second entry is the AC (difference).
formula_12
The first row performs the sum, and the second row performs the difference.
The factor of formula_13 is to make the transform unitary (see below).
Four-point.
The four-point clockwise DFT matrix is as follows:
formula_14
where formula_15.
Eight-point.
The first non-trivial integer power of two case is for eight points:
formula_16
where
formula_17
Exponents of formula_8 can be reduced using the identity formula_18. Substituting the value of formula_8 then gives:
formula_19
The following image depicts the DFT as a matrix multiplication, with elements of the matrix depicted by samples of complex exponentials:
The real part (cosine wave) is denoted by a solid line, and the imaginary part (sine wave) by a dashed line.
The top row is all ones (scaled by formula_20 for unitarity), so it "measures" the DC component in the input signal. The next row is eight samples of negative one cycle of a complex exponential, i.e., a signal with a fractional frequency of −1/8, so it "measures" how much "strength" there is at fractional frequency +1/8 in the signal. Recall that a matched filter compares the signal with a time reversed version of whatever we're looking for, so when we're looking for fracfreq. 1/8 we compare with fracfreq. −1/8 so that is why this row is a negative frequency. The next row is negative two cycles of a complex exponential, sampled in eight places, so it has a fractional frequency of −1/4, and thus "measures" the extent to which the signal has a fractional frequency of +1/4.
The following summarizes how the 8-point DFT works, row by row, in terms of fractional frequency: the first row has fractional frequency 0 and measures the DC component of the signal; the second row (−1/8) measures how much of the signal has a fractional frequency of +1/8; the third row (−1/4) measures the +1/4 component; the fourth row (−3/8) measures +3/8; the fifth row (−1/2) measures +1/2; the sixth row (−5/8) measures +5/8; the seventh row (−3/4) measures +3/4; and the last row (−7/8) measures +7/8.
Equivalently the last row can be said to have a fractional frequency of +1/8 and thus measure how much of the signal has a fractional frequency of −1/8. In this way, it could be said that the top rows of the matrix "measure" positive frequency content in the signal and the bottom rows measure negative frequency component in the signal.
Unitary transform.
The DFT is (or can be, through appropriate selection of scaling) a unitary transform, i.e., one that preserves energy. The appropriate choice of scaling to achieve unitarity is formula_10, so that the energy in the physical domain will be the same as the energy in the Fourier domain, i.e., to satisfy Parseval's theorem. (Other, non-unitary, scalings, are also commonly used for computational convenience; e.g., the convolution theorem takes on a slightly simpler form with the scaling shown in the discrete Fourier transform article.)
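The energy-preservation property is easy to verify by hand for the two-point case given earlier; the following self-contained Haskell sketch does so for an arbitrary pair of illustrative sample values.
import Data.Complex
-- Squared norm ("energy") of a signal.
energy :: [Complex Double] -> Double
energy = sum . map ((^ 2) . magnitude)
main :: IO ()
main = do
  let x0 = 3 :+ 1                    -- illustrative samples
      x1 = (-2) :+ 4
      s  = sqrt 2 :+ 0
      bigX = [(x0 + x1) / s, (x0 - x1) / s]  -- two-point unitary DFT from the earlier matrix
  print (energy [x0, x1])            -- 30.0
  print (energy bigX)                -- 30.0, up to rounding: Parseval's theorem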
Other properties.
For other properties of the DFT matrix, including its eigenvalues, connection to convolutions, applications, and so on, see the discrete Fourier transform article.
A limiting case: The Fourier operator.
The notion of a Fourier transform is readily generalized. One such formal generalization of the "N"-point DFT can be imagined by taking "N" arbitrarily large. In the limit, the rigorous mathematical machinery treats such linear operators as so-called integral transforms. In this case, if we make a very large matrix with complex exponentials in the rows (i.e., cosine real parts and sine imaginary parts), and increase the resolution without bound, we approach the kernel of the Fredholm integral equation of the 2nd kind, namely the Fourier operator that defines the continuous Fourier transform. A rectangular portion of this continuous Fourier operator can be displayed as an image, analogous to the DFT matrix, as shown at right, where greyscale pixel value denotes numerical quantity.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "X = W x"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "W"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "W = \\left(\\frac{\\omega^{jk}}{{\\sqrt{N}}}\\right)_{j,k=0,\\ldots,N-1} "
},
{
"math_id": 5,
"text": "\nW = \\frac{1}{\\sqrt{N}} \\begin{bmatrix}\n1&1&1&1&\\cdots &1 \\\\\n1&\\omega&\\omega^2&\\omega^3&\\cdots&\\omega^{N-1} \\\\\n1&\\omega^2&\\omega^4&\\omega^6&\\cdots&\\omega^{2(N-1)}\\\\ 1&\\omega^3&\\omega^6&\\omega^9&\\cdots&\\omega^{3(N-1)}\\\\\n\\vdots&\\vdots&\\vdots&\\vdots&\\ddots&\\vdots\\\\\n1&\\omega^{N-1}&\\omega^{2(N-1)}&\\omega^{3(N-1)}&\\cdots&\\omega^{(N-1)(N-1)}\n\\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "\\omega = e^{-2\\pi i/N}"
},
{
"math_id": 7,
"text": "i^2=-1"
},
{
"math_id": 8,
"text": "\\omega"
},
{
"math_id": 9,
"text": "\\omega^{x} = \\omega^{x \\bmod N}."
},
{
"math_id": 10,
"text": "1/\\sqrt{N}"
},
{
"math_id": 11,
"text": "O(N^2)"
},
{
"math_id": 12,
"text": "W=\n\\frac{1}{\\sqrt{2}}\n\\begin{bmatrix}\n1 & 1 \\\\\n1 & -1 \\end{bmatrix}\n"
},
{
"math_id": 13,
"text": "1/\\sqrt{2}"
},
{
"math_id": 14,
"text": "\nW = \\frac{1}{\\sqrt{4}}\n\\begin{bmatrix}\n \\omega^0 & \\omega^0 &\\omega^0 &\\omega^0 \\\\\n \\omega^0 & \\omega^1 &\\omega^2 &\\omega^3 \\\\\n \\omega^0 & \\omega^2 &\\omega^4 &\\omega^6 \\\\\n \\omega^0 & \\omega^3 &\\omega^6 &\\omega^9 \\\\\n\\end{bmatrix} = \\frac{1}{\\sqrt{4}}\n\\begin{bmatrix}\n1 & 1 & 1 & 1\\\\\n1 & -i & -1 & i\\\\\n1 & -1 & 1 & -1\\\\\n1 & i & -1 & -i\\end{bmatrix}\n"
},
{
"math_id": 15,
"text": "\\omega = e^{-\\frac{2 \\pi i}{4}} = -i"
},
{
"math_id": 16,
"text": "W= \\frac{1}{\\sqrt{8}}\n\n\\begin{bmatrix}\n \\omega^0 & \\omega^0 &\\omega^0 &\\omega^0 &\\omega^0 &\\omega^0 &\\omega^0 & \\omega^0 \\\\\n \\omega^0 & \\omega^1 &\\omega^2 &\\omega^3 &\\omega^4 &\\omega^5 &\\omega^6 & \\omega^7 \\\\\n \\omega^0 & \\omega^2 &\\omega^4 &\\omega^6 &\\omega^8 &\\omega^{10} &\\omega^{12} & \\omega^{14} \\\\\n \\omega^0 & \\omega^3 &\\omega^6 &\\omega^9 &\\omega^{12} &\\omega^{15} &\\omega^{18} & \\omega^{21} \\\\\n \\omega^0 & \\omega^4 &\\omega^8 &\\omega^{12} &\\omega^{16} &\\omega^{20} &\\omega^{24} & \\omega^{28} \\\\\n \\omega^0 & \\omega^5 &\\omega^{10} &\\omega^{15} &\\omega^{20} &\\omega^{25} &\\omega^{30} & \\omega^{35} \\\\\n \\omega^0 & \\omega^6 &\\omega^{12} &\\omega^{18} &\\omega^{24} &\\omega^{30} &\\omega^{36} & \\omega^{42} \\\\\n \\omega^0 & \\omega^7 &\\omega^{14} &\\omega^{21} &\\omega^{28} &\\omega^{35} &\\omega^{42} & \\omega^{49} \\\\\n\\end{bmatrix} = \\frac{1}{\\sqrt{8}}\n\\begin{bmatrix}\n 1 &1 &1 &1 &1 &1 &1 &1 \\\\\n 1 &\\omega &-i &-i\\omega &-1 &-\\omega &i &i\\omega \\\\\n 1 &-i &-1 &i &1 &-i &-1 &i \\\\\n 1 &-i\\omega &i &\\omega &-1 &i\\omega &-i &-\\omega \\\\\n 1 &-1 &1 &-1 &1 &-1 &1 &-1 \\\\\n 1 &-\\omega &-i &i\\omega &-1 &\\omega &i &-i\\omega \\\\\n 1 &i &-1 &-i &1 &i &-1 &-i \\\\\n 1 &i\\omega &i &-\\omega &-1 &-i\\omega &-i &\\omega \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 17,
"text": "\\omega = e^{-\\frac{2 \\pi i}{8}} = \\frac{1}{\\sqrt{2}} - \\frac{i}{\\sqrt{2}}"
},
{
"math_id": 18,
"text": "\\omega^{8 + n} = \\omega^{n}"
},
{
"math_id": 19,
"text": "\nW=\\frac{1}{\\sqrt{8}} \\begin{bmatrix}\n\t\t\t\t1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\t\t\t\t1 & \\frac{1-i}{\\sqrt2} & -i & \\frac{-1-i}{\\sqrt2} & -1 & \\frac{-1+i}{\\sqrt2} & i & \\frac{1+i}{\\sqrt2} \\\\\n\t\t\t\t1 & -i & -1 & i & 1 & -i & -1 & i \\\\\n\t\t\t\t1 & \\frac{-1-i}{\\sqrt2} & i & \\frac{1-i}{\\sqrt2} & -1 & \\frac{1+i}{\\sqrt2} & -i & \\frac{-1+i}{\\sqrt2} \\\\\n\t\t\t\t1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\\\\n\t\t\t\t1 & \\frac{-1+i}{\\sqrt2} & -i & \\frac{1+i}{\\sqrt2} & -1 & \\frac{1-i}{\\sqrt2} & i & \\frac{-1-i}{\\sqrt2} \\\\\n\t\t\t\t1 & i & -1 & -i & 1 & i & -1 & -i \\\\\n\t\t\t\t1 & \\frac{1+i}{\\sqrt2} & i & \\frac{-1+i}{\\sqrt2} &-1 & \\frac{-1-i}{\\sqrt2} & -i & \\frac{1-i}{\\sqrt2} \\\\\n\t\t\t\\end{bmatrix}\n"
},
{
"math_id": 20,
"text": "1/\\sqrt{8}"
}
] | https://en.wikipedia.org/wiki?curid=1090930 |
1091018 | Gas electron diffraction | Method of observing gaseous atomic structure
Gas electron diffraction (GED) is one of the applications of electron diffraction techniques. The target of this method is the determination of the structure of gaseous molecules, i.e., the geometrical arrangement of the atoms from which a molecule is built up. GED is one of two experimental methods (besides microwave spectroscopy) to determine the structure of free molecules, undistorted by intermolecular forces, which are omnipresent in the solid and liquid state. The determination of accurate molecular structures by GED studies is fundamental for an understanding of structural chemistry.
Introduction.
Diffraction occurs because the wavelength of electrons accelerated by a potential of a few thousand volts is of the same order of magnitude as internuclear distances in molecules. The principle is the same as that of other electron diffraction methods such as LEED and RHEED, but the obtainable diffraction pattern is considerably weaker than those of LEED and RHEED because the density of the target is about one thousand times smaller. Since the orientation of the target molecules relative to the electron beams is random, the internuclear distance information obtained is one-dimensional. Thus only relatively simple molecules can be completely structurally characterized by electron diffraction in the gas phase. It is possible to combine information obtained from other sources, such as rotational spectra, NMR spectroscopy or high-quality quantum-mechanical calculations with electron diffraction data, if the latter are not sufficient to determine the molecule's structure completely.
The total scattering intensity in GED is given as a function of the momentum transfer, which is defined as the difference between the wave vector of the incident electron beam and that of the scattered electron beam and has the reciprocal dimension of length. The total scattering intensity is composed of two parts: the atomic scattering intensity and the molecular scattering intensity. The former decreases monotonically and contains no information about the molecular structure. The latter has sinusoidal modulations as a result of the interference of the scattering spherical waves generated by the scattering from the atoms included in the target molecule. The interferences reflect the distributions of the atoms composing the molecules, so the molecular structure is determined from this part.
Experiment.
Figure 1 shows a drawing and a photograph of an electron diffraction apparatus. Scheme 1 shows the schematic procedure of an electron diffraction experiment. A fast electron beam is generated in an electron gun, enters a diffraction chamber typically at a vacuum of 10−7 mbar. The electron beam hits a perpendicular stream of a gaseous sample effusing from a nozzle of a small diameter (typically 0.2 mm). At this point, the electrons are scattered. Most of the sample is immediately condensed and frozen onto the surface of a cold trap held at -196 °C (liquid nitrogen). The scattered electrons are detected on the surface of a suitable detector in a well-defined distance to the point of scattering.
The scattering pattern consists of diffuse concentric rings (see Figure 2). The steep descent of intensity can be compensated for by passing the electrons through a rapidly rotating sector (Figure 3). The sector is cut in such a way that electrons with small scattering angles are more shadowed than those at wider scattering angles. The detector can be a photographic plate, an electron imaging plate (usual technique today) or other position sensitive devices such as hybrid pixel detectors (future technique).
The intensities generated from reading out the plates or processing intensity data from other detectors are then corrected for the sector effect. They are initially obtained as intensity as a function of the distance from the primary beam position, and are then converted into a function of the scattering angle. The so-called atomic intensity and the experimental background are subtracted to give the final experimental molecular scattering intensities as a function of "s" (the change of momentum).
These data are then processed with fitting software such as UNEX to refine a structural model of the compound and to yield precise structural information in terms of bond lengths, angles and torsional angles.
Theory.
GED can be described by scattering theory. The result, as applied to gases of randomly oriented molecules, is summarized briefly here:
Scattering occurs at each individual atom (formula_0), but also at pairs (also called molecular scattering) (formula_1), or triples (formula_2), of atoms.
formula_3 is the scattering variable or change of electron momentum, and its absolute value is defined as
formula_4
with formula_5 being the electron wavelength defined above, and formula_6 being the scattering angle.
The above-mentioned contributions of scattering add up to the total scattering
formula_7
where formula_8 is the experimental background intensity, which is needed to describe the experiment completely.
The contribution of individual atom scattering is called atomic scattering and easy to calculate:
formula_9
with formula_10, formula_11 being the distance between the point of scattering and the detector, formula_12 being the intensity of the primary electron beam, and formula_13 being the scattering amplitude of the "i"-th atom. In essence, this is a summation over the scattering contributions of all atoms independent of the molecular structure. formula_0 is the main contribution and easily obtained if the atomic composition of the gas (sum formula) is known.
The most interesting contribution is the molecular scattering, because it contains information about the distance between all pairs of atoms in a molecule (bonded or non-bonded):
formula_14
with formula_15 being the parameter of main interest: the atomic distance between two atoms, formula_16 being the mean square amplitude of vibration between the two atoms, formula_17 the anharmonicity constant (correcting the vibration description for deviations from a purely harmonic model), and formula_18 is a phase factor, which becomes important if a pair of atoms with very different nuclear charge is involved.
The first part is similar to the atomic scattering, but contains two scattering factors of the involved atoms. Summation is performed over all atom pairs.
formula_2 is negligible in most cases and not described here in more detail. formula_8 is mostly determined by fitting and subtracting smooth functions to account for the background contribution.
It is thus the molecular scattering intensity that is of interest; it is obtained by calculating all other contributions and subtracting them from the experimentally measured total scattering function.
Results.
Figure 5 shows two typical examples of results. The molecular scattering intensity curves are used to refine a structural model by means of a least-squares fitting program, which yields precise structural information. Fourier transformation of the molecular scattering intensity curves gives the radial distribution curves (RDC). These represent the probability of finding a certain distance between two nuclei of a molecule. The curves below the RDC represent the difference between experiment and model, i.e. the quality of the fit.
The very simple example in Figure 5 shows the results for evaporated white phosphorus, P4. It is a perfectly tetrahedral molecule and thus has only one P-P distance. This makes the molecular scattering intensity curve a very simple one: a sine curve that is damped due to molecular vibration. The radial distribution curve (RDC) shows a maximum at 2.1994 Å with a least-squares error of 0.0003 Å, represented as 2.1994(3) Å. The width of the peak represents the molecular vibration and is the result of Fourier transformation of the damping part. This peak width means that the P-P distance varies by this vibration within a certain range given as a vibrational amplitude "u", in this example "u"T(P‒P) = 0.0560(5) Å.
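To make the damped-sine form described above concrete, the following short Python sketch evaluates a simplified molecular scattering term for a single internuclear distance, using the P–P distance and vibrational amplitude quoted for P4. Scattering amplitudes, the anharmonicity correction and the phase factor are omitted, so this is only a qualitative illustration, not the full expression given in the theory section.
```python
import numpy as np

# Simplified molecular scattering for one internuclear distance (P4 values
# from the text); amplitudes, anharmonicity and phase terms are omitted.
r_pp = 2.1994                     # P-P distance / angstrom
u_pp = 0.0560                     # vibrational amplitude / angstrom
s = np.linspace(2.0, 30.0, 600)   # scattering variable / 1/angstrom

sM = np.sin(s * r_pp) / (s * r_pp) * np.exp(-0.5 * (u_pp * s) ** 2)
# sM is a sine curve damped by the Gaussian vibrational term; its Fourier
# transform peaks near r_pp, which is what the radial distribution curve shows.
```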
The slightly more complicated molecule P3As has two different distances P-P and P-As. Because their contributions overlap in the RDC, the peak is broader (also seen in a more rapid damping in the molecular scattering). The determination of these two independent parameters is more difficult and results in less precise parameter values than for P4.
Some selected other examples of important contributions to the structural chemistry of molecules are provided here:
References.
| [
{
"math_id": 0,
"text": "I_\\text{a}(s)"
},
{
"math_id": 1,
"text": "I_\\text{m}(s)"
},
{
"math_id": 2,
"text": "I_\\text{t}(s)"
},
{
"math_id": 3,
"text": "s"
},
{
"math_id": 4,
"text": "|s| = \\frac{4\\pi}{\\lambda} \\sin(\\theta / 2),"
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "I_\\text{tot}(s) = I_\\text{a}(s) + I_\\text{m}(s) + I_\\text{t}(s) + I_\\text{b}(s),"
},
{
"math_id": 8,
"text": "I_\\text{b}(s)"
},
{
"math_id": 9,
"text": "I_\\text{a}(s) = \\frac{K^2}{R^2} I_0 \\sum_{i=1}^N |f_i(s)|^2,"
},
{
"math_id": 10,
"text": "K = \\frac{8 \\pi^2 me^2}{h^2}"
},
{
"math_id": 11,
"text": "R"
},
{
"math_id": 12,
"text": "I_0"
},
{
"math_id": 13,
"text": "f_i(s)"
},
{
"math_id": 14,
"text": "I_\\text{m}(s) = \\frac{K^2}{R^2} I_0 \\sum_{i=1}^N \\sum_{j=1,i\\neq j}^N |f_i(s)|\\,|f_j(s)| \\frac{\\sin[s(r_{ij} - \\kappa s^2)]}{sr_{ij}}e^{-\\frac{1}{2} l_{ij} s^2} \\cos[\\eta_i(s) - \\eta_i(s)],"
},
{
"math_id": 15,
"text": "r_{ij}"
},
{
"math_id": 16,
"text": "l_{ij}"
},
{
"math_id": 17,
"text": "\\kappa"
},
{
"math_id": 18,
"text": "\\eta"
}
] | https://en.wikipedia.org/wiki?curid=1091018 |
10910481 | Pooled variance | Method for estimating variance of several different populations
In statistics, pooled variance (also known as combined variance, composite variance, or overall variance, and written formula_0) is a method for estimating variance of several different populations when the mean of each population may be different, but one may assume that the variance of each population is the same. The numerical estimate resulting from the use of this method is also called the pooled variance.
Under the assumption of equal population variances, the pooled sample variance provides a higher precision estimate of variance than the individual sample variances. This higher precision can lead to increased statistical power when used in statistical tests that compare the populations, such as the "t"-test.
The square root of a pooled variance estimator is known as a pooled standard deviation (also known as combined standard deviation, composite standard deviation, or overall standard deviation).
Motivation.
In statistics, data are often collected for a dependent variable, "y", over a range of values for the independent variable, "x". For example, the observation of fuel consumption might be studied as a function of engine speed while the engine load is held constant. If, in order to achieve a small variance in "y", numerous repeated tests are required at each value of "x", the expense of testing may become prohibitive. Reasonable estimates of variance can be determined by using the principle of pooled variance after repeating each test at a particular "x" only a few times.
Definition and computation.
The pooled variance is an estimate of the fixed common variance formula_1 underlying various populations that have different means.
We are given a set of sample variances formula_2, where the populations are indexed formula_3,
formula_2 = formula_4
Assuming uniform sample sizes, formula_5, then the pooled variance formula_6 can be computed by the arithmetic mean:
formula_7
If the sample sizes are non-uniform, then the pooled variance formula_6 can be computed by the weighted average, using as weights formula_8 the respective degrees of freedom (see also: Bessel's correction):
formula_9
The quantity formula_10, multiplied by the total number of degrees of freedom, has a formula_11 distribution.
Proof. When there is a single mean, the distribution of formula_12 is a gaussian in formula_13, the formula_14-dimensional simplex, with standard deviation formula_15. Where there are multiple means, the distribution of formula_16 is a gaussian in formula_17.
Variants.
The unbiased least squares estimate of formula_0 (as presented above),
and the biased maximum likelihood estimate below:
formula_18
are used in different contexts. The former can give an unbiased formula_19 with which to estimate formula_0 when the two groups share an equal population variance. The latter can give a more efficient formula_19 with which to estimate formula_0, although it is subject to bias. Note that the quantities formula_20 on the right-hand sides of both equations are the unbiased estimates.
Example.
Consider the following set of data for "y" obtained at various levels of the independent variable "x".
The number of trials, mean, variance and standard deviation are presented in the next table.
These statistics represent the variance and standard deviation for each subset of data at the various levels of "x". If we can assume that the same phenomena are generating random error at every level of "x", the above data can be “pooled” to express a single estimate of variance and standard deviation. In a sense, this suggests finding a mean variance or standard deviation among the five results above. This mean variance is calculated by weighting the individual values with the size of the subset for each level of "x". Thus, the pooled variance is defined by
formula_21
where "n"1, "n"2, . . ., "n""k" are the sizes of the data subsets at each level of the variable "x", and "s"12, "s"22, . . ., "s""k"2 are their respective variances.
The pooled variance of the data shown above is therefore:
formula_22
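For illustration, the weighted-average formula above can be evaluated with a few lines of code. The subset sizes and variances below are hypothetical placeholders, since the data table is not reproduced here; the function itself follows the definition with weights "n""i" − 1.
```python
def pooled_variance(sizes, variances):
    """Pooled variance with Bessel-corrected weights (n_i - 1)."""
    numerator = sum((n - 1) * s2 for n, s2 in zip(sizes, variances))
    denominator = sum(n - 1 for n in sizes)
    return numerator / denominator

# Hypothetical subset sizes and sample variances at each level of x
sizes = [4, 5, 6, 5, 4]
variances = [2.2, 3.1, 2.7, 2.9, 2.5]
print(pooled_variance(sizes, variances))  # weighted mean of the subset variances
```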
Effect on precision.
Pooled variance is only an approximation when there is correlation between the pooled data sets or when the averages of the data sets are not identical. The pooled estimate is less precise the further the correlation departs from zero and the further apart the averages of the data sets lie.
The variation of data for non-overlapping data sets is:
formula_23
where the mean is defined as:
formula_24
Given a biased maximum likelihood defined as:
formula_25
Then the error in the biased maximum likelihood estimate is:
formula_26
Assuming "N" is large such that:
formula_27
Then the error in the estimate reduces to:
formula_28
Or alternatively:
formula_29
Aggregation of standard deviation data.
Rather than estimating a pooled standard deviation, the following shows how standard deviations can be aggregated exactly when additional statistical information is available.
Population-based statistics.
The populations of sets, which may overlap, can be calculated simply as follows:
formula_30
The populations of sets, which do not overlap, can be calculated simply as follows:
formula_31
Standard deviations of non-overlapping ("X" ∩ "Y" = ∅) sub-populations can be aggregated as follows if the size (actual or relative to one another) and means of each are known:
formula_32
For example, suppose it is known that the average American man has a mean height of 70 inches with a standard deviation of three inches and that the average American woman has a mean height of 65 inches with a standard deviation of two inches. Also assume that the number of men, "N", is equal to the number of women. Then the mean and standard deviation of heights of American adults could be calculated as
formula_33
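The height example can be checked with a short calculation. The sketch below implements the two-population formula above; the absolute population size used is arbitrary, because only the ratio of the two sizes enters the result.
```python
import math

def combine_two_populations(n_x, mu_x, sd_x, n_y, mu_y, sd_y):
    """Mean and standard deviation of the union of two non-overlapping
    populations, given their sizes, means and standard deviations."""
    mu = (n_x * mu_x + n_y * mu_y) / (n_x + n_y)
    var = (n_x * sd_x**2 + n_y * sd_y**2) / (n_x + n_y) \
        + n_x * n_y * (mu_x - mu_y)**2 / (n_x + n_y)**2
    return mu, math.sqrt(var)

print(combine_two_populations(1000, 70, 3, 1000, 65, 2))
# (67.5, 3.5707...)  -- matches sqrt(12.75) from the text
```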
For the more general case of "M" non-overlapping populations, "X"1 through "X""M", and the aggregate population formula_34,
formula_35,
where
formula_36
If the size (actual or relative to one another), mean, and standard deviation of two overlapping populations are known for the populations as well as their intersection, then the standard deviation of the overall population can still be calculated as follows:
formula_37
If two or more sets of data are being added together datapoint by datapoint, the standard deviation of the result can be calculated if the standard deviation of each data set and the covariance between each pair of data sets is known:
formula_38
For the special case where no correlation exists between any pair of data sets, then the relation reduces to the root sum of squares:
formula_39
Sample-based statistics.
Standard deviations of non-overlapping ("X" ∩ "Y" = ∅) sub-samples can be aggregated as follows if the actual size and means of each are known:
formula_40
For the more general case of "M" non-overlapping data sets, "X"1 through "X""M", and the aggregate data set formula_34,
formula_41
where
formula_42
If the size, mean, and standard deviation of two overlapping samples are known for the samples as well as their intersection, then the standard deviation of the aggregated sample can still be calculated. In general,
formula_43 | [
{
"math_id": 0,
"text": "\\sigma^2"
},
{
"math_id": 1,
"text": "\\sigma ^2"
},
{
"math_id": 2,
"text": "s^2_i"
},
{
"math_id": 3,
"text": "i = 1, \\ldots, m"
},
{
"math_id": 4,
"text": "\\frac{1}{n_i-1} \\sum_{j=1}^{n_i} \\left(y_{i, j} - \\overline{y}_i \\right)^2. "
},
{
"math_id": 5,
"text": "n_i=n"
},
{
"math_id": 6,
"text": "s^2_p"
},
{
"math_id": 7,
"text": "s_p^2=\\frac{\\sum_{i=1}^m s_i^2}{m} = \\frac{s_1^2+s_2^2+\\cdots+s_m^2}{m}."
},
{
"math_id": 8,
"text": "w_i=n_i-1"
},
{
"math_id": 9,
"text": "s_p^2=\\frac{\\sum_{i=1}^m (n_i - 1)s_i^2}{\\sum_{i=1}^m(n_i - 1)} = \\frac{(n_1 - 1)s_1^2+(n_2 - 1)s_2^2+\\cdots+(n_m - 1)s_m^2}{n_1+n_2+\\cdots+n_m - m}."
},
{
"math_id": 10,
"text": "s_p^2/\\sigma^2"
},
{
"math_id": 11,
"text": "\\chi^2(\\sum_i n_i - m)"
},
{
"math_id": 12,
"text": "(y_1 - \\bar y, \\dots, y_n - \\bar y)"
},
{
"math_id": 13,
"text": "\\Delta_{n-1}"
},
{
"math_id": 14,
"text": "(n-1)"
},
{
"math_id": 15,
"text": "\\sigma"
},
{
"math_id": 16,
"text": "(y_{1, 1} - \\bar y_1, \\dots, y_{1, n_1} - \\bar y_1, \\dots, y_{m, 1} - \\bar y_m, \\dots, y_{m, n_m} - \\bar y_m)"
},
{
"math_id": 17,
"text": "\\Delta_{n_1-1} \\times \\dots \\times \\Delta_{n_m-1}"
},
{
"math_id": 18,
"text": "s_p^2=\\frac{\\sum_{i=1}^N (n_i - 1)s_i^2}{\\sum_{i=1}^N n_i },"
},
{
"math_id": 19,
"text": "s_p^2"
},
{
"math_id": 20,
"text": "s_i^2"
},
{
"math_id": 21,
"text": "s_p^2 = \\frac{(n_1-1)s_1^2+(n_2-1)s_2^2 + \\cdots + (n_k - 1)s_k^2}{(n_1 - 1) + (n_2 - 1) + \\cdots +(n_k - 1)}"
},
{
"math_id": 22,
"text": "s_p^2 = 2.764 \\, "
},
{
"math_id": 23,
"text": "\n \\sigma_X^2 =\\frac{ \\sum_i \\left[(N_{X_i} - 1) \\sigma_{X_i}^2 + N_{X_i} \\mu_{X_i}^2\\right] - \\left[\\sum_i N_{X_i} \\right] \\mu_X^2 }{\\sum_i N_{X_i} - 1}\n"
},
{
"math_id": 24,
"text": "\n \\mu_X = \\frac{ \\sum_i N_{X_i} \\mu_{X_i} }{\\sum_i N_{X_i} }\n"
},
{
"math_id": 25,
"text": "s_p^2=\\frac{\\sum_{i=1}^k (n_i - 1)s_i^2}{\\sum_{i=1}^k n_i },"
},
{
"math_id": 26,
"text": "\\begin{align}\n\\text{Error} & = s_p^2 - \\sigma_X^2 \\\\[6pt]\n& =\\frac{\\sum_i (N_{X_i} - 1)s_i^2}{\\sum_i N_{X_i} } - \\frac{1}{\\sum_i N_{X_i} - 1} \\left( \\sum_i \\left[(N_{X_i} - 1) \\sigma_{X_i}^2 + N_{X_i} \\mu_{X_i}^2\\right] - \\left[\\sum_i N_{X_i} \\right]\\mu_X^2 \\right)\n\\end{align}"
},
{
"math_id": 27,
"text": "\n\\sum_i N_{X_i} \\approx \\sum_i N_{X_i} - 1\n"
},
{
"math_id": 28,
"text": "\\begin{align}\nE & =- \\frac{\\left( \\sum_i \\left[N_{X_i} \\mu_{X_i}^2\\right] - \\left[\\sum_i N_{X_i} \\right]\\mu_X^2 \\right)}{\\sum_i N_{X_i}}\\\\[3pt]\n& =\\mu_X^2 - \\frac{\\sum_i \\left[N_{X_i} \\mu_{X_i}^2\\right] }{\\sum_i N_{X_i}}\n\\end{align}"
},
{
"math_id": 29,
"text": "\\begin{align}\nE & =\\left[ \\frac{\\sum_i N_{X_i} \\mu_{X_i}}{\\sum_i N_{X_i}} \\right]^2 - \\frac{\\sum_i \\left[N_{X_i} \\mu_{X_i}^2\\right] }{\\sum_i N_{X_i}}\\\\[3pt]\n& =\\frac{\\left[\\sum_i N_{X_i} \\mu_{X_i} \\right]^2 \n - \\sum_i N_{X_i} \\sum_i \\left[N_{X_i} \\mu_{X_i}^2\\right]\n}{\\left[\\sum_i N_{X_i} \\right]^2}\n\\end{align}"
},
{
"math_id": 30,
"text": "\\begin{align}\n &&N_{X \\cup Y} &= N_X + N_Y - N_{X \\cap Y}\\\\\n\\end{align}"
},
{
"math_id": 31,
"text": "\\begin{align}\n X \\cap Y = \\varnothing &\\Rightarrow &N_{X \\cap Y} &= 0\\\\\n &\\Rightarrow &N_{X \\cup Y} &= N_X + N_Y\n\\end{align}"
},
{
"math_id": 32,
"text": "\\begin{align}\n \\mu_{X \\cup Y} &= \\frac{ N_X \\mu_X + N_Y \\mu_Y }{N_X + N_Y} \\\\[3pt]\n \\sigma_{X\\cup Y} &= \\sqrt{ \\frac{N_X \\sigma_X^2 + N_Y \\sigma_Y^2}{N_X + N_Y} + \\frac{N_X N_Y}{(N_X+N_Y)^2}(\\mu_X - \\mu_Y)^2 }\n \\end{align}"
},
{
"math_id": 33,
"text": "\\begin{align}\n \\mu &= \\frac{N\\cdot70 + N\\cdot65}{N + N} = \\frac{70+65}{2} = 67.5 \\\\[3pt]\n \\sigma &= \\sqrt{ \\frac{3^2 + 2^2}{2} + \\frac{(70-65)^2}{2^2} } = \\sqrt{12.75} \\approx 3.57\n \\end{align}"
},
{
"math_id": 34,
"text": " X \\,=\\, \\bigcup_i X_i"
},
{
"math_id": 35,
"text": "\\begin{align}\n \\mu_X &= \\frac{ \\sum_i N_{X_i}\\mu_{X_i} }{ \\sum_i N_{X_i} } \\\\[3pt]\n \\sigma_X &= \\sqrt{ \\frac{ \\sum_i N_{X_i}\\sigma_{X_i}^2 }{ \\sum_i N_{X_i} } + \\frac{ \\sum_{i<j} N_{X_i}N_{X_j} (\\mu_{X_i}-\\mu_{X_j})^2 }{\\big(\\sum_i N_{X_i}\\big)^2} }\n \\end{align}"
},
{
"math_id": 36,
"text": "\n X_i \\cap X_j = \\varnothing, \\quad \\forall\\ i<j.\n "
},
{
"math_id": 37,
"text": "\\begin{align}\n \\mu_{X \\cup Y} &= \\frac{1}{N_{X \\cup Y}}\\left(N_X\\mu_X + N_Y\\mu_Y - N_{X \\cap Y}\\mu_{X \\cap Y}\\right)\\\\[3pt]\n \\sigma_{X \\cup Y} &= \\sqrt{\\frac{1}{N_{X \\cup Y}}\\left(N_X[\\sigma_X^2 + \\mu _X^2] + N_Y[\\sigma_Y^2 + \\mu _Y^2] - N_{X \\cap Y}[\\sigma_{X \\cap Y}^2 + \\mu _{X \\cap Y}^2]\\right) - \\mu_{X\\cup Y}^2}\n\\end{align}"
},
{
"math_id": 38,
"text": "\\sigma_X = \\sqrt{\\sum_i{\\sigma_{X_i}^2} + 2\\sum_{i,j}\\operatorname{cov}(X_i,X_j)}"
},
{
"math_id": 39,
"text": "\\begin{align}\n &\\operatorname{cov}(X_i, X_j) = 0,\\quad \\forall i<j\\\\\n \\Rightarrow &\\;\\sigma_X = \\sqrt{\\sum_i {\\sigma_{X_i}^2}}.\n\\end{align}"
},
{
"math_id": 40,
"text": "\\begin{align}\n \\mu_{X \\cup Y} &= \\frac{1}{N_{X \\cup Y}}\\left(N_X\\mu_X + N_Y\\mu_Y\\right)\\\\[3pt]\n \\sigma_{X \\cup Y} &= \\sqrt{\\frac{1}{N_{X \\cup Y} - 1}\\left([N_X - 1]\\sigma_X^2 + N_X\\mu_X^2 + [N_Y - 1]\\sigma_Y^2 + N_Y\\mu _Y^2 - [N_X + N_Y]\\mu_{X \\cup Y}^2\\right) }\n\\end{align}"
},
{
"math_id": 41,
"text": "\\begin{align}\n \\mu_X &= \\frac{1}{\\sum_i { N_{X_i}}} \\left(\\sum_i { N_{X_i} \\mu_{X_i}}\\right)\\\\[3pt]\n \\sigma_X &= \\sqrt{\\frac{1}{\\sum_i {N_{X_i} - 1}} \\left( \\sum_i { \\left[(N_{X_i} - 1) \\sigma_{X_i}^2 + N_{X_i} \\mu_{X_i}^2\\right] } - \\left[\\sum_i {N_{X_i}}\\right]\\mu_X^2 \\right) }\n\\end{align}"
},
{
"math_id": 42,
"text": "X_i \\cap X_j = \\varnothing,\\quad \\forall i<j."
},
{
"math_id": 43,
"text": "\\begin{align}\n \\mu_{X \\cup Y} &= \\frac{1}{N_{X \\cup Y}}\\left(N_X\\mu_X + N_Y\\mu_Y - N_{X\\cap Y}\\mu_{X\\cap Y}\\right)\\\\[3pt]\n \\sigma_{X \\cup Y} &= \\sqrt{ \\frac{[N_X - 1]\\sigma_X^2 + N_X\\mu_X^2 + [N_Y - 1]\\sigma_Y^2 + N_Y\\mu _Y^2 - [N_{X \\cap Y}-1]\\sigma_{X \\cap Y}^2 - N_{X \\cap Y}\\mu_{X \\cap Y}^2 - [N_X + N_Y - N_{X \\cap Y}]\\mu_{X \\cup Y}^2}{N_{X \\cup Y} - 1} }\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10910481 |
1091100 | Similitude | Concept applicable to the testing of engineering models
Similitude is a concept applicable to the testing of engineering models. A model is said to have "similitude" with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. "Similarity" and "similitude" are interchangeable in this context.
The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met.
Similitude's main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics.
The concept of similitude is strongly tied to dimensional analysis.
Overview.
Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations aren't reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process.
Construction of a scale model, however, must be accompanied by an analysis to determine what conditions it is tested under. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design.
The following criteria are required to achieve similitude;
To satisfy the above conditions the application is analyzed;
It is often impossible to achieve strict similitude during a model test. The greater the departure from the application's operating conditions, the more difficult achieving similitude is. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters.
The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full-sized vessel nearly so well as can be done for an aircraft or submarine, each of which operates entirely within one medium.
Similitude is a term used widely in fracture mechanics relating to the strain life approach. Under given loading conditions the fatigue damage in an un-notched specimen is comparable to that of a notched specimen. Similitude suggests that the component fatigue life of the two objects will also be similar.
An example.
Consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. Find the power required for the submarine to operate at the stated speed.
A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system are:
This example has five independent variables and three fundamental units. The fundamental units are: meter, kilogram, second.
Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable.
Dimensional analysis is used to rearrange the units to form the Reynolds number (formula_0) and pressure coefficient (formula_1). These dimensionless numbers account for all the variables listed above except "F", which will be the test measurement. Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test.
Scaling laws:
formula_2
The pressure (formula_3) is not one of the five variables, but the force (formula_4) is. The pressure difference (Δformula_3) has thus been replaced with (formula_5) in the pressure coefficient. This gives a required test velocity of:
formula_6 .
A model test is then conducted at that velocity and the force that is measured in the model (formula_7) is then scaled to find the force that can be expected for the real application (formula_8):
formula_9
The power formula_10 in watts required by the submarine is then:
formula_11
Note that even though the model is scaled smaller, the water velocity needs to be increased for testing. This remarkable result shows how similitude in nature is often counterintuitive.
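The two scaling factors quoted in this example can be verified numerically. The fluid properties below (density and dynamic viscosity of sea water at 0.5 °C and of fresh water at 20 °C) are approximate values assumed for illustration; they are not given in the text, so the results differ slightly from the quoted factors.
```python
# Approximate, assumed fluid properties
rho_a, mu_a = 1028.0, 1.88e-3    # sea water at 0.5 C: kg/m^3, Pa*s
rho_m, mu_m = 998.0, 1.00e-3     # fresh water at 20 C
scale = 40.0                     # L_application / L_model
v_app = 5.0                      # application speed, m/s

# Matching the Reynolds number fixes the required model test speed
v_model = v_app * (rho_a / rho_m) * scale * (mu_m / mu_a)
print(v_model / v_app)           # about 21.9

# Matching the pressure coefficient scales the measured force back up
force_ratio = (rho_a / rho_m) * (v_app / v_model) ** 2 * scale ** 2
print(force_ratio)               # about 3.4; the exact value depends on the property data used
```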
Typical applications.
Fluid mechanics.
Similitude has been well documented for a large number of engineering problems and is the basis of many textbook formulas and dimensionless quantities. These formulas and quantities are easy to use without having to repeat the laborious task of dimensional analysis and formula derivation. Simplification of the formulas (by neglecting some aspects of similitude) is common, and needs to be reviewed by the engineer for each application.
Similitude can be used to predict the performance of a new design based on data from an existing, similar design. In this case, the model is the existing design. Another use of similitude and models is in validation of computer simulations with the ultimate goal of eliminating the need for physical models altogether.
Another application of similitude is to replace the operating fluid with a different test fluid. Wind tunnels, for example, have trouble with air liquefying in certain conditions so helium is sometimes used. Other applications may operate in dangerous or expensive fluids so the testing is carried out in a more convenient substitute.
Some common applications of similitude and associated dimensionless numbers;
Solid mechanics: structural similitude.
Similitude analysis is a powerful engineering tool to design the scaled-down structures. Although both dimensional analysis and direct use of the governing equations may be used to derive the scaling laws, the latter results in more specific scaling laws. The design of the scaled-down composite structures can be successfully carried out using the complete and partial similarities. In the design of the scaled structures under complete similarity condition, all the derived scaling laws must be satisfied between the model and prototype which yields the perfect similarity between the two scales. However, the design of a scaled-down structure which is perfectly similar to its prototype has the practical limitation, especially for laminated structures. Relaxing some of the scaling laws may eliminate the limitation of the design under complete similarity condition and yields the scaled models that are partially similar to their prototype. However, the design of the scaled structures under the partial similarity condition must follow a deliberate methodology to ensure the accuracy of the scaled structure in predicting the structural response of the prototype. Scaled models can be designed to replicate the dynamic characteristic (e.g. frequencies, mode shapes and damping ratios) of their full-scale counterparts. However, appropriate response scaling laws need to be derived to predict the dynamic response of the full-scale prototype from the experimental data of the scaled model.
References.
| [
{
"math_id": 0,
"text": " R_e"
},
{
"math_id": 1,
"text": "C_p"
},
{
"math_id": 2,
"text": " \n\\begin{align}\n &R_e = \\left(\\frac{\\rho V L}{\\mu}\\right)\n &\\longrightarrow\n &V_\\text{model} = V_\\text{application} \\times \\left(\\frac{\\rho_a}{\\rho_m}\\right)\\times \\left(\\frac{L_a}{L_m}\\right) \\times \\left(\\frac{\\mu_m}{\\mu_a}\\right)\n \\\\\n &C_p = \\left(\\frac{2 \\Delta p}{\\rho V^2}\\right), F=\\Delta p L^2\n &\\longrightarrow\n &F_\\text{application} =F_\\text{model} \\times \\left(\\frac{\\rho_a}{\\rho_m}\\right) \\times \\left(\\frac{V_a}{V_m}\\right)^2 \\times \\left(\\frac{L_a}{L_m}\\right)^2.\n\\end{align}\n"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "F/L^2"
},
{
"math_id": 6,
"text": " V_\\text{model} = V_\\text{application} \\times 21.9 "
},
{
"math_id": 7,
"text": "F_{model}"
},
{
"math_id": 8,
"text": "F_{application}"
},
{
"math_id": 9,
"text": " F_\\text{application} = F_\\text{model} \\times 3.44 "
},
{
"math_id": 10,
"text": "P"
},
{
"math_id": 11,
"text": " P [\\mathrm{W}] =F_\\text{application}\\times V_\\text{application}= F_\\text{model} [\\mathrm{N}] \\times 17.2 \\ \\mathrm{m/s}"
}
] | https://en.wikipedia.org/wiki?curid=1091100 |
1091136 | Von Mises yield criterion | Failure Theory in continuum mechanics
In continuum mechanics, the maximum distortion energy criterion (also von Mises yield criterion) states that yielding of a ductile material begins when the second invariant of deviatoric stress formula_0 reaches a critical value. It is a part of plasticity theory that mostly applies to ductile materials, such as some metals. Prior to yield, material response can be assumed to be of a linear elastic, nonlinear elastic, or viscoelastic behavior.
In materials science and engineering, the von Mises yield criterion is also formulated in terms of the von Mises stress or equivalent tensile stress, formula_1. This is a scalar value of stress that can be computed from the Cauchy stress tensor. In this case, a material is said to start yielding when the von Mises stress reaches a value known as yield strength, formula_2. The von Mises stress is used to predict yielding of materials under complex loading from the results of uniaxial tensile tests. The von Mises stress satisfies the property that two stress states with equal distortion energy have an equal von Mises stress.
Because the von Mises yield criterion is independent of the first stress invariant, formula_3, it is applicable for the analysis of plastic deformation for ductile materials such as metals, as onset of yield for these materials does not depend on the hydrostatic component of the stress tensor.
Although it has long been believed to have been formulated by James Clerk Maxwell in 1865, Maxwell only described the general conditions in a letter to William Thomson (Lord Kelvin). Richard Edler von Mises rigorously formulated it in 1913. Tytus Maksymilian Huber (1904), in a paper written in Polish, anticipated this criterion to some extent by properly relying on the distortion strain energy, not on the total strain energy as his predecessors had. Heinrich Hencky formulated the same criterion as von Mises independently in 1924. For the above reasons this criterion is also referred to as the "Maxwell–Huber–Hencky–von Mises theory".
Mathematical formulation.
Mathematically the von Mises yield criterion is expressed as:
formula_5
Here formula_6 is yield stress of the material in pure shear. As shown later in this article, at the onset of yielding, the magnitude of the shear yield stress in pure shear is √3 times lower than the tensile yield stress in the case of simple tension. Thus, we have:
formula_7
where formula_8 is tensile yield strength of the material. If we set the von Mises stress equal to the yield strength and combine the above equations, the von Mises yield criterion is written as:
formula_9
or
formula_10
Substituting formula_0 with the Cauchy stress tensor components, we get
formula_11,
where formula_12 is called deviatoric stress. This equation defines the yield surface as a circular cylinder (See Figure) whose yield curve, or intersection with the deviatoric plane, is a circle with radius formula_13, or formula_4. This implies that the yield condition is independent of hydrostatic stresses.
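As a numerical illustration, the equivalent stress can be computed directly from the deviatoric form above. The following Python/NumPy sketch uses an arbitrary example stress state; it also demonstrates the independence from hydrostatic stress just noted.
```python
import numpy as np

def von_mises(sigma):
    """Equivalent (von Mises) stress of a 3x3 Cauchy stress tensor,
    computed as sqrt(3/2 * s_ij s_ij) with s the deviatoric stress."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    return float(np.sqrt(1.5 * np.sum(s * s)))

uniaxial = np.diag([100.0, 0.0, 0.0])          # uniaxial tension, e.g. in MPa
print(von_mises(uniaxial))                     # 100.0: equals the applied stress
print(von_mises(uniaxial - 50.0 * np.eye(3)))  # still 100.0: the hydrostatic part drops out
```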
Reduced von Mises equation for different stress conditions.
Uniaxial (1D) stress.
In the case of uniaxial stress or simple tension, formula_15, the von Mises criterion simply reduces to
formula_16,
which means the material starts to yield when formula_17 reaches the "yield strength" of the material formula_2, in agreement with the definition of tensile (or compressive) yield strength.
Multi-axial (2D or 3D) stress.
An equivalent tensile stress or equivalent von-Mises stress, formula_1 is used to predict yielding of materials under multiaxial loading conditions using results from simple uniaxial tensile tests. Thus, we define
formula_18
where formula_19 are components of stress deviator tensor formula_20:
formula_21.
In this case, yielding occurs when the equivalent stress, formula_1, reaches the yield strength of the material in simple tension, formula_2. As an example, the stress state of a steel beam in compression differs from the stress state of a steel axle under torsion, even if both specimens are of the same material. In view of the stress tensor, which fully describes the stress state, this difference manifests in six degrees of freedom, because the stress tensor has six independent components. Therefore, it is difficult to tell which of the two specimens is closer to the yield point or has even reached it. However, by means of the von Mises yield criterion, which depends solely on the value of the scalar von Mises stress, i.e., one degree of freedom, this comparison is straightforward: A larger von Mises value implies that the material is closer to the yield point.
In the case of pure shear stress, formula_22, while all other formula_23, von Mises criterion becomes:
formula_24.
This means that, at the onset of yielding, the magnitude of the shear stress in pure shear is formula_25 times lower than the yield stress in the case of simple tension. The von Mises yield criterion for pure shear stress, expressed in principal stresses, is
formula_26
In the case of principal plane stress, formula_14 and formula_27, the von Mises criterion becomes:
formula_28
This equation represents an ellipse in the plane formula_29.
Physical interpretation of the von Mises yield criterion.
Hencky (1924) offered a physical interpretation of von Mises criterion suggesting that yielding begins when the elastic energy of distortion reaches a critical value. For this reason, the von Mises criterion is also known as the maximum distortion strain energy criterion. This comes from the relation between formula_0 and the elastic strain energy of distortion formula_30:
formula_31 with the elastic shear modulus formula_32.
In 1937 Arpad L. Nadai suggested that yielding begins when the octahedral shear stress reaches a critical value, i.e. the octahedral shear stress of the material at yield in simple tension. In this case, the von Mises yield criterion is also known as the maximum octahedral shear stress criterion in view of the direct proportionality that exists between formula_0 and the octahedral shear stress, formula_33, which by definition is
formula_34
thus we have
formula_35
Strain energy density consists of two components: a volumetric (dilatational) component and a distortional component. The volumetric component is responsible for a change in volume without any change in shape, while the distortional component is responsible for shear deformation or change in shape.
Practical engineering usage of the von Mises yield criterion.
As shown in the equations above, the use of the von Mises criterion as a yield criterion is exactly applicable only when the material properties are isotropic and the ratio of the shear yield strength to the tensile yield strength has the following value:
formula_36
Since no material has this ratio precisely, in practice it is necessary to use engineering judgement to decide which failure theory is appropriate for a given material. Alternatively, for the Tresca theory the same ratio is defined as 1/2.
The yield margin of safety is written as
formula_37
See also.
| [
{
"math_id": 0,
"text": "J_2"
},
{
"math_id": 1,
"text": "\\sigma_\\text{v}"
},
{
"math_id": 2,
"text": "\\sigma_\\text{y}"
},
{
"math_id": 3,
"text": "I_1"
},
{
"math_id": 4,
"text": "\\sqrt{\\frac{2}{3}} \\sigma_y"
},
{
"math_id": 5,
"text": "J_2 = k^2\\,\\!"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "k = \\frac{\\sigma_y}{\\sqrt{3}}"
},
{
"math_id": 8,
"text": "\\sigma_y"
},
{
"math_id": 9,
"text": "\\sigma_v = \\sigma_y = \\sqrt{3J_2} "
},
{
"math_id": 10,
"text": "\\sigma_v^2 = 3J_2 = 3k^2"
},
{
"math_id": 11,
"text": "\\sigma_\\text{v}^2 = \\frac{1}{2}\\left[(\\sigma_{11} - \\sigma_{22})^2 + (\\sigma_{22} - \\sigma_{33})^2 + (\\sigma_{33} - \\sigma_{11})^2 + 6\\left(\\sigma_{23}^2 + \\sigma_{31}^2 + \\sigma_{12}^2\\right)\\right] = \\frac{3}{2}s_{ij}s_{ij}"
},
{
"math_id": 12,
"text": "s"
},
{
"math_id": 13,
"text": "\\sqrt{2}k"
},
{
"math_id": 14,
"text": "\\sigma_3 = 0"
},
{
"math_id": 15,
"text": "\\sigma_1 \\neq 0, \\sigma_3 = \\sigma_2 = 0"
},
{
"math_id": 16,
"text": "\\sigma_1 = \\sigma_\\text{y}\\,\\!"
},
{
"math_id": 17,
"text": "\\sigma_1"
},
{
"math_id": 18,
"text": "\\begin{align}\n \\sigma_\\text{v}\n &= \\sqrt{3J_2} \\\\\n &= \\sqrt{\\frac{(\\sigma_{11} - \\sigma_{22})^2 + (\\sigma_{22} - \\sigma_{33})^2 + \\left(\\sigma_{33} - \\sigma_{11})^2 + 6(\\sigma_{12}^2 + \\sigma_{23}^2 + \\sigma_{31}^2\\right)}{2}} \\\\\n &= \\sqrt{\\frac{(\\sigma_1 - \\sigma_2)^2 + (\\sigma_2 - \\sigma_3)^2 + (\\sigma_3 - \\sigma_1)^2} {2}} \\\\\n &= \\sqrt{\\frac{3}{2} s_{ij}s_{ij}}\n\\end{align}\n\\,\\!"
},
{
"math_id": 19,
"text": "s_{ij}"
},
{
"math_id": 20,
"text": "\\boldsymbol{\\sigma}^\\text{dev}"
},
{
"math_id": 21,
"text": "\\boldsymbol{\\sigma}^\\text{dev} = \\boldsymbol{\\sigma} - \\frac{\\operatorname{tr}\\left(\\boldsymbol{\\sigma}\\right)}{3} \\mathbf{I}\\,\\!"
},
{
"math_id": 22,
"text": "\\sigma_{12} = \\sigma_{21}\\neq0"
},
{
"math_id": 23,
"text": "\\sigma_{ij} = 0"
},
{
"math_id": 24,
"text": "\\sigma_{12} = k = \\frac{\\sigma_y}{\\sqrt{3}}\\,\\!"
},
{
"math_id": 25,
"text": "\\sqrt{3}"
},
{
"math_id": 26,
"text": "(\\sigma_1 - \\sigma_2)^2 + (\\sigma_2 - \\sigma_3)^2 + (\\sigma_1 - \\sigma_3)^2 = 2\\sigma_y^2\\,\\!"
},
{
"math_id": 27,
"text": "\\sigma_{12} = \\sigma_{23} = \\sigma_{31} = 0"
},
{
"math_id": 28,
"text": "\\sigma_1^2 - \\sigma_1\\sigma_2 + \\sigma_2^2 = 3k^2 = \\sigma_y^2\\,\\!"
},
{
"math_id": 29,
"text": "\\sigma_1 - \\sigma_2"
},
{
"math_id": 30,
"text": "W_\\text{D}"
},
{
"math_id": 31,
"text": "W_\\text{D} = \\frac{J_2}{2G}\\,\\!"
},
{
"math_id": 32,
"text": "G = \\frac{E}{2(1 + \\nu)}\\,\\!"
},
{
"math_id": 33,
"text": "\\tau_\\text{oct}"
},
{
"math_id": 34,
"text": "\\tau_\\text{oct} = \\sqrt{\\frac{2}{3}J_2}\\,\\!"
},
{
"math_id": 35,
"text": "\\tau_\\text{oct} = \\frac{\\sqrt{2}}{3} \\sigma_\\text{y}\\,\\!"
},
{
"math_id": 36,
"text": "\\frac{F_{sy}}{F_{ty}} = \\frac{1}{\\sqrt 3} \\approx 0.577\\!"
},
{
"math_id": 37,
"text": "MS_\\text{yld} = \\frac{F_y}{\\sigma_\\text{v}} - 1"
}
] | https://en.wikipedia.org/wiki?curid=1091136 |
10912009 | Cover tree | Type of data structure
The cover tree is a type of data structure in computer science that is specifically designed to facilitate the speed-up of a nearest neighbor search. It is a refinement of the Navigating Net data structure, and related to a variety of other data structures developed for indexing intrinsically low-dimensional data.
The tree can be thought of as a hierarchy of levels with the top level containing the root point and the bottom level containing every point in the metric space. Each level "C" is associated with an integer value "i" that decrements by one as the tree is descended. Each level "C" in the cover tree has three important properties: (nesting) formula_0; (covering) for every point formula_1, there exists a point formula_2 such that the distance from formula_3 to formula_4 is less than or equal to formula_5, and exactly one such formula_4 is a parent of formula_3; and (separation) for all distinct points formula_6, the distance from formula_3 to formula_4 is greater than formula_5.
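These invariants can be checked directly on an explicit list of levels. The following Python sketch is purely illustrative: it represents each level as a set of coordinate tuples and is not an actual cover tree implementation.
```python
import math

def check_cover_tree_levels(levels):
    """Verify nesting, covering and separation for levels given as a
    dict mapping the integer scale i to the set of points C_i."""
    scales = sorted(levels, reverse=True)            # top level first
    for i, below in zip(scales, scales[1:]):         # pairs (i, i-1)
        assert levels[i] <= levels[below]            # nesting: C_i is a subset of C_(i-1)
        for p in levels[below]:                      # covering: some q in C_i is within 2**i
            assert any(math.dist(p, q) <= 2 ** i for q in levels[i])
    for i in scales:                                 # separation within each level
        for p in levels[i]:
            for q in levels[i]:
                assert p == q or math.dist(p, q) > 2 ** i

# Toy one-dimensional example (points as 1-tuples)
check_cover_tree_levels({
    2: {(0.0,)},
    1: {(0.0,), (3.0,)},
    0: {(0.0,), (3.0,), (4.5,)},
})
```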
Complexity.
Find.
Like other metric trees the cover tree allows for nearest neighbor searches in formula_7 where formula_8 is a constant associated with the dimensionality of the dataset and n is the cardinality. To compare, a basic linear search requires formula_9, which is a much worse dependence on formula_10. However, in high-dimensional metric spaces the formula_8 constant is non-trivial, which means it cannot be ignored in complexity analysis. Unlike other metric trees, the cover tree has a theoretical bound on its constant that is based on the dataset's expansion constant or doubling constant (in the case of approximate NN retrieval). The bound on search time is formula_11 where formula_12 is the expansion constant of the dataset.
Insert.
Although cover trees provide faster searches than the naive approach, this advantage must be weighed with the additional cost of maintaining the data structure. In a naive approach adding a new point to the dataset is trivial because order does not need to be preserved, but in a cover tree it can take formula_13 time. However, this is an upper-bound, and some techniques have been implemented that seem to improve the performance in practice.
Space.
The cover tree uses implicit representation to keep track of repeated points. Thus, it only requires O(n) space.
References.
| [
{
"math_id": 0,
"text": "C_{i} \\subseteq C_{i-1}"
},
{
"math_id": 1,
"text": "p \\in C_{i-1}"
},
{
"math_id": 2,
"text": "q \\in C_{i} "
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "2^{i}"
},
{
"math_id": 6,
"text": "p,q \\in C_i"
},
{
"math_id": 7,
"text": "O(\\eta*\\log{n})"
},
{
"math_id": 8,
"text": "\\eta"
},
{
"math_id": 9,
"text": "O(n)"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "O(c^{12} \\log{n})"
},
{
"math_id": 12,
"text": "c"
},
{
"math_id": 13,
"text": "O(c^6 \\log{n})"
}
] | https://en.wikipedia.org/wiki?curid=10912009 |
109122 | Darlington transistor | Multi-transistor electronics configuration
In electronics, a Darlington configuration (commonly called a Darlington pair) is a circuit consisting of two bipolar transistors with the emitter of one transistor connected to the base of the other, such that the current amplified by the first transistor is amplified further by the second one. The collectors of both transistors are connected together. This configuration has a much higher current gain than each transistor taken separately. It acts like and is often packaged as a single transistor. It was invented in 1953 by Sidney Darlington.
Behavior.
A Darlington pair behaves like a single transistor, meaning it has one base, collector, and emitter. It typically creates a high current gain (approximately the product of the gains of the two transistors, due to the fact that their β values multiply together). A general relation between the compound current gain and the individual gains is given by:
formula_0
If "β1" and "β2" are high enough (hundreds), this relation can be approximated with:
formula_1
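As a quick numerical check, the exact and approximate expressions can be compared; the beta values below are arbitrary examples rather than data for any particular device.
```python
# Compound current gain of a Darlington pair (hypothetical individual gains)
beta1, beta2 = 120, 180
beta_exact = beta1 * beta2 + beta1 + beta2   # 21900
beta_approx = beta1 * beta2                  # 21600
print(beta_exact, beta_approx)               # the approximation error is about 1.4%
```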
A typical Darlington transistor has a current gain of 1000 or more, so that only a small base current is needed to make the pair switch on much higher switched currents. Another advantage involves providing a very high input impedance for the circuit which also translates into an equal decrease in output impedance. The ease of creating this circuit also provides an advantage. It can be simply made with two separate NPN (or PNP) transistors, and is also available in a variety of single packages.
One drawback is an approximate doubling of the base–emitter voltage. Since there are two junctions between the base and emitter of the Darlington transistor, the equivalent base–emitter voltage is the sum of both base–emitter voltages:
formula_2
For silicon-based technology, where each VBEi is about 0.65 V when the device is operating in the active or saturated region, the necessary base–emitter voltage of the pair is 1.3 V.
Another drawback of the Darlington pair is its increased "saturation" voltage. The output transistor is not allowed to saturate (i.e. its base–collector junction must remain reverse-biased) because the first transistor, when saturated, establishes full (100%) parallel negative feedback between the collector and the base of the second transistor. Since the collector–emitter voltage of the second transistor is equal to the sum of its own base–emitter voltage and the collector–emitter voltage of the first transistor, both positive quantities in normal operation, it always exceeds the base–emitter voltage. (In symbols, formula_3 always.) Thus the "saturation" voltage of a Darlington transistor is one VBE (about 0.65 V in silicon) higher than the saturation voltage of a single transistor, which is typically 0.1–0.2 V in silicon. For equal collector currents, this drawback translates to an increase in the dissipated power for the Darlington transistor over a single transistor. The increased low output level can cause trouble when TTL logic circuits are driven.
Another problem is a reduction in switching speed or response, because the first transistor cannot actively inhibit the base current of the second one, making the device slow to switch off. To alleviate this, the second transistor often has a resistor of a few hundred ohms connected between its base and emitter terminals. This resistor provides a low-impedance discharge path for the charge accumulated on the base-emitter junction, allowing a faster transistor turn-off.
The Darlington pair has more phase shift at high frequencies than a single transistor and hence can more easily become unstable with negative feedback (i.e., systems that use this configuration can have poor performance due to the extra transistor delay).
Packaging.
Darlington pairs are available as integrated packages or can be made from two discrete transistors; Q1, the left-hand transistor in the diagram, can be a low power type, but normally Q2 (on the right) will need to be high power. The maximum collector current IC(max) of the pair is that of Q2. A typical integrated power device is the 2N6282, which includes a switch-off resistor and has a current gain of 2400 at IC=10 A.
Integrated devices can take less space than two individual transistors because they can use a "shared" collector. Integrated Darlington pairs come packaged singly in transistor-like packages or as an array of devices (usually eight) in an integrated circuit.
Darlington triplet.
A third transistor can be added to a Darlington pair to give even higher current gain, making a Darlington triplet. The emitter of the second transistor in the pair is connected to the base of the third, just as the emitter of the first transistor is connected to the base of the second, and the collectors of all three transistors are connected together. This gives a current gain approximately equal to the product of the gains of the three transistors. However, the increased current gain often does not justify the sensitivity and saturation current problems, so this circuit is seldom used.
Applications.
Darlington pairs are often used in the push-pull output stages of the power audio amplifiers that drive most sound systems. In a fully symmetrical push-pull circuit two Darlington pairs are connected as emitter followers driving the output from the positive and negative supply: an NPN Darlington pair connected to the positive rail providing current for positive excursions of the output, and a PNP Darlington pair connected to the negative rail providing current for negative excursions.
Before good quality PNP power transistors were available, the quasi-symmetrical push-pull circuit was used, in which only the two transistors connected to the positive supply rail were an NPN Darlington pair, and the pair from the negative rail were two more NPN transistors connected as common-emitter amplifiers.
A Darlington pair can be sensitive enough to respond to the current passed by skin contact even at safe voltages. Thus it can form the input stage of a touch-sensitive switch.
Darlington transistors can be used in high-current circuits such as the LM1084 voltage regulator. Other high-current applications could include those involving computer control of motors or relays, where the current is amplified from a safe low level of the computer output line to the amount needed by the connected device.
References.
| [
{
"math_id": 0,
"text": "\\beta_\\mathrm{Darlington} = \\beta_1 \\cdot \\beta_2 + \\beta_1 + \\beta_2"
},
{
"math_id": 1,
"text": "\\beta_\\mathrm{Darlington} \\approx \\beta_1 \\cdot \\beta_2"
},
{
"math_id": 2,
"text": "V_{BE} = V_{BE1} + V_{BE2} \\approx 2V_{BE1}\\!"
},
{
"math_id": 3,
"text": "\\mathrm{V_{CE2} = V_{CE1} + V_{BE2} > V_{BE2}} \\Rightarrow \\mathrm{V_{C2} > V_{B2}}"
}
] | https://en.wikipedia.org/wiki?curid=109122 |
1091252 | Fourier operator | The Fourier operator is the kernel of the Fredholm integral of the first kind that defines the continuous Fourier transform, and is a two-dimensional function when it corresponds to the Fourier transform of one-dimensional functions. It is complex-valued and has a constant (typically unity) magnitude everywhere. When depicted, e.g. for teaching purposes, it may be visualized by its separate real and imaginary parts, or as a colour image using a colour wheel to denote phase.
It is usually denoted by a capital letter "F" in script font (formula_0), e.g. the Fourier transform of a function formula_1 would be written using the operator as formula_2.
It may be thought of as a limiting case for when the size of the discrete Fourier transform increases without bound while its spatial resolution also increases without bound, so as to become both continuous and not necessarily periodic.
Visualization.
The Fourier operator defines a continuous two-dimensional function that extends along time and frequency axes, outwards to infinity in all four directions. This is analogous to the DFT matrix but, in this case, is continuous and infinite in extent. The value of the function at any point is such that it has the same magnitude everywhere. Along any fixed value of time, the value of the function varies as a complex exponential in frequency. Likewise along any fixed value of frequency the value of the function varies as a complex exponential in time. A portion of the infinite Fourier operator is shown in the illustration below.
Any slice parallel to either of the axes, through the Fourier operator, is a complex exponential, i.e. the real part is a cosine wave and the imaginary part is a sine wave of the same frequency as the real part.
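A small numerical sketch can make these properties concrete. The snippet below samples one common convention of the kernel, exp(−i2πft), on a finite grid; the grid limits and the sign convention are illustrative choices, not part of the definition.
```python
import numpy as np

t = np.linspace(-2.0, 2.0, 401)               # time axis
f = np.linspace(-2.0, 2.0, 401)               # frequency axis
K = np.exp(-2j * np.pi * np.outer(f, t))      # sampled Fourier kernel K(f, t)

print(np.allclose(np.abs(K), 1.0))            # constant unit magnitude everywhere

row = K[300]                                  # fixed frequency: varies as a complex
expected = np.cos(2 * np.pi * f[300] * t) - 1j * np.sin(2 * np.pi * f[300] * t)
print(np.allclose(row, expected))             # exponential along the time axis
```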
Diagonal slices through the Fourier operator give rise to chirps. Thus rotation of the Fourier operator gives rise to the fractional Fourier transform, which is related to the chirplet transform.
References.
| [
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "g(t)"
},
{
"math_id": 2,
"text": "\\mathcal{F}g(t)"
}
] | https://en.wikipedia.org/wiki?curid=1091252 |
10912739 | Johann Nikuradse | Johann Nikuradse (Georgian: , "Ivane Nikuradze") (November 20, 1894 – July 18, 1979) was a Georgia-born German engineer and physicist. His brother, Alexander Nikuradse, was also a Germany-based physicist and geopolitician known for his ties with Alfred Rosenberg and for his role in saving many Georgians during World War II.
He was born in Samtredia, Georgia (then part of the Kutais Governorate, Imperial Russia) and studied at Kutaisi. In 1919, through the recommendations of the conspicuous Georgian scholar Petre Melikishvili, he went abroad for further studies. The 1921 Sovietization of Georgia precluded his return to his homeland, and Nikuradse naturalized as a German citizen.
As a PhD student of Ludwig Prandtl in 1920, he later worked as a researcher at the Kaiser Wilhelm Institute for Flow Research (now the Max Planck Institute for Dynamics and Self-Organization). He succeeded in putting himself in Prandtl's favour and thus advanced to the position of department head. In spite of his close ties with the Nazi Party, Nikuradse came, in the early 1930s, under fire from the institute's National Socialist Factory Cell Organization, whose members accused him of spying for the Soviet Union and of stealing books from the institute. Prandtl initially defended Nikuradse, but was eventually forced to dismiss him in 1934. He then served as a professor at the University of Breslau (1934–1945), and an honorary professor at the Aachen Technical University from 1945.
Nikuradse lived mostly in Göttingen and engaged in hydrodynamics. His best known experiment was published in Germany in 1933. Nikuradse carefully measured the friction that a fluid experiences in turbulent flow through a rough pipe. He cemented grains of sand to the inner wall of a pipe and discovered that the rougher the surface, the greater the friction, and hence the greater the pressure loss.
He discovered that:
In range I, for small Reynolds number the resistance factor is the same for rough as for smooth pipes. The projections of the roughening lie entirely within the laminar layer for this range.
In range II (transition range), an increase in the resistance factor was observed for an increasing Reynolds number. The thickness of the laminar layer is here of the same order of magnitude as that of the projections.
In range III, the resistance factor is independent of the Reynolds number (quadratic law of resistance). Here all the projections of the roughening extend through the laminar layer, and the resistance factor formula_0 is given by:
formula_1
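Assuming the logarithm in this relation is base 10, as is usual for pipe-friction correlations of this form, the fully rough friction factor can be evaluated directly; the relative roughness value below is only an example.
```python
import math

def friction_factor(r_over_k):
    """Resistance factor in the fully rough regime (range III),
    with r/k the ratio of pipe radius to sand-grain roughness height."""
    return 1.0 / (1.14 + 2.0 * math.log10(r_over_k)) ** 2

print(friction_factor(100.0))   # about 0.038
```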
References.
| [
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "\\lambda = \\frac{1}{(1.14+2 log(\\frac{r}{k}))^2}"
}
] | https://en.wikipedia.org/wiki?curid=10912739 |
1091767 | Non-well-founded set theory | Theory that allows sets to be elements of themselves
Non-well-founded set theories are variants of axiomatic set theory that allow sets to be elements of themselves and otherwise violate the rule of well-foundedness. In non-well-founded set theories, the foundation axiom of ZFC is replaced by axioms implying its negation.
The study of non-well-founded sets was initiated by Dmitry Mirimanoff in a series of papers between 1917 and 1920, in which he formulated the distinction between well-founded and non-well-founded sets; he did not regard well-foundedness as an axiom. Although a number of axiomatic systems of non-well-founded sets were proposed afterwards, they did not find much in the way of applications until the book Non-Well-Founded Sets by Peter Aczel introduced hyperset theory in 1988.
The theory of non-well-founded sets has been applied in the logical modelling of non-terminating computational processes in computer science (process algebra and final semantics), linguistics and natural language semantics (situation theory), philosophy (work on the Liar Paradox), and in a different setting, non-standard analysis.
Details.
In 1917, Dmitry Mirimanoff introduced the concept of well-foundedness of a set:
A set, x0, is well-founded if it has no infinite descending membership sequence formula_0
In ZFC, there is no infinite descending ∈-sequence by the axiom of regularity. In fact, the axiom of regularity is often called the "foundation axiom" since it can be proved within ZFC− (that is, ZFC without the axiom of regularity) that well-foundedness implies regularity. In variants of ZFC without the axiom of regularity, the possibility of non-well-founded sets with set-like ∈-chains arises. For example, a set "A" such that "A" ∈ "A" is non-well-founded.
Although Mirimanoff also introduced a notion of isomorphism between possibly non-well-founded sets, he considered neither an axiom of foundation nor of anti-foundation. In 1926, Paul Finsler introduced the first axiom that allowed non-well-founded sets. After Zermelo adopted Foundation into his own system in 1930 (from previous work of von Neumann 1925–1929) interest in non-well-founded sets waned for decades. An early non-well-founded set theory was Willard Van Orman Quine’s New Foundations, although it is not merely ZF with a replacement for Foundation.
Several proofs of the independence of Foundation from the rest of ZF were published in the 1950s, particularly by Paul Bernays (1954), following an announcement of the result in an earlier paper of his from 1941, and by Ernst Specker, who gave a different proof in his Habilitationsschrift of 1951, a proof which was published in 1957. Then in 1957 Rieger's theorem was published, which gave a general method for such proofs to be carried out, rekindling some interest in non-well-founded axiomatic systems. The next axiom proposal came in a 1960 congress talk of Dana Scott (never published as a paper), proposing an alternative axiom now called SAFA. Another axiom proposed in the late 1960s was Maurice Boffa's axiom of superuniversality, described by Aczel as the highpoint of research of its decade. Boffa's idea was to make foundation fail as badly as it can (or rather, as extensionality permits): Boffa's axiom implies that every extensional set-like relation is isomorphic to the elementhood predicate on a transitive class.
A more recent approach to non-well-founded set theory, pioneered by M. Forti and F. Honsell in the 1980s, borrows from computer science the concept of a bisimulation. Bisimilar sets are considered indistinguishable and thus equal, which leads to a strengthening of the axiom of extensionality. In this context, axioms contradicting the axiom of regularity are known as anti-foundation axioms, and a set that is not necessarily well-founded is called a hyperset.
Four mutually independent anti-foundation axioms are well-known, sometimes abbreviated by the first letter in the following list:
They essentially correspond to four different notions of equality for non-well-founded sets. The first of these, AFA, is based on accessible pointed graphs (apg) and states that two hypersets are equal if and only if they can be pictured by the same apg. Within this framework, it can be shown that the so-called Quine atom, formally defined by Q={Q}, exists and is unique.
Each of the axioms given above extends the universe of the previous, so that: V ⊆ A ⊆ S ⊆ F ⊆ B. In the Boffa universe, the distinct Quine atoms form a proper class.
It is worth emphasizing that hyperset theory is an extension of classical set theory rather than a replacement: the well-founded sets within a hyperset domain conform to classical set theory.
Applications.
In published research, non-well-founded sets are also called hypersets, in parallel to the hyperreal numbers of nonstandard analysis.
The hypersets were extensively used by Jon Barwise and John Etchemendy in their 1987 book "The Liar", on the liar's paradox. The book's proposals contributed to the theory of truth. The book is also a good introduction to the topic of non-well-founded sets.
Notes.
| [
{
"math_id": 0,
"text": " \\cdots \\in x_2 \\in x_1 \\in x_0. "
}
] | https://en.wikipedia.org/wiki?curid=1091767 |
10918 | Fibonacci sequence | Numbers obtained by adding the two previous ones
In mathematics, the Fibonacci sequence is a sequence in which each number is the sum of the two preceding ones. Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers, commonly denoted "F""n". The sequence commonly starts from 0 and 1, although some authors start the sequence from 1 and 1 or sometimes (as did Fibonacci) from 1 and 2. Starting from 0 and 1, the sequence begins
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...
The Fibonacci numbers were first described in Indian mathematics as early as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from syllables of two lengths. They are named after the Italian mathematician Leonardo of Pisa, also known as Fibonacci, who introduced the sequence to Western European mathematics in his 1202 book "Liber Abaci".
Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the "Fibonacci Quarterly". Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's bracts, though they do not occur in all species.
Fibonacci numbers are also strongly related to the golden ratio: Binet's formula expresses the n-th Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are also closely related to Lucas numbers, which obey the same recurrence relation and with the Fibonacci numbers form a complementary pair of Lucas sequences.
Definition.
The Fibonacci numbers may be defined by the recurrence relation
formula_0
and
formula_1
for "n" > 1.
Under some older definitions, the value formula_2 is omitted, so that the sequence starts with formula_3 and the recurrence formula_4 is valid for "n" > 2.
The first 20 Fibonacci numbers "Fn", from "F"0 to "F"19, are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, and 4181.
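For illustration, the recurrence translates directly into a few lines of Python (the function name and output format here are arbitrary choices, not part of the definition):

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting from F0 = 0 and F1 = 1."""
    terms = []
    a, b = 0, 1              # F0, F1
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b      # apply F_n = F_{n-1} + F_{n-2}
    return terms

print(fibonacci(20))
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181]
```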
History.
India.
The Fibonacci sequence appears in Indian mathematics, in connection with Sanskrit prosody. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long (L) syllables of 2 units duration, juxtaposed with short (S) syllables of 1 unit duration. Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is "F""m"+1.
Knowledge of the Fibonacci sequence was expressed as early as Pingala (c. 450 BC–200 BC). Singh cites Pingala's cryptic formula "misrau cha" ("the two are mixed") and scholars who interpret it in context as saying that the number of patterns for m beats ("F""m"+1) is obtained by adding one [S] to the "F""m" cases and one [L] to the "F""m"−1 cases. Bharata Muni also expresses knowledge of the sequence in the "Natya Shastra" (c. 100 BC–c. 350 AD).
However, the clearest exposition of the sequence arises in the work of Virahanka (c. 700 AD), whose own work is lost, but is available in a quotation by Gopala (c. 1135):
Variations of two earlier meters [is the variation] ... For example, for [a meter of length] four, variations of meters of two [and] three being mixed, five happens. [works out examples 8, 13, 21] ... In this way, the process should be followed in all "mātrā-vṛttas" [prosodic combinations].
Hemachandra (c. 1150) is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number ... of the next mātrā-vṛtta."
Europe.
The Fibonacci sequence first appears in the book "Liber Abaci" ("The Book of Calculation", 1202) by Fibonacci where it is used to calculate the growth of rabbit populations. Fibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that: a newly born breeding pair of rabbits are put in a field; each breeding pair mates at the age of one month, and at the end of their second month they always produce another pair of rabbits; and rabbits never die, but continue breeding forever. Fibonacci posed the puzzle: how many pairs will there be in one year?
At the end of the n-th month, the number of pairs of rabbits is equal to the number of mature pairs (that is, the number of pairs in month "n" – 2) plus the number of pairs alive last month (month "n" – 1). The number in the n-th month is the n-th Fibonacci number.
The name "Fibonacci sequence" was first used by the 19th-century number theorist Édouard Lucas.
Relation to the golden ratio.
Closed-form expression.
Like every sequence defined by a homogeneous linear recurrence with constant coefficients, the Fibonacci numbers have a closed-form expression. It has become known as Binet's formula, named after French mathematician Jacques Philippe Marie Binet, though it was already known by Abraham de Moivre and Daniel Bernoulli:
formula_5
where
formula_6
is the golden ratio, and ψ is its conjugate:
formula_7
Since formula_8, this formula can also be written as
formula_9
To see the relation between the sequence and these constants, note that φ and ψ are both solutions of the equation formula_10 and thus formula_11 so the powers of φ and ψ satisfy the Fibonacci recursion. In other words,
formula_12
It follows that for any values a and b, the sequence defined by
formula_13
satisfies the same recurrence,
formula_14
If a and b are chosen so that "U"0 = 0 and "U"1 = 1 then the resulting sequence "U""n" must be the Fibonacci sequence. This is the same as requiring a and b satisfy the system of equations:
formula_15
which has solution
formula_16
producing the required formula.
Taking the starting values "U"0 and "U"1 to be arbitrary constants, a more general solution is:
formula_17
where
formula_18
Computation by rounding.
Since
formula_19 for all "n" ≥ 0, the number "F""n" is the closest integer to formula_20. Therefore, it can be found by rounding, using the nearest integer function:
formula_21
In fact, the rounding error quickly becomes very small as n grows, being less than 0.1 for "n" ≥ 4, and less than 0.01 for "n" ≥ 8. This formula is easily inverted to find an index of a Fibonacci number F:
formula_22
Instead using the floor function gives the largest index of a Fibonacci number that is not greater than F:
formula_23
where formula_24, formula_25, and formula_26.
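As an illustrative sketch, both the rounding formula and its inversion can be written in Python; floating-point precision makes them exact only for moderate indices (roughly "n" ≤ 70):

```python
import math

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2          # golden ratio

def fib_by_rounding(n):
    """F_n as the nearest integer to phi**n / sqrt(5)."""
    return round(PHI**n / SQRT5)

def index_of_fib(f):
    """Index n of a Fibonacci number f >= 1, inverting the rounding formula."""
    return round(math.log(SQRT5 * f, PHI))

print([fib_by_rounding(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(index_of_fib(89))                          # 11
```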
Magnitude.
Since "Fn" is asymptotic to formula_27, the number of digits in "F""n" is asymptotic to formula_28. As a consequence, for every integer "d" > 1 there are either 4 or 5 Fibonacci numbers with d decimal digits.
More generally, in the base b representation, the number of digits in "F""n" is asymptotic to formula_29
Limit of consecutive quotients.
Johannes Kepler observed that the ratio of consecutive Fibonacci numbers converges. He wrote that "as 5 is to 8 so is 8 to 13, practically, and as 8 is to 13, so is 13 to 21 almost", and concluded that these ratios approach the golden ratio formula_30
formula_31
This convergence holds regardless of the starting values formula_32 and formula_33, unless formula_34. This can be verified using Binet's formula. For example, the initial values 3 and 2 generate the sequence 3, 2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555, ... . The ratio of consecutive terms in this sequence shows the same convergence towards the golden ratio.
In general, formula_35, because the ratios between consecutive Fibonacci numbers approach formula_36.
Decomposition of powers.
Since the golden ratio satisfies the equation
formula_37
this expression can be used to decompose higher powers formula_38 as a linear function of lower powers, which in turn can be decomposed all the way down to a linear combination of formula_36 and 1. The resulting recurrence relationships yield Fibonacci numbers as the linear coefficients:
formula_39
This equation can be proved by induction on "n" ≥ 1:
formula_40
For formula_41, it is also the case that formula_42 and that
formula_43
These expressions are also true for "n" < 1 if the Fibonacci sequence "Fn" is extended to negative integers using the Fibonacci rule formula_44
Identification.
Binet's formula provides a proof that a positive integer x is a Fibonacci number if and only if at least one of formula_45 or formula_46 is a perfect square. This is because Binet's formula, which can be written as formula_47, can be multiplied by formula_48 and solved as a quadratic equation in formula_38 via the quadratic formula:
formula_49
Comparing this to formula_50, it follows that
formula_51
In particular, the left-hand side is a perfect square.
Matrix form.
A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is
formula_52
alternatively denoted
formula_53
which yields formula_54. The eigenvalues of the matrix A are formula_55 and formula_56 corresponding to the respective eigenvectors
formula_57
As the initial value is
formula_58
it follows that the nth term is
formula_59
From this, the nth element in the Fibonacci series may be read off directly as a closed-form expression:
formula_60
Equivalently, the same computation may be performed by diagonalization of A through use of its eigendecomposition:
formula_61
where
formula_62
The closed-form expression for the nth element in the Fibonacci series is therefore given by
formula_63
which again yields
formula_64
The matrix A has a determinant of −1, and thus it is a 2 × 2 unimodular matrix.
This property can be understood in terms of the continued fraction representation for the golden ratio φ:
formula_65
The convergents of the continued fraction for φ are ratios of successive Fibonacci numbers: "φ""n" = "F""n"+1 / "F""n" is the n-th convergent, and the ("n" + 1)-st convergent can be found from the recurrence relation "φ""n"+1 = 1 + 1 / "φ""n". The matrix formed from successive convergents of any continued fraction has a determinant of +1 or −1. The matrix representation gives the following closed-form expression for the Fibonacci numbers:
formula_66
For a given n, this matrix can be computed in "O"(log "n") arithmetic operations, using the exponentiation by squaring method.
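For illustration, a minimal Python version of this matrix computation with exponentiation by squaring (the helper names are arbitrary choices):

```python
def mat_mult(a, b):
    """Product of two 2x2 integer matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def fib_matrix(n):
    """F_n read off the (0, 1) entry of [[1, 1], [1, 0]]**n, in O(log n) matrix multiplications."""
    result = [[1, 0], [0, 1]]    # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result[0][1]

print([fib_matrix(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib_matrix(100))                      # 354224848179261915075
```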
Taking the determinant of both sides of this equation yields Cassini's identity,
formula_67
Moreover, since A"n"A"m" = A"n"+"m" for any square matrix A, the following identities can be derived (they are obtained from two different coefficients of the matrix product, and one may easily deduce the second one from the first one by changing n into "n" + 1),
formula_68
In particular, with "m" = "n",
formula_69
These last two identities provide a way to compute Fibonacci numbers recursively in "O"(log "n") arithmetic operations. This matches the time for computing the n-th Fibonacci number from the closed-form matrix formula, but with fewer redundant steps if one avoids recomputing an already computed Fibonacci number (recursion with memoization).
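A short "fast doubling" sketch built directly on these two identities (illustrative only; the recursion returns the pair ("F""n", "F""n"+1)):

```python
def fib_pair(n):
    """Return (F_n, F_{n+1}) using the doubling identities
       F_{2k}   = F_k * (2*F_{k+1} - F_k)
       F_{2k+1} = F_k**2 + F_{k+1}**2,
    which need only O(log n) arithmetic operations."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F_k, b = F_{k+1} with k = n // 2
    c = a * (2 * b - a)          # F_{2k}
    d = a * a + b * b            # F_{2k+1}
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)            # shift by one for odd n

print(fib_pair(10)[0])   # 55
print(fib_pair(50)[0])   # 12586269025
```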
Combinatorial identities.
Combinatorial proofs.
Most identities involving Fibonacci numbers can be proved using combinatorial arguments using the fact that formula_70 can be interpreted as the number of (possibly empty) sequences of 1s and 2s whose sum is formula_71. This can be taken as the definition of formula_70 with the conventions formula_2, meaning no such sequence exists whose sum is −1, and formula_72, meaning the empty sequence "adds up" to 0. In the following, formula_73 is the cardinality of a set:
formula_74
formula_75
formula_76
formula_77
formula_78
formula_79
In this manner the recurrence relation
formula_80
may be understood by dividing the formula_70 sequences into two non-overlapping sets where all sequences either begin with 1 or 2:
formula_81
Excluding the first element, the remaining terms in each sequence sum to formula_82 or formula_83 and the cardinality of each set is formula_84 or formula_85 giving a total of formula_86 sequences, showing this is equal to formula_70.
In a similar manner it may be shown that the sum of the first Fibonacci numbers up to the n-th is equal to the ("n" + 2)-th Fibonacci number minus 1. In symbols:
formula_87
This may be seen by dividing all sequences summing to formula_88 based on the location of the first 2. Specifically, each set consists of those sequences that start with formula_89, down to the last two sets formula_90, each with cardinality 1.
Following the same logic as before, by summing the cardinality of each set we see that
formula_91
... where the last two terms have the value formula_72. From this it follows that formula_92.
A similar argument, grouping the sums by the position of the first 1 rather than the first 2 gives two more identities:
formula_93
and
formula_94
In words, the sum of the first Fibonacci numbers with odd index up to formula_95 is the (2"n")-th Fibonacci number, and the sum of the first Fibonacci numbers with even index up to formula_96 is the (2"n" + 1)-th Fibonacci number minus 1.
A different trick may be used to prove
formula_97
or in words, the sum of the squares of the first Fibonacci numbers up to formula_70 is the product of the n-th and ("n" + 1)-th Fibonacci numbers. To see this, begin with a Fibonacci rectangle of size formula_98 and decompose it into squares of size formula_99; from this the identity follows by comparing areas.
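A brief numerical check of the sum identities above (illustrative only; the helper simply tabulates the sequence):

```python
def fibs(count):
    """First `count` Fibonacci numbers as a list indexed by n."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])
    return seq

F = fibs(30)
n = 12
assert sum(F[1:n + 1]) == F[n + 2] - 1                          # sum of the first n terms
assert sum(F[2*i + 1] for i in range(n)) == F[2*n]              # odd-indexed terms
assert sum(F[2*i] for i in range(1, n + 1)) == F[2*n + 1] - 1   # even-indexed terms
assert sum(f * f for f in F[1:n + 1]) == F[n] * F[n + 1]        # sum of squares
print("all four identities hold for n =", n)
```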
Symbolic method.
The sequence formula_100 can also be analyzed using the symbolic method. More precisely, this sequence corresponds to a specifiable combinatorial class. The specification of this sequence is formula_101. Indeed, as stated above, the formula_102-th Fibonacci number equals the number of combinatorial compositions (ordered partitions) of formula_71 using terms 1 and 2.
It follows that the ordinary generating function of the Fibonacci sequence, formula_103, is the rational function formula_104
Induction proofs.
Fibonacci identities often can be easily proved using mathematical induction.
For example, reconsider
formula_105
Adding formula_106 to both sides gives
formula_107
and so we have the formula for formula_88
formula_108
Similarly, add formula_109 to both sides of
formula_97
to give
formula_110
formula_111
Binet formula proofs.
The Binet formula is
formula_112
This can be used to prove Fibonacci identities.
For example, to prove that formula_113
note that the left hand side multiplied by formula_114 becomes
formula_115
as required, using the facts formula_116 and formula_117 to simplify the equations.
Other identities.
Numerous other identities can be derived using various methods. Here are some of them:
Cassini's and Catalan's identities.
Cassini's identity states that
formula_118
Catalan's identity is a generalization:
formula_119
d'Ocagne's identity.
formula_120
formula_121
where "L""n" is the n-th Lucas number. The last is an identity for doubling n; other identities of this type are
formula_122
by Cassini's identity.
formula_123
formula_124
formula_125
These can be found experimentally using lattice reduction, and are useful in setting up the special number field sieve to factorize a Fibonacci number.
More generally,
formula_126
or alternatively
formula_127
Putting "k" = 2 in this formula, one gets again the formulas of the end of above section Matrix form.
Generating function.
The generating function of the Fibonacci sequence is the power series
formula_128
This series is convergent for any complex number formula_129 satisfying formula_130 and its sum has a simple closed form:
formula_131
This can be proved by multiplying by formula_132:
formula_133
where all terms involving formula_134 for formula_135 cancel out because of the defining Fibonacci recurrence relation.
The partial fraction decomposition is given by
formula_136
where formula_137 is the golden ratio and formula_138 is its conjugate.
The related function formula_139 is the generating function for the negafibonacci numbers, and formula_140 satisfies the functional equation
formula_141
Using formula_129 equal to any of 0.01, 0.001, 0.0001, etc. lays out the first Fibonacci numbers in the decimal expansion of formula_140. For example, formula_142
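This can be checked with exact arithmetic; a tiny illustrative sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
s = Decimal(1000) / Decimal(998999)    # s(0.001) = 0.001 / (1 - 0.001 - 0.001**2)
print(s)
# The leading digits read 0.001 001 002 003 005 008 013 021 034 055 089 144 ...,
# i.e. the Fibonacci numbers in successive 3-digit blocks, until carries from
# F_17 = 1597 (more than 3 digits) start to interfere.
```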
Reciprocal sums.
Infinite sums over reciprocal Fibonacci numbers can sometimes be evaluated in terms of theta functions. For example, the sum of every odd-indexed reciprocal Fibonacci number can be written as
formula_143
and the sum of squared reciprocal Fibonacci numbers as
formula_144
If we add 1 to each Fibonacci number in the first sum, there is also the closed form
formula_145
and there is a "nested" sum of squared Fibonacci numbers giving the reciprocal of the golden ratio,
formula_146
The sum of all even-indexed reciprocal Fibonacci numbers is
formula_147
with the Lambert series formula_148 since formula_149
So the reciprocal Fibonacci constant is
formula_150
Moreover, this number has been proved irrational by Richard André-Jeannin.
Millin's series gives the identity
formula_151
which follows from the closed form for its partial sums as N tends to infinity:
formula_152
Primes and divisibility.
Divisibility properties.
Every third number of the sequence is even (a multiple of formula_153) and, more generally, every k-th number of the sequence is a multiple of "Fk". Thus the Fibonacci sequence is an example of a divisibility sequence. In fact, the Fibonacci sequence satisfies the stronger divisibility property
formula_154
where gcd is the greatest common divisor function.
In particular, any three consecutive Fibonacci numbers are pairwise coprime because both formula_155 and formula_156. That is,
formula_157
for every n.
Every prime number p divides a Fibonacci number that can be determined by the value of p modulo 5. If p is congruent to 1 or 4 modulo 5, then p divides "F""p"−1, and if p is congruent to 2 or 3 modulo 5, then p divides "F""p"+1. The remaining case is that "p" = 5, and in this case p divides "Fp".
formula_158
These cases can be combined into a single, non-piecewise formula, using the Legendre symbol:
formula_159
Primality testing.
The above formula can be used as a primality test in the sense that if
formula_160
where the Legendre symbol has been replaced by the Jacobi symbol, then this is evidence that n is a prime, and if it fails to hold, then n is definitely not a prime. If n is composite and satisfies the formula, then n is a "Fibonacci pseudoprime". When m is large – say a 500-bit number – we can calculate "F""m" (mod "n") efficiently using the matrix form. Thus
formula_161
Here the matrix power "A""m" is calculated using modular exponentiation, which can be adapted to matrices.
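A compact sketch of this test (illustrative; the helper names are mine, and 323 = 17 · 19 is the smallest Fibonacci pseudoprime):

```python
def fib_mod(m, n):
    """F_m mod n via modular exponentiation of the matrix [[1, 1], [1, 0]]."""
    def mult(x, y):
        return [[(x[0][0]*y[0][0] + x[0][1]*y[1][0]) % n, (x[0][0]*y[0][1] + x[0][1]*y[1][1]) % n],
                [(x[1][0]*y[0][0] + x[1][1]*y[1][0]) % n, (x[1][0]*y[0][1] + x[1][1]*y[1][1]) % n]]
    result, base = [[1, 0], [0, 1]], [[1, 1], [1, 0]]
    while m > 0:
        if m & 1:
            result = mult(result, base)
        base = mult(base, base)
        m >>= 1
    return result[0][1]

def jacobi_5(n):
    """Jacobi symbol (5/n) for n not divisible by 5, computed from n mod 5."""
    return 1 if n % 5 in (1, 4) else -1

def fibonacci_test(n):
    """True when n divides F_{n - (5/n)}; every prime other than 5 passes, as do Fibonacci pseudoprimes."""
    return fib_mod(n - jacobi_5(n), n) == 0

print(all(fibonacci_test(p) for p in (7, 11, 13, 101, 103)))   # True: primes pass
print(fibonacci_test(323))                                     # True: 323 = 17 * 19 is composite
```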
Fibonacci primes.
A "Fibonacci prime" is a Fibonacci number that is prime. The first few are:
2, 3, 5, 13, 89, 233, 1597, 28657, 514229, ...
Fibonacci primes with thousands of digits have been found, but it is not known whether there are infinitely many.
"F""kn" is divisible by "F""n", so, apart from "F"4 = 3, any Fibonacci prime must have a prime index. As there are arbitrarily long runs of composite numbers, there are therefore also arbitrarily long runs of composite Fibonacci numbers.
No Fibonacci number greater than "F"6 = 8 is one greater or one less than a prime number.
The only nontrivial square Fibonacci number is 144. Attila Pethő proved in 2001 that there is only a finite number of perfect power Fibonacci numbers. In 2006, Y. Bugeaud, M. Mignotte, and S. Siksek proved that 8 and 144 are the only such non-trivial perfect powers.
1, 3, 21, and 55 are the only triangular Fibonacci numbers, which was conjectured by Vern Hoggatt and proved by Luo Ming.
No Fibonacci number can be a perfect number. More generally, no Fibonacci number other than 1 can be multiply perfect, and no ratio of two Fibonacci numbers can be perfect.
Prime divisors.
With the exceptions of 1, 8 and 144 ("F"1 = "F"2, "F"6 and "F"12) every Fibonacci number has a prime factor that is not a factor of any smaller Fibonacci number (Carmichael's theorem). As a result, 8 and 144 ("F"6 and "F"12) are the only Fibonacci numbers that are the product of other Fibonacci numbers.
The divisibility of Fibonacci numbers by a prime p is related to the Legendre symbol formula_162 which is evaluated as follows:
formula_163
If p is a prime number then
formula_164
For example,
formula_165
It is not known whether there exists a prime p such that
formula_166
Such primes (if there are any) would be called Wall–Sun–Sun primes.
Also, if "p" ≠ 5 is an odd prime number then:
formula_167
Example 1. "p" = 7, in this case "p" ≡ 3 (mod 4) and we have:
formula_168
formula_169
formula_170
Example 2. "p" = 11, in this case "p" ≡ 3 (mod 4) and we have:
formula_171
formula_172
formula_173
Example 3. "p" = 13, in this case "p" ≡ 1 (mod 4) and we have:
formula_174
formula_175
formula_176
Example 4. "p" = 29, in this case "p" ≡ 1 (mod 4) and we have:
formula_177
formula_178
formula_179
For odd n, all odd prime divisors of "F""n" are congruent to 1 modulo 4, implying that all odd divisors of "F""n" (as the products of odd prime divisors) are congruent to 1 modulo 4.
For example,
formula_180
All known factors of Fibonacci numbers "F"("i") for all "i" < 50000 are collected at the relevant repositories.
Periodicity modulo "n".
If the members of the Fibonacci sequence are taken mod n, the resulting sequence is periodic with period at most 6"n". The lengths of the periods for various n form the so-called Pisano periods. Determining a general formula for the Pisano periods is an open problem, which includes as a subproblem a special instance of the problem of finding the multiplicative order of a modular integer or of an element in a finite field. However, for any particular n, the Pisano period may be found as an instance of cycle detection.
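A direct cycle-detection sketch (illustrative) that scans for the first return of the pair (0, 1) modulo "n":

```python
def pisano_period(n):
    """Period of the Fibonacci sequence modulo n (for n >= 2), found as the first
    index at which the pair (F_k mod n, F_{k+1} mod n) returns to (0, 1)."""
    a, b = 0, 1
    for length in range(1, 6 * n + 1):      # the period never exceeds 6n
        a, b = b, (a + b) % n
        if (a, b) == (0, 1):
            return length
    raise AssertionError("period not found within 6n terms")

print([pisano_period(n) for n in range(2, 11)])   # [3, 8, 6, 20, 24, 16, 12, 24, 60]
print(pisano_period(10))    # 60: the final decimal digits of F_n repeat with period 60
```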
Generalizations.
The Fibonacci sequence is one of the simplest and earliest known sequences defined by a recurrence relation, and specifically by a linear difference equation. All these sequences may be viewed as generalizations of the Fibonacci sequence. In particular, Binet's formula may be generalized to any sequence that is a solution of a homogeneous linear difference equation with constant coefficients.
Some specific examples that are close, in some sense, to the Fibonacci sequence include:
Applications.
Mathematics.
The Fibonacci numbers occur as the sums of binomial coefficients in the "shallow" diagonals of Pascal's triangle:
formula_181
This can be proved by expanding the generating function
formula_182
and collecting like terms of formula_183.
To see how the formula is used, we can arrange the sums of 1s and 2s by the number of terms present. For example, the sums adding to 5 can be grouped by whether they contain zero, one, or two 2s, giving formula_184 = 8 = "F"6 ways in total; in each group we are choosing the positions of the "k" twos among the "n"−"k"−1 terms.
These numbers also give the solution to certain enumerative problems, the most common of which is that of counting the number of ways of writing a given number n as an ordered sum of 1s and 2s (called compositions); there are "F""n"+1 ways to do this (equivalently, this is also the number of domino tilings of the formula_185 rectangle). For example, there are "F"5+1 = "F"6 = 8 ways one can climb a staircase of 5 steps, taking one or two steps at a time.
This count of 8 can be decomposed into 5 (the number of ways to climb 4 steps, followed by a single step) plus 3 (the number of ways to climb 3 steps, followed by a double step). The same reasoning is applied recursively until a single step, of which there is only one way to climb.
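A small sketch (illustrative) tying together the composition count and the Pascal-triangle diagonal sums described above:

```python
from math import comb

def compositions_1_2(n):
    """Number of ordered sums of 1s and 2s totaling n, grouped by the number k of 2s used."""
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

def diagonal_sum(n):
    """Sum of binomial coefficients along the 'shallow' diagonal of Pascal's triangle (equals F_n)."""
    return sum(comb(n - k - 1, k) for k in range((n - 1) // 2 + 1))

print([compositions_1_2(n) for n in range(1, 9)])   # [1, 2, 3, 5, 8, 13, 21, 34] = F_2, ..., F_9
print(diagonal_sum(6))                              # 8: the number of ways to climb 5 steps by 1s and 2s
```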
The Fibonacci numbers can be found in different ways among the set of binary strings, or equivalently, among the subsets of a given set.
Nature.
Fibonacci sequences appear in biological settings, such as branching in trees, arrangement of leaves on a stem, the fruitlets of a pineapple, the flowering of an artichoke, the arrangement of a pine cone, and the family tree of honeybees. Kepler pointed out the presence of the Fibonacci sequence in nature, using it to explain the (golden ratio-related) pentagonal form of some flowers. Field daisies most often have petals in counts of Fibonacci numbers. In 1830, Karl Friedrich Schimper and Alexander Braun discovered that the parastichies (spiral phyllotaxis) of plants were frequently expressed as fractions involving Fibonacci numbers.
Przemysław Prusinkiewicz advanced the idea that real instances can in part be understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars.
A model for the pattern of florets in the head of a sunflower was proposed by Helmut Vogel in 1979. This has the form
formula_187
where n is the index number of the floret and c is a constant scaling factor; the florets thus lie on Fermat's spiral. The divergence angle, approximately 137.51°, is the golden angle, dividing the circle in the golden ratio. Because this ratio is irrational, no floret has a neighbor at exactly the same angle from the center, so the florets pack efficiently. Because the rational approximations to the golden ratio are of the form "F"("j"):"F"("j" + 1), the nearest neighbors of floret number n are those at "n" ± "F"("j") for some index j, which depends on r, the distance from the center. Sunflowers and similar flowers most commonly have spirals of florets in clockwise and counter-clockwise directions in the amount of adjacent Fibonacci numbers, typically counted by the outermost range of radii.
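For illustration, Vogel's model is only a few lines of code (the scaling constant "c" and the number of florets are arbitrary choices here):

```python
import math

GOLDEN_ANGLE = 2 * math.pi / ((1 + math.sqrt(5)) / 2)**2    # about 2.39996 rad, i.e. 137.51 degrees

def floret_position(n, c=1.0):
    """Vogel's model: floret n lies at angle n * GOLDEN_ANGLE and radius c * sqrt(n) (a Fermat spiral)."""
    theta = n * GOLDEN_ANGLE
    r = c * math.sqrt(n)
    return (r * math.cos(theta), r * math.sin(theta))

print(round(math.degrees(GOLDEN_ANGLE), 2))              # 137.51
florets = [floret_position(n) for n in range(1, 256)]    # coordinates for a small model head
```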
Fibonacci numbers also appear in the ancestral pedigrees of bees (which are haplodiploids), according to the following rules: an unfertilized egg hatches a male (drone), which therefore has only a mother, while a fertilized egg hatches a female (queen or worker), which has both a mother and a father.
Thus, a male bee always has one parent, and a female bee has two. If one traces the pedigree of any male bee (1 bee), he has 1 parent (1 bee), 2 grandparents, 3 great-grandparents, 5 great-great-grandparents, and so on. This sequence of numbers of parents is the Fibonacci sequence. The number of ancestors at each level, "F""n", is the number of female ancestors, which is "F""n"−1, plus the number of male ancestors, which is "F""n"−2. This is under the unrealistic assumption that the ancestors at each level are otherwise unrelated.
It has similarly been noticed that the number of possible ancestors on the human X chromosome inheritance line at a given ancestral generation also follows the Fibonacci sequence. A male individual has an X chromosome, which he received from his mother, and a Y chromosome, which he received from his father. The male counts as the "origin" of his own X chromosome (formula_155), and at his parents' generation, his X chromosome came from a single parent (formula_188). The male's mother received one X chromosome from her mother (the son's maternal grandmother), and one from her father (the son's maternal grandfather), so two grandparents contributed to the male descendant's X chromosome (formula_153). The maternal grandfather received his X chromosome from his mother, and the maternal grandmother received X chromosomes from both of her parents, so three great-grandparents contributed to the male descendant's X chromosome (formula_189). Five great-great-grandparents contributed to the male descendant's X chromosome (formula_190), etc. (This assumes that all ancestors of a given descendant are independent, but if any genealogy is traced far enough back in time, ancestors begin to appear on multiple lines of the genealogy, until eventually a population founder appears on all lines of the genealogy.)
References.
Explanatory footnotes.
Citations.
| [
{
"math_id": 0,
"text": "F_0=0,\\quad F_1= 1,"
},
{
"math_id": 1,
"text": "F_n=F_{n-1} + F_{n-2}"
},
{
"math_id": 2,
"text": "F_0 = 0"
},
{
"math_id": 3,
"text": "F_1=F_2=1,"
},
{
"math_id": 4,
"text": "F_n=F_{n-1} + F_{n-2}"
},
{
"math_id": 5,
"text": "\nF_n = \\frac{\\varphi^n-\\psi^n}{\\varphi-\\psi} = \\frac{\\varphi^n-\\psi^n}{\\sqrt 5},\n"
},
{
"math_id": 6,
"text": "\n\\varphi = \\frac{1 + \\sqrt{5}}{2} \\approx 1.61803\\,39887\\ldots\n"
},
{
"math_id": 7,
"text": "\n\\psi = \\frac{1 - \\sqrt{5}}{2} = 1 - \\varphi = - {1 \\over \\varphi} \\approx -0.61803\\,39887\\ldots.\n"
},
{
"math_id": 8,
"text": "\\psi = -\\varphi^{-1}"
},
{
"math_id": 9,
"text": "\nF_n = \\frac{\\varphi^n - (-\\varphi)^{-n}}{\\sqrt 5} = \\frac{\\varphi^n - (-\\varphi)^{-n}}{2\\varphi - 1}.\n"
},
{
"math_id": 10,
"text": "x^2 = x + 1"
},
{
"math_id": 11,
"text": "x^n = x^{n-1} + x^{n-2},"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\varphi^n &= \\varphi^{n-1} + \\varphi^{n-2}, \\\\[3mu]\n\\psi^n &= \\psi^{n-1} + \\psi^{n-2}.\n\\end{align}"
},
{
"math_id": 13,
"text": "U_n=a \\varphi^n + b \\psi^n"
},
{
"math_id": 14,
"text": "\\begin{align}\nU_n &= a\\varphi^n + b\\psi^n \\\\[3mu]\n&= a(\\varphi^{n-1} + \\varphi^{n-2}) + b(\\psi^{n-1} + \\psi^{n-2}) \\\\[3mu]\n&= a\\varphi^{n-1} + b\\psi^{n-1} + a\\varphi^{n-2} + b\\psi^{n-2} \\\\[3mu]\n&= U_{n-1} + U_{n-2}.\n\\end{align}"
},
{
"math_id": 15,
"text": "\n\\left\\{\\begin{align} a + b &= 0 \\\\ \\varphi a + \\psi b &= 1\\end{align}\\right.\n"
},
{
"math_id": 16,
"text": "\na = \\frac{1}{\\varphi-\\psi} = \\frac{1}{\\sqrt 5},\\quad b = -a,\n"
},
{
"math_id": 17,
"text": " U_n = a\\varphi^n + b\\psi^n "
},
{
"math_id": 18,
"text": "\\begin{align}\na&=\\frac{U_1-U_0\\psi}{\\sqrt 5}, \\\\[3mu]\nb&=\\frac{U_0\\varphi-U_1}{\\sqrt 5}.\n\\end{align}"
},
{
"math_id": 19,
"text": "\\left|\\frac{\\psi^{n}}{\\sqrt 5}\\right| < \\frac{1}{2}"
},
{
"math_id": 20,
"text": "\\frac{\\varphi^n}{\\sqrt 5}"
},
{
"math_id": 21,
"text": "F_n=\\left\\lfloor\\frac{\\varphi^n}{\\sqrt 5}\\right\\rceil,\\ n \\geq 0."
},
{
"math_id": 22,
"text": "n(F) = \\left\\lfloor \\log_\\varphi \\sqrt{5}F\\right\\rceil,\\ F \\geq 1."
},
{
"math_id": 23,
"text": "n_{\\mathrm{largest}}(F) = \\left\\lfloor \\log_\\varphi \\sqrt{5}(F+1/2)\\right\\rfloor,\\ F \\geq 0,"
},
{
"math_id": 24,
"text": "\\log_\\varphi(x) = \\ln(x)/\\ln(\\varphi) = \\log_{10}(x)/\\log_{10}(\\varphi)"
},
{
"math_id": 25,
"text": "\\ln(\\varphi) = 0.481211\\ldots"
},
{
"math_id": 26,
"text": "\\log_{10}(\\varphi) = 0.208987\\ldots"
},
{
"math_id": 27,
"text": "\\varphi^n/\\sqrt5"
},
{
"math_id": 28,
"text": "n\\log_{10}\\varphi\\approx 0.2090\\, n"
},
{
"math_id": 29,
"text": "n\\log_b\\varphi = \\frac{n \\log \\varphi}{\\log b}."
},
{
"math_id": 30,
"text": "\\varphi\\colon "
},
{
"math_id": 31,
"text": "\\lim_{n\\to\\infty}\\frac{F_{n+1}}{F_n}=\\varphi."
},
{
"math_id": 32,
"text": "U_0"
},
{
"math_id": 33,
"text": "U_1"
},
{
"math_id": 34,
"text": "U_1 = -U_0/\\varphi"
},
{
"math_id": 35,
"text": "\\lim_{n\\to\\infty}\\frac{F_{n+m}}{F_n}=\\varphi^m\n"
},
{
"math_id": 36,
"text": "\\varphi"
},
{
"math_id": 37,
"text": "\\varphi^2 = \\varphi + 1,"
},
{
"math_id": 38,
"text": "\\varphi^n"
},
{
"math_id": 39,
"text": "\\varphi^n = F_n\\varphi + F_{n-1}."
},
{
"math_id": 40,
"text": "\\varphi^{n+1} = (F_n\\varphi + F_{n-1})\\varphi = F_n\\varphi^2 + F_{n-1}\\varphi = F_n(\\varphi+1) + F_{n-1}\\varphi = (F_n + F_{n-1})\\varphi + F_n = F_{n+1}\\varphi + F_n."
},
{
"math_id": 41,
"text": "\\psi = -1/\\varphi"
},
{
"math_id": 42,
"text": "\\psi^2 = \\psi + 1"
},
{
"math_id": 43,
"text": "\\psi^n = F_n\\psi + F_{n-1}."
},
{
"math_id": 44,
"text": "F_n = F_{n+2} - F_{n+1}."
},
{
"math_id": 45,
"text": "5x^2+4"
},
{
"math_id": 46,
"text": "5x^2-4"
},
{
"math_id": 47,
"text": "F_n = (\\varphi^n - (-1)^n \\varphi^{-n}) / \\sqrt{5}"
},
{
"math_id": 48,
"text": "\\sqrt{5} \\varphi^n"
},
{
"math_id": 49,
"text": "\\varphi^n = \\frac{F_n\\sqrt{5} \\pm \\sqrt{5{F_n}^2 + 4(-1)^n}}{2}."
},
{
"math_id": 50,
"text": "\\varphi^n = F_n \\varphi + F_{n-1} = (F_n\\sqrt{5} + F_n + 2 F_{n-1})/2"
},
{
"math_id": 51,
"text": "5{F_n}^2 + 4(-1)^n = (F_n + 2F_{n-1})^2\\,."
},
{
"math_id": 52,
"text": "\n{F_{k+2} \\choose F_{k+1}}\n= \\begin{pmatrix} 1 & 1 \\\\ 1 & 0 \\end{pmatrix} {F_{k+1} \\choose F_{k}} "
},
{
"math_id": 53,
"text": " \\vec F_{k+1} = \\mathbf{A} \\vec F_{k},"
},
{
"math_id": 54,
"text": "\\vec F_n = \\mathbf{A}^n \\vec F_0"
},
{
"math_id": 55,
"text": "\\varphi=\\tfrac12\\bigl(1+\\sqrt5~\\!\\bigr)"
},
{
"math_id": 56,
"text": "\\psi=-\\varphi^{-1}=\\tfrac12\\bigl(1-\\sqrt5~\\!\\bigr)"
},
{
"math_id": 57,
"text": "\\vec \\mu={\\varphi \\choose 1}, \\quad \\vec\\nu={-\\varphi^{-1} \\choose 1}."
},
{
"math_id": 58,
"text": "\\vec F_0={1 \\choose 0}=\\frac{1}{\\sqrt{5}}\\vec{\\mu}-\\frac{1}{\\sqrt{5}}\\vec{\\nu},"
},
{
"math_id": 59,
"text": "\\begin{align}\n\\vec F_n &= \\frac{1}{\\sqrt{5}}A^n\\vec\\mu-\\frac{1}{\\sqrt{5}}A^n\\vec\\nu \\\\\n&= \\frac{1}{\\sqrt{5}}\\varphi^n\\vec\\mu - \\frac{1}{\\sqrt{5}}(-\\varphi)^{-n}\\vec\\nu \\\\\n&= \\cfrac{1}{\\sqrt{5}}\\left(\\cfrac{1+\\sqrt{5}}{2}\\right)^{\\!n}{\\varphi \\choose 1} \\,-\\, \\cfrac{1}{\\sqrt{5}}\\left(\\cfrac{1-\\sqrt{5}}{2}\\right)^{\\!n}{-\\varphi^{-1}\\choose 1}.\n\\end{align}"
},
{
"math_id": 60,
"text": "\nF_n = \\cfrac{1}{\\sqrt{5}}\\left(\\cfrac{1+\\sqrt{5}}{2}\\right)^{\\!n} - \\, \\cfrac{1}{\\sqrt{5}}\\left(\\cfrac{1-\\sqrt{5}}{2}\\right)^{\\!n}.\n"
},
{
"math_id": 61,
"text": "\\begin{align} A & = S\\Lambda S^{-1}, \\\\[3mu]\n A^n & = S\\Lambda^n S^{-1},\n\\end{align}"
},
{
"math_id": 62,
"text": "\n\\Lambda=\\begin{pmatrix} \\varphi & 0 \\\\ 0 & -\\varphi^{-1}\\! \\end{pmatrix}, \\quad\nS=\\begin{pmatrix} \\varphi & -\\varphi^{-1} \\\\ 1 & 1 \\end{pmatrix}.\n"
},
{
"math_id": 63,
"text": "\\begin{align} {F_{n+1} \\choose F_n} & = A^{n} {F_1 \\choose F_0} \\\\\n & = S \\Lambda^n S^{-1} {F_1 \\choose F_0} \\\\\n & = S \\begin{pmatrix} \\varphi^n & 0 \\\\ 0 & (-\\varphi)^{-n} \\end{pmatrix} S^{-1} {F_1 \\choose F_0} \\\\\n & = \\begin{pmatrix} \\varphi & -\\varphi^{-1} \\\\ 1 & 1 \\end{pmatrix}\n \\begin{pmatrix} \\varphi^n & 0 \\\\ 0 & (-\\varphi)^{-n} \\end{pmatrix}\n \\frac{1}{\\sqrt{5}}\\begin{pmatrix} 1 & \\varphi^{-1} \\\\ -1 & \\varphi \\end{pmatrix} {1 \\choose 0},\n\\end{align}"
},
{
"math_id": 64,
"text": "F_n = \\cfrac{\\varphi^n-(-\\varphi)^{-n}}{\\sqrt{5}}."
},
{
"math_id": 65,
"text": "\\varphi = 1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\ddots}}}."
},
{
"math_id": 66,
"text": "\\begin{pmatrix} 1 & 1 \\\\ 1 & 0 \\end{pmatrix}^n = \\begin{pmatrix} F_{n+1} & F_n \\\\ F_n & F_{n-1} \\end{pmatrix}."
},
{
"math_id": 67,
"text": "(-1)^n = F_{n+1}F_{n-1} - {F_n}^2."
},
{
"math_id": 68,
"text": "\\begin{align}\n {F_m}{F_n} + {F_{m-1}}{F_{n-1}} &= F_{m+n-1}, \\\\[3mu]\n F_{m} F_{n+1} + F_{m-1} F_n &= F_{m+n} .\n\\end{align}"
},
{
"math_id": 69,
"text": "\\begin{align}\n F_{2 n-1} &= {F_n}^2 + {F_{n-1}}^2 \\\\[6mu]\n F_{2 n\\phantom{{}-1}} &= (F_{n-1}+F_{n+1})F_n \\\\[3mu]\n &= (2 F_{n-1}+F_n)F_n \\\\[3mu]\n &= (2 F_{n+1}-F_n)F_n.\n\\end{align}"
},
{
"math_id": 70,
"text": "F_n"
},
{
"math_id": 71,
"text": "n-1"
},
{
"math_id": 72,
"text": "F_1 = 1"
},
{
"math_id": 73,
"text": "|{...}|"
},
{
"math_id": 74,
"text": "F_0 = 0 = |\\{\\}|"
},
{
"math_id": 75,
"text": "F_1 = 1 = |\\{()\\}|"
},
{
"math_id": 76,
"text": "F_2 = 1 = |\\{(1)\\}|"
},
{
"math_id": 77,
"text": "F_3 = 2 = |\\{(1,1),(2)\\}|"
},
{
"math_id": 78,
"text": "F_4 = 3 = |\\{(1,1,1),(1,2),(2,1)\\}|"
},
{
"math_id": 79,
"text": "F_5 = 5 = |\\{(1,1,1,1),(1,1,2),(1,2,1),(2,1,1),(2,2)\\}|"
},
{
"math_id": 80,
"text": "F_n = F_{n-1} + F_{n-2}"
},
{
"math_id": 81,
"text": "F_n = |\\{(1,...),(1,...),...\\}| + |\\{(2,...),(2,...),...\\}|"
},
{
"math_id": 82,
"text": "n-2"
},
{
"math_id": 83,
"text": "n-3"
},
{
"math_id": 84,
"text": "F_{n-1}"
},
{
"math_id": 85,
"text": "F_{n-2}"
},
{
"math_id": 86,
"text": "F_{n-1}+F_{n-2}"
},
{
"math_id": 87,
"text": "\\sum_{i=1}^n F_i = F_{n+2} - 1"
},
{
"math_id": 88,
"text": "n+1"
},
{
"math_id": 89,
"text": "(2,...), (1,2,...), ..., "
},
{
"math_id": 90,
"text": "\\{(1,1,...,1,2)\\}, \\{(1,1,...,1)\\}"
},
{
"math_id": 91,
"text": "F_{n+2} = F_n + F_{n-1} + ... + |\\{(1,1,...,1,2)\\}| + |\\{(1,1,...,1)\\}|"
},
{
"math_id": 92,
"text": "\\sum_{i=1}^n F_i = F_{n+2}-1"
},
{
"math_id": 93,
"text": "\\sum_{i=0}^{n-1} F_{2 i+1} = F_{2 n}"
},
{
"math_id": 94,
"text": "\\sum_{i=1}^{n} F_{2 i} = F_{2 n+1}-1."
},
{
"math_id": 95,
"text": "F_{2 n-1}"
},
{
"math_id": 96,
"text": "F_{2 n}"
},
{
"math_id": 97,
"text": "\\sum_{i=1}^n F_i^2 = F_n F_{n+1}"
},
{
"math_id": 98,
"text": "F_n \\times F_{n+1}"
},
{
"math_id": 99,
"text": "F_n, F_{n-1}, ..., F_1"
},
{
"math_id": 100,
"text": "(F_n)_{n\\in\\mathbb N}"
},
{
"math_id": 101,
"text": "\\operatorname{Seq}(\\mathcal{Z+Z^2})"
},
{
"math_id": 102,
"text": "n"
},
{
"math_id": 103,
"text": "\\sum_{i=0}^\\infty F_iz^i"
},
{
"math_id": 104,
"text": "\\frac{z}{1-z-z^2}."
},
{
"math_id": 105,
"text": "\\sum_{i=1}^n F_i = F_{n+2} - 1."
},
{
"math_id": 106,
"text": "F_{n+1}"
},
{
"math_id": 107,
"text": "\\sum_{i=1}^n F_i + F_{n+1} = F_{n+1} + F_{n+2} - 1"
},
{
"math_id": 108,
"text": "\\sum_{i=1}^{n+1} F_i = F_{n+3} - 1"
},
{
"math_id": 109,
"text": "{F_{n+1}}^2"
},
{
"math_id": 110,
"text": "\\sum_{i=1}^n F_i^2 + {F_{n+1}}^2 = F_{n+1}\\left(F_n + F_{n+1}\\right)"
},
{
"math_id": 111,
"text": "\\sum_{i=1}^{n+1} F_i^2 = F_{n+1}F_{n+2}"
},
{
"math_id": 112,
"text": "\\sqrt5F_n = \\varphi^n - \\psi^n."
},
{
"math_id": 113,
"text": "\\sum_{i=1}^n F_i = F_{n+2} - 1"
},
{
"math_id": 114,
"text": "\\sqrt5"
},
{
"math_id": 115,
"text": "\n\\begin{align}\n1 +& \\varphi + \\varphi^2 + \\dots + \\varphi^n - \\left(1 + \\psi + \\psi^2 + \\dots + \\psi^n \\right)\\\\\n&= \\frac{\\varphi^{n+1}-1}{\\varphi-1} - \\frac{\\psi^{n+1}-1}{\\psi-1}\\\\\n&= \\frac{\\varphi^{n+1}-1}{-\\psi} - \\frac{\\psi^{n+1}-1}{-\\varphi}\\\\\n&= \\frac{-\\varphi^{n+2}+\\varphi + \\psi^{n+2}-\\psi}{\\varphi\\psi}\\\\\n&= \\varphi^{n+2}-\\psi^{n+2}-(\\varphi-\\psi)\\\\\n&= \\sqrt5(F_{n+2}-1)\\\\\n\\end{align}"
},
{
"math_id": 116,
"text": "\\varphi\\psi =- 1"
},
{
"math_id": 117,
"text": "\\varphi-\\psi=\\sqrt5"
},
{
"math_id": 118,
"text": "{F_n}^2 - F_{n+1}F_{n-1} = (-1)^{n-1}"
},
{
"math_id": 119,
"text": "{F_n}^2 - F_{n+r}F_{n-r} = (-1)^{n-r}{F_r}^2"
},
{
"math_id": 120,
"text": "F_m F_{n+1} - F_{m+1} F_n = (-1)^n F_{m-n}"
},
{
"math_id": 121,
"text": "F_{2 n} = {F_{n+1}}^2 - {F_{n-1}}^2 = F_n \\left (F_{n+1}+F_{n-1} \\right ) = F_nL_n"
},
{
"math_id": 122,
"text": "F_{3 n} = 2{F_n}^3 + 3 F_n F_{n+1} F_{n-1} = 5{F_n}^3 + 3 (-1)^n F_n"
},
{
"math_id": 123,
"text": "F_{3 n+1} = {F_{n+1}}^3 + 3 F_{n+1}{F_n}^2 - {F_n}^3"
},
{
"math_id": 124,
"text": "F_{3 n+2} = {F_{n+1}}^3 + 3 {F_{n+1}}^2 F_n + {F_n}^3"
},
{
"math_id": 125,
"text": "F_{4 n} = 4 F_n F_{n+1} \\left ({F_{n+1}}^2 + 2{F_n}^2 \\right ) - 3{F_n}^2 \\left ({F_n}^2 + 2{F_{n+1}}^2 \\right )"
},
{
"math_id": 126,
"text": "F_{k n+c} = \\sum_{i=0}^k {k\\choose i} F_{c-i} {F_n}^i {F_{n+1}}^{k-i}."
},
{
"math_id": 127,
"text": "F_{k n+c} = \\sum_{i=0}^k {k\\choose i} F_{c+i} {F_n}^i {F_{n-1}}^{k-i}."
},
{
"math_id": 128,
"text": "\ns(z) = \\sum_{k=0}^\\infty F_k z^k = 0 + z + z^2 + 2z^3 + 3z^4 + 5z^5 + \\dots.\n"
},
{
"math_id": 129,
"text": "z"
},
{
"math_id": 130,
"text": "|z| < 1/\\varphi,"
},
{
"math_id": 131,
"text": "s(z)=\\frac{z}{1-z-z^2}."
},
{
"math_id": 132,
"text": "(1-z-z^2)"
},
{
"math_id": 133,
"text": "\\begin{align}\n(1 - z- z^2) s(z)\n &= \\sum_{k=0}^{\\infty} F_k z^k - \\sum_{k=0}^{\\infty} F_k z^{k+1} - \\sum_{k=0}^{\\infty} F_k z^{k+2} \\\\\n &= \\sum_{k=0}^{\\infty} F_k z^k - \\sum_{k=1}^{\\infty} F_{k-1} z^k - \\sum_{k=2}^{\\infty} F_{k-2} z^k \\\\\n &= 0z^0 + 1z^1 - 0z^1 + \\sum_{k=2}^{\\infty} (F_k - F_{k-1} - F_{k-2}) z^k \\\\\n &= z,\n\\end{align}"
},
{
"math_id": 134,
"text": "z^k"
},
{
"math_id": 135,
"text": "k \\ge 2"
},
{
"math_id": 136,
"text": "s(z) = \\frac{1}{\\sqrt5}\\left(\\frac{1}{1 - \\varphi z} - \\frac{1}{1 - \\psi z}\\right)"
},
{
"math_id": 137,
"text": "\\varphi = \\tfrac12\\left(1 + \\sqrt{5}\\right)"
},
{
"math_id": 138,
"text": "\\psi = \\tfrac12\\left(1 - \\sqrt{5}\\right)"
},
{
"math_id": 139,
"text": "z \\mapsto -s\\left(-1/z\\right)"
},
{
"math_id": 140,
"text": "s(z)"
},
{
"math_id": 141,
"text": "s(z) = s\\!\\left(-\\frac{1}{z}\\right)."
},
{
"math_id": 142,
"text": "s(0.001) = \\frac{0.001}{0.998999} = \\frac{1000}{998999} = 0.001001002003005008013021\\ldots."
},
{
"math_id": 143,
"text": "\\sum_{k=1}^\\infty \\frac{1}{F_{2 k-1}} = \\frac{\\sqrt{5}}{4} \\; \\vartheta_2\\!\\left(0, \\frac{3-\\sqrt 5}{2}\\right)^2 ,"
},
{
"math_id": 144,
"text": "\\sum_{k=1}^\\infty \\frac{1}{{F_k}^2} = \\frac{5}{24} \\!\\left(\\vartheta_2\\!\\left(0, \\frac{3-\\sqrt 5}{2}\\right)^4 - \\vartheta_4\\!\\left(0, \\frac{3-\\sqrt 5}{2}\\right)^4 + 1 \\right)."
},
{
"math_id": 145,
"text": "\\sum_{k=1}^\\infty \\frac{1}{1+F_{2 k-1}} = \\frac{\\sqrt{5}}{2},"
},
{
"math_id": 146,
"text": "\\sum_{k=1}^\\infty \\frac{(-1)^{k+1}}{\\sum_{j=1}^k {F_{j}}^2} = \\frac{\\sqrt{5}-1}{2} ."
},
{
"math_id": 147,
"text": "\\sum_{k=1}^{\\infty} \\frac{1}{F_{2 k}} = \\sqrt{5} \\left(L(\\psi^2) - L(\\psi^4)\\right) "
},
{
"math_id": 148,
"text": "\\textstyle L(q) := \\sum_{k=1}^{\\infty} \\frac{q^k}{1-q^k} ,"
},
{
"math_id": 149,
"text": "\\textstyle \\frac{1}{F_{2 k}} = \\sqrt{5} \\left(\\frac{\\psi^{2 k}}{1-\\psi^{2 k}} - \\frac{\\psi^{4 k}}{1-\\psi^{4 k}} \\right)\\!."
},
{
"math_id": 150,
"text": "\\sum_{k=1}^{\\infty} \\frac{1}{F_k} = \\sum_{k=1}^\\infty \\frac{1}{F_{2 k-1}} + \\sum_{k=1}^{\\infty} \\frac {1}{F_{2 k}} = 3.359885666243 \\dots"
},
{
"math_id": 151,
"text": "\\sum_{k=0}^{\\infty} \\frac{1}{F_{2^k}} = \\frac{7 - \\sqrt{5}}{2},"
},
{
"math_id": 152,
"text": "\\sum_{k=0}^N \\frac{1}{F_{2^k}} = 3 - \\frac{F_{2^N-1}}{F_{2^N}}."
},
{
"math_id": 153,
"text": "F_3=2"
},
{
"math_id": 154,
"text": "\\gcd(F_a,F_b,F_c,\\ldots) = F_{\\gcd(a,b,c,\\ldots)}\\,"
},
{
"math_id": 155,
"text": "F_1=1"
},
{
"math_id": 156,
"text": "F_2 = 1"
},
{
"math_id": 157,
"text": "\\gcd(F_n, F_{n+1}) = \\gcd(F_n, F_{n+2}) = \\gcd(F_{n+1}, F_{n+2}) = 1"
},
{
"math_id": 158,
"text": "\\begin{cases} p =5 & \\Rightarrow p \\mid F_{p}, \\\\ p \\equiv \\pm1 \\pmod 5 & \\Rightarrow p \\mid F_{p-1}, \\\\ p \\equiv \\pm2 \\pmod 5 & \\Rightarrow p \\mid F_{p+1}.\\end{cases}"
},
{
"math_id": 159,
"text": "p \\mid F_{p \\;-\\, \\left(\\frac{5}{p}\\right)}."
},
{
"math_id": 160,
"text": "n \\mid F_{n \\;-\\, \\left(\\frac{5}{n}\\right)},"
},
{
"math_id": 161,
"text": " \\begin{pmatrix} F_{m+1} & F_m \\\\ F_m & F_{m-1} \\end{pmatrix} \\equiv \\begin{pmatrix} 1 & 1 \\\\ 1 & 0 \\end{pmatrix}^m \\pmod n."
},
{
"math_id": 162,
"text": "\\bigl(\\tfrac{p}{5}\\bigr)"
},
{
"math_id": 163,
"text": "\\left(\\frac{p}{5}\\right) = \\begin{cases} 0 & \\text{if } p = 5\\\\ 1 & \\text{if } p \\equiv \\pm 1 \\pmod 5\\\\ -1 & \\text{if } p \\equiv \\pm 2 \\pmod 5.\\end{cases}"
},
{
"math_id": 164,
"text": " F_p \\equiv \\left(\\frac{p}{5}\\right) \\pmod p \\quad \\text{and}\\quad F_{p-\\left(\\frac{p}{5}\\right)} \\equiv 0 \\pmod p."
},
{
"math_id": 165,
"text": "\\begin{align}\n\\bigl(\\tfrac{2}{5}\\bigr) &= -1, &F_3 &= 2, &F_2&=1, \\\\\n\\bigl(\\tfrac{3}{5}\\bigr) &= -1, &F_4 &= 3,&F_3&=2, \\\\\n\\bigl(\\tfrac{5}{5}\\bigr) &= 0, &F_5 &= 5, \\\\\n\\bigl(\\tfrac{7}{5}\\bigr) &= -1, &F_8 &= 21,&F_7&=13, \\\\\n\\bigl(\\tfrac{11}{5}\\bigr)& = +1, &F_{10}& = 55, &F_{11}&=89.\n\\end{align}"
},
{
"math_id": 166,
"text": "F_{p-\\left(\\frac{p}{5}\\right)} \\equiv 0 \\pmod{p^2}."
},
{
"math_id": 167,
"text": "5 {F_{\\frac{p \\pm 1}{2}}}^2 \\equiv \\begin{cases}\n\\tfrac{1}{2} \\left (5\\bigl(\\tfrac{p}{5}\\bigr)\\pm 5 \\right ) \\pmod p & \\text{if } p \\equiv 1 \\pmod 4\\\\\n\\tfrac{1}{2} \\left (5\\bigl(\\tfrac{p}{5}\\bigr)\\mp 3 \\right ) \\pmod p & \\text{if } p \\equiv 3 \\pmod 4.\n\\end{cases}"
},
{
"math_id": 168,
"text": "\\bigl(\\tfrac{7}{5}\\bigr) = -1: \\qquad \\tfrac{1}{2}\\left(5 \\bigl(\\tfrac{7}{5}\\bigr)+3 \\right ) =-1, \\quad \\tfrac{1}{2} \\left(5\\bigl(\\tfrac{7}{5}\\bigr)-3 \\right )=-4."
},
{
"math_id": 169,
"text": "F_3=2 \\text{ and } F_4=3."
},
{
"math_id": 170,
"text": "5{F_3}^2=20\\equiv -1 \\pmod {7}\\;\\;\\text{ and }\\;\\;5{F_4}^2=45\\equiv -4 \\pmod {7}"
},
{
"math_id": 171,
"text": "\\bigl(\\tfrac{11}{5}\\bigr) = +1: \\qquad \\tfrac{1}{2}\\left( 5\\bigl(\\tfrac{11}{5}\\bigr)+3 \\right)=4, \\quad \\tfrac{1}{2} \\left(5\\bigl(\\tfrac{11}{5}\\bigr)- 3 \\right)=1."
},
{
"math_id": 172,
"text": "F_5=5 \\text{ and } F_6=8."
},
{
"math_id": 173,
"text": "5{F_5}^2=125\\equiv 4 \\pmod {11} \\;\\;\\text{ and }\\;\\;5{F_6}^2=320\\equiv 1 \\pmod {11}"
},
{
"math_id": 174,
"text": "\\bigl(\\tfrac{13}{5}\\bigr) = -1: \\qquad \\tfrac{1}{2}\\left(5\\bigl(\\tfrac{13}{5}\\bigr)-5 \\right) =-5, \\quad \\tfrac{1}{2}\\left(5\\bigl(\\tfrac{13}{5}\\bigr)+ 5 \\right)=0."
},
{
"math_id": 175,
"text": "F_6=8 \\text{ and } F_7=13."
},
{
"math_id": 176,
"text": "5{F_6}^2=320\\equiv -5 \\pmod {13} \\;\\;\\text{ and }\\;\\;5{F_7}^2=845\\equiv 0 \\pmod {13}"
},
{
"math_id": 177,
"text": "\\bigl(\\tfrac{29}{5}\\bigr) = +1: \\qquad \\tfrac{1}{2}\\left(5\\bigl(\\tfrac{29}{5}\\bigr)-5 \\right)=0, \\quad \\tfrac{1}{2}\\left(5\\bigl(\\tfrac{29}{5}\\bigr)+5 \\right)=5."
},
{
"math_id": 178,
"text": "F_{14}=377 \\text{ and } F_{15}=610."
},
{
"math_id": 179,
"text": "5{F_{14}}^2=710645\\equiv 0 \\pmod {29} \\;\\;\\text{ and }\\;\\;5{F_{15}}^2=1860500\\equiv 5 \\pmod {29}"
},
{
"math_id": 180,
"text": "F_1 = 1,\\ F_3 = 2,\\ F_5 = 5,\\ F_7 = 13,\\ F_9 = {\\color{Red}34} = 2 \\cdot 17,\\ F_{11} = 89,\\ F_{13} = 233,\\ F_{15} = {\\color{Red}610} = 2 \\cdot 5 \\cdot 61."
},
{
"math_id": 181,
"text": "F_n = \\sum_{k=0}^{\\left\\lfloor\\frac{n-1}{2}\\right\\rfloor} \\binom{n-k-1}{k}."
},
{
"math_id": 182,
"text": "\\frac{x}{1-x-x^2} = x + x^2(1+x) + x^3(1+x)^2 + \\dots + x^{k+1}(1+x)^k + \\dots = \\sum\\limits_{n=0}^\\infty F_n x^n"
},
{
"math_id": 183,
"text": "x^n"
},
{
"math_id": 184,
"text": "\\textstyle \\binom{5}{0}+\\binom{4}{1}+\\binom{3}{2}"
},
{
"math_id": 185,
"text": "2\\times n"
},
{
"math_id": 186,
"text": "(F_n F_{n+3})^2 + (2 F_{n+1}F_{n+2})^2 = {F_{2 n+3}}^2."
},
{
"math_id": 187,
"text": "\\theta = \\frac{2\\pi}{\\varphi^2} n,\\ r = c \\sqrt{n}"
},
{
"math_id": 188,
"text": "F_2=1"
},
{
"math_id": 189,
"text": "F_4=3"
},
{
"math_id": 190,
"text": "F_5=5"
}
] | https://en.wikipedia.org/wiki?curid=10918 |
10920061 | Adaptive-additive algorithm | In the studies of Fourier optics, sound synthesis, stellar interferometry, optical tweezers, and diffractive optical elements (DOEs) it is often important to know the spatial frequency phase of an observed wave source. In order to reconstruct this phase the Adaptive-Additive Algorithm (or AA algorithm), which derives from a group of adaptive (input-output) algorithms, can be used. The AA algorithm is an iterative algorithm that utilizes the Fourier Transform to calculate an unknown part of a propagating wave, normally the spatial frequency phase (k space). This can be done when given the phase’s known counterparts, usually an observed amplitude (position space) and an assumed starting amplitude (k space). To find the correct phase the algorithm uses error conversion, or the error between the desired and the theoretical intensities.
The algorithm.
History.
The adaptive-additive algorithm was originally created to reconstruct the spatial frequency phase of light intensity in the study of stellar interferometry. Since then, the AA algorithm has been adapted to work in the fields of Fourier Optics by Soifer and Dr. Hill, soft matter and optical tweezers by Dr. Grier, and sound synthesis by Röbel.
Example.
Consider the problem of reconstructing the spatial frequency phase ("k"-space) for a desired intensity in the image plane ("x"-space). Assume the amplitude and the starting phase of the wave in "k"-space are formula_0 and formula_1 respectively. Fourier transform the wave in "k"-space to "x"-space.
formula_2
Then compare the transformed intensity formula_3 with the desired intensity formula_4, where
formula_5
formula_6
Check formula_7 against the convergence requirements. If the requirements are not met, then mix the transformed amplitude formula_8 with the desired amplitude formula_9.
formula_10
where "a" is mixing ratio and
formula_11.
Note that "a" is a percentage, defined on the interval 0 ≤ "a" ≤ 1.
Combine mixed amplitude with the "x"-space phase and inverse Fourier transform.
formula_12
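As a sketch only: a 1-D NumPy rendering of the whole loop described in this example, including the separation-and-repeat step stated next. The function name, the random starting phase, the error metric, and the stopping rule are illustrative assumptions rather than prescriptions from the literature.

```python
import numpy as np

def adaptive_additive(target_intensity, k_amplitude, mixing=0.5, iterations=200, tol=1e-6):
    """Reconstruct a k-space phase whose Fourier transform approximates the target x-space
    intensity, keeping the k-space amplitude fixed at `k_amplitude` on every pass."""
    rng = np.random.default_rng(0)
    phase_k = rng.uniform(0.0, 2.0 * np.pi, k_amplitude.shape)   # assumed starting phase
    desired_amp = np.sqrt(target_intensity)
    error = np.inf
    for _ in range(iterations):
        field_x = np.fft.fft(k_amplitude * np.exp(1j * phase_k))      # k-space -> x-space
        amp_x, phase_x = np.abs(field_x), np.angle(field_x)
        error = np.sqrt(np.sum((amp_x**2 - target_intensity)**2))     # intensity mismatch
        if error < tol:
            break
        mixed_amp = mixing * desired_amp + (1.0 - mixing) * amp_x     # adaptive-additive mixing
        field_k = np.fft.ifft(mixed_amp * np.exp(1j * phase_x))       # x-space -> k-space
        phase_k = np.angle(field_k)   # keep only the new phase; the k-space amplitude stays fixed
    return phase_k, error

# Toy usage: aim for a two-peak intensity pattern starting from a flat k-space amplitude.
target = np.zeros(64)
target[10] = target[30] = 1.0
phase, err = adaptive_additive(target, np.full(64, 0.125))
```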
Separate formula_13 and formula_14, and combine formula_0 with formula_14. Increase the loop counter by one (formula_15) and repeat. | [
{
"math_id": 0,
"text": "A_0"
},
{
"math_id": 1,
"text": "\\phi_n^{k}"
},
{
"math_id": 2,
"text": "A_0e^{i\\phi_n^{k}} \\xrightarrow{FFT} A_n^fe^{i\\phi_n^{f}}"
},
{
"math_id": 3,
"text": "I_n^f"
},
{
"math_id": 4,
"text": "I_0^f"
},
{
"math_id": 5,
"text": "\nI_n^f = \\left(A_n^f\\right)^2,\n"
},
{
"math_id": 6,
"text": "\n\\varepsilon = \\sqrt{\\left(I_n^f\\right)^2 - \\left(I_0\\right)^2}.\n"
},
{
"math_id": 7,
"text": "\\varepsilon"
},
{
"math_id": 8,
"text": "A_n^f"
},
{
"math_id": 9,
"text": "A^f"
},
{
"math_id": 10,
"text": "\\bar{A}^f_n = \\left[a A^f + (1-a) A_n^f\\right],"
},
{
"math_id": 11,
"text": "A^f = \\sqrt{I_0}"
},
{
"math_id": 12,
"text": "\\bar{A}^{f}e^{i\\phi_n^f} \\xrightarrow{iFFT} \\bar{A}_n^ke^{i\\phi_n^k}."
},
{
"math_id": 13,
"text": "\\bar{A}_n^k"
},
{
"math_id": 14,
"text": "\\phi^k_n"
},
{
"math_id": 15,
"text": " n \\to n + 1"
},
{
"math_id": 16,
"text": "a = 1"
},
{
"math_id": 17,
"text": "a = 0"
},
{
"math_id": 18,
"text": "\\bar{A}^k_n = A_0"
}
] | https://en.wikipedia.org/wiki?curid=10920061 |
1092012 | Quantization (image processing) | Lossy compression technique
Quantization, as applied in image processing, is a lossy compression technique achieved by compressing a range of values to a single quantum (discrete) value. When the number of discrete symbols in a given stream is reduced, the stream becomes more compressible. For example, reducing the number of colors required to represent a digital image makes it possible to reduce its file size. Specific applications include DCT data quantization in JPEG and DWT data quantization in JPEG 2000.
Color quantization.
Color quantization reduces the number of colors used in an image; this is important for displaying images on devices that support a limited number of colors and for efficiently compressing certain kinds of images. Most bitmap editors and many operating systems have built-in support for color quantization. Popular modern color quantization algorithms include the nearest color algorithm (for fixed palettes), the median cut algorithm, and an algorithm based on octrees.
It is common to combine color quantization with dithering to create an impression of a larger number of colors and eliminate banding artifacts.
Grayscale quantization.
Grayscale quantization, also known as gray level quantization, is a process in digital image processing that reduces the number of unique intensity levels (shades of gray) in an image while preserving its essential visual information. This technique is commonly used for simplifying images, reducing storage requirements, and facilitating processing operations. In grayscale quantization, an image with "N" intensity levels is converted into an image with a reduced number of levels, typically "L" levels, where "L" < "N". The process involves mapping each pixel's original intensity value to one of the new intensity levels.
One of the simplest methods of grayscale quantization is uniform quantization, where the intensity range is divided into equal intervals and each interval is represented by a single intensity value. For example, suppose an image has intensity levels ranging from 0 to 255 (8-bit grayscale). Quantizing it to 4 levels gives the intervals [0-63], [64-127], [128-191], and [192-255]; representing each interval by its midpoint intensity value results in intensity levels of 31, 95, 159, and 223 respectively.
The formula for uniform quantization is:
formula_0
where "Q"("x") is the quantized intensity value, "x" is the original intensity value, "L" is the desired number of intensity levels, and Δ = 255/("L" − 1) is the size of each quantization interval (for an 8-bit image with intensities from 0 to 255).
As a worked example, consider quantizing an original intensity value of 147 to 3 intensity levels.
Original intensity value: "x"=147
Desired intensity levels: "L"=3
We first need to calculate the size of each quantization interval:
formula_1
Using the uniform quantization formula:
formula_2
formula_3
formula_4
Rounding 191.25 to the nearest integer, we get formula_5
So, the quantized intensity value of 147 to 3 levels is 191.
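The worked example translates directly into code; a minimal sketch (the helper name and the 8-bit maximum are assumptions matching the example above):

```python
import math

def uniform_quantize(x, levels, x_max=255):
    """Uniform grayscale quantization: Q(x) = floor(x / delta) * delta + delta / 2,
    with delta = x_max / (levels - 1), rounded to the nearest integer."""
    delta = x_max / (levels - 1)
    return round(math.floor(x / delta) * delta + delta / 2)

print(uniform_quantize(147, 3))   # 191, matching the worked example
```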
Frequency quantization for image compression.
The human eye is fairly good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency (rapidly varying) brightness variation. This fact allows one to reduce the amount of information required by ignoring the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This is the main lossy operation in the whole process. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers.
As human vision is also more sensitive to luminance than chrominance, further compression can be obtained by working in a non-RGB color space which separates the two (e.g., YCbCr), and quantizing the channels separately.
Quantization matrices.
A typical video codec works by breaking the picture into discrete blocks (8×8 pixels in the case of MPEG). These blocks can then be subjected to discrete cosine transform (DCT) to calculate the frequency components, both horizontally and vertically. The resulting block (the same size as the original block) is then pre-multiplied by the quantization scale code and divided element-wise by the quantization matrix, and each resultant element is rounded to the nearest integer. The quantization matrix is designed to provide more resolution to more perceivable frequency components over less perceivable components (usually lower frequencies over high frequencies) in addition to transforming as many components as possible to 0, which can be encoded with greatest efficiency. Many video encoders (such as DivX, Xvid, and 3ivx) and compression standards (such as MPEG-2 and H.264/AVC) allow custom matrices to be used. The extent of the reduction may be varied by changing the quantizer scale code, taking up much less bandwidth than a full quantizer matrix.
This is an example of DCT coefficient matrix:
formula_6
A common quantization matrix is:
formula_7
Dividing the DCT coefficient matrix element-wise by this quantization matrix and rounding to integers results in:
formula_8
For example, using −415 (the DC coefficient) and rounding to the nearest integer
formula_9
Typically this process will result in matrices with values primarily in the upper left (low frequency) corner. By using a zig-zag ordering to group the non-zero entries and run length encoding, the quantized matrix can be much more efficiently stored than the non-quantized version.
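For illustration, the quantization step shown above can be reproduced with a few lines of NumPy (the matrices are copied from the example; only the variable names are new):

```python
import numpy as np

dct_block = np.array([      # the DCT coefficient matrix from the example above
    [-415, -33, -58,  35,  58, -51, -15, -12],
    [   5, -34,  49,  18,  27,   1,  -5,   3],
    [ -46,  14,  80, -35, -50,  19,   7, -18],
    [ -53,  21,  34, -20,   2,  34,  36,  12],
    [   9,  -2,   9,  -5, -32, -15,  45,  37],
    [  -8,  15, -16,   7,  -8,  11,   4,   7],
    [  19, -28,  -2, -26,  -2,   7, -44, -21],
    [  18,  25, -12, -44,  35,  48, -37,  -3]])

quant_matrix = np.array([   # the common quantization matrix from the example above
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

quantized = np.rint(dct_block / quant_matrix).astype(int)   # element-wise division, then rounding
print(quantized[0, 0])   # -26: round(-415 / 16), the DC coefficient worked out above
```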
References.
| [
{
"math_id": 0,
"text": "Q(x) = \\left \\lfloor \\frac{x}{\\Delta} \\right \\rfloor \\times \\Delta + \\frac{\\Delta}{2} "
},
{
"math_id": 1,
"text": "\\Delta = \\frac{255}{L-1} = \\frac{255}{3-1} = 127.5"
},
{
"math_id": 2,
"text": "Q(x) = \\left \\lfloor \\frac{147}{127.5} \\right \\rfloor \\times 127.5 + \\frac{127.5}{2}"
},
{
"math_id": 3,
"text": "Q(x) = \\left \\lfloor 1.15294118 \\right \\rfloor \\times 127.5 + \\frac{127.5}{2}"
},
{
"math_id": 4,
"text": "Q(x) = 1 \\times 127.5 + 63.75 = 191.25"
},
{
"math_id": 5,
"text": "Q(x) = 191"
},
{
"math_id": 6,
"text": "\n\\begin{bmatrix}\n -415 & -33 & -58 & 35 & 58 & -51 & -15 & -12 \\\\\n 5 & -34 & 49 & 18 & 27 & 1 & -5 & 3 \\\\\n -46 & 14 & 80 & -35 & -50 & 19 & 7 & -18 \\\\\n -53 & 21 & 34 & -20 & 2 & 34 & 36 & 12 \\\\\n 9 & -2 & 9 & -5 & -32 & -15 & 45 & 37 \\\\\n -8 & 15 & -16 & 7 & -8 & 11 & 4 & 7 \\\\\n 19 & -28 & -2 & -26 & -2 & 7 & -44 & -21 \\\\\n 18 & 25 & -12 & -44 & 35 & 48 & -37 & -3\n\\end{bmatrix}\n"
},
{
"math_id": 7,
"text": "\n\\begin{bmatrix}\n 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\\\\n 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\\\\n 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\\\\n 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\\\\n 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\\\\n 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\\\\n 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\\\\n 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99\n\\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\n\\begin{bmatrix}\n -26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\\\\n 0 & -3 & 4 & 1 & 1 & 0 & 0 & 0 \\\\\n -3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\\\\n -4 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\n\\end{bmatrix}\n"
},
{
"math_id": 9,
"text": "\n\\mathrm{round}\n\\left(\n \\frac{-415}{16}\n\\right)\n=\n\\mathrm{round}\n\\left(\n -25.9375\n\\right)\n=-26\n"
}
] | https://en.wikipedia.org/wiki?curid=1092012 |
1092110 | Bulk modulus | Resistance of a material to uniform pressure
The bulk modulus (formula_0 or formula_1 or formula_2) of a substance is a measure of the resistance of a substance to bulk compression. It is defined as the ratio of the infinitesimal pressure increase to the resulting "relative" decrease of the volume.
Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear stress, and Young's modulus describes the response to normal (lengthwise stretching) stress. For a fluid, only the bulk modulus is meaningful. For a complex anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full generalized Hooke's law. The reciprocal of the bulk modulus at fixed temperature is called the isothermal compressibility.
Definition.
The bulk modulus formula_0 (which is usually positive) can be formally defined by the equation
formula_3
where formula_4 is pressure, formula_5 is the initial volume of the substance, and formula_6 denotes the derivative of pressure with respect to volume. Since the volume is inversely proportional to the density, it follows that
formula_7
where formula_8 is the initial density and formula_9 denotes the derivative of pressure with respect to density. The inverse of the bulk modulus gives a substance's compressibility. Generally the bulk modulus is defined at constant temperature as the isothermal bulk modulus, but can also be defined at constant entropy as the adiabatic bulk modulus.
Thermodynamic relation.
Strictly speaking, the bulk modulus is a thermodynamic quantity, and in order to specify a bulk modulus it is necessary to specify how the pressure varies during compression: constant-temperature (isothermal formula_10), constant-entropy (isentropic formula_11), and other variations are possible. Such distinctions are especially relevant for gases.
For an ideal gas, an isentropic process has:
formula_12
where formula_13 is the heat capacity ratio. Therefore, the isentropic bulk modulus formula_11 is given by
formula_14
Similarly, an isothermal process of an ideal gas has:
formula_15
Therefore, the isothermal bulk modulus formula_10 is given by
formula_16 .
When the gas is not ideal, these equations give only an approximation of the bulk modulus. In a fluid, the bulk modulus formula_0 and the density formula_8 determine the speed of sound formula_17 (pressure waves), according to the Newton-Laplace formula
formula_18
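As a numerical illustration of the Newton–Laplace formula, the following Python sketch uses commonly quoted approximate values for water; these inputs are illustrative and not data from this article.

```python
import math

K = 2.2e9      # approximate bulk modulus of water, Pa (illustrative value)
rho = 1000.0   # approximate density of water, kg/m^3 (illustrative value)

c = math.sqrt(K / rho)   # Newton-Laplace formula
print(c)                 # roughly 1.48e3 m/s, close to the measured speed of sound in water
```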
In solids, formula_11 and formula_10 have very similar values. Solids can also sustain transverse waves: for these materials one additional elastic modulus, for example the shear modulus, is needed to determine wave speeds.
Measurement.
It is possible to measure the bulk modulus using powder diffraction under applied pressure.
For a fluid, the bulk modulus characterises the fluid's resistance to a change in volume under applied pressure.
Selected values.
A material with a bulk modulus of 35 GPa loses one percent of its volume when subjected to an external pressure of 0.35 GPa (assuming a constant or weakly pressure-dependent bulk modulus).
Microscopic origin.
Interatomic potential and linear elasticity.
Since linear elasticity is a direct result of interatomic interaction, it is related to the extension/compression of bonds. It can then be derived from the interatomic potential for crystalline materials. First, let us examine the potential energy of two interacting atoms. When very far apart, they will feel an attraction towards each other. As they approach each other, their potential energy will decrease. On the other hand, when two atoms are very close to each other, their total energy will be very high due to repulsive interaction. Together, these potentials guarantee an interatomic distance that achieves a minimal energy state. This occurs at some distance "a"0, where the total force is zero:
formula_19
where "U" is the interatomic potential and "r" is the interatomic distance. This means the atoms are in equilibrium.
To extend the two-atom approach to a solid, consider a simple model, say, a 1-D array of one element with interatomic distance "a", where the equilibrium distance is "a"0. Its potential energy–interatomic distance relationship has a similar form to the two-atom case, reaching its minimum at "a"0. The Taylor expansion of this is:
formula_20
At equilibrium, the first derivative is 0, so the dominant term is the quadratic one. When the displacement is small, the higher-order terms can be omitted. The expression becomes:
formula_21
formula_22
which is clearly linear elasticity.
Note that the derivation is done considering two neighboring atoms, so Hooke's coefficient is:
formula_23
This form can be easily extended to the 3-D case, with the volume per atom (Ω) in place of the interatomic distance:
formula_24
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "K=-V\\frac{dP}{dV} ,"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "dP/dV"
},
{
"math_id": 7,
"text": "K=\\rho \\frac{dP}{d\\rho} ,"
},
{
"math_id": 8,
"text": "\\rho"
},
{
"math_id": 9,
"text": "dP/d\\rho"
},
{
"math_id": 10,
"text": "K_T"
},
{
"math_id": 11,
"text": "K_S"
},
{
"math_id": 12,
"text": "PV^\\gamma=\\text{constant} \\Rightarrow P\\propto \\left(\\frac{1}{V}\\right)^\\gamma\\propto \\rho ^\\gamma,\n"
},
{
"math_id": 13,
"text": "\\gamma "
},
{
"math_id": 14,
"text": "K_S=\\gamma P."
},
{
"math_id": 15,
"text": "PV=\\text{constant} \\Rightarrow P\\propto \\frac{1}{V} \\propto \\rho,\n"
},
{
"math_id": 16,
"text": "K_T = P "
},
{
"math_id": 17,
"text": "c"
},
{
"math_id": 18,
"text": "c=\\sqrt{\\frac{K}{\\rho}}."
},
{
"math_id": 19,
"text": "F=-{\\partial U \\over \\partial r}=0"
},
{
"math_id": 20,
"text": "u(a)=u(a_0)+ \\left({\\partial u \\over \\partial r} \\right )_{r=a_0}(a-a_0)+{1 \\over 2} \\left ({\\partial^2\\over\\partial r^2}u \\right )_{r=a_0}(a-a_0)^2+O \\left ((a-a_0)^3 \\right )"
},
{
"math_id": 21,
"text": "u(a)=u(a_0)+{1 \\over 2} \\left ({\\partial^2\\over\\partial r^2}u \\right )_{r=a_0}(a-a_0)^2"
},
{
"math_id": 22,
"text": "F(a)=-{\\partial u \\over \\partial r}= \\left ({\\partial^2\\over\\partial r^2}u \\right )_{r=a_0}(a-a_0)"
},
{
"math_id": 23,
"text": "K=a_0{dF \\over dr}=a_0 \\left ({\\partial^2\\over\\partial r^2}u \\right )_{r=a_0}"
},
{
"math_id": 24,
"text": "K=\\Omega_0 \\left ({\\partial^2\\over\\partial \\Omega^2}u \\right )_{\\Omega=\\Omega_0}"
}
] | https://en.wikipedia.org/wiki?curid=1092110 |
1092282 | Negative frequency | In mathematics, the concept of signed frequency (negative and positive frequency) can indicate both the rate and sense of rotation; it can be as simple as a wheel rotating clockwise or counterclockwise. The rate is expressed in units such as revolutions (a.k.a. "cycles") per second (hertz) or radian/second (where 1 cycle corresponds to 2"π" radians).
Example: Mathematically speaking, the vector formula_0 has a positive frequency of +1 radian per unit of time and rotates counterclockwise around the unit circle, while the vector formula_1 has a negative frequency of -1 radian per unit of time, which rotates clockwise instead.
Sinusoids.
Let ω > 0 be an angular frequency with units of radians/second. Then the function f(t) = −ωt + θ has slope −ω, which is called a negative frequency. But when the function is used as the argument of a cosine operator, the result is indistinguishable from cos("ωt" − "θ"). Similarly, sin(−"ωt" + "θ") is indistinguishable from sin("ωt" − "θ" + "π"). Thus any sinusoid can be represented in terms of a positive frequency. The sign of the underlying phase slope is ambiguous.
The ambiguity is resolved when the cosine and sine operators can be observed simultaneously, because cos("ωt" + "θ") leads sin("ωt" + "θ") by 1⁄4 cycle (i.e. π⁄2 radians) when "ω" > 0, and lags by 1⁄4 cycle when "ω" < 0. Similarly, a vector, (cos "ωt", sin "ωt"), rotates counter-clockwise if "ω" > 0, and clockwise if "ω" < 0. Therefore, the sign of formula_2 is also preserved in the complex-valued function:
e^{iωt} = cos(ωt) + i·sin(ωt),     (Eq.1)
whose corollary is:
cos(ωt) = ½·e^{iωt} + ½·e^{−iωt}.     (Eq.2)
In Eq.1 the second term is an addition to formula_3 that resolves the ambiguity. In Eq.2 the second term looks like an addition, but it is actually a cancellation that reduces a 2-dimensional vector to just one dimension, resulting in the ambiguity. Eq.2 also shows why the Fourier transform has responses at both formula_4 even though formula_2 can have only one sign. What the false response does is enable the inverse transform to distinguish between a real-valued function and a complex one.
Applications.
Simplifying the Fourier transform.
Perhaps the best-known application of negative frequency is the formula:
formula_5
which is a measure of the energy in function formula_6 at frequency formula_7 When evaluated for a continuum of argument formula_8 the result is called the Fourier transform.
For instance, consider the function:
formula_9
And:
formula_10
Note that although most functions do not comprise infinite duration sinusoids, that idealization is a common simplification to facilitate understanding.
Looking at the first term of this result, when formula_11 the negative frequency formula_12 cancels the positive frequency, leaving just the constant coefficient formula_13 (because formula_14), which causes the infinite integral to diverge. At other values of formula_2 the residual oscillations cause the integral to converge to zero. This "idealized" Fourier transform is usually written as:
formula_15
For realistic durations, the divergences and convergences are less extreme, and smaller non-zero convergences (spectral leakage) appear at many other frequencies, but the concept of negative frequency still applies. Fourier's original formulation (the sine transform and the cosine transform) requires an integral for the cosine and another for the sine, and the resultant trigonometric expressions are often less tractable than complex exponential expressions. (see Analytic signal and Phasor)
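The distinction can also be checked numerically. In the following NumPy sketch (the sample rate, duration and tone frequency are arbitrary illustrative choices), a complex exponential yields a single spectral response at +8 Hz, while a real cosine yields responses at both +8 Hz and −8 Hz.

```python
import numpy as np

fs, f0, n = 64, 8, 64                  # sample rate (Hz), tone frequency (Hz), sample count
t = np.arange(n) / fs                  # exactly one second of samples

spectrum_complex = np.fft.fft(np.exp(2j * np.pi * f0 * t))   # complex exponential
spectrum_real = np.fft.fft(np.cos(2 * np.pi * f0 * t))       # real cosine
freqs = np.fft.fftfreq(n, d=1 / fs)

print(freqs[np.abs(spectrum_complex) > 1])   # [ 8.]      only the positive frequency responds
print(freqs[np.abs(spectrum_real) > 1])      # [ 8. -8.]  responses at both signed frequencies
```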
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\cos(t), \\sin(t))"
},
{
"math_id": 1,
"text": "(\\cos(-t), \\sin(-t))"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "\\cos(\\omega t)"
},
{
"math_id": 4,
"text": "\\pm \\omega,"
},
{
"math_id": 5,
"text": "\\hat{f}(\\omega) = \\int_{-\\infty}^\\infty f(t) e^{-i \\omega t} dt,"
},
{
"math_id": 6,
"text": "f(t)"
},
{
"math_id": 7,
"text": "\\omega."
},
{
"math_id": 8,
"text": "\\omega,"
},
{
"math_id": 9,
"text": "f(t)= A_1 e^{i \\omega_1 t}+A_2 e^{i \\omega_2 t},\\ \\forall\\ t \\in \\mathbb R,\\ \\omega_1 > 0,\\ \\omega_2 > 0."
},
{
"math_id": 10,
"text": "\n\\begin{align}\n\\hat{f}(\\omega) &= \\int_{-\\infty}^\\infty [A_1 e^{i \\omega_1 t}+A_2 e^{i \\omega_2 t}] e^{-i \\omega t} dt\\\\\n &= \\int_{-\\infty}^\\infty A_1 e^{i \\omega_1 t} e^{-i \\omega t} dt + \\int_{-\\infty}^\\infty A_2 e^{i \\omega_2 t} e^{-i \\omega t} dt\\\\\n &= \\int_{-\\infty}^\\infty A_1 e^{i (\\omega_1 -\\omega) t}dt + \\int_{-\\infty}^\\infty A_2 e^{i (\\omega_2 -\\omega) t} dt\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\\omega = \\omega_1,"
},
{
"math_id": 12,
"text": "-\\omega_1"
},
{
"math_id": 13,
"text": "A_1"
},
{
"math_id": 14,
"text": "e^{i 0 t} = e^0 = 1"
},
{
"math_id": 15,
"text": "\\hat{f}(\\omega) = 2\\pi A_1 \\delta(\\omega - \\omega_1) + 2\\pi A_2 \\delta(\\omega - \\omega_2)."
}
] | https://en.wikipedia.org/wiki?curid=1092282 |
1092524 | Scorer's function | In mathematics, Scorer's functions are special functions studied by Scorer and denoted Gi("x") and Hi("x").
Hi("x") and -Gi("x") solve the equation
formula_0
and are given by
formula_1
formula_2
The Scorer's functions can also be defined in terms of Airy functions:
formula_3 | [
{
"math_id": 0,
"text": "y''(x) - x\\ y(x) = \\frac{1}{\\pi}"
},
{
"math_id": 1,
"text": "\\mathrm{Gi}(x) = \\frac{1}{\\pi} \\int_0^\\infty \\sin\\left(\\frac{t^3}{3} + xt\\right)\\, dt,"
},
{
"math_id": 2,
"text": "\\mathrm{Hi}(x) = \\frac{1}{\\pi} \\int_0^\\infty \\exp\\left(-\\frac{t^3}{3} + xt\\right)\\, dt."
},
{
"math_id": 3,
"text": "\\begin{align}\n \\mathrm{Gi}(x) &{}= \\mathrm{Bi}(x) \\int_x^\\infty \\mathrm{Ai}(t) \\, dt + \\mathrm{Ai}(x) \\int_0^x \\mathrm{Bi}(t) \\, dt, \\\\\n \\mathrm{Hi}(x) &{}= \\mathrm{Bi}(x) \\int_{-\\infty}^x \\mathrm{Ai}(t) \\, dt - \\mathrm{Ai}(x) \\int_{-\\infty}^x \\mathrm{Bi}(t) \\, dt. \\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=1092524 |
10925394 | Swizzling (computer graphics) | Vector computation used in computer graphics
In computer graphics, swizzles are a class of operations that transform vectors by rearranging components. Swizzles can also project from a vector of one dimensionality to a vector of another dimensionality, such as taking a three-dimensional vector and creating a two-dimensional or five-dimensional vector using components from the original vector. For example, if codice_0, where the components are codice_1, codice_2, codice_3, and codice_4 respectively, you could compute codice_5, whereupon codice_6 would equal codice_7. Additionally, one could create a two-dimensional vector with A.wx or a five-dimensional vector with A.xyzwx. Combining vectors and swizzling can be employed in various ways. This is common in GPGPU applications.
In terms of linear algebra, this is equivalent to multiplying by a matrix whose rows are standard basis vectors. If formula_0, then swizzling formula_1 as above looks like
formula_2
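Outside of shader languages, the same swizzle can be emulated with index arrays. The following NumPy sketch is an illustrative stand-in for built-in GPU swizzle syntax and reproduces the example above.

```python
import numpy as np

A = np.array([1, 2, 3, 4])        # components x, y, z, w
wwxy = A[[3, 3, 0, 1]]            # the swizzle A.wwxy via fancy indexing
print(wwxy)                       # [4 4 1 2], as in the matrix product above

# The same permutation as multiplication by a matrix whose rows are standard basis vectors
M = np.eye(4)[[3, 3, 0, 1]]
print(M @ A)                      # [4. 4. 1. 2.]
```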
See also.
Z-order curve
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A=(1,2,3,4)^T"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "A.wwxy = \\begin{bmatrix} 0&0&0&1 \\\\ 0&0&0&1 \\\\ 1&0&0&0 \\\\ 0&1&0&0 \\end{bmatrix}\\begin{bmatrix} 1 \\\\ 2 \\\\ 3 \\\\ 4\\end{bmatrix} = \\begin{bmatrix} 4 \\\\ 4 \\\\ 1 \\\\ 2\\end{bmatrix}."
}
] | https://en.wikipedia.org/wiki?curid=10925394 |
1092713 | Approximation theory | Theory of getting acceptably close inexact mathematical calculations
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by "best" and "simpler" will depend on the application.
A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials.
One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations.
The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function.
Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment.
Optimal polynomials.
Once the domain (typically an interval) and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of formula_0, where "P"("x") is the approximating polynomial, "f"("x") is the actual function, and "x" varies over the chosen interval. For well-behaved functions, there exists an "N"th-degree polynomial that will lead to an error curve that oscillates back and forth between formula_1 and formula_2 a total of "N"+2 times, giving a worst-case error of formula_3. It is seen that there exists an "N"th-degree polynomial that can interpolate "N"+1 points in a curve. That such a polynomial is always optimal is asserted by the equioscillation theorem. It is possible to make contrived functions "f"("x") for which no such polynomial exists, but these occur rarely in practice.
For example, the graphs shown to the right show the error in approximating log(x) and exp(x) for "N" = 4. The red curves, for the optimal polynomial, are level, that is, they oscillate between formula_1 and formula_2 exactly. In each case, the number of extrema is "N"+2, that is, 6. Two of the extrema are at the end points of the interval, at the left and right edges of the graphs.
To prove this is true in general, suppose "P" is a polynomial of degree "N" having the property described, that is, it gives rise to an error function that has "N" + 2 extrema, of alternating signs and equal magnitudes. The red graph to the right shows what this error function might look like for "N" = 4. Suppose "Q"("x") (whose error function is shown in blue to the right) is another "N"-degree polynomial that is a better approximation to "f" than "P". In particular, "Q" is closer to "f" than "P" for each value "xi" where an extreme of "P"−"f" occurs, so
formula_4
When a maximum of "P"−"f" occurs at "xi", then
formula_5
And when a minimum of "P"−"f" occurs at "xi", then
formula_6
So, as can be seen in the graph, ["P"("x") − "f"("x")] − ["Q"("x") − "f"("x")] must alternate in sign for the "N" + 2 values of "xi". But ["P"("x") − "f"("x")] − ["Q"("x") − "f"("x")] reduces to "P"("x") − "Q"("x") which is a polynomial of degree "N". This function changes sign at least "N"+1 times so, by the Intermediate value theorem, it has "N"+1 zeroes, which is impossible for a nonzero polynomial of degree "N" (and "P" − "Q" cannot be identically zero, since "Q" was assumed to be a strictly better approximation than "P").
Chebyshev approximation.
One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree.
This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions.
If one calculates the coefficients in the Chebyshev expansion for a function:
formula_7
and then cuts off the series after the formula_8 term, one gets an "N"th-degree polynomial approximating "f"("x").
The reason this polynomial is nearly optimal is that, for functions with rapidly converging power series, if the series is cut off after some term, the total error arising from the cutoff is close to the first term after the cutoff. That is, the first term after the cutoff dominates all later terms. The same is true if the expansion is in terms of Chebyshev polynomials. If a Chebyshev expansion is cut off after formula_8, the error will take a form close to a multiple of formula_9. The Chebyshev polynomials have the property that they are level – they oscillate between +1 and −1 in the interval [−1, 1]. formula_9 has "N"+2 level extrema. This means that the error between "f"("x") and its Chebyshev expansion out to formula_8 is close to a level function with "N"+2 extrema, so it is close to the optimal "N"th-degree polynomial.
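As a concrete illustration, the following NumPy sketch computes the truncated Chebyshev expansion of "f"("x") = exp("x") on [−1, 1] for "N" = 4 and reports its worst-case error; the number of sample angles (200) is an arbitrary discretization choice.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f, N, M = np.exp, 4, 200
theta = (np.arange(M) + 0.5) * np.pi / M      # sample angles
x = np.cos(theta)                             # Chebyshev nodes on [-1, 1]
fx = f(x)

# Discrete approximation of the Chebyshev expansion coefficients c_0..c_N
c = np.array([(2.0 / M) * np.sum(fx * np.cos(k * theta)) for k in range(N + 1)])
c[0] /= 2.0                                   # the T_0 term carries a factor of 1/2

xs = np.linspace(-1, 1, 1001)
err = C.chebval(xs, c) - f(xs)                # error of the truncated expansion
print(np.max(np.abs(err)))                    # small, and close to the optimal worst-case error discussed above
```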
In the graphs above, the blue error function is sometimes better than (inside of) the red function, but sometimes worse, meaning that it is not quite the optimal polynomial. The discrepancy is less serious for the exp function, which has an extremely rapidly converging power series, than for the log function.
Chebyshev approximation is the basis for Clenshaw–Curtis quadrature, a numerical integration technique.
Remez's algorithm.
The Remez algorithm (sometimes spelled Remes) is used to produce an optimal polynomial "P"("x") approximating a given function "f"("x") over a given interval. It is an iterative algorithm that converges to a polynomial that has an error function with "N"+2 level extrema. By the theorem above, that polynomial is optimal.
Remez's algorithm uses the fact that one can construct an "N"th-degree polynomial that leads to level and alternating error values, given "N"+2 test points.
Given "N"+2 test points formula_10, formula_11, ... formula_12 (where formula_10 and formula_12 are presumably the end points of the interval of approximation), these equations need to be solved:
formula_13
The right-hand sides alternate in sign.
That is,
formula_14
Since formula_10, ..., formula_12 were given, all of their powers are known, and formula_15, ..., formula_16 are also known. That means that the above equations are just "N"+2 linear equations in the "N"+2 variables formula_17, formula_18, ..., formula_19, and formula_3. Given the test points formula_10, ..., formula_12, one can solve this system to get the polynomial "P" and the number formula_3.
The graph below shows an example of this, producing a fourth-degree polynomial approximating formula_20 over [−1, 1]. The test points were set at
−1, −0.7, −0.1, +0.4, +0.9, and 1. Those values are shown in green. The resultant value of formula_3 is 4.43 × 10−4
The error graph does indeed take on the values formula_21 at the six test points, including the end points, but those points are not extrema. If the four interior test points had been extrema (that is, the function "P"("x") − "f"("x") had maxima or minima there), the polynomial would be optimal.
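The linear-algebra step just described can be written out directly. The following NumPy sketch solves the system for the quoted test points and "f"("x") = exp("x"); it performs only this single solve, not the full exchange iteration.

```python
import numpy as np

f, N = np.exp, 4
x = np.array([-1.0, -0.7, -0.1, 0.4, 0.9, 1.0])   # the N+2 test points quoted above

# Unknowns are P_0..P_N and epsilon.  Row i encodes
#   P_0 + P_1*x_i + ... + P_N*x_i^N - (-1)^i * epsilon = f(x_i)
V = np.vander(x, N + 1, increasing=True)           # columns 1, x, x^2, ..., x^N
signs = (-1.0) ** np.arange(N + 2)                 # alternating +1, -1, ...
A = np.column_stack([V, -signs])
sol = np.linalg.solve(A, f(x))
P, eps = sol[:N + 1], sol[N + 1]

print(P)           # coefficients of the degree-4 polynomial
print(abs(eps))    # compare with the value 4.43e-4 quoted above
```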
The second step of Remez's algorithm consists of moving the test points to the approximate locations where the error function had its actual local maxima or minima. For example, one can tell from looking at the graph that the point at −0.1 should have been at about −0.28. The way to do this in the algorithm is to use a single round of Newton's method. Since one knows the first and second derivatives of "P"("x") − "f"("x"), one can calculate approximately how far a test point has to be moved so that the derivative will be zero.
Calculating the derivatives of a polynomial is straightforward. One must also be able to calculate the first and second derivatives of "f"("x"). Remez's algorithm requires an ability to calculate formula_22, formula_23, and formula_24 to extremely high precision. The entire algorithm must be carried out to higher precision than the desired precision of the result.
After moving the test points, the linear equation part is repeated, getting a new polynomial, and Newton's method is used again to move the test points again. This sequence is continued until the result converges to the desired accuracy. The algorithm converges very rapidly. Convergence is quadratic for well-behaved functions—if the test points are within formula_25 of the correct result, they will be approximately within formula_26 of the correct result after the next round.
Remez's algorithm is typically started by choosing the extrema of the Chebyshev polynomial formula_9 as the initial points, since the final error function will be similar to that polynomial.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "\\mid P(x) - f(x)\\mid"
},
{
"math_id": 1,
"text": "+\\varepsilon"
},
{
"math_id": 2,
"text": "-\\varepsilon"
},
{
"math_id": 3,
"text": "\\varepsilon"
},
{
"math_id": 4,
"text": "|Q(x_i)-f(x_i)|<|P(x_i)-f(x_i)|."
},
{
"math_id": 5,
"text": "Q(x_i)-f(x_i)\\le|Q(x_i)-f(x_i)|<|P(x_i)-f(x_i)|=P(x_i)-f(x_i),"
},
{
"math_id": 6,
"text": "f(x_i)-Q(x_i)\\le|Q(x_i)-f(x_i)|<|P(x_i)-f(x_i)|=f(x_i)-P(x_i)."
},
{
"math_id": 7,
"text": "f(x) \\sim \\sum_{i=0}^\\infty c_i T_i(x)"
},
{
"math_id": 8,
"text": "T_N"
},
{
"math_id": 9,
"text": "T_{N+1}"
},
{
"math_id": 10,
"text": "x_1"
},
{
"math_id": 11,
"text": "x_2"
},
{
"math_id": 12,
"text": "x_{N+2}"
},
{
"math_id": 13,
"text": "\\begin{align}\n P(x_1) - f(x_1) &= +\\varepsilon \\\\\n P(x_2) - f(x_2) &= -\\varepsilon \\\\\n P(x_3) - f(x_3) &= +\\varepsilon \\\\\n &\\ \\ \\vdots \\\\\n P(x_{N+2}) - f(x_{N+2}) &= \\pm\\varepsilon.\n\\end{align}"
},
{
"math_id": 14,
"text": "\\begin{align}\n P_0 + P_1 x_1 + P_2 x_1^2 + P_3 x_1^3 + \\dots + P_N x_1^N - f(x_1) &= +\\varepsilon \\\\\n P_0 + P_1 x_2 + P_2 x_2^2 + P_3 x_2^3 + \\dots + P_N x_2^N - f(x_2) &= -\\varepsilon \\\\\n &\\ \\ \\vdots\n\\end{align}"
},
{
"math_id": 15,
"text": "f(x_1)"
},
{
"math_id": 16,
"text": "f(x_{N+2})"
},
{
"math_id": 17,
"text": "P_0"
},
{
"math_id": 18,
"text": "P_1"
},
{
"math_id": 19,
"text": "P_N"
},
{
"math_id": 20,
"text": "e^x"
},
{
"math_id": 21,
"text": "\\pm \\varepsilon"
},
{
"math_id": 22,
"text": "f(x)\\,"
},
{
"math_id": 23,
"text": "f'(x)\\,"
},
{
"math_id": 24,
"text": "f''(x)\\,"
},
{
"math_id": 25,
"text": "10^{-15}"
},
{
"math_id": 26,
"text": "10^{-30}"
}
] | https://en.wikipedia.org/wiki?curid=1092713 |
10928467 | Publicly verifiable secret sharing | In cryptography, a secret sharing scheme is publicly verifiable (PVSS) if it is a verifiable secret sharing scheme and if any party (not just the participants of the protocol) can verify the validity of the shares distributed by the dealer.
<templatestyles src="Template:Blockquote/styles.css" />In verifiable secret sharing (VSS) the object is to resist malicious players, such as
(i) a dealer sending incorrect shares to some or all of the participants, and
(ii) participants submitting incorrect shares during the reconstruction protocol, cf. [CGMA85].
In publicly verifiable secret sharing (PVSS), as introduced by Stadler [Sta96], it is an explicit goal that not just the participants can verify their
own shares, but that anybody can verify that the participants received correct shares.
Hence, it is explicitly required that (i) can be verified publicly.
The method introduced here according to the paper by Chunming Tang, Dingyi Pei, Zhuo Liu, and Yong He is non-interactive and maintains this property throughout the protocol.
Initialization.
The PVSS scheme dictates an initialization process in which:
Excluding the initialization process, the PVSS consists of two phases:
Distribution.
1. Distribution of secret formula_0 shares is performed by the dealer formula_1, which does the following:
(note: formula_6 guarantees that the reconstruction protocol will result in the same formula_0.)
2. Verification of the shares:
Reconstruction.
1. Decryption of the shares:
(note: fault-tolerance can be allowed here: it is not required that all participants succeed in decrypting formula_4, as long as a qualified set of participants succeeds in decrypting formula_8).
2. Pooling the shares:
Chaum-Pedersen Protocol.
A proposed protocol proving formula_10:
Denote this protocol as: formula_16
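The following toy Python sketch illustrates one interactive run of this equality-of-discrete-logarithms proof. The tiny group (the order-11 subgroup of the integers modulo 23), the generators, and the secret exponent are all illustrative choices, not values from the text.

```python
import random

# Toy parameters (illustrative only): p = 2q + 1 with q = 11; g1 and g2 both
# generate the subgroup of order q of the integers modulo p.
p, q = 23, 11
g1, g2 = 2, 3
x = 7                                   # prover's secret exponent
h1, h2 = pow(g1, x, p), pow(g2, x, p)   # public values with log_g1(h1) = log_g2(h2) = x

# Prover commits to a random r
r = random.randrange(1, q)
t1, t2 = pow(g1, r, p), pow(g2, r, p)

# Verifier sends a random challenge c; prover answers with s = r - c*x (mod q)
c = random.randrange(q)
s = (r - c * x) % q

# Verifier recomputes g^s * h^c in both groups and compares with the commitments
ok = (pow(g1, s, p) * pow(h1, c, p) % p == t1 and
      pow(g2, s, p) * pow(h2, c, p) % p == t2)
print(ok)   # True
```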
A generalization of formula_16 is denoted as formula_17, where formula_18 and formula_19:
The Chaum-Pedersen protocol is an interactive method and needs some modification to be used in a non-interactive way:
Replacing the randomly chosen formula_27 by a 'secure hash' function with formula_28 as input value. | [
{
"math_id": 0,
"text": "s"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "s_{1},s_{2}...s_{n}"
},
{
"math_id": 3,
"text": "P_{1},P_{2}...P_{n}"
},
{
"math_id": 4,
"text": "E_{i}(s_{i})"
},
{
"math_id": 5,
"text": "P_{i}"
},
{
"math_id": 6,
"text": "\\mathrm{proof}_{D}"
},
{
"math_id": 7,
"text": "E_{i}"
},
{
"math_id": 8,
"text": "s_{i}"
},
{
"math_id": 9,
"text": "\\mathrm{proof}_{P_{i}}"
},
{
"math_id": 10,
"text": "\\log_{_{g1}}h_{1} = \\log_{_{g2}}h_{2}"
},
{
"math_id": 11,
"text": "r\\in \\boldsymbol{\\Zeta}_{q^*} "
},
{
"math_id": 12,
"text": "c \\in _{R}\\boldsymbol{\\Zeta}_{q} "
},
{
"math_id": 13,
"text": "s = r - c x(\\mathrm{mod}\\,q)"
},
{
"math_id": 14,
"text": "\\alpha_1 = g_{1}^s h_{1}^c "
},
{
"math_id": 15,
"text": "\\alpha_2 = g_{2}^s h_{2}^c "
},
{
"math_id": 16,
"text": "\\mathrm{dleq}(g_1, h_1,g_2,h_2)"
},
{
"math_id": 17,
"text": "\\text{dleq}(X, Y, g_1, h_1,g_2,h_2)"
},
{
"math_id": 18,
"text": "X = g_{1}^{x_1}g_{2}^{x_2}"
},
{
"math_id": 19,
"text": "Y = h_{1}^{x_1}h_{2}^{x_2}"
},
{
"math_id": 20,
"text": " r_1,r_2 \\in Z_{q}^*"
},
{
"math_id": 21,
"text": "t_1 = g_{1}^{r_1} g_{2}^{r_2}"
},
{
"math_id": 22,
"text": "t_2 = h_{1}^{r_1} h_{2}^{r_2}"
},
{
"math_id": 23,
"text": "s_1 = r_1 - cx_1 (\\mathrm{mod}\\,q) "
},
{
"math_id": 24,
"text": "s_2 = r_2 - cx_2 (\\mathrm{mod}\\,q) "
},
{
"math_id": 25,
"text": "t_1 = X^c g_{1}^{s_1}g_{2}^{s_2}"
},
{
"math_id": 26,
"text": "t_2 = Y^c h_{1}^{s_1}h_{2}^{s_2}"
},
{
"math_id": 27,
"text": "c"
},
{
"math_id": 28,
"text": "m"
}
] | https://en.wikipedia.org/wiki?curid=10928467 |
10930438 | Spatial heterogeneity | Spatial heterogeneity is a property generally ascribed to a landscape or to a population. It refers to the uneven distribution of various concentrations of each species within an area. A landscape with spatial heterogeneity has a mix of concentrations of multiple species of plants or animals (biological), or of terrain formations (geological), or environmental characteristics (e.g. rainfall, temperature, wind) filling its area. A population showing spatial heterogeneity is one where various concentrations of individuals of this species are unevenly distributed across an area; nearly synonymous with "patchily distributed."
Terminology.
Spatial heterogeneity can be re-phrased as a scaling hierarchy of far more small things than large ones. It has been formulated as a scaling law.
Spatial heterogeneity or scaling hierarchy can be measured or quantified by the ht-index, a number induced by head/tail breaks.
Examples.
Environments with a wide variety of habitats such as different topographies, soil types, and climates are able to accommodate a greater number of species. The leading scientific explanation for this is that when organisms can finely subdivide a landscape into unique suitable habitats, more species can coexist in a landscape without competition, a phenomenon termed "niche partitioning." Spatial heterogeneity is a concept parallel to ecosystem productivity: the species richness of animals is directly related to the species richness of plants in a certain habitat. Vegetation serves as food sources, habitats, and so on. Therefore, if vegetation is scarce, the animal populations will be as well. The more plant species there are in an ecosystem, the greater variety of microhabitats there are. Plant species richness directly reflects spatial heterogeneity in an ecosystem.
Types.
There are two main types of spatial heterogeneity. Spatial local heterogeneity describes geographic phenomena whose attribute values are significantly similar within a local neighbourhood but differ significantly from those in the surrounding areas beyond that neighbourhood (e.g. hot spots, cold spots). Spatial stratified heterogeneity describes geographic phenomena whose within-strata variance of attribute values is significantly lower than the between-strata variance, as exhibited by collections of ecological zones or land-use classes within a given geographic area.
Testing.
Spatial local heterogeneity can be tested by LISA, Gi and SatScan, while the spatial stratified heterogeneity of an attribute can be measured by the geographical detector "q"-statistic:
formula_0
where a population is partitioned into "h" = 1, ..., "L" strata; "N" stands for the size of the population, and σ2 stands for the variance of the attribute. The value of "q" lies within [0, 1], where 0 indicates no spatial stratified heterogeneity and 1 indicates perfect spatial stratified heterogeneity. The value of "q" indicates the percentage of the variance of an attribute explained by the stratification. The "q" follows a noncentral "F" probability density function.
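A direct computation of the "q"-statistic from stratified data is straightforward; the following NumPy sketch uses a small made-up data set purely for illustration.

```python
import numpy as np

# Attribute values y and their stratum labels (toy data, not from any real study)
y = np.array([1.0, 1.2, 0.9, 5.1, 4.8, 5.3, 9.9, 10.2, 10.1])
strata = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

N, var_total = y.size, y.var()        # population size and variance sigma^2
within = sum(y[strata == h].size * y[strata == h].var()   # sum of N_h * sigma_h^2
             for h in np.unique(strata))
q = 1.0 - within / (N * var_total)
print(q)    # close to 1: the strata explain most of the variance
```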
Spatial heterogeneity for multivariate data and 3D data can also be statistically assessed using dedicated methods.
Models.
Spatial stratified heterogeneity.
Optimal parameters-based geographical detector.
Optimal Parameters-based Geographical Detector (OPGD) characterizes spatial heterogeneity with the optimized parameters of spatial data discretization for identifying geographical factors and interactive impacts of factors, and estimating risks.
Interactive detector for spatial associations.
Interactive Detector for Spatial Associations (IDSA) estimates power of interactive determinants (PID) on the basis of spatial stratified heterogeneity, spatial autocorrelation, and spatial fuzzy overlay of explanatory variables.
Geographically optimal zones-based heterogeneity.
Geographically Optimal Zones-based Heterogeneity (GOZH) explores individual and interactive determinants of geographical attributes (e.g., global soil moisture) across a large study area based on the identification of explainable geographically optimal zones.
Robust geographical detector.
Robust Geographical Detector (RGD) overcomes the limitation of the sensitivity in spatial data discretization and estimates robust power of determinants of explanatory variables.
meta-STAR.
The model-agnostic Spatial Transformation And modeRation (meta-STAR) is a framework for integrating spatial heterogeneity into spatial statistical models (e.g. spatial ensemble methods, spatial neural networks) so as to improve their accuracy. It involves the use of "spatial networks/transformations" and "spatial moderators", and handles geo-spatial datasets representing geographic phenomena at multiple scales.
Law of geography.
In a 2004 publication titled "The Validity and Usefulness of Laws in Geographic Information Science and Geography," Michael Frank Goodchild proposed that spatial heterogeneity could be a candidate for a law of geography similar to Tobler's first law of geography. The literature cites this paper and states this law as "geographic variables exhibit uncontrolled variance." Often referred to as the second law of geography, or Michael Goodchild's second law of geography, it is one of many concepts competing for that term, including Tobler's second law of geography, and Arbia's law of geography.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " q = 1 - \\frac{1}{N\\sigma^2}\\sum_{h=1}^LN_h\\sigma_h^2 "
}
] | https://en.wikipedia.org/wiki?curid=10930438 |
10930578 | Spectral centroid | The spectral centroid is a measure used in digital signal processing to characterise a spectrum. It indicates where the center of mass of the spectrum is located. Perceptually, it has a robust connection with the impression of brightness of a sound. It is sometimes called center of spectral mass.
Calculation.
It is calculated as the weighted mean of the frequencies present in the signal, determined using a Fourier transform, with their magnitudes as the weights:
formula_0
where "x(n)" represents the weighted frequency value, or magnitude, of bin number "n", and "f(n)" represents the center frequency of that bin.
Alternative usage.
Some people use "spectral centroid" to refer to the median of the spectrum. This is a "different" statistic, the difference being essentially the same as the difference between the unweighted median and mean statistics. Since both are measures of central tendency, in some situations they will exhibit some similarity of behaviour. But since typical audio spectra are not normally distributed, the two measures will often give strongly different values. Grey and Gordon in 1978 found the mean a better fit than the median.
Applications.
Because the spectral centroid is a good predictor of the "brightness" of a sound, it is widely used in digital audio and music processing as an automatic measure of musical timbre. | [
{
"math_id": 0,
"text": " \n\\mathrm{Centroid} = \\frac{\n \\sum_{n=0}^{N-1}\n f(n)\n x(n)\n} {\n \\sum_{n=0}^{N-1}\n x(n)\n}\n"
}
] | https://en.wikipedia.org/wiki?curid=10930578 |
10931 | Finite-state machine | Mathematical model of computation
Classes of automata
A finite-state machine (FSM) or finite-state automaton (FSA, plural: "automata"), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of "states" at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a "transition". An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. For any non-deterministic finite-state machine, an equivalent deterministic one can be constructed.
The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are: vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; combination locks, which require the input of a sequence of numbers in the proper order.
The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory.
Example: coin-operated turnstile.
An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.
Considered as a state machine, the turnstile has two possible states: "Locked" and "Unlocked". There are two possible inputs that affect its state: putting a coin in the slot ("coin") and pushing the arm ("push"). In the locked state, pushing on the arm has no effect; no matter how many times the input "push" is given, it stays in the locked state. Putting a coin in – that is, giving the machine a "coin" input – shifts the state from "Locked" to "Unlocked". In the unlocked state, putting additional coins in has no effect; that is, giving additional "coin" inputs does not change the state. A customer pushing through the arms gives a "push" input and resets the state to "Locked".
The turnstile state machine can be represented by a state-transition table, showing for each possible state, the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:
The turnstile state machine can also be represented by a directed graph called a state diagram "(above)". Each state is represented by a node ("circle"). Edges ("arrows") show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a "coin" input in the "Unlocked" state) is represented by a circular arrow returning to the original state. The arrow into the "Locked" node from the black dot indicates it is the initial state.
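In code, this state machine is just a transition table. The following Python sketch, one of many possible encodings, uses the state and input names from the description above and runs the machine over a sequence of inputs.

```python
# Transition table: (current state, input) -> next state
TRANSITIONS = {
    ("Locked",   "coin"): "Unlocked",   # a coin unlocks the turnstile
    ("Locked",   "push"): "Locked",     # pushing a locked turnstile does nothing
    ("Unlocked", "coin"): "Unlocked",   # extra coins have no effect
    ("Unlocked", "push"): "Locked",     # customer passes through, arms lock again
}

def run(inputs, state="Locked"):
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run(["push", "coin", "push"]))   # Locked
print(run(["coin", "coin"]))           # Unlocked
```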
Concepts and terminology.
A "state" is a description of the status of a system that is waiting to execute a "transition". A transition is a set of actions to be executed when a condition is fulfilled or when an event is received.
For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state.
In some finite-state machine representations, it is also possible to associate actions with a state:
Representations.
State/Event table.
Several state-transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). The complete action's information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action's information is possible using state tables (see also virtual finite-state machine).
UML state machines.
The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.
SDL state machines.
The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition:
SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantic in order to make the finite-state machine executable.
Other state diagrams.
There are a large number of variants to represent an FSM such as the one in figure 3.
Usage.
In addition to their use in modeling reactive systems presented here, finite-state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, video game programming, and logic. Finite-state machines are a class of automata studied in automata theory and the theory of computation.
In computer science, finite-state machines are widely used in modeling of application behavior (control theory), design of hardware digital systems, software engineering, compilers, network protocols, and computational linguistics.
Classification.
Finite-state machines can be subdivided into acceptors, classifiers, transducers and sequencers.
Acceptors.
Acceptors (also called "detectors" or recognizers) produce binary output, indicating whether or not the received input is accepted. Each state of an acceptor is either "accepting" or "non accepting". Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The start state can also be an accepting state, in which case the acceptor accepts the empty string. The example in figure 4 shows an acceptor that accepts the string "nice". In this acceptor, the only accepting state is state 7.
A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some acceptor that accepts "exactly" that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not.
An acceptor could also be described as defining a language that would contain every string accepted by the acceptor but none of the rejected ones; that language is "accepted" by the acceptor. By definition, the languages accepted by acceptors are the regular languages.
The problem of determining the language accepted by a given acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring.
An example of an accepting state appears in Fig. 5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s.
"S"1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This acceptor will finish in an accept state, if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this acceptor are ε (the empty string), 1, 11, 11..., 00, 010, 1010, 10110, etc.
Classifiers.
Classifiers are a generalization of acceptors that produce "n"-ary output where "n" is strictly greater than two.
Transducers.
"Transducers" produce output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics.
In control applications, two types are distinguished: Moore machines, whose output depends only on the current state, and Mealy machines, whose output depends on both the current state and the current input.
Sequencers.
"Sequencers" (also called "generators") are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence, which can be seen as an output sequence of acceptor or transducer outputs.
Determinism.
A further distinction is between "deterministic" (DFA) and "non-deterministic" (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality.
A finite-state machine with only one state is called a "combinatorial FSM". It only allows actions upon transition "into" a state. This concept is useful in cases where a number of finite-state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools.
Alternative semantics.
There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. These charts, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.
Mathematical model.
In accordance with the general classification, the following formal definitions are found.
A "deterministic finite-state machine" or "deterministic finite-state acceptor" is a quintuple formula_0, where:
For both deterministic and non-deterministic FSMs, it is conventional to allow formula_4 to be a partial function, i.e. formula_8 does not have to be defined for every combination of formula_9 and formula_10. If an FSM formula_11 is in a state formula_12, the next symbol is formula_13 and formula_8 is not defined, then formula_11 can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions.
A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite-state machine is accepted by such a kind of restricted Turing machine, and vice versa.
A "finite-state transducer" is a sextuple formula_14, where:
If the output function depends on the state and input symbol (formula_17) that definition corresponds to the "Mealy model", and can be modelled as a Mealy machine. If the output function depends only on the state (formula_18) that definition corresponds to the "Moore model", and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system.
If we disregard the first output symbol of a Moore machine, formula_19, then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol of the destination Moore state. The converse transformation is less straightforward because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split into multiple Moore machine states, one for every incident output symbol.
Optimization.
Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time.
Implementation.
Hardware applications.
In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip flops or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of an FSM. One of the classic hardware implementations is the Richards controller.
In a "Medvedev machine", the output is directly connected to the state flip-flops minimizing the time delay between flip-flops and output.
Through state encoding for low power, state machines may be optimized to minimize power consumption.
Software applications.
The following concepts are commonly used to build software applications with finite-state machines:
Finite-state machines and compilers.
Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite-state machines that implement a lexical analyzer and a parser.
Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
"We may think of a Markov chain as a process that moves successively through a set of states "s1", "s2", …, "sr". … if it is in state "si" it moves on to the next stop to state "sj" with probability "pij". These probabilities can be exhibited in the form of a transition matrix" (Kemeny (1959), p. 384)
Further reading.
Finite Markov chain processes.
Finite Markov-chain processes are also known as subshifts of finite type. | [
{
"math_id": 0,
"text": "(\\Sigma, S, s_0, \\delta, F)"
},
{
"math_id": 1,
"text": "\\Sigma"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "s_0"
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "\\delta: S \\times \\Sigma \\rightarrow S"
},
{
"math_id": 6,
"text": "\\delta: S \\times \\Sigma \\rightarrow \\mathcal{P}(S)"
},
{
"math_id": 7,
"text": "F"
},
{
"math_id": 8,
"text": "\\delta(s, x)"
},
{
"math_id": 9,
"text": "s \\isin S"
},
{
"math_id": 10,
"text": "x \\isin \\Sigma"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": "s"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "(\\Sigma, \\Gamma, S, s_0, \\delta, \\omega)"
},
{
"math_id": 15,
"text": "\\Gamma"
},
{
"math_id": 16,
"text": "\\omega"
},
{
"math_id": 17,
"text": "\\omega: S \\times \\Sigma \\rightarrow \\Gamma"
},
{
"math_id": 18,
"text": "\\omega: S \\rightarrow \\Gamma"
},
{
"math_id": 19,
"text": "\\omega(s_0)"
}
] | https://en.wikipedia.org/wiki?curid=10931 |
10932739 | Doppler spectroscopy | Indirect method for finding extrasolar planets and brown dwarfs
Doppler spectroscopy (also known as the radial-velocity method, or colloquially, the wobble method) is an indirect method for finding extrasolar planets and brown dwarfs from radial-velocity measurements via observation of Doppler shifts in the spectrum of the planet's parent star.
As of November 2022, about 19.5% of known extrasolar planets (1,018 of the total) have been discovered using Doppler spectroscopy.
History.
Otto Struve proposed in 1952 the use of powerful spectrographs to detect distant planets. He described how a very large planet, as large as Jupiter, for example, would cause its parent star to wobble slightly as the two objects orbit around their center of mass. He predicted that the small Doppler shifts to the light emitted by the star, caused by its continuously varying radial velocity, would be detectable by the most sensitive spectrographs as tiny redshifts and blueshifts in the star's emission. However, the technology of the time produced radial-velocity measurements with errors of 1,000 m/s or more, making them useless for the detection of orbiting planets. The expected changes in radial velocity are very small – Jupiter causes the Sun to change velocity by about 12.4 m/s over a period of 12 years, and the Earth's effect is only 0.1 m/s over a period of 1 year – so long-term observations by instruments with a very high resolution are required.
Advances in spectrometer technology and observational techniques in the 1980s and 1990s produced instruments capable of detecting the first of many new extrasolar planets. The ELODIE spectrograph, installed at the Haute-Provence Observatory in Southern France in 1993, could measure radial-velocity shifts as low as 7 m/s, low enough for an extraterrestrial observer to detect Jupiter's influence on the Sun. Using this instrument, astronomers Michel Mayor and Didier Queloz identified 51 Pegasi b, a "Hot Jupiter" in the constellation Pegasus. Although planets had previously been detected orbiting pulsars, 51 Pegasi b was the first planet ever confirmed to be orbiting a main-sequence star, and the first detected using Doppler spectroscopy.
In November 1995, the scientists published their findings in the journal "Nature"; the paper has since been cited over 1,000 times. Since that date, over 1,000 exoplanet candidates have been identified, many of which have been detected by Doppler search programs based at the Keck, Lick, and Anglo-Australian Observatories (respectively, the California, Carnegie and Anglo-Australian planet searches), and teams based at the Geneva Extrasolar Planet Search.
Beginning in the early 2000s, a second generation of planet-hunting spectrographs permitted far more precise measurements. The HARPS spectrograph, installed at the La Silla Observatory in Chile in 2003, can identify radial-velocity shifts as small as 0.3 m/s, enough to locate many rocky, Earth-like planets. A third generation of spectrographs is expected to come online in 2017. With measurement errors estimated below 0.1 m/s, these new instruments would allow an extraterrestrial observer to detect even Earth.
Procedure.
A series of observations is made of the spectrum of light emitted by a star. Periodic variations in the star's spectrum may be detected, with the wavelength of characteristic spectral lines in the spectrum increasing and decreasing regularly over a period of time. Statistical filters are then applied to the data set to cancel out spectrum effects from other sources. Using mathematical best-fit techniques, astronomers can isolate the tell-tale periodic sine wave that indicates a planet in orbit.
If an extrasolar planet is detected, a minimum mass for the planet can be determined from the changes in the star's radial velocity. To find a more precise measure of the mass requires knowledge of the inclination of the planet's orbit. A graph of measured radial velocity versus time will give a characteristic curve (sine curve in the case of a circular orbit), and the amplitude of the curve will allow the minimum mass of the planet to be calculated using the binary mass function.
The Bayesian Kepler periodogram is a mathematical algorithm, used to detect single or multiple extrasolar planets from successive radial-velocity measurements of the star they are orbiting. It involves a Bayesian statistical analysis of the radial-velocity data, using a prior probability distribution over the space determined by one or more sets of Keplerian orbital parameters. This analysis may be implemented using the Markov chain Monte Carlo (MCMC) method.
The method has been applied to the HD 208487 system, resulting in an apparent detection of a second planet with a period of approximately 1000 days. However, this may be an artifact of stellar activity. The method has also been applied to the HD 11964 system, where it found an apparent planet with a period of approximately 1 year. However, this planet was not found in re-reduced data, suggesting that this detection was an artifact of the Earth's orbital motion around the Sun.
Although the radial velocity of the star gives only a planet's minimum mass, the planet's actual mass can be determined if the planet's spectral lines can be distinguished from the star's: the radial velocity of the planet itself can then be found, which gives the inclination of the planet's orbit. The first non-transiting planet to have its mass found this way was Tau Boötis b in 2012, when carbon monoxide was detected in the infrared part of the spectrum.
Example.
The graph to the right illustrates the sine curve produced when Doppler spectroscopy is used to observe the radial velocity of an imaginary star which is being orbited by a planet in a circular orbit. Observations of a real star would produce a similar graph, although eccentricity in the orbit will distort the curve and complicate the calculations below.
This theoretical star's velocity shows a periodic variation of ±1 m/s, suggesting an orbiting mass that is creating a gravitational pull on this star. Using Kepler's third law of planetary motion, the observed period of the planet's orbit around the star (equal to the period of the observed variations in the star's spectrum) can be used to determine the planet's distance from the star (formula_0) using the following equation:
formula_1
where "G" is the gravitational constant, "M"star is the mass of the parent star, and "P"star is the observed period of the star's radial-velocity variations (equal to the orbital period of the planet).
Having determined formula_0, the velocity of the planet around the star can be calculated using Newton's law of gravitation, and the orbit equation:
formula_2
where formula_3 is the velocity of the planet.
The mass of the planet can then be found from the calculated velocity of the planet:
formula_4
where formula_5 is the velocity of the parent star. The observed Doppler velocity is formula_6, where "i" is the inclination of the planet's orbit to the line perpendicular to the line-of-sight.
Thus, assuming a value for the inclination of the planet's orbit and for the mass of the star, the observed changes in the radial velocity of the star can be used to calculate the mass of the extrasolar planet.
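The chain of calculations can be illustrated with a short script. The following Python sketch (an illustration only; the constants and the Jupiter-like inputs are taken from the figures quoted above, and the function name is chosen here for readability) applies the three equations in turn:
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
M_JUP = 1.898e27     # Jupiter mass, kg

def planet_mass(k_obs, period_s, m_star, inclination_deg=90.0):
    # k_obs: observed Doppler velocity semi-amplitude of the star (m/s)
    # period_s: orbital period (s); m_star: stellar mass (kg)
    # inclination_deg: assumed inclination; 90 degrees yields the minimum mass
    r = (G * m_star * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)  # Kepler's third law
    v_planet = math.sqrt(G * m_star / r)                              # orbital speed of the planet
    v_star = k_obs / math.sin(math.radians(inclination_deg))          # from K = V_star * sin(i)
    return m_star * v_star / v_planet

# Jupiter-like case quoted earlier: about 12.4 m/s variation over a 12-year period.
mass = planet_mass(12.4, 12 * 365.25 * 86400, M_SUN)
print(mass / M_JUP)   # approximately 1 Jupiter mass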
Radial-velocity comparison tables.
Limitations.
The major limitation with Doppler spectroscopy is that it can only measure movement along the line-of-sight, and so depends on a measurement (or estimate) of the inclination of the planet's orbit to determine the planet's mass. If the orbital plane of the planet happens to line up with the line-of-sight of the observer, then the measured variation in the star's radial velocity is the true value. However, if the orbital plane is tilted away from the line-of-sight, then the true effect of the planet on the motion of the star will be greater than the measured variation in the star's radial velocity, which is only the component along the line-of-sight. As a result, the planet's true mass will be greater than measured.
To correct for this effect, and so determine the true mass of an extrasolar planet, radial-velocity measurements can be combined with astrometric observations, which track the movement of the star across the plane of the sky, perpendicular to the line-of-sight. Astrometric measurements allow researchers to check whether objects that appear to be high-mass planets are more likely to be brown dwarfs.
A further disadvantage is that the gas envelope around certain types of stars can expand and contract, and some stars are variable. This method is unsuitable for finding planets around these types of stars, as changes in the stellar emission spectrum caused by the intrinsic variability of the star can swamp the small effect caused by a planet.
The method is best at detecting very massive objects close to the parent star – so-called "hot Jupiters" – which have the greatest gravitational effect on the parent star, and so cause the largest changes in its radial velocity. Hot Jupiters have the greatest gravitational effect on their host stars because they have relatively small orbits and large masses. Observation of many separate spectral lines and many orbital periods allows the signal-to-noise ratio of observations to be increased, increasing the chance of observing smaller and more distant planets, but planets like the Earth remain undetectable with current instruments.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "r^3=\\frac{GM_\\mathrm{star}}{4\\pi^2}P_\\mathrm{star}^2\\,"
},
{
"math_id": 2,
"text": "V_\\mathrm{PL}=\\sqrt{GM_\\mathrm{star}/r}\\,"
},
{
"math_id": 3,
"text": "V_\\mathrm{PL}"
},
{
"math_id": 4,
"text": "M_\\mathrm{PL}=\\frac{M_\\mathrm{star}V_\\mathrm{star}}{V_\\mathrm{PL}}\\,"
},
{
"math_id": 5,
"text": "V_\\mathrm{star}"
},
{
"math_id": 6,
"text": "K = V_\\mathrm{star}\\sin(i)"
}
] | https://en.wikipedia.org/wiki?curid=10932739 |
10932781 | Slope mass rating | Classification scheme of rock strength
Slope mass rating (SMR) is a rock mass classification scheme developed by Manuel Romana to describe the strength of an individual rock outcrop or slope. The system is founded upon the more widely used RMR scheme, which is modified with quantitative guidelines to rate the influence of adverse joint orientations (e.g. joints dipping steeply out of the slope).
Slope mass rating has been widely used worldwide. It has been included in the technical regulations of some countries as a classification system by itself or as a quality index for rocky slopes (e.g., India, Serbia, Italy). It has also been used in more than 50 countries across five continents, especially in Asia (e.g., China and India), where its use is very common.
Definition.
Rock mass classification schemes are designed to account for a number of factors influencing the strength and deformability of a rock mass (e.g. joint orientations, fracture density, intact strength), and may be used to quantify the competence of an outcrop or particular geologic material. Scores typically range from 0 to 100, with 100 being the most competent rock mass. The term "rock mass" incorporates the influence of both intact material and discontinuities on the overall strength and behavior of a discontinuous rock medium. While it is relatively straightforward to test the mechanical properties of either intact rock or joints individually, describing their interaction is difficult and several empirical rating schemes (such as RMR and SMR) are available for this purpose.
SMR index calculation.
SMR uses the same first five scoring categories as RMR: the uniaxial compressive strength of the intact rock, the rock quality designation (RQD), the spacing of discontinuities, the condition of discontinuities, and the groundwater conditions.
The final sixth category is a rating adjustment or penalization for adverse joint orientations, which is particularly important for evaluating the competence of a rock slope. SMR provides quantitative guidelines to evaluate this rating penalization in the form of four sub-categories, three that describe the relative rock slope and joint set geometries and a fourth which accounts for the method of slope excavation. SMR addresses both planar sliding and toppling failure modes; no additional consideration was made originally for sliding on multiple joint planes. However, Anbalagan et al. adapted the original classification for the wedge failure mode.
The final SMR rating is obtained by means of the following expression:
formula_0
where RMRb is the basic RMR index, F1 depends on the parallelism between the strikes of the discontinuity and the slope face, F2 depends on the dip of the discontinuity (it equals 1 for the toppling failure mode), F3 reflects the relationship between the dips of the slope and the discontinuity, and F4 is an adjustment factor that depends on the excavation method.
Although SMR is used worldwide, misinterpretations and imprecisions are sometimes made when it is applied. Most of the observed inaccuracies are related to the calculation of the ancillary angular relationships between the dips and dip directions of the discontinuities and the slope that are required to determine the F1, F2 and F3 factors. A comprehensive definition of these angular relationships can be found in the literature.
SMR index modifications.
Continuous SMR (C-SMR)
Tomás et al. proposed alternative continuous functions for the computation of the F1, F2 and F3 correction parameters. These functions show maximum absolute differences from the discrete functions lower than 7 points and significantly reduce subjective interpretations. Moreover, the proposed functions for calculating the SMR correction factors remove doubts about which score to assign to values near the borders of the discrete classification.
The proposed F1 continuous function that best fits discrete values is:
formula_1
where parameter A is the angle formed between the discontinuity and slope strikes for the planar and toppling failure modes, and the angle formed between the intersection of the two discontinuities (the plunge direction) and the dip direction of the slope for wedge failure. The arctangent function is expressed in degrees.
formula_2
where parameter B is the discontinuity dip in degrees for planar failure and the plunge of the intersection for wedge failure. Note that the arctangent function is also expressed in degrees.
formula_3
where C depends on the relationship between the slope and discontinuity dips (toppling or planar failure cases) or on the slope dip and the immersion line dip for the wedge failure case. The arctangent functions are expressed in degrees.
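For illustration, the continuous correction functions above translate directly into code. The Python sketch below is only a transcription of these formulas (the function and parameter names, and the way the failure mode is selected, are choices made here, not part of the original classification):
import math

def atan_deg(x):
    # Arctangent returning degrees, as required by the continuous functions.
    return math.degrees(math.atan(x))

def f1(A):
    # A: angle between discontinuity and slope strikes (planar/toppling),
    # or between the intersection plunge direction and the slope dip direction (wedge).
    return 16 / 25 - (3 / 500) * atan_deg((abs(A) - 17) / 10)

def f2(B, failure="planar"):
    # B: discontinuity dip (planar) or plunge of the intersection (wedge), in degrees.
    if failure == "toppling":
        return 1.0
    return 9 / 16 + (1 / 195) * atan_deg((17 / 100) * B - 5)

def f3(C, failure="planar"):
    # C: relationship between the slope and discontinuity dips (or intersection plunge).
    if failure == "toppling":
        return -13 - (1 / 7) * atan_deg(C - 120)
    return -30 + (1 / 3) * atan_deg(C)

def smr(rmr_b, A, B, C, F4=0.0, failure="planar"):
    # SMR = RMR_b + F1 * F2 * F3 + F4 with the continuous correction factors.
    return rmr_b + f1(A) * f2(B, failure) * f3(C, failure) + F4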
Graphical SMR (GSMR)
Alternatively, Tomás et al. also proposed a graphical method, based on the stereographic representation of the discontinuities and the slope, for obtaining the correction parameters of the SMR (F1, F2 and F3). This method allows the SMR correction factors to be easily obtained for a simple slope or for several practical applications such as linear infrastructure slopes, open pit mining or trench excavations.
Other adaptations of SMR
Other approaches have been proposed to adapt SMR to different situations, such as high slopes, flysch formations or even heterogeneous materials.
Sensitivity analysis of Slope Mass Rating.
A four-dimensional visual analysis of the SMR classification, performed by Tomás et al. by means of the Worlds within Worlds methodology to explore, analyze and visualize the relationships among its main controlling parameters, revealed that several cases exist where the slope-discontinuity geometrical relationship scarcely affects slope stability (i.e. F1×F2×F3≃0); as a consequence, SMR can be computed by correcting the basic RMR only with the F4 factor, using the following equation, with a maximum error lower than nine points:
formula_4
These cases in which the influence of the geometry of the slope and the discontinuities is negligible (i.e. F1×F2×F3≃0) are:
a) For planar failure
b) For wedge failure
Where βs is the angle of the slope, βj is the discontinuity dip, βi is the plunge of the intersection line between two discontinuities and A is the parallelism between the discontinuity (or intersection line for wedges) and slope dip directions.
In these situations the SMR index is insensitive to the geometrical conditions of the slope for a significant number of plausible discontinuity-slope geometries, and the calculation of the factors F1, F2 and F3, which depend on the geometry of the slope and the discontinuities, can be omitted by taking F1 × F2 × F3 ≃ 0. This insight can be very useful for field engineers and geologists, as it provides acceptable preliminary field values of SMR whenever any of the aforementioned circumstances is identified in the studied slope, resulting in significant time savings.
Application of the SMR index.
The SMR index can be calculated with the open source software SMRTool, which computes SMR from the geomechanical data of the rock mass and the orientations of the slope and the discontinuities. This software was used to calculate the SMR index using 3D point clouds.
Some authors have proposed different methodologies to map the failure susceptibility in rock slopes by computing the SMR index using a Geographical Information System (GIS). | [
{
"math_id": 0,
"text": "SMR = RMR_b+F_1 \\times F_2 \\times F_3 + F_4"
},
{
"math_id": 1,
"text": "F_1 = \\frac{16}{25}-\\frac{3}{500}\\arctan \\left ( \\frac{1}{10} \\left ( |A|-17 \\right ) \\right )"
},
{
"math_id": 2,
"text": "F_2 = \\begin{cases} \n\\frac{9}{16}+\\frac{1}{195}\\arctan \\left ( \\frac{17}{100}B - 5 \\right ), & \\text{for planar and wedge failure} \\\\ \n1, & \\text{for toppling failure} \n\\end{cases}\n\n"
},
{
"math_id": 3,
"text": "F_3 = \\begin{cases} \n-30 + \\frac{1}{3}\\arctan C, & \\text{for planar and wedge failure} \\\\ \n-13-\\frac{1}{7}\\arctan \\left ( C- 120 \\right ), & \\text{for toppling failure} \n\\end{cases}"
},
{
"math_id": 4,
"text": "SMR = RMR_b + F_4"
}
] | https://en.wikipedia.org/wiki?curid=10932781 |
10933 | Functional programming | Programming paradigm based on applying and composing functions
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.
Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.
Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).
History.
The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.
Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.
Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of "generator", which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features.
Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book "A Programming Language". APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language.
John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming.
The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML.
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook "Structure and Interpretation of Computer Programs". Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called "constructive" type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.
The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.
More recently, functional programming has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.
Functional programming continues to be used in commercial settings.
Concepts.
A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.
First-class and higher-order functions.
Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator formula_0, which returns the derivative of a function formula_1.
Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).
Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
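For example, the successor function can be obtained by partially applying addition to one. The sketch below uses Python purely for illustration (curried languages such as Haskell or ML provide this directly):
from functools import partial

def add(x, y):
    return x + y

successor = partial(add, 1)    # addition partially applied to the natural number one
print(successor(41))           # 42

# Explicit currying: the function takes its arguments one at a time.
def curried_add(x):
    return lambda y: x + y

print(curried_add(1)(41))      # 42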
Pure functions.
Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: if the result of a pure expression is not used, it can be removed without affecting other expressions; if a pure function is called with the same arguments, it returns the same result, which enables caching optimizations such as memoization; if there is no data dependency between two pure expressions, their order can be reversed or they can be performed in parallel; and if the entire language does not allow side effects, any evaluation strategy can be used.
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated "pure". C++11 added the codice_0 keyword with similar semantics.
Recursion.
Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space proportional to the depth of the recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
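The distinction can be sketched as follows (Python is used only for illustration; CPython itself does not perform tail-call optimization, which is why the equivalent loop is also shown):
def factorial(n):
    # Plain recursion: the multiplication happens after the recursive call returns,
    # so each call needs its own stack frame.
    return 1 if n == 0 else n * factorial(n - 1)

def factorial_tail(n, acc=1):
    # Tail-recursive form: the recursive call is the last action, so a compiler
    # with tail-call optimization can reuse the current stack frame.
    return acc if n == 0 else factorial_tail(n - 1, acc * n)

def factorial_loop(n):
    # The loop that a tail-call-optimizing compiler would effectively generate.
    acc = 1
    while n > 0:
        acc, n = acc * n, n - 1
    return acc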
The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop.
Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.
Strict versus non-strict evaluation.
Functional languages can be categorized by whether they use "strict (eager)" or "non-strict (lazy)" evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression:
print length([2+1, 3*2, 1/0, 5-4])
fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
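The contrast can be imitated in a strict language by wrapping each element in a thunk (a parameterless function) that is only called when its value is needed; the Python sketch below illustrates the idea and is not a model of Haskell's actual semantics:
# Strict evaluation: every element is computed before the list even exists.
try:
    length = len([2 + 1, 3 * 2, 1 / 0, 5 - 4])
except ZeroDivisionError:
    print("fails before the length can be taken")

# Laziness imitated with thunks: the length can be taken without forcing the elements.
lazy_list = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]
print(len(lazy_list))   # 4 -- the division by zero is never evaluated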
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
John Hughes argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.
Type systems.
Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time at the risk of false positive errors. This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time at the risk of false negative errors, rejecting invalid programs only at runtime, when there is enough information to avoid rejecting valid programs. The use of algebraic datatypes makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases.
Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the C programming language that is written in Coq and formally verified.
A limited form of dependent types called generalized algebraic data types (GADT's) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADT's are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#.
Referential transparency.
Functional programs do not have assignment statements, that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.
Consider the C assignment statement codice_1; this changes the value assigned to the variable codice_2. Let us say that the initial value of codice_2 was codice_4; then two consecutive evaluations of the variable codice_2 yield codice_6 and codice_7 respectively. Clearly, replacing codice_1 with either codice_6 or codice_7 gives the program a different meaning, and so the expression "is not" referentially transparent. In fact, assignment statements are never referentially transparent.
Now consider another function, such as int plusone(int x) {return x+1;}, which "is" transparent, as it does not implicitly change the input x and thus has no such side effects.
Functional programs exclusively use this type of function and are therefore referentially transparent.
Data structures.
Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data-structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.
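A minimal illustration of persistence and structural sharing is an immutable singly linked list, sketched here in Python (a toy example, not Clojure's actual vector implementation): prepending an element returns a new list whose tail is the old, unmodified list.
from typing import NamedTuple, Optional

class Node(NamedTuple):
    value: int
    rest: Optional["Node"]

def cons(value, rest=None):
    # Returns a new list node; the existing list is never modified.
    return Node(value, rest)

old = cons(2, cons(3, cons(4)))
new = cons(1, old)            # 'old' is unchanged and all of its nodes are shared
print(old.value, new.value)   # 2 1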
Comparison to imperative programming.
Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency.
Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item.
Imperative vs. functional programming.
The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable "result".
Traditional imperative loop:
const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let result = 0;
for (let i = 0; i < numList.length; i++) {
if (numList[i] % 2 === 0) {
result += numList[i] * 10;
  }
}
Functional programming with higher-order functions:
const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
.filter(n => n % 2 === 0)
.map(a => a * 10)
.reduce((a, b) => a + b, 0);
Sometimes the abstractions offered by functional programming might lead to the development of more robust code that avoids certain issues that might arise when building upon a large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule).
Simulating state.
There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.
The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).
Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.
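A sketch of this state-passing style, using the bank-account example and written in Python for illustration (the function names are chosen here, not taken from any particular library):
def deposit(balance, amount):
    # Returns a (message, new_balance) pair; the old balance is never mutated.
    return ("ok", balance + amount)

def withdraw(balance, amount):
    if amount > balance:
        return ("insufficient funds", balance)
    return ("ok", balance - amount)

_, b1 = deposit(0, 100)
msg, b2 = withdraw(b1, 30)
print(msg, b1, b2)   # ok 100 70 -- every intermediate balance remains available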
Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.
Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit.
Efficiency issues.
Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Even if the involved copying that may seem implicit when dealing with persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure solve this issue by implementing mechanisms for safe memory sharing between "formally" "immutable" data. Rust distinguishes itself by its approach to data immutability which involves immutable references and a concept called "lifetimes."
Immutable data with separation of identity and state and shared-nothing schemes can also potentially be more well-suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this allows eliminating the need for locks. This is how, for example, codice_11 classes are implemented, where some of them are immutable variants of the corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir or Akka.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan "et al." 2008 give some practical advice for analyzing and fixing them.
However, the most general implementations of lazy evaluation, which make extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles).
Abstraction cost.
Some functional programming languages might not optimize abstractions such as higher order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure:
When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as:
(defn even?
"Returns true if n is even, throws an exception if n is not an integer"
{:added "1.0"
 :static true}
[n] (if (integer? n)
(zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1))
(throw (IllegalArgumentException. (str "Argument must be an integer: " n)))))
has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of the difference can be attributed to the type checking and exception handling involved in the implementation of even?. For comparison, consider the lo library for Go, which implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling codice_12 is 4% slower than an equivalent codice_13 loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining.
One distinguishing feature of Rust is "zero-cost abstractions". This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone assembly instruction, without the overhead of the loop controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime.
Functional programming in non-functional languages.
It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions.
JavaScript, Lua, Python and Go had first class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the codice_14 standard library module. First-class functions have been introduced into other mainstream languages such as PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.
In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.
In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes.
In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.
Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.
Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.
Comparison to logic programming.
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.
For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program:
mother(charles, elizabeth).
mother(harry, diana).
The program can be queried, like a functional program, to generate mothers from children:
?- mother(harry, X).
X = diana.
?- mother(charles, X).
X = elizabeth.
But it can also be queried "backwards", to generate children:
?- mother(X, elizabeth).
X = charles.
?- mother(X, diana).
X = harry.
It can even be used to generate all instances of the mother relation:
?- mother(X, Y).
X = charles,
Y = elizabeth.
X = harry,
Y = diana.
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form:
maternal_grandmother(X) = mother(mother(X)).
The same definition in relational notation needs to be written in the unnested form:
maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
Here codice_15 means "if" and codice_16 means "and".
However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming:
grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).
mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.
?- grandparent(X,Y).
X = harry,
Y = elizabeth.
X = harry,
Y = phillip.
Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy.
Applications.
Text editors.
Emacs, a highly extensible text editor family uses its own Lisp dialect for writing plugins. The original author of the most popular Emacs implementation, GNU Emacs and Emacs Lisp, Richard Stallman considers Lisp one of his favorite programming languages.
Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family.
Spreadsheets.
Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature.
Academia.
Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming.
Industry.
Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming.
Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in data science, while ClojureScript, Elm or PureScript are some of the functional frontend programming languages used in production. Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and "Allegro Lokalnie", the classified ads platform of Allegro (one of the biggest e-commerce platforms in Poland).
Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory.
Education.
Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods.
Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book "Structure and Interpretation of Classical Mechanics".
In particular, Scheme has been a relatively popular choice for teaching programming for years.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d/dx"
},
{
"math_id": 1,
"text": "f"
}
] | https://en.wikipedia.org/wiki?curid=10933 |
1093331 | Stanley's reciprocity theorem | Gives a functional equation satisfied by the generating function of any rational cone
In combinatorial mathematics, Stanley's reciprocity theorem, named after MIT mathematician Richard P. Stanley, states that a certain functional equation is satisfied by the generating function of any rational cone (defined below) and the generating function of the cone's interior.
Definitions.
A rational cone is the set of all "d"-tuples
("a"1, ..., "a""d")
of nonnegative integers satisfying a system of inequalities
formula_0
where "M" is a matrix of integers. A "d"-tuple satisfying the corresponding "strict" inequalities, i.e., with ">" rather than "≥", is in the "interior" of the cone.
The generating function of such a cone is
formula_1
The generating function "F"int("x"1, ..., "x""d") of the interior of the cone is defined in the same way, but one sums over "d"-tuples in the interior rather than in the whole cone.
It can be shown that these are rational functions.
Formulation.
Stanley's reciprocity theorem states that for a rational cone as above, we have
formula_2
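For example, take "d" = 1 and the cone of all nonnegative integers "a" ≥ 0 (so "M" is the 1×1 matrix (1)). The generating function is the geometric series "F"("x") = 1/(1 − "x"), while the interior consists of the positive integers, so "F"int("x") = "x"/(1 − "x"). Then "F"(1/"x") = 1/(1 − 1/"x") = −"x"/(1 − "x") = −"F"int("x"), which agrees with the theorem since (−1)"d" = −1.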
Matthias Beck and Mike Develin have shown how to prove this by using the calculus of residues.
Stanley's reciprocity theorem generalizes Ehrhart-Macdonald reciprocity for Ehrhart polynomials of rational convex polytopes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M\\left[\\begin{matrix}a_1 \\\\ \\vdots \\\\ a_d\\end{matrix}\\right] \\geq \\left[\\begin{matrix}0 \\\\ \\vdots \\\\ 0\\end{matrix}\\right]"
},
{
"math_id": 1,
"text": "F(x_1,\\dots,x_d)=\\sum_{(a_1,\\dots,a_d)\\in {\\rm cone}} x_1^{a_1}\\cdots x_d^{a_d}."
},
{
"math_id": 2,
"text": "F(1/x_1,\\dots,1/x_d)=(-1)^d F_{\\rm int}(x_1,\\dots,x_d)."
}
] | https://en.wikipedia.org/wiki?curid=1093331 |
10935949 | Outliers ratio | In objective video quality assessment, the outliers ratio (OR) is a measure of the performance of an objective video quality metric. It is the ratio of "false" scores given by the objective metric to the total number of scores. The "false" scores are the scores that lie outside the interval
formula_0
where MOS is the mean opinion score and "σ" is the standard deviation of the MOS. | [
{
"math_id": 0,
"text": "[\\text{MOS} - 2\\sigma, \\text{MOS} + 2\\sigma]"
}
] | https://en.wikipedia.org/wiki?curid=10935949 |
1093675 | Wannier function | The Wannier functions are a complete set of orthogonal functions used in solid-state physics. They were introduced by Gregory Wannier in 1937. Wannier functions are the localized molecular orbitals of crystalline systems.
The Wannier functions for different lattice sites in a crystal are orthogonal, allowing a convenient basis for the expansion of electron states in certain regimes. Wannier functions have found widespread use, for example, in the analysis of binding forces acting on electrons.
Definition.
Although, like localized molecular orbitals, Wannier functions can be chosen in many different ways, the original, simplest, and most common definition in solid-state physics is as follows. Choose a single band in a perfect crystal, and denote its Bloch states by
formula_0
where "u"k(r) has the same periodicity as the crystal. Then the Wannier functions are defined by
formula_1,
where R is any lattice vector and N is the number of primitive cells in the crystal; for a large crystal, the sum over k may be replaced by an integral according to
formula_2
where "BZ" denotes the Brillouin zone, which has volume Ω.
Properties.
On the basis of this definition, the following properties can be proven to hold:
For any lattice vector R',
formula_3
In other words, a Wannier function only depends on the quantity (r − R). As a result, these functions are often written in the alternative notation
formula_4
The Bloch functions can be written in terms of the Wannier functions as follows:
formula_5,
where the sum is over each lattice vector R in the crystal.
The Wannier functions formula_6 corresponding to different lattice vectors are orthonormal:
formula_7
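These relations can be checked numerically on a toy model. The Python sketch below assumes the simplest possible case (a single band in one dimension with "u"k ≡ 1, sampled on N lattice sites), so the resulting Wannier functions come out perfectly localized; it builds them from the Bloch states by the discrete sum over k and verifies their orthonormality:
import numpy as np

N = 8
ks = 2 * np.pi * np.arange(N) / N      # allowed k-points in the Brillouin zone
xs = np.arange(N)                      # lattice sites / lattice vectors R

# Bloch states of the toy model: psi_k(x) = e^{ikx} / sqrt(N), i.e. u_k = 1.
psi = np.exp(1j * np.outer(ks, xs)) / np.sqrt(N)         # rows: k, columns: x

# Wannier functions: phi_R(x) = (1/sqrt(N)) * sum_k e^{-ikR} psi_k(x).
phi = np.exp(-1j * np.outer(xs, ks)) @ psi / np.sqrt(N)  # rows: R, columns: x

# Orthonormality: the overlap matrix <phi_R | phi_R'> should be the identity.
overlaps = phi.conj() @ phi.T
print(np.allclose(overlaps, np.eye(N)))                  # True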
Wannier functions have been extended to nearly periodic potentials as well.
Localization.
The Bloch states "ψ"k(r) are defined as the eigenfunctions of a particular Hamiltonian, and are therefore defined only up to an overall phase. By applying a phase transformation "e""iθ"(k) to the functions "ψ"k(r), for any (real) function "θ"(k), one arrives at an equally valid choice. While the change has no consequences for the properties of the Bloch states, the corresponding Wannier functions are significantly changed by this transformation.
One therefore uses the freedom to choose the phases of the Bloch states in order to give the most convenient set of Wannier functions. In practice, this is usually the maximally-localized set, in which the Wannier function "φ"R is localized around the point R and rapidly goes to zero away from R. For the one-dimensional case, it has been proved by Kohn that there is always a unique choice that gives these properties (subject to certain symmetries). This consequently applies to any separable potential in higher dimensions; the general conditions are not established, and are the subject of ongoing research.
A Pipek-Mezey style localization scheme has also been recently proposed for obtaining Wannier functions. Contrary to the maximally localized Wannier functions (which are an application of the Foster-Boys scheme to crystalline systems), the Pipek-Mezey Wannier functions do not mix σ and π orbitals.
Rigorous results.
The existence of exponentially localized Wannier functions in insulators was proved mathematically in 2006.
Modern theory of polarization.
Wannier functions have recently found application in describing the polarization in crystals, for example, ferroelectrics. The modern theory of polarization is pioneered by Raffaele Resta and David Vanderbilt. See for example, Berghold, and Nakhmanson, and a power-point introduction by Vanderbilt. The polarization per unit cell in a solid can be defined as the dipole moment of the Wannier charge density:
formula_8
where the summation is over the occupied bands, and "Wn" is the Wannier function localized in the cell for band "n". The "change" in polarization during a continuous physical process is the time derivative of the polarization and also can be formulated in terms of the Berry phase of the occupied Bloch states.
Wannier interpolation.
Wannier functions are often used to interpolate band structures calculated "ab initio" on a coarse grid of k-points to any arbitrary k-point. This is particularly useful for the evaluation of Brillouin-zone integrals on dense grids and for searching for Weyl points, and also for taking derivatives in k-space. This approach is similar in spirit to the tight binding approximation, but in contrast allows for an exact description of bands in a certain energy range. Wannier interpolation schemes have been derived for spectral properties, anomalous Hall conductivity, orbital magnetization, thermoelectric and electronic transport properties, gyrotropic effects, shift current, spin Hall conductivity, and other effects.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi_{\\mathbf{k}}(\\mathbf{r}) = e^{i\\mathbf{k}\\cdot\\mathbf{r}}u_\\mathbf{k}(\\mathbf{r})"
},
{
"math_id": 1,
"text": "\\phi_{\\mathbf{R}}(\\mathbf{r}) = \\frac{1}{\\sqrt{N}} \\sum_{\\mathbf{k}} e^{-i\\mathbf{k}\\cdot\\mathbf{R}} \\psi_{\\mathbf{k}}(\\mathbf{r})"
},
{
"math_id": 2,
"text": "\\sum_{\\mathbf{k}} \\longrightarrow \\frac{N}{\\Omega} \\int_\\text{BZ} d^3\\mathbf{k}"
},
{
"math_id": 3,
"text": "\\phi_{\\mathbf{R}}(\\mathbf{r}) = \\phi_{\\mathbf{R}+\\mathbf{R}'}(\\mathbf{r}+\\mathbf{R}')"
},
{
"math_id": 4,
"text": "\\phi(\\mathbf{r}-\\mathbf{R}) := \\phi_{\\mathbf{R}}(\\mathbf{r})"
},
{
"math_id": 5,
"text": "\\psi_{\\mathbf{k}}(\\mathbf{r}) = \\frac{1}{\\sqrt{N}} \\sum_{\\mathbf{R}} e^{i\\mathbf{k}\\cdot\\mathbf{R}} \\phi_{\\mathbf{R}}(\\mathbf{r})"
},
{
"math_id": 6,
"text": "\\phi_{\\mathbf{R}}"
},
{
"math_id": 7,
"text": "\\begin{align}\n\\int_\\text{crystal} \\phi_{\\mathbf{R}}(\\mathbf{r})^* \\phi_{\\mathbf{R'}}(\\mathbf{r}) d^3\\mathbf{r} & = \\frac{1}{N} \\sum_{\\mathbf{k,k'}}\\int_\\text{crystal} e^{i\\mathbf{k}\\cdot\\mathbf{R}} \\psi_{\\mathbf{k}}(\\mathbf{r})^* e^{-i\\mathbf{k'}\\cdot\\mathbf{R'}} \\psi_{\\mathbf{k'}}(\\mathbf{r}) d^3\\mathbf{r} \\\\\n& = \\frac{1}{N} \\sum_{\\mathbf{k,k'}} e^{i\\mathbf{k}\\cdot\\mathbf{R}} e^{-i\\mathbf{k'}\\cdot\\mathbf{R'}} \\delta_{\\mathbf{k,k'}} \\\\\n& = \\frac{1}{N} \\sum_{\\mathbf{k}} e^{i\\mathbf{k}\\cdot\\mathbf{(R-R')}} \\\\\n& =\\delta_{\\mathbf{R,R'}}\n\\end{align} "
},
{
"math_id": 8,
"text": "\\mathbf{p_c} = -e \\sum_n \\int\\ d^3 r \\,\\, \\mathbf{r} |W_n(\\mathbf{r})|^2 \\ , "
}
] | https://en.wikipedia.org/wiki?curid=1093675 |
10936783 | Faro shuffle | Perfectly interleaved playing card shuffle
The faro shuffle (American), weave shuffle (British), or dovetail shuffle is a method of shuffling playing cards, in which half of the deck is held in each hand with the thumbs inward, then cards are released by the thumbs so that they fall to the table interleaved. Diaconis, Graham, and Kantor also call this the technique, when used in magic.
Mathematicians use the term "faro shuffle" to describe a precise rearrangement of a deck into two equal piles of 26 cards which are then interleaved perfectly.
Description.
A right-handed practitioner holds the cards from above in the left hand and from below in the right hand. The deck is separated into two preferably equal parts by simply lifting up half the cards with the right thumb slightly and pushing the left hand's packet forward away from the right hand. The two packets are often crossed and tapped against each other to align them. They are then pushed together on the short sides and bent either up or down. The cards will then alternately fall onto each other, ideally alternating one by one from each half, much like a zipper. A flourish can be added by springing the packets together by applying pressure and bending them from above.
A game of Faro ends with the cards in two equal piles that the dealer must combine to deal them for the next game. According to the magician John Maskelyne, the above method was used, and he calls it the "faro dealer's shuffle". Maskelyne was the first to give clear instructions, but the shuffle was used and associated with faro earlier, as discovered mostly by the mathematician and magician Persi Diaconis.
Perfect shuffles.
The faro shuffle is a controlled shuffle that does not fully randomize a deck.
A perfect faro shuffle, where the cards are perfectly alternated, requires the shuffler to cut the deck into two equal stacks and apply just the right pressure when pushing the half decks into each other.
A faro shuffle that leaves the original top card at the top and the original bottom card at the bottom is known as an out-shuffle, while one that moves the original top card to second and the original bottom card to second from the bottom is known as an in-shuffle. These names were coined by the magician and computer programmer Alex Elmsley.
An out-shuffle has the same result as removing the top and bottom cards, doing an in-shuffle on the remaining cards, and then replacing the top and bottom cards in their original positions. Because the top and bottom cards stay fixed, repeated out-shuffles can never reverse the order of the entire deck; at most they rearrange the middle n−2 cards. Mathematical theorems regarding faro shuffles tend to refer to out-shuffles.
An in-shuffle has the same result as adding one extraneous card at the top and one extraneous card at the bottom, doing an out-shuffle on the enlarged deck, and then removing the extraneous cards. Repeated in-shuffles can reverse the order of the deck.
If one can do perfect in-shuffles, then 26 shuffles will reverse the order of a 52-card deck and 26 more will restore it to its original order.
In general, formula_0 perfect in-shuffles will restore the order of an formula_1-card deck if formula_2. For example, 52 consecutive in-shuffles restore the order of a 52-card deck, because formula_3.
In general, formula_0 perfect out-shuffles will restore the order of an formula_1-card deck if formula_4. For example, if one manages to perform eight out-shuffles in a row, then the deck of 52 cards will be restored to its original order, because formula_5. However, only 6 faro out-shuffles are required to restore the order of a 64-card deck.
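These restoration counts are easy to check numerically. The following sketch simply simulates perfect shuffles on a list and counts how many are needed to return the deck to its starting order; it is an illustration of the statements above rather than code from any cited source.

```python
# Count how many perfect faro shuffles restore a deck of even size n.

def out_shuffle(deck):
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    # Out-shuffle: the original top card stays on top.
    return [card for pair in zip(top, bottom) for card in pair]

def in_shuffle(deck):
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    # In-shuffle: the original top card becomes the second card.
    return [card for pair in zip(bottom, top) for card in pair]

def order(shuffle, n):
    start = list(range(n))
    deck, count = shuffle(start), 1
    while deck != start:
        deck, count = shuffle(deck), count + 1
    return count

print(order(in_shuffle, 52))   # 52, matching 2^52 ≡ 1 (mod 53)
print(order(out_shuffle, 52))  # 8,  matching 2^8  ≡ 1 (mod 51)
print(order(out_shuffle, 64))  # 6
```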
In other words, the number of in-shuffles required to return a deck of cards of even size "n" to its original order is given by the multiplicative order of 2 modulo ("n" + 1).
For example, for deck sizes "n" = 2, 4, 6, 8, 10, 12, ..., the number of in-shuffles needed is 2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, ... (sequence in the OEIS).
According to Artin's conjecture on primitive roots, it follows that there are infinitely many deck sizes which require the full set of "n" shuffles.
The analogous operation to an out-shuffle for an infinite sequence is the interleave sequence.
Example.
For simplicity, we will use a deck of six cards.
The following shows the order of the deck after each in-shuffle. A deck of this size returns to its original order after 3 in-shuffles.
The following shows the order of the deck after each out-shuffle. A deck of this size returns to its original order after 4 out-shuffles.
As deck manipulation.
Magician Alex Elmsley discovered that a controlled series of in- and out-shuffles can be used to move the top card of the deck down into any desired position. The trick is to express the card's desired position as a binary number, and then do an in-shuffle for each 1 and an out-shuffle for each 0.
For example, to move the top card down so that there are ten cards above it, express the number ten in binary (1010₂). Shuffle in, out, in, out. Deal ten cards off the top of the deck; the eleventh will be your original card. Notice that it doesn't matter whether you express the number ten as 1010₂ or 00001010₂; preliminary out-shuffles will not affect the outcome because out-shuffles always keep the top card on top.
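This positioning trick can also be checked with a short simulation. The sketch below is illustrative only: it tracks a 52-card deck with card 0 on top and applies an in-shuffle for each 1 bit and an out-shuffle for each 0 bit of the target position.

```python
# Minimal sketch of Elmsley's positioning trick (0-indexed deck positions).

def faro(deck, kind):
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    pairs = zip(top, bottom) if kind == "out" else zip(bottom, top)
    return [card for pair in pairs for card in pair]

def move_top_card(deck, target):
    # Write the target position in binary; in-shuffle for 1, out-shuffle for 0.
    for bit in format(target, "b"):            # e.g. 10 -> "1010"
        deck = faro(deck, "in" if bit == "1" else "out")
    return deck

deck = list(range(52))                          # card 0 starts on top
print(move_top_card(deck, 10).index(0))         # 10: ten cards now lie above it
```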
Group theory aspects.
In mathematics, a perfect shuffle can be considered an element of the symmetric group.
More generally, in formula_6, the perfect shuffle is the permutation that splits the set into 2 piles and interleaves them:
formula_6=formula_7
In other words, it is the map
formula_8
Analogously, the formula_9-perfect shuffle permutation is the element of formula_10 that splits the set into "k" piles and interleaves them.
The formula_11-perfect shuffle, denoted formula_12, is the composition of the formula_13-perfect shuffle with an formula_1-cycle, so the sign of formula_12 is:
formula_14
The sign is thus 4-periodic:
formula_15
The first few perfect shuffles are: formula_16 and formula_17 are trivial, and formula_18 is the transposition formula_19.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
| [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "2^k\\equiv 1\\pmod{n+1}"
},
{
"math_id": 3,
"text": "2^{52}\\equiv 1\\pmod{53}"
},
{
"math_id": 4,
"text": "2^k\\equiv 1\\pmod{n-1}"
},
{
"math_id": 5,
"text": "2^8\\equiv 1\\pmod{51}"
},
{
"math_id": 6,
"text": "S_{2n}"
},
{
"math_id": 7,
"text": "\\begin{pmatrix} 1 & 2 & 3 & 4 & \\cdots & 2n-1 & 2n\\\\\n1 & n+1 & 2 & n+2 & \\cdots & n & 2n \\end{pmatrix}"
},
{
"math_id": 8,
"text": "k \\mapsto \\begin{cases}\n\\frac{k+1}{2} & k \\ \\text{odd}\\\\\nn+\\frac{k}{2} & k \\ \\text{even}\n\\end{cases}"
},
{
"math_id": 9,
"text": "(k,n)"
},
{
"math_id": 10,
"text": "S_{kn}"
},
{
"math_id": 11,
"text": "(2,n)"
},
{
"math_id": 12,
"text": "\\rho_n"
},
{
"math_id": 13,
"text": "(2,n-1)"
},
{
"math_id": 14,
"text": "\\mbox{sgn}(\\rho_n)=(-1)^{n+1}\\mbox{sgn}(\\rho_{n-1})."
},
{
"math_id": 15,
"text": "\\mbox{sgn}(\\rho_n)=(-1)^{\\lfloor n/2 \\rfloor}=\\begin{cases}\n+1 & n \\equiv 0,1 \\pmod{4}\\\\\n-1 & n \\equiv 2,3 \\pmod{4}\n\\end{cases}"
},
{
"math_id": 16,
"text": "\\rho_0"
},
{
"math_id": 17,
"text": "\\rho_1"
},
{
"math_id": 18,
"text": "\\rho_2"
},
{
"math_id": 19,
"text": "(23) \\in S_4"
}
] | https://en.wikipedia.org/wiki?curid=10936783 |
1093768 | Electron-beam lithography | Lithographic technique that uses a scanning beam of electrons
Electron-beam lithography (often abbreviated as e-beam lithography or EBL) is the practice of scanning a focused beam of electrons to draw custom shapes on a surface covered with an electron-sensitive film called a resist (exposing). The electron beam changes the solubility of the resist, enabling selective removal of either the exposed or non-exposed regions of the resist by immersing it in a solvent (developing). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching.
The primary advantage of electron-beam lithography is that it can draw custom patterns (direct-write) with sub-10 nm resolution. This form of maskless lithography has high resolution but low throughput, limiting its usage to photomask fabrication, low-volume production of semiconductor devices, and research and development.
Systems.
Electron-beam lithography systems used in commercial applications are dedicated e-beam writing systems that are very expensive (> US$1M). For research applications, it is very common to convert an electron microscope into an electron beam lithography system using relatively low cost accessories (< US$100K). Such converted systems have produced linewidths of ~20 nm since at least 1990, while current dedicated systems have produced linewidths on the order of 10 nm or smaller.
Electron-beam lithography systems can be classified according to both beam shape and beam deflection strategy. Older systems used Gaussian-shaped beams and scanned these beams in a raster fashion. Newer systems use shaped beams that can be deflected to various positions in the writing field (also known as vector scan).
Electron sources.
Lower-resolution systems can use thermionic sources (cathode), which are usually formed from lanthanum hexaboride. However, systems with higher-resolution requirements need to use field electron emission sources, such as heated W/ZrO2 for lower energy spread and enhanced brightness. Thermal field emission sources are preferred over cold emission sources, in spite of the former's slightly larger beam size, because they offer better stability over typical writing times of several hours.
Lenses.
Both electrostatic and magnetic lenses may be used. However, electrostatic lenses have more aberrations and so are not used for fine focusing. There is currently no mechanism to make achromatic electron beam lenses, so extremely narrow dispersions of the electron beam energy are needed for finest focusing.
Stage, stitching and alignment.
Typically, for very small beam deflections, electrostatic deflection "lenses" are used; larger beam deflections require electromagnetic scanning. Because of deflection inaccuracy and the finite number of steps in the exposure grid, the writing field is of the order of 100 micrometres to 1 mm. Larger patterns require stage moves. An accurate stage is critical for stitching (tiling writing fields exactly against each other) and pattern overlay (aligning a pattern to a previously made one).
Electron beam write time.
The minimum time to expose a given area for a given dose is given by the following formula:
formula_0
where formula_1 is the time to expose the object (can be divided into exposure time/step size), formula_2 is the beam current, formula_3 is the dose and formula_4 is the area exposed.
For example, assuming an exposure area of 1 cm², a dose of 10⁻³ coulombs/cm², and a beam current of 10⁻⁹ amperes, the resulting minimum write time would be 10⁶ seconds (about 12 days). This minimum write time does not include time for the stage to move back and forth, time for the beam to be blanked (blocked from the wafer during deflection), or time for other possible beam corrections and adjustments in the middle of writing. To cover the 700 cm² surface area of a 300 mm silicon wafer, the minimum write time would extend to 7×10⁸ seconds, about 22 years. This is a factor of about 10 million times slower than current optical lithography tools. It is clear that throughput is a serious limitation for electron beam lithography, especially when writing dense patterns over a large area.
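The estimate above is simply the relation T = D·A / I evaluated for particular numbers. A small sketch that reproduces it (the dose, current, and areas are the illustrative values from the text):

```python
# Minimum exposure time from D * A = T * I, ignoring stage moves and blanking.

def write_time_seconds(dose_c_per_cm2, area_cm2, beam_current_a):
    return dose_c_per_cm2 * area_cm2 / beam_current_a

t_1cm2  = write_time_seconds(1e-3, 1.0,   1e-9)
t_wafer = write_time_seconds(1e-3, 700.0, 1e-9)
print(f"{t_1cm2:.0e} s  (~{t_1cm2 / 86400:.0f} days)")      # 1e6 s, ~12 days
print(f"{t_wafer:.0e} s (~{t_wafer / 3.15e7:.0f} years)")    # 7e8 s, ~22 years
```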
E-beam lithography is not suitable for high-volume manufacturing because of its limited throughput. The smaller field of electron beam writing makes for very slow pattern generation compared with photolithography (the current standard) because more exposure fields must be scanned to form the final pattern area (≤ mm² for electron beam vs. ≥ 40 mm² for an optical mask projection scanner). The stage moves in between field scans. The electron beam field is small enough that a rastering or serpentine stage motion is needed to pattern a 26 mm × 33 mm area for example, whereas in a photolithography scanner only a one-dimensional motion of a 26 mm × 2 mm slit field would be required.
Currently an optical maskless lithography tool is much faster than an electron beam tool used at the same resolution for photomask patterning.
Shot noise.
As feature sizes shrink, the number of incident electrons at fixed dose also shrinks. Once the number drops to roughly 10,000, shot noise effects become predominant, leading to substantial natural dose variation within a large feature population. With each successive process node, as the feature area is halved, the minimum dose must double to maintain the same noise level. Consequently, the tool throughput would be halved with each successive process node.
Note: 1 ppm of population is about 5 standard deviations away from the mean dose.
"Ref.: SPIE Proc. 8683-36 (2013)"
Shot noise is a significant consideration even for mask fabrication. For example, a commercial mask e-beam resist like FEP-171 would use doses less than 10 μC/cm², but this leads to noticeable shot noise for a target critical dimension (CD) even on the order of ~200 nm on the mask. CD variation can be on the order of 15–20% for sub-20 nm features.
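The scale of the effect follows directly from counting electrons: the number delivered to a feature is N = D·A/e, and the relative Poisson fluctuation is 1/√N. The dose and feature sizes below are illustrative assumptions used only to show the orders of magnitude.

```python
# Electrons per square feature at a given dose, and the relative shot noise.
ELECTRON_CHARGE = 1.602e-19        # coulombs

def electrons_per_feature(dose_uc_per_cm2, feature_nm):
    area_cm2 = (feature_nm * 1e-7) ** 2           # nm -> cm, then squared
    charge = dose_uc_per_cm2 * 1e-6 * area_cm2    # coulombs deposited
    return charge / ELECTRON_CHARGE

for size in (40, 20, 10):                          # illustrative feature sizes, nm
    n = electrons_per_feature(10.0, size)          # 10 uC/cm^2, illustrative dose
    print(f"{size} nm feature: {n:.0f} electrons, 1/sqrt(N) ≈ {100 / n**0.5:.1f}%")
```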
Defects in electron-beam lithography.
Despite the high resolution of electron-beam lithography, the generation of defects during electron-beam lithography is often not considered by users. Defects may be classified into two categories: data-related defects, and physical defects.
Data-related defects may be classified further into two sub-categories. Blanking or deflection errors occur when the electron beam is not deflected properly when it is supposed to, while shaping errors occur in variable-shaped beam systems when the wrong shape is projected onto the sample. These errors can originate either from the electron optical control hardware or the input data that was taped out. As might be expected, larger data files are more susceptible to data-related defects.
Physical defects are more varied, and can include sample charging (either negative or positive), backscattering calculation errors, dose errors, fogging (long-range reflection of backscattered electrons), outgassing, contamination, beam drift and particles. Since the write time for electron beam lithography can easily exceed a day, "randomly occurring" defects are more likely to occur. Here again, larger data files can present more opportunities for defects.
Photomask defects largely originate during the electron beam lithography used for pattern definition.
Electron energy deposition in matter.
The primary electrons in the incident beam lose energy upon entering a material through inelastic scattering or collisions with other electrons. In such a collision the momentum transfer from the incident electron to an atomic electron can be expressed as formula_5, where "b" is the distance of closest approach between the electrons, and "v" is the incident electron velocity. The energy transferred by the collision is given by formula_6, where "m" is the electron mass and "E" is the incident electron energy, given by formula_7. By integrating over all values of "T" between the lowest binding energy, "E0" and the incident energy, one obtains the result that the total cross section for collision is inversely proportional to the incident energy formula_8, and proportional to "1/E0 – 1/E". Generally, "E » E0", so the result is essentially inversely proportional to the binding energy.
By using the same integration approach, but over the range "2E0" to "E", one obtains by comparing cross-sections that half of the inelastic collisions of the incident electrons produce electrons with kinetic energy greater than "E0". These secondary electrons are capable of breaking bonds (with binding energy "E0") at some distance away from the original collision. Additionally, they can generate additional, lower energy electrons, resulting in an electron cascade. Hence, it is important to recognize the significant contribution of secondary electrons to the spread of the energy deposition.
In general, for a molecule AB:
e− + AB → AB− → A + B−
This reaction, also known as "electron attachment" or "dissociative electron attachment" is most likely to occur after the electron has essentially slowed to a halt, since it is easiest to capture at that point. The cross-section for electron attachment is inversely proportional to electron energy at high energies, but approaches a maximum limiting value at zero energy. On the other hand, it is already known that the mean free path at the lowest energies (few to several eV or less, where dissociative attachment is significant) is well over 10 nm, thus limiting the ability to consistently achieve resolution at this scale.
Resolution capability.
With today's electron optics, electron beam widths can routinely go down to a few nanometers. This is limited mainly by aberrations and space charge. However, the feature resolution limit is determined not by the beam size but by forward scattering (or effective beam broadening) in the resist, while the pitch resolution limit is determined by secondary electron travel in the resist. This point was driven home by a 2007 demonstration of double patterning using electron beam lithography in the fabrication of 15 nm half-pitch zone plates. Although a 15 nm feature was resolved, a 30 nm pitch was still difficult to do due to secondary electrons scattering from the adjacent feature. The use of double patterning allowed the spacing between features to be wide enough for the secondary electron scattering to be significantly reduced.
The forward scattering can be decreased by using higher energy electrons or thinner resist, but the generation of secondary electrons is inevitable. It is now recognized that for insulating materials like PMMA, low energy electrons can travel quite a far distance (several nm is possible). This is due to the fact that below the ionization potential the only energy loss mechanism is mainly through phonons and polarons. Although the latter is basically an ionic lattice effect, polaron hopping can extend as far as 20 nm. The travel distance of secondary electrons is not a fundamentally derived physical value, but a statistical parameter often determined from many experiments or Monte Carlo simulations down to < 1 eV. This is necessary since the energy distribution of secondary electrons peaks well below 10 eV. Hence, the resolution limit is not usually cited as a well-fixed number as with an optical diffraction-limited system. Repeatability and control at the practical resolution limit often require considerations not related to image formation, e.g., resist development and intermolecular forces.
A study by the College of Nanoscale Science and Engineering (CNSE) presented at the 2013 EUVL Workshop indicated that, as a measure of electron blur, 50–100 eV electrons easily penetrated beyond 10 nm of resist thickness in PMMA or a commercial resist. Furthermore dielectric breakdown discharge is possible. More recent studies have indicated that 20 nm resist thickness could be penetrated by low energy electrons (of sufficient dose) and sub-20 nm half-pitch electron-beam lithography already required double patterning.
As of 2022, a state-of-the-art electron multi-beam writer achieves about a 20 nm resolution.
Scattering.
In addition to producing secondary electrons, primary electrons from the incident beam with sufficient energy to penetrate the resist can be multiply scattered over large distances from underlying films and/or the substrate. This leads to exposure of areas at a significant distance from the desired exposure location. For thicker resists, as the primary electrons move forward, they have an increasing opportunity to scatter laterally from the beam-defined location. This scattering is called forward scattering. Sometimes the primary electrons are scattered at angles exceeding 90 degrees, i.e., they no longer advance further into the resist. These electrons are called backscattered electrons and have the same effect as long-range flare in optical projection systems. A large enough dose of backscattered electrons can lead to complete exposure of resist over an area much larger than defined by the beam spot.
Proximity effect.
The smallest features produced by electron-beam lithography have generally been isolated features, as nested features exacerbate the proximity effect, whereby electrons from exposure of an adjacent region spill over into the exposure of the currently written feature, effectively enlarging its image, and reducing its contrast, i.e., difference between maximum and minimum intensity. Hence, nested feature resolution is harder to control. For most resists, it is difficult to go below 25 nm lines and spaces, and a limit of 20 nm lines and spaces has been found. In actuality, though, the range of secondary electron scattering is quite far, sometimes exceeding 100 nm, but becoming very significant below 30 nm.
The proximity effect is also manifest by secondary electrons leaving the top surface of the resist and then returning some tens of nanometers distance away.
Proximity effects (due to electron scattering) can be addressed by solving the inverse problem and calculating the exposure function "E(x,y)" that leads to a dose distribution as close as possible to the desired dose "D(x,y)" when convolved by the scattering distribution point spread function "PSF(x,y)". However, it must be remembered that an error in the applied dose (e.g., from shot noise) would cause the proximity effect correction to fail.
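A heavily simplified, one-dimensional sketch of this dose-correction idea is shown below. The point spread function is modelled as a sharp forward component plus a broad Gaussian backscatter tail; all widths, weights, and pattern dimensions are illustrative assumptions, and real correction software works in two dimensions with measured or simulated PSFs and constrained dose/shape modulation.

```python
import numpy as np

# 1D proximity-effect correction sketch: solve E * PSF ≈ D for the exposure E.
n = 4096                                      # 1 nm grid spacing
x = np.arange(n) - n // 2
eta, sigma_back = 0.5, 300.0                  # backscatter weight and range (nm)

back = np.exp(-x.astype(float) ** 2 / (2 * sigma_back ** 2))
back /= back.sum()
psf = np.zeros(n)
psf[n // 2] = 1.0 - eta                       # sharp forward (unscattered) part
psf += eta * back                             # long-range backscatter tail
psf_hat = np.fft.fft(np.fft.ifftshift(psf))   # centred for circular convolution

desired = np.zeros(n)                         # a dense block of 30 nm lines/spaces
for start in range(1800, 2300, 60):
    desired[start:start + 30] = 1.0

naive = np.fft.ifft(np.fft.fft(desired) * psf_hat).real
exposure = np.fft.ifft(np.fft.fft(desired) / psf_hat).real   # corrected dose map
delivered = np.fft.ifft(np.fft.fft(exposure) * psf_hat).real

print(np.abs(naive - desired).max())      # sizeable: underdosed lines, dosed spaces
print(np.abs(delivered - desired).max())  # ~0 after correction (ideal-dose case)
# Note: the exact solution can call for negative dose where desired = 0, which a
# real tool cannot deliver; practical correction schemes constrain the doses.
```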
Charging.
Since electrons are charged particles, they tend to charge the substrate negatively unless they can quickly gain access to a path to ground. For a high-energy beam incident on a silicon wafer, virtually all the electrons stop in the wafer where they can follow a path to ground. However, for a quartz substrate such as a photomask, the embedded electrons will take a much longer time to move to ground. Often the negative charge acquired by a substrate can be compensated or even exceeded by a positive charge on the surface due to secondary electron emission into the vacuum. The presence of a thin conducting layer above or below the resist is generally of limited use for high energy (50 keV or more) electron beams, since most electrons pass through the layer into the substrate. The charge dissipation layer is generally useful only around or below 10 keV, since the resist is thinner and most of the electrons either stop in the resist or close to the conducting layer. However, they are of limited use due to their high sheet resistance, which can lead to ineffective grounding.
The range of low-energy secondary electrons (the largest component of the free electron population in the resist-substrate system) which can contribute to charging is not a fixed number but can vary from 0 to as high as 50 nm (see section New frontiers and extreme ultraviolet lithography). Hence, resist-substrate charging is not repeatable and is difficult to compensate consistently. Negative charging deflects the electron beam away from the charged area while positive charging deflects the electron beam toward the charged area.
Electron-beam resist performance.
Due to the scission efficiency generally being an order of magnitude higher than the crosslinking efficiency, most polymers used for positive-tone electron-beam lithography will also crosslink (and therefore become negative tone) at doses an order of magnitude higher than the doses used to cause scission in the polymer for positive tone exposure. In the case of PMMA, for electron doses up to about 1000 μC/cm², the gradation curve corresponds to that of a “normal” positive process. Above 2000 μC/cm², the recombinant crosslinking process prevails, and at about 7000 μC/cm² the layer is completely crosslinked, which makes the layer more insoluble than the unexposed initial layer. If negative PMMA structures are to be used, a stronger developer than for the positive process is required. Such large dose increases may be required to avoid shot noise effects.
A study performed at the Naval Research Laboratory indicated that low-energy (10–50 eV) electrons were able to damage ~30 nm thick PMMA films. The damage was manifest as a loss of material.
In 2018, a thiol-ene resist was developed that features native reactive surface groups, which allows the direct functionalization of the resist surface with biomolecules.
New frontiers.
To get around the secondary electron generation, it will be imperative to use low-energy electrons as the primary radiation to expose resist. Ideally, these electrons should have energies on the order of not much more than several eV in order to expose the resist without generating any secondary electrons, since they will not have sufficient excess energy. Such exposure has been demonstrated using a scanning tunneling microscope as the electron beam source. The data suggest that electrons with energies as low as 12 eV can penetrate 50 nm thick polymer resist. The drawback to using low energy electrons is that it is hard to prevent spreading of the electron beam in the resist. Low energy electron optical systems are also hard to design for high resolution. Coulomb inter-electron repulsion always becomes more severe for lower electron energy.
Another alternative in electron-beam lithography is to use extremely high electron energies (at least 100 keV) to essentially "drill" or sputter the material. This phenomenon has been observed frequently in transmission electron microscopy. However, this is a very inefficient process, due to the inefficient transfer of momentum from the electron beam to the material. As a result, it is a slow process, requiring much longer exposure times than conventional electron beam lithography. Also high energy beams always bring up the concern of substrate damage.
Interference lithography using electron beams is another possible path for patterning arrays with nanometer-scale periods. A key advantage of using electrons over photons in interferometry is the much shorter wavelength for the same energy.
Despite the various intricacies and subtleties of electron beam lithography at different energies, it remains the most practical way to concentrate the most energy into the smallest area.
There has been significant interest in the development of multiple electron beam approaches to lithography in order to increase throughput. This work has been supported by SEMATECH and start-up companies such as Multibeam Corporation, Mapper and IMS. IMS Nanofabrication has commercialized the multibeam-maskwriter and started a rollout in 2016.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " D \\cdot A = T\\cdot I \\,"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "dp=2e^2/bv"
},
{
"math_id": 6,
"text": "T = (dp)^2/2m = e^4/Eb^2"
},
{
"math_id": 7,
"text": "E=(1/2) mv^2"
},
{
"math_id": 8,
"text": "E"
}
] | https://en.wikipedia.org/wiki?curid=1093768 |
10938074 | Schönberg–Chandrasekhar limit | In stellar astrophysics, the Schönberg–Chandrasekhar limit is the maximum mass of a non-fusing, isothermal core that can support an enclosing envelope. It is expressed as the ratio of the core mass to the total mass of the core and envelope. Estimates of the limit depend on the models used and the assumed chemical compositions of the core and envelope; typical values given are from 0.10 to 0.15 (10% to 15% of the total stellar mass). This is the maximum to which a helium-filled core can grow, and if this limit is exceeded, as can only happen in massive stars, the core collapses, releasing energy that causes the outer layers of the star to expand to become a red giant. It is named after the astrophysicists Subrahmanyan Chandrasekhar and Mario Schönberg, who estimated its value in a 1942 paper. They estimated it to be
formula_0
where formula_1 is the mass, formula_2 is the mean molecular weight, the index "c" denotes the core, and the index "e" denotes the envelope.
The Schönberg–Chandrasekhar limit comes into play when fusion in a main-sequence star exhausts the hydrogen at the center of the star. The star then contracts until hydrogen fuses in a shell surrounding a helium-rich core, both of which are surrounded by an envelope consisting primarily of hydrogen. The core increases in mass as the shell burns its way outwards through the star. If the star's mass is less than approximately 1.5 solar masses, the core will become degenerate before the Schönberg–Chandrasekhar limit is reached, and, on the other hand, if the mass is greater than approximately 6 solar masses, the star leaves the main sequence with a core mass already greater than the Schönberg–Chandrasekhar limit so its core is never isothermal before helium fusion. In the remaining case, where the mass is between 1.5 and 6 solar masses, the core will grow until the limit is reached, at which point it will contract rapidly until helium starts to fuse in the core.
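As a numerical aside, the Schönberg–Chandrasekhar estimate is just the simple scaling above; the sketch below evaluates it for an illustrative mean-molecular-weight ratio (the typical range of 0.10–0.15 quoted earlier comes from more detailed models and assumed compositions).

```python
# Schönberg–Chandrasekhar estimate: q_SC = 0.37 * (mu_envelope / mu_core)^2

def schoenberg_chandrasekhar_fraction(mu_envelope, mu_core):
    """Maximum isothermal, non-fusing core mass as a fraction of total mass."""
    return 0.37 * (mu_envelope / mu_core) ** 2

# Illustrative ratio mu_e / mu_c = 0.55 (an assumption, not a fitted value):
print(schoenberg_chandrasekhar_fraction(0.55, 1.0))   # ≈ 0.11
```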
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\operatorname{\\left(\\frac{\\operatorname{M}_{c}}{M}\\right)}_{SC} = 0.37 \\left(\\frac{\\operatorname{\\mu}_{e}}{\\operatorname{\\mu}_{c}}\\right)^2"
},
{
"math_id": 1,
"text": "\\operatorname{M}"
},
{
"math_id": 2,
"text": "\\operatorname{\\mu}"
}
] | https://en.wikipedia.org/wiki?curid=10938074 |
10939 | Formal language | Sequence of words formed by specific rules
In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar.
The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called "well-formed words" or "well-formed formulas". A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules.
In computer science, formal languages are used, among others, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way.
The field of formal language theory studies primarily the purely syntactic aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.
History.
In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes.
Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in "Begriffsschrift" (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a "formal language of pure thought."
In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble", and later devised the canonical system for the creation of formal languages.
In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.
Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus–Naur form to describe the syntax of a high-level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL 60 Report, in which he used Backus–Naur form to describe the formal part of ALGOL 60.
Words over an alphabet.
An "alphabet", in the context of formal languages, can be any set; its elements are called "letters". An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode.
A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the "empty word", which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word.
In some applications, especially in logic, the alphabet is also known as the "vocabulary" and words are known as "formulas" or "sentences"; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.
Definition.
A formal language "L" over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of 'well-formed expressions'.
In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant.
While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it.
Examples.
The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:
Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc.
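The rule list itself is not reproduced here, but the description above (numerals without leading zeros, additions joined by "+", and at most one "=" between two such additions) is enough to sketch a membership test. The regular expression below is an illustrative reconstruction of those rules, not a quotation of them.

```python
import re

# Membership test for the example language L over {0,...,9, +, =}.
NUMERAL = r"(0|[1-9][0-9]*)"                     # natural numbers, no leading zeros
EXPR = rf"{NUMERAL}(\+{NUMERAL})*"               # well-formed additions
LANGUAGE_L = re.compile(rf"^{EXPR}(={EXPR})?$")  # optional single equality

for word in ("23+4=555", "0", "7+0+3", "=234=+", "007", ""):
    print(repr(word), bool(LANGUAGE_L.match(word)))
```

Note that whether "23+4=555" belongs to L is purely a question of syntax; the checker does not (and need not) evaluate whether the equation is true.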
Constructions.
For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅).
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", ... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing "L" = {a, b, ab, cba}. Here are some examples of formal languages:
Language-specification formalisms.
Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as
Typical questions asked about such formalisms include:
Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications.
Operations on languages.
Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations.
Examples: suppose formula_0 and formula_1 are languages over some common alphabet formula_2.
Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.
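For small finite languages these operations can be experimented with directly by representing a language as a set of strings. The words below are arbitrary illustrative choices.

```python
# Element-wise operations on finite languages represented as sets of words.
L1 = {"a", "ab"}
L2 = {"b", ""}                     # includes the empty word

def concat(A, B):
    """Concatenation A·B = { vw : v in A, w in B }."""
    return {v + w for v in A for w in B}

def reverse(A):
    """Reversal A^R = { reversed w : w in A }."""
    return {w[::-1] for w in A}

print(sorted(concat(L1, L2)))      # ['a', 'ab', 'abb']  ('a'+'b' = 'ab'+'')
print(sorted(L1 | L2))             # union
print(sorted(L1 & L2))             # intersection: []
print(sorted(reverse(L1)))         # ['a', 'ba']
```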
Applications.
Programming languages.
A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like codice_0, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like codice_1, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built.
Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute.
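As a toy illustration of the lexical-analysis step (not tied to codice_0, codice_1, or any real compiler), token classes can be specified by regular expressions and matched in sequence. The token set below is an assumption chosen for the example.

```python
import re

# Toy lexical analyzer: each token class is given by a regular expression.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source):
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":          # discard whitespace
            yield match.lastgroup, match.group()

print(list(tokenize("x1 = (alpha + 42) * 7")))
```

A parser would then consume this token stream and check it against the programming language's grammar, typically building an abstract syntax tree as described above.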
Formal theories, systems, and proofs.
In mathematical logic, a "formal theory" is a set of sentences expressed in a formal language.
A "formal system" (also called a "logical calculus", or a "logical system") consists of a formal language together with a deductive apparatus (also called a "deductive system"). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems formula_15 and formula_16 may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not another for instance).
A "formal proof" or "derivation" is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions.
Interpretations and models.
Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value.
The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a "model" for a formula is an interpretation of terms such that the formula becomes true.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "L_1"
},
{
"math_id": 1,
"text": "L_2"
},
{
"math_id": 2,
"text": "\\Sigma"
},
{
"math_id": 3,
"text": "L_1 \\cdot L_2"
},
{
"math_id": 4,
"text": "vw"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "w"
},
{
"math_id": 7,
"text": "L_1 \\cap L_2"
},
{
"math_id": 8,
"text": "\\neg L_1"
},
{
"math_id": 9,
"text": "\\varepsilon^R = \\varepsilon"
},
{
"math_id": 10,
"text": "w = \\sigma_1 \\cdots \\sigma_n"
},
{
"math_id": 11,
"text": "\\sigma_1, \\ldots, \\sigma_n"
},
{
"math_id": 12,
"text": "w^R = \\sigma_n \\cdots \\sigma_1"
},
{
"math_id": 13,
"text": "L"
},
{
"math_id": 14,
"text": "L^R = \\{ w^R \\mid w \\in L \\}"
},
{
"math_id": 15,
"text": "\\mathcal{FS}"
},
{
"math_id": 16,
"text": "\\mathcal{FS'}"
}
] | https://en.wikipedia.org/wiki?curid=10939 |
10940575 | Isopentenyl-diphosphate delta isomerase | Class of enzymes
Isopentenyl pyrophosphate isomerase (EC 5.3.3.2, IPP isomerase), also known as Isopentenyl-diphosphate delta isomerase, is an isomerase that catalyzes the conversion of the relatively un-reactive isopentenyl pyrophosphate (IPP) to the more-reactive electrophile dimethylallyl pyrophosphate (DMAPP). This isomerization is a key step in the biosynthesis of isoprenoids through the mevalonate pathway and the MEP pathway.
isopentenyl diphosphate formula_0 dimethylallyl diphosphate
This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is isopentenyl-diphosphate Delta3-Delta2-isomerase. Other names in common use include isopentenylpyrophosphate Delta-isomerase, methylbutenylpyrophosphate isomerase, and isopentenylpyrophosphate isomerase.
Enzyme mechanism.
IPP isomerase catalyzes the isomerization of IPP to DMAPP by an antarafacial transposition of hydrogen. The empirical evidence suggests that this reaction proceeds by a protonation/deprotonation mechanism, with the addition of a proton to the "re"-face of the inactivated C3-C4 double bond resulting in a transient carbocation intermediate. The removal of the pro-R proton from C2 forms the C2-C3 double bond of DMAPP.
Enzyme structure.
Crystallographic studies have observed that the active form of IPP isomerase is a monomer with alternating α-helices and β-sheets. The active site of IPP isomerase is deeply buried within the enzyme and consists of a glutamic acid residue and a cysteine residue that interact with opposite sides of the IPP substrate, consistent with the antarafacial stereochemistry of isomerization. The origin of the initial protonation step has not been conclusively established. Recent evidence suggests that the glutamic acid residue is involved in the protonating step despite the observation that its carboxylic acid side-chain is stabilized in its carboxylate form. This discrepancy has been addressed by the discovery of a water molecule in the active site of human IPP isomerase, suggesting a mechanism where the glutamine residue polarizes the double bond of IPP and makes it more susceptible to protonation by water.
IPP isomerase also requires a divalent cation to fold into its active conformation. The enzyme contains several amino acids, including the catalytic glutamate, that are involved in coordinating with Mg2+ or Mn2+. The coordination of the metal cation to the glutamate residue stabilizes the carbocation intermediate after protonation.
Structural studies.
As of late 2007, 25 structures have been solved for this class of enzymes, with PDB accession codes 1HX3, 1HZT, 1I9A, 1NFS, 1NFZ, 1OW2, 1P0K, 1P0N, 1PPV, 1PPW, 1PVF, 1Q54, 1R67, 1VCF, 1VCG, 1X83, 1X84, 2B2K, 2DHO, 2G73, 2G74, 2I6K, 2ICJ, 2ICK, and 2PNY.
Biological function.
The protonation of an inactivated double bond is rarely seen in nature, highlighting the unique catalytic mechanism of IPP isomerase. The isomerization of IPP to DMAPP is a crucial step in the synthesis of isoprenoids and isoprenoid derivatives, compounds that play vital roles in the biosynthetic pathways of all living organisms. Because of the importance of the mevalonate pathway in isoprenoid biosynthesis, IPP isomerase is found in a variety of different cellular compartments, including plastids and mammalian mitochondria.
Disease relevance.
Mutations in "IDI1", the gene that codes for IPP isomerase 1, have been implicated in decreased viability in a number of organisms, including the yeast "Saccharomyces cerevisiae", the nematode "Caenorhabditis elegans" and the plant "Arabidopsis thaliana". While there have been no evidence directly implicating "IDI1" mutations in human disease, genomic analysis has identified a copy-number gain near two IPP isomerase genes in a substantial proportion of patients with sporadic amyotrophic lateral sclerosis, suggesting that the isomerase may play a role in this disease.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10940575 |
10942172 | Orientation entanglement | Spinor topology
In mathematics and physics, the notion of orientation entanglement is sometimes used to develop intuition relating to the geometry of spinors or alternatively as a concrete realization of the failure of the special orthogonal groups to be simply connected.
Elementary description.
Spatial vectors alone are not sufficient to describe fully the properties of rotations in space.
Consider the following example. A coffee cup is suspended in a room by a pair of elastic rubber bands fixed to the walls of the room. The cup is rotated by its handle through a full twist of 360°, so that the handle is brought all the way around the central vertical axis of the cup and back to its original position.
Note that after this rotation, the cup has been returned to its original orientation, but that its orientation with respect to the walls is "twisted". In other words, if we lower the coffee cup to the floor of the room, the two bands will coil around each other in one full twist of a double helix. This is an example of orientation entanglement: the new orientation of the coffee cup embedded in the room is not actually the same as the old orientation, as evidenced by the twisting of the rubber bands. Stated another way, the orientation of the coffee cup has become entangled with the orientation of the surrounding walls.
Clearly the geometry of spatial vectors alone is insufficient to express the orientation entanglement (the twist of the rubber bands). Consider drawing a vector across the cup. A full rotation will move the vector around so that the new orientation of the vector is the same as the old one. The vector alone doesn't know that the coffee cup is entangled with the walls of the room.
In fact, the coffee cup is inextricably entangled. There is no way to untwist the bands without rotating the cup. However, consider what happens instead when the cup is rotated, not through just one 360° turn, but "two" 360° turns for a total rotation of 720°. Then if the cup is lowered to the floor, the two rubber bands coil around each other in two full twists of a double helix. If the cup is now brought up through the center of one coil of this helix, and passed onto its other side, the twist disappears. The bands are no longer coiled about each other, even though no additional rotation had to be performed. (This experiment is more easily performed with a ribbon or belt. See below.)
Thus, whereas the orientation of the cup was twisted with respect to the walls after a rotation of only 360°, it was no longer twisted after a rotation of 720°. By only considering the vector attached to the cup, it is impossible to distinguish between these two cases, however. It is only when we attach a spinor to the cup that we can distinguish between the twisted and untwisted case.
In this situation, a spinor is a sort of "polarized" vector. In the adjacent diagram, a spinor can be represented as a vector whose head is a flag lying on one side of a Möbius strip, pointing inward. Initially, suppose that the flag is on top of the strip as shown. As the coffee cup is rotated it carries the spinor, and its flag, along the strip. If the cup is rotated through 360°, the spinor returns to the initial position, but the flag is now underneath the strip, pointing outward. It takes another 360° rotation in order to return the flag to its original orientation.
A detailed bridge between the above, and the formal mathematics can be found in the article on tangloids.
Formal details.
In three dimensions, the problem illustrated above corresponds to the fact that the Lie group SO(3) is not simply connected. Mathematically, one can tackle this problem by exhibiting the special unitary group, SU(2), which is also the spin group in three Euclidean dimensions, as a double cover of SO(3). If "X" = ("x"1, "x"2, "x"3) is a vector in R3, then we identify "X" with the 2 × 2 matrix with complex entries
formula_0
Note that −det("X") gives the square of the Euclidean length of "X" regarded as a vector, and that "X" is a trace-free, or better, trace-zero Hermitian matrix.
The unitary group acts on "X" via
formula_1
where "M" ∈ SU(2). Note that, since "M" is unitary,
formula_2, and
formula_3 is trace-zero Hermitian.
Hence SU(2) acts via rotation on the vectors "X". Conversely, since any change of basis which sends trace-zero Hermitian matrices to trace-zero Hermitian matrices must be unitary, it follows that every rotation also lifts to SU(2). However, each rotation is obtained from a pair of elements "M" and −"M" of SU(2). Hence SU(2) is a double-cover of SO(3). Furthermore, SU(2) is easily seen to be itself simply connected by realizing it as the group of unit quaternions, a space homeomorphic to the 3-sphere.
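The sign ambiguity is easy to see numerically. The sketch below builds the SU(2) element for a rotation about an arbitrarily chosen axis using the standard Pauli matrices — a labeling convention that differs slightly from the matrix "X" written above — and shows that a 360° rotation gives −I while a 720° rotation gives +I, even though both act as the identity rotation on vectors.

```python
import numpy as np

# Standard Pauli matrices.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_rotation(theta, axis):
    """SU(2) element  M = cos(theta/2) I - i sin(theta/2) (n . sigma)."""
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    n_dot_sigma = sum(n[i] * sigma[i] for i in range(3))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sigma

full_turn = su2_rotation(2 * np.pi, (0, 0, 1))     # illustrative axis choice
print(np.round(full_turn, 6))                      # -I : one full turn
print(np.round(full_turn @ full_turn, 6))          # +I : two full turns
```

Since "M" and −"M" implement the same rotation "X" ↦ "MXM"†, a vector cannot distinguish the two cases, but a spinor, which transforms with a single factor of "M", can.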
A unit quaternion has the cosine of half the rotation angle as its scalar part and the sine of half the rotation angle multiplying a unit vector along some rotation axis (here assumed fixed) as its vector part (also called imaginary part, see Euler–Rodrigues formula). If the initial orientation of a rigid body (with unentangled connections to its fixed surroundings) is identified with a unit quaternion having zero vector part and +1 for the scalar part, then after one complete rotation (2π rad) the vector part returns to zero and the scalar part has become −1 (entangled). After two complete rotations (4π rad) the vector part again returns to zero and the scalar part returns to +1 (unentangled), completing the cycle. | [
{
"math_id": 0,
"text": "X = \\left(\\begin{matrix} x_1 & x_2 - ix_3 \\\\ x_2 + ix_3 & -x_1 \\end{matrix}\\right)"
},
{
"math_id": 1,
"text": "X \\mapsto MXM^\\dagger"
},
{
"math_id": 2,
"text": "\\det\\left(MXM^\\dagger\\right) = \\det(X)"
},
{
"math_id": 3,
"text": "MXM^\\dagger"
}
] | https://en.wikipedia.org/wiki?curid=10942172 |
10942725 | L-xylulose reductase | Enzyme
Dicarbonyl/L-xylulose reductase, also known as carbonyl reductase II, is an enzyme that in human is encoded by the DCXR gene located on chromosome 17.
Structure.
The DCXR gene encodes a membrane protein that is approximately 34 kDa in size and composed of 224 amino acids. The protein is highly expressed in the kidney and localizes to the cytoplasmic membrane.
Function.
DCXR catalyzes the reduction of L-xylulose as well as a number of pentoses, tetroses, trioses, and alpha-dicarbonyl compounds. The enzyme is involved in carbohydrate metabolism, glucose metabolism, the uronate cycle and may play a role in water absorption and cellular osmoregulation in the proximal renal tubules by producing xylitol.
In enzymology, an L-xylulose reductase (EC 1.1.1.10) is an enzyme that catalyzes the chemical reaction
xylitol + NADP+ formula_0 L-xylulose + NADPH + H+
Thus, the two substrates of this enzyme are xylitol and NADP+, whereas its 3 products are L-xylulose, NADPH, and H+.
This enzyme belongs to the superfamily of short-chain oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is xylitol:NADP+ 2-oxidoreductase (L-xylulose-forming).
Clinical significance.
A deficiency is responsible for pentosuria. Insufficient L-xylulose reductase activity causes an inborn error of metabolism characterized by excessive urinary excretion of L-xylulose.
Over-expression and ectopic expression of the protein may be associated with prostate adenocarcinoma.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10942725 |
10942765 | Wang Ganchang | Chinese nuclear physicist (1907–1998)
Wang Ganchang (; May 28, 1907 – December 10, 1998) was a Chinese nuclear physicist. He was one of the founding fathers of Chinese nuclear physics, cosmic rays and particle physics. Wang was also a leader in the fields of detonation physics experiments, anti-electromagnetic pulse technology, nuclear explosion detection, anti-nuclear radiation technology, and laser stimulated nuclear explosion technologies.
For his numerous contributions, Wang is considered among the top leaders, pioneers and scientists of the Chinese nuclear weapons program. He was elected a member of the Chinese Academy of Sciences, and was a member of the Chinese Communist party.
In 1930, Wang first proposed the use of a cloud chamber to study a new type of high-energy ray induced by the bombardment of beryllium with α particles. In 1941 Wang first proposed the use of beta-capture to detect the neutrino. James Allen employed his suggestion and found evidence for the existence of the neutrino in 1942. Frederick Reines and Clyde Cowan detected the neutrino via the inverse beta-decay reaction in 1956, for which, forty years later, they were awarded the 1995 Nobel Prize in Physics.
Wang also led a group which discovered the anti-sigma minus hyperon particle at the Joint Institute for Nuclear Research, Dubna, Russia in 1959.
After May 1950, Wang became researcher and vice-director of the Institute of Modern Physics. He was also vice-director of the Soviet Joint Institute for Nuclear Research.
From spring of 1969 onwards, Wang held many high-level positions within Chinese academic and political organizations. He was vice-director of the Ninth Research Institute (二机部第九研究院), predecessor of the China Academy of Engineering Physics, director of the China Institute of Atomic Energy, deputy director of the Nuclear industry Science and Technology Commission (核工业部科技委), and second vice-chairman of the China Association for Science and Technology. He was also vice-chairman of the Chinese Physical Society and the first chairman of the Chinese Nuclear Society. Within the political sphere he was a member of the 3rd through 16th National People's Congress Standing Committees of the Chinese government.
In 2000, the Chinese Physical Society established five prizes in recognition of five pioneers of modern physics in China. The Wang Ganchang Prize is awarded to physicists in particle physics and inertial confinement fusion.
Early years.
Wang Ganchang was born in Zhitang (支塘镇枫塘湾), Changshu, Jiangsu Province on May 28, 1907. In 1924, he graduated from Pudong High School (浦东中学) in Shanghai. Subsequently, he studied English for six months while driving and repairing cars to sustain himself. He passed the entrance examinations for Tsinghua University in August 1928.
He graduated from the Physics Department of Tsinghua in June 1929, and served as an assistant professor from 1929 to 1930. With his thesis "On the daily change of radon gas" (《清华园周围氡气的强度及每天的变化》), he became the first Chinese scientist to publish on atmospheric research and radioactive experiments.
Overseas student in Germany.
In 1930 he went to study at the University of Berlin in Germany. As soon as he arrived in Berlin, he became aware of the Bothe report (博特报告) relating to the emission of a new type of high-energy neutral radiation induced by the bombardment of beryllium with α particles from a radioactive polonium source, which was non-ionizing but even more penetrating than the strongest gamma rays derived from radium. These were (wrongly) presumed to be gamma rays.
Wang suggested the use of a cloud chamber to study these particles. However, he could not perform this experiment during his time in Germany, since he lacked the support of his supervisor Lise Meitner. Instead, it was conducted one year later by the English physicist James Chadwick, who discovered a new type of particle, the neutron. Chadwick was subsequently awarded the 1935 Nobel Prize in Physics.
During or after his time in Germany, Wang worked briefly at UC Berkeley in the United States.
In 1934, Wang Ganchang received his Ph.D. with a thesis on β decay spectrum (German: Über die β-Spektren von ThB+C+C; Chinese:《ThB+C+C的β能谱》) under the supervision of Meitner. He returned to China in April of that year.
Upon his return to China.
He first worked at Shandong University as a physics professor from 1934 to 1936. He then became a professor at Zhejiang University and served as head of the Department of Physics there from October 1936 to 1950.
During World War II.
After the Marco Polo Bridge Incident in July 1937, the Japanese invasion of China forced Wang and other professors to retreat with all the faculty of Zhejiang University to the western mountainous rural areas of China to escape capture. Despite the difficult conditions, he nonetheless tried in 1939 to find evidence of tracks of nuclear fission caused by neutron bombardment of cadmium acid on photographic film.
In 1941, he first proposed an experiment to prove the existence of the neutrino by capturing K-electrons in nuclear reactions. Unfortunately, due to the war he was unable to conduct this experiment. Instead, fifteen years later in 1956, Frederick Reines and Clyde Cowan detected the neutrino through a different method involving the inverse beta-decay reaction. Nearly forty years later, Reines was awarded the 1995 Nobel Prize in Physics for this work.
After the founding of the People's Republic of China.
From April 1950 to 1956 Wang was a researcher at the Institute of Modern Physics at the Chinese Academy of Sciences and served as the Institute's deputy director from 1952. There, at the invitation of fellow researcher Qian Sanqiang, he began studies of cosmic rays with a circular 12-foot cloud chamber. In 1952, he designed a magnetic cloud chamber.
Professor Wang was the first to propose the establishment of a cosmic ray laboratory in China. From 1953 to 1956, he directed the Luoxue Mountain Cosmic Rays Research Center (落雪山宇宙线实验站) located 3185 meters above sea level in the mountainous region of Yunnan province.
His study of cosmic rays led him to publish his findings on neutral-meson decay in 1955. By 1957 he had collected more than 700 recordings of new types of particles.
The USSR years.
In order to develop the field of high energy physics in China, in 1956 the Chinese government began to send experts to the Joint Institute for Nuclear Research at Dubna in the Soviet Union to do field work and carry out preliminary design of particle accelerators. The agreement on the establishment of JINR was signed on March 26, 1956 in Moscow, with Wang Ganchang as one of the founders.
On April 4, 1956, Wang went to the USSR to help plan the long-term development of the peaceful utilization of atomic energy. Later, many Chinese students went to the Soviet Union to study the technology of accelerator and detector construction. Using this technology, the experimental group led by Wang Ganchang at Dubna analysed more than 40,000 photographs recording tens of thousands of nuclear interactions in a propane bubble chamber; the interactions were induced by high-energy mesons produced when the beam of a 10 GeV synchrophasotron struck a target. On March 9, 1960 they were the first to discover the anti-sigma minus hyperon particle (反西格马负超子).
formula_0
The discovery of this new unstable antiparticle, which decays in (1.18±0.07)·10−10 s into an antineutron and a negative pion, was announced in September of that year.
formula_1
Initially there was no doubt that this particle was an elementary particle. However, a few years later this hyperon, along with the proton, the neutron, the pion and other hadrons had all lost their status as elementary particles when they turned out to be complex particles consisting of quarks and antiquarks.
Wang remained affiliated with the Joint Institute for Nuclear Research even after returning to China, serving as its deputy director from 1958 until 1960.
Nuclear weapons.
After his return to China in 1958, Wang agreed to participate in the Chinese nuclear program to develop an atomic bomb, which meant giving up his research on elementary particles for the next 17 years. Within one year he had conducted more than one thousand detonation experiments at the foot of the Great Wall, in the Yanshan Mountains, Huailai county, Hebei province.
In 1963 he moved to a site within the Qinghai Plateau more than 3000 meters above sea level to continue polymerization detonation experiments. He then relocated to the Taklamakan desert in Xinjiang province to prepare for China's first nuclear test.
On October 16, 1964 the first atomic bomb test (code-named "596") was conducted successfully, making China a nuclear-weapons state.
Less than three years later, on June 17, 1967 the first hydrogen bomb test (code-named "Test No. 6") was conducted successfully. This shocked the world since China had not only managed to break the nuclear monopoly of the two superpowers, but had developed this technology even before some major Western powers such as France.
In spring 1969, Wang was one of several scientists who spoke with an Australian journalist about China's nuclear weapons programme.
That year, as part of his duties as vice-director of the Ninth Research Institute (二机部第九研究院), Wang received the task of conducting China's first underground nuclear test. Due to severe high altitude hypoxia brought on by the test location, he had to carry an oxygen tank while at work. The first underground test was successfully conducted on September 22, 1969. Wang also led the second and third Chinese underground nuclear tests.
Nuclear fusion and nuclear energy.
In 1964, the Shanghai Optical Machinery Institute (上海光学精密机械研究所) of the Chinese Academy of Sciences developed a high-power 10 MW output laser. In late December of the same year, Wang proposed to the State Council to use high-power laser beam targeting in order to achieve inertial confinement fusion, an idea simultaneously (but independently) developed by his Soviet counterpart Nikolai Gennadievich Basov. For this contribution, Wang is known as the founder of Chinese laser fusion technology.
Unfortunately, due to the political turmoil of the Cultural Revolution, which caused seven years of delay, Wang's leading position in this field was lost.
By the end of 1978, the inertial confinement fusion research group he had established at the Institute of Atomic Energy began the construction of a high-current accelerator. As an advocate of nuclear energy, Wang joined four other nuclear experts in October 1978 in proposing the development of nuclear power in China. In 1980, he promoted a plan to build 20 nuclear power plants at various locations including Qinshan, Zhejiang Province, Daya Bay, and Guangzhou.
Project 863.
On March 3, 1986, Wang Ganchang, Wang Daheng, Yang Jiachi and Chen Fangyun first proposed in a letter (《关于跟踪世界战略性高科技发展的建议》) to the Chinese government that China should research weapons utilizing lasers and microwaves, as well as electromagnetic pulse weapons. Wang's plan was adopted in November of that year under the code name Project 863 ("863计划"). As an ongoing program, it has produced several notable developments including the Loongson computer processor family (originally named "Godson"), the Tianhe supercomputers, and aspects of the Shenzhou spacecraft.
Awards.
Wang was the first recipient of the State Natural Science Award in 1982. He was also the first recipient of the Special Award of the State Science and Technology Progress Award (国家科技进步奖特等奖) in 1985.
In September 1999, Wang and Qian Sanqiang jointly received the special prize Two Bombs, One Satellite Meritorious Award for their contributions to the Chinese nuclear program. It was granted to them posthumously by the State Council, the Central Committee of the Communist Party, and the Central Military Commission.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi^- + C\\to \\bar\\Sigma^- + K^0 + \\bar K^0 + K^- + p^+ + \\pi^+ + \\pi^- + \\text{nucleus recoil}"
},
{
"math_id": 1,
"text": "\\bar\\Sigma^-\\to \\bar n^0 + \\pi^-"
}
] | https://en.wikipedia.org/wiki?curid=10942765 |
10945580 | Schwarz integral formula | In complex analysis, a branch of mathematics, the Schwarz integral formula, named after Hermann Schwarz, allows one to recover a holomorphic function, up to an imaginary constant, from the boundary values of its real part.
Unit disc.
Let "f" be a function holomorphic on the closed unit disc {"z" ∈ C | |"z"| ≤ 1}. Then
formula_0
for all |"z"| < 1.
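The unit-disc formula can be checked numerically (a sketch only; the holomorphic test function and the evaluation point below are arbitrary choices, not part of the statement). Writing the boundary point as exp(iψ) turns the contour integral into an average over ψ, which is approximated by a sum over equally spaced points:

import numpy as np

# Numerical check of the unit-disc formula above (illustrative sketch).
def f(z):
    return z**3 + 2j*z + (1 + 1j)        # holomorphic on the closed unit disc

def schwarz(z, n=4000):
    psi = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    zeta = np.exp(1j*psi)                 # points on the unit circle
    # with zeta = e^(i psi), d(zeta)/zeta = i d(psi), so the 1/(2*pi*i)
    # prefactor turns the contour integral into a plain average over psi
    integrand = (zeta + z)/(zeta - z) * f(zeta).real
    return integrand.mean() + 1j*f(0).imag

z = 0.3 - 0.4j
print(schwarz(z))   # agrees with f(z) to high accuracy
print(f(z))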
Upper half-plane.
Let "f" be a function holomorphic on the closed upper half-plane {"z" ∈ C | Im("z") ≥ 0} such that, for some "α" > 0, |"z""α" "f"("z")| is bounded on the closed upper half-plane. Then
formula_1
for all Im("z") > 0.
Note that, as compared to the version on the unit disc, this formula does not have an arbitrary constant added to the integral; this is because the additional decay condition makes the conditions for this formula more stringent.
Corollary of Poisson integral formula.
The formula follows from the Poisson integral formula applied to "u":
formula_2
By means of conformal maps, the formula can be generalized to any simply connected open set. | [
{
"math_id": 0,
"text": "f(z) = \\frac{1}{2\\pi i} \\oint_{|\\zeta| = 1} \\frac{\\zeta + z}{\\zeta - z} \\operatorname{Re}(f(\\zeta)) \\, \\frac{d\\zeta}{\\zeta}+ i\\operatorname{Im}(f(0))"
},
{
"math_id": 1,
"text": "f(z) = \\frac{1}{\\pi i} \\int_{-\\infty}^\\infty \\frac{u(\\zeta,0)}{\\zeta - z} \\, d\\zeta = \\frac{1}{\\pi i} \\int_{-\\infty}^\\infty \\frac{\\operatorname{Re}(f)(\\zeta+0i)}{\\zeta - z} \\, d\\zeta"
},
{
"math_id": 2,
"text": "u(z) = \\frac{1}{2\\pi}\\int_0^{2\\pi} u(e^{i\\psi}) \\operatorname{Re} {e^{i\\psi} + z \\over e^{i\\psi} - z} \\, d\\psi \\qquad \\text{for } |z| < 1."
}
] | https://en.wikipedia.org/wiki?curid=10945580 |
10947683 | Academic grading in Australia | Overview of academic grading in Australia
Academic grading systems in Australia include:
Tertiary institutions.
Australian universities issue results for each subject, based on the following gradings:
Note that the numbers above do not correspond to a percentile, but are notionally a percentage of the maximum raw marks available. Various tertiary institutions in Australia have policies on the allocations for each grade and scaling may occur to meet these policies. These policies may vary also according to the degree year (higher percentages for later years), but generally, only 2–5% of students who pass (that is, who achieve raw marks of 50 or more) may be awarded a High Distinction grade, and 50% or more of passing students are awarded a basic Pass grade. Raw marks for students who fail are not scaled and do not increase the allocations of higher grades. Some universities also have a Pass Conceded (PC) grade for marks that fall in the range of 45–49 inclusive.
A few universities do not issue numeric grades out of 100 for individual subjects, instead relying on qualitative descriptors. Griffith University and The University of Queensland issue results of High Distinction, Distinction, Credit, Pass, and Fail.
Grade point average.
Grade point averages are not generally used in Australia below a tertiary level. At universities, they are calculated according to a more complicated formula than in some other nations:
formula_0
where grade points are as follows:
A "conceded pass" is a pass for a course that has been awarded only after supplementary assessment has been undertaken by the student.
Where a course result is a Non-Graded Pass, the result will only be included if the GPA is less than 4, and will be assigned the grade point of 4, otherwise NGP results will be disregarded. The term "course unit values" is used to distinguish between courses which have different weightings, for example between a full year course and a single semester course.
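A minimal sketch of this calculation is shown below. The grade-point values used (HD = 7, D = 6, C = 5, P = 4, F = 0) are an assumed example of a common 7-point scale, not a universal standard, and actual mappings vary between institutions.

# Sketch of the GPA formula above with assumed 7-point grade values.
def gpa(results):
    # results: list of (grade_points, credit_points) pairs, one per unit
    weighted = sum(g * c for g, c in results)
    credits = sum(c for _, c in results)
    return weighted / credits

# two 6-credit-point units (HD and C) and one 12-credit-point unit (D)
print(gpa([(7, 6), (5, 6), (6, 12)]))   # 6.0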
Some other universities, such as the University of Melbourne, University of New South Wales, University of Sydney, and University of Wollongong use a Weighted Average Mark (WAM) for the same purpose as a GPA. The WAM is based on the raw percentage grades, or marks, achieved by the student, rather than grade points such as High Distinction or Distinction.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{GPA} = {{\\sum {(\\text{grade points} \\times \\text{credit points for unit})}} \\over {\\sum \\text{credit points}}}, "
}
] | https://en.wikipedia.org/wiki?curid=10947683 |
10948177 | Shamir's secret sharing | Cryptographic algorithm created by Adi Shamir
Shamir's secret sharing (SSS) is an efficient secret sharing algorithm for distributing private information (the "secret") among a group. The secret cannot be revealed unless a quorum of the group acts together to pool their knowledge. To achieve this, the secret is mathematically divided into parts (the "shares") from which the secret can be reassembled only when a sufficient number of shares are combined. SSS has the property of information-theoretic security, meaning that even if an attacker steals some shares, it is impossible for the attacker to reconstruct the secret unless they have stolen the quorum number of shares.
Shamir's secret sharing is used in some applications to share the access keys to a master secret.
High-level explanation.
SSS is used to secure a secret in a distributed form, most often to secure encryption keys. The secret is split into multiple shares, which individually do not give any information about the secret.
To reconstruct a secret secured by SSS, a number of shares is needed, called the "threshold". No information about the secret can be gained from any number of shares below the threshold (a property called perfect secrecy). In this sense, SSS is a generalisation of the one-time pad (which can be viewed as SSS with a two-share threshold and two shares in total).
Application example.
A company needs to secure their vault. If a "single" person knows the code to the vault, the code might be lost or unavailable when the vault needs to be opened. If there are "several" people who know the code, they may not trust each other to always act honestly.<br>
SSS can be used in this situation to generate shares of the vault's code which are distributed to authorized individuals in the company. The minimum threshold and number of shares given to each individual can be selected such that the vault is accessible only by (groups of) authorized individuals. If fewer shares than the threshold are presented, the vault cannot be opened.<br>
By accident, coercion or as an act of opposition, some individuals might present incorrect information for their shares. If the total of correct shares fails to meet the minimum threshold, the vault remains locked.
Use cases.
Shamir's secret sharing can be used to
Properties and weaknesses.
SSS has useful properties, but also weaknesses that mean it is unsuited to some uses.
Useful properties include:
Weaknesses include:
History.
Adi Shamir, an Israeli scientist, first formulated the scheme in 1979.
Mathematical principle.
The scheme exploits the Lagrange interpolation theorem, specifically that formula_0 points on the polynomial uniquely determine a polynomial of degree less than or equal to formula_1. For instance, 2 points are sufficient to define a line, 3 points are sufficient to define a parabola, 4 points to define a cubic curve and so forth.
Mathematical formulation.
Shamir's secret sharing is an ideal and perfect formula_2-"threshold scheme" based on polynomial interpolation over finite fields. In such a scheme, the aim is to divide a secret formula_3 (for example, the combination to a safe) into formula_4 pieces of data formula_5 (known as "shares") in such a way that:
Knowledge of any formula_0 or more shares formula_6 makes the secret formula_3 easily computable, that is, the complete secret can be reconstructed from any combination of formula_0 shares.
Knowledge of any formula_1 or fewer shares formula_6 leaves formula_3 completely undetermined, in the sense that every possible value of the secret remains equally likely, just as if formula_7 shares were known.
If formula_8, then all of the shares are needed to reconstruct the secret formula_3.
Assume that the secret formula_3 can be represented as an element formula_9 of a finite field formula_10 (where formula_11 is greater than the number formula_4 of shares being generated). Randomly choose formula_1 elements, formula_12, from formula_10 and construct the polynomial formula_13. Compute any formula_4 points on the curve, for instance set formula_14 to find points formula_15. Every participant is given a point (a non-zero input to the polynomial, and the corresponding output). Given any subset of formula_0 of these pairs, formula_9 can be obtained using interpolation, with one possible formula for doing so being formula_16, where the list of points on the polynomial is given as formula_0 pairs of the form formula_17. Note that formula_18 is equal to the constant coefficient of the polynomial formula_19.
Example calculation.
The following example illustrates the basic idea. Note, however, that calculations in the example are done using integer arithmetic rather than using finite field arithmetic to make the idea easier to understand. Therefore, the example below does not provide perfect secrecy and is not a proper example of Shamir's scheme. The next example will explain the problem.
Preparation.
Suppose that the secret to be shared is 1234 formula_20.
In this example, the secret will be split into 6 shares formula_21, where any subset of 3 shares formula_22 is sufficient to reconstruct the secret. formula_23 numbers are taken at random. Let them be 166 and 94.
This yields coefficients formula_24 where formula_9 is the secret.
The polynomial to produce secret shares (points) is therefore:
formula_25
Six points formula_26 from the polynomial are constructed as:
formula_27
Each participant in the scheme receives a different point (a pair of formula_28 and formula_19). Because formula_29 is used instead of formula_30 the points start from formula_31 and not formula_32. This is necessary because formula_18 is the secret.
Reconstruction.
In order to reconstruct the secret, any 3 points are sufficient.
Consider using the 3 points formula_33.
Computing the Lagrange basis polynomials:
formula_34
formula_35
formula_36
Using the formula for polynomial interpolation, formula_19 is:
formula_37
Recall that the secret is the free coefficient of the polynomial; this means that formula_38, and the secret has been recovered.
Computationally efficient approach.
Using polynomial interpolation to recover only the single value formula_39 of the source polynomial via Lagrange polynomials is not efficient, since unused constants are calculated.
Considering this, an optimized formula to use Lagrange polynomials to find formula_18 is defined as follows:
formula_40
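As an illustration only (not an implementation of the full scheme), the optimized formula can be evaluated directly on the three sample points used in the reconstruction above, using exact rational arithmetic:

from fractions import Fraction

# Sketch: evaluate the optimized formula for f(0) on the three sample
# points (2, 1942), (4, 3402), (5, 4414) from the example above.
points = [(2, 1942), (4, 3402), (5, 4414)]

secret = Fraction(0)
for j, (xj, yj) in enumerate(points):
    prod = Fraction(1)
    for m, (xm, _) in enumerate(points):
        if m != j:
            prod *= Fraction(xm, xm - xj)
    secret += yj * prod

print(secret)   # 1234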
Problem of using integer arithmetic.
Although the simplified version of the method demonstrated above, which uses integer arithmetic rather than finite field arithmetic, works, there is a security problem: Eve gains information about formula_3 with every formula_41 that she finds.
Suppose that she finds the 2 points formula_42 and formula_43. She still does not have formula_44 points, so in theory she should not have gained any more information about formula_3. But she could combine the information from the 2 points with the public information: formula_45. Doing so, Eve could perform the following algebra:
She knows that formula_46.
Filling formula_47 into this polynomial gives formula_48.
Filling formula_49 into it gives formula_50.
Subtracting the first equation from the second gives formula_51, and rearranging gives formula_52.
Substituting this back into the first equation gives formula_53.
From the last equation Eve learns, for example, that the secret formula_3 is even, so she has gained information about formula_3 despite holding fewer than formula_0 shares.
Solution using finite field arithmetic.
The above attack exploits constraints on the values that the polynomial may take by virtue of how it was constructed: the polynomial must have coefficients that are integers, and the polynomial must take an integer as value when evaluated at each of the coordinates used in the scheme. This reduces its possible values at unknown points, including the resultant secret, given fewer than formula_0 shares.
This problem can be remedied by using finite field arithmetic. A finite field always has size formula_54, where formula_55 is a prime and formula_56 is a positive integer. The size formula_11 of the field must satisfy formula_57 and must be greater than the number of possible values for the secret, though the latter condition may be circumvented by splitting the secret into smaller secret values and applying the scheme to each of these. In our example below, we use a prime field (i.e. "r" = 1). The figure shows a polynomial curve over a finite field.
In practice this is only a small change. The order "q" of the field (i.e. the number of values that it has) must be chosen to be greater than the number of participants and the number of values that the secret formula_58 may take. All calculations involving the polynomial must also be calculated over the field (mod "p" in our example, in which formula_59 is taken to be a prime) instead of over the integers. Both the choice of the field and the mapping of the secret to a value in this field are considered to be publicly known.
For this example, choose formula_60, so the polynomial becomes formula_61 which gives the points: formula_62
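The share points listed above can be reproduced by evaluating the stated polynomial modulo 1613; the few lines below are only a sketch of that evaluation.

# Sketch: reproduce the finite-field example by evaluating
# f(x) = 1234 + 166*x + 94*x**2 modulo p = 1613 at x = 1, ..., 6.
p = 1613
shares = [(x, (1234 + 166*x + 94*x**2) % p) for x in range(1, 7)]
print(shares)
# [(1, 1494), (2, 329), (3, 965), (4, 176), (5, 1188), (6, 775)]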
This time Eve doesn't gain any information when she finds a formula_30 (until she has formula_0 points).
Suppose again that Eve finds formula_63 and formula_64, and the public information is: formula_65. Attempting the previous attack, Eve can:
Use the fact that formula_66.
Fill in the first point to obtain formula_67, and fill in the second point (noting that 329 is congruent to 1942 modulo 1613) to obtain formula_68.
Subtract the two congruences to obtain formula_69, and rearrange to obtain formula_70.
There are formula_55 possible values for formula_71. She knows that formula_72 always decreases by 3, so if formula_55 were divisible by formula_73 she could conclude formula_74. However, formula_55 is prime, so she can not conclude this. Thus, using a finite field avoids this possible attack.
Also, even though Eve can conclude that formula_75, it does not provide any additional information, since the "wrapping around" behavior of modular arithmetic prevents the leakage of "S is even", unlike the example with integer arithmetic above.
Python code.
For purposes of keeping the code clearer, a prime field is used here. In practice, for convenience a scheme constructed using a smaller binary field may be separately applied to small substrings of bits of the secret (e.g. GF(256) for byte-wise application), without loss of security. The strict condition that the size of the field must be larger than the number of shares must still be respected (e.g., if the number of shares could exceed 255, the field GF(256) might be replaced by say GF(65536)).
"""
The following Python implementation of Shamir's secret sharing is
released into the Public Domain under the terms of CC0 and OWFa:
https://creativecommons.org/publicdomain/zero/1.0/
http://www.openwebfoundation.org/legal/the-owf-1-0-agreements/owfa-1-0

See the bottom few lines for usage. Tested on Python 2 and 3.
"""

from __future__ import division
from __future__ import print_function

import random
import functools

# 12th Mersenne prime; the field must be larger than the secret and the number of shares
_PRIME = 2 ** 127 - 1

# cryptographically secure source of random coefficients in [0, upper bound]
_RINT = functools.partial(random.SystemRandom().randint, 0)


def _eval_at(poly, x, prime):
    """Evaluates polynomial (coefficient tuple) at x, used to generate a
    shamir pool in make_random_shares below.
    """
    accum = 0
    for coeff in reversed(poly):
        accum *= x
        accum += coeff
        accum %= prime
    return accum


def make_random_shares(secret, minimum, shares, prime=_PRIME):
    """
    Generates a random shamir pool for a given secret, returns share points.
    """
    if minimum > shares:
        raise ValueError("Pool secret would be irrecoverable.")
    poly = [secret] + [_RINT(prime - 1) for i in range(minimum - 1)]
    points = [(i, _eval_at(poly, i, prime))
              for i in range(1, shares + 1)]
    return points


def _extended_gcd(a, b):
    """
    Division in integers modulus p means finding the inverse of the
    denominator modulo p and then multiplying the numerator by this
    inverse (Note: inverse of A is B such that A*B % p == 1). This can
    be computed via the extended Euclidean algorithm
    http://en.wikipedia.org/wiki/Modular_multiplicative_inverse#Computation
    """
    x = 0
    last_x = 1
    y = 1
    last_y = 0
    while b != 0:
        quot = a // b
        a, b = b, a % b
        x, last_x = last_x - quot * x, x
        y, last_y = last_y - quot * y, y
    return last_x, last_y


def _divmod(num, den, p):
    """Compute num / den modulo prime p

    To explain this, the result will be such that:
    den * _divmod(num, den, p) % p == num
    """
    inv, _ = _extended_gcd(den, p)
    return num * inv


def _lagrange_interpolate(x, x_s, y_s, p):
    """
    Find the y-value for the given x, given n (x, y) points;
    k points will define a polynomial of up to kth order.
    """
    k = len(x_s)
    assert k == len(set(x_s)), "points must be distinct"

    def PI(vals):  # upper-case PI -- product of inputs
        accum = 1
        for v in vals:
            accum *= v
        return accum

    nums = []  # avoid inexact division
    dens = []
    for i in range(k):
        others = list(x_s)
        cur = others.pop(i)
        nums.append(PI(x - o for o in others))
        dens.append(PI(cur - o for o in others))
    den = PI(dens)
    num = sum([_divmod(nums[i] * den * y_s[i] % p, dens[i], p)
               for i in range(k)])
    return (_divmod(num, den, p) + p) % p


def recover_secret(shares, prime=_PRIME):
    """
    Recover the secret from share points
    (points (x,y) on the polynomial).
    """
    if len(shares) < 3:
        raise ValueError("need at least three shares")
    x_s, y_s = zip(*shares)
    return _lagrange_interpolate(0, x_s, y_s, prime)


def main():
    """Main function"""
    secret = 1234
    shares = make_random_shares(secret, minimum=3, shares=6)

    print('Secret: ', secret)
    print('Shares:')
    if shares:
        for share in shares:
            print('  ', share)

    print('Secret recovered from minimum subset of shares: ',
          recover_secret(shares[:3]))
    print('Secret recovered from a different minimum subset of shares: ',
          recover_secret(shares[-3:]))


if __name__ == '__main__':
    main()
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "k-1"
},
{
"math_id": 2,
"text": "\\left(k,n\\right)"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "S_1,\\ldots,S_n"
},
{
"math_id": 6,
"text": "S_i"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "n=k"
},
{
"math_id": 9,
"text": "a_0"
},
{
"math_id": 10,
"text": "\\mathrm{GF}(q)"
},
{
"math_id": 11,
"text": "q"
},
{
"math_id": 12,
"text": "a_1,\\cdots,a_{k-1}"
},
{
"math_id": 13,
"text": "f\\left(x\\right)=a_0+a_1x+a_2x^2+a_3x^3+\\cdots+a_{k-1}x^{k-1}"
},
{
"math_id": 14,
"text": "i=1,\\ldots,n"
},
{
"math_id": 15,
"text": "\\left(i,f\\left(i\\right)\\right)"
},
{
"math_id": 16,
"text": "a_0 = f(0) = \\sum_{j=0}^{k-1} y_j \\prod_{\\begin{smallmatrix} m\\,=\\,0 \\\\ m\\,\\ne\\,j \\end{smallmatrix}}^{k-1} \\frac{x_m}{x_m - x_j} "
},
{
"math_id": 17,
"text": "(x_i, y_i)"
},
{
"math_id": 18,
"text": "f(0)"
},
{
"math_id": 19,
"text": "f(x)"
},
{
"math_id": 20,
"text": "(S=1234)"
},
{
"math_id": 21,
"text": "(n=6)"
},
{
"math_id": 22,
"text": "(k=3)"
},
{
"math_id": 23,
"text": "k-1=2"
},
{
"math_id": 24,
"text": "(a_0=1234;a_1=166;a_2=94),"
},
{
"math_id": 25,
"text": "f(x)=1234+166x+94x^2"
},
{
"math_id": 26,
"text": "D_{x-1}=(x, f(x))"
},
{
"math_id": 27,
"text": "D_0=(1, 1494);D_1=(2, 1942);D_2=(3, 2578);D_3=(4, 3402); D_4= (5, 4414);D_5=(6, 5614)"
},
{
"math_id": 28,
"text": "x"
},
{
"math_id": 29,
"text": "D_{x-1}"
},
{
"math_id": 30,
"text": "D_x"
},
{
"math_id": 31,
"text": "(1, f(1))"
},
{
"math_id": 32,
"text": "(0, f(0))"
},
{
"math_id": 33,
"text": "\\left(x_0,y_0\\right)=\\left(2,1942\\right);\\left(x_1,y_1\\right)=\\left(4,3402\\right);\\left(x_2,y_2\\right)=\\left(5,4414\\right)"
},
{
"math_id": 34,
"text": "\\ell_0(x)=\\frac{x-x_1}{x_0-x_1}\\cdot\\frac{x-x_2}{x_0-x_2}=\\frac{x-4}{2-4}\\cdot\\frac{x-5}{2-5}=\\frac{1}{6}x^2-\\frac{3}{2}x+\\frac{10}{3}"
},
{
"math_id": 35,
"text": "\\ell_1(x)=\\frac{x-x_0}{x_1-x_0}\\cdot\\frac{x-x_2}{x_1-x_2}=\\frac{x-2}{4-2}\\cdot\\frac{x-5}{4-5}=-\\frac{1}{2}x^2+\\frac{7}{2}x-5"
},
{
"math_id": 36,
"text": "\\ell_2(x)=\\frac{x-x_0}{x_2-x_0}\\cdot\\frac{x-x_1}{x_2-x_1}=\\frac{x-2}{5-2}\\cdot\\frac{x-4}{5-4}=\\frac{1}{3}x^2-2x+\\frac{8}{3}"
},
{
"math_id": 37,
"text": "\n\\begin{align}\nf(x) & =\\sum_{j=0}^2 y_j\\cdot\\ell_j(x) \\\\[6pt]\n& =y_0\\ell_0(x)+y_1\\ell_1(x)+y_2\\ell_2(x) \\\\[6pt]\n& =1942\\left(\\frac{1}{6}x^2-\\frac{3}{2}x+\\frac{10}{3}\\right) + 3402\\left(-\\frac{1}{2}x^2+\\frac{7}{2}x-5\\right) + 4414\\left(\\frac{1}{3}x^2-2x+\\frac{8}{3}\\right) \\\\[6pt]\n& =1234+166x+94x^2\n\\end{align}\n"
},
{
"math_id": 38,
"text": "S=1234"
},
{
"math_id": 39,
"text": "S=f(0)"
},
{
"math_id": 40,
"text": "f(0) = \\sum_{j=0}^{k-1} y_j \\prod_{\\begin{smallmatrix} m\\,=\\,0 \\\\ m\\,\\ne\\,j \\end{smallmatrix}}^{k-1} \\frac{x_m}{x_m - x_j} "
},
{
"math_id": 41,
"text": "D_i"
},
{
"math_id": 42,
"text": "D_0=(1,1494)"
},
{
"math_id": 43,
"text": "D_1=(2,1942)"
},
{
"math_id": 44,
"text": "k=3"
},
{
"math_id": 45,
"text": "n=6, k=3, f(x)=a_0+a_1x+\\cdots+a_{k-1}x^{k-1}, a_0=S, a_i\\in\\mathbb{Z}"
},
{
"math_id": 46,
"text": "k: f(x)=S+a_1x+\\cdots+a_{k-1}x^{k-1}\\Rightarrow{}f(x)=S+a_1x+a_2x^2"
},
{
"math_id": 47,
"text": "D_0"
},
{
"math_id": 48,
"text": "f(x): 1494=S+a_{1}1+a_{2}1^2\\Rightarrow{}1494=S+a_1+a_2"
},
{
"math_id": 49,
"text": "D_1"
},
{
"math_id": 50,
"text": "f(x): 1942=S+a_{1}2+a_{2}2^2\\Rightarrow{}1942=S+2a_1+4a_2"
},
{
"math_id": 51,
"text": "(1942-1494)=(S-S)+(2a_1-a_1)+(4a_2-a_2)\\Rightarrow{}448=a_1+3a_2"
},
{
"math_id": 52,
"text": "a_1=448-3a_2"
},
{
"math_id": 53,
"text": "1494=S+(448-3a_2)+a_2\\Rightarrow{}S=1046+2a_2"
},
{
"math_id": 54,
"text": "q = p^r"
},
{
"math_id": 55,
"text": "p"
},
{
"math_id": 56,
"text": "r"
},
{
"math_id": 57,
"text": "q>n"
},
{
"math_id": 58,
"text": "a_0=S"
},
{
"math_id": 59,
"text": "p = q"
},
{
"math_id": 60,
"text": "p=1613"
},
{
"math_id": 61,
"text": "f(x)=1234+166x+94x^2\\bmod{1613}"
},
{
"math_id": 62,
"text": "(1,1494);(2,329);(3,965);(4,176);(5,1188);(6,775)"
},
{
"math_id": 63,
"text": "D_0=\\left(1,1494\\right)"
},
{
"math_id": 64,
"text": "D_1=\\left(2,329\\right)"
},
{
"math_id": 65,
"text": "n=6, k=3, p=1613, f(x)=a_0+a_1x+\\dots+a_{k-1}x^{k-1}\\mod{p}, a_0=S, a_i\\in\\mathbb{N}"
},
{
"math_id": 66,
"text": "f(x)=S+a_1x+\\dots+a_{3-1}x^{3-1}\\mod1613"
},
{
"math_id": 67,
"text": "f(x): 1494\\equiv S+a_1 1+a_2 1^2 \\pmod{1613}\\Rightarrow {} 1494\\equiv S+a_1+a_2 \\pmod{1613}"
},
{
"math_id": 68,
"text": "f(x): 1942\\equiv S+a_{1}2+a_{2}2^2 \\pmod{1613}\\Rightarrow{}1942\\equiv S+2a_1+4a_2 \\pmod{1613}"
},
{
"math_id": 69,
"text": "(1942-1494)\\equiv (S-S)+(2a_1-a_1)+(4a_2-a_2) \\pmod{1613}\\Rightarrow{}448 \\equiv a_1+3a_2 \\pmod{1613}"
},
{
"math_id": 70,
"text": "a_1\\equiv 448-3a_2 \\pmod{1613}"
},
{
"math_id": 71,
"text": "a_1"
},
{
"math_id": 72,
"text": "[448,445,442,\\ldots]"
},
{
"math_id": 73,
"text": "3"
},
{
"math_id": 74,
"text": "a_1\\in[1, 4, 7, \\ldots]"
},
{
"math_id": 75,
"text": "S\\equiv 1046+2a_2 \\pmod{1613}"
}
] | https://en.wikipedia.org/wiki?curid=10948177 |
1094867 | Azimuthal equidistant projection | Azimuthal equidistant map projection
The azimuthal equidistant projection is an azimuthal map projection. It has the useful properties that all points on the map are at proportionally correct distances from the center point, and that all points on the map are at the correct azimuth (direction) from the center point. A useful application for this type of projection is a polar projection which shows all meridians (lines of longitude) as straight, with distances from the pole represented correctly.
The flag of the United Nations contains an example of a polar azimuthal equidistant projection. The polar azimuthal equidistant projection has also been adopted by 21st century Flat Earthers as a map of the Flat Earth, particularly due to its use in the UN flag and its depiction of Antarctica as a ring around the edge of the Earth.
History.
While it may have been used by ancient Egyptians for star maps in some holy books, the earliest text describing the azimuthal equidistant projection is an 11th-century work by al-Biruni.
An example of this system is the world map by ‛Ali b. Ahmad al-Sharafi of Sfax in 1571.
The projection appears in many Renaissance maps, and Gerardus Mercator used it for an inset of the north polar regions in sheet 13 and legend 6 of his well-known 1569 map. In France and Russia this projection is named "Postel projection" after Guillaume Postel, who used it for a map in 1581. Many modern star chart planispheres use the polar azimuthal equidistant projection.
Mathematical definition.
A point on the globe is chosen as "the center" in the sense that mapped distances and azimuth directions from that point to any other point will be correct. That point, ("φ"0, "λ"0), will project to the center of a circular projection, with "φ" referring to latitude and "λ" referring to longitude. All points along a given azimuth will project along a straight line from the center, and the angle "θ" that the line subtends from the vertical is the azimuth angle. The distance from the center point to another projected point "ρ" is the arc length along a great circle between them on the globe. By this description, then, the point on the plane specified by ("θ","ρ") will be projected to Cartesian coordinates:
formula_0
The relationship between the projected coordinates ("θ","ρ") of a point and its latitude and longitude coordinates ("φ", "λ") on the globe is given by the equations:
formula_1
When the center point is the north pole, "φ"0 equals formula_2 and "λ"0 is arbitrary, so it is most convenient to assign it the value of 0. This assignment significantly simplifies the equations for "ρ" and "θ" to:
formula_3
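For illustration, the forward mapping defined by these equations can be written in a few lines of code. The sketch below assumes a spherical Earth of radius 6371 km, and the sample points (London and Sydney) are arbitrary choices used only to show the call.

from math import radians, sin, cos, acos, atan2

R = 6371.0  # assumed mean Earth radius in km (spherical model)

def azimuthal_equidistant(lat, lon, lat0, lon0):
    # all angles in degrees; (lat0, lon0) is the center; returns (x, y) in km
    phi, lam = radians(lat), radians(lon)
    phi0, lam0 = radians(lat0), radians(lon0)
    c = acos(sin(phi0)*sin(phi) + cos(phi0)*cos(phi)*cos(lam - lam0))
    rho = R * c                                # great-circle distance to center
    theta = atan2(cos(phi)*sin(lam - lam0),
                  cos(phi0)*sin(phi) - sin(phi0)*cos(phi)*cos(lam - lam0))
    return rho*sin(theta), -rho*cos(theta)

# Sydney as seen from a map centered on London; the radial coordinate of
# the projected point is the great-circle distance between the two cities.
x, y = azimuthal_equidistant(-33.9, 151.2, 51.5, -0.1)
print((x**2 + y**2) ** 0.5)   # roughly 17000 km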
Limitation.
With the circumference of the Earth being approximately 40,000 km, the maximum distance that can be displayed on an azimuthal equidistant projection map is half the circumference, or about 20,000 km. For distances less than about 10,000 km distortions are minimal. For distances between 10,000 km and 15,000 km the distortions are moderate. Distances greater than 15,000 km are severely distorted.
If the azimuthal equidistant projection map is centered about a point whose antipodal point lies on land and the map is extended to the maximum distance of about 20,000 km, the antipode point smears into a large circle. This is shown in the example of two maps centered about Los Angeles, and Taipei. The antipode for Los Angeles is in the south Indian Ocean, hence there is not much significant distortion of land masses for the Los Angeles centered map except for East Africa and Madagascar. On the other hand, Taipei's antipode is near the Argentina–Paraguay border, causing the Taipei centered map to severely distort South America.
Applications.
Azimuthal equidistant projection maps can be useful in terrestrial point-to-point communication. This type of projection allows the operator to easily determine in which direction to point a directional antenna: the operator simply finds on the map the location of the target transmitter or receiver (i.e. the other antenna being communicated with) and uses the map to read off the azimuth angle, then points the antenna with an electric rotator. The map can also be used in one-way communication, for example to identify the direction of a distant radio station whose signals the operator wishes to receive. In order for the map to be useful, it should be centered as close as possible to the location of the operator's antenna.
Azimuthal equidistant projection maps can also be useful to show ranges of ballistic missiles, as demonstrated by the map centered on North Korea showing the country's missile range.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x = \\rho \\sin \\theta, \\qquad y = -\\rho \\cos \\theta"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\cos \\frac{\\rho}{R} &= \\sin \\varphi_0 \\sin \\varphi + \\cos \\varphi_0 \\cos \\varphi \\cos \\left(\\lambda - \\lambda_0\\right) \\\\\n\\tan \\theta &= \\frac{\\cos \\varphi \\sin \\left(\\lambda - \\lambda_0\\right)}{\\cos \\varphi_0 \\sin \\varphi - \\sin \\varphi_0 \\cos \\varphi \\cos \\left(\\lambda - \\lambda_0\\right)}\n\\end{align}\n"
},
{
"math_id": 2,
"text": "\\pi/2"
},
{
"math_id": 3,
"text": "\\rho = R \\left( \\frac{\\pi}{2} - \\varphi \\right), \\qquad \\theta = \\lambda~~"
}
] | https://en.wikipedia.org/wiki?curid=1094867 |
10949 | Four color theorem | Statement in mathematics
In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. "Adjacent" means that two regions share a common boundary of non-zero length (i.e., not merely a corner where three or more regions meet). It was the first major theorem to be proved using a computer. Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubts remain.
The theorem is a stronger version of the five color theorem, which can be shown using a significantly simpler argument. Although the weaker five color theorem was proven already in the 1800s, the four color theorem resisted until 1976 when it was proven by Kenneth Appel and Wolfgang Haken. This came after many false proofs and mistaken counterexamples in the preceding decades.
The Appel–Haken proof proceeds by analyzing a very large number of reducible configurations. This was improved upon in 1997 by Robertson, Sanders, Seymour, and Thomas, who managed to decrease the number of such configurations to 633 – still an extremely long case analysis. In 2005, the theorem was verified by Georges Gonthier using general-purpose theorem-proving software.
Precise formulation of the theorem.
In graph-theoretic terms, the theorem states that for every loopless planar graph formula_0, its chromatic number is formula_1.
The intuitive statement of the four color theorem – "given any separation of a plane into contiguous regions, the regions can be colored using at most four colors so that no two adjacent regions have the same color" – needs to be interpreted appropriately to be correct.
First, regions are adjacent if they share a boundary segment; two regions that share only isolated boundary points are not considered adjacent. (Otherwise, a map in a shape of a pie chart would make an arbitrarily large number of regions 'adjacent' to each other at a common corner, and require arbitrarily large number of colors as a result.) Second, bizarre regions, such as those with finite area but infinitely long perimeter, are not allowed; maps with such regions can require more than four colors. (To be safe, we can restrict to regions whose boundaries consist of finitely many straight line segments. It is allowed that a region has enclaves, that is it entirely surrounds one or more other regions.) Note that the notion of "contiguous region" (technically: connected open subset of the plane) is not the same as that of a "country" on regular maps, since countries need not be contiguous (they may have exclaves; e.g., the Cabinda Province as part of Angola, Nakhchivan as part of Azerbaijan, Kaliningrad as part of Russia, France with its overseas territories, and Alaska as part of the United States are not contiguous). If we required the entire territory of a country to receive the same color, then four colors are not always sufficient. For instance, consider a simplified map:
In this map, the two regions labeled "A" belong to the same country. If we wanted those regions to receive the same color, then five colors would be required, since the two "A" regions together are adjacent to four other regions, each of which is adjacent to all the others.
A simpler statement of the theorem uses graph theory. The set of regions of a map can be represented more abstractly as an undirected graph that has a vertex for each region and an edge for every pair of regions that share a boundary segment. This graph is planar: it can be drawn in the plane without crossings by placing each vertex at an arbitrarily chosen location within the region to which it corresponds, and by drawing the edges as curves without crossings that lead from one region's vertex, across a shared boundary segment, to an adjacent region's vertex. Conversely any planar graph can be formed from a map in this way. In graph-theoretic terminology, the four-color theorem states that the vertices of every planar graph can be colored with at most four colors so that no two adjacent vertices receive the same color, or for short: every planar graph is four-colorable.
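The graph formulation also lends itself to experiment. The sketch below four-colors a small planar graph by simple backtracking; the wheel graph used as input is an arbitrary example, and brute-force search is shown only for illustration (it is not the efficient algorithm discussed later in this article).

# Sketch: four-color a small planar graph by backtracking. The adjacency
# list is an arbitrary example (a wheel: hub 0 joined to the 5-cycle 1..5).
def four_color(adj, colors=4):
    assignment = {}
    vertices = list(adj)

    def place(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(colors):
            if all(assignment.get(u) != c for u in adj[v]):
                assignment[v] = c
                if place(i + 1):
                    return True
                del assignment[v]
        return False

    return assignment if place(0) else None

wheel = {0: [1, 2, 3, 4, 5], 1: [0, 2, 5], 2: [0, 1, 3],
         3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4, 1]}
print(four_color(wheel))   # {0: 0, 1: 1, 2: 2, 3: 1, 4: 2, 5: 3}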
History.
Early proof attempts.
As far as is known, the conjecture was first proposed on October 23, 1852, when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. At the time, Guthrie's brother, Frederick, was a student of Augustus De Morgan (the former advisor of Francis) at University College London. Francis inquired with Frederick regarding it, who then took it to De Morgan (Francis Guthrie graduated later in 1852, and later became a professor of mathematics in South Africa). According to De Morgan:
A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact—and do not yet. He says that if a figure be any how divided and the compartments differently colored so that figures with any portion of common boundary "line" are differently colored—four colors may be wanted but not more—the following is his case in which four colors "are" wanted. Query cannot a necessity for five or more be invented…
"F.G.", perhaps one of the two Guthries, published the question in "The Athenaeum" in 1854, and De Morgan posed the question again in the same magazine in 1860. Another early published reference by Arthur Cayley (1879) in turn credits the conjecture to De Morgan.
There were several early failed attempts at proving the theorem. De Morgan believed that it followed from a simple fact about four regions, though he didn't believe that fact could be derived from more elementary facts.
This arises in the following way. We never need four colours in a neighborhood unless there be four counties, each of which has boundary lines in common with each of the other three. Such a thing cannot happen with four areas unless one or more of them be inclosed by the rest; and the colour used for the inclosed county is thus set free to go on with. Now this principle, that four areas cannot each have common boundary with all the other three without inclosure, is not, we fully believe, capable of demonstration upon anything more evident and more elementary; it must stand as a postulate.
One proposed proof was given by Alfred Kempe in 1879, which was widely acclaimed; another was given by Peter Guthrie Tait in 1880. It was not until 1890 that Kempe's proof was shown incorrect by Percy Heawood, and in 1891, Tait's proof was shown incorrect by Julius Petersen—each false proof stood unchallenged for 11 years.
In 1890, in addition to exposing the flaw in Kempe's proof, Heawood proved the five color theorem and generalized the four color conjecture to surfaces of arbitrary genus.
Tait, in 1880, showed that the four color theorem is equivalent to the statement that a certain type of graph (called a snark in modern terminology) must be non-planar.
In 1943, Hugo Hadwiger formulated the Hadwiger conjecture, a far-reaching generalization of the four-color problem that still remains unsolved.
Proof by computer.
During the 1960s and 1970s, German mathematician Heinrich Heesch developed methods of using computers to search for a proof. Notably he was the first to use discharging for proving the theorem, which turned out to be important in the unavoidability portion of the subsequent Appel–Haken proof. He also expanded on the concept of reducibility and, along with Ken Durre, developed a computer test for it. Unfortunately, at this critical juncture, he was unable to procure the necessary supercomputer time to continue his work.
Others took up his methods, including his computer-assisted approach. While other teams of mathematicians were racing to complete proofs, Kenneth Appel and Wolfgang Haken at the University of Illinois announced, on June 21, 1976, that they had proved the theorem. They were assisted in some algorithmic work by John A. Koch.
If the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors. The proof showed that such a minimal counterexample cannot exist, through the use of two technical concepts:
Using mathematical rules and procedures based on properties of reducible configurations, Appel and Haken found an unavoidable set of reducible configurations, thus proving that a minimal counterexample to the four-color conjecture could not exist. Their proof reduced the infinitude of possible maps to 1,834 reducible configurations (later reduced to 1,482) which had to be checked one by one by computer and took over a thousand hours. This reducibility part of the work was independently double checked with different programs and computers. However, the unavoidability part of the proof was verified in over 400 pages of microfiche, which had to be checked by hand with the assistance of Haken's daughter Dorothea Blostein.
Appel and Haken's announcement was widely reported by the news media around the world, and the math department at the University of Illinois used a postmark stating "Four colors suffice." At the same time the unusual nature of the proof—it was the first major theorem to be proved with extensive computer assistance—and the complexity of the human-verifiable portion aroused considerable controversy.
In the early 1980s, rumors spread of a flaw in the Appel–Haken proof. Ulrich Schmidt at RWTH Aachen had examined Appel and Haken's proof for his master's thesis, published in 1981. He had checked about 40% of the unavoidability portion and found a significant error in the discharging procedure. In 1986, Appel and Haken were asked by the editor of "Mathematical Intelligencer" to write an article addressing the rumors of flaws in their proof. They replied that the rumors were due to a "misinterpretation of [Schmidt's] results" and obliged with a detailed article. Their magnum opus, "Every Planar Map is Four-Colorable", a book claiming a complete and detailed proof (with a microfiche supplement of over 400 pages), appeared in 1989; it explained and corrected the error discovered by Schmidt as well as several further errors found by others.
Simplification and verification.
Since the proving of the theorem, a new approach has led to both a shorter proof and a more efficient algorithm for 4-coloring maps. In 1996, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas created a quadratic-time algorithm (requiring only O("n"2) time, where "n" is the number of vertices), improving on a quartic-time algorithm based on Appel and Haken's proof. The new proof, based on the same ideas, is similar to Appel and Haken's but more efficient because it reduces the complexity of the problem and requires checking only 633 reducible configurations. Both the unavoidability and reducibility parts of this new proof must be executed by a computer and are impractical to check by hand. In 2001, the same authors announced an alternative proof, by proving the snark conjecture. This proof remains unpublished, however.
In 2005, Benjamin Werner and Georges Gonthier formalized a proof of the theorem inside the Coq proof assistant. This removed the need to trust the various computer programs used to verify particular cases; it is only necessary to trust the Coq kernel.
Summary of proof ideas.
The following discussion is a summary based on the introduction to "Every Planar Map is Four Colorable" . Although flawed, Kempe's original purported proof of the four color theorem provided some of the basic tools later used to prove it. The explanation here is reworded in terms of the modern graph theory formulation above.
Kempe's argument goes as follows. First, if planar regions separated by the graph are not "triangulated" (i.e., do not have exactly three edges in their boundaries), we can add edges without introducing new vertices in order to make every region triangular, including the unbounded outer region. If this triangulated graph is colorable using four colors or fewer, so is the original graph since the same coloring is valid if edges are removed. So it suffices to prove the four color theorem for triangulated graphs to prove it for all planar graphs, and without loss of generality we assume the graph is triangulated.
Suppose "v", "e", and "f" are the number of vertices, edges, and regions (faces). Since each region is triangular and each edge is shared by two regions, we have that 2"e" = 3"f". This together with Euler's formula, "v" − "e" + "f" = 2, can be used to show that 6"v" − 2"e" = 12. Now, the "degree" of a vertex is the number of edges abutting it. If "v""n" is the number of vertices of degree "n" and "D" is the maximum degree of any vertex,
formula_2
But since 12 > 0 and 6 − "i" ≤ 0 for all "i" ≥ 6, this demonstrates that there is at least one vertex of degree 5 or less.
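This identity can be sanity-checked on any concrete triangulation; the sketch below uses the octahedron graph (an arbitrary example of a planar triangulation in which every vertex has degree 4).

# Sketch: check that 6v - 2e = sum over vertices of (6 - degree) = 12
# holds for the octahedron, a planar triangulation with 6 vertices of degree 4.
octahedron = {0: [2, 3, 4, 5], 1: [2, 3, 4, 5], 2: [0, 1, 4, 5],
              3: [0, 1, 4, 5], 4: [0, 1, 2, 3], 5: [0, 1, 2, 3]}
v = len(octahedron)
e = sum(len(nbrs) for nbrs in octahedron.values()) // 2
print(6 * v - 2 * e)                                        # 12
print(sum(6 - len(nbrs) for nbrs in octahedron.values()))   # 12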
If there is a graph requiring 5 colors, then there is a "minimal" such graph, where removing any vertex makes it four-colorable. Call this graph "G". Then "G" cannot have a vertex of degree 3 or less, because if "d"("v") ≤ 3, we can remove "v" from "G", four-color the smaller graph, then add back "v" and extend the four-coloring to it by choosing a color different from its neighbors.
Kempe also showed correctly that "G" can have no vertex of degree 4. As before we remove the vertex "v" and four-color the remaining vertices. If all four neighbors of "v" are different colors, say red, green, blue, and yellow in clockwise order, we look for an alternating path of vertices colored red and blue joining the red and blue neighbors. Such a path is called a Kempe chain. There may be a Kempe chain joining the red and blue neighbors, and there may be a Kempe chain joining the green and yellow neighbors, but not both, since these two paths would necessarily intersect, and the vertex where they intersect cannot be colored. Suppose it is the red and blue neighbors that are not chained together. Explore all vertices attached to the red neighbor by red-blue alternating paths, and then reverse the colors red and blue on all these vertices. The result is still a valid four-coloring, and "v" can now be added back and colored red.
This leaves only the case where "G" has a vertex of degree 5; but Kempe's argument was flawed for this case. Heawood noticed Kempe's mistake and also observed that if one was satisfied with proving only five colors are needed, one could run through the above argument (changing only that the minimal counterexample requires 6 colors) and use Kempe chains in the degree 5 situation to prove the five color theorem.
In any case, to deal with this degree 5 vertex case requires a more complicated notion than removing a vertex. Rather, the form of the argument is generalized to considering "configurations", which are connected subgraphs of "G" with the degree of each vertex (in G) specified. For example, the case described in the degree 4 vertex situation is the configuration consisting of a single vertex labelled as having degree 4 in "G". As above, it suffices to demonstrate that if the configuration is removed and the remaining graph four-colored, then the coloring can be modified in such a way that when the configuration is re-added, the four-coloring can be extended to it as well. A configuration for which this is possible is called a "reducible configuration". If at least one of a set of configurations must occur somewhere in G, that set is called "unavoidable". The argument above began by giving an unavoidable set of five configurations (a single vertex with degree 1, a single vertex with degree 2, ..., a single vertex with degree 5) and then proceeded to show that the first 4 are reducible; to exhibit an unavoidable set of configurations where every configuration in the set is reducible would prove the theorem.
Because "G" is triangular, the degree of each vertex in a configuration is known, and all edges internal to the configuration are known, the number of vertices in "G" adjacent to a given configuration is fixed, and they are joined in a cycle. These vertices form the "ring" of the configuration; a configuration with "k" vertices in its ring is a "k"-ring configuration, and the configuration together with its ring is called the "ringed configuration". As in the simple cases above, one may enumerate all distinct four-colorings of the ring; any coloring that can be extended without modification to a coloring of the configuration is called "initially good". For example, the single-vertex configuration above with 3 or fewer neighbors were initially good. In general, the surrounding graph must be systematically recolored to turn the ring's coloring into a good one, as was done in the case above where there were 4 neighbors; for a general configuration with a larger ring, this requires more complex techniques. Because of the large number of distinct four-colorings of the ring, this is the primary step requiring computer assistance.
Finally, it remains to identify an unavoidable set of configurations amenable to reduction by this procedure. The primary method used to discover such a set is the method of discharging. The intuitive idea underlying discharging is to consider the planar graph as an electrical network. Initially positive and negative "electrical charge" is distributed amongst the vertices so that the total is positive.
Recall the formula above:
formula_3
Each vertex is assigned an initial charge of 6-deg("v"). Then one "flows" the charge by systematically redistributing the charge from a vertex to its neighboring vertices according to a set of rules, the "discharging procedure". Since charge is preserved, some vertices still have positive charge. The rules restrict the possibilities for configurations of positively charged vertices, so enumerating all such possible configurations gives an unavoidable set.
As long as some member of the unavoidable set is not reducible, the discharging procedure is modified to eliminate it (while introducing other configurations). Appel and Haken's final discharging procedure was extremely complex and, together with a description of the resulting unavoidable configuration set, filled a 400-page volume, but the configurations it generated could be checked mechanically to be reducible. Verifying the volume describing the unavoidable configuration set itself was done by peer review over a period of several years.
A technical detail not discussed here but required to complete the proof is "immersion reducibility".
False disproofs.
The four color theorem has been notorious for attracting a large number of false proofs and disproofs in its long history. At first, "The New York Times" refused, as a matter of policy, to report on the Appel–Haken proof, fearing that the proof would be shown false like the ones before it. Some alleged proofs, like Kempe's and Tait's mentioned above, stood under public scrutiny for over a decade before they were refuted. But many more, authored by amateurs, were never published at all.
Generally, the simplest, though invalid, counterexamples attempt to create one region which touches all other regions. This forces the remaining regions to be colored with only three colors. Because the four color theorem is true, this is always possible; however, because the person drawing the map is focused on the one large region, they fail to notice that the remaining regions can in fact be colored with three colors.
This trick can be generalized: there are many maps where if the colors of some regions are selected beforehand, it becomes impossible to color the remaining regions without exceeding four colors. A casual verifier of the counterexample may not think to change the colors of these regions, so that the counterexample will appear as though it is valid.
Perhaps one effect underlying this common misconception is the fact that the color restriction is not transitive: a region only has to be colored differently from regions it touches directly, not regions touching regions that it touches. If this were the restriction, planar graphs would require arbitrarily large numbers of colors.
Other false disproofs violate the assumptions of the theorem, such as using a region that consists of multiple disconnected parts, or disallowing regions of the same color from touching at a point.
Three-coloring.
While every planar map can be colored with four colors, it is NP-complete to decide whether an arbitrary planar map can be colored with just three colors.
A cubic map can be colored with only three colors if and only if each interior region has an even number of neighboring regions. In the US states map example, landlocked Missouri (MO) has eight neighbors (an even number): it must be colored differently from all of them, but the neighbors can alternate colors, so this part of the map needs only three colors. However, landlocked Nevada (NV) has five neighbors (an odd number): one of the neighbors must be colored differently from it and from all the others, so four colors are needed here.
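This parity argument can be checked on a toy model (a sketch for illustration only; the helper names are invented here): a landlocked region together with a ring of "k" neighbors corresponds to a wheel graph with "k" rim vertices, and a brute-force search confirms that three colors suffice exactly when "k" is even.

```python
from itertools import product

def wheel_edges(k):
    """Wheel graph: hub vertex 0 surrounded by a cycle of k rim vertices 1..k."""
    rim = [(i, i % k + 1) for i in range(1, k + 1)]   # cycle around the rim
    spokes = [(0, i) for i in range(1, k + 1)]        # hub touches every rim vertex
    return rim + spokes

def colorable(edges, n_vertices, n_colors):
    """Brute-force test: does a proper coloring with n_colors colors exist?"""
    for assignment in product(range(n_colors), repeat=n_vertices):
        if all(assignment[u] != assignment[v] for u, v in edges):
            return True
    return False

for k in (5, 8):   # Nevada-like (5 neighbors) vs. Missouri-like (8 neighbors)
    edges = wheel_edges(k)
    chromatic = next(c for c in range(1, 5) if colorable(edges, k + 1, c))
    print(f"hub with {k} neighbors: {chromatic} colors needed")
    # prints 4 for k = 5 (odd rim) and 3 for k = 8 (even rim)
```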
Generalizations.
Infinite graphs.
The four color theorem applies not only to finite planar graphs, but also to infinite graphs that can be drawn without crossings in the plane, and even more generally to infinite graphs (possibly with an uncountable number of vertices) for which every finite subgraph is planar. To prove this, one can combine a proof of the theorem for finite planar graphs with the De Bruijn–Erdős theorem stating that, if every finite subgraph of an infinite graph is "k"-colorable, then the whole graph is also "k"-colorable. This can also be seen as an immediate consequence of Kurt Gödel's compactness theorem for first-order logic, simply by expressing the colorability of an infinite graph with a set of logical formulae.
Higher surfaces.
One can also consider the coloring problem on surfaces other than the plane. The problem on the sphere or cylinder is equivalent to that on the plane. For closed (orientable or non-orientable) surfaces with positive genus, the maximum number "p" of colors needed depends on the surface's Euler characteristic χ according to the formula
formula_4
where the outermost brackets denote the floor function.
Alternatively, for an orientable surface the formula can be given in terms of its genus, "g":
formula_5
This formula, the Heawood conjecture, was proposed by P. J. Heawood in 1890 and, after contributions by several people, proved by Gerhard Ringel and J. W. T. Youngs in 1968. The only exception to the formula is the Klein bottle, which has Euler characteristic 0 (hence the formula gives p = 7) but requires only 6 colors, as shown by Philip Franklin in 1934.
For example, the torus has Euler characteristic χ = 0 (and genus "g" = 1) and thus "p" = 7, so no more than 7 colors are required to color any map on a torus. This upper bound of 7 is sharp: certain toroidal polyhedra such as the Szilassi polyhedron require seven colors.
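Both forms of the Heawood bound are straightforward to evaluate; the sketch below (illustrative only, with function names chosen here) computes them for a few surfaces and reproduces the values quoted above, including "p" = 4 for the sphere (χ = 2) and "p" = 7 for the torus (χ = 0, genus 1). The Klein bottle has to be treated as the stated exception, since the formula overstates its value by one.

```python
from math import floor, sqrt

def heawood_from_euler(chi):
    """Heawood bound for a closed surface with Euler characteristic chi."""
    return floor((7 + sqrt(49 - 24 * chi)) / 2)

def heawood_from_genus(g):
    """Same bound, written in terms of the genus g of an orientable surface."""
    return floor((7 + sqrt(1 + 48 * g)) / 2)

print(heawood_from_euler(2))   # sphere: 4 (the four color theorem itself)
print(heawood_from_euler(0))   # torus: 7; the Klein bottle also has chi = 0 but needs only 6
print(heawood_from_genus(1))   # torus via genus: 7
print(heawood_from_genus(2))   # double torus: 8
```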
A Möbius strip requires six colors, as do 1-planar graphs (graphs drawn with at most one simple crossing per edge). If both the vertices and the faces of a planar graph are colored, in such a way that no two adjacent vertices, no two adjacent faces, and no incident vertex-face pair receive the same color, then again at most six colors are needed.
For graphs whose vertices are represented as pairs of points on two distinct surfaces, with edges drawn as non-crossing curves on one of the two surfaces, the largest chromatic number that can occur is known to be at least 9 and at most 12, but its exact value is not known; this is Gerhard Ringel's Earth–Moon problem.
Solid regions.
There is no obvious extension of the coloring result to three-dimensional solid regions. By using a set of "n" flexible rods, one can arrange that every rod touches every other rod. The set would then require "n" colors, or "n" + 1 if one also counts the empty space that touches every rod. The number "n" can be taken to be any integer, as large as desired. Such examples were known to Frederick Guthrie in 1880. Even for axis-parallel cuboids (considered to be adjacent when two cuboids share a two-dimensional boundary area), an unbounded number of colors may be necessary.
Relation to other areas of mathematics.
Dror Bar-Natan gave a statement concerning Lie algebras and Vassiliev invariants which is equivalent to the four color theorem.
Use outside of mathematics.
Despite the motivation from coloring political maps of countries, the theorem is not of particular interest to cartographers. According to an article by the math historian Kenneth May, "Maps utilizing only four colors are rare, and those that do usually require only three. Books on cartography and the history of mapmaking do not mention the four-color property". The theorem also does not guarantee the usual cartographic requirement that non-contiguous regions of the same country (such as the exclave Alaska and the rest of the United States) be colored identically.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "\\chi(G) \\leq 4"
},
{
"math_id": 2,
"text": "6v - 2e = 6\\sum_{i=1}^D v_i - \\sum_{i=1}^D iv_i = \\sum_{i=1}^D (6 - i)v_i = 12."
},
{
"math_id": 3,
"text": "\\sum_{i=1}^D (6 - i)v_i = 12."
},
{
"math_id": 4,
"text": "p=\\left\\lfloor\\frac{7 + \\sqrt{49 - 24 \\chi}}{2}\\right\\rfloor,"
},
{
"math_id": 5,
"text": "p=\\left\\lfloor\\frac{7 + \\sqrt{1 + 48g }}{2}\\right\\rfloor."
}
] | https://en.wikipedia.org/wiki?curid=10949 |
10949989 | Dynein ATPase | Class of enzymes
Dynein ATPase (EC 3.6.4.2, "dynein adenosine 5'-triphosphatase") is an enzyme with systematic name "ATP phosphohydrolase (tubulin-translocating)". This enzyme catalyses the following chemical reaction
ATP + H2O formula_0 ADP + phosphate
This enzyme is a multisubunit protein complex associated with microtubules.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10949989 |
10950255 | Cu2+-exporting ATPase | Cu2+-exporting ATPase (EC 3.6.3.4) is an enzyme with systematic name "ATP phosphohydrolase (Cu2+-exporting)". This enzyme catalyses the following chemical reaction
ATP + H2O + Cu2+in formula_0 ADP + phosphate + Cu2+out
This P-type ATPase undergoes covalent phosphorylation during the transport cycle.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10950255 |
10950307 | Plus-end-directed kinesin ATPase | Class of enzymes
Plus-end-directed kinesin ATPase (EC 3.6.4.4, "kinesin") is an enzyme with systematic name "kinesin ATP phosphohydrolase (plus-end-directed)". This enzyme catalyses the following chemical reaction
ATP + H2O formula_0 ADP + phosphate
This enzyme also hydrolyses GTP.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10950307 |
10950309 | Minus-end-directed kinesin ATPase | Class of enzymes
Minus-end-directed kinesin ATPase (EC 3.6.4.5) is an enzyme with systematic name "kinesin ATP phosphohydrolase (minus-end-directed)". This enzyme catalyses the following chemical reaction
ATP + H2O formula_0 ADP + phosphate
This enzyme catalyses movement towards the minus end of microtubules.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10950309 |