id | title | text | formulas | url
---|---|---|---|---|
14336485
|
Trans-L-3-hydroxyproline dehydratase
|
The enzyme "trans"--3-hydroxyproline dehydratase (EC 4.2.1.77) catalyzes the chemical reaction
"trans"--3-hydroxyproline formula_0 Δ1-pyrroline 2-carboxylate + H2O
This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is trans"--3-hydroxyproline hydro-lyase (Δ1-pyrroline-2-carboxylate-forming). This enzyme is also called trans"--3-hydroxyproline hydro-lyase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14336485
|
14336509
|
Trichodiene synthase
|
The enzyme trichodiene synthase (EC 4.2.3.6) catalyzes the chemical reaction
(2"E",6"E")-farnesyl diphosphate formula_0 trichodiene + diphosphate
This enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on phosphates. The systematic name of this enzyme class is (2"E",6"E")-farnesyl-diphosphate diphosphate-lyase (cyclizing, trichodiene-forming). Other names in common use include trichodiene synthetase, sesquiterpene cyclase, and "trans,trans"-farnesyl-diphosphate sesquiterpenoid-lyase. This enzyme participates in terpenoid biosynthesis.
Structural studies.
As of late 2007, 9 structures have been solved for this class of enzymes, with PDB accession codes 1YJ4, 1YYQ, 1YYR, 1YYS, 1YYT, 1YYU, 2AEK, 2AEL, and 2AET.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14336509
|
14336528
|
UDP-glucose 4,6-dehydratase
|
Class of enzymes
The enzyme UDP-glucose 4,6-dehydratase (EC 4.2.1.76) catalyzes the chemical reaction
UDP-glucose formula_0 UDP-4-dehydro-6-deoxy-D-glucose + H2O
This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is UDP-glucose 4,6-hydro-lyase (UDP-4-dehydro-6-deoxy-D-glucose-forming). Other names in common use include UDP-D-glucose-4,6-hydrolyase, UDP-D-glucose oxidoreductase, and UDP-glucose 4,6-hydro-lyase. This enzyme participates in nucleotide sugars metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14336528
|
14336550
|
Vetispiradiene synthase
|
Vetispiradiene synthase (EC 4.2.3.21) is an enzyme from Egyptian henbane that catalyzes the following chemical reaction:
(2"E",6"E")-farnesyl diphosphate formula_0 vetispiradiene + diphosphate
This enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on phosphates. The systematic name of this enzyme class is (2"E",6"E")-farnesyl-diphosphate diphosphate-lyase (cyclizing, vetispiradiene-forming). Other names in common use include vetispiradiene-forming farnesyl pyrophosphate cyclase, pemnaspirodiene synthase, HVS, and vetispiradiene cyclase. This enzyme participates in terpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14336550
|
1433659
|
Shattered set
|
Notion in computational learning
A class of sets is said to shatter another set if it is possible to "pick out" any element of that set using intersection. The concept of shattered sets plays an important role in Vapnik–Chervonenkis theory, also known as VC-theory. Shattering and VC-theory are used in the study of empirical processes as well as in statistical computational learning theory.
Definition.
Suppose "A" is a set and "C" is a class of sets. The class "C" shatters the set "A" if for each subset "a" of "A", there is some element "c" of "C" such that
formula_0
Equivalently, "C" shatters "A" when their intersection is equal to "A"'s power set: "P"("A") = { "c" ∩ "A" | "c" ∈ "C" }.
We employ the letter "C" to refer to a "class" or "collection" of sets, as in a Vapnik–Chervonenkis class (VC-class). The set "A" is often assumed to be finite because, in empirical processes, we are interested in the shattering of finite sets of data points.
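To make the definition concrete, the following minimal Python sketch checks whether a finite class "C" shatters a finite set "A" by comparing the traces "c" ∩ "A" with the power set of "A"; the example class used here is a hypothetical one, chosen only for illustration.

```python
from itertools import chain, combinations

def shatters(C, A):
    """Return True if the class of sets C shatters the finite set A,
    i.e. every subset a of A arises as c & A for some c in C."""
    A = frozenset(A)
    # All subsets of A (its power set).
    power_set = {frozenset(s) for s in chain.from_iterable(
        combinations(A, r) for r in range(len(A) + 1))}
    # All traces c & A obtainable from members of C.
    traces = {frozenset(c) & A for c in C}
    return power_set <= traces

# Example: C = the "initial segments" of {1, 2, 3}.
C = [set(), {1}, {1, 2}, {1, 2, 3}]
print(shatters(C, {2}))      # True: both {} and {2} arise as traces
print(shatters(C, {1, 2}))   # False: the subset {2} is not c & {1, 2} for any c in C
```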
Example.
We will show that the class of all discs in the plane (two-dimensional space) does not shatter every set of four points on the unit circle, yet the class of all convex sets in the plane does shatter every finite set of points on the unit circle.
Let "A" be a set of four points on the unit circle and let "C" be the class of all discs.
To test whether "C" shatters "A", we attempt to draw a disc around every subset of points in "A". First, we draw a disc around the subsets of each isolated point. Next, we try to draw a disc around every subset of point pairs. This turns out to be doable for adjacent points, but impossible for points on opposite sides of the circle: any disc containing both points of such a pair necessarily contains other points of "A" that are not in the pair. Hence, a pair of opposite points cannot be isolated out of "A" using intersections with the class "C", and so "C" does not shatter "A".
As visualized below:
Because there is some subset which can "not" be isolated by any disc in "C", we conclude that "A" is not shattered by "C". And, with a bit of thought, we can prove that no set of four points is shattered by this "C".
However, if we redefine "C" to be the class of all "elliptical discs", we find that we can still isolate all the subsets from above, as well as the points that were formerly problematic. Thus, this specific set of 4 points is shattered by the class of elliptical discs. Visualized below:
With a bit of thought, we could generalize that any finite set of points on a unit circle could be shattered by the class of all convex sets (visualize connecting the dots).
Shatter coefficient.
To quantify the richness of a collection "C" of sets, we use the concept of "shattering coefficients" (also known as the "growth function"). For a collection "C" of sets formula_1, formula_2 being any space, often a sample space, we define
the "n"th "shattering coefficient" of "C" as
formula_3
where formula_4 denotes the cardinality of the set and formula_5 is any set of "n" points.
formula_6 is the largest number of subsets of any set "A" of "n" points that can be formed by intersecting "A" with the sets in collection "C".
For example, if set "A" contains 3 points, its power set, formula_7, contains formula_8 elements. If "C" shatters "A", its shattering coefficient S_C(3) would be 8 and S_C(2) would be formula_9. However, if one of those sets in formula_7 cannot be obtained through intersections with "C", then S_C(3) would only be 7. If none of those sets can be obtained, S_C(3) would be 0. Additionally, if S_C(2) = 3, for example, then there is an element in the set of all 2-point sets from "A" that cannot be obtained from intersections with "C". It follows that S_C(3) would also be less than 8 (i.e. "C" would not shatter "A"), because we have already located a "missing" set in the smaller power set of 2-point sets.
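Under the additional assumption of a finite ground set formula_2, the shattering coefficient can be computed by brute force, maximizing the number of distinct traces over all "n"-point subsets. The sketch below is only illustrative; the initial-segment class used in the example is hypothetical and satisfies S_C(n) = n + 1.

```python
from itertools import combinations

def shatter_coefficient(C, omega, n):
    """Brute-force S_C(n) over a finite ground set omega: the maximum number of
    distinct traces A & c (c in C) over all n-point subsets A of omega."""
    C = [frozenset(c) for c in C]
    best = 0
    for pts in combinations(omega, n):
        A = frozenset(pts)
        traces = {c & A for c in C}
        best = max(best, len(traces))
    return best

# Example: C = initial segments of {1, ..., 5}; S_C(n) = n + 1 < 2**n for n >= 2.
omega = range(1, 6)
C = [set(range(1, k + 1)) for k in range(6)]   # {}, {1}, {1,2}, ..., {1,...,5}
print([shatter_coefficient(C, omega, n) for n in range(1, 4)])  # [2, 3, 4]
```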
This example illustrates some properties of formula_10:
1. formula_11, because formula_12 for any formula_13.
2. If formula_14, that means there is a set of cardinality "n" which can be shattered by "C".
3. If formula_15 for some formula_16, then formula_17 for all formula_18.
The third property means that if "C" cannot shatter any set of cardinality "N" then it cannot shatter sets of larger cardinalities.
Vapnik–Chervonenkis class.
If "A" cannot be shattered by "C", there will be a smallest value of "n" that makes the shatter coefficient(n) less than formula_19 because as "n" gets larger, there are more sets that could be missed. Alternatively, there is also a largest value of "n" for which the S_C(n) is still formula_19, because as "n" gets smaller, there are fewer sets that could be omitted. The extreme of this is S_C(0) (the shattering coefficient of the empty set), which must always be formula_20. These statements lends themselves to defining the VC dimension of a class "C" as:
formula_21
or, alternatively, as
formula_22
Note that formula_23. The VC dimension is usually defined as VC_0, the largest cardinality of a set of points that "C" can still shatter (i.e. the largest "n" such that formula_14).
Alternatively, if for every "n" there is a set of cardinality "n" which can be shattered by "C", then formula_14 holds for all "n" and the VC dimension of this class "C" is infinite.
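Under the same finiteness assumption, VC_0 can be found by brute force, increasing "n" for as long as some "n"-point set is still shattered; the helper and example class below are again hypothetical illustrations.

```python
from itertools import combinations

def shatters_some_set(C, omega, n):
    """True if some n-point subset of the finite ground set omega is shattered by C,
    i.e. if S_C(n) = 2**n."""
    C = [frozenset(c) for c in C]
    for pts in combinations(omega, n):
        A = frozenset(pts)
        if len({c & A for c in C}) == 2 ** n:
            return True
    return False

def vc_dimension(C, omega):
    """VC_0 in the text: the largest n for which some n-point set is shattered."""
    n = 0
    while n < len(omega) and shatters_some_set(C, omega, n + 1):
        n += 1
    return n

# The initial-segment class shatters single points but no pair: VC dimension 1.
omega = list(range(1, 6))
C = [set(range(1, k + 1)) for k in range(6)]
print(vc_dimension(C, omega))  # 1
```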
A class with finite VC dimension is called a "Vapnik–Chervonenkis class" or "VC class". A class "C" is uniformly Glivenko–Cantelli if and only if it is a VC class.
|
[
{
"math_id": 0,
"text": "a = c \\cap A."
},
{
"math_id": 1,
"text": "s \\subset \\Omega"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": " S_C(n) = \\max_{\\forall x_1,x_2,\\dots,x_n \\in \\Omega } \\operatorname{card} \\{\\,\\{\\,x_1,x_2,\\dots,x_n\\}\\cap s, s\\in C \\}"
},
{
"math_id": 4,
"text": "\\operatorname{card}"
},
{
"math_id": 5,
"text": "x_1,x_2,\\dots,x_n \\in \\Omega "
},
{
"math_id": 6,
"text": " S_C(n) "
},
{
"math_id": 7,
"text": "P(A)"
},
{
"math_id": 8,
"text": "2^3=8"
},
{
"math_id": 9,
"text": "2^2=4"
},
{
"math_id": 10,
"text": "S_C(n)"
},
{
"math_id": 11,
"text": "S_C(n)\\leq 2^n"
},
{
"math_id": 12,
"text": "\\{s\\cap A|s\\in C\\}\\subseteq P(A)"
},
{
"math_id": 13,
"text": "A\\subseteq \\Omega"
},
{
"math_id": 14,
"text": "S_C(n)=2^n"
},
{
"math_id": 15,
"text": "S_C(N)<2^N"
},
{
"math_id": 16,
"text": "N>1"
},
{
"math_id": 17,
"text": "S_C(n)<2^n"
},
{
"math_id": 18,
"text": "n\\geq N"
},
{
"math_id": 19,
"text": "2^n"
},
{
"math_id": 20,
"text": "2^0=1"
},
{
"math_id": 21,
"text": "VC(C)=\\underset{n}{\\min}\\{n:S_C(n)<2^n\\}\\,"
},
{
"math_id": 22,
"text": "VC_0(C)=\\underset{n}{\\max}\\{n:S_C(n)=2^n\\}.\\,"
},
{
"math_id": 23,
"text": "VC(C)=VC_0(C)+1."
}
] |
https://en.wikipedia.org/wiki?curid=1433659
|
14336590
|
Xylonate dehydratase
|
Enzyme
The enzyme xylonate dehydratase (EC 4.2.1.82) catalyzes the chemical reaction:
D-xylonate formula_0 2-dehydro-3-deoxy-D-xylonate + H2O
This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is D-xylonate hydro-lyase (2-dehydro-3-deoxy-D-xylonate-forming). Other names in common use include D-xylo-aldonate dehydratase, D-xylonate dehydratase, and D-xylonate hydro-lyase. This enzyme participates in pentose and glucuronate interconversions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14336590
|
14337088
|
5'-acylphosphoadenosine hydrolase
|
Class of enzymes
In enzymology, a 5'-acylphosphoadenosine hydrolase (EC 3.6.1.20) is an enzyme that catalyzes the chemical reaction
5'-acylphosphoadenosine + H2O formula_0 AMP + a carboxylate
Thus, the two substrates of this enzyme are 5'-acylphosphoadenosine and H2O, whereas its two products are AMP and carboxylate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is 5'-acylphosphoadenosine acylhydrolase. This enzyme is also called 5-phosphoadenosine hydrolase. This enzyme participates in purine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337088
|
14337129
|
Adenosine-tetraphosphatase
|
Class of enzymes
In enzymology, an adenosine-tetraphosphatase (EC 3.6.1.14) is an enzyme that catalyzes the chemical reaction
adenosine 5'-tetraphosphate + H2O formula_0 ATP + phosphate
Thus, the two substrates of this enzyme are adenosine 5'-tetraphosphate and H2O, whereas its two products are ATP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is adenosine-tetraphosphate phosphohydrolase. This enzyme participates in purine metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2V7Q.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337129
|
14337146
|
Adenylylsulfatase
|
Class of enzymes
In enzymology, an adenylylsulfatase (EC 3.6.2.1) is an enzyme that catalyzes the chemical reaction
adenylyl sulfate + H2O formula_0 AMP + sulfate + 2H+
Thus, the two substrates of this enzyme are adenylyl sulfate and H2O, whereas its two products are AMP and sulfate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in sulfonyl-containing anhydrides. The systematic name of this enzyme class is adenylyl-sulfate sulfohydrolase. Other names in common use include adenosine 5'-phosphosulfate sulfohydrolase, and adenylylsulfate sulfohydrolase. This enzyme participates in sulfur metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337146
|
14337176
|
ADP-sugar diphosphatase
|
Class of enzymes
In enzymology, an ADP-sugar diphosphatase (EC 3.6.1.21) is an enzyme that catalyzes the chemical reaction
ADP-sugar + H2O formula_0 AMP + alpha-D-aldose 1-phosphate
Thus, the two substrates of this enzyme are ADP-sugar and H2O, whereas its two products are AMP and alpha-D-aldose 1-phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is ADP-sugar sugarphosphohydrolase. Other names in common use include ADP-sugar pyrophosphatase, and adenosine diphosphosugar pyrophosphatase. This enzyme participates in 3 metabolic pathways: fructose and mannose metabolism, purine metabolism, and starch and sucrose metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337176
|
14337194
|
Ag+-exporting ATPase
|
In enzymology, an Ag+-exporting ATPase (EC 7.2.2.15) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Ag+in formula_0 ADP + phosphate + Ag+out
The 3 substrates of this enzyme are ATP, H2O, and Ag+, whereas its 3 products are ADP, phosphate, and Ag+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Ag+-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337194
|
14337203
|
Alpha-factor-transporting ATPase
|
Class of enzymes
In enzymology, an alpha-factor-transporting ATPase (EC 3.6.3.48) is an enzyme that catalyzes the chemical reaction
ATP + H2O + alpha-factorin formula_0 ADP + phosphate + alpha-factorout
The 3 substrates of this enzyme are ATP, H2O, and alpha-factor, whereas its 3 products are ADP, phosphate, and alpha-factor.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (alpha-factor-transporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337203
|
14337225
|
Arsenite-transporting ATPase
|
Class of enzymes
In enzymology, an arsenite-transporting ATPase (EC 3.6.3.16) is an enzyme that catalyzes the chemical reaction
ATP + H2O + arsenitein formula_0 ADP + phosphate + arseniteout
The 3 substrates of this enzyme are ATP, H2O, and arsenite, whereas its 3 products are ADP, phosphate, and arsenite.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (arsenite-exporting).
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1IHU, 1II0, and 1II9.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337225
|
14337241
|
ATP diphosphatase
|
Class of enzymes
In enzymology, an ATP diphosphatase (EC 3.6.1.8) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 AMP + diphosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are AMP and diphosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is ATP diphosphohydrolase (diphosphate-forming). Other names in common use include ATPase, ATP pyrophosphatase, adenosine triphosphate pyrophosphatase, and ATP diphosphohydrolase [ambiguous]. This enzyme participates in purine metabolism and pyrimidine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337241
|
14337256
|
Beta-glucan-transporting ATPase
|
Class of enzymes
In enzymology, a beta-glucan-transporting ATPase (EC 3.6.3.42) is an enzyme that catalyzes the chemical reaction
ATP + H2O + beta-glucanin formula_0 ADP + phosphate + beta-glucanout
The 3 substrates of this enzyme are ATP, H2O, and beta-glucan, whereas its 3 products are ADP, phosphate, and beta-glucan.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (beta-glucan-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337256
|
14337270
|
Bis(5'-adenosyl)-triphosphatase
|
Class of enzymes
In enzymology, a bis(5'-adenosyl)-triphosphatase (EC 3.6.1.29) is an enzyme that catalyzes the chemical reaction
P1,P3-bis(5'-adenosyl) triphosphate + H2O formula_0 ADP + AMP
Thus, the two substrates of this enzyme are P1,P3-bis(5'-adenosyl) triphosphate and H2O, whereas its two products are ADP and AMP.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is P1,P3-bis(5'-adenosyl)-triphosphate adenylohydrolase. Other names in common use include dinucleosidetriphosphatase, diadenosine 5,5-P1,P3-triphosphatase, and 1-P,3-P-bis(5'-adenosyl)-triphosphate adenylohydrolase. This enzyme participates in metabolic pathways involved in purine metabolism, and may have a role in the development of small cell lung cancer and non-small cell lung cancer.
Structural studies.
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1FHI, 2FHI, 4FIT, 5FIT, and 6FIT.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337270
|
14337299
|
Bis(5'-nucleosyl)-tetraphosphatase (asymmetrical)
|
Enzyme class
In enzymology, a bis(5'-nucleosyl)-tetraphosphatase (asymmetrical) (EC 3.6.1.17) is an enzyme that catalyzes the chemical reaction
P1,P4-bis(5'-guanosyl) tetraphosphate + H2O formula_0 GTP + GMP
Thus, the two substrates of this enzyme are P1,P4-bis(5'-guanosyl) tetraphosphate and H2O, whereas its two products are GTP and GMP.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is P1,P4-bis(5'-nucleosyl)-tetraphosphate nucleotidohydrolase. Other names in common use include bis(5'-guanosyl)-tetraphosphatase, bis(5'-adenosyl)-tetraphosphatase, diguanosinetetraphosphatase (asymmetrical), dinucleosidetetraphosphatase (asymmetrical), diadenosine P1,P4-tetraphosphatase, dinucleoside tetraphosphatase, and 1-P,4-P-bis(5'-nucleosyl)-tetraphosphate nucleotidohydrolase. This enzyme participates in purine metabolism and pyrimidine metabolism.
Structural studies.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1F3Y, 1JKN, 1KT9, 1KTG, 1XSA, 1XSB, and 1XSC.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337299
|
14337318
|
Bis(5'-nucleosyl)-tetraphosphatase (symmetrical)
|
Enzyme
In enzymology, a bis(5'-nucleosyl)-tetraphosphatase (symmetrical) (EC 3.6.1.41) is an enzyme that catalyzes the chemical reaction
P1,P4-bis(5'-adenosyl) tetraphosphate + H2O formula_0 2 ADP
Thus, the two substrates of this enzyme are P1,P4-bis(5'-adenosyl) tetraphosphate and H2O, whereas its product is ADP.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is P1,P4-bis(5'-nucleosyl)-tetraphosphate nucleosidebisphosphohydrolase. Other names in common use include diadenosinetetraphosphatase (symmetrical), dinucleosidetetraphosphatase (symmetrical), symmetrical diadenosine tetraphosphate hydrolase, adenosine tetraphosphate phosphodiesterase, Ap4A hydrolase, bis(5'-adenosyl) tetraphosphatase, diadenosine tetraphosphate hydrolase, diadenosine polyphosphate hydrolase, diadenosine 5',5-P1,P4-tetraphosphatase, diadenosinetetraphosphatase (symmetrical), 1-P,4-P-bis(5'-nucleosyl)-tetraphosphate, and nucleosidebisphosphohydrolase. This enzyme participates in purine metabolism.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2DFJ and 2QJC.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337318
|
14337335
|
Cadmium-transporting ATPase
|
Class of enzymes
In enzymology, a cadmium-transporting ATPase (EC 7.2.2.2) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (heavy-metal-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337335
|
14337354
|
Capsular-polysaccharide-transporting ATPase
|
Enzyme
In enzymology, a capsular-polysaccharide-transporting ATPase (EC 7.6.2.2) is an enzyme that catalyzes the chemical reaction
ATP + H2O + capsular polysaccharidein formula_0 ADP + phosphate + capsular polysaccharideout
The 3 substrates of this enzyme are ATP, H2O, and capsular polysaccharide, whereas its 3 products are ADP, phosphate, and capsular polysaccharide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (capsular-polysaccharide-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337354
|
14337375
|
Cd2+-exporting ATPase
|
Class of enzymes
In enzymology, a Cd2+-exporting ATPase (EC 3.6.3.3) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Cd2+in formula_0 ADP + phosphate + Cd2+out
The 3 substrates of this enzyme are ATP, H2O, and Cd2+, whereas its 3 products are ADP, phosphate, and Cd2+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Cd2+-exporting).
Structural studies.
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1MWY, 1MWZ, 2AJ0, and 2AJ1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337375
|
14337389
|
CDP-diacylglycerol diphosphatase
|
Enzyme
In enzymology, a CDP-diacylglycerol diphosphatase (EC 3.6.1.26) is an enzyme that catalyzes the chemical reaction
CDP-diacylglycerol + H2O formula_0 CMP + phosphatidate
Thus, the two substrates of this enzyme are CDP-diacylglycerol and H2O, whereas its two products are CMP and phosphatidate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is CDP-diacylglycerol phosphatidylhydrolase. Other names in common use include cytidine diphosphodiacylglycerol pyrophosphatase, and CDP diacylglycerol hydrolase. This enzyme participates in glycerophospholipid metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2POF.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337389
|
14337404
|
CDP-glycerol diphosphatase
|
Class of enzymes
In enzymology, a CDP-glycerol diphosphatase (EC 3.6.1.16) is an enzyme that catalyzes the chemical reaction
CDP-glycerol + H2O formula_0 CMP + sn-glycerol 3-phosphate
Thus, the two substrates of this enzyme are CDP-glycerol and H2O, whereas its two products are CMP and sn-glycerol 3-phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is CDP-glycerol phosphoglycerohydrolase. Other names in common use include CDP-glycerol pyrophosphatase, and cytidine diphosphoglycerol pyrophosphatase. This enzyme participates in glycerophospholipid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14337404
|
1433747
|
Harnack's principle
|
Theorem regarding the convergence of harmonic functions.
In the mathematical field of partial differential equations, Harnack's principle or Harnack's theorem is a corollary of Harnack's inequality which deals with the convergence of sequences of harmonic functions.
Given a sequence of harmonic functions "u"1, "u"2, ... on an open connected subset G of the Euclidean space R"n", which are pointwise monotonically nondecreasing in the sense that
formula_0
for every point x of G, then the limit
formula_1
automatically exists in the extended real number line for every x. Harnack's theorem says that the limit either is infinite at every point of G or it is finite at every point of G. In the latter case, the convergence is uniform on compact sets and the limit is a harmonic function on G.
The theorem is a corollary of Harnack's inequality. If "u""n"("y") is a Cauchy sequence for any particular value of y, then the Harnack inequality applied to the harmonic function "u""m" − "u""n" implies, for an arbitrary compact set D containing y, that sup"D" |"u""m" − "u""n"| is arbitrarily small for sufficiently large m and n. This is exactly the definition of uniform convergence on compact sets. In words, the Harnack inequality is a tool which directly propagates the Cauchy property of a sequence of harmonic functions at a single point to the Cauchy property at all points.
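Spelled out, the key estimate is the following (a sketch, assuming "m" ≥ "n" so that "u""m" − "u""n" is a nonnegative harmonic function by the monotonicity hypothesis):

```latex
% Harnack's inequality on a compact set D \subset G containing y gives a
% constant C_D, independent of m and n, such that
\sup_{D}\,(u_m - u_n) \;\le\; C_D \,\inf_{D}\,(u_m - u_n) \;\le\; C_D \,\bigl(u_m(y) - u_n(y)\bigr),
% so the Cauchy property of (u_n(y)) forces u_m - u_n to be uniformly small on D.
```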
Having established uniform convergence on compact sets, the harmonicity of the limit is an immediate corollary of the fact that the mean value property (automatically preserved by uniform convergence) fully characterizes harmonic functions among continuous functions.
The proof of uniform convergence on compact sets holds equally well for any linear second-order elliptic partial differential equation, provided that it is linear so that "u""m" − "u""n" solves the same equation. The only difference is that the more general Harnack inequality holding for solutions of second-order elliptic PDE must be used, rather than that only for harmonic functions. Having established uniform convergence on compact sets, the mean value property is not available in this more general setting, and so the proof of convergence to a new solution must instead make use of other tools, such as the Schauder estimates.
References.
<templatestyles src="Reflist/styles.css" />
Sources
|
[
{
"math_id": 0,
"text": "u_1(x) \\le u_2(x) \\le \\dots"
},
{
"math_id": 1,
"text": " \\lim_{n\\to\\infty}u_n(x)"
}
] |
https://en.wikipedia.org/wiki?curid=1433747
|
14339497
|
London low emission zone
|
Traffic air pollution charge scheme
The London Low Emission Zone (LEZ) is an area of London in which an emissions standard based charge is applied to non-compliant commercial vehicles. Its aim is to reduce the exhaust emissions of diesel-powered vehicles in London. This scheme should not be confused with the Ultra Low Emission Zone (ULEZ), introduced in April 2019, which applies to all vehicles. Vehicles that do not conform to various emission standards are charged; the others may enter the controlled zone free of charge. The low emission zone started operating on 4 February 2008 with phased introduction of an increasingly stricter regime until 3 January 2012. The scheme is administered by the Transport for London executive agency within the Greater London Authority.
The current standard for large commercial vehicles (over 3.5 tonnes) is Euro VI, increased from Euro IV on 1 March 2021. Vehicles need to meet these standards or face a penalty of £100 per day. The new rules were due to come into force in October 2020 but were postponed due to the 2020 coronavirus pandemic.
History.
Since 1993, the London Air Quality Network of Imperial College London has coordinated the monitoring of air pollution across 30 London boroughs and Heathrow, and has noted that in 2005–2006 almost all road and kerbside monitoring sites across Greater London exceeded the annual average limits for nitrogen dioxide of 40 μg mformula_0 (21 ppb), with eleven sites exceeding the hourly limits of 200 μg mformula_0 (105 ppb) on at least 18 occasions each.
In 2000, one measuring site exceeded EU limits for air pollution, and pollution rose in the two years prior to 2007. The Green Party reported that nine sites in London exceeded the EU limits for air pollution in 2007. The A23 at Brixton suffered the most consistently high levels, for more than two-fifths of the period. Carbon monoxide levels had reduced rapidly during the late 1990s and been relatively stable since 2002.
In 2007 Transport for London (TfL) estimated that there were 1,000 premature deaths and a further 1,000 hospital admissions annually due to poor air quality from all causes.
Planning.
Towards the end of 2006, the Mayor of London, Ken Livingstone, proposed changing the congestion charge fee, from being a flat rate for all qualifying vehicles, to being based on Vehicle Excise Duty (VED) bands.
VED bands for new vehicles are based on the results of a laboratory test, designed to calculate the theoretical potential emissions of the vehicle in grammes of CO2 per kilometre travelled, under ideal conditions. The lowest band, "Band A", is for vehicles with a calculated CO2 value of up to 100 g/km, the highest band, "Band G", is for vehicles with a CO2 value of greater than 225 g/km. These results were to be used to determine which band each vehicle falls into. The resulting figures were described by the editor-in-chief of "What Car?" magazine as "deeply flawed".
Under the proposed modifications to the scheme, vehicles falling into Band A would have a reduced, or even zero charge, whilst those in Band G would be charged at £25 per day. Certain categories of vehicle, such as electric vehicles, are already exempt from the charge. These proposals were put out to public consultation in August 2007.
In early 2006, consultations began on another charging scheme for motor vehicles entering London. Under this new scheme, a daily charge would be applied to the vehicles responsible for most of London's road traffic emissions: diesel-engined commercial vehicles such as lorries, buses, and coaches. Cars were explicitly excluded. The objective of the new scheme was to help London meet its European Union (EU) air pollution obligations—specifically the EU Air Quality Framework Directive—as part of the Mayor's programme to make London the greenest city in the world. Despite some opposition, on 9 May 2007 the Mayor confirmed that he would proceed with a London Low Emission Zone, focused entirely on vehicle emissions and planned to reduce emissions overall by 16% by 2012.
Introduction.
The LEZ came into operation on 4 February 2008 with a phased introduction of further provisions as increasingly tough emissions standards apply. Vehicles registered after October 2001 are generally compliant with the first stages of the zone, when Euro 3 engine compliance was the mandatory requirement.
The regulations were tightened in July 2008 with more vehicle types included.
On 2 February 2009 the Mayor of London, Boris Johnson, announced his intention to cancel the third phase of the LEZ covering vans from 2010, subject to the outcome of a public consultation later in the year. The Freight Transport Association welcomed this move in its 3 February press release. The scheme was fully implemented on 3 January 2012.
For London buses, a new Low Emission Zone (LEZ) standard was adopted from January 2012: older buses (those with no electronic destination displays and more than 12 years old) were selectively phased out, and the remaining buses were converted to Euro 3 or 4 standards. Further LEZ rules implemented from 2015 allowed the removal of all Euro II vehicles and of Euro III vehicles without catalytic converters. In July 2016 the last bus not meeting the standards was withdrawn.
Tougher standards from 2021.
From 1 March 2021, all large commercial vehicles in London need to meet Euro VI standards or face a penalty of £100 per day. Commercial vehicles which do not meet the older standards (Euro IV) are charged £300 per day.
Statistics from TfL showed that the number of vehicles complying rose to nearly 90% upon the introduction of tougher standards in March 2021, up from 70% in May 2019. All buses in London meet the Euro VI standards, with an increasing number becoming zero emission.
Timeline.
Applicable vehicles over the implementation phase:
Operation.
The zone covers most of Greater London (with minor deviations to allow diversionary routes and facilities to turn around without entering the zone, and excluding the M25 motorway). The boundary of the zone, which operates 24 hours a day, 7 days a week, is marked by signs. The LEZ emissions standards are based on European emission standards relating to particulate matter (PM) emitted by vehicles, which has an effect on health. The following vehicles are not charged:
Non-GB registered vehicles that meet the required LEZ standards will need to register with TfL; most compliant GB registered vehicles do not. Owners of vehicles that do not meet the above requirements have a number of options:
The zone is monitored using automatic number plate recognition (ANPR) cameras to record number plates. Vehicles entering or moving around the zone are checked against the records of the DVLA to enable TfL to pursue owners of vehicles for which the charge has not been paid. For vehicles registered outside of Great Britain, an international debt recovery agency is used to obtain unpaid charges and fines. The scheme is operated on a day-to-day basis by IBM.
Reaction.
The scheme was opposed during the consultation phase by a range of interested parties: The Freight Transport Association proposed an alternative scheme, reliant on a vehicle replacement cycle, with lorries over eight years old being liable and higher age thresholds for other vehicles. They also stated that the standards were different from the forthcoming Euro V requirements, as well as suggesting that the scheme did not do anything to help reduce CO2 emissions. The Road Haulage Association opposed the scheme, stating that the costs to hauliers and benefits to the environment did not justify its introduction. Schools and St John Ambulance have expressed concern about the additional costs that the scheme will bring them, particularly in light of the restricted budgets they operate under. London First, a business organisation, criticised aspects of the scheme in relation to the categorisation of vehicles, but supported the principle. The scheme has been supported by the British Lung Foundation and the British Heart Foundation.
Related schemes.
T-Charge.
In October 2017, London Mayor Sadiq Khan introduced a new £10 toxicity charge, known as the T-Charge, after London suffered record air pollution levels in January 2017, and the city was put on very high pollution alert for the first time ever, as cold and stationary weather failed to clear toxic pollutants emitted mainly by diesel vehicles. The T-Charge was levied on vehicles within Central London on top of the £11.50 congestion charge. The T-Charge was replaced by the ULEZ in April 2019.
Ultra Low Emission Zone.
The ULEZ, which went into effect on 8 April 2019, initially covered the same area as the T-Charge but applies 24/7, 365 days a year, with charges of £12.50 a day for cars, vans and motorcycles, and £100 a day for lorries, buses and coaches. One month after its introduction, the number of the worst polluting vehicles entering the zone each day had dropped from 35,578 to 26,195. The zone was extended to the North Circular and South Circular roads on 25 October 2021. It was expanded again on 29 August 2023 to coincide with the low emission zone, covering almost all of Greater London.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "^{-3}"
}
] |
https://en.wikipedia.org/wiki?curid=14339497
|
14340077
|
Ultrahyperbolic equation
|
Class of partial differential equations
In the mathematical field of differential equations, the ultrahyperbolic equation is a partial differential equation (PDE) for an unknown scalar function u of 2"n" variables "x"1, ..., "x""n", "y"1, ..., "y""n" of the form
formula_0
More generally, if a is any quadratic form in 2"n" variables with signature ("n", "n"), then any PDE whose principal part is formula_1 is said to be ultrahyperbolic. Any such equation can be put in the form above by means of a change of variables.
The ultrahyperbolic equation has been studied from a number of viewpoints. On the one hand, it resembles the classical wave equation. This has led to a number of developments concerning its characteristics, one of which is due to Fritz John: the John equation.
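For instance, in the lowest-dimensional case "n" = 1 (which is hyperbolic rather than genuinely ultrahyperbolic) the equation above reduces to the one-dimensional wave equation, with "y"1 playing the role of time and with d'Alembert's general solution:

```latex
\frac{\partial^2 u}{\partial x_1^2} - \frac{\partial^2 u}{\partial y_1^2} = 0,
\qquad
u(x_1, y_1) = f(x_1 + y_1) + g(x_1 - y_1).
```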
In 2008, Walter Craig and Steven Weinstein proved that under a nonlocal constraint, the initial value problem is well-posed for initial data given on a codimension-one hypersurface. And later, in 2022, a research team at the University of Michigan extended the conditions for solving ultrahyperbolic wave equations to complex-time (kime), demonstrated space-kime dynamics, and showed data science applications using tensor-based linear modeling of functional magnetic resonance imaging data.
The equation has also been studied from the point of view of symmetric spaces, and elliptic differential operators. In particular, the ultrahyperbolic equation satisfies an analog of the mean value theorem for harmonic functions.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\frac{\\partial^2 u}{\\partial x_1^2} + \\cdots + \\frac{\\partial^2 u}{\\partial x_n^2} - \\frac{\\partial^2 u}{\\partial y_1^2} - \\cdots - \\frac{\\partial^2 u}{\\partial y_n^2} = 0.\n"
},
{
"math_id": 1,
"text": "a_{ij}u_{x_ix_j}"
}
] |
https://en.wikipedia.org/wiki?curid=14340077
|
143431
|
Stereographic projection
|
Particular mapping that projects a sphere onto a plane
In mathematics, a stereographic projection is a perspective projection of the sphere, through a specific point on the sphere (the "pole" or "center of projection"), onto a plane (the "projection plane") perpendicular to the diameter through the point. It is a smooth, bijective function from the entire sphere except the center of projection to the entire plane. It maps circles on the sphere to circles or lines on the plane, and is conformal, meaning that it preserves angles at which curves meet and thus locally approximately preserves shapes. It is neither isometric (distance preserving) nor equiareal (area preserving).
The stereographic projection gives a way to represent a sphere by a plane. The metric induced by the inverse stereographic projection from the plane to the sphere defines a geodesic distance between points in the plane equal to the spherical distance between the spherical points they represent. A two-dimensional coordinate system on the stereographic plane is an alternative setting for spherical analytic geometry instead of spherical polar coordinates or three-dimensional cartesian coordinates. This is the spherical analog of the Poincaré disk model of the hyperbolic plane.
Intuitively, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis, cartography, geology, and photography. Sometimes stereographic computations are done graphically using a special kind of graph paper called a stereographic net, shortened to stereonet, or Wulff net.
History.
The origin of the stereographic projection is not known, but it is believed to have been discovered by Ancient Greek astronomers and used for projecting the celestial sphere to the plane so that the motions of stars and planets could be analyzed using plane geometry. Its earliest extant description is found in Ptolemy's "Planisphere" (2nd century AD), but it was ambiguously attributed to Hipparchus (2nd century BC) by Synesius (c. 400 AD), and Apollonius's "Conics" (c. 200 BC) contains a theorem which is crucial in proving the property that the stereographic projection maps circles to circles. Hipparchus, Apollonius, Archimedes, and even Eudoxus (4th century BC) have sometimes been speculatively credited with inventing or knowing of the stereographic projection, but some experts consider these attributions unjustified. Ptolemy refers to the use of the stereographic projection in a "horoscopic instrument", perhaps the anaphoric clock described by Vitruvius (1st century BC).
By the time of Theon of Alexandria (4th century), the planisphere had been combined with a dioptra to form the planispheric astrolabe ("star taker"), a capable portable device which could be used for measuring star positions and performing a wide variety of astronomical calculations. The astrolabe was in continuous use by Byzantine astronomers, and was significantly further developed by medieval Islamic astronomers. It was transmitted to Western Europe during the 11th–12th century, with Arabic texts translated into Latin.
In the 16th and 17th century, the equatorial aspect of the stereographic projection was commonly used for maps of the Eastern and Western Hemispheres. It is believed that already the map created in 1507 by Gualterius Lud was in stereographic projection, as were later the maps of Jean Roze (1542), Rumold Mercator (1595), and many others. In star charts, even this equatorial aspect had been utilised already by the ancient astronomers like Ptolemy.
François d'Aguilon gave the stereographic projection its current name in his 1613 work "Opticorum libri sex philosophis juxta ac mathematicis utiles" (Six Books of Optics, useful for philosophers and mathematicians alike).
In the late 16th century, Thomas Harriot proved that the stereographic projection is conformal; however, this proof was never published and sat among his papers in a box for more than three centuries. In 1695, Edmond Halley, motivated by his interest in star charts, was the first to publish a proof. He used the recently established tools of calculus, invented by his friend Isaac Newton.
Definition.
First formulation.
The unit sphere "S"2 in three-dimensional space R3 is the set of points ("x", "y", "z") such that "x"2 + "y"2 + "z"2
1. Let "N"
(0, 0, 1) be the "north pole", and let "M" be the rest of the sphere. The plane "z"
0 runs through the center of the sphere; the "equator" is the intersection of the sphere with this plane.
For any point "P" on "M", there is a unique line through "N" and "P", and this line intersects the plane "z"
0 in exactly one point "P′", known as the stereographic projection of "P" onto the plane.
In Cartesian coordinates ("x", "y", "z") on the sphere and ("X", "Y") on the plane, the projection and its inverse are given by the formulas
formula_0
In spherical coordinates ("φ", "θ") on the sphere (with "φ" the zenith angle, 0 ≤ "φ" ≤ π, and "θ" the azimuth, 0 ≤ "θ" ≤ 2π) and polar coordinates ("R", "Θ") on the plane, the projection and its inverse are
formula_1
Here, "φ" is understood to have value π when "R" = 0. Also, there are many ways to rewrite these formulas using trigonometric identities. In cylindrical coordinates ("r", "θ", "z") on the sphere and polar coordinates ("R", "Θ") on the plane, the projection and its inverse are
formula_2
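As a numerical illustration, the sketch below implements the projection from the north pole (0, 0, 1) onto the plane "z" = 0 and its inverse; the Cartesian expressions are the standard ones and are supplied here because the formula_0 placeholder is not expanded in this extract.

```python
def stereographic(p):
    """Project a point (x, y, z) of the unit sphere (z != 1) from the north
    pole N = (0, 0, 1) onto the equatorial plane z = 0."""
    x, y, z = p
    return (x / (1 - z), y / (1 - z))

def inverse_stereographic(q):
    """Map a point (X, Y) of the plane back to the unit sphere."""
    X, Y = q
    d = 1 + X * X + Y * Y
    return (2 * X / d, 2 * Y / d, (X * X + Y * Y - 1) / d)

# The south pole maps to the origin and the equator to the unit circle.
print(stereographic((0.0, 0.0, -1.0)))   # (0.0, 0.0)
print(stereographic((1.0, 0.0, 0.0)))    # (1.0, 0.0)

# Round trip for the sample point used in the Wulff net example below
# (it lies on the unit sphere only up to rounding).
p = (0.321, 0.557, -0.766)
print(inverse_stereographic(stereographic(p)))  # approximately p
```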
Other conventions.
Some authors define stereographic projection from the north pole (0, 0, 1) onto the plane "z" = −1, which is tangent to the unit sphere at the south pole (0, 0, −1). This can be described as a composition of a projection onto the equatorial plane described above, and a homothety from it to the polar plane. The homothety scales the image by a factor of 2 (a ratio of a diameter to a radius of the sphere), hence the values "X" and "Y" produced by this projection are exactly twice those produced by the equatorial projection described in the preceding section. For example, this projection sends the equator to the circle of radius 2 centered at the origin. While the equatorial projection produces no infinitesimal area distortion along the equator, this pole-tangent projection instead produces no infinitesimal area distortion at the south pole.
Other authors use a sphere of radius and the plane "z"
−. In this case the formulae become
formula_3
In general, one can define a stereographic projection from any point "Q" on the sphere onto any plane "E" such that
As long as "E" meets these conditions, then for any point "P" other than "Q" the line through "P" and "Q" meets "E" in exactly one point "P′", which is defined to be the stereographic projection of "P" onto "E".
Generalizations.
More generally, stereographic projection may be applied to the unit "n"-sphere "S""n" in ("n" + 1)-dimensional Euclidean space E"n"+1. If "Q" is a point of "S""n" and "E" a hyperplane in E"n"+1, then the stereographic projection of a point "P" ∈ "S""n" − {"Q"} is the point "P′" of intersection of the line through "P" and "Q" with "E". In Cartesian coordinates ("x""i", "i" from 0 to "n") on "S""n" and ("X""i", "i" from 1 to "n") on "E", the projection from "Q" = (1, 0, 0, ..., 0) ∈ "S""n" is given by
formula_4
Defining
formula_5
the inverse is given by
formula_6
Still more generally, suppose that "S" is a (nonsingular) quadric hypersurface in the projective space P"n"+1. In other words, "S" is the locus of zeros of a non-singular quadratic form "f"("x"0, ..., "x""n"+1) in the homogeneous coordinates "x""i". Fix any point "Q" on "S" and a hyperplane "E" in P"n"+1 not containing "Q". Then the stereographic projection of a point "P" in "S" − {"Q"} is the unique point of intersection of the line through "P" and "Q" with "E". As before, the stereographic projection is conformal and invertible on a non-empty Zariski open set. The stereographic projection presents the quadric hypersurface as a rational hypersurface. This construction plays a role in algebraic geometry and conformal geometry.
Properties.
The first stereographic projection defined in the preceding section sends the "south pole" (0, 0, −1) of the unit sphere to (0, 0), the equator to the unit circle, the southern hemisphere to the region inside the circle, and the northern hemisphere to the region outside the circle.
The projection is not defined at the projection point "N" = (0, 0, 1). Small neighborhoods of this point are sent to subsets of the plane far away from (0, 0). The closer "P" is to (0, 0, 1), the more distant its image is from (0, 0) in the plane. For this reason it is common to speak of (0, 0, 1) as mapping to "infinity" in the plane, and of the sphere as completing the plane by adding a point at infinity. This notion finds utility in projective geometry and complex analysis. On a merely topological level, it illustrates how the sphere is homeomorphic to the one-point compactification of the plane.
In Cartesian coordinates a point "P"("x", "y", "z") on the sphere and its image "P′"("X", "Y") on the plane either are both rational points or neither of them is:
formula_7
Stereographic projection is conformal, meaning that it preserves the angles at which curves cross each other (see figures). On the other hand, stereographic projection does not preserve area; in general, the area of a region of the sphere does not equal the area of its projection onto the plane. The area element is given in ("X", "Y") coordinates by
formula_8
Along the unit circle, where "X"2 + "Y"2 = 1, there is no inflation of area in the limit, giving a scale factor of 1. Near (0, 0) areas are inflated by a factor of 4, and near infinity areas are inflated by arbitrarily small factors.
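As a quick check of these two scale factors, the standard area element of this projection (quoted here from the usual formulas; it should agree with formula_8 above) evaluates to 1 on the unit circle and to 4 at the origin:

```latex
dA \;=\; \left(\frac{2}{1 + X^2 + Y^2}\right)^{2} dX\,dY,
\qquad
\left(\frac{2}{1 + 1}\right)^{2} = 1 \ \text{on } X^2 + Y^2 = 1,
\qquad
\left(\frac{2}{1 + 0}\right)^{2} = 4 \ \text{at } (0, 0).
```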
The metric is given in ("X", "Y") coordinates by
formula_9
and is the unique formula found in Bernhard Riemann's "Habilitationsschrift" on the foundations of geometry, delivered at Göttingen in 1854, and entitled "Über die Hypothesen welche der Geometrie zu Grunde liegen".
No map from the sphere to the plane can be both conformal and area-preserving. If it were, then it would be a local isometry and would preserve Gaussian curvature. The sphere and the plane have different Gaussian curvatures, so this is impossible.
Circles on the sphere that do "not" pass through the point of projection are projected to circles on the plane. Circles on the sphere that "do" pass through the point of projection are projected to straight lines on the plane. These lines are sometimes thought of as circles through the point at infinity, or circles of infinite radius. These properties can be verified by using the expressions of formula_10 in terms of formula_11 given in the first formulation above: using these expressions for a substitution in the equation formula_12 of the plane containing a circle on the sphere, and clearing denominators, one gets the equation of a circle, that is, a second-degree equation with formula_13 as its quadratic part. The equation becomes linear if formula_14 that is, if the plane passes through the point of projection.
All lines in the plane, when transformed to circles on the sphere by the inverse of stereographic projection, meet at the projection point. Parallel lines, which do not intersect in the plane, are transformed to circles tangent at projection point. Intersecting lines are transformed to circles that intersect transversally at two points in the sphere, one of which is the projection point. (Similar remarks hold about the real projective plane, but the intersection relationships are different there.)
The loxodromes of the sphere map to curves on the plane of the form
formula_15
where the parameter "a" measures the "tightness" of the loxodrome. Thus loxodromes correspond to logarithmic spirals. These spirals intersect radial lines in the plane at equal angles, just as the loxodromes intersect meridians on the sphere at equal angles.
The stereographic projection relates to the plane inversion in a simple way. Let "P" and "Q" be two points on the sphere with projections "P′" and "Q′" on the plane. Then "P′" and "Q′" are inversive images of each other in the image of the equatorial circle if and only if "P" and "Q" are reflections of each other in the equatorial plane.
In other words, if:
then "P′" and "P″" are inversive images of each other in the unit circle.
formula_16
Wulff net.
Stereographic projection plots can be carried out by a computer using the explicit formulas given above. However, for graphing by hand these formulas are unwieldy. Instead, it is common to use graph paper designed specifically for the task. This special graph paper is called a stereonet or Wulff net, after the Russian mineralogist George (Yuri Viktorovich) Wulff.
The Wulff net shown here is the stereographic projection of the grid of parallels and meridians of a hemisphere centred at a point on the equator (such as the Eastern or Western hemisphere of a planet).
In the figure, the area-distorting property of the stereographic projection can be seen by comparing a grid sector near the center of the net with one at the far right or left. The two sectors have equal areas on the sphere. On the disk, the latter has nearly four times the area of the former. If the grid is made finer, this ratio approaches exactly 4.
On the Wulff net, the images of the parallels and meridians intersect at right angles. This orthogonality property is a consequence of the angle-preserving property of the stereographic projection. (However, the angle-preserving property is stronger than this property. Not all projections that preserve the orthogonality of parallels and meridians are angle-preserving.)
For an example of the use of the Wulff net, imagine two copies of it on thin paper, one atop the other, aligned and tacked at their mutual center. Let "P" be the point on the lower unit hemisphere whose spherical coordinates are (140°, 60°) and whose Cartesian coordinates are (0.321, 0.557, −0.766). This point lies on a line oriented 60° counterclockwise from the positive "x"-axis (or 30° clockwise from the positive "y"-axis) and 50° below the horizontal plane "z" = 0. Once these angles are known, there are four steps to plotting "P":
To plot other points, whose angles are not such round numbers as 60° and 50°, one must visually interpolate between the nearest grid lines. It is helpful to have a net with finer spacing than 10°. Spacings of 2° are common.
To find the central angle between two points on the sphere based on their stereographic plot, overlay the plot on a Wulff net and rotate the plot about the center until the two points lie on or near a meridian. Then measure the angle between them by counting grid lines along that meridian.
Applications within mathematics.
Complex analysis.
Although any stereographic projection misses one point on the sphere (the projection point), the entire sphere can be mapped using two projections from distinct projection points. In other words, the sphere can be covered by two stereographic parametrizations (the inverses of the projections) from the plane. The parametrizations can be chosen to induce the same orientation on the sphere. Together, they describe the sphere as an oriented surface (or two-dimensional manifold).
This construction has special significance in complex analysis. The point ("X", "Y") in the real plane can be identified with the complex number "ζ" = "X" + i"Y". The stereographic projection from the north pole onto the equatorial plane is then
formula_17
Similarly, letting "ξ"
"X" − i"Y" be another complex coordinate, the functions
formula_18
define a stereographic projection from the south pole onto the equatorial plane. The transition maps between the "ζ"- and "ξ"-coordinates are then "ζ" = 1/"ξ" and "ξ" = 1/"ζ", with "ζ" approaching 0 as "ξ" goes to infinity, and "vice versa". This facilitates an elegant and useful notion of infinity for the complex numbers and indeed an entire theory of meromorphic functions mapping to the Riemann sphere. The standard metric on the unit sphere agrees with the Fubini–Study metric on the Riemann sphere.
Visualization of lines and planes.
The set of all lines through the origin in three-dimensional space forms a space called the real projective plane. This plane is difficult to visualize, because it cannot be embedded in three-dimensional space.
However, one can visualize it as a disk, as follows. Any line through the origin intersects the southern hemisphere "z" ≤ 0 in a point, which can then be stereographically projected to a point on a disk in the XY plane. Horizontal lines through the origin intersect the southern hemisphere in two antipodal points along the equator, which project to the boundary of the disk. Either of the two projected points can be considered part of the disk; it is understood that antipodal points on the equator represent a single line in 3 space and a single point on the boundary of the projected disk (see quotient topology). So any set of lines through the origin can be pictured as a set of points in the projected disk. But the boundary points behave differently from the boundary points of an ordinary 2-dimensional disk, in that any one of them is simultaneously close to interior points on opposite sides of the disk (just as two nearly horizontal lines through the origin can project to points on opposite sides of the disk).
Also, every plane through the origin intersects the unit sphere in a great circle, called the "trace" of the plane. This circle maps to a circle under stereographic projection. So the projection lets us visualize planes as circular arcs in the disk. Prior to the availability of computers, stereographic projections with great circles often involved drawing large-radius arcs that required use of a beam compass. Computers now make this task much easier.
Further associated with each plane is a unique line, called the plane's "pole", that passes through the origin and is perpendicular to the plane. This line can be plotted as a point on the disk just as any line through the origin can. So the stereographic projection also lets us visualize planes as points in the disk. For plots involving many planes, plotting their poles produces a less-cluttered picture than plotting their traces.
This construction is used to visualize directional data in crystallography and geology, as described below.
Other visualization.
Stereographic projection is also applied to the visualization of polytopes. In a Schlegel diagram, an "n"-dimensional polytope in R"n"+1 is projected onto an "n"-dimensional sphere, which is then stereographically projected onto R"n". The reduction from R"n"+1 to R"n" can make the polytope easier to visualize and understand.
Arithmetic geometry.
In elementary arithmetic geometry, stereographic projection from the unit circle provides a means to describe all primitive Pythagorean triples. Specifically, stereographic projection from the north pole (0,1) onto the "x"-axis gives a one-to-one correspondence between the rational number points ("x", "y") on the unit circle (with "y" ≠ 1) and the rational points of the "x"-axis. If ("m"/"n", 0) is a rational point on the "x"-axis, then its inverse stereographic projection is the point
formula_19
which gives Euclid's formula for a Pythagorean triple.
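A few lines of Python illustrate the correspondence (the helper function here is ad hoc, written only for this example):

```python
from math import gcd

def triple(m, n):
    """Pythagorean triple from the rational point (m/n, 0) via the inverse projection above."""
    a, b, c = 2 * m * n, m * m - n * n, m * m + n * n   # legs and hypotenuse before reduction
    g = gcd(gcd(a, b), c)
    return a // g, b // g, c // g

for m, n in [(2, 1), (3, 2), (4, 1), (4, 3)]:
    a, b, c = triple(m, n)
    print((a, b, c), a * a + b * b == c * c)
# (4, 3, 5) True, (12, 5, 13) True, (8, 15, 17) True, (24, 7, 25) True
```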
Tangent half-angle substitution.
The pair of trigonometric functions (sin "x", cos "x") can be thought of as parametrizing the unit circle. The stereographic projection gives an alternative parametrization of the unit circle:
formula_20
Under this reparametrization, the length element "dx" of the unit circle goes over to
formula_21
This substitution can sometimes simplify integrals involving trigonometric functions.
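A small SymPy check (illustrative) shows the substitution collapsing the integrand of ∫ d"x"/(1 + cos "x") to a constant:

```python
import sympy as sp

t = sp.symbols('t')
# the parametrization above, with t playing the role of tan(x/2)
cos_x = (1 - t**2) / (1 + t**2)
dx = 2 / (1 + t**2)

integrand = sp.cancel((1 / (1 + cos_x)) * dx)   # 1/(1 + cos x) dx rewritten in terms of t
print(integrand)                                # 1
print(sp.integrate(integrand, t))               # t, i.e. tan(x/2) up to a constant
```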
Applications to other disciplines.
Cartography.
The fundamental problem of cartography is that no map from the sphere to the plane can accurately represent both angles and areas. In general, area-preserving map projections are preferred for statistical applications, while angle-preserving (conformal) map projections are preferred for navigation.
Stereographic projection falls into the second category. When the projection is centered at the Earth's north or south pole, it has additional desirable properties: It sends meridians to rays emanating from the origin and parallels to circles centered at the origin.
Planetary science.
The stereographic projection is the only projection that maps all circles on a sphere to circles on a plane. This property is valuable in planetary mapping where craters are typical features. The circles passing through the point of projection have unbounded radius, and therefore degenerate into lines.
Crystallography.
In crystallography, the orientations of crystal axes and faces in three-dimensional space are a central geometric concern, for example in the interpretation of X-ray and electron diffraction patterns. These orientations can be visualized as in the section Visualization of lines and planes above. That is, crystal axes and poles to crystal planes are intersected with the northern hemisphere and then plotted using stereographic projection. A plot of poles is called a pole figure.
In electron diffraction, Kikuchi line pairs appear as bands decorating the intersection between lattice plane traces and the Ewald sphere thus providing "experimental access" to a crystal's stereographic projection. Model Kikuchi maps in reciprocal space, and fringe visibility maps for use with bend contours in direct space, thus act as road maps for exploring orientation space with crystals in the transmission electron microscope.
Geology.
Researchers in structural geology are concerned with the orientations of planes and lines for a number of reasons. The foliation of a rock is a planar feature that often contains a linear feature called lineation. Similarly, a fault plane is a planar feature that may contain linear features such as slickensides.
These orientations of lines and planes at various scales can be plotted using the methods of the Visualization of lines and planes section above. As in crystallography, planes are typically plotted by their poles. Unlike crystallography, the southern hemisphere is used instead of the northern one (because the geological features in question lie below the Earth's surface). In this context the stereographic projection is often referred to as the equal-angle lower-hemisphere projection. The equal-area lower-hemisphere projection defined by the Lambert azimuthal equal-area projection is also used, especially when the plot is to be subjected to subsequent statistical analysis such as density contouring.
Rock mechanics.
The stereographic projection is one of the most widely used methods for evaluating rock slope stability. It allows for the representation and analysis of three-dimensional orientation data in two dimensions. Kinematic analysis within stereographic projection is used to assess the potential for various modes of rock slope failures—such as plane, wedge, and toppling failures—which occur due to the presence of unfavorably oriented discontinuities. This technique is particularly useful for visualizing the orientation of rock slopes in relation to discontinuity sets, facilitating the assessment of the most likely failure type. For instance, plane failure is more likely when the strike of a discontinuity set is parallel to the slope, and the discontinuities dip towards the slope at an angle steep enough to allow sliding, but not steeper than the slope itself.
Additionally, some authors have developed graphical methods based on stereographic projection to easily calculate geometrical correction parameters—such as those related to the parallelism between the slope and discontinuities, the dip of the discontinuity, and the relative angle between the discontinuity and the slope—for rock mass classifications in slopes, including slope mass rating (SMR) and rock mass rating.
Photography.
Some fisheye lenses use a stereographic projection to capture a wide-angle view. Compared to more traditional fisheye lenses which use an equal-area projection, areas close to the edge retain their shape, and straight lines are less curved. However, stereographic fisheye lenses are typically more expensive to manufacture. Image remapping software, such as Panotools, allows the automatic remapping of photos from an equal-area fisheye to a stereographic projection.
The stereographic projection has been used to map spherical panoramas, starting with Horace Bénédict de Saussure's in 1779. This results in effects known as a "little planet" (when the center of projection is the nadir) and a "tube" (when the center of projection is the zenith).
The popularity of using stereographic projections to map panoramas over other azimuthal projections is attributed to the shape preservation that results from the conformality of the projection.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}(X, Y) &= \\left(\\frac{x}{1 - z}, \\frac{y}{1 - z}\\right),\\\\\n(x, y, z) &= \\left(\\frac{2 X}{1 + X^2 + Y^2}, \\frac{2 Y}{1 + X^2 + Y^2}, \\frac{-1 + X^2 + Y^2}{1 + X^2 + Y^2}\\right).\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}(R, \\Theta) &= \\left(\\frac{\\sin \\varphi}{1 - \\cos \\varphi}, \\theta\\right) = \\left(\\cot\\frac{\\varphi}{2}, \\theta\\right),\\\\\n(\\varphi, \\theta) &= \\left(2 \\arctan \\frac{1}{R}, \\Theta\\right).\\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align}(R, \\Theta) &= \\left(\\frac{r}{1 - z}, \\theta\\right),\\\\\n(r, \\theta, z) &= \\left(\\frac{2 R}{1 + R^2}, \\Theta, \\frac{R^2 - 1}{R^2 + 1}\\right).\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}(x,y,z) \\rightarrow (\\xi, \\eta) &= \\left(\\frac{x}{\\frac{1}{2} - z}, \\frac{y}{\\frac{1}{2} - z}\\right),\\\\\n(\\xi, \\eta) \\rightarrow (x,y,z) &= \\left(\\frac{\\xi}{1 + \\xi^2 + \\eta^2}, \\frac{\\eta}{1 + \\xi^2 + \\eta^2}, \\frac{-1 + \\xi^2 + \\eta^2}{2 + 2\\xi^2 + 2\\eta^2}\\right).\\end{align}"
},
{
"math_id": 4,
"text": "X_i = \\frac{x_i}{1 - x_0} \\quad (i = 1, \\dots, n)."
},
{
"math_id": 5,
"text": "s^2=\\sum_{j=1}^n X_j^2 = \\frac{1+x_0}{1-x_0},"
},
{
"math_id": 6,
"text": "x_0 = \\frac{s^2-1}{s^2+1} \\quad \\text{and} \\quad x_i = \\frac{2 X_i}{s^2+1} \\quad (i = 1, \\dots, n)."
},
{
"math_id": 7,
"text": "P \\in \\mathbb Q^3 \\iff P' \\in \\mathbb Q^2"
},
{
"math_id": 8,
"text": "dA = \\frac{4}{(1 + X^2 + Y^2)^2} \\; dX \\; dY."
},
{
"math_id": 9,
"text": " \\frac{4}{(1 + X^2 + Y^2)^2} \\; ( dX^2 + dY^2),"
},
{
"math_id": 10,
"text": "x,y,z"
},
{
"math_id": 11,
"text": "X, Y, Z,"
},
{
"math_id": 12,
"text": "ax+by+cz-d=0"
},
{
"math_id": 13,
"text": "(c-d)(X^2+Y^2)"
},
{
"math_id": 14,
"text": "c=d,"
},
{
"math_id": 15,
"text": "R = e^{\\Theta/a},\\,"
},
{
"math_id": 16,
"text": " \\triangle NOP^\\prime \\sim \\triangle P^{\\prime\\prime}OS \\implies OP^\\prime:ON = OS : OP^{\\prime\\prime} \\implies OP^\\prime \\cdot OP^{\\prime\\prime} = r^2 "
},
{
"math_id": 17,
"text": "\\begin{align} \\zeta &= \\frac{x + i y}{1 - z},\\\\\n\\\\\n(x, y, z) &= \\left(\\frac{2 \\operatorname{Re} \\zeta}{1 + \\bar \\zeta \\zeta}, \\frac{2 \\operatorname{Im} \\zeta}{1 + \\bar \\zeta \\zeta}, \\frac{-1 + \\bar \\zeta \\zeta}{1 + \\bar \\zeta \\zeta}\\right).\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align} \\xi &= \\frac{x - i y}{1 + z},\\\\\n(x, y, z) &= \\left(\\frac{2 \\operatorname{Re} \\xi}{1 + \\bar \\xi \\xi}, \\frac{-2 \\operatorname{Im} \\xi}{1 + \\bar \\xi \\xi}, \\frac{1 - \\bar \\xi \\xi}{1 + \\bar \\xi \\xi}\\right)\\end{align}"
},
{
"math_id": 19,
"text": "\\left(\\frac{2mn}{m^2+n^2}, \\frac{m^2-n^2}{m^2+n^2}\\right)"
},
{
"math_id": 20,
"text": "\\cos x = \\frac{1 - t^2}{1 + t^2},\\quad \\sin x = \\frac{2 t}{t^2 + 1}."
},
{
"math_id": 21,
"text": "dx = \\frac{2 \\, dt}{t^2 + 1}."
}
] |
https://en.wikipedia.org/wiki?curid=143431
|
14343887
|
Precision and recall
|
Pattern-recognition performance metrics
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space.
Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula:
formula_0
Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Written as a formula:
formula_1
Both precision and recall are therefore based on relevance.
Consider a computer program for recognizing dogs (the relevant element) in a digital photograph. Upon processing a picture which contains ten cats and twelve dogs, the program identifies eight dogs. Of the eight elements identified as dogs, only five actually are dogs (true positives), while the other three are cats (false positives). Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements).
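The arithmetic of this example can be written out in a few lines of Python:

```python
tp, fp, fn, tn = 5, 3, 7, 7      # counts from the dog-recognition example above

precision = tp / (tp + fp)       # relevant retrieved instances / all retrieved instances
recall = tp / (tp + fn)          # relevant retrieved instances / all relevant instances
print(precision, recall)         # 0.625 (i.e. 5/8) and 0.4166... (i.e. 5/12)
```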
Adopting a hypothesis-testing approach, where in this case, the null hypothesis is that a given item is "irrelevant" (not a dog), absence of type I and type II errors (perfect specificity and sensitivity) corresponds respectively to perfect precision (no false positives) and perfect recall (no false negatives).
More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item.
The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors (false negatives), for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity.
Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned).
Introduction.
In a classification task, the precision for a class is the "number of true positives" (i.e. the number of items correctly labelled as belonging to the positive class) "divided by the total number of elements labelled as belonging to the positive class" (i.e. the sum of true positives and false positives, which are items incorrectly labelled as belonging to the class). Recall in this context is defined as the "number of true positives divided by the total number of elements that actually belong to the positive class" (i.e. the sum of true positives and false negatives, which are items which were not labelled as belonging to the positive class but should have been).
Precision and recall are not particularly useful metrics when used in isolation. For instance, it is possible to have perfect recall by simply retrieving every single item. Likewise, it is possible to achieve perfect precision by selecting only a very small number of extremely likely items.
In a classification task, a precision score of 1.0 for a class C means that every item labelled as belonging to class C does indeed belong to class C (but says nothing about the number of items from class C that were not labelled correctly) whereas a recall of 1.0 means that every item from class C was labelled as belonging to class C (but says nothing about how many items from other classes were incorrectly also labelled as belonging to class C).
Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other, but context may dictate if one is more valued in a given situation:
A smoke detector is generally designed to commit many Type I errors (to alert in many situations when there is no danger), because the cost of a Type II error (failing to sound an alarm during a major fire) is prohibitively high. As such, smoke detectors are designed with recall in mind (to catch all real danger), even while giving little weight to the losses in precision (and making many false alarms). In the other direction, Blackstone's ratio, "It is better that ten guilty persons escape than that one innocent suffer," emphasizes the costs of a Type I error (convicting an innocent person). As such, the criminal justice system is geared toward precision (not convicting innocents), even at the cost of losses in recall (letting more guilty people go free).
A brain surgeon removing a cancerous tumor from a patient's brain illustrates the tradeoffs as well: The surgeon needs to remove all of the tumor cells since any remaining cancer cells will regenerate the tumor. Conversely, the surgeon must not remove healthy brain cells since that would leave the patient with impaired brain function. The surgeon may be more liberal in the area of the brain they remove to ensure they have extracted all the cancer cells. This decision increases recall but reduces precision. On the other hand, the surgeon may be more conservative in the brain cells they remove to ensure they extract only cancer cells. This decision increases precision but reduces recall. That is to say, greater recall increases the chances of removing healthy cells (negative outcome) and increases the chances of removing all cancer cells (positive outcome). Greater precision decreases the chances of removing healthy cells (positive outcome) but also decreases the chances of removing all cancer cells (negative outcome).
Usually, precision and recall scores are not discussed in isolation. A "precision-recall curve" plots precision as a function of recall; usually precision will decrease as the recall increases. Alternatively, values for one measure can be compared for a fixed level at the other measure (e.g. "precision at a recall level of 0.75") or both are combined into a single measure. Examples of measures that are a combination of precision and recall are the F-measure (the weighted harmonic mean of precision and recall), or the Matthews correlation coefficient, which is a geometric mean of the chance-corrected variants: the regression coefficients Informedness (DeltaP') and Markedness (DeltaP). Accuracy is a weighted arithmetic mean of Precision and Inverse Precision (weighted by Bias) as well as a weighted arithmetic mean of Recall and Inverse Recall (weighted by Prevalence). Inverse Precision and Inverse Recall are simply the Precision and Recall of the inverse problem where positive and negative labels are exchanged (for both real classes and prediction labels). True Positive Rate and False Positive Rate, or equivalently Recall and 1 - Inverse Recall, are frequently plotted against each other as ROC curves and provide a principled mechanism to explore operating point tradeoffs. Outside of Information Retrieval, the application of Recall, Precision and F-measure are argued to be flawed as they ignore the true negative cell of the contingency table, and they are easily manipulated by biasing the predictions. The first problem is 'solved' by using Accuracy and the second problem is 'solved' by discounting the chance component and renormalizing to Cohen's kappa, but this no longer affords the opportunity to explore tradeoffs graphically. However, Informedness and Markedness are Kappa-like renormalizations of Recall and Precision, and their geometric mean Matthews correlation coefficient thus acts like a debiased F-measure.
Definition.
For classification tasks, the terms "true positives", "true negatives", "false positives", and "false negatives" compare the results of the classifier under test with trusted external judgments. The terms "positive" and "negative" refer to the classifier's prediction (sometimes known as the "expectation"), and the terms "true" and "false" refer to whether that prediction corresponds to the external judgment (sometimes known as the "observation").
Let us define an experiment from "P" positive instances and "N" negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:
                               Condition positive        Condition negative
Predicted condition positive   True positive (TP)        False positive (FP)
Predicted condition negative   False negative (FN)       True negative (TN)
Precision and recall are then defined as:
formula_2
Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity.
formula_3
Precision vs. Recall.
Both precision and recall may be useful in cases where there is imbalanced data. However, it may be valuable to prioritize one over the other in cases where the outcome of a false positive or false negative is costly. For example, in medical diagnosis, a false positive test can lead to unnecessary treatment and expenses. In this situation, it is useful to value precision over recall. In other cases, the cost of a false negative is high. For instance, the cost of a false negative in fraud detection is high, as failing to detect a fraudulent transaction can result in significant financial loss.
Probabilistic Definition.
Precision and recall can be interpreted as (estimated) conditional probabilities:
Precision is given by formula_4 while recall is given by formula_5, where formula_6 is the predicted class and formula_7 is the actual class (i.e. formula_8 means the actual class is positive). Both quantities are, therefore, connected by Bayes' theorem.
No-Skill Classifiers.
The probabilistic interpretation makes it easy to derive how a no-skill classifier would perform. A no-skill classifier is defined by the property that the joint probability formula_9 is just the product of the unconditional probabilities, since the classification and the presence of the class are independent.
For example, the precision of a no-skill classifier is simply a constant formula_10 i.e. it is determined by the probability/frequency with which the class P occurs.
A similar argument can be made for the recall:
formula_11 which is the probability for a positive classification.
Imbalanced data.
formula_12
Accuracy can be a misleading metric for imbalanced data sets. Consider a sample with 95 negative and 5 positive values. Classifying all values as negative in this case gives 0.95 accuracy score. There are many metrics that don't suffer from this problem. For example, balanced accuracy (bACC) normalizes true positive and true negative predictions by the number of positive and negative samples, respectively, and divides their sum by two:
formula_13
For the previous example (95 negative and 5 positive samples), classifying all as negative gives 0.5 balanced accuracy score (the maximum bACC score is one), which is equivalent to the expected value of a random guess in a balanced data set. Balanced accuracy can serve as an overall performance metric for a model, whether or not the true labels are imbalanced in the data, assuming the cost of FN is the same as FP.
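The following sketch reproduces this imbalanced example (95 negatives, 5 positives, everything classified as negative) and contrasts accuracy with balanced accuracy:

```python
tp, fn, fp, tn = 0, 5, 0, 95               # classifier that predicts "negative" for every sample

accuracy = (tp + tn) / (tp + tn + fp + fn)
tpr = tp / (tp + fn)                       # true positive rate (recall)
tnr = tn / (tn + fp)                       # true negative rate (specificity)
balanced_accuracy = (tpr + tnr) / 2
print(accuracy, balanced_accuracy)         # 0.95 versus 0.5
```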
The TPR and FPR are a property of a given classifier operating at a specific threshold. However, the overall number of TPs, FPs "etc" depend on the class imbalance in the data via the class ratio formula_14. As the recall (or TPR) depends only on positive cases, it is not affected by formula_15, but the precision is. We have that
formula_16
Thus the precision has an explicit dependence on formula_15. Starting with balanced classes at formula_17 and gradually decreasing formula_15, the corresponding precision will decrease, because the denominator increases.
Another metric is the predicted positive condition rate (PPCR), which identifies the percentage of the total population that is flagged. For example, for a search engine that returns 30 results (retrieved documents) out of 1,000,000 documents, the PPCR is 0.003%.
formula_18
According to Saito and Rehmsmeier, precision-recall plots are more informative than ROC plots when evaluating binary classifiers on imbalanced data. In such scenarios, ROC plots may be visually deceptive with respect to conclusions about the reliability of classification performance.
Different from the above approaches, if an imbalance scaling is applied directly by weighting the confusion matrix elements, the standard metrics definitions still apply even in the case of imbalanced datasets. The weighting procedure relates the confusion matrix elements to the support set of each considered class.
F-measure.
A measure that combines precision and recall is the harmonic mean of precision and recall, the traditional F-measure or balanced F-score:
formula_19
This measure is approximately the average of the two when they are close, and is more generally the harmonic mean, which, for the case of two numbers, coincides with the square of the geometric mean divided by the arithmetic mean. There are several reasons that the F-score can be criticized, in particular circumstances, due to its bias as an evaluation metric. This is also known as the formula_20 measure, because recall and precision are evenly weighted.
It is a special case of the general formula_21 measure (for non-negative real values of formula_22):
formula_23
Two other commonly used formula_24 measures are the formula_25 measure, which weights recall higher than precision, and the formula_26 measure, which puts more emphasis on precision than recall.
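A short sketch computing F1, F2 and F0.5 from a given precision and recall with the general formula above (the precision and recall values are taken from the earlier dog-recognition example):

```python
def f_beta(precision, recall, beta=1.0):
    """General F-measure; beta > 1 weights recall more heavily, beta < 1 weights precision more."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 5 / 8, 5 / 12                      # precision and recall from the earlier example
print(round(f_beta(p, r, 1.0), 3))        # F1 = 0.5
print(round(f_beta(p, r, 2.0), 3))        # F2 leans toward recall
print(round(f_beta(p, r, 0.5), 3))        # F0.5 leans toward precision
```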
The F-measure was derived by van Rijsbergen (1979) so that formula_21 "measures the effectiveness of retrieval with respect to a user who attaches formula_22 times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure formula_27, the second term being the weighted harmonic mean of precision and recall with weights formula_28. Their relationship is formula_29 where formula_30.
Limitations as goals.
There are other parameters and strategies for measuring the performance of an information retrieval system, such as the area under the ROC curve (AUC) or pseudo-R-squared.
Multi-class evaluation.
Precision and recall values can also be calculated for classification problems with more than two classes. To obtain the precision for a given class, we divide the number of true positives by the classifier bias towards this class (number of times that the classifier has predicted the class). To calculate the recall for a given class, we divide the number of true positives by the prevalence of this class (number of times that the class occurs in the data sample).
The class-wise precision and recall values can then be combined into an overall multi-class evaluation score, e.g., using the macro F1 metric.
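As an illustration, the following sketch computes class-wise precision and recall and the macro F1 score from a hypothetical 3×3 confusion matrix (the counts are invented for the example):

```python
import numpy as np

# rows = actual class, columns = predicted class (hypothetical counts)
cm = np.array([[10, 2, 1],
               [3, 12, 0],
               [1, 2, 9]])

precisions, recalls, f1s = [], [], []
for k in range(cm.shape[0]):
    tp = cm[k, k]
    precision = tp / cm[:, k].sum()   # true positives / classifier bias towards class k
    recall = tp / cm[k, :].sum()      # true positives / prevalence of class k
    precisions.append(precision)
    recalls.append(recall)
    f1s.append(2 * precision * recall / (precision + recall))

print(np.round(precisions, 3), np.round(recalls, 3))
print(round(float(np.mean(f1s)), 3))  # macro F1: unweighted mean of the per-class F1 scores
```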
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\n \\text{Precision} = \\frac{\\text{Relevant retrieved instances}}{\\text{All retrieved instances}}\n"
},
{
"math_id": 1,
"text": "\n \\text{Recall} = \\frac{\\text{Relevant retrieved instances}}{\\text{All relevant instances}}\n"
},
{
"math_id": 2,
"text": "\\begin{align}\n \\text{Precision} &= \\frac{tp}{tp + fp} \\\\\n \\text{Recall} &= \\frac{tp}{tp + fn} \\,\n\\end{align}"
},
{
"math_id": 3,
"text": "\\text{True negative rate} = \\frac{tn}{tn + fp} \\, "
},
{
"math_id": 4,
"text": "\\mathbb{P}(C=P|\\hat{C}=P)"
},
{
"math_id": 5,
"text": "\\mathbb{P}(\\hat{C}=P|C=P)"
},
{
"math_id": 6,
"text": "\\hat{C}"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "C=P"
},
{
"math_id": 9,
"text": "\\mathbb{P}(C=P,\\hat{C}=P)=\\mathbb{P}(C=P)\\mathbb{P}(\\hat{C}=P)"
},
{
"math_id": 10,
"text": "\\mathbb{P}(C=P|\\hat{C}=P)=\\frac{\\mathbb{P}(C=P,\\hat{C}=P)}{\\mathbb{P}(\\hat{C}=P)}=\\mathbb{P}(C=P),"
},
{
"math_id": 11,
"text": "\\mathbb{P}(\\hat{C}=P|C=P)=\\frac{\\mathbb{P}(C=P,\\hat{C}=P)}{\\mathbb{P}(C=P)}=\\mathbb{P}(\\hat{C}=P)"
},
{
"math_id": 12,
"text": "\\text{Accuracy}=\\frac{TP+TN}{TP+TN+FP+FN} \\, "
},
{
"math_id": 13,
"text": "\\text{Balanced accuracy}= \\frac{TPR + TNR}{2}\\, "
},
{
"math_id": 14,
"text": "r = P/N"
},
{
"math_id": 15,
"text": "r"
},
{
"math_id": 16,
"text": "\\text{Precision} = \\frac{TP}{TP+FP} = \\frac{P \\cdot TPR}{P \\cdot TPR+ N \\cdot FPR} = \\frac{TPR}{TPR+ \\frac{1}{r} FPR}."
},
{
"math_id": 17,
"text": "r =1"
},
{
"math_id": 18,
"text": "\\text{Predicted positive condition rate}=\\frac{TP+FP}{TP+FP+TN+FN} \\, "
},
{
"math_id": 19,
"text": "F = 2 \\cdot \\frac{\\mathrm{precision} \\cdot \\mathrm{recall}}{ \\mathrm{precision} + \\mathrm{recall}}"
},
{
"math_id": 20,
"text": "F_1"
},
{
"math_id": 21,
"text": "F_\\beta"
},
{
"math_id": 22,
"text": "\\beta"
},
{
"math_id": 23,
"text": "F_\\beta = (1 + \\beta^2) \\cdot \\frac{\\mathrm{precision} \\cdot \\mathrm{recall} }{ \\beta^2 \\cdot \\mathrm{precision} + \\mathrm{recall}}"
},
{
"math_id": 24,
"text": "F"
},
{
"math_id": 25,
"text": "F_2"
},
{
"math_id": 26,
"text": "F_{0.5}"
},
{
"math_id": 27,
"text": "E_{\\alpha} = 1 - \\frac{1}{\\frac{\\alpha}{P} + \\frac{1-\\alpha}{R}}"
},
{
"math_id": 28,
"text": "(\\alpha, 1-\\alpha)"
},
{
"math_id": 29,
"text": "F_\\beta = 1 - E_{\\alpha}"
},
{
"math_id": 30,
"text": "\\alpha=\\frac{1}{1 + \\beta^2}"
}
] |
https://en.wikipedia.org/wiki?curid=14343887
|
14344439
|
One-way analysis of variance
|
Statistical test
In statistics, one-way analysis of variance (or one-way ANOVA) is a technique to compare whether two or more samples' means are significantly different (using the F distribution). This analysis of variance technique requires a numeric response variable "Y" and a single explanatory variable "X", hence "one-way".
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. These estimates rely on various assumptions (see below). The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and "t" is given by "F" = "t"². An extension of one-way ANOVA is two-way analysis of variance that examines the influence of two different categorical independent variables on one dependent variable.
Assumptions.
The results of a one-way ANOVA can be considered reliable as long as the following assumptions are met:
The response variable residuals are normally distributed (or approximately normally distributed).
The variances of the populations are equal (homoscedasticity).
The responses for a given group are independent.
If data are ordinal, a non-parametric alternative to this test should be used such as Kruskal–Wallis one-way analysis of variance. If the variances are not known to be equal, a generalization of 2-sample Welch's t-test can be used.
Departures from population normality.
ANOVA is a relatively robust procedure with respect to violations of the normality assumption.
The one-way ANOVA can be generalized to the factorial and multivariate layouts, as well as to the analysis of covariance.
It is often stated in popular literature that none of these "F"-tests are robust when there are severe violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts. Furthermore, it is also claimed that if the underlying assumption of homoscedasticity is violated, the Type I error properties degenerate much more severely.
However, this is a misconception, based on work done in the 1950s and earlier. The first comprehensive investigation of the issue by Monte Carlo simulation was Donaldson (1966). He showed that under the usual departures (positive skew, unequal variances) "the "F"-test is conservative", and so it is less likely than it should be to find that a variable is significant. However, as either the sample size or the number of cells increases, "the power curves seem to converge to that based on the normal distribution". Tiku (1971) found that "the non-normal theory power of "F" is found to differ from the normal theory power by a correction term which decreases sharply with increasing sample size." The problem of non-normality, especially in large samples, is far less serious than popular articles would suggest.
The current view is that "Monte-Carlo studies were used extensively with normal distribution-based tests to determine how sensitive they are to violations of the assumption of normal distribution of the analyzed variables in the population. The general conclusion from these studies is that the consequences of such violations are less severe than previously thought. Although these conclusions should not entirely discourage anyone from being concerned about the normality assumption, they have increased the overall popularity of the distribution-dependent statistical tests in all areas of research."
For nonparametric alternatives in the factorial layout, see Sawilowsky. For more discussion see ANOVA on ranks.
The case of fixed effects, fully randomized experiment, unbalanced data.
The model.
The normal linear model describes treatment groups with probability
distributions which are identically bell-shaped (normal) curves with
different means. Thus fitting the models requires only the means of
each treatment group and a variance calculation (an average variance
within the treatment groups is used). Calculations of the means and
the variance are performed as part of the hypothesis test.
The commonly used normal linear models for a completely
randomized experiment are:
formula_0 (the means model)
or
formula_1 (the effects model)
where
formula_2 is an index over experimental units
formula_3 is an index over treatment groups
formula_4 is the number of experimental units in the jth treatment group
formula_5 is the total number of experimental units
formula_6 are observations
formula_7 is the mean of the observations for the jth treatment group
formula_8 is the grand mean of the observations
formula_9 is the jth treatment effect, a deviation from the grand mean
formula_10
formula_11
formula_12, formula_13 are normally distributed zero-mean random errors.
The index formula_14 over the experimental units can be interpreted several
ways. In some experiments, the same experimental unit is subject to
a range of treatments; formula_14 may point to a particular unit. In others,
each treatment group has a distinct set of experimental units; formula_14 may
simply be an index into the formula_15-th list.
The data and statistical summaries of the data.
One form of organizing experimental observations formula_16
is with groups in columns:
Comparing model to summaries: formula_17 and formula_18. The grand mean and grand variance are computed from the grand sums,
not from group means and variances.
The hypothesis test.
Given the summary statistics, the calculations of the hypothesis test
are shown in tabular form. While two columns of SS are shown for their
explanatory value, only one column is required to display results.
formula_19 is the
estimate of variance corresponding to formula_20 of the
model.
Analysis summary.
The core ANOVA analysis consists of a series of calculations. The
data is collected in tabular form. Then
If the experiment is balanced, all of the formula_4 terms are
equal so the SS equations simplify.
In a more complex experiment, where the experimental units (or
environmental effects) are not homogeneous, row statistics are also
used in the analysis. The model includes terms dependent on
formula_14. Determining the extra terms reduces the number of
degrees of freedom available.
Example.
Consider an experiment to study the effect of three different levels of a factor on a response (e.g. three levels of a fertilizer on plant growth). If we had 6 observations for each level, we could write the outcome of the experiment in a table like this, where "a"1, "a"2, and "a"3 are the three levels of the factor being studied.
"a"1: 6, 8, 4, 5, 3, 4
"a"2: 8, 12, 9, 11, 6, 8
"a"3: 13, 9, 11, 8, 7, 12
The null hypothesis, denoted H0, for the overall "F"-test for this experiment would be that all three levels of the factor produce the same response, on average. To calculate the "F"-ratio:
Step 1: Calculate the mean within each group:
formula_21
Step 2: Calculate the overall mean:
formula_22
where "a" is the number of groups.
Step 3: Calculate the "between-group" sum of squared differences:
formula_23
where "n" is the number of data values per group.
The between-group degrees of freedom is one less than the number of groups
formula_24
so the between-group mean square value is
formula_25
Step 4: Calculate the "within-group" sum of squares. Begin by centering the data in each group
The within-group sum of squares is the sum of squares of all 18 values in this table
formula_26
The within-group degrees of freedom is
formula_27
Thus the within-group mean square value is
formula_28
Step 5: The "F"-ratio is
formula_29
The critical value is the number that the test statistic must exceed in order to reject the null hypothesis. In this case, "F"crit(2,15) = 3.68 at "α" = 0.05. Since "F" = 9.3 > 3.68, the results are significant at the 5% significance level. One would therefore reject the null hypothesis, concluding that there is strong evidence that the expected values in the three groups differ. The p-value for this test is 0.002.
After performing the "F"-test, it is common to carry out some "post-hoc" analysis of the group means. In this case, the first two group means differ by 4 units, the first and third group means differ by 5 units, and the second and third group means differ by only 1 unit. The standard error of each of these differences is formula_30. Thus the first group is strongly different from the other groups, as the mean difference is more than 3 times the standard error, so we can be highly confident that the population mean of the first group differs from the population means of the other groups. However, there is no evidence that the second and third groups have different population means from each other, as their mean difference of one unit is comparable to the standard error.
Note "F"("x", "y") denotes an "F"-distribution cumulative distribution function with "x" degrees of freedom in the numerator and "y" degrees of freedom in the denominator.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "y_{i,j}=\\mu_j+\\varepsilon_{i,j}"
},
{
"math_id": 1,
"text": "y_{i,j}=\\mu+\\tau_j+\\varepsilon_{i,j}"
},
{
"math_id": 2,
"text": "i=1,\\dotsc,I"
},
{
"math_id": 3,
"text": "j=1,\\dotsc,J"
},
{
"math_id": 4,
"text": "I_j"
},
{
"math_id": 5,
"text": "I = \\sum_j I_j"
},
{
"math_id": 6,
"text": "y_{i,j}"
},
{
"math_id": 7,
"text": "\\mu_j"
},
{
"math_id": 8,
"text": "\\mu"
},
{
"math_id": 9,
"text": "\\tau_j"
},
{
"math_id": 10,
"text": "\\sum\\tau_j=0"
},
{
"math_id": 11,
"text": "\\mu_j=\\mu+\\tau_j"
},
{
"math_id": 12,
"text": "\\varepsilon \\thicksim N(0, \\sigma^2)"
},
{
"math_id": 13,
"text": "\\varepsilon_{i,j}"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "j"
},
{
"math_id": 16,
"text": "y_{ij}"
},
{
"math_id": 17,
"text": "\\mu = m"
},
{
"math_id": 18,
"text": "\\mu_j = m_j"
},
{
"math_id": 19,
"text": "MS_{Error}"
},
{
"math_id": 20,
"text": "\\sigma^2"
},
{
"math_id": 21,
"text": "\n\\begin{align}\n\\overline{Y}_1 & = \\frac{1}{6}\\sum Y_{1i} = \\frac{6 + 8 + 4 + 5 + 3 + 4}{6} = 5 \\\\\n\\overline{Y}_2 & = \\frac{1}{6}\\sum Y_{2i} = \\frac{8 + 12 + 9 + 11 + 6 + 8}{6} = 9 \\\\\n\\overline{Y}_3 & = \\frac{1}{6}\\sum Y_{3i} = \\frac{13 + 9 + 11 + 8 + 7 + 12}{6} = 10\n\\end{align}\n"
},
{
"math_id": 22,
"text": "\\overline{Y} = \\frac{\\sum_i \\overline{Y}_i}{a} = \\frac{\\overline{Y}_1 + \\overline{Y}_2 + \\overline{Y}_3}{a} = \\frac{5 + 9 + 10}{3} = 8"
},
{
"math_id": 23,
"text": "\n\\begin{align}\nS_B & = n(\\overline{Y}_1-\\overline{Y})^2 + n(\\overline{Y}_2-\\overline{Y})^2 + n(\\overline{Y}_3-\\overline{Y})^2 \\\\[8pt]\n& = 6(5-8)^2 + 6(9-8)^2 + 6(10-8)^2 = 84\n\\end{align}\n"
},
{
"math_id": 24,
"text": "f_b = 3-1 = 2"
},
{
"math_id": 25,
"text": "MS_B = 84/2 = 42"
},
{
"math_id": 26,
"text": "\n\\begin{align}\nS_W =& (1)^2 + (3)^2+ (-1)^2+(0)^2+(-2)^2+(-1)^2+ \\\\\n&(-1)^2+(3)^2+(0)^2+(2)^2+(-3)^2+(-1)^2+ \\\\\n&(3)^2+(-1)^2+(1)^2+(-2)^2+(-3)^2+(2)^2 \\\\\n=&\\ 1 + 9 + 1 + 0 + 4 + 1 + 1 + 9 + 0 + 4 + 9 + 1 + 9 + 1 + 1 + 4 + 9 + 4\\\\\n=&\\ 68 \\\\\n\\end{align}\n"
},
{
"math_id": 27,
"text": "f_W = a(n-1) = 3(6-1) = 15"
},
{
"math_id": 28,
"text": "MS_W = S_W/f_W = 68/15 \\approx 4.5"
},
{
"math_id": 29,
"text": "F = \\frac{MS_B}{MS_W} \\approx 42/4.5 \\approx 9.3"
},
{
"math_id": 30,
"text": "\\sqrt{4.5/6 + 4.5/6} = 1.2"
}
] |
https://en.wikipedia.org/wiki?curid=14344439
|
1434444
|
Autoregressive model
|
Representation of a type of random process
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation) which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
Contrary to the moving-average (MA) model, the autoregressive model is not always stationary as it may contain a unit root.
Large language models are called autoregressive, but they are not classical autoregressive models in this sense because they are not linear.
Definition.
The notation formula_0 indicates an autoregressive model of order "p". The AR("p") model is defined as
formula_1
where formula_2 are the "parameters" of the model, and formula_3 is white noise. This can be equivalently written using the backshift operator "B" as
formula_4
so that, moving the summation term to the left side and using polynomial notation, we have
formula_5
An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.
Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with formula_6 are not stationary. More generally, for an AR("p") model to be weak-sense stationary, the roots of the polynomial formula_7 must lie outside the unit circle, i.e., each (complex) root formula_8 must satisfy formula_9 (see pages 89,92 ).
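As an illustration, the following Python sketch (parameter values chosen arbitrarily) simulates an AR(2) process and checks the stationarity condition by computing the roots of the polynomial formula_7:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = [0.5, -0.3]                         # AR(2) coefficients phi_1, phi_2 (illustrative values)
n, p = 1000, len(phi)

# weak-sense stationarity: the roots of 1 - phi_1 z - phi_2 z^2 must lie outside the unit circle
roots = np.roots([-phi[1], -phi[0], 1])   # coefficients of -phi_2 z^2 - phi_1 z + 1, highest power first
print(np.all(np.abs(roots) > 1))          # True for this choice of parameters

x = np.zeros(n)
eps = rng.normal(size=n)                  # white noise
for t in range(p, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]
```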
Intertemporal effect of shocks.
In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model formula_10. A non-zero value for formula_3 at say time "t"=1 affects formula_11 by the amount formula_12. Then by the AR equation for formula_13 in terms of formula_11, this affects formula_13 by the amount formula_14. Then by the AR equation for formula_15 in terms of formula_13, this affects formula_15 by the amount formula_16. Continuing this process shows that the effect of formula_12 never ends, although if the process is stationary then the effect diminishes toward zero in the limit.
Because each shock affects "X" values infinitely far into the future from when they occur, any given value "X""t" is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression
formula_17
(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as
formula_18
When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to formula_3 has an infinite order—that is, an infinite number of lagged values of formula_3 appear on the right side of the equation.
Characteristic polynomial.
The autocorrelation function of an AR("p") process can be expressed as
formula_19
where formula_20 are the roots of the polynomial
formula_21
where "B" is the backshift operator, where formula_22 is the function defining the autoregression, and where formula_23 are the coefficients in the autoregression. The formula is valid only if all the roots have multiplicity 1.
The autocorrelation function of an AR("p") process is a sum of decaying exponentials.
Graphs of AR("p") processes.
The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise.
For an AR(1) process with a positive formula_24, only the previous term in the process and the noise term contribute to the output. If formula_24 is close to 0, then the process still looks like white noise, but as formula_24 approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a low pass filter.
For an AR(2) process, the previous two terms and the noise term contribute to the output. If both formula_25 and formula_26 are positive, the output will resemble a low pass filter, with the high frequency part of the noise decreased. If formula_25 is positive while formula_26 is negative, then the process favors changes in sign between terms of the process. The output oscillates. This can be likened to edge detection or detection of change in direction.
Example: An AR(1) process.
An AR(1) process is given by:formula_27where formula_3 is a white noise process with zero mean and constant variance formula_28.
(Note: The subscript on formula_25 has been dropped.) The process is weak-sense stationary if formula_29 since it is obtained as the output of a stable filter whose input is white noise. (If formula_30 then the variance of formula_31 depends on time lag t, so that the variance of the series diverges to infinity as t goes to infinity, and is therefore not weak sense stationary.) Assuming formula_29, the mean formula_32 is identical for all values of "t" by the very definition of weak sense stationarity. If the mean is denoted by formula_33, it follows fromformula_34thatformula_35and hence
formula_36
The variance is
formula_37
where formula_38 is the standard deviation of formula_3. This can be shown by noting that
formula_39
and then by noticing that the quantity above is a stable fixed point of this relation.
The autocovariance is given by
formula_40
It can be seen that the autocovariance function decays with a decay time (also called time constant) of formula_41.
The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:
formula_42
This expression is periodic due to the discrete nature of the formula_43, which is manifested as the cosine term in the denominator. If we assume that the sampling time (formula_44) is much smaller than the decay time (formula_45), then we can use a continuum approximation to formula_46:
formula_47
which yields a Lorentzian profile for the spectral density:
formula_48
where formula_49 is the angular frequency associated with the decay time formula_45.
An alternative expression for formula_31 can be derived by first substituting formula_50 for formula_51 in the defining equation. Continuing this process "N" times yields
formula_52
For "N" approaching infinity, formula_53 will approach zero and:
formula_54
It is seen that formula_31 is white noise convolved with the formula_55 kernel plus the constant mean. If the white noise formula_3 is a Gaussian process then formula_31 is also a Gaussian process. In other cases, the central limit theorem indicates that formula_31 will be approximately normally distributed when formula_24 is close to one.
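These properties can be checked by simulation; the sketch below (parameter values are illustrative) compares the sample variance and one sample autocovariance of a long simulated AR(1) series with the theoretical expressions above:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, sigma_eps, n = 0.9, 1.0, 100_000

x = np.zeros(n)
eps = rng.normal(scale=sigma_eps, size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

var_theory = sigma_eps**2 / (1 - phi**2)               # sigma_eps^2 / (1 - phi^2) = 5.26...
print(round(float(x.var()), 2), round(var_theory, 2))  # the sample variance should be close to 5.26

lag = 3
autocov = float(np.mean((x[:-lag] - x.mean()) * (x[lag:] - x.mean())))
print(round(autocov, 2), round(var_theory * phi**lag, 2))  # both close to 5.26 * 0.9^3 = 3.84
```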
For formula_56, the process formula_57 will be a geometric progression ("exponential" growth or decay). In this case, the solution can be found analytically: formula_58 whereby formula_59 is an unknown constant (initial condition).
Explicit mean/difference form of AR(1) process.
The AR(1) model is the discrete-time analogy of the continuous Ornstein-Uhlenbeck process. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model, with process parameter formula_60, is given by
formula_61, where formula_62, formula_63 is the model mean, and formula_64 is a white-noise process with zero mean and constant variance formula_65.
By rewriting this as formula_66 and then deriving (by induction) formula_67, one can show that
formula_68 and
formula_69.
Choosing the maximum lag.
The partial autocorrelation of an AR(p) process equals zero at lags larger than p, so the appropriate maximum lag p is the one after which the partial autocorrelations are all zero.
Calculation of the AR parameters.
There are many ways to estimate the coefficients, such as the ordinary least squares procedure or method of moments (through Yule–Walker equations).
The AR("p") model is given by the equation
formula_70
It is based on parameters formula_71 where "i" = 1, ..., "p". There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.
Yule–Walker equations.
The Yule–Walker equations, named for Udny Yule and Gilbert Walker, are the following set of equations.
formula_72
where "m" = 0, …, "p", yielding "p" + 1 equations. Here formula_73 is the autocovariance function of Xt, formula_38 is the standard deviation of the input noise process, and formula_74 is the Kronecker delta function.
Because the last part of an individual equation is non-zero only if "m" = 0, the set of equations can be solved by representing the equations for "m" > 0 in matrix form, thus getting the equation
formula_75
which can be solved for all formula_76 The remaining equation for "m" = 0 is
formula_77
which, once formula_78 are known, can be solved for formula_79
An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first "p"+1 elements formula_80 of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating
formula_81
Examples for some Low-order AR("p") processes
Estimation of AR parameters.
The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR("p") model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows:
formula_89
Here predicted values of "X""t" would be based on the "p" future values of the same series. This way of estimating the AR parameters is due to John Parker Burg, and is called the Burg method: Burg and later authors called these particular estimates "maximum entropy estimates", but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with maximum entropy spectral estimation.
Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial "p" values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
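As a concrete illustration of the Yule–Walker route, the following sketch estimates the parameters of an AR("p") model from sample autocovariances (a minimal implementation for illustration; statistical packages provide more refined estimators):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, p):
    """Estimate AR(p) coefficients and noise variance from sample autocovariances."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    phi = solve_toeplitz(gamma[:p], gamma[1:p + 1])   # Toeplitz system for m = 1, ..., p
    sigma2 = gamma[0] - np.dot(phi, gamma[1:p + 1])   # noise variance from the m = 0 equation
    return phi, sigma2

# quick check on a simulated AR(2) series with known coefficients
rng = np.random.default_rng(2)
true_phi = np.array([0.5, -0.3])
x = np.zeros(5000)
eps = rng.normal(size=5000)
for t in range(2, 5000):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + eps[t]
print(yule_walker(x, 2))   # estimates should be close to (0.5, -0.3) and 1.0
```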
Spectrum.
The power spectral density (PSD) of an AR("p") process with noise variance formula_90 is
formula_91
AR(0).
For white noise (AR(0))
formula_92
AR(1).
For AR(1)
formula_93
AR(2).
The behavior of an AR(2) process is determined entirely by the roots of its characteristic equation, which is expressed in terms of the lag operator as:
formula_97
or equivalently by the poles of its transfer function, which is defined in the Z domain by:
formula_98
It follows that the poles are values of z satisfying:
formula_99,
which yields:
formula_100.
formula_101 and formula_102 are the reciprocals of the characteristic roots, as well as the eigenvalues of the temporal update matrix:
formula_103
AR(2) processes can be split into three groups depending on the characteristics of their roots/poles:
formula_105
with bandwidth about the peak inversely proportional to the moduli of the poles:
formula_106
The terms involving square roots are all real in the case of complex poles since they exist only when formula_107.
Otherwise the process has real roots, and:
The process is non-stationary when the poles are on or outside the unit circle, or equivalently when the characteristic roots are on or inside the unit circle.
The process is stable when the poles are strictly within the unit circle (roots strictly outside the unit circle), or equivalently when the coefficients are in the triangle formula_110.
The full PSD function can be expressed in real form as:
formula_111
Impulse response.
The impulse response of a system is the change in an evolving variable in response to a change in the value of a shock term "k" periods earlier, as a function of "k". Since the AR model is a special case of the vector autoregressive model, the computation of the impulse response described for vector autoregressions applies here.
"n"-step-ahead forecasting.
Once the parameters of the autoregression
formula_112
have been estimated, the autoregression can be used to forecast an arbitrary number of periods into the future. First use "t" to refer to the first period for which data is not yet available; substitute the known preceding values "X""t-i" for "i="1, ..., "p" into the autoregressive equation while setting the error term formula_3 equal to zero (because we forecast "X""t" to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use "t" to refer to the "next" period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of "X" one period prior to the one now being forecast is not known, so its expected value—the predicted value arising from the previous forecasting step—is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after "p" predictions, all "p" right-side values are predicted values from preceding steps.
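The recursion described above is straightforward to implement; the sketch below (coefficients and data are purely illustrative) produces "n"-step-ahead forecasts for an AR(2) model by feeding each forecast back into the right side of the equation with the error term set to zero:

```python
import numpy as np

phi = [0.5, -0.3]                       # estimated AR(2) coefficients (illustrative values)
history = [0.8, 1.1, 0.2, -0.4, 0.6]    # observed series, oldest value first

def forecast(history, phi, steps):
    """Recursive n-step-ahead forecasts: unknown future values are replaced by their own forecasts."""
    values = list(history)
    out = []
    for _ in range(steps):
        nxt = sum(phi[i] * values[-1 - i] for i in range(len(phi)))   # error term set to zero
        values.append(nxt)
        out.append(nxt)
    return out

print(np.round(forecast(history, phi, 5), 3))
```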
There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term formula_113 for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the "n"-step-ahead predictions; the confidence interval will become wider as "n" increases because of the use of an increasing number of estimated values for the right-side variables.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "AR(p)"
},
{
"math_id": 1,
"text": " X_t = \\sum_{i=1}^p \\varphi_i X_{t-i} + \\varepsilon_t"
},
{
"math_id": 2,
"text": "\\varphi_1, \\ldots, \\varphi_p"
},
{
"math_id": 3,
"text": "\\varepsilon_t"
},
{
"math_id": 4,
"text": " X_t = \\sum_{i=1}^p \\varphi_i B^i X_{t} + \\varepsilon_t "
},
{
"math_id": 5,
"text": "\\phi [B]X_t= \\varepsilon_t"
},
{
"math_id": 6,
"text": "|\\varphi_1 | \\geq 1"
},
{
"math_id": 7,
"text": "\\Phi(z):=\\textstyle 1 - \\sum_{i=1}^p \\varphi_i z^{i}"
},
{
"math_id": 8,
"text": "z_i"
},
{
"math_id": 9,
"text": "|z_i |>1"
},
{
"math_id": 10,
"text": " X_t = \\varphi_1 X_{t-1} + \\varepsilon_t"
},
{
"math_id": 11,
"text": "X_1"
},
{
"math_id": 12,
"text": "\\varepsilon_1"
},
{
"math_id": 13,
"text": "X_2"
},
{
"math_id": 14,
"text": "\\varphi_1 \\varepsilon_1"
},
{
"math_id": 15,
"text": "X_3"
},
{
"math_id": 16,
"text": "\\varphi_1^2 \\varepsilon_1"
},
{
"math_id": 17,
"text": "\\phi (B)X_t= \\varepsilon_t \\,"
},
{
"math_id": 18,
"text": "X_t= \\frac{1}{\\phi (B)}\\varepsilon_t \\, ."
},
{
"math_id": 19,
"text": "\\rho(\\tau) = \\sum_{k=1}^p a_k y_k^{-|\\tau|} ,"
},
{
"math_id": 20,
"text": "y_k"
},
{
"math_id": 21,
"text": "\\phi(B) = 1- \\sum_{k=1}^p \\varphi_k B^k "
},
{
"math_id": 22,
"text": "\\phi(\\cdot)"
},
{
"math_id": 23,
"text": "\\varphi_k"
},
{
"math_id": 24,
"text": "\\varphi"
},
{
"math_id": 25,
"text": "\\varphi_1"
},
{
"math_id": 26,
"text": "\\varphi_2"
},
{
"math_id": 27,
"text": "X_t = \\varphi X_{t-1}+\\varepsilon_t\\,"
},
{
"math_id": 28,
"text": "\\sigma_\\varepsilon^2"
},
{
"math_id": 29,
"text": "|\\varphi|<1"
},
{
"math_id": 30,
"text": "\\varphi=1"
},
{
"math_id": 31,
"text": "X_t"
},
{
"math_id": 32,
"text": "\\operatorname{E} (X_t)"
},
{
"math_id": 33,
"text": "\\mu"
},
{
"math_id": 34,
"text": "\\operatorname{E} (X_t)=\\varphi\\operatorname{E} (X_{t-1})+\\operatorname{E}(\\varepsilon_t),\n"
},
{
"math_id": 35,
"text": " \\mu=\\varphi\\mu+0,"
},
{
"math_id": 36,
"text": "\\mu=0."
},
{
"math_id": 37,
"text": "\\textrm{var}(X_t)=\\operatorname{E}(X_t^2)-\\mu^2=\\frac{\\sigma_\\varepsilon^2}{1-\\varphi^2},"
},
{
"math_id": 38,
"text": "\\sigma_\\varepsilon"
},
{
"math_id": 39,
"text": "\\textrm{var}(X_t) = \\varphi^2\\textrm{var}(X_{t-1}) + \\sigma_\\varepsilon^2,"
},
{
"math_id": 40,
"text": "B_n=\\operatorname{E}(X_{t+n}X_t)-\\mu^2=\\frac{\\sigma_\\varepsilon^2}{1-\\varphi^2}\\,\\,\\varphi^{|n|}."
},
{
"math_id": 41,
"text": "\\tau=1-\\varphi"
},
{
"math_id": 42,
"text": "\\Phi(\\omega)=\n\\frac{1}{\\sqrt{2\\pi}}\\,\\sum_{n=-\\infty}^\\infty B_n e^{-i\\omega n}\n=\\frac{1}{\\sqrt{2\\pi}}\\,\\left(\\frac{\\sigma_\\varepsilon^2}{1+\\varphi^2-2\\varphi\\cos(\\omega)}\\right).\n"
},
{
"math_id": 43,
"text": "X_j"
},
{
"math_id": 44,
"text": "\\Delta t=1"
},
{
"math_id": 45,
"text": "\\tau"
},
{
"math_id": 46,
"text": "B_n"
},
{
"math_id": 47,
"text": "B(t)\\approx \\frac{\\sigma_\\varepsilon^2}{1-\\varphi^2}\\,\\,\\varphi^{|t|}"
},
{
"math_id": 48,
"text": "\\Phi(\\omega)=\n\\frac{1}{\\sqrt{2\\pi}}\\,\\frac{\\sigma_\\varepsilon^2}{1-\\varphi^2}\\,\\frac{\\gamma}{\\pi(\\gamma^2+\\omega^2)}"
},
{
"math_id": 49,
"text": "\\gamma=1/\\tau"
},
{
"math_id": 50,
"text": "\\varphi X_{t-2}+\\varepsilon_{t-1}"
},
{
"math_id": 51,
"text": "X_{t-1}"
},
{
"math_id": 52,
"text": "X_t=\\varphi^NX_{t-N}+\\sum_{k=0}^{N-1}\\varphi^k\\varepsilon_{t-k}."
},
{
"math_id": 53,
"text": "\\varphi^N"
},
{
"math_id": 54,
"text": "X_t=\\sum_{k=0}^\\infty\\varphi^k\\varepsilon_{t-k}."
},
{
"math_id": 55,
"text": "\\varphi^k"
},
{
"math_id": 56,
"text": "\\varepsilon_t = 0"
},
{
"math_id": 57,
"text": "X_t = \\varphi X_{t-1}"
},
{
"math_id": 58,
"text": "X_t = a \\varphi^t"
},
{
"math_id": 59,
"text": "a"
},
{
"math_id": 60,
"text": "\\theta \\in \\mathbb{R}"
},
{
"math_id": 61,
"text": "X_{t+1} = X_t + (1-\\theta)(\\mu - X_t) + \\varepsilon_{t+1}"
},
{
"math_id": 62,
"text": "|\\theta| < 1 \\,"
},
{
"math_id": 63,
"text": "\\mu := E(X)"
},
{
"math_id": 64,
"text": "\\{\\epsilon_{t}\\}"
},
{
"math_id": 65,
"text": "\\sigma"
},
{
"math_id": 66,
"text": " X_{t+1} = \\theta X_t + (1 - \\theta)\\mu + \\varepsilon_{t+1} "
},
{
"math_id": 67,
"text": "X_{t+n} = \\theta ^ n X_{t} + (1 - \\theta ^ n) \\mu + \\Sigma_{i = 1}^{n} \\left(\\theta ^ {n - i} \\epsilon_{t + i}\\right)"
},
{
"math_id": 68,
"text": " \\operatorname{E}(X_{t+n} | X_t) = \\mu\\left[1-\\theta^n\\right] + X_t\\theta^n"
},
{
"math_id": 69,
"text": " \\operatorname{Var} (X_{t+n} | X_t) = \\sigma^2 \\frac{ 1 - \\theta^{2n} }{1 -\\theta^2}"
},
{
"math_id": 70,
"text": " X_t = \\sum_{i=1}^p \\varphi_i X_{t-i}+ \\varepsilon_t.\\,"
},
{
"math_id": 71,
"text": "\\varphi_i"
},
{
"math_id": 72,
"text": "\\gamma_m = \\sum_{k=1}^p \\varphi_k \\gamma_{m-k} + \\sigma_\\varepsilon^2\\delta_{m,0},"
},
{
"math_id": 73,
"text": "\\gamma_m"
},
{
"math_id": 74,
"text": "\\delta_{m,0}"
},
{
"math_id": 75,
"text": "\\begin{bmatrix}\n\\gamma_1 \\\\\n\\gamma_2 \\\\\n\\gamma_3 \\\\\n\\vdots \\\\\n\\gamma_p \\\\\n\\end{bmatrix}\n\n=\n\n\\begin{bmatrix}\n\\gamma_0 & \\gamma_{-1} & \\gamma_{-2} & \\cdots \\\\\n\\gamma_1 & \\gamma_0 & \\gamma_{-1} & \\cdots \\\\\n\\gamma_2 & \\gamma_1 & \\gamma_0 & \\cdots \\\\\n\\vdots & \\vdots & \\vdots & \\ddots \\\\\n\\gamma_{p-1} & \\gamma_{p-2} & \\gamma_{p-3} & \\cdots \\\\\n\\end{bmatrix}\n\n\\begin{bmatrix}\n\\varphi_{1} \\\\\n\\varphi_{2} \\\\\n\\varphi_{3} \\\\\n \\vdots \\\\\n\\varphi_{p} \\\\\n\\end{bmatrix}\n\n"
},
{
"math_id": 76,
"text": "\\{\\varphi_m; m=1,2, \\dots ,p\\}."
},
{
"math_id": 77,
"text": "\\gamma_0 = \\sum_{k=1}^p \\varphi_k \\gamma_{-k} + \\sigma_\\varepsilon^2 ,"
},
{
"math_id": 78,
"text": "\\{\\varphi_m ; m=1,2, \\dots ,p \\}"
},
{
"math_id": 79,
"text": "\\sigma_\\varepsilon^2 ."
},
{
"math_id": 80,
"text": "\\rho(\\tau)"
},
{
"math_id": 81,
"text": "\\rho(\\tau) = \\sum_{k=1}^p \\varphi_k \\rho(k-\\tau)"
},
{
"math_id": 82,
"text": "\\gamma_1 = \\varphi_1 \\gamma_0"
},
{
"math_id": 83,
"text": "\\rho_1 = \\gamma_1 / \\gamma_0 = \\varphi_1"
},
{
"math_id": 84,
"text": "\\gamma_1 = \\varphi_1 \\gamma_0 + \\varphi_2 \\gamma_{-1}"
},
{
"math_id": 85,
"text": "\\gamma_2 = \\varphi_1 \\gamma_1 + \\varphi_2 \\gamma_0"
},
{
"math_id": 86,
"text": "\\gamma_{-k} = \\gamma_k"
},
{
"math_id": 87,
"text": "\\rho_1 = \\gamma_1 / \\gamma_0 = \\frac{\\varphi_1}{1-\\varphi_2}"
},
{
"math_id": 88,
"text": "\\rho_2 = \\gamma_2 / \\gamma_0 = \\frac{\\varphi_1^2 - \\varphi_2^2 + \\varphi_2}{1 - \\varphi_2}"
},
{
"math_id": 89,
"text": " X_t = \\sum_{i=1}^p \\varphi_i X_{t+i}+ \\varepsilon^*_t \\,."
},
{
"math_id": 90,
"text": "\\mathrm{Var}(Z_t) = \\sigma_Z^2"
},
{
"math_id": 91,
"text": "S(f) = \\frac{\\sigma_Z^2}{| 1-\\sum_{k=1}^p \\varphi_k e^{-i 2 \\pi f k} |^2}."
},
{
"math_id": 92,
"text": "S(f) = \\sigma_Z^2."
},
{
"math_id": 93,
"text": "S(f) = \\frac{\\sigma_Z^2}{| 1- \\varphi_1 e^{-2 \\pi i f} |^2}\n = \\frac{\\sigma_Z^2}{ 1 + \\varphi_1^2 - 2 \\varphi_1 \\cos 2 \\pi f }"
},
{
"math_id": 94,
"text": "\\varphi_1 > 0"
},
{
"math_id": 95,
"text": "\\varphi_1 "
},
{
"math_id": 96,
"text": "\\varphi_1 < 0"
},
{
"math_id": 97,
"text": " 1 - \\varphi_1 B -\\varphi_2 B^2 =0, "
},
{
"math_id": 98,
"text": " H_z = (1 - \\varphi_1 z^{-1} -\\varphi_2 z^{-2})^{-1}. "
},
{
"math_id": 99,
"text": " 1 - \\varphi_1 z^{-1} -\\varphi_2 z^{-2} = 0 "
},
{
"math_id": 100,
"text": " z_1,z_2 = \\frac{1}{2\\varphi_2}\\left(\\varphi_1 \\pm \\sqrt{\\varphi_1^2 + 4\\varphi_2}\\right) "
},
{
"math_id": 101,
"text": "z_1"
},
{
"math_id": 102,
"text": "z_2"
},
{
"math_id": 103,
"text": " \\begin{bmatrix} \\varphi_1 & \\varphi_2 \\\\ 1 & 0 \\end{bmatrix} "
},
{
"math_id": 104,
"text": "\\varphi_1^2 + 4\\varphi_2 < 0"
},
{
"math_id": 105,
"text": "f^* = \\frac{1}{2\\pi}\\cos^{-1}\\left(\\frac{\\varphi_1}{2\\sqrt{-\\varphi_2}}\\right),"
},
{
"math_id": 106,
"text": "|z_1|=|z_2|=\\sqrt{-\\varphi_2}."
},
{
"math_id": 107,
"text": "\\varphi_2<0"
},
{
"math_id": 108,
"text": "f=0"
},
{
"math_id": 109,
"text": "f=1/2"
},
{
"math_id": 110,
"text": "-1 \\le \\varphi_2 \\le 1 - |\\varphi_1|"
},
{
"math_id": 111,
"text": "S(f) = \\frac{\\sigma_Z^2}{1 + \\varphi_1^2 + \\varphi_2^2 - 2\\varphi_1(1-\\varphi_2)\\cos(2\\pi f) - 2\\varphi_2\\cos(4\\pi f)}"
},
{
"math_id": 112,
"text": " X_t = \\sum_{i=1}^p \\varphi_i X_{t-i}+ \\varepsilon_t \\,"
},
{
"math_id": 113,
"text": "\\varepsilon_t \\,"
}
] |
https://en.wikipedia.org/wiki?curid=1434444
|
14345961
|
Embedded pushdown automaton
|
An embedded pushdown automaton or EPDA is a computational model for parsing languages generated by tree-adjoining grammars (TAGs). It is similar to the context-free grammar-parsing pushdown automaton, but instead of using a plain stack to store symbols, it has a stack of iterated stacks that store symbols, giving TAGs a generative capacity between context-free and context-sensitive grammars, or a subset of mildly context-sensitive grammars.
Embedded pushdown automata should not be confused with nested stack automata which have more computational power.
History and applications.
EPDAs were first described by K. Vijay-Shanker in his 1988 doctoral thesis. They have since been applied to more complete descriptions of classes of mildly context-sensitive grammars and have had important roles in refining the Chomsky hierarchy. Various subgrammars, such as the linear indexed grammar, can thus be defined.
While natural languages have traditionally been analyzed using context-free grammars (see transformational-generative grammar and computational linguistics), this model does not work well for languages with crossed dependencies, such as Dutch, situations for which an EPDA is well suited. A detailed linguistic analysis is available in Joshi, Schabes (1997).
Theory.
An EPDA is a finite state machine with a set of stacks that can be themselves accessed through the "embedded stack". Each stack contains elements of the "stack alphabet" formula_0, and so we define an element of a stack by formula_1, where the star is the Kleene closure of the alphabet.
Each stack can then be defined in terms of its elements, so we denote the formula_2th stack in the automaton using a double-dagger symbol: formula_3, where formula_4 would be the next accessible symbol in the stack. The "embedded stack" of formula_5 stacks can thus be denoted by formula_6.
We define an EPDA by the septuple (7-tuple)
formula_7 where formula_8 is a finite set of states, formula_9 is the finite set of input symbols (the input alphabet), formula_0 is the finite stack alphabet, formula_10 is the start state, formula_11 is the set of final states, formula_12 is the initial stack symbol, and formula_13 is the transition function, where formula_14 are finite subsets of formula_15.
Thus the transition function takes a state, the next symbol of the input string, and the top symbol of the current stack and generates the next state, the stacks to be pushed and popped onto the "embedded stack", the pushing and popping of the current stack, and the stacks to be considered the current stacks in the next transition. More conceptually, the "embedded stack" is pushed and popped, the current stack is optionally pushed back onto the "embedded stack", and any other stacks one would like are pushed on top of that, with the last stack being the one read from in the next iteration. Therefore, stacks can be pushed both above and below the current stack.
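As a rough illustration of this bookkeeping (a sketch only, with invented helper names, not the formal definition), the embedded stack can be modelled in Python as a list of stacks whose last element is the current stack:

def apply_transition(embedded, rewrite_top, stacks_below, stacks_above):
    """Apply the stack part of one EPDA move.
    embedded:     list of stacks (each a list of symbols); the last is the current stack
    rewrite_top:  symbols replacing the popped top symbol of the current stack
    stacks_below: new stacks pushed onto the embedded stack below the current stack
    stacks_above: new stacks pushed above it; the last one is read in the next move"""
    current = embedded.pop()          # pop the current stack off the embedded stack
    current.pop()                     # consume its top symbol
    current.extend(rewrite_top)       # push the replacement symbols
    embedded.extend(stacks_below)
    if current:                       # optionally push the rewritten stack back
        embedded.append(current)
    embedded.extend(stacks_above)
    return embedded

print(apply_transition([["Z"], ["A", "B"]], ["C"], [["D"]], [["E"]]))
# -> [['Z'], ['D'], ['A', 'C'], ['E']]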
A given configuration is defined by
formula_16
where formula_17 is the current state, the formula_18s are the stacks in the "embedded stack", with formula_19 the current stack, and for an input string formula_20, formula_21 is the portion of the string already processed by the machine and formula_22 is the portion to be processed, with its head being the current symbol read. Note that the empty string formula_23 is implicitly defined as a terminating symbol, where if the machine is at a final state when the empty string is read, the entire input string is "accepted", and if not it is "rejected". Such "accepted" strings are elements of the language
formula_24
where formula_25 and formula_26 defines the transition function applied over as many times as necessary to parse the string.
An informal description of EPDA can also be found in Joshi, Schabes (1997), Sect.7, p. 23-25.
"k"-order EPDA and the Weir hierarchy.
A more precisely defined hierarchy of languages that correspond to the mildly context-sensitive class was defined by David J. Weir.
Based on the work of Nabil A. Khabbaz,
Weir's Control Language Hierarchy is a containment hierarchy of language classes in which "Level-1" is defined as context-free, and "Level-2" is the class of tree-adjoining and the other three grammars.
Following are some of the properties of Level-"k" languages in the hierarchy: Level-"k" languages are properly contained in the Level-("k"+1) language class; they can be parsed in formula_27 time; Level-"k" contains the language formula_28 but not formula_29; and Level-"k" contains the language formula_30 but not formula_31.
Those properties correspond well (at least for small "k" > 1) to the conditions of mildly context-sensitive languages imposed by Joshi, and as "k" gets bigger, the language class becomes, in a sense, less mildly context-sensitive.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\,\\Gamma"
},
{
"math_id": 1,
"text": "\\,\\sigma_i \\in \\Gamma^*"
},
{
"math_id": 2,
"text": "\\,j"
},
{
"math_id": 3,
"text": "\\,\\Upsilon_j = \\ddagger\\sigma_j = \\{\\sigma_{j,k}, \\sigma_{j,k-1}, \\ldots, \\sigma_{j,1} \\}"
},
{
"math_id": 4,
"text": "\\,\\sigma_{j, k}"
},
{
"math_id": 5,
"text": "\\,m"
},
{
"math_id": 6,
"text": "\\,\\{\\Upsilon_j \\} = \\{\\ddagger\\sigma_m,\\ddagger\\sigma_{m-1}, \\ldots, \\ddagger\\sigma_1 \\} \\in (\\ddagger\\Gamma^+)^*"
},
{
"math_id": 7,
"text": "\\,M = (Q, \\Sigma, \\Gamma, \\delta, q_0, Q_\\textrm{F}, \\sigma_0)"
},
{
"math_id": 8,
"text": "\\,Q"
},
{
"math_id": 9,
"text": "\\,\\Sigma"
},
{
"math_id": 10,
"text": "\\,q_0 \\in Q"
},
{
"math_id": 11,
"text": "\\,Q_\\textrm{F} \\subseteq Q"
},
{
"math_id": 12,
"text": "\\,\\sigma_0 \\in \\Gamma"
},
{
"math_id": 13,
"text": "\\,\\delta : Q \\times \\Sigma \\times \\Gamma \\rightarrow S"
},
{
"math_id": 14,
"text": "\\,S"
},
{
"math_id": 15,
"text": "\\,Q\\times (\\ddagger\\Gamma^+)^* \\times \\Gamma^* \\times (\\ddagger\\Gamma^+)^*"
},
{
"math_id": 16,
"text": "\\,C(M) = \\{q,\\Upsilon_m \\ldots \\Upsilon_1, x_1, x_2\\} \\in Q\\times (\\ddagger\\Gamma^+)^* \\times \\Sigma^* \\times \\Sigma^*"
},
{
"math_id": 17,
"text": "\\,q"
},
{
"math_id": 18,
"text": "\\,\\Upsilon"
},
{
"math_id": 19,
"text": "\\,\\Upsilon_m"
},
{
"math_id": 20,
"text": "\\,x=x_1 x_2 \\in \\Sigma^*"
},
{
"math_id": 21,
"text": "\\,x_1"
},
{
"math_id": 22,
"text": "\\,x_2"
},
{
"math_id": 23,
"text": "\\,\\epsilon \\in \\Sigma"
},
{
"math_id": 24,
"text": "\\,L(M) = \\left\\{ x | \\{q_0,\\Upsilon_0,\\epsilon,x\\} \\rightarrow_M^* \\{q_\\textrm{F},\\Upsilon_m \\ldots \\Upsilon_1, x, \\epsilon\\} \\right\\}"
},
{
"math_id": 25,
"text": "\\,q_\\textrm{F} \\in Q_\\textrm{F}"
},
{
"math_id": 26,
"text": "\\,\\rightarrow_M^*"
},
{
"math_id": 27,
"text": "O(n^{3\\cdot2^{k-1}})"
},
{
"math_id": 28,
"text": "\\{a_1^n \\dotso a_{2^k}^n|n\\geq0\\}"
},
{
"math_id": 29,
"text": "\\{a_1^n \\dotso a_{2^{k+1}}^n|n\\geq0\\}"
},
{
"math_id": 30,
"text": "\\{w^{2^{k-1}}|w\\in\\{a,b\\}^*\\}"
},
{
"math_id": 31,
"text": "\\{w^{2^{k-1}+1}|w\\in\\{a,b\\}^*\\}"
}
] |
https://en.wikipedia.org/wiki?curid=14345961
|
14346064
|
Autoregressive conditional duration
|
In financial econometrics, an autoregressive conditional duration (ACD, Engle and Russell (1998)) model considers irregularly spaced and autocorrelated intertrade durations. ACD is analogous to GARCH. In a continuous double auction (a common trading mechanism in many financial markets) waiting times between two consecutive trades vary at random.
Definition.
Let formula_0 denote the duration (the waiting time between consecutive trades) and assume that formula_1, where
formula_2 are independent and identically distributed random variables, positive and with formula_3 and where the series formula_4 is given by:
formula_5
and where formula_6, formula_7,
formula_8, formula_9.
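The following short simulation sketch (illustrative only; the parameter values are arbitrary) generates durations from the ACD(1,1) special case, using exponentially distributed innovations so that formula_3 holds:

import random

def simulate_acd11(n, a0=0.1, a1=0.2, b1=0.7, seed=0):
    """Simulate n durations from an ACD(1,1) model:
    theta_t = a0 + a1*tau_{t-1} + b1*theta_{t-1},  tau_t = theta_t * z_t,
    with z_t ~ Exponential(1), which has mean 1."""
    rng = random.Random(seed)
    theta = a0 / (1.0 - a1 - b1)      # start from the unconditional mean duration
    tau = theta
    durations = []
    for _ in range(n):
        theta = a0 + a1 * tau + b1 * theta
        tau = theta * rng.expovariate(1.0)
        durations.append(tau)
    return durations

print(simulate_acd11(5))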
|
[
{
"math_id": 0,
"text": " ~\\tau_t~ "
},
{
"math_id": 1,
"text": " ~\\tau_t=\\theta_t z_t ~"
},
{
"math_id": 2,
"text": " z_t "
},
{
"math_id": 3,
"text": " \\operatorname{E}(z_t) = 1"
},
{
"math_id": 4,
"text": " ~\\theta_t~ "
},
{
"math_id": 5,
"text": " \\theta_t = \\alpha_0 + \\alpha_1 \\tau_{t-1} + \\cdots + \\alpha_q \\tau_{t-q} + \\beta_1 \\theta_{t-1} + \\cdots + \\beta_p\\theta_{t-p} = \\alpha_0 + \\sum_{i=1}^q \\alpha_i \\tau_{t-i} + \\sum_{i=1}^p \\beta_i \\theta_{t-i} "
},
{
"math_id": 6,
"text": " ~\\alpha_0>0~ "
},
{
"math_id": 7,
"text": " \\alpha_i\\ge 0"
},
{
"math_id": 8,
"text": " \\beta_i \\ge 0 "
},
{
"math_id": 9,
"text": "~i>0"
}
] |
https://en.wikipedia.org/wiki?curid=14346064
|
14346663
|
Causality conditions
|
In the study of Lorentzian manifold spacetimes there exists a hierarchy of causality conditions which are important in proving mathematical theorems about the global structure of such manifolds. These conditions were collected during the late 1970s.
The weaker the causality condition on a spacetime, the more "unphysical" the spacetime is. Spacetimes with closed timelike curves, for example, present severe interpretational difficulties. See the grandfather paradox.
It is reasonable to believe that any physical spacetime will satisfy the strongest causality condition: global hyperbolicity. For such spacetimes the equations in general relativity can be posed as an initial value problem on a Cauchy surface.
The hierarchy.
There is a hierarchy of causality conditions, each one of which is strictly stronger than the previous. This is sometimes called the causal ladder. The conditions, from weakest to strongest, are: non-totally vicious, chronological, causal, distinguishing (past or future), strongly causal, stably causal, causally continuous, causally simple, and globally hyperbolic.
Given are the definitions of these causality conditions for a Lorentzian manifold formula_0. Where two or more are given they are equivalent.
Notation:
formula_13
formula_18
Stably causal.
For each of the weaker causality conditions defined above, there are some manifolds satisfying the condition which can be made to violate it by arbitrarily small perturbations of the metric. A spacetime is stably causal if it cannot be made to contain closed causal curves by any perturbation smaller than some arbitrary finite magnitude. Stephen Hawking showed that this is equivalent to the existence of a global "time function" on formula_19: a scalar field formula_20 whose gradient formula_21 is everywhere timelike and future-directed. Such a global time function gives a stable way of distinguishing future from past at each point of the spacetime.
Globally hyperbolic.
Robert Geroch showed that a spacetime is globally hyperbolic if and only if there exists a Cauchy surface for formula_19. This means that formula_22 is topologically equivalent to formula_25 for some Cauchy surface formula_26 (Here formula_27 denotes the real line.) Equivalently, a spacetime is globally hyperbolic when formula_22 is strongly causal and every set formula_23 (for points formula_24) is compact.
|
[
{
"math_id": 0,
"text": "(M,g)"
},
{
"math_id": 1,
"text": "p \\ll q"
},
{
"math_id": 2,
"text": "p \\prec q"
},
{
"math_id": 3,
"text": "\\,I^+(x)"
},
{
"math_id": 4,
"text": "\\,I^-(x)"
},
{
"math_id": 5,
"text": "\\,J^+(x)"
},
{
"math_id": 6,
"text": "\\,J^-(x)"
},
{
"math_id": 7,
"text": "p \\in M"
},
{
"math_id": 8,
"text": "p \\not\\ll p"
},
{
"math_id": 9,
"text": " p \\in M "
},
{
"math_id": 10,
"text": "q \\prec p"
},
{
"math_id": 11,
"text": "p = q"
},
{
"math_id": 12,
"text": "p, q \\in M"
},
{
"math_id": 13,
"text": "I^-(p) = I^-(q) \\implies p = q "
},
{
"math_id": 14,
"text": "U"
},
{
"math_id": 15,
"text": "V \\subset U, p \\in V"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "V"
},
{
"math_id": 18,
"text": "I^+(p) = I^+(q) \\implies p = q "
},
{
"math_id": 19,
"text": "M"
},
{
"math_id": 20,
"text": "t"
},
{
"math_id": 21,
"text": "\\nabla^a t"
},
{
"math_id": 22,
"text": "\\,M"
},
{
"math_id": 23,
"text": "J^+(x) \\cap J^-(y)"
},
{
"math_id": 24,
"text": "x,y \\in M"
},
{
"math_id": 25,
"text": "\\mathbb{R} \\times\\!\\, S"
},
{
"math_id": 26,
"text": "S."
},
{
"math_id": 27,
"text": "\\mathbb{R}"
}
] |
https://en.wikipedia.org/wiki?curid=14346663
|
14348925
|
Hicks-neutral technical change
|
Hicks-neutral technical change is change in the production function of a business or industry which satisfies certain economic neutrality conditions. The concept of Hicks neutrality was first put forth in 1932 by John Hicks in his book "The Theory of Wages". A change is considered to be Hicks neutral if the change does not affect the balance of labor and capital in the products' production function. More formally, given the Solow model production function
formula_0,
a Hicks-neutral change is one which only changes formula_1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y = A F(K,L) \\,"
},
{
"math_id": 1,
"text": "A"
}
] |
https://en.wikipedia.org/wiki?curid=14348925
|
14352042
|
FENE
|
FENE stands for the finitely extensible nonlinear elastic model of a long-chained polymer. It simplifies the chain of monomers by connecting a sequence of beads with nonlinear springs. The spring force law is governed by the inverse Langevin function, or approximated by Warner's relationship
formula_0
where formula_1, "L"max is the maximum extension of the spring, and "H" is the spring constant.
The total stretching force on the "i"th bead can be written as formula_2.
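A minimal numerical sketch (parameter values are arbitrary examples) of the Warner spring force for a single connector vector:

import math

def fene_force(r_vec, H=1.0, L_max=1.5):
    """Warner approximation of the FENE spring force: F = H*R / (1 - (|R|/L_max)^2)."""
    r = math.sqrt(sum(c * c for c in r_vec))
    if r >= L_max:
        raise ValueError("the extension must stay below the maximum length L_max")
    factor = H / (1.0 - (r / L_max) ** 2)
    return [factor * c for c in r_vec]

print(fene_force([0.3, 0.4]))     # |R| = 0.5, nearly Hookean behaviour
print(fene_force([0.84, 1.12]))   # |R| = 1.4, force grows sharply near L_max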
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textbf{F}_i=\\frac{H \\textbf{R}_i}{1-(R_i/L_{max})^2}"
},
{
"math_id": 1,
"text": "R_i = |\\textbf{R}_i|"
},
{
"math_id": 2,
"text": "\\textbf{F}_i - \\textbf{F}_{i-1}"
}
] |
https://en.wikipedia.org/wiki?curid=14352042
|
14355284
|
Smallest-circle problem
|
Finding the smallest circle that contains all given points
The smallest-circle problem (also known as minimum covering circle problem, bounding circle problem, least bounding circle problem, smallest enclosing circle problem) is a computational geometry problem of computing the smallest circle that contains all of a given set of points in the Euclidean plane. The corresponding problem in "n"-dimensional space, the smallest bounding sphere problem, is to compute the smallest "n"-sphere that contains all of a given set of points. The smallest-circle problem was initially proposed by the English mathematician James Joseph Sylvester in 1857.
The smallest-circle problem in the plane is an example of a facility location problem (the 1-center problem) in which the location of a new facility must be chosen to provide service to a number of customers, minimizing the farthest distance that any customer must travel to reach the new facility. Both the smallest circle problem in the plane, and the smallest bounding sphere problem in any higher-dimensional space of bounded dimension are solvable in worst-case linear time.
Characterization.
Most of the geometric approaches for the problem look for points that lie on the boundary of the minimum circle and are based on the following simple facts: the minimum covering circle is unique; it can be determined by at most three points of the given set that lie on its boundary; if it is determined by only two points, then the line segment joining those two points must be a diameter of the minimum circle; and if it is determined by three points, then the triangle formed by those three points is not obtuse.
Linear-time solutions.
As Nimrod Megiddo showed, the minimum enclosing circle can be found in linear time, and the same linear time bound also applies to the smallest enclosing sphere in Euclidean spaces of any constant dimension. His article also gives a brief overview of earlier formula_0 and formula_1 algorithms; in doing so, Megiddo demonstrated that Shamos and Hoey's conjecture – that a solution to the smallest-circle problem was computable in formula_2 at best – was false.
Emo Welzl proposed a simple randomized algorithm for the
minimum covering circle problem that runs in expected time formula_3, based on a linear programming algorithm of Raimund Seidel.
Subsequently, the smallest-circle problem was included in a general class of LP-type problems that can be solved by algorithms like Welzl's based on linear programming. As a consequence of membership in this class, it was shown that the dependence on the dimension of the constant factor in the formula_3 time bound, which was factorial for Seidel's method, could be reduced to subexponential.
Megiddo's algorithm.
Megiddo's algorithm is based on the technique called prune and search, reducing the size of the problem by removing formula_4 unnecessary points.
That leads to the recurrence formula_5 giving formula_6.
The algorithm is rather complicated and it is reflected by its big multiplicative constant.
The reduction needs to solve twice the similar problem where the center of the sought-after enclosing circle is constrained to lie on a given line.
The solution of the subproblem is either the solution of the unconstrained problem or it is used to determine the half-plane where the unconstrained solution center is located.
The formula_4 points to be discarded are found as follows:
The points Pi are arranged into pairs, which defines formula_7 lines pj as their bisectors.
The median pm of the bisectors, ordered by their directions (oriented to the same half-plane determined by bisector "p"1), is found, and pairs of bisectors are formed such that in each pair one bisector has direction at most pm and the other at least pm (direction "p"1 could be considered as −formula_8 or +formula_8 according to our needs). Let Qk be the intersection of the bisectors in the k-th pair.
The line "q" in the "p"1 direction is placed to go through an intersection Qx such that there are formula_9 intersections in each half-plane defined by the line (median position).
The constrained version of the enclosing problem is run on the line "q", which determines the half-plane where the center is located.
The line "q"′ in the pm direction is placed to go through an intersection Qx' such that there are formula_4 intersections in each half of the half-plane not containing the solution.
The constrained version of the enclosing problem is run on line "q"′ which together with "q" determines the quadrant where the center is located.
We consider the points Qk in the quadrant that is not contained in a half-plane containing the solution.
One of the bisectors of the pair defining Qk has a direction that makes it possible to determine which of the two points Pi defining the bisector is closer to every point in the quadrant containing the center of the enclosing circle. That point can be discarded.
The constrained version of the algorithm is also solved by the prune and search technique, but reducing the problem size by removal of formula_10 points leading to recurrence
formula_11
giving formula_12.
The formula_10 points to be discarded are found as follows:
Points Pi are arranged into pairs.
For each pair, the intersection Qj of its bisector with the constraining line q is found (if this intersection does not exist, one point of the pair can be removed immediately).
The median M of the points Qj on the line q is found, and in "O"("n") time it is determined which half-line of q starting at M contains the solution of the constrained problem.
We consider the points Qj from the other half.
We know which of the points Pi defining Qj is closer to each point of the half-line containing the center of the enclosing circle of the constrained problem solution. This point can be discarded.
The half-plane where the unconstrained solution lies could be determined by the points Pi on the boundary of the constrained circle solution. (The first and last point on the circle in each half-plane suffice. If the center belongs to their convex hull, it is unconstrained solution, otherwise the direction to the nearest edge determines the half-plane of the unconstrained solution.)
Welzl's algorithm.
The algorithm is recursive.
The initial input is a set "P" of points. The algorithm selects one point "p" randomly and uniformly from "P", and recursively finds the minimal circle containing "P" – {"p"}, i.e. all of the other points in "P" except "p". If the returned circle also encloses "p", it is the minimal circle for the whole of "P" and is returned.
Otherwise, point "p" must lie on the boundary of the result circle. It recurses, but with the set "R" of points known to be on the boundary as an additional parameter.
The recursion terminates when "P" is empty, and a solution can be found from the points in "R": for 0 or 1 points the solution is trivial, for 2 points the minimal circle has its center at the midpoint between the two points, and for 3 points the circle is the circumcircle of the triangle described by the points. (In three dimensions, 4 points require the calculation of the circumsphere of a tetrahedron.)
Recursion can also terminate when "R" has size 3 (in 2D, or 4 in 3D) because the remaining points in "P" must lie within the circle described by "R".
algorithm welzl is
input: Finite sets "P" and "R" of points in the plane |"R"| ≤ 3.
output: Minimal disk enclosing "P" with "R" on the boundary.
if "P" is empty or |"R"| = 3 then
return trivial("R")
choose "p" in "P" (randomly and uniformly)
D := welzl("P" − {"p"}, "R")
if "p" is in "D" then
return "D"
return welzl(P − {"p"}, "R" ∪ {"p"})
Welzl's paper states that it is sufficient to randomly permute the input at the start, rather than performing independently random choices of "p" on each recursion.
It also states that performance is improved by dynamically re-ordering the points so that those that are found to be outside a circle are subsequently considered earlier, but this requires a change in the structure of the algorithm to store "P" as a "global".
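A direct translation of the pseudocode into Python might look as follows (an illustrative sketch: it shuffles the input once, omits the move-to-front heuristic, and assumes no three boundary points are collinear):

import math
import random

def circle_from(points):
    """Smallest circle with all of the (at most three) given points on its boundary."""
    if not points:
        return ((0.0, 0.0), 0.0)
    if len(points) == 1:
        return (points[0], 0.0)
    if len(points) == 2:
        (ax, ay), (bx, by) = points
        return (((ax + bx) / 2.0, (ay + by) / 2.0),
                math.hypot(ax - bx, ay - by) / 2.0)
    (ax, ay), (bx, by), (cx, cy) = points        # circumcircle of a triangle
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ((ux, uy), math.hypot(ax - ux, ay - uy))

def in_circle(p, circle, eps=1e-9):
    (cx, cy), r = circle
    return math.hypot(p[0] - cx, p[1] - cy) <= r + eps

def welzl(P, R=()):
    if not P or len(R) == 3:
        return circle_from(list(R))
    p, rest = P[0], P[1:]
    D = welzl(rest, R)
    if in_circle(p, D):
        return D
    return welzl(rest, R + (p,))     # p must lie on the boundary

def smallest_enclosing_circle(points):
    pts = list(points)
    random.shuffle(pts)              # one random permutation gives expected O(n) time
    return welzl(tuple(pts))

print(smallest_enclosing_circle([(0, 0), (1, 0), (0, 1), (1, 1)]))
# centre near (0.5, 0.5), radius near 0.7071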
Other algorithms.
Prior to Megiddo's result showing that the smallest-circle problem may be solved in linear time, several algorithms of higher complexity appeared in the literature. A naive algorithm solves the problem in time O("n"⁴) by testing the circles determined by all pairs and triples of points.
Weighted variants of the problem.
The weighted version of the minimum covering circle problem takes as input a set of points in a Euclidean space, each with weights; the goal is to find a single point that minimizes the maximum weighted distance (i.e., distance multiplied by the corresponding weight) to any point. The original (unweighted) minimum covering circle problem corresponds to the case when all weights are equal to 1. As with the unweighted problem, the weighted problem may be solved in linear time in any space of bounded dimension, using approaches closely related to bounded dimension linear programming algorithms, although slower algorithms are again frequent in the literature.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(n^3)"
},
{
"math_id": 1,
"text": "O(n \\log n)"
},
{
"math_id": 2,
"text": "\\Omega(n \\log n)"
},
{
"math_id": 3,
"text": "O(n)"
},
{
"math_id": 4,
"text": "\\frac{n}{16}"
},
{
"math_id": 5,
"text": "t(n) \\le t\\left(\\frac{15n}{16}\\right)+cn"
},
{
"math_id": 6,
"text": "t(n)=16cn"
},
{
"math_id": 7,
"text": "\\frac{n}{2}"
},
{
"math_id": 8,
"text": "\\infty"
},
{
"math_id": 9,
"text": "\\frac{n}{8}"
},
{
"math_id": 10,
"text": "\\frac{n}{4}"
},
{
"math_id": 11,
"text": "t(n) \\le t\\left(\\frac{3n}{4}\\right)+cn"
},
{
"math_id": 12,
"text": "t(n) = 4cn"
}
] |
https://en.wikipedia.org/wiki?curid=14355284
|
14355756
|
Metal–semiconductor junction
|
Type of electrical junction
In solid-state physics, a metal–semiconductor (M–S) junction is a type of electrical junction in which a metal comes in close contact with a semiconductor material. It is the oldest practical semiconductor device. M–S junctions can either be rectifying or non-rectifying. The rectifying metal–semiconductor junction forms a Schottky barrier, making a device known as a Schottky diode, while the non-rectifying junction is called an ohmic contact. (In contrast, a rectifying semiconductor–semiconductor junction, the most common semiconductor device today, is known as a p–n junction.)
Metal–semiconductor junctions are crucial to the operation of all semiconductor devices. Usually an ohmic contact is desired, so that electrical charge can be conducted easily between the active region of a transistor and the external circuitry.
Occasionally however a Schottky barrier is useful, as in Schottky diodes, Schottky transistors, and metal–semiconductor field effect transistors.
The critical parameter: Schottky barrier height.
Whether a given metal-semiconductor junction is an ohmic contact or a Schottky barrier depends on the Schottky barrier height, ΦB, of the junction.
For a sufficiently large Schottky barrier height, that is, ΦB is significantly higher than the thermal energy "kT", the semiconductor is depleted near the metal and behaves as a Schottky barrier. For lower Schottky barrier heights, the semiconductor is not depleted and instead forms an ohmic contact to the metal.
The Schottky barrier height is defined differently for n-type and p-type semiconductors (being measured from the conduction band edge and valence band edge, respectively). The alignment of the semiconductor's bands near the junction is typically independent of the semiconductor's doping level, so the "n"-type and "p"-type Schottky barrier heights are ideally related to each other by:
formula_0
where "E"g is the semiconductor's band gap.
In practice, the Schottky barrier height is not precisely constant across the interface, and varies over the interfacial surface.
Schottky–Mott rule and Fermi level pinning.
The Schottky–Mott rule of Schottky barrier formation, named for Walter H. Schottky and Nevill Mott, predicts the Schottky barrier height based on the vacuum work function of the metal relative to the vacuum electron affinity (or vacuum ionization energy) of the semiconductor:
formula_1
This model is derived based on the thought experiment of bringing together the two materials in vacuum, and is closely related in logic to Anderson's rule for semiconductor-semiconductor junctions. Different semiconductors respect the Schottky–Mott rule to varying degrees.
Although the Schottky–Mott model correctly predicted the existence of band bending in the semiconductor, it was found experimentally that it would give grossly incorrect predictions for the height of the Schottky barrier. A phenomenon referred to as "Fermi level pinning" caused some point of the band gap, at which finite DOS exists, to be locked (pinned) to the Fermi level. This made the Schottky barrier height almost completely insensitive to the metal's work function:
formula_2
where "E"bandgap is the size of band gap in the semiconductor.
In fact, empirically, it is found that neither of the above extremes is quite correct. The choice of metal does have some effect, and there appears to be a weak correlation between the metal work function and the barrier height; however, the influence of the work function is only a fraction of that predicted by the Schottky–Mott rule.
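The two limiting predictions can be compared numerically; the sketch below is illustrative only, using approximate textbook values (gold work function ≈ 5.1 eV, silicon electron affinity ≈ 4.05 eV, silicon band gap ≈ 1.12 eV) that are not taken from this article:

def schottky_mott_barrier(metal_work_function_eV, electron_affinity_eV):
    """Ideal n-type barrier height, Phi_B = Phi_metal - chi_semiconductor (in eV)."""
    return metal_work_function_eV - electron_affinity_eV

def fully_pinned_barrier(band_gap_eV):
    """Crude fully-pinned estimate, Phi_B ~ E_bandgap / 2 (in eV)."""
    return band_gap_eV / 2.0

print(schottky_mott_barrier(5.1, 4.05))   # ~1.05 eV from the Schottky-Mott rule
print(fully_pinned_barrier(1.12))         # ~0.56 eV in the fully pinned limit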
It was noted in 1947 by John Bardeen that the Fermi level pinning phenomenon would naturally arise if there were chargeable states in the semiconductor right at the interface, with energies inside the semiconductor's gap. These would either be induced during the direct chemical bonding of the metal and semiconductor (metal-induced gap states) or be already present in the semiconductor–vacuum surface (surface states). These highly dense surface states would be able to absorb a large quantity of charge donated from the metal, effectively shielding the semiconductor from the details of the metal. As a result, the semiconductor's bands would necessarily align to a location relative to the surface states which are in turn pinned to the Fermi level (due to their high density), all without influence from the metal.
The Fermi level pinning effect is strong in many commercially important semiconductors (Si, Ge, GaAs), and thus can be problematic for the design of semiconductor devices. For example, nearly all metals form a significant Schottky barrier to "n"-type germanium and an ohmic contact to "p"-type germanium, since the valence band edge is strongly pinned to the metal's Fermi level. The solution to this inflexibility requires additional processing steps such as adding an intermediate insulating layer to unpin the bands. (In the case of germanium, germanium nitride has been used)
History.
The rectification property of metal–semiconductor contacts was discovered by Ferdinand Braun in 1874 using mercury metal contacted with copper sulfide and iron sulfide semiconductors. Sir Jagadish Chandra Bose applied for a US patent for a metal-semiconductor diode in 1901. This patent was awarded in 1904. G.W. Pickard received a patent in 1906 on a point-contact rectifier using silicon. In 1907, George W. Pierce published a paper in Physical Review showing rectification properties of diodes made by sputtering many metals on many semiconductors. The use of the metal–semiconductor diode rectifier was proposed by Lilienfeld in 1926 in the first of his three transistor patents as the gate of the metal–semiconductor field effect transistors. The theory of the field-effect transistor using a metal/semiconductor gate was advanced by William Shockley in 1939.
The earliest metal–semiconductor diodes in electronics application occurred around 1900, when the cat's whisker rectifiers were used in receivers. They consisted of pointed tungsten wire (in the shape of a cat's whisker) whose tip or point was pressed against the surface of a galena (lead sulfide) crystal. The first large area rectifier appeared around 1926 which consisted of a copper(I) oxide semiconductor thermally grown on a copper substrate. Subsequently, selenium films were evaporated onto large metal substrates to form the rectifying diodes. These selenium rectifiers were used (and are still used) to convert alternating current to direct current in electrical power applications. During 1925–1940, diodes consisting of a pointed tungsten metal wire in contact with a silicon crystal base, were fabricated in laboratories to detect microwaves in the UHF range. A World War II program to manufacture high-purity silicon as the crystal base for the point-contact rectifier was suggested by Frederick Seitz in 1942 and successfully undertaken by the Experimental Station of the E. I du Pont de Nemours Company.
The first theory that predicted the correct direction of rectification of the metal–semiconductor junction was given by Nevill Mott in 1939. He found the solution for both the diffusion and drift currents of the majority carriers through the semiconductor surface space charge layer which has been known since about 1948 as the Mott barrier. Walter H. Schottky and Spenke extended Mott's theory by including a donor ion whose density is spatially constant through the semiconductor surface layer. This changed the constant electric field assumed by Mott to a linearly decaying electric field. This semiconductor space-charge layer under the metal is known as the Schottky barrier. A similar theory was also proposed by Davydov in 1939. Although it gives the correct direction of rectification, it has also been proven that the Mott theory and its Schottky-Davydov extension gives the wrong current limiting mechanism and wrong current-voltage formulae in silicon metal/semiconductor diode rectifiers. The correct theory was developed by Hans Bethe and reported by him in a M.I.T. Radiation Laboratory Report dated November 23, 1942. In Bethe's theory, the current is limited by thermionic emission of electrons over the metal–semiconductor potential barrier. Thus, the appropriate name for the metal–semiconductor diode should be the Bethe diode, instead of the Schottky diode, since the Schottky theory does not predict the modern metal–semiconductor diode characteristics correctly.
If a metal–semiconductor junction is formed by placing a droplet of mercury, as Braun did, onto a semiconductor, e.g. silicon, to form a Schottky barrier in a Schottky diode electrical setup, electrowetting can be observed, where the droplet spreads out with increasing voltage. Depending on the doping type and density in the semiconductor, the droplet spreading depends on the magnitude and sign of the voltage applied to the mercury droplet. This effect has been termed ‘Schottky electrowetting’, effectively linking electrowetting and semiconductor effects.
The MOSFET (metal–oxide–semiconductor field-effect transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, and presented in 1960. They extended their work on MOS technology to do pioneering work on hot carrier devices, which used what would later be called a Schottky barrier. The Schottky diode, also known as the Schottky-barrier diode, was theorized for years, but was first practically realized as a result of the work of Atalla and Kahng during 1960–1961. They published their results in 1962 and called their device the "hot electron" triode structure with semiconductor-metal emitter. It was one of the first metal-base transistors. Atalla continued research on Schottky diodes with Robert J. Archer at HP Associates. They developed high vacuum metal film deposition technology, and fabricated stable evaporated/sputtered contacts, publishing their results in January 1963. Their work was a breakthrough in metal–semiconductor junction and Schottky barrier research, as it overcame most of the fabrication problems inherent in point-contact diodes and made it possible to build practical Schottky diodes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Phi_{\\rm B}^{(n)} + \\Phi_{\\rm B}^{(p)} = E_{\\rm g}"
},
{
"math_id": 1,
"text": "\\Phi_{\\rm B}^{(n)} \\approx \\Phi_{\\rm metal} - \\chi_{\\rm semi}"
},
{
"math_id": 2,
"text": "\\Phi_{\\rm B} \\approx \\frac{1}{2} E_{\\rm bandgap}"
}
] |
https://en.wikipedia.org/wiki?curid=14355756
|
14356754
|
Hatta number
|
The Hatta number (Ha) was developed in 1932 by Shirôji Hatta (1895–1973), who taught at Tohoku University from 1925 to 1958. It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. For a second order reaction ("r"A = "k"2"C"B"C"A), the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration ("C"A,i); thus, the maximum rate of reaction is "k"2"C"B,bulk"C"A,i"δ"L.
formula_0
For a reaction "m"th order in "A" and "n"th order in "B":
formula_1
For gas-liquid absorption with chemical reactions, a high Hatta number indicates the reaction is much faster than diffusion. In this case, the reaction occurs within a thin film, and the surface area limits the overall rate. Conversely, a Hatta number smaller than unity suggests the reaction is the limiting factor, and the reaction takes place in the bulk fluid, requiring larger volumes.
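A small numerical sketch of the second-order expression above; all input values are arbitrary illustrative numbers, not taken from any source:

import math

def hatta_second_order(k2, c_b_bulk, d_a, k_l):
    """Ha = sqrt(k2 * C_B,bulk * D_A) / k_L for a second-order reaction."""
    return math.sqrt(k2 * c_b_bulk * d_a) / k_l

# illustrative units: k2 in m^3/(mol s), C_B,bulk in mol/m^3, D_A in m^2/s, k_L in m/s
ha = hatta_second_order(k2=10.0, c_b_bulk=1000.0, d_a=1e-9, k_l=1e-4)
print(ha)   # about 31.6: reaction is much faster than diffusion, so it occurs in the film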
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Ha^2 = {{k_{2} C_{A,i} C_{B,bulk} \\delta_L} \\over {\\frac{D_A}{\\delta_L}\\ C_{A,i}}} = {{k_2 C_{B,bulk} D_A} \\over ({\\frac{D_A}{\\delta_L}}) ^2} = {{k_2 C_{B,bulk} D_A} \\over {{k_L} ^2}}"
},
{
"math_id": 1,
"text": "Ha = {{ \\sqrt{{\\frac{2}{{m} + 1}}k_{m,n} {C_{A,i}}^{m - 1} C_{B,bulk}^n {D}_A}} \\over {{k}_L}}"
}
] |
https://en.wikipedia.org/wiki?curid=14356754
|
14356889
|
Gamow factor
|
Chance of overcoming the Coulomb barrier
The Gamow factor, Sommerfeld factor or Gamow–Sommerfeld factor, named after its discoverer George Gamow or after Arnold Sommerfeld, is a probability factor for two nuclear particles' chance of overcoming the Coulomb barrier in order to undergo nuclear reactions, for example in nuclear fusion. By classical physics, there is almost no possibility for protons to fuse by crossing each other's Coulomb barrier at temperatures commonly observed to cause fusion, such as those found in the Sun. When George Gamow instead applied quantum mechanics to the problem, he found that there was a significant chance for the fusion due to tunneling.
The probability of two nuclear particles overcoming their electrostatic barriers is given by the following equation:
formula_0
where formula_1 is the Gamow energy,
formula_2
Here, formula_3 is the reduced mass of the two particles. The constant formula_4 is the fine-structure constant, formula_5 is the speed of light, and formula_6 and formula_7 are the respective atomic numbers of each particle.
While the probability of overcoming the Coulomb barrier increases rapidly with increasing particle energy, for a given temperature, the probability of a particle having such an energy falls off very fast, as described by the Maxwell–Boltzmann distribution. Gamow found that, taken together, these effects mean that for any given temperature, the particles that fuse are mostly in a temperature-dependent narrow range of energies known as the Gamow window.
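For illustration, the probability formula above can be evaluated directly; the sketch below uses standard approximate constants and considers two protons at an energy of a few keV (roughly solar-core conditions):

import math

def gamow_probability(E_keV, Z_a=1, Z_b=1, m_a_MeV=938.272, m_b_MeV=938.272):
    """P_G(E) = exp(-sqrt(E_G / E)), with E_G = 2 m_r c^2 (pi * alpha * Z_a * Z_b)^2."""
    alpha = 1.0 / 137.036                                            # fine-structure constant
    m_r_c2_keV = 1000.0 * m_a_MeV * m_b_MeV / (m_a_MeV + m_b_MeV)    # reduced mass rest energy
    E_G_keV = 2.0 * m_r_c2_keV * (math.pi * alpha * Z_a * Z_b) ** 2  # ~493 keV for two protons
    return math.exp(-math.sqrt(E_G_keV / E_keV))

print(gamow_probability(2.0))   # tiny, but vastly larger than the classical prediction of zero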
Derivation.
Gamow first solved the one-dimensional case of quantum tunneling using the WKB approximation. Considering a wave function of a particle of mass "m", we take area 1 to be where a wave is emitted, area 2 the potential barrier which has height "V" and width "l" (at formula_8), and area 3 its other side, where the wave is arriving, partly transmitted and partly reflected. For a wave number "k" and energy "E" we get:
formula_9
formula_10
formula_11
where formula_12 and formula_13.
This is solved for given "A" and "α" by taking the boundary conditions at both barrier edges, at formula_14 and formula_15, where both formula_16 and its derivative must be equal on both sides.
For formula_17, this is easily solved by ignoring the time exponential and considering the real part alone (the imaginary part has the same behavior). We get, up to factors depending on the phases which are typically of order 1, and up to factors of the order of formula_18 (assumed not very large, since "V" is greater than "E" by more than a marginal amount):
formula_19
formula_20
Next Gamow modeled the alpha decay as a symmetric one-dimensional problem, with a standing wave between two symmetric potential barriers at formula_21 and formula_22, and emitting waves at both outer sides of the barriers.
Solving this can in principle be done by taking the solution of the first problem, translating it by formula_23 and gluing it to an identical solution reflected around formula_14.
Due to the symmetry of the problem, the emitting waves on both sides must have equal amplitudes ("A"), but their phases ("α") may be different. This gives a single extra parameter; however, gluing the two solutions at formula_14 requires two boundary conditions (for both the wave function and its derivative), so in general there is no solution. In particular, re-writing formula_24 (after translation by formula_23) as a sum of a cosine and a sine of formula_25, each having a different factor that depends on "k" and "α", the factor of the sine must vanish, so that the solution can be glued symmetrically to its reflection. Since the factor is in general complex (hence its vanishing imposes two constraints, representing the two boundary conditions), this can in general be solved by adding an imaginary part of "k", which gives the extra parameter needed. Thus "E" will have an imaginary part as well.
The physical meaning of this is that the standing wave in the middle decays; the newly emitted waves therefore have smaller amplitudes, so that their amplitude decays in time but grows with distance. The decay constant, denoted "λ", is assumed small compared to formula_26.
"λ" can be estimated without solving explicitly, by noting its effect on the probability current conservation law. Since the probability flows from the middle to the sides, we have:
formula_27
Note the factor of 2 is due to having two emitted waves.
Taking formula_28, this gives:
formula_29
Since the quadratic dependence in formula_30 is negligible relative to its exponential dependence, we may write:
formula_31
Remembering the imaginary part added to "k" is much smaller than the real part, we may now neglect it and get:
formula_32
Note that formula_33 is the particle velocity, so the first factor is the classical rate at which the particle trapped between the barriers hits them.
Finally, moving to the three-dimensional problem, the spherically symmetric Schrödinger equation reads (expanding the wave function formula_34 in spherical harmonics and looking at the n-th term):
formula_35
Since formula_36 amounts to enlarging the potential, and therefore substantially reducing the decay rate (given its exponential dependence on formula_37), we focus on formula_38, and get a very similar problem to the previous one with formula_39, except that now the potential as a function of "r" is not a step function.
The main effect of this on the amplitudes is that we must replace the argument in the exponent, taking an integral of formula_40 over the distance where formula_41 rather than multiplying by "l". We take the Coulomb potential:
formula_42
where formula_43 is the vacuum electric permittivity, "e" the electron charge, "z" = 2 is the charge number of the alpha particle and "Z" the charge number of the nucleus ("Z"-"z" after emitting the particle). The integration limits are then formula_44, where we assume the nuclear potential energy is still relatively small, and formula_45, which is where the nuclear negative potential energy is large enough so that the overall potential is smaller than "E". Thus, the argument of the exponent in "λ" is:
formula_46
This can be solved by substituting formula_47 and then formula_48 and solving for θ, giving:
formula_49
where formula_50.
Since "x" is small, the "x"-dependent factor is of order 1.
Gamow assumed formula_51, thus replacing the "x"-dependent factor by formula_52, giving:
formula_53
with:
formula_54
which is the same as the formula given in the beginning of the article with formula_55, formula_56
and the fine-structure constant formula_57.
For a radium alpha decay, "Z" = 88, "z" = 2 and "m" = 4"m"p, "E"G is approximately 50 GeV. Gamow calculated the slope of formula_58 with respect to "E" at an energy of 5 MeV to be ~ 10¹⁴ J⁻¹, compared to the experimental value of .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P_\\text{G}(E) = e^{-\\sqrt{{E_\\text{G}}/{E}}}"
},
{
"math_id": 1,
"text": "E_\\text{G}"
},
{
"math_id": 2,
"text": "E_\\text{G} \\equiv 2 m_\\text{r} c^2 (\\pi \\alpha Z_\\text{a} Z_\\text{b})^2"
},
{
"math_id": 3,
"text": "m_\\text{r} = \\frac{m_\\text{a} m_\\text{b}}{m_\\text{a} + m_\\text{b}}"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "Z_\\text{a}"
},
{
"math_id": 7,
"text": "Z_\\text{b}"
},
{
"math_id": 8,
"text": "0<x<l"
},
{
"math_id": 9,
"text": "\\Psi_1 = A e^{i(kx+\\alpha)} e^{-i{Et}/{\\hbar}}"
},
{
"math_id": 10,
"text": "\\Psi_2 = B_1 e^{-k'x} + B_2 e^{k'x}"
},
{
"math_id": 11,
"text": "\\Psi_3 = (C_1 e^{-i(kx+\\beta)}+C_2 e^{i(kx+\\beta')}) e^{-i{Et}/{\\hbar}}"
},
{
"math_id": 12,
"text": "k = \\sqrt{2mE}"
},
{
"math_id": 13,
"text": "k' = \\sqrt{2m(V-E)}"
},
{
"math_id": 14,
"text": "x=0"
},
{
"math_id": 15,
"text": "x=l"
},
{
"math_id": 16,
"text": "\\Psi"
},
{
"math_id": 17,
"text": "k'l \\gg 1"
},
{
"math_id": 18,
"text": "{k}/{k'}=\\sqrt{{E}/{(V-E)}}"
},
{
"math_id": 19,
"text": "B_1, B_2 \\approx A"
},
{
"math_id": 20,
"text": "C_1, C_2 \\approx \\frac{1}{2}A\\cdot\\frac{k'}{k}\\cdot e^{k'l}"
},
{
"math_id": 21,
"text": "q_0<x<q_0+l"
},
{
"math_id": 22,
"text": "-(q_0+l)<x<-q_0"
},
{
"math_id": 23,
"text": "q_0"
},
{
"math_id": 24,
"text": "\\Psi_3"
},
{
"math_id": 25,
"text": "kx"
},
{
"math_id": 26,
"text": "E/\\hbar"
},
{
"math_id": 27,
"text": " \\frac {\\partial}{\\partial t} \\int_{-(q_0+l)}^{(q_0+l)} \\Psi^*\\Psi\\ dx = 2\\cdot\\frac{\\hbar}{2mi}\\left(\\Psi_1^* \\frac{\\partial \\Psi_1 }{\\partial x}- \\Psi_1 \\frac{\\partial \\Psi_1^* }{\\partial x} \\right) ,"
},
{
"math_id": 28,
"text": "\\Psi\\sim e^{-\\lambda t}"
},
{
"math_id": 29,
"text": " \\lambda \\cdot\\frac{1}{4}\\cdot 2(q_0+l) A^2 \\frac{k'^2}{k^2} \\cdot e^{2k'l} \\approx 2\\frac{\\hbar}{m} A^2 k ,"
},
{
"math_id": 30,
"text": "k'l"
},
{
"math_id": 31,
"text": " \\lambda \\approx \\frac{\\hbar k}{m (q_0+l)} \\frac{k^2}{k'^2} \\cdot e^{-2k'l} "
},
{
"math_id": 32,
"text": " \\lambda \\approx \\frac{\\hbar k}{m 2(q_0+l)} \\cdot 8\\frac{E}{V-E} \\cdot e^{-2\\sqrt{2m(V-E)}l/\\hbar} "
},
{
"math_id": 33,
"text": "\\frac{\\hbar k}{m}"
},
{
"math_id": 34,
"text": "\\psi(r,\\theta,\\phi) = \\chi(r)u(\\theta,\\phi)"
},
{
"math_id": 35,
"text": "\\frac {\\hbar^2}{2m}\\left(\\frac{d^2\\chi}{dr^2} + \\frac{2}{r}\\frac{d\\chi}{dr}\\right)= \\left(V(r) + \\frac {\\hbar^2}{2m}\\frac{n(n+1)}{r^2} -E\\right)\\chi"
},
{
"math_id": 36,
"text": "n>0"
},
{
"math_id": 37,
"text": "\\sqrt{V-E}"
},
{
"math_id": 38,
"text": "n=0"
},
{
"math_id": 39,
"text": "\\chi(r) = \\Psi(r)/r "
},
{
"math_id": 40,
"text": " 2\\sqrt{2m(V-E)}/\\hbar "
},
{
"math_id": 41,
"text": "V(r)>E"
},
{
"math_id": 42,
"text": " V(r) = \\frac {z(Z-z) e^2}{4\\pi\\varepsilon_0 r}"
},
{
"math_id": 43,
"text": "\\varepsilon_0"
},
{
"math_id": 44,
"text": "r_2 = \\frac {z(Z-z) e^2}{4\\pi\\varepsilon_0 E}"
},
{
"math_id": 45,
"text": "r_1"
},
{
"math_id": 46,
"text": " 2\\frac {\\sqrt{2mE}}{\\hbar} \\int_{r_1}^{r_2} \\sqrt{\\frac{V(r)}{E}-1} \\, dr = 2\\frac {\\sqrt{2mE}}{\\hbar} \\int_{r_1}^{r_2} \\sqrt{\\frac{r_2}{r}-1} \\,dr "
},
{
"math_id": 47,
"text": "t = \\sqrt{r/r_2}"
},
{
"math_id": 48,
"text": "t = cos(\\theta) "
},
{
"math_id": 49,
"text": "2\\cdot r_2\\frac{\\sqrt{2mE}}{\\hbar} \\cdot(\\cos^{-1}(\\sqrt{x}) - \\sqrt{x}\\sqrt{1-x}) = 2\\frac{\\sqrt{2m}z(Z-z) e^2}{4\\pi\\varepsilon_0 \\hbar \\sqrt{E}} \\cdot(\\cos^{-1}(\\sqrt{x}) - \\sqrt{x}\\sqrt{1-x})"
},
{
"math_id": 50,
"text": "x = r_1/r_2"
},
{
"math_id": 51,
"text": "x\\ll 1"
},
{
"math_id": 52,
"text": "\\pi / 2"
},
{
"math_id": 53,
"text": "\\lambda \\sim e^{-\\sqrt{{E_g}/{E}}}"
},
{
"math_id": 54,
"text": "E_g = \\frac{2\\pi^2m \\left[z(Z-z) e^2\\right]^2}{4\\pi\\varepsilon_0 \\hbar^2}"
},
{
"math_id": 55,
"text": "Z_\\text{a}=z"
},
{
"math_id": 56,
"text": "Z_\\text{b}=Z-z"
},
{
"math_id": 57,
"text": "\\alpha = \\frac{e^2}{4\\pi\\varepsilon_0 \\hbar c}"
},
{
"math_id": 58,
"text": "\\log(\\lambda)"
}
] |
https://en.wikipedia.org/wiki?curid=14356889
|
1435816
|
Net national product
|
Net national product (NNP) is gross national product (GNP), i.e. the total market value of all final goods and services produced by the factors of production of a country or other polity during a given time period, minus depreciation. Similarly, net domestic product (NDP) is gross domestic product (GDP) minus depreciation. Depreciation describes the devaluation of fixed capital through wear and tear associated with its use in productive activities.
Closely related to the concept of GNP is the NNP of a country. NNP is a more accurate measure of the total value of goods and services produced by a country, and it is derived from GNP figures. As a rough estimate, GNP is a very useful indicator of a country's total production, but if we want an accurate measure of what a country is producing and what is available for use, then GNP has a serious defect.
In national accounting, net national product (NNP) and net domestic product (NDP) are given by the two following formulas:
formula_0
formula_1
Use in economics.
Although the net national product is a key identity in national accounting, its use in economics research is generally superseded by the use of the gross domestic or national product as a measure of national income, a preference which has been historically a contentious topic (see e.g. Boulding (1948) and Burk (1948)). Nonetheless, the net national product has been the subject of research on its role as a dynamic welfare indicator as well as a means of reconciling forward and backward views on capital wherein NNP("t") corresponds to the interest on accumulated capital. Furthermore, the net national product has featured prominently as a measure in environmental economics such as within models accounting for the depletion of natural and environmental resources or as an indicator of sustainability.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "NNP = GNP - Depreciation"
},
{
"math_id": 1,
"text": "NDP = GDP - Depreciation"
}
] |
https://en.wikipedia.org/wiki?curid=1435816
|
14359
|
Huygens–Fresnel principle
|
Method of analysis
The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms a new wavefront. As such, the Huygens-Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction as well as reflection.
History.
In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave; the sum of these secondary waves determines the form of the wave at any subsequent time. He assumed that the secondary waves travelled only in the "forward" direction, and it is not explained in the theory why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects. The resolution of this error was finally explained by David A. B. Miller in 1991. The resolution is that the source is a dipole (not the monopole assumed by Huygens), which cancels in the reflected direction.
In 1818, Fresnel showed that Huygens's principle, together with his own principle of interference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including the Poisson spot.
Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, Arago, another member of the committee, performed the experiment and showed that the prediction was correct. (Lisle had observed this fifty years earlier.) This was one of the investigations that led to the victory of the wave theory of light over then predominant corpuscular theory.
In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as surface equivalence principle.
Huygens' principle as a microscopic model.
The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations made in deriving Kirchhoff's diffraction formula and the near-field approximations due to Fresnel. These can be summarized in the fact that the wavelength of light must be much smaller than the dimensions of any optical components encountered.
Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation.
A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.
Modern physics interpretations.
Not all experts agree that the Huygens' principle is an accurate microscopic representation of reality. For instance, Melvin Schwartz argued that "Huygens' principle actually does give the right answer but for the wrong reasons".
This can be reflected in the following facts:
The Huygens' principle is essentially compatible with quantum field theory in the far-field approximation, considering effective fields in the center of scattering and small perturbations, in the same sense that quantum optics is compatible with classical optics; other interpretations are the subject of debate and active research.
The Feynman model, in which every point on an imaginary wave front as large as the room generates a wavelet, should also be interpreted within these approximations and in a probabilistic context; in this context, remote points can only contribute minimally to the overall probability amplitude.
Quantum field theory does not include any microscopic model for photon creation, and the concept of a single photon is also put under scrutiny on a theoretical level.
Mathematical expression of the principle.
Consider the case of a point source located at a point P0, vibrating at a frequency "f". The disturbance may be described by a complex variable "U"0 known as the complex amplitude. It produces a spherical wave with wavelength λ, wavenumber "k" = 2"π"/"λ". Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance "r"0 from P0 is:
formula_0
Note that magnitude decreases in inverse proportion to the distance traveled, and the phase changes as "k" times the distance traveled.
Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius "r"0. In order to get agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −"i"/λ, and by an additional inclination factor, "K"(χ). The first assumption means that the secondary waves oscillate a quarter of a cycle out of phase with respect to the primary wave and that the magnitude of the secondary waves is in a ratio of 1:λ to the primary wave. He also assumed that "K"(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by:
formula_1
where "S" describes the surface of the sphere, and "s" is the distance between Q and P.
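As a rough numerical illustration of this sum (an added sketch, not part of the standard treatment; all parameters below are arbitrary assumptions), the integral can be approximated for a narrow slit by adding the wavelets e"iks"/"s" from a set of secondary sources across the aperture, taking "K"(χ) ≈ 1 near the axis and dropping the constant prefactor from the normalised intensity.
<syntaxhighlight lang="python">
import numpy as np

# Sum secondary wavelets exp(i*k*s)/s from points across a narrow slit and
# record the intensity on a distant screen.  All values are assumptions.
wavelength = 500e-9                     # assumed: 500 nm light
k = 2 * np.pi / wavelength              # wavenumber k = 2*pi/lambda
slit_width = 50e-6                      # assumed: 50 micrometre slit
screen_distance = 1.0                   # assumed: 1 m to the screen

sources = np.linspace(-slit_width / 2, slit_width / 2, 2000)   # secondary sources Q
screen = np.linspace(-0.05, 0.05, 1001)                        # observation points P

intensity = np.empty_like(screen)
for n, x_p in enumerate(screen):
    s = np.hypot(screen_distance, x_p - sources)   # distance from each Q to P
    u_p = np.sum(np.exp(1j * k * s) / s)           # sum of wavelets, K(chi) ~ 1 near axis
    intensity[n] = abs(u_p) ** 2

intensity /= intensity.max()            # normalised single-slit diffraction pattern
</syntaxhighlight>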
Fresnel used a zone construction method to find approximate values of "K" for the different zones, which enabled him to make predictions that were in agreement with experimental results. The integral theorem of Kirchhoff includes the basic idea of the Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated by a simpler form that is equivalent to Fresnel's formulation.
For an aperture illumination consisting of a single expanding spherical wave, if the radius of the curvature of the wave is sufficiently large, Kirchhoff gave the following expression for "K"(χ):
formula_2
"K" has a maximum value at χ = 0 as in the Huygens–Fresnel principle; however, "K" is not equal to zero at χ = π/2, but at χ = π.
The above derivation of "K"(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations. An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually. "K"(χ) can be generally expressed as:
formula_3
In this case, "K" satisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2).
Generalized Huygens' principle.
Many books and references - e.g. (Greiner, 2002) and (Enders, 2009) - refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948).
Feynman defines the generalized principle in the following way:
<templatestyles src="Template:Blockquote/styles.css" />"Actually Huygens’ principle is not correct in optics. It is replaced by Kirchoff’s [sic] modification which requires that both the amplitude and its derivative must be known on the adjacent surface. This is a consequence of the fact that the wave equation in optics is second order in the time. The wave equation of quantum mechanics is first order in the time; therefore, Huygens’ principle is correct for matter waves, action replacing time."
This clarifies the fact that in this context the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum mechanics equations are first order in time. Finally, only in this case does the superposition principle fully apply, i.e. the wave function at a point P can be expanded as a superposition of waves on a border surface enclosing P. Wave functions can be interpreted in the usual quantum mechanical sense as probability densities where the formalism of Green's functions and propagators applies. What is noteworthy is that this generalized principle is applicable for "matter waves" and no longer for light waves. The phase factor is now clarified as given by the action, and there is no longer any confusion as to why the phases of the wavelets differ from that of the original wave and are modified by the additional Fresnel parameters.
According to Greiner, the generalized principle can be expressed for formula_4 in the form:
formula_5
where "G" is the usual Green's function that propagates the wave function formula_6 in time. This description resembles and generalizes Fresnel's initial formula from the classical model.
Huygens' theory, Feynman's path integral and the modern photon wave function.
Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909. It was not until the early and mid-1900s that quantum theory discussions, particularly those at the 1927 Brussels Solvay Conference, addressed this, with Louis de Broglie proposing his hypothesis that the photon is guided by a wave function.
The wave function presents a much different explanation of the observed light and dark bands in a double slit experiment. In this conception, the photon follows a path which is a probabilistic choice of one of many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. The set of possible photon paths is consistent with Richard Feynman's path integral theory: the paths are determined by the surroundings (the photon's originating point (atom), the slit, and the screen) and by tracking and summing phases. The wave function is a solution to this geometry. The wave function approach was further supported by additional double-slit experiments in Italy and Japan in the 1970s and 1980s with electrons.
Huygens' principle and quantum field theory.
Huygens' principle can be seen as a consequence of the homogeneity of space—space is uniform in all locations. Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. The superposition of all the waves results in the observed pattern of wave propagation.
Homogeneity of space is fundamental to quantum field theory (QFT) where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.
In other spatial dimensions.
In 1900, Jacques Hadamard observed that Huygens' principle was broken when the number of spatial dimensions is even. From this, he developed a set of conjectures that remain an active topic of research. In particular, it has been discovered that Huygens' principle holds on a large class of homogeneous spaces derived from the Coxeter group (so, for example, the Weyl groups of simple Lie algebras).
The traditional statement of Huygens' principle for the D'Alembertian gives rise to the KdV hierarchy; analogously, the Dirac operator gives rise to the AKNS hierarchy.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "U(r_0) \\propto \\frac {U_0 e^{ikr_0}}{r_0}. "
},
{
"math_id": 1,
"text": " U(P) = -\\frac{i}{\\lambda} U(r_0) \\int_{S} \\frac {e^{iks}}{s} K(\\chi)\\,dS "
},
{
"math_id": 2,
"text": "~K(\\chi )= \\frac{1}{2}(1+\\cos \\chi)"
},
{
"math_id": 3,
"text": "~K(\\chi )= \\cos \\chi"
},
{
"math_id": 4,
"text": "t'>t "
},
{
"math_id": 5,
"text": "\\psi'(\\mathbf{x}',t') = i \\int d^3x \\, G(\\mathbf{x}',t';\\mathbf{x},t)\\psi(\\mathbf{x},t)"
},
{
"math_id": 6,
"text": "\\psi"
}
] |
https://en.wikipedia.org/wiki?curid=14359
|
143608
|
Deferent and epicycle
|
Planetary motions in archaic models of the Solar System
In the Hipparchian, Ptolemaic, and Copernican systems of astronomy, the epicycle (from grc " "" ()" 'upon the circle', meaning "circle moving on another circle") was a geometric model used to explain the variations in speed and direction of the apparent motion of the Moon, Sun, and planets. In particular it explained the apparent retrograde motion of the five planets known at the time. Secondarily, it also explained changes in the apparent distances of the planets from the Earth.
It was first proposed by Apollonius of Perga at the end of the 3rd century BC, developed further by Hipparchus of Rhodes, who used it extensively during the 2nd century BC, and then formalized and used extensively by Ptolemy in his 2nd century AD astronomical treatise, the "Almagest".
Epicyclical motion is used in the Antikythera mechanism, [citation requested] an ancient Greek astronomical device, for compensating for the elliptical orbit of the Moon, moving faster at perigee and slower at apogee than circular orbits would, using four gears, two of them engaged in an eccentric way that quite closely approximates Kepler's second law.
Epicycles worked very well and were highly accurate, because, as Fourier analysis later showed, any smooth curve can be approximated to arbitrary accuracy with a sufficient number of epicycles. However, they fell out of favor with the discovery that planetary motions were largely elliptical from a heliocentric frame of reference, which led to the discovery that gravity obeying a simple inverse square law could better explain all planetary motions.
Introduction.
In both Hipparchian and Ptolemaic systems, the planets are assumed to move in a small circle called an "epicycle", which in turn moves along a larger circle called a "deferent" (Ptolemy himself described the point but did not give it a name). Both circles rotate eastward and are roughly parallel to the plane of the Sun's apparent orbit under those systems (ecliptic). Despite the fact that the system is considered geocentric, neither of the circles were centered on the earth, rather each planet's motion was centered at a planet-specific point slightly away from the Earth called the "eccentric". The orbits of planets in this system are similar to epitrochoids, but are not exactly epitrochoids because the angle of the epicycle is not a linear function of the angle of the deferent.
In the Hipparchian system the epicycle rotated and revolved along the deferent with uniform motion. However, Ptolemy found that he could not reconcile that with the Babylonian observational data available to him; in particular, the shape and size of the apparent retrogrades differed. The angular rate at which the epicycle traveled was not constant unless he measured it from another point which is now called the "equant" (Ptolemy did not give it a name). It was the angular rate at which the deferent moved around the point midway between the equant and the Earth (the eccentric) that was constant; the epicycle center swept out equal angles over equal times only when viewed from the equant. It was the use of equants to decouple uniform motion from the center of the circular deferents that distinguished the Ptolemaic system. For the outer planets, the angle between the center of the epicycle and the planet was the same as the angle between the Earth and the Sun.
Ptolemy did not predict the relative sizes of the planetary deferents in the "Almagest". All of his calculations were done with respect to a normalized deferent, considering a single case at a time. This is not to say that he believed the planets were all equidistant, but he had no basis on which to measure distances, except for the Moon. He generally ordered the planets outward from the Earth based on their orbit periods. Later he calculated their distances in the "Planetary Hypotheses" and summarized them in the first column of this table:
Had his values for deferent radii relative to the Earth–Sun distance been more accurate, the epicycle sizes would have all approached the Earth–Sun distance. Although all the planets are considered separately, in one peculiar way they were all linked: the lines drawn from the body through the epicentric center of all the planets were all parallel, along with the line drawn from the Sun to the Earth along which Mercury and Venus were situated. That means that all the bodies revolve in their epicycles in lockstep with Ptolemy's Sun (that is, they all have exactly a one-year period).
Babylonian observations showed that for superior planets the planet would typically move through the night sky more slowly than the stars. Each night the planet appeared to lag a little behind the stars, in what is called prograde motion. Near opposition, the planet would appear to reverse and move through the night sky faster than the stars for a time in retrograde motion before reversing again and resuming prograde motion. Epicyclic theory, in part, sought to explain this behavior.
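The qualitative behaviour can be reproduced with a short numerical sketch (an added illustration; the radii and angular rates below are made up, not Ptolemy's values): a point carried on an epicycle whose centre rides on a deferent periodically appears, from the Earth at the centre, to reverse its direction of motion.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative deferent-plus-epicycle model; all numbers are assumptions.
t = np.linspace(0.0, 4.0 * np.pi, 4000)
R, r = 1.0, 0.4          # deferent and epicycle radii (made up)
wd, we = 1.0, 5.0        # deferent and epicycle angular rates (made up)

# planet position = motion of the epicycle centre + motion around the epicycle
x = R * np.cos(wd * t) + r * np.cos(we * t)
y = R * np.sin(wd * t) + r * np.sin(we * t)

longitude = np.unwrap(np.arctan2(y, x))       # apparent longitude seen from Earth
retrograde = np.diff(longitude) < 0.0         # True where the planet appears to back up
print(f"fraction of time in apparent retrograde motion: {retrograde.mean():.2f}")
</syntaxhighlight>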
The inferior planets were always observed to be near the Sun, appearing only shortly before sunrise or shortly after sunset. Their apparent retrograde motion occurs during the transition between evening star into morning star, as they pass between the Earth and the Sun.
History.
When ancient astronomers viewed the sky, they saw the Sun, Moon, and stars moving overhead in a regular fashion. The Babylonians made celestial observations, mainly of the Sun and Moon, as a means of recalibrating and preserving timekeeping for religious ceremonies. Other early civilizations such as the Greeks had thinkers like Thales of Miletus, the first to document and predict a solar eclipse (585 BC), or Heraclides Ponticus. They also saw the "wanderers" or "planetai" (our planets). The regularity in the motions of the wandering bodies suggested that their positions might be predictable.
The most obvious approach to the problem of predicting the motions of the heavenly bodies was simply to map their positions against the star field and then to fit mathematical functions to the changing positions. The introduction of better celestial measurement instruments, such as the introduction of the gnomon by Anaximander, allowed the Greeks to have a better understanding of the passage of time, such as the number of days in a year and the length of seasons, which are indispensable for astronomic measurements.
The ancients worked from a geocentric perspective for the simple reason that the Earth was where they stood and observed the sky, and it is the sky which appears to move while the ground seems still and steady underfoot. Some Greek astronomers (e.g., Aristarchus of Samos) speculated that the planets (Earth included) orbited the Sun, but the optics (and the specific mathematics – Isaac Newton's law of gravitation for example) necessary to provide data that would convincingly support the heliocentric model did not exist in Ptolemy's time and would not come around for over fifteen hundred years after his time. Furthermore, Aristotelian physics was not designed with these sorts of calculations in mind, and Aristotle's philosophy regarding the heavens was entirely at odds with the concept of heliocentrism. It was not until Galileo Galilei observed the moons of Jupiter on 7 January 1610, and the phases of Venus in September 1610, that the heliocentric model began to receive broad support among astronomers, who also came to accept the notion that the planets are individual worlds orbiting the Sun (that is, that the Earth is a planet, too). Johannes Kepler formulated his three laws of planetary motion, which describe the orbits of the planets in the Solar System to a remarkable degree of accuracy utilizing a system that employs elliptical rather than circular orbits. Kepler's three laws are still taught today in university physics and astronomy classes, and the wording of these laws has not changed since Kepler first formulated them four hundred years ago.
The apparent motion of the heavenly bodies with respect to time is cyclical in nature. Apollonius of Perga (3rd century BC) realized that this cyclical variation could be represented visually by small circular orbits, or "epicycles", revolving on larger circular orbits, or "deferents". Hipparchus (2nd century BC) calculated the required orbits. Deferents and epicycles in the ancient models did not represent orbits in the modern sense, but rather a complex set of circular paths whose centers are separated by a specific distance in order to approximate the observed movement of the celestial bodies.
Claudius Ptolemy refined the deferent-and-epicycle concept and introduced the equant as a mechanism that accounts for velocity variations in the motions of the planets. The empirical methodology he developed proved to be extraordinarily accurate for its day and was still in use at the time of Copernicus and Kepler. A heliocentric model is not necessarily more accurate as a system to track and predict the movements of celestial bodies than a geocentric one when considering strictly circular orbits. A heliocentric system would require more intricate systems to compensate for the shift in reference point. It was not until Kepler's proposal of elliptical orbits that such a system became increasingly more accurate than a mere epicyclical geocentric model.
Owen Gingerich describes a planetary conjunction that occurred in 1504 and was apparently observed by Copernicus. In notes bound with his copy of the "Alfonsine Tables", Copernicus commented that "Mars surpasses the numbers by more than two degrees. Saturn is surpassed by the numbers by one and a half degrees." Using modern computer programs, Gingerich discovered that, at the time of the conjunction, Saturn indeed lagged behind the tables by a degree and a half and Mars led the predictions by nearly two degrees. Moreover, he found that Ptolemy's predictions for Jupiter at the same time were quite accurate. Copernicus and his contemporaries were therefore using Ptolemy's methods and finding them trustworthy well over a thousand years after Ptolemy's original work was published.
When Copernicus transformed Earth-based observations to heliocentric coordinates, he was confronted with an entirely new problem. The Sun-centered positions displayed a cyclical motion with respect to time but without retrograde loops in the case of the outer planets. In principle, the heliocentric motion was simpler but with new subtleties due to the yet-to-be-discovered elliptical shape of the orbits. Another complication was caused by a problem that Copernicus never solved: correctly accounting for the motion of the Earth in the coordinate transformation. In keeping with past practice, Copernicus used the deferent/epicycle model in his theory but his epicycles were small and were called "epicyclets".
In the Ptolemaic system the models for each of the planets were different, and so it was with Copernicus' initial models. As he worked through the mathematics, however, Copernicus discovered that his models could be combined in a unified system. Furthermore, if they were scaled so that the Earth's orbit was the same in all of them, the ordering of the planets we recognize today easily followed from the math. Mercury orbited closest to the Sun and the rest of the planets fell into place in order outward, arranged in distance by their periods of revolution.
Although Copernicus' models reduced the magnitude of the epicycles considerably, whether they were simpler than Ptolemy's is moot. Copernicus eliminated Ptolemy's somewhat-maligned equant but at a cost of additional epicycles. Various 16th-century books based on Ptolemy and Copernicus use about equal numbers of epicycles. The idea that Copernicus used only 34 circles in his system comes from his own statement in a preliminary unpublished sketch called the "Commentariolus". By the time he published "De revolutionibus orbium coelestium", he had added more circles. Counting the total number is difficult, but estimates are that he created a system just as complicated, or even more so. Koestler, in his history of man's vision of the universe, puts the number of epicycles used by Copernicus at 48. The popular total of about 80 circles for the Ptolemaic system seems to have appeared in 1898. It may have been inspired by the "non-Ptolemaic" system of Girolamo Fracastoro, who used either 77 or 79 orbs in his system inspired by Eudoxus of Cnidus. Copernicus in his works exaggerated the number of epicycles used in the Ptolemaic system; although original counts ranged to 80 circles, by Copernicus's time the Ptolemaic system had been updated by Peurbach toward the similar number of 40; hence Copernicus effectively replaced the problem of retrograde with further epicycles.
Copernicus' theory was at least as accurate as Ptolemy's but never achieved the stature and recognition of Ptolemy's theory. What was needed was Kepler's elliptical-orbit theory, not published until 1609 and 1619. Copernicus' work provided explanations for phenomena like retrograde motion, but really did not prove that the planets actually orbited the Sun.
Ptolemy's and Copernicus' theories proved the durability and adaptability of the deferent/epicycle device for representing planetary motion. The deferent/epicycle models worked as well as they did because of the extraordinary orbital stability of the solar system. Either theory could be used today had Gottfried Wilhelm Leibniz and Isaac Newton not invented calculus.
According to Maimonides, the now-lost astronomical system of Ibn Bajjah in 12th century Andalusian Spain lacked epicycles. Gersonides of 14th century France also eliminated epicycles, arguing that they did not align with his observations. Despite these alternative models, epicycles were not eliminated until the 17th century, when Johannes Kepler's model of elliptical orbits gradually replaced Copernicus' model based on perfect circles.
Newtonian or classical mechanics eliminated the need for deferent/epicycle methods altogether and produced more accurate theories. By treating the Sun and planets as point masses and using Newton's law of universal gravitation, equations of motion were derived that could be solved by various means to compute predictions of planetary orbital velocities and positions. If approximated as simple two-body problems, for example, they could be solved analytically, while the more realistic n-body problem required numerical methods for solution.
The power of Newtonian mechanics to solve problems in orbital mechanics is illustrated by the discovery of Neptune. Analysis of observed perturbations in the orbit of Uranus produced estimates of the suspected planet's position within a degree of where it was found. This could not have been accomplished with deferent/epicycle methods. Still, Newton in 1702 published "Theory of the Moon's Motion" which employed an epicycle and remained in use in China into the nineteenth century. Subsequent tables based on Newton's "Theory" could have approached arcminute accuracy.
The number of epicycles.
According to one school of thought in the history of astronomy, minor imperfections in the original Ptolemaic system were discovered through observations accumulated over time. It was mistakenly believed that more levels of epicycles (circles within circles) were added to the models to match more accurately the observed planetary motions. The multiplication of epicycles is believed to have led to a nearly unworkable system by the 16th century, and Copernicus is said to have created his heliocentric system in order to simplify the Ptolemaic astronomy of his day, thus succeeding in drastically reducing the number of circles.
<templatestyles src="Template:Blockquote/styles.css" />With better observations additional epicycles and eccentrics were used to represent the newly observed phenomena till in the later Middle Ages the universe became a 'Sphere/With Centric and Eccentric scribbled o'er,/Cycle and Epicycle, Orb in Orb'.
As a measure of complexity, the number of circles is given as 80 for Ptolemy, versus a mere 34 for Copernicus. The highest number appeared in the "Encyclopædia Britannica" on Astronomy during the 1960s, in a discussion of King Alfonso X of Castile's interest in astronomy during the 13th century. (Alfonso is credited with commissioning the Alfonsine Tables.)
<templatestyles src="Template:Blockquote/styles.css" />By this time each planet had been provided with from 40 to 60 epicycles to represent after a fashion its complex movement among the stars. Amazed at the difficulty of the project, Alfonso is credited with the remark that had he been present at the Creation he might have given excellent advice.
As it turns out, a major difficulty with this epicycles-on-epicycles theory is that historians examining books on Ptolemaic astronomy from the Middle Ages and the Renaissance have found absolutely no trace of multiple epicycles being used for each planet. The Alfonsine Tables, for instance, were apparently computed using Ptolemy's original unadorned methods.
Another problem is that the models themselves discouraged tinkering. In a deferent-and-epicycle model, the parts of the whole are interrelated. A change in a parameter to improve the fit in one place would throw off the fit somewhere else. Ptolemy's model is probably optimal in this regard. On the whole it gave good results but missed a little here and there. Experienced astronomers would have recognized these shortcomings and allowed for them.
Mathematical formalism.
According to the historian of science Norwood Russell Hanson:
<templatestyles src="Template:Blockquote/styles.css" />There is no bilaterally-symmetrical, nor eccentrically-periodic curve used in any branch of astrophysics or observational astronomy which could not be smoothly plotted as the resultant motion of a point turning within a constellation of epicycles, finite in number, revolving around a fixed deferent.
Any path—periodic or not, closed or open—can be represented with an infinite number of epicycles. This is because epicycles can be represented as a complex Fourier series; therefore, with a large number of epicycles, very complex paths can be represented in the complex plane.
Let the complex number
<templatestyles src="Block indent/styles.css"/>formula_0
where "a"0 and "k"0 are constants, "i" = √−1 is the imaginary unit, and "t" is time, correspond to a deferent centered on the origin of the complex plane and revolving with a radius "a"0 and angular velocity
<templatestyles src="Block indent/styles.css"/>formula_1
where "T" is the period.
If "z"1 is the path of an epicycle, then the deferent plus epicycle is represented as the sum
<templatestyles src="Block indent/styles.css"/>formula_2
This is an almost periodic function, and is a periodic function just when the ratio of the constants "kj" is rational. Generalizing to "N" epicycles yields the almost periodic function
<templatestyles src="Block indent/styles.css"/>formula_3
which is periodic just when every pair of "kj" is rationally related. Finding the coefficients "aj" to represent a time-dependent path in the complex plane, "z" = "f"("t"), is the goal of reproducing an orbit with deferent and epicycles, and this is a way of "saving the phenomena" (σώζειν τα φαινόμενα).
This parallel was noted by Giovanni Schiaparelli. Pertinent to the Copernican Revolution's debate about "saving the phenomena" versus offering explanations, one can understand why Thomas Aquinas, in the 13th century, wrote:
<templatestyles src="Template:Blockquote/styles.css" />Reason may be employed in two ways to establish a point: firstly, for the purpose of furnishing sufficient proof of some principle [...]. Reason is employed in another way, not as furnishing a sufficient proof of a principle, but as confirming an already established principle, by showing the congruity of its results, as in astronomy the theory of eccentrics and epicycles is considered as established, because thereby the sensible appearances of the heavenly movements can be explained; not, however, as if this proof were sufficient, forasmuch as some other theory might explain them.
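The Fourier-series description above can be illustrated with a short numerical sketch (an added example; the sample path is an arbitrary choice, not drawn from the text): a discrete Fourier transform recovers the coefficients "a""j", and an origin-centred ellipse turns out to need exactly one deferent and one epicycle.
<syntaxhighlight lang="python">
import numpy as np

# Recover epicycle coefficients a_j for a sample closed path z = f(t) by a
# discrete Fourier transform, then rebuild z_N(t) from the dominant circles.
N = 512
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
path = np.cos(t) + 0.6j * np.sin(t)            # assumed sample path: an ellipse

coeffs = np.fft.fft(path) / N                  # a_j, one per integer frequency k_j
freqs = np.fft.fftfreq(N, d=1.0 / N)           # k_j = 0, 1, 2, ..., -2, -1

keep = np.argsort(-np.abs(coeffs))[:2]         # the two largest circles suffice here
z_N = sum(coeffs[j] * np.exp(1j * freqs[j] * t) for j in keep)

print(np.round(np.abs(coeffs[keep]), 3))       # radii 0.8 and 0.2: deferent plus one epicycle
print(np.max(np.abs(z_N - path)))              # ~1e-15: the ellipse is reproduced exactly
</syntaxhighlight>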
Epicycles and the Catholic Church.
Being a system that was for the most part used to justify the geocentric model, with the exception of Copernicus' cosmos, the deferent and epicycle model was favored over the heliocentric ideas that Kepler and Galileo proposed. Later adopters of the epicyclic model such as Tycho Brahe, who considered the Church's scriptures when creating his model, were seen even more favorably. The Tychonic model was a hybrid model that blended the geocentric and heliocentric characteristics, with a still Earth that has the sun and moon surrounding it, and the planets orbiting the Sun. To Brahe, the idea of a revolving and moving Earth was impossible, and the scripture should be always paramount and respected. When Galileo tried to challenge Tycho Brahe's system, the church was dissatisfied with their views being challenged. Galileo's publication did not aid his case in his trial.
Bad science.
"Adding epicycles" has come to be used as a derogatory comment in modern scientific discussion. The term might be used, for example, to describe continuing to try to adjust a theory to make its predictions match the facts. There is a generally accepted idea that extra epicycles were invented to alleviate the growing errors that the Ptolemaic system noted as measurements became more accurate, particularly for Mars. According to this notion, epicycles are regarded by some as the paradigmatic example of bad science.
Copernicus added an extra epicycle to his planets, but that was only in an effort to eliminate Ptolemy's equant, which he considered a philosophical break away from Aristotle's perfection of the heavens. Mathematically, the second epicycle and the equant produce nearly the same results, and many Copernican astronomers before Kepler continued using the equant, as the mathematical calculations were easier. Copernicus' epicycles were also much smaller than Ptolemy's, and were required because the planets in his model moved in perfect circles. Johannes Kepler would later show that the planets move in ellipses, which removed the need for Copernicus' epicycles as well.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z_0=a_0 e^{i k_0 t}\\,,"
},
{
"math_id": 1,
"text": "k_0=\\frac{2\\pi}{T}\\,,"
},
{
"math_id": 2,
"text": "z_2=z_0+z_1=a_0 e^{i k_0 t}+a_1 e^{i k_1 t}\\,."
},
{
"math_id": 3,
"text": "z_N=\\sum_{j=0}^N a_j e^{i k_j t}\\,,"
}
] |
https://en.wikipedia.org/wiki?curid=143608
|
1436104
|
Maximum principle
|
Theorem in complex analysis
In the mathematical fields of differential equations and geometric analysis, the maximum principle is one of the most useful and best known tools of study. Solutions of a differential inequality in a domain "D" satisfy the maximum principle if they achieve their maxima at the boundary of "D".
The maximum principle enables one to obtain information about solutions of differential equations without any explicit knowledge of the solutions themselves. In particular, the maximum principle is a useful tool in the numerical approximation of solutions of ordinary and partial differential equations and in the determination of bounds for the errors in such approximations.
In a simple two-dimensional case, consider a function of two variables "u"("x","y") such that
formula_0
The weak maximum principle, in this setting, says that for any open precompact subset M of the domain of u, the maximum of u on the closure of M is achieved on the boundary of M. The strong maximum principle says that, unless u is a constant function, the maximum cannot also be achieved anywhere on M itself.
Such statements give a striking qualitative picture of solutions of the given differential equation. Such a qualitative picture can be extended to many kinds of differential equations. In many situations, one can also use such maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient. There is no single or most general maximum principle which applies to all situations at once.
In the field of convex optimization, there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary.
Intuition.
A partial formulation of the strong maximum principle.
Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let M be an open subset of Euclidean space and let u be a "C"2 function on M such that
formula_1
where for each i and j between 1 and n, "a""ij" is a function on M with "a""ij" = "a""ji".
Fix some choice of x in M. According to the spectral theorem of linear algebra, all eigenvalues of the matrix ["a""ij"("x")] are real, and there is an orthonormal basis of ℝ"n" consisting of eigenvectors. Denote the eigenvalues by "λ""i" and the corresponding eigenvectors by "v""i", for i from 1 to n. Then the differential equation, at the point x, can be rephrased as
formula_2
The essence of the maximum principle is the simple observation that if each eigenvalue is positive (which amounts to a certain formulation of "ellipticity" of the differential equation) then the above equation imposes a certain balancing of the directional second derivatives of the solution. In particular, if one of the directional second derivatives is negative, then another must be positive. At a hypothetical point where u is maximized, all directional second derivatives are automatically nonpositive, and the "balancing" represented by the above equation then requires all directional second derivatives to be identically zero.
This elementary reasoning could be argued to represent an infinitesimal formulation of the strong maximum principle, which states, under some extra assumptions (such as the continuity of a), that u must be constant if there is a point of M where u is maximized.
Note that the above reasoning is unaffected if one considers the more general partial differential equation
formula_3
since the added term is automatically zero at any hypothetical maximum point. The reasoning is also unaffected if one considers the more general condition
formula_4
in which one can even note the extra phenomena of having an outright contradiction if there is a strict inequality (> rather than ≥) in this condition at the hypothetical maximum point. This phenomenon is important in the formal proof of the classical weak maximum principle.
Non-applicability of the strong maximum principle.
However, the above reasoning no longer applies if one considers the condition
formula_5
since now the "balancing" condition, as evaluated at a hypothetical maximum point of u, only says that a weighted average of manifestly nonpositive quantities is nonpositive. This is trivially true, and so one cannot draw any nontrivial conclusion from it. This is reflected by any number of concrete examples, such as the fact that
formula_6
and on any open region containing the origin, the function −"x"2−"y"2 certainly has a maximum.
The classical weak maximum principle for linear elliptic PDE.
The essential idea.
Let M denote an open subset of Euclidean space. If a smooth function formula_7 is maximized at a point p, then one automatically has, as matters of elementary calculus, the first-derivative condition formula_8 and the second-derivative condition formula_9 meaning that the matrix of second derivatives of u at p is negative semi-definite.
One can view a partial differential equation as the imposition of an algebraic relation between the various derivatives of a function. So, if u is the solution of a partial differential equation, then it is possible that the above conditions on the first and second derivatives of u form a contradiction to this algebraic relation. This is the essence of the maximum principle. Clearly, the applicability of this idea depends strongly on the particular partial differential equation in question.
For instance, if u solves the differential equation
formula_10
then it is clearly impossible to have formula_11 and formula_12 at any point of the domain. So, following the above observation, it is impossible for u to take on a maximum value. If, instead u solved the differential equation formula_13 then one would not have such a contradiction, and the analysis given so far does not imply anything interesting. If u solved the differential equation formula_14 then the same analysis would show that u cannot take on a minimum value.
The possibility of such analysis is not even limited to partial differential equations. For instance, if formula_7 is a function such that
formula_15
which is a sort of "non-local" differential equation, then the automatic strict positivity of the right-hand side shows, by the same analysis as above, that u cannot attain a maximum value.
There are many methods to extend the applicability of this kind of analysis in various ways. For instance, if u is a harmonic function, then the above sort of contradiction does not directly occur, since the existence of a point p where formula_16 is not in contradiction to the requirement formula_17 everywhere. However, one could consider, for an arbitrary real number s, the function "u""s" defined by
formula_18
It is straightforward to see that
formula_19
By the above analysis, if formula_20 then "u""s" cannot attain a maximum value. One might wish to consider the limit as s tends to 0 in order to conclude that u also cannot attain a maximum value. However, it is possible for the pointwise limit of a sequence of functions without maxima to have a maximum. Nonetheless, if M has a boundary such that M together with its boundary is compact, then supposing that u can be continuously extended to the boundary, it follows immediately that both u and "u""s" attain a maximum value on formula_21 Since we have shown that "u""s", as a function on M, does not have a maximum, it follows that the maximum point of "u""s", for any s, is on formula_22 By the sequential compactness of formula_23 it follows that the maximum of u is attained on formula_22 This is the weak maximum principle for harmonic functions. This does not, by itself, rule out the possibility that the maximum of u is also attained somewhere on M. That is the content of the "strong maximum principle," which requires further analysis.
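A quick numerical check of this weak maximum principle (an added sketch with arbitrary boundary data, not part of the original argument) is to solve the discrete Laplace equation on a grid and compare the interior maximum with the boundary maximum.
<syntaxhighlight lang="python">
import numpy as np

# Solve the discrete Laplace equation on the unit square by Jacobi iteration
# with assumed boundary data, then check max(interior) <= max(boundary).
n = 50
xs = np.linspace(0.0, 1.0, n)
u = np.zeros((n, n))
u[0, :] = np.sin(np.pi * xs)       # assumed boundary values on one edge
u[:, -1] = xs * (1.0 - xs)         # assumed boundary values on another edge

for _ in range(20000):             # Jacobi sweeps: each interior value becomes
    u[1:-1, 1:-1] = 0.25 * (       # the average of its four neighbours
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

interior_max = u[1:-1, 1:-1].max()
boundary_max = max(u[0, :].max(), u[-1, :].max(), u[:, 0].max(), u[:, -1].max())
print(interior_max <= boundary_max + 1e-12)    # True: the maximum sits on the boundary
</syntaxhighlight>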
The use of the specific function formula_24 above was very inessential. All that mattered was to have a function which extends continuously to the boundary and whose Laplacian is strictly positive. So we could have used, for instance,
formula_25
with the same effect.
The classical strong maximum principle for linear elliptic PDE.
Summary of proof.
Let M be an open subset of Euclidean space. Let formula_7 be a twice-differentiable function which attains its maximum value C. Suppose that
formula_26
Suppose that one can find (or prove the existence of) a compact subset Ω of M, with nonempty interior, such that "u"("x") < "C" for all x in the interior of Ω and such that there exists "x"0 on the boundary of Ω with "u"("x"0) = "C". Suppose in addition that one can find a continuous function formula_27 which is twice-differentiable on the interior of Ω, which satisfies
formula_28
(with "L" denoting the second-order differential operator appearing in this inequality and in the hypothesis on u), and such that one has "u" + "h" ≤ "C" on the boundary of Ω with "h"("x"0) = 0.
Then "L"("u" + "h" − "C") ≥ 0 on Ω with "u" + "h" − "C" ≤ 0 on the boundary of Ω; according to the weak maximum principle, one has "u" + "h" − "C" ≤ 0 on Ω. This can be reorganized to say
formula_29
for all x in Ω. If one can make the choice of h so that the right-hand side has a manifestly positive nature, then this will provide a contradiction to the fact that "x"0 is a maximum point of u on M, so that its gradient must vanish.
Proof.
The above "program" can be carried out. Choose Ω to be a spherical annulus; one selects its center "x"c to be a point closer to the closed set "u"−1("C") than to the closed set ∂"M", and the outer radius R is selected to be the distance from this center to "u"−1("C"); let "x"0 be a point on this latter set which realizes the distance. The inner radius ρ is arbitrary. Define
formula_30
Now the boundary of Ω consists of two spheres; on the outer sphere, one has "h" = 0; due to the selection of R, one has "u" ≤ "C" on this sphere, and so "u" + "h" − "C" ≤ 0 holds on this part of the boundary, together with the requirement "h"("x"0) = 0. On the inner sphere, one has "u" < "C". Due to the continuity of u and the compactness of the inner sphere, one can select "δ" > 0 such that "u" + "δ" < "C". Since h is constant on this inner sphere, one can select "ε" > 0 such that "u" + "h" ≤ "C" on the inner sphere, and hence on the entire boundary of Ω.
Direct calculation shows
formula_31
There are various conditions under which the right-hand side can be guaranteed to be nonnegative; see the statement of the theorem below.
Lastly, note that the directional derivative of h at "x"0 along the inward-pointing radial line of the annulus is strictly positive. As described in the above summary, this will ensure that a directional derivative of u at "x"0 is nonzero, in contradiction to "x"0 being a maximum point of u on the open set M.
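The direct calculation can also be double-checked symbolically in the special case "a""ij" = "δ""ij", "b""i" = 0 and n = 2, with the centre "x"c placed at the origin (an added verification sketch, not part of the original proof); the general formula then reduces to Δ"h" = "εα"e−"αr"²(4"αr"² − 4).
<syntaxhighlight lang="python">
import sympy as sp

# Verify Laplacian(h) = eps*alpha*exp(-alpha*r^2)*(4*alpha*r^2 - 2*n) for the
# assumed special case n = 2, a_ij = delta_ij, b_i = 0, annulus centre at 0.
x, y = sp.symbols('x y', real=True)
eps, alpha, R = sp.symbols('epsilon alpha R', positive=True)

r2 = x**2 + y**2
h = eps * (sp.exp(-alpha * r2) - sp.exp(-alpha * R**2))

laplacian_h = sp.diff(h, x, 2) + sp.diff(h, y, 2)
expected = eps * alpha * sp.exp(-alpha * r2) * (4 * alpha * r2 - 4)

print(sp.simplify(laplacian_h - expected))     # 0, confirming the computation
</syntaxhighlight>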
Statement of the theorem.
The following is the statement of the theorem in the books of Morrey and Smoller, following the original statement of Hopf (1927):
<templatestyles src="Template:Blockquote/styles.css" />
The point of the continuity assumption is that continuous functions are bounded on compact sets, the relevant compact set here being the spherical annulus appearing in the proof. Furthermore, by the same principle, there is a number λ such that for all x in the annulus, the matrix ["a""ij"("x")] has all eigenvalues greater than or equal to λ. One then takes α, as appearing in the proof, to be large relative to these bounds. Evans's book has a slightly weaker formulation, in which there is assumed to be a positive number λ which is a lower bound of the eigenvalues of ["a""ij"] for all x in M.
These continuity assumptions are clearly not the most general possible in order for the proof to work. For instance, the following is Gilbarg and Trudinger's statement of the theorem, following the same proof:
<templatestyles src="Template:Blockquote/styles.css" />
One cannot naively extend these statements to the general second-order linear elliptic equation, as already seen in the one-dimensional case. For instance, the ordinary differential equation "y"″ + 2"y" = 0 has sinusoidal solutions, which certainly have interior maxima. This extends to the higher-dimensional case, where one often has solutions to "eigenfunction" equations Δ"u" + "cu" = 0 which have interior maxima. The sign of "c" is relevant, as also seen in the one-dimensional case; for instance the solutions to "y"″ - 2"y" = 0 are exponentials, and the character of the maxima of such functions is quite different from that of sinusoidal functions.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\partial^2u}{\\partial x^2}+\\frac{\\partial^2u}{\\partial y^2}=0."
},
{
"math_id": 1,
"text": "\\sum_{i=1}^n\\sum_{j=1}^n a_{ij}\\frac{\\partial^2u}{\\partial x^i\\,\\partial x^j}=0"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^n \\lambda_i \\left. \\frac{d^2}{dt^2}\\right|_{t=0}\\big(u(x+tv_i)\\big)=0."
},
{
"math_id": 3,
"text": "\\sum_{i=1}^n\\sum_{j=1}^n a_{ij}\\frac{\\partial^2u}{\\partial x^i \\, \\partial x^j}+\\sum_{i=1}^n b_i\\frac{\\partial u}{\\partial x^i}=0,"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n\\sum_{j=1}^n a_{ij}\\frac{\\partial^2u}{\\partial x^i \\, \\partial x^j}+\\sum_{i=1}^n b_i\\frac{\\partial u}{\\partial x^i}\\geq 0,"
},
{
"math_id": 5,
"text": "\\sum_{i=1}^n\\sum_{j=1}^n a_{ij}\\frac{\\partial^2u}{\\partial x^i\\,\\partial x^j}+\\sum_{i=1}^n b_i\\frac{\\partial u}{\\partial x^i}\\leq 0,"
},
{
"math_id": 6,
"text": "\\frac{\\partial^2}{\\partial x^2}\\big({-x}^2-y^2\\big)+\\frac{\\partial^2}{\\partial y^2}\\big({-x}^2-y^2\\big)\\leq 0,"
},
{
"math_id": 7,
"text": "u:M\\to\\mathbb{R}"
},
{
"math_id": 8,
"text": "(du)(p)=0"
},
{
"math_id": 9,
"text": "(\\nabla^2 u)(p)\\leq 0,"
},
{
"math_id": 10,
"text": "\\Delta u=|du|^2+2,"
},
{
"math_id": 11,
"text": "\\Delta u\\leq 0"
},
{
"math_id": 12,
"text": "du=0"
},
{
"math_id": 13,
"text": "\\Delta u=|du|^2"
},
{
"math_id": 14,
"text": "\\Delta u=|du|^2-2,"
},
{
"math_id": 15,
"text": "\\Delta u-|du|^4=\\int_M e^{u(x)}\\,dx,"
},
{
"math_id": 16,
"text": "\\Delta u(p)\\leq 0"
},
{
"math_id": 17,
"text": "\\Delta u=0"
},
{
"math_id": 18,
"text": "u_s(x)=u(x)+se^{x_1}."
},
{
"math_id": 19,
"text": "\\Delta u_s=se^{x_1}."
},
{
"math_id": 20,
"text": "s>0"
},
{
"math_id": 21,
"text": "M\\cup\\partial M."
},
{
"math_id": 22,
"text": "\\partial M."
},
{
"math_id": 23,
"text": "\\partial M,"
},
{
"math_id": 24,
"text": "e^{x_1}"
},
{
"math_id": 25,
"text": "u_s(x)=u(x)+s|x|^2"
},
{
"math_id": 26,
"text": "a_{ij}\\frac{\\partial^2u}{\\partial x^i\\,\\partial x^j}+b_i\\frac{\\partial u}{\\partial x^i}\\geq 0."
},
{
"math_id": 27,
"text": "h:\\Omega\\to\\mathbb{R}"
},
{
"math_id": 28,
"text": "a_{ij}\\frac{\\partial^2h}{\\partial x^i\\,\\partial x^j}+b_i\\frac{\\partial h}{\\partial x^i}\\geq 0,"
},
{
"math_id": 29,
"text": "-\\frac{u(x)-u(x_0)}{|x-x_0|}\\geq \\frac{h(x)-h(x_0)}{|x-x_0|}"
},
{
"math_id": 30,
"text": "h(x)=\\varepsilon\\Big(e^{-\\alpha|x-x_{\\text{c}}|^2}-e^{-\\alpha R^2}\\Big)."
},
{
"math_id": 31,
"text": "\\sum_{i=1}^n\\sum_{j=1}^na_{ij}\\frac{\\partial^2h}{\\partial x^i\\,\\partial x^j}+\\sum_{i=1}^nb_i\\frac{\\partial h}{\\partial x^i}=\\varepsilon \\alpha e^{-\\alpha|x-x_{\\text{c}}|^2}\\left(4\\alpha\\sum_{i=1}^n\\sum_{j=1}^n a_{ij}(x)\\big(x^i-x_{\\text{c}}^i\\big)\\big(x^j-x_{\\text{c}}^j\\big)-2\\sum_{i=1}^n a_{ii}-2 \\sum_{i=1}^n b_i\\big(x^i-x_{\\text{c}}^i\\big)\\right)."
}
] |
https://en.wikipedia.org/wiki?curid=1436104
|
143634
|
Examples of groups
|
Some elementary examples of groups in mathematics are given on Group (mathematics).
Further examples are listed here.
Permutations of a set of three elements.
Consider three colored blocks (red, green, and blue), initially placed in the order RGB. Let "a" be the operation "swap the first block and the second block", and "b" be the operation "swap the second block and the third block".
We can write "xy" for the operation "first do "y", then do "x""; so that "ab" is the operation RGB → RBG → BRG, which could be described as "move the first two blocks one position to the right and put the third block into the first position". If we write "e" for "leave the blocks as they are" (the identity operation), then we can write the six permutations of the three blocks as follows:
Note that "aa" has the effect RGB → GRB → RGB; so we can write "aa" = "e". Similarly, "bb" = ("aba")("aba") = "e"; ("ab")("ba") = ("ba")("ab") = "e"; so every element has an inverse.
By inspection, we can determine associativity and closure; note in particular that ("ba")"b" = "bab" = "b"("ab").
Since it is built up from the basic operations "a" and "b", we say that the set {"a", "b"} "generates" this group. The group, called the "symmetric group" S3, has order 6, and is non-abelian (since, for example, "ab" ≠ "ba").
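A small computational sketch (an added illustration; the encoding of the operations as tuples is ours, not the article's) generates the same group from the two swaps and confirms its order and non-commutativity.
<syntaxhighlight lang="python">
from itertools import permutations

# Operations are encoded as tuples sigma with new_arrangement[i] = old[sigma[i]].
e = (0, 1, 2)     # leave the blocks as they are
a = (1, 0, 2)     # swap the first and second block
b = (0, 2, 1)     # swap the second and third block

def compose(x, y):
    """xy: first do y, then do x (the convention used in the text)."""
    return tuple(y[x[i]] for i in range(3))

group = {e, a, b}
while True:                                  # close the set under composition
    new = {compose(x, y) for x in group for y in group} - group
    if not new:
        break
    group |= new

print(len(group))                            # 6
print(group == set(permutations(range(3))))  # True: this is all of S3
print(compose(a, b) == compose(b, a))        # False: ab != ba, so non-abelian
</syntaxhighlight>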
Group of translations of the plane.
A "translation" of the plane is a rigid movement of every point of the plane for a certain distance in a certain direction.
For instance "move in the North-East direction for 2 kilometres" is a translation of the plane.
Two translations such as "a" and "b" can be composed to form a new translation "a" ∘ "b" as follows: first follow the prescription of "b", then that of "a".
For instance, if
"a" = "move North-East for 3 kilometres"
and
"b" = "move South-East for 4 kilometres"
then
"a" ∘ "b" = "move to bearing −8.13° for 5 kilometres" "(bearing is measured counterclockwise and from East; −8.13° is 8.13° south of due East)"
Or, if
"a" = "move to bearing 53.13° for 3 kilometres" "(bearing is measured counterclockwise and from East)"
and
"b" = "move to bearing 323.13° for 4 kilometres" "(bearing is measured counterclockwise and from East)"
then
"a" ∘ "b" = "move East for 5 kilometres"
(see Pythagorean theorem for why this is so, geometrically).
The set of all translations of the plane with composition as the operation forms a group:
This is an abelian group and our first (nondiscrete) example of a Lie group: a group which is also a manifold.
Symmetry group of a square: dihedral group of order 8.
Groups are very important to describe the symmetry of objects, be they geometrical (like a tetrahedron) or algebraic (like a set of equations).
As an example, we consider a glass square of a certain thickness (with a letter "F" written on it, just to make the different positions distinguishable).
In order to describe its symmetry, we form the set of all those rigid movements of the square that do not make a visible difference (except the "F"). For instance, if an object turned 90° clockwise still looks the same, the movement is one element of the set, for instance "a".
We could also flip it around a vertical axis so that its bottom surface becomes its top surface, while the left edge becomes the right edge. Again, after performing this movement, the glass square looks the same, so this is also an element of our set and we call it "b". The movement that does nothing is denoted by "e".
Given two such movements "x" and "y", it is possible to define the composition "x" ∘ "y" as above: first the movement "y" is performed, followed by the movement "x".
The result will leave the slab looking like before.
The point is that the set of all those movements, with composition as the operation, forms a group.
This group is the most concise description of the square's symmetry.
Chemists use symmetry groups of this type to describe the symmetry of crystals and molecules.
Generating the group.
Let's investigate our square's symmetry group some more. Right now, we have the elements "a", "b" and "e", but we can easily form more:
for instance "a" ∘ "a", also written as "a"2, is a 180° turn.
"a"3 is a 270° clockwise rotation (or a 90° counter-clockwise rotation).
We also see that "b"2 = "e" and also "a"4 = "e".
Here's an interesting one: what does "a" ∘ "b" do?
First flip horizontally, then rotate.
Try to visualize that "a" ∘ "b" = "b" ∘ "a"3.
Also, "a"2 ∘ "b" is a vertical flip and is equal to "b" ∘ "a"2.
We say that elements "a" and "b" generate the group.
This group of order 8 has the following Cayley table:
For any two elements in the group, the table records what their composition is. Here we wrote ""a"3"b"" as a shorthand for "a"3 ∘ "b".
In mathematics this group is known as the dihedral group of order 8, and is either denoted Dih4, D4 or D8, depending on the convention.
This was an example of a non-abelian group: the operation ∘ here is not commutative, which can be seen from the table; the table is not symmetrical about the main diagonal.
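The same can be done computationally (an added sketch; the corner labelling and tuple encoding are ours): generating the group from "a" and "b" as permutations of the square's corners gives eight elements and reproduces the relations quoted above.
<syntaxhighlight lang="python">
# Corners are labelled 0 = top-left, 1 = top-right, 2 = bottom-right, 3 = bottom-left,
# and a symmetry is the tuple sigma with new_corner[i] = old_corner[sigma[i]].
def compose(x, y):
    """x o y: first perform y, then x, as in the text."""
    return tuple(y[x[i]] for i in range(len(x)))

e = (0, 1, 2, 3)      # do nothing
a = (3, 0, 1, 2)      # 90-degree clockwise rotation
b = (1, 0, 3, 2)      # flip about the vertical axis

group, frontier = {e}, {a, b}
while frontier:                                   # close under composition with a and b
    group |= frontier
    frontier = {compose(g, s) for g in group for s in (a, b)} - group

a2 = compose(a, a)
a3 = compose(a, a2)
print(len(group))                                 # 8
print(compose(b, b) == e, compose(a, a3) == e)    # True True:  b^2 = e,  a^4 = e
print(compose(a, b) == compose(b, a3))            # True:  a o b = b o a^3
print(compose(a2, b) == compose(b, a2))           # True:  a^2 o b = b o a^2
</syntaxhighlight>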
Normal subgroup.
This version of the Cayley table shows that this group has one normal subgroup shown with a red background. In this table r means rotations, and f means flips. Because the subgroup is normal, the left coset is the same as the right coset.
Free group on two generators.
The free group with two generators "a" and "b" consists of all finite strings/words that can be formed from the four symbols "a", "a"−1, "b" and "b"−1 such that no "a" appears directly next to an "a"−1 and no "b" appears directly next to a "b"−1.
Two such strings can be concatenated and converted into a string of this type by repeatedly replacing the "forbidden" substrings with the empty string.
For instance: ""abab"−1"a"−1" concatenated with
""abab"−1"a" yields "abab"−1"a"−1"abab"−1"a", which gets reduced to "abaab"−1"a".
One can check that the set of those strings with this operation forms a group, with the empty string ε := "" being the identity element.
(Usually the quotation marks are left off; this is why the symbol ε is required).
This is another infinite non-abelian group.
Free groups are important in algebraic topology; the free group in two generators is also used for a proof of the Banach–Tarski paradox.
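A short sketch (an added illustration, writing A and B for "a"−1 and "b"−1) implements the reduction and reproduces the concatenation example above.
<syntaxhighlight lang="python">
# Words in the free group on a, b; capital letters stand for the inverses.
def reduce_word(word: str) -> str:
    out = []
    for ch in word:
        if out and out[-1] != ch and out[-1].lower() == ch.lower():
            out.pop()                    # cancel a/A or b/B standing next to each other
        else:
            out.append(ch)
    return "".join(out)

def multiply(w1: str, w2: str) -> str:
    return reduce_word(w1 + w2)          # the group operation: concatenate, then reduce

print(multiply("abaBA", "abaBa"))        # abaaBa, i.e. a b a a b^-1 a, as in the text
print(multiply("abaBA", "abaBA"[::-1].swapcase()) == "")  # True: a word times its inverse is empty
</syntaxhighlight>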
Set of maps.
Sets of maps from a set to a group.
Let "G" be a group and "S" a set. The set of maps "M"("S", "G") is itself a group; namely for two maps "f", "g" of "S" into "G" we define "fg" to be the map such that ("fg")("x") = "f"("x")"g"("x") for every "x" in "S" and "f" −1 to be the map such that "f" −1("x") = "f"("x")−1.
Take maps "f", "g", and "h" in "M"("S", "G").
For every "x" in "S", "f"("x") and "g"("x") are both in "G", and so is ("fg")("x").
Therefore, "fg" is also in "M"("S", "G"), i.e. "M"("S", "G") is closed.
"M"("S", "G") is associative because (("fg")"h")("x") = ("fg")("x")"h"("x") = ("f"("x")"g"("x"))"h"("x") = "f"("x")("g"("x")"h"("x")) = "f"("x")("gh")("x") = ("f"("gh"))("x").
And there is a map "i" such that "i"("x") = "e" where "e" is the identity element of "G".
The map "i" is such that for all "f" in "M"("S", "G") we have
"fi" = "if" = "f", i.e. "i" is the identity element of "M"("S", "G").
Thus, "M"("S", "G") is actually a group.
If "G" is abelian then ("fg")("x") = "f"("x")"g"("x") = "g"("x")"f"("x") = ("gf")("x"), and therefore so is "M"("S", "G").
Automorphism groups.
Groups of permutations.
Let "G" be the set of bijective mappings of a set "S" onto itself. Then "G" forms a group under ordinary composition of mappings. This group is called the symmetric group, and is commonly denoted formula_0, Σ"S", or formula_1. The identity element of "G" is the identity map of "S". Since two maps "f", "g" in "G" are bijective, their composition "fg" is also bijective; therefore, "G" is closed. The composition of maps is associative; hence "G" is a group. "S" may be either finite or infinite.
Matrix groups.
If "n" is some positive integer, we can consider the set of all invertible "n" by "n" matrices with real number components, say.
This is a group with matrix multiplication as the operation. It is called the general linear group, and denoted GL"n"(R) or GL("n", R) (where R is the set of real numbers). Geometrically, it contains all combinations of rotations, reflections, dilations and skew transformations of "n"-dimensional Euclidean space that fix a given point (the origin).
If we restrict ourselves to matrices with determinant 1, then we get another group, the special linear group, SL"n"(R) or SL("n", R).
Geometrically, this consists of all the elements of GL"n"(R) that preserve both orientation and volume of the various geometric solids in Euclidean space.
If instead we restrict ourselves to orthogonal matrices, then we get the orthogonal group O"n"(R) or O("n", R).
Geometrically, this consists of all combinations of rotations and reflections that fix the origin. These are precisely the transformations which preserve lengths and angles.
Finally, if we impose both restrictions, then we get the special orthogonal group SO"n"(R) or SO("n", R), which consists of rotations only.
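A brief numerical sketch (an added illustration using SO(2) and arbitrary sample angles) checks the defining properties of these matrix groups: closure under multiplication, inverses given by transposes, determinant one, and preservation of lengths.
<syntaxhighlight lang="python">
import numpy as np

def rotation(theta):
    """A 2x2 rotation matrix, i.e. an element of SO(2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R1, R2 = rotation(0.3), rotation(1.1)                    # arbitrary sample angles
product = R1 @ R2

print(np.allclose(product, rotation(1.4)))               # closure: angles simply add
print(np.allclose(R1 @ R1.T, np.eye(2)))                 # the inverse is the transpose
print(np.isclose(np.linalg.det(product), 1.0))           # determinant 1, so the product is in SO(2)
v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(R1 @ v), np.linalg.norm(v)))   # lengths are preserved
</syntaxhighlight>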
These groups are our first examples of infinite non-abelian groups. They also happen to be Lie groups. In fact, most of the important Lie groups (but not all) can be expressed as matrix groups.
If this idea is generalised to matrices with complex numbers as entries, then we get further useful Lie groups, such as the unitary group U("n").
We can also consider matrices with quaternions as entries; in this case, there is no well-defined notion of a determinant (and thus no good way to define a quaternionic "volume"), but we can still define a group analogous to the orthogonal group, the symplectic group Sp("n").
Furthermore, the idea can be treated purely algebraically with matrices over any field, but then the groups are not Lie groups.
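The defining properties of these matrix groups are easy to check numerically. The following Python sketch (illustrative values only, using the NumPy library) verifies that a plane rotation lies in SO2(R) and preserves lengths, and that the product of two matrices of determinant 1 again has determinant 1.
<syntaxhighlight lang="python">
import numpy as np

theta = 0.7                                        # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(R.T @ R, np.eye(2))             # orthogonal: R is in O_2(R)
assert np.isclose(np.linalg.det(R), 1.0)           # determinant 1: R is in SO_2(R)

v = np.array([3.0, -4.0])
assert np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v))   # lengths are preserved

# Closure of SL_2(R): det(AB) = det(A) det(B) = 1.
A = np.array([[2.0, 1.0], [3.0, 2.0]])             # det = 1
B = np.array([[1.0, 5.0], [0.0, 1.0]])             # det = 1
assert np.isclose(np.linalg.det(A @ B), 1.0)
</syntaxhighlight>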
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{Sym}(S)"
},
{
"math_id": 1,
"text": "\\mathfrak{S}_{S}"
}
] |
https://en.wikipedia.org/wiki?curid=143634
|
1436372
|
Transition point
|
In the field of fluid dynamics the point at which the boundary layer changes from laminar to turbulent is called the transition point. Where and how this transition occurs depends on the Reynolds number, the pressure gradient, pressure fluctuations due to sound, surface vibration, the initial turbulence level of the flow, boundary layer suction, surface heat flows, and surface roughness. The effect of a boundary layer turning turbulent is an increase in drag due to skin friction. As speed increases, the upper surface transition point tends to move forward. As the angle of attack increases, the upper surface transition point also tends to move forward.
Position.
The exact position of the transition point is hard to determine because it depends on a large number of factors. Several methods exist, however, to predict it with a certain degree of accuracy. Most of these methods revolve around analysing the stability of the (laminar) boundary layer using stability theory: a laminar boundary layer may become unstable due to small disturbances, turning it turbulent. One such method assessing the transition point this way is the eN method.
eN method.
The eN method works by superimposing small disturbances on the flow, considering it to be laminar. The assumption is made that both the original and the newly disturbed flow satisfy the Navier-Stokes equations. This disturbed flow can be linearised and described with a perturbation equation, which may have unstable solutions. Whenever a disturbance leads to an unstable solution of the perturbation equation, the flow at that point can be considered unstable, and hence a transition point could arise there. This method assumes a flow parallel to the boundary layer with a constant shape, which will not always be the case in analysis. The method can, however, be used to determine the local (in)stability at a given span-wise position. If a local transition occurs, it must also occur under the same circumstances in the global frame. This analysis can be repeated for multiple span-wise stations. As the transition point is determined by the first point where this happens, only the point closest to the leading edge where this happens is sought.
A two-dimensional disturbance stream function can be defined as formula_0, from which the disturbance velocity components in the x- and y-directions follow from formula_1. Here the circular frequency ω is taken to be real in the disturbance stream, and the wave number α complex. Hence, in the case of an instability, the imaginary part of the wave number needs to be negative for there to be a growing disturbance. Any prior disturbance passing through will be amplified by formula_2, where x0 is the value of x where the disturbance with frequency ω first becomes unstable (the wave number itself follows from the linearised perturbation equation, known as the Orr-Sommerfeld equation). Experiments by Smith and Gamberoni, and later by Van Ingen, have shown that transition occurs when the amplification factor reaches a critical value (the "critical amplification factor") of 9. For clean wind tunnels and for atmospheric turbulence, the critical amplification factor equals 12 and 4, respectively.
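The bookkeeping behind the eN criterion can be sketched numerically. In the following Python example the growth-rate curve is invented purely for illustration; in practice −αi at each station would come from solving the perturbation (Orr-Sommerfeld) problem for the local velocity profile.
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(0.0, 1.0, 2001)                       # chordwise stations (arbitrary units)
neg_alpha_i = np.clip(40.0 * (x - 0.2), 0.0, None)    # assumed growth rate, zero before x0 = 0.2

# Amplification factor N(x) = integral of -alpha_i from x0 to x (trapezoidal rule).
N = np.concatenate(([0.0],
                    np.cumsum(0.5 * (neg_alpha_i[1:] + neg_alpha_i[:-1]) * np.diff(x))))

N_CRIT = 9.0                                          # typical critical amplification factor
transition = x[np.argmax(N >= N_CRIT)]
print(f"predicted transition at x = {transition:.3f}")   # about x = 0.87 for this made-up curve
</syntaxhighlight>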
Experiments have shown that the largest factors affecting the position where this happens are the shape of the velocity profile over the lift-generating surface, the Reynolds number, and the frequency or wavelength of the disturbances themselves.
Behind the transition point in a boundary layer, the mean speed and friction drag increase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Psi(x,y,t) = \\phi(y)e^{-\\alpha_i x}e^{i(\\alpha_r x - \\omega t)}"
},
{
"math_id": 1,
"text": "u' = \\frac{\\partial \\Psi}{\\partial y}; v' = -\\frac{\\partial \\Psi}{\\partial x}"
},
{
"math_id": 2,
"text": "\\sigma_a = \\ln{\\frac{a}{a_0}} = \\int_{x0}^x - \\alpha_i dx"
}
] |
https://en.wikipedia.org/wiki?curid=1436372
|
1436650
|
Nucleotide diversity
|
Nucleotide diversity is a concept in molecular genetics which is used to measure the degree of polymorphism within a population.
One commonly used measure of nucleotide diversity was first introduced by Nei and Li in 1979. This measure is defined as the average number of nucleotide differences per site between two DNA sequences in all possible pairs in the sample population, and is denoted by formula_0.
An estimator for formula_0 is given by:
formula_1
where formula_2 and formula_3 are the respective frequencies of the formula_4 th and formula_5 th sequences, formula_6 is the number of nucleotide differences per nucleotide site between the formula_4 th and formula_5 th sequences, and formula_7 is the number of sequences in the sample. The factor in front of the sums makes the estimator unbiased, so that its expected value does not depend on how many sequences are sampled.
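Because the frequency-weighted sum above is algebraically the same as the average number of pairwise differences per site over all pairs of sampled sequences, the estimator is straightforward to compute directly. The following Python sketch uses a small invented alignment purely for illustration.
<syntaxhighlight lang="python">
from itertools import combinations

def nucleotide_diversity(sequences):
    """Average number of pairwise differences per site over all C(n, 2)
    pairs of sampled sequences; equivalent to the estimator in the text."""
    length = len(sequences[0])
    pairs = list(combinations(sequences, 2))
    total = sum(sum(a != b for a, b in zip(s1, s2)) / length for s1, s2 in pairs)
    return total / len(pairs)

# Hypothetical sample of five aligned sequences (illustrative data only).
sample = [
    "ATGCATGCAT",
    "ATGCATGCAT",
    "ATGAATGCAT",
    "ATGAATGCTT",
    "ATGCATGCAT",
]
print(nucleotide_diversity(sample))   # prints 0.1
</syntaxhighlight>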
Nucleotide diversity is a measure of genetic variation. It is usually associated with other statistical measures of population diversity, and is similar to expected heterozygosity. This statistic may be used to monitor diversity within or between ecological populations, to examine the genetic variation in crops and related species, or to determine evolutionary relationships.
Nucleotide diversity can be calculated by examining the DNA sequences directly, or may be estimated from molecular marker data, such as Random Amplified Polymorphic DNA (RAPD) data and Amplified Fragment Length Polymorphism (AFLP) data.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "\\hat{\\pi} = \\frac{n}{n-1} \\sum_{ij} x_i x_j \\pi_{ij} = \\frac{n}{n-1} \\sum_{i=2}^n \\sum_{j=1}^{i-1} 2 x_i x_j \\pi_{ij}"
},
{
"math_id": 2,
"text": "x_i"
},
{
"math_id": 3,
"text": "x_j"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "\\pi_{ij}"
},
{
"math_id": 7,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=1436650
|
1436668
|
Voigt notation
|
Mathematical concept
In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application.
For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and one off-diagonal. Thus it can be expressed as the vector
formula_0
As another example:
The stress tensor (in matrix notation) is given as
formula_1
In Voigt notation it is simplified to a 6-dimensional vector:
formula_2
The strain tensor, similar in nature to the stress tensor (both are symmetric second-order tensors), is given in matrix form as
formula_3
Its representation in Voigt notation is
formula_4
where formula_5, formula_6, and formula_7 are engineering shear strains.
The benefit of using different representations for stress and strain is that the scalar product
formula_8
is preserved.
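The factor of two on the engineering shear strains is exactly what makes the six-component dot product reproduce the full double contraction. The following Python sketch (using NumPy, with randomly generated symmetric tensors purely for illustration) checks this identity numerically.
<syntaxhighlight lang="python">
import numpy as np

def to_voigt_stress(s):
    # Ordering: xx, yy, zz, yz, xz, xy.
    return np.array([s[0, 0], s[1, 1], s[2, 2], s[1, 2], s[0, 2], s[0, 1]])

def to_voigt_strain(e):
    # Same ordering, but the shear components are doubled (engineering shear strains).
    return np.array([e[0, 0], e[1, 1], e[2, 2], 2 * e[1, 2], 2 * e[0, 2], 2 * e[0, 1]])

def random_symmetric():
    a = np.random.rand(3, 3)
    return 0.5 * (a + a.T)

sigma, epsilon = random_symmetric(), random_symmetric()

# sigma_ij epsilon_ij (double contraction) equals the Voigt dot product.
assert np.isclose(np.tensordot(sigma, epsilon),
                  to_voigt_stress(sigma) @ to_voigt_strain(epsilon))
</syntaxhighlight>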
Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix.
Mnemonic rule.
A simple mnemonic rule for memorizing Voigt notation is as follows:
Voigt indices are numbered consecutively: the diagonal pairs 11, 22, 33 become 1, 2, 3, and the remaining pairs 23, 13, 12 become 4, 5, 6, matching the ordering used for the stress and strain vectors above.
Mandel notation.
For a symmetric tensor of second rank
formula_9
only six components are distinct, the three on the diagonal and the others being off-diagonal.
Thus it can be expressed, in Mandel notation, as the vector
formula_10
The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors,
for example:
formula_11
A symmetric tensor of rank four satisfying formula_12 and formula_13 has 81 components in three-dimensional space, but only 36
components are distinct. Thus, in Mandel notation, it can be expressed as
formula_14
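The √2 weights are what allow a single vector convention to serve both factors of the dot product, as the identity above shows. A short Python check (again with a randomly generated symmetric tensor, for illustration only):
<syntaxhighlight lang="python">
import numpy as np

SQRT2 = np.sqrt(2.0)

def to_mandel(s):
    # Ordering: 11, 22, 33, 23, 13, 12, with sqrt(2) weights on the shear terms.
    return np.array([s[0, 0], s[1, 1], s[2, 2],
                     SQRT2 * s[1, 2], SQRT2 * s[0, 2], SQRT2 * s[0, 1]])

a = np.random.rand(3, 3)
sigma = 0.5 * (a + a.T)                      # a symmetric second-rank tensor

m = to_mandel(sigma)
assert np.isclose(np.tensordot(sigma, sigma), m @ m)   # sigma : sigma = m . m
</syntaxhighlight>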
Applications.
The notation is named after the physicist Woldemar Voigt (and, in the case of Nye notation, after John Nye). It is useful, for example, in calculations involving constitutive models to simulate materials, such as the generalized Hooke's law, as well as in finite element analysis and diffusion MRI.
Hooke's law has a symmetric fourth-order stiffness tensor with 81 components (3×3×3×3), but because the application of such a rank-4 tensor to a symmetric rank-2 tensor must yield another symmetric rank-2 tensor, not all of the 81 elements are independent. Voigt notation enables such a rank-4 tensor to be "represented" by a 6×6 matrix. However, Voigt's form does not preserve the sum of the squares, which in the case of Hooke's law has geometric significance. This explains why weights are introduced (to make the mapping an isometry).
A discussion of invariance of Voigt's notation and Mandel's notation can be found in Helnwein (2001).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\langle x_{1 1}, x_{2 2}, x_{1 2}\\rangle."
},
{
"math_id": 1,
"text": "\\boldsymbol{\\sigma}=\n\\begin{bmatrix}\n \\sigma_{xx} & \\sigma_{xy} & \\sigma_{xz} \\\\\n \\sigma_{yx} & \\sigma_{yy} & \\sigma_{yz} \\\\\n \\sigma_{zx} & \\sigma_{zy} & \\sigma_{zz}\n\\end{bmatrix}.\n"
},
{
"math_id": 2,
"text": "\\tilde\\sigma= (\\sigma_{xx}, \\sigma_{yy}, \\sigma_{zz},\n \\sigma_{yz},\\sigma_{xz},\\sigma_{xy}) \\equiv (\\sigma_1, \\sigma_2, \\sigma_3, \\sigma_4, \\sigma_5, \\sigma_6).\n"
},
{
"math_id": 3,
"text": "\\boldsymbol{\\epsilon}=\n\\begin{bmatrix}\n \\epsilon_{xx} & \\epsilon_{xy} & \\epsilon_{xz} \\\\\n \\epsilon_{yx} & \\epsilon_{yy} & \\epsilon_{yz} \\\\\n \\epsilon_{zx} & \\epsilon_{zy} & \\epsilon_{zz}\n\\end{bmatrix}.\n"
},
{
"math_id": 4,
"text": "\\tilde\\epsilon= (\\epsilon_{xx}, \\epsilon_{yy}, \\epsilon_{zz},\n \\gamma_{yz},\\gamma_{xz},\\gamma_{xy}) \\equiv (\\epsilon_1, \\epsilon_2, \\epsilon_3, \\epsilon_4, \\epsilon_5, \\epsilon_6),\n"
},
{
"math_id": 5,
"text": "\\gamma_{xy}=2\\epsilon_{xy}"
},
{
"math_id": 6,
"text": "\\gamma_{yz} = 2\\epsilon_{yz}"
},
{
"math_id": 7,
"text": "\\gamma_{zx} = 2\\epsilon_{zx}"
},
{
"math_id": 8,
"text": " \\boldsymbol{\\sigma}\\cdot\\boldsymbol{\\epsilon} = \\sigma_{ij}\\epsilon_{ij} = \\tilde\\sigma \\cdot \\tilde\\epsilon\n"
},
{
"math_id": 9,
"text": " \\boldsymbol{\\sigma}=\n\\begin{bmatrix}\n \\sigma_{11} & \\sigma_{12} & \\sigma_{13} \\\\\n \\sigma_{21} & \\sigma_{22} & \\sigma_{23} \\\\\n \\sigma_{31} & \\sigma_{32} & \\sigma_{33}\n\\end{bmatrix}\n"
},
{
"math_id": 10,
"text": "\n\\tilde \\sigma ^M =\n\\langle \\sigma_{11}, \n\\sigma_{22},\n\\sigma_{33},\n\\sqrt 2 \\sigma_{23},\n\\sqrt 2 \\sigma_{13},\n\\sqrt 2 \\sigma_{12}\n\\rangle. "
},
{
"math_id": 11,
"text": " \\tilde \\sigma : \\tilde \\sigma = \\tilde \\sigma^M \\cdot \\tilde \\sigma^M = \n\\sigma_{11}^2 +\n\\sigma_{22}^2 +\n\\sigma_{33}^2 +\n2 \\sigma_{23}^2 +\n2 \\sigma_{13}^2 +\n2 \\sigma_{12}^2.\n"
},
{
"math_id": 12,
"text": " D_{ijkl} = D_{jikl} "
},
{
"math_id": 13,
"text": " D_{ijkl} = D_{ijlk} "
},
{
"math_id": 14,
"text": " \\tilde D^M =\n\\begin{pmatrix}\n D_{1111} & D_{1122} & D_{1133} & \\sqrt 2 D_{1123} & \\sqrt 2 D_{1113} & \\sqrt 2 D_{1112} \\\\\n D_{2211} & D_{2222} & D_{2233} & \\sqrt 2 D_{2223} & \\sqrt 2 D_{2213} & \\sqrt 2 D_{2212} \\\\\n D_{3311} & D_{3322} & D_{3333} & \\sqrt 2 D_{3323} & \\sqrt 2 D_{3313} & \\sqrt 2 D_{3312} \\\\\n \\sqrt 2 D_{2311} & \\sqrt 2 D_{2322} & \\sqrt 2 D_{2333} & 2 D_{2323} & 2 D_{2313} & 2 D_{2312} \\\\\n \\sqrt 2 D_{1311} & \\sqrt 2 D_{1322} & \\sqrt 2 D_{1333} & 2 D_{1323} & 2 D_{1313} & 2 D_{1312} \\\\\n \\sqrt 2 D_{1211} & \\sqrt 2 D_{1222} & \\sqrt 2 D_{1233} & 2 D_{1223} & 2 D_{1213} & 2 D_{1212} \\\\\n\\end{pmatrix}.\n"
}
] |
https://en.wikipedia.org/wiki?curid=1436668
|
14367845
|
Xenobiotic metabolism
|
Xenobiotic metabolism (from the Greek xenos "stranger" and biotic "related to living beings") is the set of metabolic pathways that modify the chemical structure of xenobiotics, which are compounds foreign to an organism's normal biochemistry, such as drugs and poisons. These pathways are a form of biotransformation present in all major groups of organisms, and are considered to be of ancient origin. These reactions often act to detoxify poisonous compounds; however, in cases such as in the metabolism of alcohol, the intermediates in xenobiotic metabolism can themselves be the cause of toxic effects.
Xenobiotic metabolism is divided into three phases. In phase I, enzymes such as cytochrome P450 oxidases introduce reactive or polar groups into xenobiotics. These modified compounds are then conjugated to polar compounds in phase II reactions. These reactions are catalysed by transferase enzymes such as glutathione S-transferases. Finally, in phase III, the conjugated xenobiotics may be further processed, before being recognised by efflux transporters and pumped out of cells.
The reactions in these pathways are of particular interest in medicine as part of drug metabolism and as a factor contributing to multidrug resistance in infectious diseases and cancer chemotherapy. The actions of some drugs as substrates or inhibitors of enzymes involved in xenobiotic metabolism are a common reason for hazardous drug interactions. These pathways are also important in environmental science, with the xenobiotic metabolism of microorganisms determining whether a pollutant will be broken down during bioremediation, or persist in the environment. The enzymes of xenobiotic metabolism, particularly the glutathione S-transferases are also important in agriculture, since they may produce resistance to pesticides and herbicides.
Permeability barriers and detoxification.
A major characteristic of xenobiotic toxic stress is that the exact compounds an organism is exposed to are largely unpredictable and may differ widely over time. The major challenge faced by xenobiotic detoxification systems is that they must be able to remove the almost-limitless number of xenobiotic compounds from the complex mixture of chemicals involved in normal metabolism. The solution that has evolved to address this problem is an elegant combination of physical barriers and low-specificity enzymatic systems.
All organisms use cell membranes as hydrophobic permeability barriers to control access to their internal environment. Polar compounds cannot diffuse across these cell membranes, and the uptake of useful molecules is mediated through transport proteins that specifically select substrates from the extracellular mixture. This selective uptake means that most hydrophilic molecules cannot enter cells, since they are not recognised by any specific transporters. In contrast, the diffusion of hydrophobic compounds across these barriers cannot be controlled, and organisms, therefore, cannot exclude lipid-soluble xenobiotics using membrane barriers.
However, the existence of a permeability barrier means that organisms were able to evolve detoxification systems that exploit the hydrophobicity common to membrane-permeable xenobiotics. These systems therefore solve the specificity problem by possessing such broad substrate specificities that they metabolise almost any non-polar compound. Useful metabolites are excluded since they are polar, and in general contain one or more charged groups.
The detoxification of the reactive by-products of normal metabolism cannot be achieved by the systems outlined above, because these species are derived from normal cellular constituents and usually share their polar characteristics. However, since these compounds are few in number, specific enzymes can recognize and remove them. Examples of these specific detoxification systems are the glyoxalase system, which removes the reactive aldehyde methylglyoxal, and the various antioxidant systems that eliminate reactive oxygen species.
Phases of detoxification.
The metabolism of xenobiotics is often divided into three phases: modification, conjugation, and excretion. These reactions act in concert to detoxify xenobiotics and remove them from cells.
Phase I - modification.
In phase I, a variety of enzymes acts to introduce reactive and polar groups into their substrates. One of the most common modifications is hydroxylation catalysed by the cytochrome P-450-dependent mixed-function oxidase system. These enzyme complexes act to incorporate an atom of oxygen into nonactivated hydrocarbons, which can result in either the introduction of hydroxyl groups or N-, O- and S-dealkylation of substrates. The reaction mechanism of the P-450 oxidases proceeds through the reduction of cytochrome-bound oxygen and the generation of a highly-reactive oxyferryl species, according to the following scheme:
formula_0
Phase II - conjugation.
In subsequent phase II reactions, these activated xenobiotic metabolites are conjugated with charged species such as glutathione (GSH), sulfate, glycine, or glucuronic acid. These reactions are catalysed by a large group of broad-specificity transferases, which in combination can metabolise almost any hydrophobic compound that contains nucleophilic or electrophilic groups. One of the most important of these groups are the glutathione S-transferases (GSTs). The addition of large anionic groups (such as GSH) detoxifies reactive electrophiles and produces more polar metabolites that cannot diffuse across membranes, and may, therefore, be actively transported.
Phase III - further modification and excretion.
After phase II reactions, the xenobiotic conjugates may be further metabolised. A common example is the processing of glutathione conjugates to acetylcysteine (mercapturic acid) conjugates. Here, the γ-glutamate and glycine residues in the glutathione molecule are removed by gamma-glutamyl transpeptidase and dipeptidases. In the final step, the cysteine residue in the conjugate is acetylated.
Conjugates and their metabolites can be excreted from cells in phase III of their metabolism, with the anionic groups acting as affinity tags for a variety of membrane transporters of the multidrug resistance protein (MRP) family. These proteins are members of the family of ATP-binding cassette transporters and can catalyse the ATP-dependent transport of a huge variety of hydrophobic anions, and thus act to remove phase II products to the extracellular medium, where they may be further metabolised or excreted.
Endogenous toxins.
The detoxification of endogenous reactive metabolites such as peroxides and reactive aldehydes often cannot be achieved by the system described above. This is the result of these species' being derived from normal cellular constituents and usually sharing their polar characteristics. However, since these compounds are few in number, it is possible for enzymatic systems to utilize specific molecular recognition to recognize and remove them. The similarity of these molecules to useful metabolites therefore means that different detoxification enzymes are usually required for the metabolism of each group of endogenous toxins. Examples of these specific detoxification systems are the glyoxalase system, which acts to dispose of the reactive aldehyde methylglyoxal, and the various antioxidant systems that remove reactive oxygen species.
History.
Studies on how people transform the substances that they ingest began in the mid-nineteenth century, with chemists discovering that organic chemicals such as benzaldehyde could be oxidized and conjugated to amino acids in the human body. During the remainder of the nineteenth century, several other basic detoxification reactions were discovered, such as methylation, acetylation, and sulfonation.
In the early twentieth century, work moved on to the investigation of the enzymes and pathways that were responsible for the production of these metabolites. This field became defined as a separate area of study with the publication by Richard Williams of the book "Detoxication mechanisms" in 1947. This modern biochemical research resulted in the identification of glutathione "S"-transferases in 1961, followed by the discovery of cytochrome P450s in 1962, and the realization of their central role in xenobiotic metabolism in 1963.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Databases
Drug metabolism
Microbial biodegradation
History
|
[
{
"math_id": 0,
"text": "\\mbox{NADPH} + \\mbox{H}^+ + \\mbox{RH} \\rightarrow \\mbox{NADP}^+ + \\mbox{H}_2\\mbox{O} +\\mbox{ROH} \\, "
}
] |
https://en.wikipedia.org/wiki?curid=14367845
|
143681
|
Elongation (astronomy)
|
In astronomy, angular separation between the Sun and a planet, with the Earth as a reference point
In astronomy, a planet's elongation is the angular separation between the Sun and the planet, with Earth as the reference point. The greatest elongation of a given inferior planet occurs when this planet's position, in its orbital path around the Sun, is at a tangent to the observer on Earth. Since an inferior planet is well within the area of Earth's orbit around the Sun, observation of its elongation should not pose that much of a challenge (compared to deep-sky objects, for example). When a planet is at its greatest elongation, it appears farthest from the Sun as viewed from Earth, so its apparition is also best at that point.
When an inferior planet is visible after sunset, it is near its greatest eastern elongation. When an inferior planet is visible before sunrise, it is near its greatest western elongation. The angle of the maximum elongation (east or west) for Mercury is between 18° and 28°, while that for Venus is between 45° and 47°. These values vary because the planetary orbits are elliptical rather than perfectly circular. Another factor contributing to this inconsistency is orbital inclination, in which each planet's orbital plane is slightly tilted relative to a reference plane, like the ecliptic and invariable planes.
Astronomical tables and websites, such as Heavens-Above, forecast when and where the planets reach their next maximum elongations.
Elongation period.
Greatest elongations of a planet happen periodically, with a greatest eastern elongation followed by a greatest western elongation, and "vice versa". The period depends on the relative angular velocity of Earth and the planet, as seen from the Sun. The time it takes to complete this period is the synodic period of the planet.
Let "T" be the period (for example the time between two greatest eastern elongations), "ω" be the relative angular velocity, "ω"e Earth's angular velocity and "ω"p the planet's angular velocity. Then
formula_0
where "T"e and "T"p are Earth's and the planet's years (i.e. periods of revolution around the Sun, called sidereal periods).
For example, Venus's year (sidereal period) is 225 days, and Earth's is 365 days. Thus Venus's synodic period, which gives the time between two successive greatest eastern elongations, is 584 days; the same applies to the western counterparts.
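As a quick numerical check of the relation above (a sketch only, using approximate sidereal periods), the rounded values 225 and 365 days give about 587 days, while slightly more precise periods recover the quoted 584 days:
<syntaxhighlight lang="python">
# Synodic period from 1/T = 1/T_venus - 1/T_earth (approximate sidereal periods in days).
T_earth = 365.256
T_venus = 224.701

T_synodic = 1.0 / (1.0 / T_venus - 1.0 / T_earth)
print(round(T_synodic, 1))    # about 583.9 days, i.e. the ~584 days quoted above
</syntaxhighlight>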
These values are approximate, because (as mentioned above) the planets do not have perfectly circular, coplanar orbits. When a planet is closer to the Sun it moves faster than when it is further away, so exact determination of the date and time of greatest elongation requires a much more complicated analysis of orbital mechanics.
Of superior planets.
Superior planets, dwarf planets and asteroids undergo a different cycle. After conjunction, such an object's elongation continues to increase until it approaches a maximum value larger than 90° (impossible with inferior planets), which is known as "opposition" and can also be examined as a heliocentric conjunction with Earth. This is archetypally very near 180°. As seen by an observer on the superior planet at opposition, the Earth appears at conjunction with the Sun. Technically, the point of opposition can be different from the time and point of maximum elongation. Opposition is defined as the moment when the apparent ecliptic longitude of any such object, as seen from Earth, differs from that of the Sun by 180°; it thus ignores how much the object differs from the plane of the Earth's orbit. For example, Pluto, whose orbit is highly inclined to the essentially matching plane of the planets, has maximum elongation much less than 180° at opposition. The fuller term "maximum apparent elongation from the Sun" provides a more complete definition of elongation.
All superior planets are most conspicuous at their oppositions because they are near, or at, their closest to Earth and are also above the horizon all night. The variation in magnitude caused by changes in elongation is greater the closer the planet's orbit is to the Earth's. Mars' magnitude in particular changes with elongation: it can be as low as +1.8 when in conjunction near aphelion but at a rare favourable opposition it is as high as −2.9, which translates to seventy-five times brighter than its minimum brightness. As one moves further out, the difference in magnitude that correlates to the difference in elongation gradually falls. At opposition, the brightness of Jupiter from Earth ranges 3.3-fold; whereas that of Uranus – the most distant Solar System body visible to the naked eye – ranges by 1.7 times.
Since asteroids travel in an orbit not much larger than the Earth's, their magnitude can vary greatly depending on elongation. More than a dozen objects in the asteroid belt can be seen with 10×50 binoculars at an average opposition, but of these only Ceres and Vesta remain above the binocular limit of +9.5 even at the least favourable points of their orbits (smallest elongations).
A quadrature occurs when the position of a body (moon or planet) is such that its elongation is 90° or 270°; i.e. the body-earth-sun angle is 90°.
Of moons of other planets.
Sometimes elongation may instead refer to the angular distance of a moon of another planet from its central planet, for instance the angular distance of Io from Jupiter. Here we can also talk about "greatest eastern elongation" and "greatest western elongation". In the case of the moons of Uranus, studies often deal with "greatest northern elongation" and "greatest southern elongation" instead, due to the very high inclination of Uranus' axis of rotation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T = {2\\pi\\over \\omega} = {2\\pi\\over \\omega_\\mathrm{p} - \\omega_\\mathrm{e}} = {2\\pi\\over {2\\pi\\over T_\\mathrm{p}} - {2\\pi\\over T_\\mathrm{e}}} = {1\\over {{1\\over T_\\mathrm{p}} - {1\\over T_\\mathrm{e} }}}\n= {T_\\mathrm{e} \\over {T_\\mathrm{e} \\over T_\\mathrm{p}} - 1} "
}
] |
https://en.wikipedia.org/wiki?curid=143681
|
14368398
|
Capacity of a set
|
In mathematics, the capacity of a set in Euclidean space is a measure of the "size" of that set. Unlike, say, Lebesgue measure, which measures a set's volume or physical extent, capacity is a mathematical analogue of a set's ability to hold electrical charge. More precisely, it is the capacitance of the set: the total charge a set can hold while maintaining a given potential energy. The potential energy is computed with respect to an idealized ground at infinity for the harmonic or Newtonian capacity, and with respect to a surface for the condenser capacity.
Historical note.
The notion of capacity of a set and of "capacitable" set was introduced by Gustave Choquet in 1950: for a detailed account, see reference .
Definitions.
Condenser capacity.
Let Σ be a closed, smooth, ("n" − 1)-dimensional hypersurface in "n"-dimensional Euclidean space formula_0, "n" ≥ 3; "K" will denote the "n"-dimensional compact (i.e., closed and bounded) set of which Σ is the boundary. Let "S" be another ("n" − 1)-dimensional hypersurface that encloses Σ: in reference to its origins in electromagnetism, the pair (Σ, "S") is known as a condenser. The condenser capacity of Σ relative to "S", denoted "C"(Σ, "S") or cap(Σ, "S"), is given by the surface integral
formula_1
where: "u" is the unique harmonic function defined on the region "D" between Σ and "S", with the boundary conditions "u"("x") = 1 on Σ and "u"("x") = 0 on "S"; "S′" is any intermediate hypersurface between Σ and "S"; ν is the outward unit normal to "S′" and
formula_2
is the normal derivative of "u" across "S′"; and σn is the surface area of the unit sphere in formula_0.
"C"(Σ, "S") can be equivalently defined by the volume integral
formula_3
The condenser capacity also has a variational characterization: "C"(Σ, "S") is the infimum of the Dirichlet's energy functional
formula_4
over all continuously differentiable functions "v" on "D" with "v"("x") = 1 on Σ and "v"("x") = 0 on "S".
Harmonic capacity.
Heuristically, the harmonic capacity of "K", the region bounded by Σ, can be found by taking the condenser capacity of Σ with respect to infinity. More precisely, let "u" be the harmonic function in the complement of "K" satisfying "u" = 1 on Σ and "u"("x") → 0 as "x" → ∞. Thus "u" is the Newtonian potential of the simple layer Σ. Then the harmonic capacity or Newtonian capacity of "K", denoted "C"("K") or cap("K"), is then defined by
formula_5
If "S" is a rectifiable hypersurface completely enclosing "K", then the harmonic capacity can be equivalently rewritten as the integral over "S" of the outward normal derivative of "u":
formula_6
The harmonic capacity can also be understood as a limit of the condenser capacity. To wit, let "S""r" denote the sphere of radius "r" about the origin in formula_0. Since "K" is bounded, for sufficiently large "r", "S""r" will enclose "K" and (Σ, "S""r") will form a condenser pair. The harmonic capacity is then the limit as "r" tends to infinity:
formula_7
The harmonic capacity is a mathematically abstract version of the electrostatic capacity of the conductor "K" and is always non-negative and finite: 0 ≤ "C"("K") < +∞.
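A standard worked example helps to fix the conventions. For the closed ball of radius "R" in three dimensions the capacity potential is "u"("x") = "R"/|"x"|, and the un-normalised volume-integral definition above gives "C" = 4π"R" (dividing by ("n" − 2)σn = 4π, as in the condenser normalisation, gives simply "R"). The following Python sketch confirms the radial integral numerically; the truncation radius is an arbitrary choice for the example.
<syntaxhighlight lang="python">
import numpy as np

R = 2.0
r = np.linspace(R, 2000.0, 400_001)                 # truncate the unbounded exterior domain
integrand = 4.0 * np.pi * r**2 * (R**2 / r**4)      # area of spherical shells times |grad u|^2

capacity = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))   # trapezoidal rule
print(capacity, 4.0 * np.pi * R)                    # both approximately 25.1
</syntaxhighlight>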
The Wiener capacity or Robin constant "W(K)" of "K" is given by
formula_8
Logarithmic capacity.
In two dimensions, the capacity is defined as above, but dropping the factor of formula_9 in the definition:
formula_10
This is often called the logarithmic capacity; the term "logarithmic" arises because the potential function goes from being an inverse power to a logarithm in the formula_11 limit. This is articulated below. It may also be called the conformal capacity, in reference to its relation to the conformal radius.
Properties.
The harmonic function "u" is called the capacity potential, the Newtonian potential when formula_12 and the logarithmic potential when formula_13. It can be obtained via a Green's function as
formula_14
with "x" a point exterior to "S", and
formula_15
when formula_16 and
formula_17
for formula_13.
The measure formula_18 is called the capacitary measure or equilibrium measure. It is generally taken to be a Borel measure. It is related to the capacity as
formula_19
The variational definition of capacity over the Dirichlet energy can be re-expressed as
formula_20
with the infimum taken over all positive Borel measures formula_21 concentrated on "K", normalized so that formula_22, and where formula_23 is the energy integral
formula_24
Generalizations.
The characterization of the capacity of a set as the minimum of an energy functional achieving particular boundary values, given above, can be extended to other energy functionals in the calculus of variations.
Divergence form elliptic operators.
Solutions to a uniformly elliptic partial differential equation with divergence form
formula_25
are minimizers of the associated energy functional
formula_26
subject to appropriate boundary conditions.
The capacity of a set "E" with respect to a domain "D" containing "E" is defined as the infimum of the energy over all continuously differentiable functions "v" on "D" with "v"("x") = 1 on "E" and "v"("x") = 0 on the boundary of "D".
The minimum energy is achieved by a function known as the "capacitary potential" of "E" with respect to "D", and it solves the obstacle problem on "D" with the obstacle function provided by the indicator function of "E". The capacitary potential is alternately characterized as the unique solution of the equation with the appropriate boundary conditions.
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "C(\\Sigma, S) = - \\frac1{(n - 2) \\sigma_{n}} \\int_{S'} \\frac{\\partial u}{\\partial \\nu}\\,\\mathrm{d}\\sigma',"
},
{
"math_id": 2,
"text": "\\frac{\\partial u}{\\partial \\nu} (x) = \\nabla u (x) \\cdot \\nu (x)"
},
{
"math_id": 3,
"text": "C(\\Sigma, S) = \\frac1{(n - 2) \\sigma_{n}} \\int_{D} | \\nabla u |^{2}\\mathrm{d}x."
},
{
"math_id": 4,
"text": "I[v] = \\frac1{(n - 2) \\sigma_{n}} \\int_{D} | \\nabla v |^{2}\\mathrm{d}x"
},
{
"math_id": 5,
"text": "C(K) = \\int_{\\mathbb{R}^n\\setminus K} |\\nabla u|^2\\mathrm{d}x."
},
{
"math_id": 6,
"text": "C(K) = \\int_S \\frac{\\partial u}{\\partial\\nu}\\,\\mathrm{d}\\sigma."
},
{
"math_id": 7,
"text": "C(K) = \\lim_{r \\to \\infty} C(\\Sigma, S_{r})."
},
{
"math_id": 8,
"text": "C(K) = e^{-W(K)}"
},
{
"math_id": 9,
"text": "(n-2)"
},
{
"math_id": 10,
"text": "C(\\Sigma, S) \n= - \\frac1{2\\pi} \\int_{S'} \\frac{\\partial u}{\\partial \\nu}\\,\\mathrm{d}\\sigma'\n= \\frac1{2\\pi} \\int_{D} | \\nabla u |^{2}\\mathrm{d}x\n"
},
{
"math_id": 11,
"text": "n\\to 2"
},
{
"math_id": 12,
"text": "n \\ge 3"
},
{
"math_id": 13,
"text": "n=2"
},
{
"math_id": 14,
"text": "u(x)=\\int_S G(x-y)d\\mu(y)"
},
{
"math_id": 15,
"text": "G(x-y)=\\frac1{|x-y|^{n-2}}"
},
{
"math_id": 16,
"text": "n\\ge 3"
},
{
"math_id": 17,
"text": "G(x-y)=\\log\\frac1{|x-y|}"
},
{
"math_id": 18,
"text": "\\mu"
},
{
"math_id": 19,
"text": "C(K)=\\int_Sd\\mu(y)=\\mu(S)"
},
{
"math_id": 20,
"text": "C(K)=\\left[\\inf_\\lambda E(\\lambda)\\right]^{-1}"
},
{
"math_id": 21,
"text": "\\lambda"
},
{
"math_id": 22,
"text": "\\lambda(K)=1"
},
{
"math_id": 23,
"text": "E(\\lambda)"
},
{
"math_id": 24,
"text": "E(\\lambda)=\\int\\int_{K\\times K} G(x-y) d\\lambda(x) d\\lambda(y)"
},
{
"math_id": 25,
"text": " \\nabla \\cdot ( A \\nabla u ) = 0 "
},
{
"math_id": 26,
"text": "I[u] = \\int_D (\\nabla u)^T A (\\nabla u)\\,\\mathrm{d}x"
}
] |
https://en.wikipedia.org/wiki?curid=14368398
|
143696
|
Orbital period
|
Time an astronomical object takes to complete one orbit around another object
The orbital period (also revolution period) is the amount of time a given astronomical object takes to complete one orbit around another object. In astronomy, it usually applies to planets or asteroids orbiting the Sun, moons orbiting planets, exoplanets orbiting other stars, or binary stars. It may also refer to the time it takes a satellite orbiting a planet or moon to complete one orbit.
For celestial objects in general, the orbital period is determined by a 360° revolution of one body around its primary, "e.g." Earth around the Sun.
Periods in astronomy are expressed in units of time, usually hours, days, or years.
Small body orbiting a central body.
According to Kepler's Third Law, the orbital period "T" of two point masses orbiting each other in a circular or elliptic orbit is:
formula_0
where: "a" is the orbit's semi-major axis, "G" is the gravitational constant, and "M" is the mass of the more massive body.
For all ellipses with a given semi-major axis the orbital period is the same, regardless of eccentricity.
Conversely, for calculating the distance at which a body has to orbit in order to have a given orbital period "T":
formula_1
For instance, for completing an orbit every 24 hours around a mass of 100 kg, a small body has to orbit at a distance of 1.08 meters from the central body's center of mass.
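The worked example above is easy to reproduce. The following Python sketch evaluates both forms of the relation (the numerical value of "G" is the usual approximate one):
<syntaxhighlight lang="python">
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 100.0              # central mass, kg
T = 24 * 3600.0        # orbital period, s

a = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(round(a, 2))     # about 1.08 m, as stated above

T_check = 2 * math.pi * math.sqrt(a**3 / (G * M))
print(T_check / 3600)  # about 24 hours, recovering the assumed period
</syntaxhighlight>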
In the special case of perfectly circular orbits, the semimajor axis a is equal to the radius of the orbit, and the orbital velocity is constant and equal to
formula_2
where: "G" is the gravitational constant, "M" is the mass of the central body, and "r" is the radius of the (circular) orbit.
This corresponds to 1⁄√2 times (≈ 0.707 times) the escape velocity.
Effect of central body's density.
For a perfect sphere of uniform density, it is possible to rewrite the first equation without measuring the mass as:
formula_3
where: "a" is the orbit's semi-major axis, "r" is the radius of the sphere, "ρ" is its (uniform) density, and "G" is the gravitational constant.
For instance, a small body in circular orbit 10.5 cm above the surface of a sphere of tungsten half a metre in radius would travel at slightly more than 1 mm/s, completing an orbit every hour. If the same sphere were made of lead the small body would need to orbit just 6.7 mm above the surface for sustaining the same orbital period.
When a very small body is in a circular orbit barely above the surface of a sphere of any radius and mean density "ρ" (in kg/m3), the above equation simplifies to (since "M" = "Vρ" = (4/3)π"a"3"ρ")
formula_4
Thus the orbital period in low orbit depends only on the density of the central body, regardless of its size.
So, for the Earth as the central body (or any other spherically symmetric body with the same mean density, about 5,515 kg/m3, e.g. Mercury with 5,427 kg/m3 and Venus with 5,243 kg/m3) we get:
"T" = 1.41 hours
and for a body made of water ("ρ" ≈ 1,000 kg/m3), or bodies with a similar density, e.g. Saturn's moons Iapetus with 1,088 kg/m3 and Tethys with 984 kg/m3 we get:
"T" = 3.30 hours
Thus, as an alternative for using a very small number like "G", the strength of universal gravity can be described using some reference material, such as water: the orbital period for an orbit just above the surface of a spherical body of water is 3 hours and 18 minutes. Conversely, this can be used as a kind of "universal" unit of time if we have a unit of density.
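The density-only formula is likewise easy to evaluate; the following Python sketch reproduces the two figures quoted above (mean densities in kg/m3):
<syntaxhighlight lang="python">
import math

def surface_skimming_period_hours(density):
    # T = sqrt(3*pi / (G*rho)) for a circular orbit just above the surface.
    G = 6.674e-11
    return math.sqrt(3 * math.pi / (G * density)) / 3600

print(round(surface_skimming_period_hours(5515), 2))   # Earth-like density: ~1.41 hours
print(round(surface_skimming_period_hours(1000), 2))   # water: ~3.30 hours
</syntaxhighlight>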
Two bodies orbiting each other.
In celestial mechanics, when both orbiting bodies' masses have to be taken into account, the orbital period "T" can be calculated as follows:
formula_5
where: "a" is the sum of the semi-major axes of the two bodies' orbits about their common barycenter (equivalently, the semi-major axis of either body's orbit relative to the other), "M"1 and "M"2 are the masses of the two bodies, and "G" is the gravitational constant.
In a parabolic or hyperbolic trajectory, the motion is not periodic, and the duration of the full trajectory is infinite.
Related periods.
For celestial objects in general, the orbital period typically refers to the sidereal period, determined by a 360° revolution of one body around its primary relative to the fixed stars projected in the sky. For the case of the Earth orbiting around the Sun, this period is referred to as the sidereal year. This is the orbital period in an inertial (non-rotating) frame of reference.
Orbital periods can be defined in several ways. The tropical period is based more particularly on the position of the parent star; it is the basis for the solar year and, correspondingly, the calendar year.
The synodic period refers not to the orbital relation to the parent star, but to other celestial objects, making it not a merely different approach to the orbit of an object around its parent, but a period of orbital relations with other objects, normally Earth, and their orbits around the Sun. It applies to the elapsed time where planets return to the same kind of phenomenon or location, such as when any planet returns between its consecutive observed conjunctions with or oppositions to the Sun. For example, Jupiter has a synodic period of 398.8 days from Earth; thus, Jupiter's opposition occurs once roughly every 13 months.
There are many periods related to the orbits of objects, each of which is often used in the various fields of astronomy and astrophysics; in particular, they must not be confused with other revolving periods such as rotational periods. Examples of common orbital periods include the sidereal, synodic, draconitic, anomalistic, and tropical periods.
Periods can also be defined under different specific astronomical definitions, mostly arising from the small, complex external gravitational influences of other celestial objects. Such variations also include the true placement of the centre of gravity between two astronomical bodies (the barycenter), perturbations by other planets or bodies, orbital resonance, general relativity, etc. Most are investigated by detailed, complex astronomical theories of celestial mechanics, using precise positional observations of celestial objects obtained via astrometry.
Synodic period.
One of the observable characteristics of two bodies which orbit a third body in different orbits, and thus have different orbital periods, is their synodic period, which is the time between conjunctions.
An example of this related period description is the repeated cycles of celestial bodies as observed from the Earth's surface: the synodic period applies to the elapsed time between a planet's returns to the same kind of phenomenon, such as between its consecutive observed conjunctions with, or oppositions to, the Sun. For example, Jupiter has a synodic period of 398.8 days from Earth; thus, Jupiter's opposition occurs once roughly every 13 months.
If the orbital periods of the two bodies around the third are called "T"1 and "T"2, so that "T"1 < "T"2, their synodic period is given by:
formula_6
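For instance, inserting approximate sidereal periods for Earth and Jupiter into this relation reproduces the synodic period quoted above (a sketch only; the input periods are rounded):
<syntaxhighlight lang="python">
T_earth = 365.256      # Earth's sidereal year, days
T_jupiter = 4332.59    # Jupiter's sidereal period, days

T_syn = 1.0 / (1.0 / T_earth - 1.0 / T_jupiter)
print(round(T_syn, 1))   # about 398.9 days, matching the ~398.8 days quoted above
</syntaxhighlight>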
Examples of sidereal and synodic periods.
Table of synodic periods in the Solar System, relative to Earth:
In the case of a planet's moon, the synodic period usually means the Sun-synodic period, namely, the time it takes the moon to complete its illumination phases, completing the solar phases for an astronomer on the planet's surface. The Earth's motion does not determine this value for other planets because an Earth observer is not orbited by the moons in question. For example, Deimos's synodic period is 1.2648 days, 0.18% longer than Deimos's sidereal period of 1.2624 d.
Synodic periods relative to other planets.
The concept of synodic period applies not just to the Earth, but also to other planets as well, and the formula for computation is the same as the one given above. Here is a table which lists the synodic periods of some planets relative to each other:
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T = 2\\pi\\sqrt{\\frac{a^3}{GM}}"
},
{
"math_id": 1,
"text": "a = \\sqrt[3]{\\frac{GMT^2}{4\\pi^2}}"
},
{
"math_id": 2,
"text": "v_\\text{o} = \\sqrt{\\frac{G M}{r}}"
},
{
"math_id": 3,
"text": "T = \\sqrt{\\frac{a^3}{r^3} \\frac{3 \\pi}{G \\rho}}"
},
{
"math_id": 4,
"text": "T = \\sqrt{ \\frac {3\\pi}{G \\rho} }"
},
{
"math_id": 5,
"text": "T= 2\\pi\\sqrt{\\frac{a^3}{G \\left(M_1 + M_2\\right)}}"
},
{
"math_id": 6,
"text": "\\frac{1}{T_\\mathrm{syn}} = \\frac{1}{T_1} - \\frac{1}{T_2}"
}
] |
https://en.wikipedia.org/wiki?curid=143696
|
14369650
|
Strengthening mechanisms of materials
|
Methods have been devised to modify the yield strength, ductility, and toughness of both crystalline and amorphous materials. These strengthening mechanisms give engineers the ability to tailor the mechanical properties of materials to suit a variety of different applications. For example, the favorable properties of steel result from interstitial incorporation of carbon into the iron lattice. Brass, a binary alloy of copper and zinc, has superior mechanical properties compared to its constituent metals due to solution strengthening. Work hardening (such as beating a red-hot piece of metal on an anvil) has also been used for centuries by blacksmiths to introduce dislocations into materials, increasing their yield strengths.
Basic description.
Plastic deformation occurs when large numbers of dislocations move and multiply so as to result in macroscopic deformation. In other words, it is the movement of dislocations in the material which allows for deformation. If we want to enhance a material's mechanical properties (i.e. increase the yield and tensile strength), we simply need to introduce a mechanism which prohibits the mobility of these dislocations. Whatever the mechanism may be (work hardening, grain size reduction, etc.), they all hinder dislocation motion and render the material stronger than previously.
The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms, so this mode of stress relief is energetically favorable. Hence, the hardness and strength (both yield and tensile) critically depend on the ease with which dislocations move. Pinning points, or locations in the crystal that oppose the motion of dislocations, can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned by stress field interactions with other dislocations and solute particles, or by physical barriers from second phase precipitates forming along grain boundaries. There are five main strengthening mechanisms for metals, each of which is a method to prevent dislocation motion and propagation or to make it energetically unfavorable for the dislocation to move. For a material that has been strengthened by some processing method, the amount of force required to start irreversible (plastic) deformation is greater than it was for the original material.
In amorphous materials such as polymers, amorphous ceramics (glass), and amorphous metals, the lack of long range order leads to yielding via mechanisms such as brittle fracture, crazing, and shear band formation. In these systems, strengthening mechanisms do not involve dislocations, but rather consist of modifications to the chemical structure and processing of the constituent material.
The strength of materials cannot infinitely increase. Each of the mechanisms explained below involves some trade-off by which other material properties are compromised in the process of strengthening.
Strengthening mechanisms in metals.
Work hardening.
The primary species responsible for work hardening are dislocations. Dislocations interact with each other by generating stress fields in the material. The interaction between the stress fields of dislocations can impede dislocation motion by repulsive or attractive interactions. Additionally, if two dislocations cross, dislocation line entanglement occurs, causing the formation of a jog which opposes dislocation motion. These entanglements and jogs act as pinning points, which oppose dislocation motion. As both of these processes are more likely to occur when more dislocations are present, there is a correlation between dislocation density and shear strength.
The shear strengthening provided by dislocation interactions can be described by:
formula_0
where formula_1 is a proportionality constant, formula_2 is the shear modulus, formula_3 is the Burgers vector, and formula_4 is the dislocation density.
Dislocation density is defined as the dislocation line length per unit volume:
formula_5
Similarly, the axial strengthening will be proportional to the dislocation density.
formula_6
This relationship does not apply when dislocations form cell structures. When cell structures are formed, the average cell size controls the strengthening effect.
Increasing the dislocation density increases the yield strength which results in a higher shear stress required to move the dislocations. This process is easily observed while working a material (by a process of cold working in metals). Theoretically, the strength of a material with no dislocations will be extremely high (formula_7) because plastic deformation would require the breaking of many bonds simultaneously. However, at moderate dislocation density values of around 107-109 dislocations/m2, the material will exhibit a significantly lower mechanical strength. Analogously, it is easier to move a rubber rug across a surface by propagating a small ripple through it than by dragging the whole rug. At dislocation densities of 1014 dislocations/m2 or higher, the strength of the material becomes high once again. Also, the dislocation density cannot be infinitely high, because then the material would lose its crystalline structure.
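To give a feel for the magnitudes involved, the following Python sketch evaluates the classical Taylor relation Δτ = αGb√ρ, which is the standard form of the dislocation-interaction strengthening described above. The material constants are typical textbook values for copper and are used purely as an illustration.
<syntaxhighlight lang="python">
import math

alpha = 0.3            # proportionality constant (dimensionless), assumed value
G = 48e9               # shear modulus of copper, Pa (approximate)
b = 0.256e-9           # Burgers vector of copper, m (approximate)

for rho in (1e9, 1e12, 1e14, 1e15):                 # dislocation densities, m^-2
    d_tau = alpha * G * b * math.sqrt(rho)          # shear strengthening
    print(f"rho = {rho:.0e} m^-2  ->  strengthening ~ {d_tau / 1e6:.1f} MPa")
</syntaxhighlight>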
Solid solution strengthening and alloying.
For this strengthening mechanism, solute atoms of one element are added to another, resulting in either substitutional or interstitial point defects in the crystal (see Figure on the right). The solute atoms cause lattice distortions that impede dislocation motion, increasing the yield stress of the material. Solute atoms have stress fields around them which can interact with those of dislocations. The presence of solute atoms impart compressive or tensile stresses to the lattice, depending on solute size, which interfere with nearby dislocations, causing the solute atoms to act as potential barriers.
The shear stress required to move dislocations in a material is:
formula_8
where formula_9 is the solute concentration and formula_10 is the strain on the material caused by the solute.
Increasing the concentration of the solute atoms will increase the yield strength of a material, but there is a limit to the amount of solute that can be added, and one should look at the phase diagram for the material and the alloy to make sure that a second phase is not created.
In general, the solid solution strengthening depends on the concentration of the solute atoms, shear modulus of the solute atoms, size of solute atoms, valency of solute atoms (for ionic materials), and the symmetry of the solute stress field. The magnitude of strengthening is higher for non-symmetric stress fields because these solutes can interact with both edge and screw dislocations, whereas symmetric stress fields, which cause only volume change and not shape change, can only interact with edge dislocations.
Precipitation hardening.
In most binary systems, alloying above a concentration given by the phase diagram will cause the formation of a second phase. A second phase can also be created by mechanical or thermal treatments. The particles that compose the second phase precipitates act as pinning points in a similar manner to solutes, though the particles are not necessarily single atoms.
The dislocations in a material can interact with the precipitate atoms in one of two ways (see Figure 2). If the precipitate atoms are small, the dislocations would cut through them. As a result, new surfaces (b in Figure 2) of the particle would get exposed to the matrix and the particle-matrix interfacial energy would increase. For larger precipitate particles, looping or bowing of the dislocations would occur and result in dislocations getting longer. Hence, at a critical radius of about 5 nm, dislocations will preferably cut across the obstacle, while for a radius of 30 nm, the dislocations will readily bow or loop to overcome the obstacle.
The mathematical descriptions are as follows:
For particle bowing-
formula_11
For particle cutting-
formula_12
Dispersion strengthening.
Dispersion strengthening is a type of particulate strengthening in which incoherent precipitates attract and pin dislocations. These particles are typically larger than those in the Orowan precipitation hardening discussed above. The effect of dispersion strengthening is effective at high temperatures whereas precipitation strengthening from heat treatments is typically limited to temperatures much lower than the melting temperature of the material. One common type of dispersion strengthening is oxide dispersion strengthening.
Grain boundary strengthening.
In a polycrystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain boundaries act as an impediment to dislocation motion for the following two reasons:
1. Dislocation must change its direction of motion due to the differing orientation of grains.<br>
2. Discontinuity of slip planes from grain one to grain two.
The stress required to move a dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of dislocations per grain decreases with average grain size (see Figure 3). A lower number of dislocations per grain results in a lower dislocation 'pressure' building up at grain boundaries. This makes it more difficult for dislocations to move into adjacent grains. This relationship is the Hall-Petch relationship and can be mathematically described as follows:
formula_13,
where formula_14 is a constant, formula_15 is the average grain diameter and formula_16 is the original yield stress.
The fact that the yield strength increases with decreasing grain size is accompanied by the caveat that the grain size cannot be decreased infinitely. As the grain size decreases, more free volume is generated, resulting in lattice mismatch. Below approximately 10 nm, the grain boundaries will tend to slide instead, a phenomenon known as grain-boundary sliding. If the grain size gets too small, it becomes more difficult to fit the dislocations in the grain and the stress required to move them is less. It was not possible to produce materials with grain sizes below 10 nm until recently, so the discovery that strength decreases below a critical grain size is still finding new applications.
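To illustrate the trend, the following Python sketch evaluates the standard form of the Hall-Petch relation, σy = σ0 + k/√d, for a few grain sizes. The constants σ0 and k are invented round numbers of a realistic order of magnitude, chosen only to show how refining the grains raises the yield strength (the relation breaks down below roughly 10 nm, as noted above).
<syntaxhighlight lang="python">
import math

sigma_0 = 25e6          # assumed friction stress, Pa
k = 0.15e6              # assumed Hall-Petch coefficient, Pa * m^0.5

for d_um in (100, 10, 1, 0.1):
    d = d_um * 1e-6                                  # grain diameter in metres
    sigma_y = sigma_0 + k / math.sqrt(d)
    print(f"d = {d_um:g} um  ->  yield strength ~ {sigma_y / 1e6:.0f} MPa")
</syntaxhighlight>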
Transformation hardening.
This method of hardening is used for steels.
High-strength steels generally fall into three basic categories, classified by the strengthening mechanism employed.
1- solid-solution-strengthened steels (rephos steels)
2- grain-refined steels or high strength low alloy steels (HSLA)
3- transformation-hardened steels
Transformation-hardened steels are the third type of high-strength steels. These steels use predominantly higher levels of C and Mn along with heat treatment to increase strength. The finished product will have a duplex microstructure of ferrite with varying levels of degenerate martensite. This allows for varying levels of strength. There are three basic types of transformation-hardened steels. These are dual-phase (DP), transformation-induced plasticity (TRIP), and martensitic steels.
The annealing process for dual-phase steels consists of first holding the steel in the alpha + gamma temperature region for a set period of time. During that time C and Mn diffuse into the austenite, leaving a ferrite of greater purity. The steel is then quenched so that the austenite is transformed into martensite, and the ferrite remains on cooling. The steel is then subjected to a temper cycle to allow some level of martensite decomposition. By controlling the amount of martensite in the steel, as well as the degree of temper, the strength level can be controlled. Depending on processing and chemistry, the strength level can range from 350 to 960 MPa.
TRIP steels also use C and Mn, along with heat treatment, in order to retain small amounts of austenite and bainite in a ferrite matrix. Thermal processing for TRIP steels again involves annealing the steel in the alpha + gamma region for a period of time sufficient to allow C and Mn to diffuse into austenite. The steel is then quenched to a point above the martensite start temperature and held there. This allows the formation of bainite, an austenite decomposition product. While at this temperature, more C is allowed to enrich the retained austenite. This, in turn, lowers the martensite start temperature to below room temperature. Upon final quenching, a metastable austenite is retained in the predominantly ferrite matrix along with small amounts of bainite (and other forms of decomposed austenite). This combination of microstructures has the added benefits of higher strengths and resistance to necking during forming. This offers great improvements in formability over other high-strength steels. Essentially, as the TRIP steel is being formed, it becomes much stronger. Tensile strengths of TRIP steels are in the range of 600-960 MPa.
Martensitic steels are also high in C and Mn. These are fully quenched to martensite during processing. The martensite structure is then tempered back to the appropriate strength level, adding toughness to the steel. Tensile strengths for these steels range as high as 1500 MPa.
Strengthening mechanisms in amorphous materials.
Polymer.
Polymers fracture via breaking of inter- and intra molecular bonds; hence, the chemical structure of these materials plays a huge role in increasing strength. For polymers consisting of chains which easily slide past each other, chemical and physical cross linking can be used to increase rigidity and yield strength. In thermoset polymers (thermosetting plastic), disulfide bridges and other covalent cross links give rise to a hard structure which can withstand very high temperatures. These cross-links are particularly helpful in improving tensile strength of materials which contain much free volume prone to crazing, typically glassy brittle polymers. In thermoplastic elastomer, phase separation of dissimilar monomer components leads to association of hard domains within a sea of soft phase, yielding a physical structure with increased strength and rigidity. If yielding occurs by chains sliding past each other (shear bands), the strength can also be increased by introducing kinks into the polymer chains via unsaturated carbon-carbon bonds.
Adding filler materials such as fibers, platelets, and particles is a commonly employed technique for strengthening polymer materials. Fillers such as clay, silica, and carbon network materials have been extensively researched and used in polymer composites in part due to their effect on mechanical properties. Stiffness-confinement effects near rigid interfaces, such as those between a polymer matrix and stiffer filler materials, enhance the stiffness of composites by restricting polymer chain motion. This effect is especially pronounced where fillers are chemically treated to strongly interact with polymer chains, increasing the anchoring of polymer chains to the filler interfaces and thus further restricting the motion of chains away from the interface. Stiffness-confinement effects have been characterized in model nanocomposites, and these studies show that composites with length scales on the order of nanometers increase the effect of the fillers on polymer stiffness dramatically.
Increasing the bulkiness of the monomer unit via incorporation of aryl rings is another strengthening mechanism. The anisotropy of the molecular structure means that these mechanisms are heavily dependent on the direction of applied stress. While aryl rings drastically increase rigidity along the direction of the chain, these materials may still be brittle in perpendicular directions. Macroscopic structure can be adjusted to compensate for this anisotropy. For example, the high strength of Kevlar arises from a stacked multilayer macrostructure where aromatic polymer layers are rotated with respect to their neighbors. When loaded oblique to the chain direction, ductile polymers with flexible linkages, such as oriented polyethylene, are highly prone to shear band formation, so macroscopic structures which place the load parallel to the draw direction would increase strength.
Mixing polymers is another method of increasing strength, particularly with materials that show crazing preceding brittle fracture such as atactic polystyrene (APS). For example, by forming a 50/50 mixture of APS with polyphenylene oxide (PPO), this embrittling tendency can be almost completely suppressed, substantially increasing the fracture strength.
Interpenetrating polymer networks (IPNs), consisting of interlacing crosslinked polymer networks that are not covalently bonded to one another, can lead to enhanced strength in polymer materials. The use of an IPN approach imposes compatibility (and thus macroscale homogeneity) on otherwise immiscible blends, allowing for a blending of mechanical properties. For example, silicone-polyurethane IPNs show increased tear and flexural strength over base silicone networks, while preserving the high elastic recovery of the silicone network at high strains. Increased stiffness can also be achieved by pre-straining polymer networks and then sequentially forming a secondary network within the strained material. This takes advantage of the anisotropic strain hardening of the original network (chain alignment from stretching of the polymer chains) and provides a mechanism whereby the two networks transfer stress to one another due to the imposed strain on the pre-strained network.
Glass.
Many silicate glasses are strong in compression but weak in tension. By introducing compression stress into the structure, the tensile strength of the material can be increased. This is typically done via two mechanisms: thermal treatment (tempering) or chemical bath (via ion exchange).
In tempered glasses, air jets are used to rapidly cool the top and bottom surfaces of a softened (hot) slab of glass. Since the surface cools quicker, there is more free volume at the surface than in the bulk melt. The core of the slab then pulls the surface inward, resulting in an internal compressive stress at the surface. This substantially increases the tensile strength of the material as tensile stresses exerted on the glass must now resolve the compressive stresses before yielding.
formula_17
Alternatively, in chemical treatment, a glass slab containing network formers and modifiers is submerged into a molten salt bath containing ions larger than those present in the modifier. Due to a concentration gradient of the ions, mass transport must take place. As the larger cation diffuses from the molten salt into the surface, it replaces the smaller ion from the modifier. The larger ion squeezing into the surface introduces compressive stress in the glass's surface. A common example is treatment of sodium oxide modified silicate glass in molten potassium nitrate.
Examples of chemically strengthened glass are Gorilla Glass developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation.
Composite strengthening.
Many of the basic strengthening mechanisms can be classified based on their dimensionality. At 0-D there is precipitate and solid solution strengthening, with particulates as the strengthening structure; at 1-D there is work/forest hardening, with line dislocations as the hardening mechanism; and at 2-D there is grain boundary strengthening, with the surface energy of granular interfaces providing strength improvement. The two primary types of composite strengthening, fiber reinforcement and laminar reinforcement, fall in the 1-D and 2-D classes, respectively. The anisotropy of fiber and laminar composite strength reflects these dimensionalities. The primary idea behind composite strengthening is to combine materials with opposite strengths and weaknesses to create a material which transfers load onto the stiffer material but benefits from the ductility and toughness of the softer material.
Fiber reinforcement.
Fiber-reinforced composites (FRCs) consist of a matrix of one material containing parallel embedded fibers. There are two variants of fiber-reinforced composites, one with stiff fibers and a ductile matrix and one with ductile fibers and a stiff matrix. The former variant is exemplified by fiberglass, which contains very strong but delicate glass fibers embedded in a softer plastic matrix resilient to fracture. The latter variant is found in almost all buildings as reinforced concrete with ductile, high tensile-strength steel rods embedded in brittle, high compressive-strength concrete. In both cases, the matrix and fibers have complementary mechanical properties and the resulting composite material is therefore more practical for applications in the real world.
For a composite containing aligned, stiff fibers which span the length of the material and a soft, ductile matrix, the following descriptions provide a rough model.
Four stages of deformation.
The condition of a fiber-reinforced composite under applied tensile stress along the direction of the fibers can be decomposed into four stages from small strain to large strain. Since the stress is parallel to the fibers, the deformation is described by the isostrain condition, i.e., the fiber and matrix experience the same strain. At each stage, the composite stress (formula_18) is given in terms of the volume fractions of the fiber and matrix (formula_19), the Young's moduli of the fiber and matrix (formula_20), the strain of the composite (formula_21), and the stress of the fiber and matrix as read from a stress-strain curve (formula_22).
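As a rough numerical illustration of the isostrain condition in the first stage, where both fiber and matrix deform elastically, the following Python sketch evaluates the rule-of-mixtures composite stress and stiffness. The moduli, volume fractions, and strain used here are assumed example values and are not taken from any particular composite.
E_f = 70e9       # fiber Young's modulus in Pa (assumed, roughly a glass fiber)
E_m = 3e9        # matrix Young's modulus in Pa (assumed, roughly a polymer)
V_f = 0.4        # fiber volume fraction (assumed)
V_m = 1.0 - V_f  # matrix volume fraction
eps_c = 0.002    # composite strain, shared by fiber and matrix under isostrain

sigma_f = E_f * eps_c                    # stress carried by the fibers
sigma_m = E_m * eps_c                    # stress carried by the matrix
sigma_c = V_f * sigma_f + V_m * sigma_m  # composite stress (rule of mixtures)
E_c = V_f * E_f + V_m * E_m              # composite stiffness in this stage

print(sigma_c / 1e6, "MPa")  # about 59.6 MPa
print(E_c / 1e9, "GPa")      # about 29.8 GPa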
Tensile strength.
Due to their heterogeneous nature, FRCs feature multiple tensile strengths (TS), one corresponding to each component. Given the assumptions outlined above, the first tensile strength would correspond to failure of the fibers, with some support from the matrix plastic deformation strength, and the second to failure of the matrix.
formula_28
formula_29
Anisotropy (Orientation effects).
As a result of the aforementioned dimensionality (1-D) of fiber reinforcement, significant anisotropy is observed in its mechanical properties. The following equations model the tensile strength of an FRC as a function of the misalignment angle (formula_30) between the fibers and the applied force, the stresses in the parallel (formula_31) and perpendicular (formula_32°) cases (formula_33), and the shear strength of the matrix (formula_34).
The angle is small enough to maintain load transfer onto the fibers and prevent delamination of fibers "and" the misaligned stress samples a slightly larger cross-sectional area of the fiber so the strength of the fiber is not just maintained but actually increases compared to the parallel case.
formula_35
The angle is large enough that the load is not effectively transferred to the fibers and the matrix experiences enough strain to fracture.
formula_36
The angle is close to 90°, so most of the load remains in the matrix and thus tensile transverse matrix fracture is the dominant failure condition. This can be seen as complementary to the small angle case, with similar form but with an angle formula_37.
formula_38
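As an illustration, the following Python sketch evaluates the three expressions above and takes the lowest candidate as the operative tensile strength at each misalignment angle, since the failure mode that requires the least stress governs. The values assigned to the parallel strength, transverse strength, and matrix shear strength are assumed for demonstration only.
import math

sigma_parallel = 1000.0  # longitudinal strength in MPa (assumed)
sigma_perp = 50.0        # transverse strength in MPa (assumed)
tau_my = 40.0            # matrix shear strength in MPa (assumed)

def tensile_strength(theta_deg):
    # Candidate strengths from the three failure modes; the smallest governs.
    t = math.radians(theta_deg)
    candidates = []
    if math.cos(t) > 0.0:
        candidates.append(sigma_parallel / math.cos(t) ** 2)     # fiber failure
    if math.sin(t) > 0.0 and math.cos(t) > 0.0:
        candidates.append(tau_my / (math.sin(t) * math.cos(t)))  # matrix shear
    if math.sin(t) > 0.0:
        candidates.append(sigma_perp / math.sin(t) ** 2)         # transverse matrix failure
    return min(candidates)

for angle in (1, 5, 15, 45, 75, 89):
    print(angle, round(tensile_strength(angle), 1), "MPa")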
Applications.
Strengthening of materials is useful in many applications. A primary application of strengthened materials is for construction. In order to have stronger buildings and bridges, one must have a strong frame that can support high tensile or compressive load and resist plastic deformation. The steel frame used to make the building should be as strong as possible so that it does not bend under the entire weight of the building. Polymeric roofing materials would also need to be strong so that the roof does not cave in when there is build-up of snow on the rooftop.
Research is also currently being done to increase the strength of metallic materials through the addition of bonded polymer materials such as carbon fiber reinforced polymer (CFRP).
Current research.
Molecular dynamics simulation assisted studies.
Molecular dynamics (MD) simulation has been widely applied in materials science, as it can yield information about the structure, properties, and dynamics on the atomic scale that cannot be easily resolved with experiments. The fundamental mechanism behind MD simulation is based on classical mechanics, in which the force exerted on a particle is given by the negative gradient of the potential energy with respect to the particle position. Therefore, a standard procedure to conduct an MD simulation is to divide time into discrete time steps and solve the equations of motion over these intervals repeatedly to update the positions and energies of the particles. Direct observation of atomic arrangements and energetics of particles on the atomic scale makes MD a powerful tool for studying microstructural evolution and strengthening mechanisms.
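As a minimal sketch of the procedure just described, the following Python code advances a single particle in a one-dimensional harmonic potential using the velocity-Verlet scheme, with the force obtained as the negative gradient of the potential energy. The potential, mass, and step size are arbitrary illustrative choices and not the setups used in the studies cited below.
def force(x, k=1.0):
    # F = -dU/dx for the assumed potential U(x) = 0.5 * k * x**2
    return -k * x

m = 1.0          # particle mass (arbitrary units)
dt = 0.01        # discrete time step
x, v = 1.0, 0.0  # initial position and velocity
f = force(x)

for _ in range(1000):
    x += v * dt + 0.5 * (f / m) * dt ** 2  # update position
    f_new = force(x)                       # force at the new position
    v += 0.5 * (f + f_new) / m * dt        # update velocity
    f = f_new

print(x, v)  # the particle stays on its oscillation orbit (energy is conserved)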
Grain boundary strengthening.
There have been extensive studies on different strengthening mechanisms using MD simulation. These studies reveal microstructural evolution that cannot be easily observed in an experiment or predicted by a simplified model. Han et al. investigated the grain boundary strengthening mechanism and the effects of grain size in nanocrystalline graphene through a series of MD simulations. Previous studies observed inconsistent grain size dependence of the strength of graphene at nanometer length scales, and the conclusions remained unclear. Therefore, Han et al. utilized MD simulation to observe the structural evolution of graphene with nanosized grains directly. The nanocrystalline graphene samples were generated with random shapes and distribution to simulate well-annealed polycrystalline samples. The samples were then loaded with uniaxial tensile stress, and the simulations were carried out at room temperature. By decreasing the grain size of graphene, Han et al. observed a transition from an inverse pseudo Hall-Petch behavior to pseudo Hall-Petch behavior, with a critical grain size of 3.1 nm. Based on the arrangement and energetics of the simulated particles, the inverse pseudo Hall-Petch behavior can be attributed to the creation of stress concentration sites due to the increase in the density of grain boundary junctions. Cracks then preferentially nucleate at these sites and the strength decreases. However, when the grain size is below the critical value, the stress concentration at the grain boundary junctions decreases because of stress cancellation between 5-membered and 7-membered ring defects. This cancellation helps graphene sustain the tensile load and exhibit a pseudo Hall-Petch behavior. This study explains the previously inconsistent experimental observations and provides an in-depth understanding of the grain boundary strengthening mechanism of nanocrystalline graphene, which cannot be easily obtained from either in-situ or ex-situ experiments.
Precipitate strengthening.
There are also MD studies done on precipitate strengthening mechanisms. Shim et al. applied MD simulations to study the precipitate strengthening effects of nanosized body-centered-cubic (bcc) Cu on face-centered-cubic (fcc) Fe. As discussed in the previous section, the precipitate strengthening effects are caused by the interaction between dislocations and precipitates. Therefore, the characteristics of dislocation play an important role on the strengthening effects. It is known that a screw dislocation in bcc metals has very complicated features, including a non-planar core and the twinning-anti-twinning asymmetry. This complicates the strengthening mechanism analysis and modeling and it cannot be easily revealed by high resolution electron microscopy. Thus, Shim et al. simulated coherent bcc Cu precipitates with diameters ranging from 1 to 4 nm embedded in the fcc Fe matrix. A screw dislocation is then introduced and driven to glide on a {112} plane by an increasing shear stress until it detaches from the precipitates. The shear stress that causes the detachment is regarded as the critical resolved shear stress (CRSS). Shim et al. observed that the screw dislocation velocity in the twinning direction is 2-4 times larger than that in the anti-twinning direction. The reduced velocity in the anti-twinning direction is mainly caused by a transition in the screw dislocation glide from the kink-pair to the cross-kink mechanism. In contrast, a screw dislocation overcomes the precipitates of 1–3.5 nm by shearing in the twinning direction. In addition, it also has been observed that the screw dislocation detachment mechanism with the larger, transformed precipitates involves annihilation-and-renucleation and Orowan looping in the twinning and anti-twinning direction, respectively. To fully characterize the involved mechanisms, it requires intensive transmission electron microscopy analysis and it is normally hard to give a comprehensive characterization.
Solid solution strengthening and alloying.
A similar study has been done by Zhang et al. on studying the solid solution strengthening of Co, Ru, and Re of different concentrations in fcc Ni. The edge dislocation was positioned at the center of Ni and its slip system was set to be <110> {111}. Shear stress was then applied to the top and bottom surfaces of the Ni with a solute atom (Co, Ru, or Re) embedded at the center at 300 K. Previous studies have shown that the general view of size and modulus effects cannot fully explain the solid solution strengthening caused by Re in this system due to their small values. Zhang et al. took a step further to combine the first-principle DFT calculations with MD to study the influence of stacking fault energy (SFE) on strengthening, as partial dislocations can easily form in this material structure. MD simulation results indicate that Re atoms strongly drag to edge dislocation motion and the DFT calculation reveals a dramatic increase in SFE, which is due to the interaction between host atoms and solute atoms located in the slip plane. Further, similar relations have also been found in fcc Ni embedded with Ru and Co.
Limitation of the MD studies of strengthening mechanisms.
These studies are good examples of how the MD method can assist the study of strengthening mechanisms and provide insights on the atomic scale. However, it is important to note the limitations of the method.
To obtain accurate MD simulation results, it is essential to build a model that properly describes the interatomic potential based on bonding. The interatomic potentials are approximations rather than exact descriptions of interactions. The accuracy of the description varies significantly with the system and complexity of the potential form. For example, if the bonding is dynamic, which means that there is a change in bonding depending on atomic positions, a dedicated interatomic potential is required to enable the MD simulation to yield accurate results. Therefore, interatomic potentials need to be tailored based on bonding. The following interatomic potential models are commonly used in materials science: Born-Mayer potential, Morse potential, Lennard-Jones potential, and Mie potential. Although they give very similar results for the variation of potential energy with respect to the particle position, there is a non-negligible difference in their repulsive tails. These characteristics make each of them better suited to describing material systems with particular types of chemical bonding.
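For illustration, the following Python sketch evaluates two of the pair potentials named above, Lennard-Jones and Morse, with parameters deliberately chosen (as an assumption, not a fitted model) so that both wells share the same depth and minimum position; the remaining differences in the repulsive wall and the tail are what make each form better suited to particular bond types.
import math

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    # 12-6 Lennard-Jones potential: minimum of depth epsilon at r = 2**(1/6) * sigma
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def morse(r, depth=1.0, a=1.0, r0=2.0 ** (1.0 / 6.0)):
    # Morse potential: minimum of depth `depth` at r = r0, width controlled by a
    return depth * (1.0 - math.exp(-a * (r - r0))) ** 2 - depth

for r in (0.95, 1.12, 1.5, 2.0, 3.0):
    print(r, round(lennard_jones(r), 3), round(morse(r), 3))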
In addition to inherent errors in interatomic potentials, the number of atoms and the number of time steps in MD are limited by the available computational power. Nowadays it is common to simulate an MD system containing several million atoms, and even larger simulations are possible. However, this still limits the length scale of the simulation to roughly a micron in size. The time steps in MD are also very small, and a long simulation will only yield results on the time scale of a few nanoseconds. To further extend the simulation time, it is common to apply a bias potential that changes the barrier height, thereby accelerating the dynamics. This method is called hyperdynamics. The proper application of this method can typically extend simulation times to microseconds.
Nanostructure fabrication for material strengthening.
Based on the strengthening mechanisms discussed in the previous sections, current work also aims at enhancing strength by purposely fabricating nanostructures in materials. Several representative methods are introduced here, including hierarchical nanotwinned structures, pushing the limit of grain size for strengthening, and dislocation engineering.
Hierarchical nanotwinned structures.
As mentioned above, hindering dislocation motion provides substantial strengthening to materials. Nanoscale twins, crystalline regions related by symmetry, can effectively block dislocation motion due to the change in microstructure at the interface. The formation of hierarchical nanotwinned structures pushes this hindrance effect to the extreme through the construction of a complex 3D nanotwinned network. Thus, the careful design of hierarchical nanotwinned structures is of great importance for developing materials with very high strength. For instance, Yue et al. constructed a diamond composite with a hierarchically nanotwinned structure by manipulating the synthesis pressure. The obtained composite showed higher strength than typical engineering metals and ceramics.
Pushing the limit of grain size for strengthening.
The Hall-Petch effect shows that the yield strength of materials increases with decreasing grain size. However, many researchers have found that nanocrystalline materials soften when the grain size decreases below a critical point; this is called the inverse Hall-Petch effect. The interpretation of this phenomenon is that extremely small grains are not able to support the dislocation pileups that provide extra stress concentration in larger grains. At this point, the strengthening mechanism changes from dislocation-dominated strain hardening to growth softening and grain rotation. Typically, the inverse Hall-Petch effect occurs at grain sizes ranging from 10 nm to 30 nm, making it hard for nanocrystalline materials to achieve high strength. To push the limit of grain size for strengthening, grain rotation and growth can be hindered by grain boundary stabilization.
The construction of nanolaminated structures with low-angle grain boundaries is one method to obtain ultrafine-grained materials with ultrahigh strength. Lu et al. applied a very high-rate shear deformation with high strain gradients to the top surface layer of a bulk Ni sample and introduced nanolaminated structures. This material exhibits an ultrahigh hardness, higher than that of any reported ultrafine-grained nickel. The exceptional strength results from the presence of low-angle grain boundaries, which have low-energy states efficient for enhancing structural stability.
Another method to stabilize grain boundaries is the addition of nonmetallic impurities. Nonmetallic impurities often aggregate at grain boundaries and can affect the strength of materials by changing the grain boundary energy. Rupert et al. conducted first-principles simulations to study the impact of the addition of common nonmetallic impurities on the Σ5 (310) grain boundary energy in Cu. They claimed that a decrease in the covalent radius of the impurity and an increase in its electronegativity lead to an increase in the grain boundary energy and further strengthen the material. For instance, boron stabilized the grain boundaries by enhancing the charge density among the adjacent Cu atoms, improving the cohesion between the two adjoining grains.
Dislocation engineering.
Previous studies on the impact of dislocation motion on material strengthening mainly focused on high dislocation densities, which are effective for enhancing strength at the cost of reduced ductility. Engineering the structure and distribution of dislocations is a promising route to comprehensively improving the performance of a material.
Solutes tend to aggregate at dislocations and are promising for dislocation engineering. Kimura et al. conducted atom probe tomography and observed the aggregation of niobium atoms at dislocations. The segregation energy was calculated to be almost the same as the grain boundary segregation energy. That is to say, the interaction between niobium atoms and dislocations hindered the recovery of dislocations and thus strengthened the material.
Introducing dislocations with heterogeneous characteristics could also be utilized for material strengthening. Lu et al. introduced ordered oxygen complexes into TiZrHfNb alloy. Unlike the traditional interstitial strengthening, the introduction of the ordered oxygen complexes enhanced the strength of the alloy without the sacrifice of ductility. The mechanism was that the ordered oxygen complexes changed the dislocation motion mode from planar slip to wavy slip and promoted double cross-slip.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta\\tau_d = \\alpha Gb\\sqrt{\\rho_\\perp}"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": " G "
},
{
"math_id": 3,
"text": " b "
},
{
"math_id": 4,
"text": " \\rho_\\perp "
},
{
"math_id": 5,
"text": "\\rho_\\perp = \\frac{\\ell}{\\ell^3}"
},
{
"math_id": 6,
"text": " \\Delta\\sigma_{y} \\propto {Gb\\sqrt{\\rho_\\perp}} "
},
{
"math_id": 7,
"text": " \\sigma \\approx \\frac{G}{10} "
},
{
"math_id": 8,
"text": " \\Delta\\tau = Gb\\sqrt{c} \\epsilon^{3/2} "
},
{
"math_id": 9,
"text": " c "
},
{
"math_id": 10,
"text": " \\epsilon "
},
{
"math_id": 11,
"text": " \\Delta \\tau = {Gb\\over L-2r} "
},
{
"math_id": 12,
"text": " \\Delta \\tau = {\\gamma \\pi r \\over b L} "
},
{
"math_id": 13,
"text": " \\sigma_{y} = \\sigma_{y,0} + {k \\over {d^x}} "
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "d"
},
{
"math_id": 16,
"text": "\\sigma_{y,0}"
},
{
"math_id": 17,
"text": " \\sigma_{y=modified} = \\sigma_{y,0} + \\sigma_{compressive} "
},
{
"math_id": 18,
"text": " \\sigma_{c} "
},
{
"math_id": 19,
"text": " V_{f}, V_{m} "
},
{
"math_id": 20,
"text": " E_{f}, E_{m} "
},
{
"math_id": 21,
"text": " \\epsilon_{c} "
},
{
"math_id": 22,
"text": " \\sigma_{f}(\\epsilon_{c}), \\sigma_{m}(\\epsilon_{c}) "
},
{
"math_id": 23,
"text": " \\sigma_{c} = V_{f}\\epsilon_{c}E_{f} + V_{m}\\epsilon_{c}E_{m} "
},
{
"math_id": 24,
"text": " E_{c} = V_{f}E_{f} + V_{m}E_{m} "
},
{
"math_id": 25,
"text": " \\sigma_{c} = V_{f}\\epsilon_{c}E_{f} + V_{m}\\sigma_{m}(\\epsilon_{c}) "
},
{
"math_id": 26,
"text": " \\sigma_{c} = V_{f}\\sigma_{f}(\\epsilon_{c}) + V_{m}\\sigma_{m}(\\epsilon_{c}) "
},
{
"math_id": 27,
"text": " \\sigma_{c} \\approx V_{m}\\sigma_{m}(\\epsilon_{c}) "
},
{
"math_id": 28,
"text": "TS_1 = V_fTS_f + V_m\\sigma_m(\\epsilon_c) "
},
{
"math_id": 29,
"text": "TS_2 = V_mTS_m "
},
{
"math_id": 30,
"text": "\\theta"
},
{
"math_id": 31,
"text": "\\theta=0"
},
{
"math_id": 32,
"text": "90"
},
{
"math_id": 33,
"text": "\\ \\sigma_{||}, \\sigma_{\\perp}"
},
{
"math_id": 34,
"text": " \\tau_{my} "
},
{
"math_id": 35,
"text": "TS(\\theta) = \\frac{\\sigma_{||}}{\\cos^2(\\theta)}"
},
{
"math_id": 36,
"text": "TS(\\theta) = \\frac{\\tau_{my}}{\\sin(\\theta)\\cos(\\theta)} "
},
{
"math_id": 37,
"text": "90-\\theta"
},
{
"math_id": 38,
"text": "TS(\\theta) = \\frac{\\sigma_{\\perp}}{\\sin^2(\\theta)}"
}
] |
https://en.wikipedia.org/wiki?curid=14369650
|
14369709
|
Schottky anomaly
|
The Schottky anomaly is an effect observed in solid-state physics where the specific heat capacity of a solid at low temperature has a peak. It is called anomalous because the heat capacity usually increases with temperature, or stays constant. It occurs in systems with a limited number of energy levels so that E(T) increases with sharp steps, one for each energy level that becomes available. Since Cv =(dE/dT), it will experience a large peak as the temperature crosses over from one step to the next.
This effect can be explained by looking at the change in entropy of the system. At zero temperature only the lowest energy level is occupied, entropy is zero, and there is very little probability of a transition to a higher energy level. As the temperature increases, there is an increase in entropy and thus the probability of a transition goes up. As the temperature approaches the difference between the energy levels there is a broad peak in the specific heat corresponding to a large change in entropy for a small change in temperature. At high temperatures all of the levels are populated evenly, so there is again little change in entropy for small changes in temperature, and thus a lower specific heat capacity.
formula_1
For a two-level system, the specific heat coming from the Schottky anomaly has the form:
formula_2
where Δ is the energy splitting between the two levels, expressed as a temperature.
This anomaly is usually seen in paramagnetic salts or even ordinary glass (due to paramagnetic iron impurities) at low temperature. At high temperature the paramagnetic spins have many spin states available, but at low temperatures some of the spin states are "frozen out" (having too high energy due to crystal field splitting), and the entropy per paramagnetic atom is lowered.
It was named after Walter H. Schottky.
Details.
In a system where particles can have either a state of energy 0 or formula_3, the expected value of the energy of a particle in the canonical ensemble is:
formula_4
with the inverse temperature formula_5 and the Boltzmann constant formula_6.
The total energy of formula_7 independent particles is thus:
formula_8
The heat capacity is therefore:
formula_9
Plotting formula_0 as a function of temperature, a peak can be seen at formula_10. In this section
formula_11
for the formula_12 in the introductory section.
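As a numerical check, the following Python sketch evaluates the heat capacity derived above, per particle and in units where the Boltzmann constant and the level spacing are both set to 1, and locates its maximum, which falls near the value quoted above.
import math

def schottky_heat_capacity(T, eps=1.0, kB=1.0, N=1.0):
    # Heat capacity of N independent two-level particles with level spacing eps
    x = eps / (kB * T)
    return N * kB * x ** 2 * math.exp(x) / (math.exp(x) + 1.0) ** 2

temperatures = [0.05 + 0.001 * i for i in range(2000)]
T_peak = max(temperatures, key=schottky_heat_capacity)
print(T_peak)  # about 0.417, i.e. the peak sits at kB*T roughly 0.417 times eps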
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "S = \\int_0^T \\! \\left(\\frac{C_{v}}{T}\\right)dT \\,"
},
{
"math_id": 2,
"text": "C_{\\rm Schottky} = R\\left(\\frac{\\Delta}{T}\\right)^{2} \\frac{e^{\\Delta / T}}{[1 + e^{\\Delta / T}]^{2}} \\,"
},
{
"math_id": 3,
"text": "\\epsilon"
},
{
"math_id": 4,
"text": " \\langle \\epsilon \\rangle = \\epsilon \\cdot \\frac {e^{-\\beta \\epsilon}} {1 + e ^ {- \\beta \\epsilon}} = \\frac {\\epsilon} {e ^ {+ \\beta \\epsilon} +1} "
},
{
"math_id": 5,
"text": " \\beta = \\frac {1} {k_\\mathrm{B} T} "
},
{
"math_id": 6,
"text": " k_\\mathrm{B}"
},
{
"math_id": 7,
"text": " N "
},
{
"math_id": 8,
"text": " U = N \\langle \\epsilon \\rangle = \\frac {N \\epsilon} {e ^ {+ \\beta \\epsilon} +1} "
},
{
"math_id": 9,
"text": "C = \\left (\\frac {\\partial U} {\\partial T} \\right)_\\epsilon = - \\frac {1} {k_\\mathrm{B} T^2} \\frac {\\partial U} { \\partial \\beta} = Nk_\\mathrm{B} \\left (\\frac {\\epsilon} {k_\\mathrm{B} T} \\right)^2 \\frac {e^{+ \\frac {\\epsilon} {k_\\mathrm{B} T}}} {\\left (e ^ {+\\frac {\\epsilon} {k_\\mathrm{B} T}} + 1 \\right) ^ 2} "
},
{
"math_id": 10,
"text": " k_\\mathrm{B}T \\approx 0.417 \\epsilon "
},
{
"math_id": 11,
"text": "\\frac {\\epsilon}{k_\\mathrm B} = \\Delta"
},
{
"math_id": 12,
"text": "\\Delta"
}
] |
https://en.wikipedia.org/wiki?curid=14369709
|
14370049
|
Schwarzschild criterion
|
Discovered by Martin Schwarzschild, the Schwarzschild criterion is a criterion in astrophysics according to which a stellar medium is stable against convection when the rate of change of temperature (T) with altitude (Z) satisfies
formula_0
where formula_1 is gravity and formula_2 is the heat capacity at constant pressure.
If a gas is unstable against convection, then an element displaced upwards is buoyant and will keep rising, while an element displaced downwards is denser than its surroundings and will continue to sink. Therefore, the Schwarzschild criterion dictates whether an element of a star will rise or sink if displaced by random fluctuations within the star, or whether the forces the element experiences will return it to its original position.
For the Schwarzschild criterion to hold the displaced element must have a bulk velocity which is highly subsonic. If this is the case then the time over which the pressures surrounding the element changes is much longer than the time it takes for a sound wave to travel through the element and smooth out pressure differences between the element and its surroundings. If this were not the case the element would not hold together as it traveled through the star.
In order to keep rising or sinking in the star the displaced element must not be able to become the same density as the gas surrounding it. In other words, it must respond adiabatically to its surroundings. In order for this to be true it must move fast enough for there to be insufficient time for the element to exchange heat with its surroundings.
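As a simple numerical illustration, the Python sketch below evaluates the criterion for an assumed Earth-like atmosphere of dry air, where g/Cp is roughly 9.8 K per kilometre; the numerical values are standard constants used here only as assumptions for the example.
g = 9.81      # gravitational acceleration in m/s^2 (assumed, Earth-like)
c_p = 1004.0  # specific heat at constant pressure in J/(kg*K), dry air (assumed)

adiabatic_gradient = g / c_p  # about 0.0098 K/m, i.e. roughly 9.8 K per km

def stable_against_convection(dT_dZ):
    # Schwarzschild criterion: stable when -dT/dZ < g/c_p
    return -dT_dZ < adiabatic_gradient

print(stable_against_convection(-0.0065))  # cooling by 6.5 K/km with height: stable (True)
print(stable_against_convection(-0.0120))  # cooling by 12 K/km with height: unstable (False)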
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " -\\frac{dT}{dZ} < \\frac{g}{C_p}"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "C_p"
}
] |
https://en.wikipedia.org/wiki?curid=14370049
|
1437020
|
Guiding center
|
In physics, the motion of an electrically charged particle such as an electron or ion in a plasma in a magnetic field can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation.
Gyration.
If the magnetic field is uniform and all other forces are absent, then the Lorentz force will cause a particle to undergo a constant acceleration perpendicular to both the particle velocity and the magnetic field. This does not affect particle motion parallel to the magnetic field, but results in circular motion at constant speed in the plane perpendicular to the magnetic field. This circular motion is known as the gyromotion. A particle with mass formula_0 and charge formula_1 moving in a magnetic field of strength formula_2 gyrates at a frequency, called the gyrofrequency or cyclotron frequency, of
formula_3
For a speed perpendicular to the magnetic field of formula_4, the radius of the orbit, called the gyroradius or Larmor radius, is
formula_5
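For concreteness, the following Python sketch evaluates the gyrofrequency and gyroradius for an electron and a proton; the field strength and perpendicular speed are assumed example values, and the particle constants are standard SI values.
q = 1.602e-19    # elementary charge in C
m_e = 9.109e-31  # electron mass in kg
m_p = 1.673e-27  # proton mass in kg

B = 1.0          # magnetic field strength in T (assumed)
v_perp = 1.0e5   # speed perpendicular to the field in m/s (assumed)

for name, m in (("electron", m_e), ("proton", m_p)):
    omega_c = q * B / m       # gyrofrequency in rad/s
    rho_L = v_perp / omega_c  # gyroradius (Larmor radius) in m
    print(name, omega_c, rho_L)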
Parallel motion.
Since the magnetic Lorentz force is always perpendicular to the magnetic field, it has no influence (to lowest order) on the parallel motion. In a uniform field with no additional forces, a charged particle will gyrate around the magnetic field according to the perpendicular component of its velocity and drift parallel to the field according to its initial parallel velocity, resulting in a helical orbit. If there is a force with a parallel component, the particle and its guiding center will be correspondingly accelerated.
If the field has a parallel gradient, a particle with a finite Larmor radius will also experience a force in the direction away from the larger magnetic field. This effect is known as the magnetic mirror. While it is closely related to guiding center drifts in its physics and mathematics, it is nevertheless considered to be distinct from them.
General force drifts.
Generally speaking, when there is a force on the particles perpendicular to the magnetic field, then they drift in a direction perpendicular to both the force and the field. If formula_6 is the force on one particle, then the drift velocity is
formula_7
These drifts, in contrast to the mirror effect and the non-uniform "B" drifts, do not depend on finite Larmor radius, but are also present in cold plasmas. This may seem counterintuitive. If a particle is stationary when a force is turned on, where does the motion perpendicular to the force come from and why doesn't the force produce a motion parallel to itself? The answer is the interaction with the magnetic field. The force initially results in an acceleration parallel to itself, but the magnetic field deflects the resulting motion in the drift direction. Once the particle is moving in the drift direction, the magnetic field deflects it back against the external force, so that the average acceleration in the direction of the force is zero. There is, however, a one-time displacement in the direction of the force equal to ("f"/"m")"ω"c−2, which should be considered a consequence of the polarization drift (see below) while the force is being turned on. The resulting motion is a cycloid. More generally, the superposition of a gyration and a uniform perpendicular drift is a trochoid.
All drifts may be considered special cases of the force drift, although this is not always the most useful way to think about them. The obvious cases are electric and gravitational forces. The grad-B drift can be considered to result from the force on a magnetic dipole in a field gradient. The curvature, inertia, and polarisation drifts result from treating the acceleration of the particle as fictitious forces. The diamagnetic drift can be derived from the force due to a pressure gradient. Finally, other forces such as radiation pressure and collisions also result in drifts.
Gravitational field.
A simple example of a force drift is a plasma in a gravitational field, e.g. the ionosphere. The drift velocity is
formula_8
Because of the mass dependence, the gravitational drift for the electrons can normally be ignored.
The dependence on the charge of the particle implies that the drift direction is opposite for ions as for electrons, resulting in a current. In a fluid picture, it is this current crossed with the magnetic field that provides that force counteracting the applied force.
Electric field.
This drift, often called the formula_9 ("E"-cross-"B") drift, is a special case because the electric force on a particle depends on its charge (as opposed, for example, to the gravitational force considered above). As a result, ions (of whatever mass and charge) and electrons both move in the same direction at the same speed, so there is no net current (assuming quasineutrality of the plasma). In the context of special relativity, in the frame moving with this velocity, the electric field vanishes. The value of the drift velocity is given by
formula_10
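As a sketch of this formula, the following Python code evaluates the drift for assumed crossed fields (E along y, B along z); note that neither the charge nor the mass of the particle appears, so ions and electrons drift together.
def cross(a, b):
    # Cartesian cross product of two 3-vectors
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (0.0, 100.0, 0.0)  # electric field in V/m (assumed)
B = (0.0, 0.0, 0.01)   # magnetic field in T (assumed)

B_squared = sum(c * c for c in B)
v_E = tuple(c / B_squared for c in cross(E, B))
print(v_E)  # roughly (10000.0, 0.0, 0.0): a 10 km/s drift along x for any particle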
Nonuniform E.
If the electric field is not uniform, the above formula is modified to read
formula_11
Nonuniform B.
Guiding center drifts may also result not only from external forces but also from non-uniformities in the magnetic field. It is convenient to express these drifts in terms of the parallel and perpendicular kinetic energies
formula_12
In that case, the explicit mass dependence is eliminated. If the ions and electrons have similar temperatures, then they also have similar, though oppositely directed, drift velocities.
Grad-B drift.
When a particle moves into a larger magnetic field, the curvature of its orbit becomes tighter, transforming the otherwise circular orbit into a cycloid. The drift velocity is
formula_13
Curvature drift.
In order for a charged particle to follow a curved field line, it needs a drift velocity out of the plane of curvature to provide the necessary centripetal force. This velocity is
formula_14
where formula_15 is the radius of curvature pointing outwards, away from the center of the circular arc which best approximates the curve at that point.
More generally, when the direction of the magnetic field sampled by the particle changes, whether in space or in time, the guiding center experiences an inertial drift
formula_16
where formula_17 is the unit vector in the direction of the magnetic field. This drift can be decomposed into the sum of the curvature drift and the term
formula_18
In the important limit of stationary magnetic field and weak electric field, the inertial drift is dominated by the curvature drift term.
Curved vacuum drift.
In the limit of small plasma pressure, Maxwell's equations provide a relationship between gradient and curvature that allows the corresponding drifts to be combined as follows
formula_19
For a species in thermal equilibrium, formula_20 can be replaced by formula_21 (formula_22 for formula_23 and formula_24 for formula_25).
The expression for the grad-B drift above can be rewritten for the case when formula_26 is due to the curvature.
This is most easily done by realizing that in a vacuum, Ampere's Law is
formula_27. In cylindrical coordinates chosen such that the azimuthal direction is parallel to the magnetic field and the radial direction is parallel to the gradient of the field, this becomes
formula_28
Since formula_29 is a constant, this implies that
formula_30
and the grad-B drift velocity can be written
formula_31
Polarization drift.
A time-varying electric field also results in a drift given by
formula_32
Obviously this drift is different from the others in that it cannot continue indefinitely. Normally an oscillatory electric field results in a polarization drift oscillating 90 degrees out of phase. Because of the mass dependence, this effect is also called the inertia drift. Normally the polarization drift can be neglected for electrons because of their relatively small mass.
Diamagnetic drift.
The diamagnetic drift is not actually a guiding center drift. A pressure gradient does not cause any single particle to drift. Nevertheless, the fluid velocity is defined by counting the particles moving through a reference area, and a pressure gradient results in more particles in one direction than in the other. The net velocity of the fluid is given by
formula_33
Drift currents.
With the important exception of the formula_9 drift, the drift velocities of differently charged particles will be different. This difference in velocities results in a current, while the mass dependence of the drift velocity can result in chemical separation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "\\omega_{\\rm c} = \\frac{|q|B}{m} . "
},
{
"math_id": 4,
"text": "v_{\\perp}"
},
{
"math_id": 5,
"text": "\\rho_{\\rm L} =\\frac{ v_{\\perp}}{ \\omega_{\\rm c}} . "
},
{
"math_id": 6,
"text": "\\boldsymbol{F}"
},
{
"math_id": 7,
"text": "\\boldsymbol{v}_f = \\frac{1}{q} \\frac{\\boldsymbol{F}\\times\\boldsymbol{B}}{B^2}."
},
{
"math_id": 8,
"text": "\\boldsymbol{v}_g = \\frac{m}{q} \\frac{\\boldsymbol{g}\\times\\boldsymbol{B}}{B^2}"
},
{
"math_id": 9,
"text": "\\boldsymbol{E}\\times\\boldsymbol{B}"
},
{
"math_id": 10,
"text": "\\boldsymbol{v}_E = \\frac{\\boldsymbol{E}\\times\\boldsymbol{B}}{B^2}"
},
{
"math_id": 11,
"text": "\\boldsymbol{v}_E = \\left( 1 + \\frac{1}{4}\\rho_{\\rm L}^2\\nabla^2 \\right) \\frac{\\boldsymbol{E}\\times\\boldsymbol{B}}{B^2}"
},
{
"math_id": 12,
"text": "\\begin{align}\nK_\\| &= \\tfrac{1}{2}mv_\\|^2 \\\\[1ex]\nK_\\perp &= \\tfrac{1}{2}mv_\\perp^2\n\\end{align}"
},
{
"math_id": 13,
"text": "\\boldsymbol{v}_{\\nabla B} = \\frac{K_\\perp}{qB} \\frac{\\boldsymbol{B}\\times\\nabla B}{B^{2}}"
},
{
"math_id": 14,
"text": "\\boldsymbol{v}_{R }= \\frac{2K_\\|}{qB}\\frac{\\boldsymbol{R}_{c}\\times \\boldsymbol{B}}{R_{c}^{2} B}"
},
{
"math_id": 15,
"text": "\\boldsymbol{R}_{c}"
},
{
"math_id": 16,
"text": "\\boldsymbol{v}_{\\rm inertial} = \\frac{v_{\\parallel}}{\\omega_c}\\, \\hat{\\boldsymbol{b}}\\times\\frac{\\mathrm{d} \\hat{\\boldsymbol{b}} }{\\mathrm{d} t},"
},
{
"math_id": 17,
"text": "\\hat{\\boldsymbol{b}} = \\boldsymbol{B}/B"
},
{
"math_id": 18,
"text": "\\frac{v_\\|}{\\omega_c}\\, \\hat{\\boldsymbol{b}}\\times\\left[\\frac{\\partial\\hat{\\boldsymbol{b}} }{\\partial t} + (\\boldsymbol{v}_E\\cdot\\nabla\\hat{\\boldsymbol{b}})\n\\right]."
},
{
"math_id": 19,
"text": "\\boldsymbol{v}_R + \\boldsymbol{v}_{\\nabla B} = \\frac{2K_\\| + K_\\perp}{qB} \\frac{\\boldsymbol{R}_c\\times\\boldsymbol{B}}{R_c^2 B}"
},
{
"math_id": 20,
"text": "2K_\\|+K_\\perp"
},
{
"math_id": 21,
"text": "2k_\\text{B}T"
},
{
"math_id": 22,
"text": "k_\\text{B}T/2"
},
{
"math_id": 23,
"text": "K_\\|"
},
{
"math_id": 24,
"text": "k_\\text{B}T"
},
{
"math_id": 25,
"text": "K_\\perp"
},
{
"math_id": 26,
"text": "\\nabla B "
},
{
"math_id": 27,
"text": "\\nabla\\times\\boldsymbol{B} = 0 "
},
{
"math_id": 28,
"text": "\\nabla\\times\\boldsymbol{B} = \\frac{1}{r} \\frac{\\partial}{\\partial r} \\left( r B_\\theta \\right) \\hat{z} = 0 "
},
{
"math_id": 29,
"text": " r B_\\theta "
},
{
"math_id": 30,
"text": " \\nabla B = - B \\frac{\\boldsymbol{R}_c}{R_c^2} "
},
{
"math_id": 31,
"text": "\\boldsymbol{v}_{\\nabla B} = -\\frac{K_\\perp}{q} \\frac{\\boldsymbol{B}\\times \\boldsymbol{R}_c}{R_c^2 B^2}"
},
{
"math_id": 32,
"text": "\\boldsymbol{v}_p = \\frac{m}{qB^2}\\frac{d\\boldsymbol{E}}{dt}"
},
{
"math_id": 33,
"text": "\\boldsymbol{v}_D = -\\frac{\\nabla p\\times\\boldsymbol{B}}{qn B^2}"
}
] |
https://en.wikipedia.org/wiki?curid=1437020
|
14370620
|
Apply
|
Function that maps a function and its arguments to the function value
In mathematics and computer science, apply is a function that applies a function to arguments. It is central to programming languages derived from lambda calculus, such as LISP and Scheme, and also in functional languages. It has a role in the study of the denotational semantics of computer programs, because it is a continuous function on complete partial orders. Apply is also a continuous function in homotopy theory, and, indeed underpins the entire theory: it allows a homotopy deformation to be viewed as a continuous path in the space of functions. Likewise, valid mutations (refactorings) of computer programs can be seen as those that are "continuous" in the Scott topology.
The most general setting for apply is in category theory, where it is right adjoint to currying in closed monoidal categories. A special case of this are the Cartesian closed categories, whose internal language is simply typed lambda calculus.
Programming.
In computer programming, apply applies a function to a list of arguments. "Eval" and "apply" are the two interdependent components of the "eval-apply cycle", which is the essence of evaluating Lisp, described in SICP. Function application corresponds to beta reduction in lambda calculus.
Apply function.
Apply is also the name of a special function in many languages, which takes a function and a list, and uses the list as the function's own argument list, as if the function were called with the elements of the list as the arguments. This is important in languages with variadic functions, because this is the only way to call a function with an indeterminate (at compile time) number of arguments.
Common Lisp and Scheme.
In Common Lisp apply is a function that applies a function to a list of arguments (note here that "+" is a variadic function that takes any number of arguments):
Similarly in Scheme:
C++.
In C++, Bind is used either via the std namespace or via the boost namespace.
C# and Java.
In C# and Java, variadic arguments are simply collected in an array. The caller can explicitly pass in an array in place of the variadic arguments. This can only be done for a variadic parameter. It is not possible to apply an array of arguments to a non-variadic parameter without using reflection. An ambiguous case arises should the caller want to pass an array itself as one of the arguments rather than using the array as a "list" of arguments. In this case, the caller should cast the array to codice_0 to prevent the compiler from using the "apply" interpretation.
variadicFunc(arrayOfArgs);
With Java version 8, lambda expressions were introduced. Functions are implemented as objects with a functional interface, an interface with only one abstract method. The standard interface
Function<T,R>
consists of the method (plus some static utility functions):
R apply(T para)
Go.
In Go, typed variadic arguments are simply collected in a slice. The caller can explicitly pass in a slice in place of the variadic arguments, by appending a codice_1 to the slice argument. This can only be done for a variadic parameter. The caller cannot apply an array of arguments to non-variadic parameters without using reflection.
Haskell.
In Haskell, functions may be applied by simple juxtaposition:
func param1 param2 ...
In Haskell, the syntax may also be interpreted to mean that each parameter curries its function in turn. In the above example, "func param1" returns another function accepting one fewer parameter, which is then applied to param2, and so on, until the function has no more parameters.
JavaScript.
In JavaScript, function objects have an codice_2 method, the first argument is the value of the codice_3 keyword inside the function; the second is the list of arguments:
func.apply(null, args);
ES6 adds the spread operator codice_4 which may be used instead of codice_2.
Lua.
In Lua, apply can be written this way:
function apply(f, ...)
return f(...)
end
Perl.
In Perl, arrays, hashes and expressions are automatically "flattened" into a single list when evaluated in a list context, such as in the argument list of a function
@args = (@some_args, @more_args);
func(@args);
func(@some_args, @more_args);
PHP.
In PHP, codice_2 is called codice_7:
call_user_func_array('func_name', $args);
Python and Ruby.
In Python and Ruby, the same asterisk notation used in defining variadic functions is used for calling a function on a sequence and array respectively:
func(*args)
Python originally had an apply function, but this was deprecated in favour of the asterisk in 2.3 and removed in 3.0.
R.
In R, codice_8 constructs and executes a function call from a name or a function and a list of arguments to be passed to it:
f(x1, x2)
do.call(what = f, args = list(x1, x2))
Smalltalk.
In Smalltalk, block (function) objects have a codice_9 method which takes an array of arguments:
aBlock valueWithArguments: args
Tcl.
Since Tcl 8.5, a function can be applied to arguments with the codice_2 command apply func ?arg1 arg2 ...? where the function is a two element list {args body} or a three element list {args body namespace}.
Universal property.
Consider a function formula_0, that is, formula_1 where the bracket notation formula_2 denotes the space of functions from "A" to "B". By means of currying, there is a unique function formula_3.
Then Apply provides the universal morphism
formula_4,
so that
formula_5
or, equivalently one has the commuting diagram
formula_6
More precisely, curry and apply are adjoint functors.
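A small Python sketch may make the relation concrete; the names curry, apply_ and g below are generic illustrations rather than part of any particular library.
def curry(g):
    # Turn g : (X x Y) -> Z into curry(g) : X -> (Y -> Z)
    return lambda x: (lambda y: g(x, y))

def apply_(f, y):
    # Apply : ([Y -> Z] x Y) -> Z, i.e. Apply(f, y) = f(y)
    return f(y)

def g(x, y):
    # an arbitrary function of a pair, used only as an example
    return 10 * x + y

# The universal property: Apply composed with (curry(g) x id) agrees with g on every pair
assert apply_(curry(g)(3), 4) == g(3, 4)
print(apply_(curry(g)(3), 4))  # 34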
The notation formula_2 for the space of functions from "A" to "B" occurs more commonly in computer science. In category theory, however, formula_2 is known as the exponential object, and is written as formula_7. There are other common notational differences as well; for example "Apply" is often called "Eval", even though in computer science, these are not the same thing, with eval distinguished from "Apply", as being the evaluation of the quoted string form of a function with its arguments, rather than the application of a function to some arguments.
Also, in category theory, "curry" is commonly denoted by formula_8, so that formula_9 is written for "curry"("g"). This notation is in conflict with the use of formula_8 in lambda calculus, where lambda is used to denote bound variables. With all of these notational changes accounted for, the adjointness of "Apply" and "curry" is then expressed in the commuting diagram
The articles on exponential object and Cartesian closed category provide a more precise discussion of the category-theoretic formulation of this idea. Thus the use of lambda here is not accidental; the internal language of Cartesian closed categories is simply-typed lambda calculus. The most general possible setting for "Apply" are the closed monoidal categories, of which the cartesian closed categories are an example. In homological algebra, the adjointness of curry and apply is known as tensor-hom adjunction.
Topological properties.
In order theory, in the category of complete partial orders endowed with the Scott topology, both "curry" and "apply" are continuous functions (that is, they are Scott continuous). This property helps establish the foundational validity of the study of the denotational semantics of computer programs.
In algebraic geometry and homotopy theory, "curry" and "apply" are both continuous functions when the space formula_10 of continuous functions from formula_11 to formula_12 is given the compact open topology, and formula_11 is locally compact Hausdorff. This result is very important, in that it underpins homotopy theory, allowing homotopic deformations to be understood as continuous paths in the space of functions.
|
[
{
"math_id": 0,
"text": "g:(X\\times Y)\\to Z"
},
{
"math_id": 1,
"text": "g\\isin [(X\\times Y)\\to Z]"
},
{
"math_id": 2,
"text": "[A\\to B]"
},
{
"math_id": 3,
"text": "\\mbox{curry}(g) :X\\to [Y\\to Z]"
},
{
"math_id": 4,
"text": "\\mbox{Apply}:([Y\\to Z]\\times Y) \\to Z"
},
{
"math_id": 5,
"text": "\\mbox {Apply}(f,y)=f(y)"
},
{
"math_id": 6,
"text": "\\mbox{Apply} \\circ \\left( \\mbox{curry}(g) \\times \\mbox{id}_Y \\right) = g"
},
{
"math_id": 7,
"text": "B^A"
},
{
"math_id": 8,
"text": "\\lambda"
},
{
"math_id": 9,
"text": "\\lambda g"
},
{
"math_id": 10,
"text": "Y^X"
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "Y"
}
] |
https://en.wikipedia.org/wiki?curid=14370620
|
1437696
|
Velocity-addition formula
|
Equation used in relativistic physics
In relativistic physics, a velocity-addition formula is an equation that specifies how to combine the velocities of objects in a way that is consistent with the requirement that no object's speed can exceed the speed of light. Such formulas apply to successive Lorentz transformations, so they also relate different frames. Accompanying velocity addition is a kinematic effect known as Thomas precession, whereby successive non-collinear Lorentz boosts become equivalent to the composition of a rotation of the coordinate system and a boost.
Standard applications of velocity-addition formulas include the Doppler shift, Doppler navigation, the aberration of light, and the dragging of light in moving water observed in the 1851 Fizeau experiment.
The notation employs u as velocity of a body within a Lorentz frame "S", and v as velocity of a second frame "S"′, as measured in "S", and u′ as the transformed velocity of the body within the second frame.
History.
The speed of light in a fluid is slower than the speed of light in vacuum, and it changes if the fluid is moving along with the light. In 1851, Fizeau measured the speed of light in a fluid moving parallel to the light using an interferometer. Fizeau's results were not in accord with the then-prevalent theories. Fizeau experimentally correctly determined the zeroth term of an expansion of the relativistically correct addition law in terms of "V"/"c" as is described below. Fizeau's result led physicists to accept the empirical validity of the rather unsatisfactory theory by Fresnel that a fluid moving with respect to the stationary aether "partially" drags light with it, i.e. the speed is "c"/"n" + "V"(1 − 1/"n"2) instead of "c"/"n" + "V", where "c" is the speed of light in the aether, "n" is the refractive index of the fluid, and "V" is the speed of the fluid with respect to the aether.
The aberration of light, of which the easiest explanation is the relativistic velocity addition formula, together with Fizeau's result, triggered the development of theories like Lorentz aether theory of electromagnetism in 1892. In 1905 Albert Einstein, with the advent of special relativity, derived the standard configuration formula ("V" in the "x"-direction) for the addition of relativistic velocities. The issues involving aether were, gradually over the years, settled in favor of special relativity.
Galilean relativity.
It was observed by Galileo that a person on a uniformly moving ship has the impression of being at rest and sees a heavy body falling vertically downward. This observation is now regarded as the first clear statement of the principle of mechanical relativity. Galileo saw that from the point of view of a person standing on the shore, the motion of falling downwards on the ship would be combined with, or added to, the forward motion of the ship. In terms of velocities, it can be said that the velocity of the falling body relative to the shore equals the velocity of that body relative to ship plus the velocity of the ship relative to the shore.
In general for three objects A (e.g. Galileo on the shore), B (e.g. ship), C (e.g. falling body on ship) the velocity vector formula_0 of C relative to A (velocity of falling object as Galileo sees it) is the sum of the velocity formula_1 of C relative to B (velocity of falling object relative to ship) plus the velocity v of B relative to A (ship's velocity away from the shore). The addition here is the vector addition of vector algebra and the resulting velocity is usually represented in the form
formula_2
The cosmos of Galileo consists of absolute space and time and the addition of velocities corresponds to composition of Galilean transformations. The relativity principle is called Galilean relativity. It is obeyed by Newtonian mechanics.
Special relativity.
According to the theory of special relativity, the frame of the ship has a different clock rate and distance measure, and the notion of simultaneity in the direction of motion is altered, so the addition law for velocities is changed. This change is not noticeable at low velocities but as the velocity increases towards the speed of light it becomes important. The addition law is also called a composition law for velocities. For collinear motions, the speed of the object, formula_3, e.g. a cannonball fired horizontally out to sea, as measured from the ship, moving at speed formula_4, would be measured by someone standing on the shore and watching the whole scene through a telescope as
formula_5
The composition formula can take an algebraically equivalent form, which can be easily derived by using only the principle of constancy of the speed of light,
formula_6
The cosmos of special relativity consists of Minkowski spacetime and the addition of velocities corresponds to composition of Lorentz transformations. In the special theory of relativity Newtonian mechanics is modified into relativistic mechanics.
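For illustration, and assuming the usual collinear form u = (v + u′)/(1 + vu′/c2) for formula_5, the following Python sketch (in units where c = 1) shows that composed speeds never exceed the speed of light.
def add_collinear(v, u_prime):
    # relativistic composition of collinear speeds, in units where c = 1
    return (v + u_prime) / (1.0 + v * u_prime)

print(add_collinear(0.5, 0.5))  # 0.8, not the Galilean value 1.0
print(add_collinear(0.9, 0.9))  # about 0.9945, still below the speed of light
print(add_collinear(0.9, 1.0))  # 1.0: light moves at c in every frame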
Standard configuration.
The formulas for boosts in the standard configuration follow most straightforwardly from taking differentials of the inverse Lorentz boost in standard configuration. If the primed frame is travelling with speed formula_4 with Lorentz factor formula_7 in the positive "x"-direction relative to the unprimed frame, then the differentials are
formula_8
Divide the first three equations by the fourth,
formula_9
or
formula_10
which is
Transformation of velocity ("Cartesian components")
formula_11
formula_12
formula_13
in which expressions for the primed velocities were obtained using the standard recipe by replacing v by –v and swapping primed and unprimed coordinates. If coordinates are chosen so that all velocities lie in a (common) "x"–"y" plane, then velocities may be expressed as
formula_14
(see polar coordinates) and one finds
Transformation of velocity ("Plane polar components")
formula_15
formula_16
<templatestyles src="Template:Hidden begin/styles.css"/>Details for u
formula_17
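The boxed Cartesian formulas lend themselves to a direct numerical check. The Python sketch below (illustrative numbers only; the helper name is ours) boosts a primed velocity into the unprimed frame and shows that the resulting speed stays below c, in units where c = 1.

import numpy as np

c = 1.0  # units with c = 1

def boost_velocity(u_prime, v):
    # Transform (ux', uy', uz'), measured in a frame moving with speed v
    # along +x, into the unprimed frame, using the boxed formulas.
    ux, uy, uz = u_prime
    gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)
    denom = 1.0 + v * ux / c**2
    return np.array([(ux + v) / denom,
                     uy / (gamma * denom),
                     uz / (gamma * denom)])

u_prime = np.array([0.5, 0.6, 0.0])   # velocity in the primed (ship) frame
v = 0.8                               # speed of the ship relative to the shore
u = boost_velocity(u_prime, v)
print(u)                  # about [0.929, 0.257, 0.0]
print(np.linalg.norm(u))  # about 0.964, below 1 as required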
The proof as given is highly formal. There are other more involved proofs that may be more enlightening, such as the one below.
<templatestyles src="Math_proof/styles.css" />A proof using 4-vectors and Lorentz transformation matrices
Since a relativistic transformation rotates space and time into each other much as geometric rotations in the plane rotate the "x"- and "y"-axes, it is convenient to use the same units for space and time; otherwise a unit conversion factor, the speed of light, appears throughout relativistic formulae. In a system where lengths and times are measured in the same units, the speed of light is dimensionless and equal to 1. A velocity is then expressed as a fraction of the speed of light.
To find the relativistic transformation law, it is useful to introduce the four-velocities "V" = ("V"0, "V"1, 0, 0), which is the motion of the ship away from the shore, as measured from the shore, and "U′" = ("U′"0, "U′"1, "U′"2, "U′"3) which is the motion of the fly away from the ship, as measured from the ship. The four-velocity is defined to be a four-vector with relativistic length equal to 1, future-directed and tangent to the world line of the object in spacetime. Here, "V"0 corresponds to the time component and "V"1 to the "x" component of the ship's velocity as seen from the shore. It is convenient to take the "x"-axis to be the direction of motion of the ship away from the shore, and the "y"-axis so that the "x"–"y" plane is the plane spanned by the motion of the ship and the fly. This results in several components of the velocities being zero:
"V"2 = "V"3 = "U′"3 = 0
The ordinary velocity is the ratio of the rate at which the space coordinates are increasing to the rate at which the time coordinate is increasing:
formula_18
Since the relativistic length of "V" is 1,
formula_19
so
formula_20
The Lorentz transformation matrix that converts velocities measured in the ship frame to the shore frame is the "inverse" of the transformation described on the Lorentz transformation page, so the minus signs that appear there must be inverted here:
formula_21
This matrix rotates the pure time-axis vector (1, 0, 0, 0) to ("V"0, "V"1, 0, 0), and all its columns are relativistically orthogonal to one another, so it defines a Lorentz transformation.
If a fly is moving with four-velocity "U′" in the ship frame, and it is boosted by multiplying by the matrix above, the new four-velocity in the shore frame is "U" = ("U"0, "U"1, "U"2, "U"3),
formula_22
Dividing by the time component "U"0 and substituting for the components of the four-vectors "U′" and "V" in terms of the components of the three-vectors u′ and v gives the relativistic composition law as
formula_23
The form of the relativistic composition law can be understood as an effect of the failure of simultaneity at a distance. For the parallel component, the time dilation decreases the speed, the length contraction increases it, and the two effects cancel out. The failure of simultaneity means that the fly is changing slices of simultaneity as the projection of u′ onto v. Since this effect is entirely due to the time slicing, the same factor multiplies the perpendicular component, but for the perpendicular component there is no length contraction, so the time dilation multiplies by a factor of <templatestyles src="Fraction/styles.css" />1⁄"V"0 = √(1 − "v"1²).
General configuration.
Starting from the expression in coordinates for "v" parallel to the "x"-axis, expressions for the perpendicular and parallel components can be cast in vector form as follows, a trick which also works for Lorentz transformations of other 3d physical quantities originally set up in standard configuration. Introduce the velocity vector u in the unprimed frame and u′ in the primed frame, and split them into components parallel (∥) and perpendicular (⊥) to the relative velocity vector v (see hide box below) thus
formula_24
then with the usual Cartesian standard basis vectors e"x", e"y", e"z", set the velocity in the unprimed frame to be
formula_25
which gives, using the results for the standard configuration,
formula_26
where · is the dot product. Since these are vector equations, they still have the same form for v in "any" direction. The only difference from the coordinate expressions is that the above expressions refer to "vectors", not components.
One obtains
formula_27
where "α""v" = 1/"γ""v" is the reciprocal of the Lorentz factor. The ordering of operands in the definition is chosen to coincide with that of the standard configuration from which the formula is derived.
<templatestyles src="Template:Hidden begin/styles.css"/>The algebra
formula_28
<templatestyles src="Template:Hidden begin/styles.css"/>Decomposition into parallel and perpendicular components in terms of "V"
Either the parallel or the perpendicular component for each vector needs to be found, since the other component will be eliminated by substitution of the full vectors.
The parallel component of u′ can be found by projecting the full vector into the direction of the relative motion
formula_29
and the perpendicular component of "u"′ can be found from the geometric properties of the cross product,
formula_30
In each case, v/"v" is a unit vector in the direction of relative motion.
The expressions for u|| and u⊥ can be found in the same way. Substituting the parallel component into
formula_31
results in the above equation.
Using an identity in formula_32 and formula_33,
formula_34
and in the forwards (v positive, S → S') direction
formula_35
where the last expression is by the standard vector analysis formula v × (v × u) = (v ⋅ u)v − (v ⋅ v)u. The first expression extends to any number of spatial dimensions, but the cross product is defined in three dimensions only. The objects "A", "B", "C" with "B" having velocity v relative to "A" and "C" having velocity u relative to "A" can be anything. In particular, they can be three frames, or they could be the laboratory, a decaying particle and one of the decay products of the decaying particle.
Properties.
The relativistic addition of 3-velocities is non-linear, so in general
formula_36
for real number "λ", although it is true that
formula_37
Also, due to the last terms, it is in general neither commutative
formula_38
nor associative
formula_39
It deserves special mention that if u and v′ refer to velocities of pairwise parallel frames (primed parallel to unprimed and doubly primed parallel to primed), then, according to Einstein's velocity reciprocity principle, the unprimed frame moves with velocity −u relative to the primed frame, and the primed frame moves with velocity −v′ relative to the doubly primed frame hence (−v′ ⊕ −u) is the velocity of the unprimed frame relative to the doubly primed frame, and one might expect to have u ⊕ v′ = −(−v′ ⊕ −u) by naive application of the reciprocity principle. This does not hold, though the magnitudes are equal. The unprimed and doubly primed frames are "not" parallel, but related through a rotation. This is related to the phenomenon of Thomas precession, and is not dealt with further here.
The norms are given by
formula_40
and
formula_41
<templatestyles src="Math_proof/styles.css" />Proof
formula_42
The reverse formula is found by using the standard procedure of swapping v for −v and u for u′.
It is clear that the non-commutativity manifests itself as an additional "rotation" of the coordinate frame when two boosts are involved, since the norm squared is the same for both orders of boosts.
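These statements are easy to check numerically. In the sketch below (arbitrary, non-collinear illustrative velocities; c = 1; the compose helper repeats the vector formula given earlier and assumes v ≠ 0), u ⊕ v and v ⊕ u come out as different vectors with equal magnitudes, consistent with the interpretation as a relative rotation.

import numpy as np

def compose(v, u):
    # v ⊕ u in units with c = 1; assumes 0 < |v| < 1.
    v, u = np.asarray(v, dtype=float), np.asarray(u, dtype=float)
    alpha = np.sqrt(1.0 - np.dot(v, v))
    dot = np.dot(v, u)
    return (alpha * u + v + (1.0 - alpha) * dot / np.dot(v, v) * v) / (1.0 + dot)

a = np.array([0.7, 0.0, 0.0])
b = np.array([0.0, 0.6, 0.0])

ab, ba = compose(a, b), compose(b, a)
print(ab, ba)                                  # different vectors ...
print(np.linalg.norm(ab), np.linalg.norm(ba))  # ... with the same norm (about 0.821)

# Non-associativity: the two groupings generally differ as well.
print(compose(compose(a, b), a) - compose(a, compose(b, a)))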
The gamma factors for the combined velocities are computed as
formula_43
<templatestyles src="Math_proof/styles.css" />Detailed proof
formula_44
The reverse formula is found by using the standard procedure of swapping v for −v and u for u′.
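The composition rule for the gamma factors can be verified without ever forming the composed velocity explicitly; the check below uses arbitrary illustrative velocities in units with c = 1.

import numpy as np

def gamma(v):
    return 1.0 / np.sqrt(1.0 - np.dot(v, v))

v = np.array([0.3, 0.4, 0.0])
u_prime = np.array([0.1, -0.2, 0.5])

# gamma_u from the composition rule gamma_u = gamma_v * gamma_u' * (1 + v·u')
g_rule = gamma(v) * gamma(u_prime) * (1.0 + np.dot(v, u_prime))

# gamma_u from the norm formula |v ⊕ u'|^2 = [(v+u')^2 - |v × u'|^2] / (1 + v·u')^2
cross = np.cross(v, u_prime)
u2 = (np.dot(v + u_prime, v + u_prime) - np.dot(cross, cross)) / (1.0 + np.dot(v, u_prime))**2
print(g_rule, 1.0 / np.sqrt(1.0 - u2))   # both about 1.311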
Notational conventions.
Notations and conventions for the velocity addition vary from author to author. Different symbols may be used for the operation, or for the velocities involved, and the operands may be switched for the same expression, or the symbols may be switched for the same velocity. A completely separate symbol may also be used for the transformed velocity, rather than the prime used here. Since the velocity addition is non-commutative, one cannot switch the operands or symbols without changing the result.
Examples of alternative notation include:
(using units where c = 1) formula_45
formula_46
formula_47
formula_48
Applications.
Some classical applications of velocity-addition formulas, to the Doppler shift, to the aberration of light, and to the dragging of light in moving water, yielding relativistically valid expressions for these phenomena, are detailed below. It is also possible to use the velocity addition formula, assuming conservation of momentum (by appeal to ordinary rotational invariance), to derive the correct form of the 3-vector part of the momentum four-vector, without resort to electromagnetism or to relativistic versions of the Lagrangian formalism that are a priori not known to be valid. This involves the experimentalist bouncing relativistic billiard balls off each other. This is not detailed here, but see the references.
Fizeau experiment.
When light propagates in a medium, its speed is reduced, in the rest frame of the medium, to "c""m" = <templatestyles src="Fraction/styles.css" />"c"⁄"n""m", where "n""m" is the index of refraction of the medium "m". The speed of light in a medium uniformly moving with speed "V" in the positive "x"-direction as measured in the lab frame is given directly by the velocity addition formulas. For the forward direction (standard configuration, drop index m on "n") one gets,
formula_49
Collecting the largest contributions explicitly,
formula_50
Fizeau found the first three terms. The classical result is the first two terms.
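As a numerical illustration (water with n ≈ 1.33 and an arbitrarily chosen flow speed), the exact composition and the truncated expansions above agree to the expected orders.

c = 299_792_458.0
n = 1.33      # refractive index of water, approximately
V = 5.0       # illustrative flow speed in m/s

exact = (V + c / n) / (1.0 + V / (n * c))                    # full velocity addition
fresnel = c / n + V * (1.0 - 1.0 / n**2)                     # classical result (first two terms)
three_terms = c / n + V * (1.0 - 1.0 / n**2 - V / (n * c))   # the three terms found by Fizeau

print(exact - c / n)        # drag of the light by the moving water, about 2 m/s here
print(exact - fresnel)      # residual of the classical result, of order V^2/(n c)
print(exact - three_terms)  # still smaller residual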
Aberration of light.
Another basic application is to consider the deviation of light, i.e. change of its direction, when transforming to a new reference frame with parallel axes, called aberration of light. In this case, "v"′ = "v" = "c", and insertion in the formula for tan "θ" yields
formula_51
For this case one may also compute sin "θ" and cos "θ" from the standard formulae,
formula_52
<templatestyles src="Template:Hidden begin/styles.css"/>Trigonometry
formula_53
formula_54
the trigonometric manipulations essentially being identical in the cos case to the manipulations in the sin case. Consider the difference,
formula_55
correct to order . Employ in order to make small angle approximations a trigonometric formula,
formula_56
where cos ½("θ" + "θ"′) ≈ cos "θ"′ and sin ½("θ"′ − "θ") ≈ ½("θ"′ − "θ") were used.
Thus the quantity
formula_57
the classical aberration angle, is obtained in the limit "V"/"c" → 0.
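For a concrete sense of scale, the sketch below evaluates the exact and first-order aberration angles for an illustrative speed comparable to the Earth's orbital speed; with cos θ′ + β > 0 the principal arctangent branch is the correct one.

import numpy as np

c = 299_792_458.0
V = 3.0e4                  # illustrative speed in m/s (roughly Earth's orbital speed)
theta_p = np.pi / 2        # direction of the light in the primed frame

beta = V / c
tan_theta = np.sqrt(1.0 - beta**2) * np.sin(theta_p) / (np.cos(theta_p) + beta)
theta = np.arctan(tan_theta)

delta_exact = theta_p - theta             # exact aberration angle
delta_classical = beta * np.sin(theta_p)  # classical (first-order) aberration angle
print(delta_exact, delta_classical)       # both about 1e-4 rad, agreeing to first order in beta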
Relativistic Doppler shift.
Here "velocity components" will be used as opposed to "speed" for greater generality, and in order to avoid perhaps seemingly ad hoc introductions of minus signs. Minus signs occurring here will instead serve to illuminate features when speeds less than that of light are considered.
For light waves in vacuum, time dilation together with a simple geometrical observation alone suffices to calculate the Doppler shift in standard configuration (collinear relative velocity of emitter and observer as well of observed light wave).
All velocities in what follows are parallel to the common positive "x"-direction, so subscripts on velocity components are dropped. In the observer's frame, introduce the geometrical observation
formula_58
as the spatial distance, or wavelength, between two pulses (wave crests), where "T" is the time elapsed between the emission of two pulses. The time elapsed between the passage of two pulses "at the same point in space" is the "time period" τ, and its inverse "ν" = <templatestyles src="Fraction/styles.css" />1⁄"τ" is the observed (temporal) frequency. The corresponding quantities in the emitter's frame are endowed with primes.
For light waves formula_59
and the observed frequency is
formula_60
where "T" = "γ""V""T"′ is standard time dilation formula.
Suppose instead that the wave is not composed of light waves with speed "c", but instead, for easy visualization, bullets fired from a relativistic machine gun, with velocity "s"′ in the frame of the emitter. Then, in general, the geometrical observation is "precisely the same". But now, "s"′ ≠ "s", and "s" is given by velocity addition,
formula_61
The calculation is then essentially the same, except that here it is more easily carried out upside down, with "τ" = <templatestyles src="Fraction/styles.css" />1⁄"ν" instead of ν. One finds
formula_62
<templatestyles src="Template:Hidden begin/styles.css"/>Details in derivation
formula_63
Observe that in the typical case, the "s"′ that enters is "negative". The formula has general validity though. When "s"′ = −"c", the formula reduces to the formula calculated directly for light waves above,
formula_64
If the emitter is not firing bullets in empty space, but emitting waves in a medium, then the "formula still applies", but now, it may be necessary to first calculate "s"′ from the velocity of the emitter relative to the medium.
Returning to the case of a light emitter, in the case that the observer and emitter are not collinear, the result is only slightly modified,
formula_65
where θ is the angle between the light emitter and the observer. This reduces to the previous result for collinear motion when "θ" = 0, but for transverse motion corresponding to "θ" = "π"/2, the frequency is shifted by the Lorentz factor. This does not happen in the classical optical Doppler effect.
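The collinear and transverse cases of this formula for light (s′ = −c) are easy to tabulate; the sketch below uses an arbitrary illustrative recession speed and checks the collinear case against the square-root form derived earlier.

import numpy as np

beta = 0.1                          # illustrative speed in units of c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
nu_emit = 1.0                       # emitted frequency, arbitrary units

def observed(theta):
    # nu = gamma * nu' * (1 + (V/s') cos(theta)) with s' = -c
    return gamma * nu_emit * (1.0 - beta * np.cos(theta))

print(observed(0.0))                         # collinear case, about 0.905
print(np.sqrt((1.0 - beta) / (1.0 + beta)))  # same value from the earlier formula
print(observed(np.pi / 2.0))                 # transverse case: shifted by the factor gamma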
Hyperbolic geometry.
Associated to the relativistic velocity formula_66 of an object is a quantity formula_67 whose norm is called rapidity. These are related through
formula_68
where the vector formula_69 is thought of as being Cartesian coordinates on a 3-dimensional subspace of the Lie algebra formula_70 of the Lorentz group spanned by the boost generators formula_71. This space, call it "rapidity space", is isomorphic to ℝ3 as a vector space, and is mapped to the open unit ball,
formula_72, "velocity space", via the above relation. The addition law on collinear form coincides with the law of addition of hyperbolic tangents
formula_73
with
formula_74
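Numerically, the equivalence between the relativistic addition of collinear speeds and the ordinary addition of rapidities is immediate (arbitrary illustrative speeds, c = 1):

import numpy as np

v, u_prime = 0.6, 0.7                          # collinear speeds in units of c
u = (v + u_prime) / (1.0 + v * u_prime)        # relativistic sum

zeta_v, zeta_u = np.arctanh(v), np.arctanh(u_prime)   # rapidities add linearly
print(u, np.tanh(zeta_v + zeta_u))             # both about 0.9155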
The line element in velocity space formula_75 follows from the expression for "relativistic relative velocity" in any frame,
formula_76
where the speed of light is set to unity so that formula_77 and formula_78 agree. In this expression, formula_79 and formula_80 are velocities of two objects in any one given frame. The quantity formula_81 is the speed of one or the other object "relative" to the other object as seen "in the given frame". The expression is Lorentz invariant, i.e. independent of which frame is the given frame, but the quantity it calculates is "not". For instance, if the given frame is the rest frame of object one, then formula_82.
The line element is found by putting formula_83 or equivalently formula_84,
formula_85
with "θ" and φ the usual spherical angle coordinates for formula_66 taken in the "z"-direction. Now introduce ζ through
formula_86
and the line element on rapidity space formula_87 becomes
formula_88
Relativistic particle collisions.
In scattering experiments the primary objective is to measure the invariant scattering cross section. This enters the formula for scattering of two particle types into a final state formula_89 assumed to have two or more particles,
formula_90
or, in most textbooks,
formula_91
where
formula_92 is the spacetime volume element in which the reactions are counted,
formula_93 is the number of reactions resulting in final state formula_89 in that volume,
formula_94 is the number of such reactions per unit spacetime volume, or reaction rate,
formula_95 is the incident flux,
formula_96 is the scattering cross section,
formula_97 are the particle densities of the two incident beams, and
formula_98 is the relative speed of the two beams (in its non-relativistic form).
The objective is to find a correct expression for "relativistic relative speed" formula_99 and an invariant expression for the incident flux.
Non-relativistically, one has for relative speed formula_100. If the system in which velocities are measured is the rest frame of particle type formula_101, it is required that formula_102 Setting the speed of light formula_103, the expression for formula_99 follows immediately from the formula for the norm (second formula) in the "general configuration" as
formula_104
The formula reduces in the classical limit to formula_105 as it should, and gives the correct result in the rest frames of the particles. The relative velocity is "incorrectly given" in most, perhaps "all" books on particle physics and quantum field theory. This is mostly harmless, since if either one particle type is stationary or the relative motion is collinear, then the right result is obtained from the incorrect formulas. The formula is invariant, but not manifestly so. It can be rewritten in terms of four-velocities as
formula_106
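For crossing beams the distinction between the naive difference of velocities and the invariant relative speed is easy to see numerically; the velocities below are purely illustrative (c = 1).

import numpy as np

v1 = np.array([0.9, 0.0, 0.0])      # beam 1
v2 = np.array([0.0, 0.9, 0.0])      # beam 2, crossing at 90 degrees

cross = np.cross(v1, v2)
num = np.dot(v1 - v2, v1 - v2) - np.dot(cross, cross)

v_rel = np.sqrt(num) / (1.0 - np.dot(v1, v2))   # invariant relative speed
v_bar = np.sqrt(num)                            # Møller factor; equals v_rel here since v1·v2 = 0
v_naive = np.linalg.norm(v1 - v2)               # non-relativistic expression

print(v_rel)    # about 0.98, below 1 as a physical speed must be
print(v_bar)
print(v_naive)  # about 1.27, exceeding 1 and therefore not a physical speed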
The correct expression for the flux, published by Christian Møller in 1945, is given by
formula_107
One notes that for collinear velocities, formula_108. In order to get a "manifestly" Lorentz invariant expression one writes formula_109 with formula_110, where formula_111 is the density in the rest frame, for the individual particle fluxes and arrives at
formula_112
In the literature the quantity formula_113 as well as formula_81 are both referred to as the relative velocity. In some cases (statistical physics and dark matter literature), formula_113 is referred to as the "Møller velocity", in which case formula_81 means relative velocity. The true relative velocity is at any rate formula_99. The discrepancy between formula_99 and formula_81 is relevant, though in most cases velocities are collinear. At the LHC the crossing angle is small, around 300 "μ"rad, but at the old Intersecting Storage Rings at CERN, it was about 18°.
Remarks.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{u}"
},
{
"math_id": 1,
"text": "\\mathbf{u'}"
},
{
"math_id": 2,
"text": " \\mathbf{u} = \\mathbf{v} + \\mathbf{u'}."
},
{
"math_id": 3,
"text": "u'"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": " u = {v+u'\\over 1+(vu'/c^2)} . "
},
{
"math_id": 6,
"text": " {c-u \\over c+u} = \\left({c-u' \\over c+u'}\\right)\\left({c-v \\over c+v}\\right)."
},
{
"math_id": 7,
"text": "\\gamma_{_v} = 1/\\sqrt{1-v^2/c^2}"
},
{
"math_id": 8,
"text": "dx = \\gamma_{_v}(dx' + vdt'), \\quad dy = dy', \\quad dz = dz', \\quad dt = \\gamma_{_v}\\left(dt' + \\frac{v}{c^2}dx'\\right)."
},
{
"math_id": 9,
"text": "\\frac{dx}{dt} = \\frac{\\gamma_{_v}(dx' + vdt')}{\\gamma_{_v}(dt' + \\frac{v}{c^2}dx')},\n \\quad \\frac{dy}{dt} = \\frac{dy'}{\\gamma_{_v}(dt' + \\frac{v}{c^2}dx')}, \\quad \\frac{dz}{dt} = \\frac{dz'}{\\gamma_{_v}(dt' + \\frac{v}{c^2}dx')},"
},
{
"math_id": 10,
"text": "u_x = \\frac{dx}{dt} = \\frac{\\frac{dx'}{dt'} + v}{(1 + \\frac{v}{c^2}\\frac{dx'}{dt'})},\n\\quad u_y = \\frac{dy}{dt} = \\frac{\\frac{dy'}{dt'}}{\\gamma_{_v} \\ (1 + \\frac{v}{c^2}\\frac{dx'}{dt'})},\n\\quad u_z = \\frac{dz}{dt} = \\frac{\\frac{dz'}{dt'}}{\\gamma_{_v} \\ (1 + \\frac{v}{c^2}\\frac{dx'}{dt'})},"
},
{
"math_id": 11,
"text": "u_x = \\frac{u_x' + v}{1 + \\frac{v}{c^2}u_x'}, \\quad u_x' = \\frac{u_x - v}{1 - \\frac{v}{c^2}u_x},"
},
{
"math_id": 12,
"text": "u_y = \\frac{u_y'\\sqrt{1-\\frac{v^2}{c^2}}}{1 + \\frac{v}{c^2}u_x'}, \\quad u_y' = \\frac{u_y\\sqrt{1-\\frac{v^2}{c^2}}}{1 - \\frac{v}{c^2}u_x},"
},
{
"math_id": 13,
"text": "u_z = \\frac{u_z'\\sqrt{1-\\frac{v^2}{c^2}}}{1 + \\frac{v}{c^2}u_x'}, \\quad u_z' = \\frac{u_z\\sqrt{1-\\frac{v^2}{c^2}}}{1 - \\frac{v}{c^2}u_x},"
},
{
"math_id": 14,
"text": "u_x = u\\cos \\theta, u_y = u\\sin \\theta,\\quad u_x' = u'\\cos \\theta', \\quad u_y' = u'\\sin \\theta',"
},
{
"math_id": 15,
"text": "u = \\frac{\\sqrt{u'^2 +v^2+2vu'\\cos \\theta' - \\left(\\frac{vu'\\sin\\theta'}{c}\\right)^2}}{1 + \\frac{v}{c^2}u'\\cos \\theta'},"
},
{
"math_id": 16,
"text": "\\tan \\theta = \\frac{u_y}{u_x} = \\frac{\\sqrt{1-\\frac{v^2}{c^2}}u_y'}{u_x' + v} = \\frac{\\sqrt{1-\\frac{v^2}{c^2}}u'\\sin \\theta'}{u'\\cos \\theta' + v}."
},
{
"math_id": 17,
"text": "\\begin{align}\nu &= \\sqrt{u_x^2 + u_y^2} = \\frac{\\sqrt{(u_x'+v)^2 + (1-\\frac{v^2}{c^2})u_y'^2}}{1 + \\frac{v}{c^2}u_x'}\n= \\frac{\\sqrt{u_x'^2+v^2+2u_x'v + (1-\\frac{v^2}{c^2})u_y'^2}}{1 + \\frac{v}{c^2}u_x'}\\\\\n&=\\frac{\\sqrt{u'^2 \\cos^2 \\theta'+v^2+2vu'\\cos \\theta' + u'^2\\sin^2\\theta' - \\frac{v^2}{c^2}u'^2\\sin^2\\theta'}}{1 + \\frac{v}{c^2}u_x'}\\\\\n&=\\frac{\\sqrt{u'^2 +v^2+2vu'\\cos \\theta' - (\\frac{vu'\\sin\\theta'}{c})^2}}{1 + \\frac{v}{c^2}u'\\cos \\theta'}\n\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align}\n\\mathbf{v} &= (v_1, v_2, v_3) = (V_1/V_0, 0, 0),\\\\\n\\mathbf{u}' &= (u'_1, u'_2, u'_3) = (U'_1/U'_0, U'_2/U'_0, 0)\n\\end{align}"
},
{
"math_id": 19,
"text": " V_0^2 - V_1^2 = 1,"
},
{
"math_id": 20,
"text": " V_0 = 1/\\sqrt{1-v_1^2} \\ = \\gamma, \\quad V_1 = v_1/\\sqrt{1-v_1^2} = v_1 \\gamma ."
},
{
"math_id": 21,
"text": "\n\\begin{pmatrix} \\gamma & v_1 \\gamma & 0 & 0 \\\\ v_1 \\gamma & \\gamma & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\end{pmatrix}\n"
},
{
"math_id": 22,
"text": "\\begin{align} U_0 &= V_0 U'_0 + V_1 U'_1,\\\\\nU_1 &= V_1 U'_0 + V_0 U'_1,\\\\\nU_2 &= U'_2,\\\\\nU_3 &= U'_3.\\end{align} "
},
{
"math_id": 23,
"text": "\\begin{align} u_1 &= { v_1 + u'_1 \\over 1 + v_1 u'_1 },\\\\\nu_2 &= { u'_2 \\over (1 + v_1 u'_1) }{ 1 \\over V_0 } = { u'_2 \\over 1 + v_1 u'_1 } \\sqrt{1 - v_1^2},\\\\\nu_3 &= 0\\end{align} "
},
{
"math_id": 24,
"text": "\\mathbf{u} = \\mathbf{u}_\\parallel + \\mathbf{u}_\\perp,\\quad \\mathbf{u}' = \\mathbf{u}'_\\parallel + \\mathbf{u}'_\\perp ,"
},
{
"math_id": 25,
"text": "\\mathbf{u}_\\parallel = u_x \\mathbf{e}_x,\\quad \\mathbf{u}_\\perp = u_y \\mathbf{e}_y + u_z \\mathbf{e}_z ,\\quad \\mathbf{v} = v\\mathbf{e}_x,"
},
{
"math_id": 26,
"text": "\\mathbf u_\\parallel = \\frac{\\mathbf u_\\parallel' + \\mathbf v}{1 + \\frac{\\mathbf v \\cdot \\mathbf u_\\parallel'}{c^2}},\n\\quad \\mathbf u_\\perp = \\frac{\\sqrt{1-\\frac{v^2}{c^2}}\\mathbf u_\\perp'}{1 + \\frac{\\mathbf v\\cdot \\mathbf u_\\parallel'}{c^2}}.\n"
},
{
"math_id": 27,
"text": "\\mathbf{u} = \\mathbf u_\\parallel + \\mathbf u_\\perp = \\frac{1}{1+\\frac{\\mathbf{v}\\cdot\\mathbf{u}'}{c^{2}}}\\left[\\alpha_v\\mathbf{u}'+ \\mathbf{v} + (1-\\alpha_v)\\frac{(\\mathbf{v}\\cdot\\mathbf{u}')}{v^{2}}\\mathbf{v}\\right] \\equiv \\mathbf v \\oplus \\mathbf u',"
},
{
"math_id": 28,
"text": "\\begin{align}\n\\frac{\\mathbf u'_\\parallel + \\mathbf v}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}} + \\frac{\\alpha_v \\mathbf u'_\\perp}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}} &= \\frac{\\mathbf v + \\frac{\\mathbf v \\cdot \\mathbf u'}{v^2}\\mathbf v}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\n+ \\frac{\\alpha_v \\mathbf u' - \\alpha_v\\frac{\\mathbf v \\cdot \\mathbf u'}{v^2}\\mathbf v}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\\\\n\n&=\\frac{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{v^2}(1 - \\alpha_v)}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf v +\n\\alpha_v\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf u'\\\\\n\n&=\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf v +\n\\alpha_v\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf u' +\n\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\frac{\\mathbf v \\cdot \\mathbf u'}{v^2}(1 - \\alpha_v)\\mathbf v\\\\\n\n&=\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf v +\n\\alpha_v\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf u' +\n\\frac{1}{c^2}\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\frac{\\mathbf v \\cdot \\mathbf u'}{v^2/c^2}(1 - \\alpha_v)\\mathbf v\\\\\n\n&=\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf v +\n\\alpha_v\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\mathbf u' +\n\\frac{1}{c^2}\\frac{1}{1 + \\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}}\\frac{\\mathbf v \\cdot \\mathbf u'}{(1-\\alpha_v)(1+\\alpha_v)}(1 - \\alpha_v)\\mathbf v\\\\\n\n&= \\frac{1}{1+\\frac{\\mathbf{v}\\cdot\\mathbf{u}'}{c^{2}}}\\left[\\alpha_v\\mathbf{u}'+ \\mathbf{v} + (1-\\alpha_v)\\frac{(\\mathbf{v}\\cdot\\mathbf{u}')}{v^{2}}\\mathbf{v}\\right].\n\n\\end{align}\n"
},
{
"math_id": 29,
"text": "\\mathbf{u}'_\\parallel = \\frac{\\mathbf{v} \\cdot \\mathbf{u}'}{v^2}\\mathbf v,"
},
{
"math_id": 30,
"text": "\\mathbf{u}'_\\perp = - \\frac{\\mathbf{v} \\times (\\mathbf{v} \\times \\mathbf{u}')}{v^2}."
},
{
"math_id": 31,
"text": "\\mathbf u = \\frac{\\mathbf u_\\parallel' + \\mathbf v}{1 + \\frac{\\mathbf v \\cdot \\mathbf u_\\parallel'}{c^2}}\n+\\frac{\\sqrt{1-\\frac{v^2}{c^2}}(\\mathbf u - \\mathbf u_\\parallel')}{1 + \\frac{\\mathbf v\\cdot \\mathbf u_\\parallel'}{c^2}},\n"
},
{
"math_id": 32,
"text": "\\alpha_v"
},
{
"math_id": 33,
"text": "\\gamma_v"
},
{
"math_id": 34,
"text": "\\begin{align}\n\\mathbf v \\oplus \\mathbf u' \\equiv \\mathbf u &=\\frac{1}{1 + \\frac{\\mathbf u' \\cdot \\mathbf v}{c^2}}\\left[\\mathbf v +\n\\frac{\\mathbf u'}{\\gamma_v} + \\frac{1}{c^2}\\frac{\\gamma_v}{1+\\gamma_v}(\\mathbf u' \\cdot \\mathbf v)\\mathbf v\\right]\\\\\n&= \\frac{1}{1 + \\frac{\\mathbf u' \\cdot \\mathbf v}{c^2}}\\left[\\mathbf v + \\mathbf u' +\n\\frac{1}{c^2}\\frac{\\gamma_v}{1+\\gamma_v} \\mathbf v \\times(\\mathbf v \\times \\mathbf u')\\right],\n\\end{align}"
},
{
"math_id": 35,
"text": "\\begin{align}\n\\mathbf v \\oplus \\mathbf u \\equiv \\mathbf u' &=\\frac{1}{1 - \\frac{\\mathbf u \\cdot \\mathbf v}{c^2}}\\left[\\frac{\\mathbf u}{\\gamma_v} - \\mathbf v + \\frac{1}{c^2}\\frac{\\gamma_v}{1+\\gamma_v}(\\mathbf u \\cdot \\mathbf v)\\mathbf v\\right]\\\\\n&= \\frac{1}{1 - \\frac{\\mathbf u \\cdot \\mathbf v}{c^2}}\\left[ \\mathbf u - \\mathbf v +\n\\frac{1}{c^2}\\frac{\\gamma_v}{1+\\gamma_v} \\mathbf v \\times(\\mathbf v \\times \\mathbf u)\\right]\n\\end{align}"
},
{
"math_id": 36,
"text": "(\\lambda \\mathbf{v}) \\oplus (\\lambda \\mathbf{u}) \\neq \\lambda (\\mathbf{v} \\oplus \\mathbf{u}) , "
},
{
"math_id": 37,
"text": "(-\\mathbf{v}) \\oplus (-\\mathbf{u}) = - (\\mathbf{v} \\oplus \\mathbf{u}) , "
},
{
"math_id": 38,
"text": "\\mathbf v \\oplus \\mathbf u \\ne \\mathbf u \\oplus \\mathbf v, "
},
{
"math_id": 39,
"text": "\\mathbf v \\oplus (\\mathbf u \\oplus \\mathbf w) \\ne (\\mathbf v \\oplus \\mathbf u) \\oplus \\mathbf w. "
},
{
"math_id": 40,
"text": "| \\mathbf u |^2 \\equiv |\\mathbf v \\oplus \\mathbf u'|^2 = \\frac{1}{\\left(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}\\right)^2}\\left[\\left(\\mathbf v + \\mathbf u' \\right)^2 - \\frac{1}{c^2}\\left(\\mathbf v \\times \\mathbf u'\\right)^2 \\right] = |\\mathbf u' \\oplus \\mathbf v|^2."
},
{
"math_id": 41,
"text": "| \\mathbf u' |^2 \\equiv |\\mathbf v \\oplus \\mathbf u|^2 = \\frac{1}{\\left(1-\\frac{\\mathbf v \\cdot \\mathbf u}{c^2}\\right)^2}\\left[\\left(\\mathbf u - \\mathbf v \\right)^2 - \\frac{1}{c^2}\\left(\\mathbf v \\times \\mathbf u\\right)^2 \\right] = |\\mathbf u \\oplus \\mathbf v|^2."
},
{
"math_id": 42,
"text": "\\begin{align}\n&\\left(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}\\right)^2|\\mathbf v \\oplus \\mathbf u'|^2\\\\\n&= \\left[ \\mathbf v + \\mathbf u' + \\frac{1}{c^2}\\frac{\\gamma_v}{1+\\gamma_v}\\mathbf v \\times(\\mathbf v \\times \\mathbf u')\\right]^2\\\\\n\n&=(\\mathbf v + \\mathbf u')^2 +2\\frac{1}{c^2} \\frac{\\gamma_v}{\\gamma_v + 1} \\left[(\\mathbf v \\cdot \\mathbf u')^2-(\\mathbf v\\cdot\\mathbf v)(\\mathbf u'\\cdot\\mathbf u')\\right] + \\frac{1}{c^4}\\left(\\frac{\\gamma_v}{\\gamma_v + 1}\\right)^2\\left[(\\mathbf v\\cdot\\mathbf v)^2(\\mathbf u' \\cdot \\mathbf u') - (\\mathbf v\\cdot \\mathbf u')^2(\\mathbf v\\cdot \\mathbf v)\\right]\\\\\n\n&=(\\mathbf v + \\mathbf u')^2 +2\\frac{1}{c^2} \\frac{\\gamma_v}{\\gamma_v + 1} \\left[(\\mathbf v \\cdot \\mathbf u')^2-(\\mathbf v\\cdot\\mathbf v)(\\mathbf u'\\cdot\\mathbf u')\\right]\n+ \\frac{v^2}{c^4}\\left(\\frac{\\gamma_v}{\\gamma_v + 1}\\right)^2\\left[(\\mathbf v\\cdot\\mathbf v)(\\mathbf u' \\cdot \\mathbf u') - (\\mathbf v\\cdot \\mathbf u')^2\\right]\\\\\n\n&=(\\mathbf v + \\mathbf u')^2 +2\\frac{1}{c^2} \\frac{\\gamma_v}{\\gamma_v + 1} \\left[(\\mathbf v \\cdot \\mathbf u')^2-(\\mathbf v\\cdot\\mathbf v)(\\mathbf u'\\cdot\\mathbf u')\\right]\n+ \\frac{(1-\\alpha_v)(1+\\alpha_v)}{c^2}\\left(\\frac{\\gamma_v}{\\gamma_v + 1}\\right)^2\\left[(\\mathbf v\\cdot\\mathbf v)(\\mathbf u' \\cdot \\mathbf u') - (\\mathbf v\\cdot \\mathbf u')^2\\right]\\\\\n\n&=(\\mathbf v + \\mathbf u')^2 +2\\frac{1}{c^2} \\frac{\\gamma_v}{\\gamma_v + 1} \\left[(\\mathbf v \\cdot \\mathbf u')^2-(\\mathbf v\\cdot\\mathbf v)(\\mathbf u'\\cdot\\mathbf u')\\right]\n+ \\frac{(\\gamma_v-1)}{c^2(\\gamma_v + 1)}\\left[(\\mathbf v\\cdot\\mathbf v)(\\mathbf u' \\cdot \\mathbf u') - (\\mathbf v\\cdot \\mathbf u')^2\\right]\\\\\n\n&=(\\mathbf v + \\mathbf u')^2 +2\\frac{1}{c^2} \\frac{\\gamma_v}{\\gamma_v + 1} \\left[(\\mathbf v \\cdot \\mathbf u')^2-(\\mathbf v\\cdot\\mathbf v)(\\mathbf u'\\cdot\\mathbf u')\\right]\n+ \\frac{(1-\\gamma_v)}{c^2(\\gamma_v + 1)}\\left[(\\mathbf v\\cdot \\mathbf u')^2 - (\\mathbf v\\cdot\\mathbf v)(\\mathbf u' \\cdot \\mathbf u')\\right] \\\\\n\n&=(\\mathbf v + \\mathbf u')^2 +\\frac{1}{c^2} \\frac{\\gamma_v+1}{\\gamma_v + 1} \\left[(\\mathbf v \\cdot \\mathbf u')^2-(\\mathbf v\\cdot\\mathbf v)(\\mathbf u'\\cdot\\mathbf u')\\right]\\\\\n\n&=(\\mathbf v + \\mathbf u')^2 -\\frac{1}{c^2} |\\mathbf v \\times \\mathbf u'|^2\n\n\\end{align}"
},
{
"math_id": 43,
"text": "\\gamma_u = \\gamma_{\\mathbf v \\oplus \\mathbf u'} =\\left[ 1 - \\frac{1}{c^2}\\frac{1}{(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\n\\left( (\\mathbf v + \\mathbf u')^2 - \\frac{1}{c^2}(v^2u'^2 - (\\mathbf v \\cdot \\mathbf u')^2)\\right)\\right]^{-\\frac{1}{2}}=\\gamma_v\\gamma_u'\\left(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}\\right), \\quad \\quad \\gamma_u' = \\gamma_v\\gamma_u\\left(1-\\frac{\\mathbf v \\cdot \\mathbf u}{c^2}\\right)"
},
{
"math_id": 44,
"text": "\\begin{align}\n\\gamma_{\\mathbf v \\oplus \\mathbf u'} &= \\left[ \\frac{c^3(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}{c^2(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\n - \\frac{1}{c^2}\\frac{ (\\mathbf v + \\mathbf u')^2 - \\frac{1}{c^2}(v^2u'^2 - (\\mathbf v \\cdot \\mathbf u')^2)}{(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\\right]^{-\\frac{1}{2}}\\\\\n\n&=\\left[ \\frac{c^2(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2 - (\\mathbf v + \\mathbf u')^2 + \\frac{1}{c^2}(v^2u'^2 - (\\mathbf v \\cdot \\mathbf u')^2)}{c^2(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\n\\right]^{-\\frac{1}{2}}\\\\\n\n&=\\left[ \\frac{c^2(1+2\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2} + \\frac{(\\mathbf v \\cdot \\mathbf u')^2}{c^4}) - v^2 - u'^2 - 2(\\mathbf v \\cdot \\mathbf u') + \\frac{1}{c^2}(v^2u'^2 - (\\mathbf v \\cdot \\mathbf u')^2)}{c^2(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\n\\right]^{-\\frac{1}{2}}\\\\\n\n&=\\left[ \\frac{1+2\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2} + \\frac{(\\mathbf v \\cdot \\mathbf u')^2}{c^4} - \\frac{v^2}{c^2} - \\frac{u'^2}{c^2} - \\frac{2}{c^2}(\\mathbf v \\cdot \\mathbf u') + \\frac{1}{c^4}(v^2u'^2 - (\\mathbf v \\cdot \\mathbf u')^2)}{(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\n\\right]^{-\\frac{1}{2}}\\\\\n\n&=\\left[ \\frac{1 + \\frac{(\\mathbf v \\cdot \\mathbf u')^2}{c^4} - \\frac{v^2}{c^2} - \\frac{u'^2}{c^2} + \\frac{1}{c^4}(v^2u'^2 - (\\mathbf v \\cdot \\mathbf u')^2)}{(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2})^2}\n\\right]^{-\\frac{1}{2}}\\\\\n\n&=\\left[ \\frac{\\left(1-\\frac{v^2}{c^2}\\right)\\left(1-\\frac{u'^2}{c^2}\\right)}{\\left(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}\\right)^2}\n\\right]^{-\\frac{1}{2}}\n=\\left[ \\frac{1}{\\gamma_v^2\\gamma_u'^2\\left(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}\\right)^2}\n\\right]^{-\\frac{1}{2}}\\\\\n\n&=\\gamma_v\\gamma_u' \\left(1+\\frac{\\mathbf v \\cdot \\mathbf u'}{c^2}\\right)\n\\end{align}"
},
{
"math_id": 45,
"text": "| \\mathbf{v_{rel}} |^2 =\\frac{1}{(1 - \\mathbf{v_1}\\cdot\\mathbf{v_2})^2}\\left[(\\mathbf{v_1}-\\mathbf{v_2})^2 - (\\mathbf{v_1} \\times \\mathbf{v_2})^2\\right] "
},
{
"math_id": 46,
"text": "\\mathbf{u}\\oplus\\mathbf{v} = \\frac{1}{1+\\frac{\\mathbf{u}\\cdot\\mathbf{v}}{c^2}}\\left[\\mathbf{v}+\\mathbf{u}+\\frac{1}{c^2}\\frac{\\gamma_\\mathbf{u}}{\\gamma_\\mathbf{u}+1}\\mathbf{u}\\times(\\mathbf{u}\\times\\mathbf{v})\\right] "
},
{
"math_id": 47,
"text": "\\mathbf{u}*\\mathbf{v}=\\frac{1}{1+\\frac{\\mathbf{u}\\cdot\\mathbf{v}}{c^2}}\\left[\\mathbf{v}+\\mathbf{u}+\\frac{1}{c^2}\\frac{\\gamma_\\mathbf{u}}{\\gamma_\\mathbf{u}+1}\\mathbf{u}\\times(\\mathbf{u}\\times\\mathbf{v})\\right] "
},
{
"math_id": 48,
"text": "\\mathbf{w}\\circ\\mathbf{v}=\\frac{1}{1+\\frac{\\mathbf{v}\\cdot\\mathbf{w}}{c^{2}}}\\left[\\frac{\\mathbf{w}}{\\gamma_\\mathbf{v}}+\\mathbf{v}+\\frac{1}{c^{2}}\\frac{\\gamma_\\mathbf{v}}{\\gamma_\\mathbf{v}+1}(\\mathbf{w}\\cdot\\mathbf{v})\\mathbf{v}\\right]"
},
{
"math_id": 49,
"text": "\\begin{align}\nc_m &= \\frac{V + c_m'}{1 + \\frac{Vc_m'}{c^2}} = \\frac{V + \\frac{c}{n}}{1 + \\frac{Vc}{nc^2}} = \\frac{c}{n} \\frac{1 + \\frac{nV}{c}}{1 + \\frac{V}{nc}}\\\\\n& = \\frac{c}{n} \\left(1 + \\frac{nV}{c}\\right) \\frac{1}{1 + \\frac{V}{nc}} =\n\\left(\\frac{c}{n} + V\\right) \\left(1 - \\frac{V}{nc} + \\left(\\frac{V}{nc}\\right)^2 - \\cdots\\right).\n\\end{align}"
},
{
"math_id": 50,
"text": "c_m = \\frac{c}{n} + V\\left(1 - \\frac{1}{n^2} - \\frac{V}{nc} + \\cdots\\right)."
},
{
"math_id": 51,
"text": "\\tan \\theta = \\frac{\\sqrt{1-\\frac{V^2}{c^2}}c\\sin \\theta'}{c\\cos \\theta' + V}\n= \\frac{\\sqrt{1-\\frac{V^2}{c^2}}\\sin \\theta'}{\\cos \\theta' + \\frac{V}{c}}."
},
{
"math_id": 52,
"text": "\\sin \\theta =\\frac{\\sqrt{1-\\frac{V^2}{c^2}}\\sin \\theta'}{1+\\frac{V}{c}\\cos \\theta'},"
},
{
"math_id": 53,
"text": "\\begin{align}\\frac{v_y}{v}\n&= \\frac{\\frac{\\sqrt{1-\\frac{V^2}{c^2}}v_y'}{1 + \\frac{V}{c^2}v_x'}}{\\frac{\\sqrt{v'^2 +V^2+2Vv'\\cos \\theta' - (\\frac{Vv'\\sin\\theta'}{c})^2}}{1 + \\frac{V}{c^2}v'\\cos \\theta'}}\\\\\n\n&= \\frac{c\\sqrt{1-\\frac{V^2}{c^2}}\\sin \\theta'}{\\sqrt{c^2 +V^2+2Vc\\cos \\theta' - V^2\\sin^2\\theta'}}\\\\\n\n&= \\frac{c\\sqrt{1-\\frac{V^2}{c^2}}\\sin \\theta'}{\\sqrt{c^2 +V^2+2Vc\\cos \\theta' - V^2(1 - \\cos^2\\theta')}}\n= \\frac{c\\sqrt{1-\\frac{V^2}{c^2}}\\sin \\theta'}{\\sqrt{c^2 +2Vc\\cos \\theta' + V^2\\cos^2\\theta'}}\\\\\n\n&= \\frac{\\sqrt{1-\\frac{V^2}{c^2}}\\sin \\theta'}{1+\\frac{V}{c}\\cos \\theta'},\n\\end{align}"
},
{
"math_id": 54,
"text": "\\cos \\theta = \\frac{\\frac{V}{c} + \\cos \\theta'}{1+\\frac{V}{c}\\cos \\theta'},"
},
{
"math_id": 55,
"text": "\\begin{align}\\sin \\theta - \\sin \\theta' &= \\sin \\theta'\\left(\\frac{\\sqrt{1 - \\frac{V^2}{c^2}}}{1 + \\frac{V}{c} \\cos \\theta'} - 1\\right)\\\\\n&\\approx \\sin \\theta'\\left(1 -\\frac{V}{c} \\cos \\theta' - 1\\right) = -\\frac{V}{c}\\sin\\theta'\\cos\\theta',\\end{align}"
},
{
"math_id": 56,
"text": "\\sin \\theta' - \\sin \\theta = 2\\sin \\frac{1}{2}(\\theta'-\\theta)\\cos\\frac{1}{2}(\\theta + \\theta') \\approx (\\theta' - \\theta)\\cos\\theta', "
},
{
"math_id": 57,
"text": "\\Delta \\theta \\equiv \\theta' - \\theta = \\frac{V}{c}\\sin \\theta',"
},
{
"math_id": 58,
"text": "\\lambda = -sT + VT = (-s + V)T"
},
{
"math_id": 59,
"text": "s = s' = -c,"
},
{
"math_id": 60,
"text": "\\nu = {-s \\over \\lambda} = {-s \\over (V-s)T} = {c \\over (V+c)\\gamma_{_V} T'}\n= \\nu'\\frac{c\\sqrt{1 - {V^2 \\over c^2}}}{c+V} = \\nu'\\sqrt{\\frac{1-\\beta}{1+\\beta}}\\,."
},
{
"math_id": 61,
"text": "s = \\frac{s' + V}{1+{s'V\\over c^2}}."
},
{
"math_id": 62,
"text": "\\tau= {1 \\over \\gamma_{_V}\\nu'}\\left(\\frac{1}{1+{V\\over s'}}\\right), \\quad \\nu = \\gamma_{_V}\\nu'\\left(1+{V\\over s'}\\right)"
},
{
"math_id": 63,
"text": "\\begin{align}{L\\over -s} &= \\frac{\\left(\\frac{-s'-V}{1+{s'V\\over c^2}} + V\\right) T} {\\frac{-s'-V}{1+{s'V\\over c^2}}}\\\\\n\n&={\\gamma_{_V} \\over \\nu'}\\frac{-s'-V + V(1+{s'V\\over c^2})}{-s'-V}\\\\\n&={\\gamma_{_V} \\over \\nu'}\\left(\\frac{s'\\left(1-{V^2\\over c^2}\\right)}{s'+V}\\right)\\\\\n&={\\gamma_{_V} \\over \\nu'}\\left(\\frac{s'\\gamma^{-2}}{s'+V}\\right)\\\\\n&={1 \\over \\gamma_{_V}\\nu'}\\left(\\frac{1}{1+{V\\over s'}}\\right).\\\\\n\\end{align}"
},
{
"math_id": 64,
"text": "\\nu = \\nu'\\gamma_{_V}(1-\\beta) = \\nu'\\frac{1-\\beta}{\\sqrt{1-\\beta}\\sqrt{1+\\beta}}=\\nu'\\sqrt{\\frac{1-\\beta}{1+\\beta}}\\,."
},
{
"math_id": 65,
"text": "\\nu = \\gamma_{_V}\\nu' \\left(1+\\frac{V}{s'}\\cos\\theta\\right),"
},
{
"math_id": 66,
"text": "\\boldsymbol \\beta"
},
{
"math_id": 67,
"text": "\\boldsymbol{\\zeta}"
},
{
"math_id": 68,
"text": "\\mathfrak{so}(3,1) \\supset \\mathrm{span}\\{K_1, K_2, K_3\\} \\approx \\mathbb{R}^3 \\ni \\boldsymbol{\\zeta} = \\boldsymbol{\\hat{\\beta}} \\tanh^{-1}\\beta, \\quad \\boldsymbol{\\beta} \\in \\mathbb{B}^3,"
},
{
"math_id": 69,
"text": "\\boldsymbol \\zeta"
},
{
"math_id": 70,
"text": "\\mathfrak{so}(3, 1)"
},
{
"math_id": 71,
"text": "K_1, K_2, K_3"
},
{
"math_id": 72,
"text": " \\mathbb B^3"
},
{
"math_id": 73,
"text": "\\tanh(\\zeta_v + \\zeta_{u'}) = {\\tanh \\zeta_v + \\tanh \\zeta_{u'} \\over 1+ \\tanh \\zeta_v \\tanh \\zeta_{u'}}"
},
{
"math_id": 74,
"text": "\\frac{v}{c} = \\tanh \\zeta_v \\ , \\quad \\frac{u'}{c} = \\tanh \\zeta_{u'} \\ , \\quad\\, \\frac{u}{c} = \\tanh(\\zeta_v + \\zeta_{u'})."
},
{
"math_id": 75,
"text": "\\mathbb B^3"
},
{
"math_id": 76,
"text": "v_{r} = \\frac{\\sqrt{(\\mathbf{v_1}-\\mathbf{v_2})^2 - (\\mathbf{v_1} \\times \\mathbf{v_2})^2}}{1 - \\mathbf{v_1}\\cdot\\mathbf{v_2}},"
},
{
"math_id": 77,
"text": "v_i"
},
{
"math_id": 78,
"text": "\\beta_i"
},
{
"math_id": 79,
"text": "\\mathbf v_1"
},
{
"math_id": 80,
"text": "\\mathbf v_2"
},
{
"math_id": 81,
"text": "v_r"
},
{
"math_id": 82,
"text": "v_r = v_2"
},
{
"math_id": 83,
"text": "\\mathbf v_2 = \\mathbf v_1 + d\\mathbf v"
},
{
"math_id": 84,
"text": "\\boldsymbol \\beta_2 = \\boldsymbol \\beta_1 + d\\boldsymbol \\beta"
},
{
"math_id": 85,
"text": "dl_\\boldsymbol{\\beta}^2 = \\frac{d\\boldsymbol \\beta^2 - (\\boldsymbol \\beta \\times d\\boldsymbol \\beta)^2}{(1-\\beta^2)^2} = \\frac{d\\beta^2}{(1-\\beta^2)^2} + \\frac{\\beta^2}{1-\\beta^2}(d\\theta^2 + \\sin^2\\theta d\\varphi^2),"
},
{
"math_id": 86,
"text": "\\zeta = |\\boldsymbol \\zeta| = \\tanh^{-1}\\beta,"
},
{
"math_id": 87,
"text": "\\mathbb R^3"
},
{
"math_id": 88,
"text": "dl_{\\boldsymbol \\zeta}^2 = d\\zeta^2 + \\sinh^2\\zeta(d\\theta^2 + \\sin^2\\theta d\\varphi^2)."
},
{
"math_id": 89,
"text": "f"
},
{
"math_id": 90,
"text": "dN_f = R_f \\, dV \\, dt = \\sigma F \\, dV \\, dt"
},
{
"math_id": 91,
"text": "dN_f = \\sigma n_1 n_2 v_r \\, dV \\, dt"
},
{
"math_id": 92,
"text": "dVdt"
},
{
"math_id": 93,
"text": "dN_f"
},
{
"math_id": 94,
"text": "R_f = F\\sigma"
},
{
"math_id": 95,
"text": "F = n_1n_2v_{r}"
},
{
"math_id": 96,
"text": "\\sigma"
},
{
"math_id": 97,
"text": "n_1, n_2 "
},
{
"math_id": 98,
"text": "v_{r} = |\\mathbf v_2 - \\mathbf v_1|"
},
{
"math_id": 99,
"text": "v_{rel}"
},
{
"math_id": 100,
"text": "v_r = |\\mathbf v_2 - \\mathbf v_1|"
},
{
"math_id": 101,
"text": "1"
},
{
"math_id": 102,
"text": "v_{rel} = v_r = |\\mathbf v_2|."
},
{
"math_id": 103,
"text": "c = 1"
},
{
"math_id": 104,
"text": "v_\\text{rel} =\\frac{\\sqrt{(\\mathbf{v_1}-\\mathbf{v_2})^2 - (\\mathbf{v_1} \\times \\mathbf{v_2})^2}}{1 - \\mathbf{v_1}\\cdot\\mathbf{v_2}}."
},
{
"math_id": 105,
"text": "v_r = |\\mathbf v_1 - \\mathbf v_2|"
},
{
"math_id": 106,
"text": "v_\\text{rel} = \\frac{\\sqrt{(u_1 \\cdot u_2)^2 - 1}}{u_1 \\cdot u_2}."
},
{
"math_id": 107,
"text": "F = n_1n_2\\sqrt{(\\mathbf v_1 - \\mathbf v_2)^2 - (\\mathbf v_1 \\times \\mathbf v_2)^2} \\equiv n_1n_2\\bar v."
},
{
"math_id": 108,
"text": "F = n_1n_2|\\mathbf v_2 - \\mathbf v_1| = n_1n_2v_r"
},
{
"math_id": 109,
"text": "J_i = (n_i, n_i\\mathbf v_i)"
},
{
"math_id": 110,
"text": "n_i = \\gamma_i n_i^0"
},
{
"math_id": 111,
"text": "n_i^0"
},
{
"math_id": 112,
"text": "F = (J_1 \\cdot J_2) v_\\text{rel}."
},
{
"math_id": 113,
"text": "\\bar v"
}
] |
https://en.wikipedia.org/wiki?curid=1437696
|
14377820
|
Dirichlet energy
|
A mathematical measure of a function's variability
In mathematics, the Dirichlet energy is a measure of how "variable" a function is. More abstractly, it is a quadratic functional on the Sobolev space "H"1. The Dirichlet energy is intimately connected to Laplace's equation and is named after the German mathematician Peter Gustav Lejeune Dirichlet.
Definition.
Given an open set Ω ⊆ R"n" and a function "u" : Ω → R the Dirichlet energy of the function "u" is the real number
formula_0
where ∇"u" : Ω → R"n" denotes the gradient vector field of the function "u".
Properties and applications.
Since it is the integral of a non-negative quantity, the Dirichlet energy is itself non-negative, i.e. "E"["u"] ≥ 0 for every function "u".
Solving Laplace's equation formula_1 for all formula_2, subject to appropriate boundary conditions, is equivalent to solving the variational problem of finding a function "u" that satisfies the boundary conditions and has minimal Dirichlet energy.
Such a solution is called a harmonic function and such solutions are the topic of study in potential theory.
In a more general setting, where Ω ⊆ R"n" is replaced by any Riemannian manifold "M", and "u" : Ω → R is replaced by "u" : "M" → Φ for another (different) Riemannian manifold Φ, the Dirichlet energy is given by the sigma model. The solutions to the Lagrange equations for the sigma model Lagrangian are those functions "u" that minimize/maximize the Dirichlet energy. Restricting this general case back to the specific case of "u" : Ω → R just shows that the Lagrange equations (or, equivalently, the Hamilton–Jacobi equations) provide the basic tools for obtaining extremal solutions.
|
[
{
"math_id": 0,
"text": "E[u] = \\frac 1 2 \\int_\\Omega \\| \\nabla u(x) \\|^2 \\, dx,"
},
{
"math_id": 1,
"text": "-\\Delta u(x) = 0"
},
{
"math_id": 2,
"text": "x \\in \\Omega"
}
] |
https://en.wikipedia.org/wiki?curid=14377820
|
14381
|
Hamiltonian (quantum mechanics)
|
Quantum operator for the sum of energies of a system
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's "energy spectrum" or its set of "energy eigenvalues", is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by formula_0, where the hat indicates that it is an operator. It can also be written as formula_1 or formula_2.
Introduction.
The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as whether it contains a single particle or several particles, whether the particles interact, the kind of potential energy, and whether the potential is time-varying or time-independent.
Schrödinger Hamiltonian.
One particle.
By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form
formula_3
where
formula_4
is the potential energy operator and
formula_5
is the kinetic energy operator in which formula_6 is the mass of the particle, the dot denotes the dot product of vectors, and
formula_7
is the momentum operator, where formula_8 is the del operator. The dot product of formula_8 with itself is the Laplacian formula_9. In three dimensions using Cartesian coordinates the Laplace operator is
formula_10
Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation:
formula_11
which allows one to apply the Hamiltonian to systems described by a wave function formula_12. This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.
One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.
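One concrete way to work with the Schrödinger Hamiltonian is to discretize it on a grid, where −(ħ²/2m) d²/dx² becomes a tridiagonal matrix and V(x) a diagonal one. The sketch below is a minimal finite-difference version in one dimension (ħ = m = 1; the harmonic potential is chosen only as an example) and recovers the expected lowest energy levels.

import numpy as np

hbar = m = 1.0
n = 1000
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

# Kinetic energy operator from the standard three-point stencil for d^2/dx^2
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / dx**2
T = -(hbar**2 / (2.0 * m)) * lap

# Potential energy operator, here V(x) = x^2 / 2 (harmonic oscillator with omega = 1)
V = np.diag(0.5 * x**2)

H = T + V
print(np.linalg.eigvalsh(H)[:4])   # approximately [0.5, 1.5, 2.5, 3.5], i.e. hbar*omega*(n + 1/2)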
Expectation value.
It can be shown that the expectation value of the Hamiltonian, which gives the energy expectation value, will always be greater than or equal to the minimum potential of the system.
Consider computing the expectation value of kinetic energy:
formula_13
Hence the expectation value of kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy which is given for a normalized wavefunction as:
formula_14
which completes the proof. Similarly, the condition can be generalized to higher dimensions using the divergence theorem.
Many particles.
The formalism can be extended to formula_15 particles:
formula_16
where
formula_17
is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and
formula_18
is the kinetic energy operator of particle formula_19, formula_20 is the gradient for particle formula_19, and formula_21 is the Laplacian for particle n:
formula_22
Combining these yields the Schrödinger Hamiltonian for the formula_15-particle case:
formula_23
However, complications can arise in the many-body problem. Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles:
formula_24
where formula_25 denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as "mass polarization terms", and appear in the Hamiltonian of many-electron atoms (see below).
For formula_15 interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function formula_26 is "not" simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle.
For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is
formula_27
The general form of the Hamiltonian in this case is:
formula_28
where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation—in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.
Schrödinger equation.
The Hamiltonian generates the time evolution of quantum states. If formula_29 is the state of the system at time formula_30, then
formula_31
This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons formula_1 is also called the Hamiltonian. Given the state at some initial time (formula_32), we can solve it to obtain the state at any subsequent time. In particular, if formula_1 is independent of time, then
formula_33
The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in formula_1. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
formula_34
is a unitary operator. It is the "time evolution operator" or "propagator" of a closed quantum system. If the Hamiltonian is time-independent, formula_35 form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
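For a finite-dimensional Hamiltonian the exponential can be formed explicitly, which makes the unitarity of the propagator easy to verify. The two-level example below uses an arbitrary Hermitian matrix and ħ = 1, and builds exp(−iHt/ħ) from the eigendecomposition of H.

import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -0.3]])   # an arbitrary 2x2 Hermitian Hamiltonian

def propagator(H, t):
    # U(t) = exp(-i H t / hbar) via the eigendecomposition of H
    E, P = np.linalg.eigh(H)
    return P @ np.diag(np.exp(-1j * E * t / hbar)) @ P.conj().T

U = propagator(H, 2.0)
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: the propagator is unitary

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U @ psi0
print(np.vdot(psi_t, psi_t).real)               # 1.0: the norm of the state is preserved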
Dirac formalism.
However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:
The eigenkets (eigenvectors) of formula_1, denoted formula_36, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted formula_37, solving the equation:
formula_38
Since formula_1 is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
Expressions for the Hamiltonian.
Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by formula_6, and charges by formula_39.
Free particle.
The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:
formula_40
and in higher dimensions:
formula_41
Constant-potential well.
For a particle in a region of constant potential formula_42 (no dependence on space or time), in one dimension, the Hamiltonian is:
formula_43
in three dimensions
formula_44
This applies to the elementary "particle in a box" problem, and step potentials.
Simple harmonic oscillator.
For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:
formula_45
where the angular frequency formula_46, effective spring constant formula_47, and mass formula_6 of the oscillator satisfy:
formula_48
so the Hamiltonian is:
formula_49
For three dimensions, this becomes
formula_50
where the three-dimensional position vector formula_51 using Cartesian coordinates is formula_52, its magnitude is
formula_53
Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:
formula_54
Rigid rotor.
For a rigid rotor—i.e., system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:
formula_55
where formula_56, formula_57, and formula_58 are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and formula_59, formula_60, and formula_61 are the total angular momentum operators (components), about the formula_62, formula_63, and formula_64 axes respectively.
Electrostatic (Coulomb) potential.
The Coulomb potential energy for two point charges formula_65 and formula_66 (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units—rather than Gaussian units which are frequently used in electromagnetism):
formula_67
However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For formula_15 charges, the potential energy of charge formula_68 due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):
formula_69
where formula_70 is the electrostatic potential of charge formula_68 at formula_71. The total potential of the system is then the sum over formula_72:
formula_73
so the Hamiltonian is:
formula_74
Electric dipole in an electric field.
For an electric dipole moment formula_75 constituting charges of magnitude formula_39, in a uniform, electrostatic field (time-independent) formula_76, positioned in one place, the potential is:
formula_77
the dipole moment itself is the operator
formula_78
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
formula_79
Magnetic dipole in a magnetic field.
For a magnetic dipole moment formula_80 in a uniform, magnetostatic field (time-independent) formula_81, positioned in one place, the potential is:
formula_82
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
formula_83
For a spin-<templatestyles src="Fraction/styles.css" />1⁄2 particle, the corresponding spin magnetic moment is:
formula_84
where formula_85 is the "spin g-factor" (not to be confused with the gyromagnetic ratio), formula_86 is the electron charge, formula_87 is the spin operator vector, whose components are the Pauli matrices, hence
formula_88
Charged particle in an electromagnetic field.
For a particle with mass formula_6 and charge formula_39 in an electromagnetic field, described by the scalar potential formula_89 and vector potential formula_90, there are two parts to the Hamiltonian to substitute for. The canonical momentum operator formula_91, which includes a contribution from the formula_90 field and fulfils the canonical commutation relation, must be quantized;
formula_92
where formula_93 is the kinetic momentum. The quantization prescription reads
formula_94
so the corresponding kinetic energy operator is
formula_95
and the potential energy, which is due to the formula_89 field, is given by
formula_96
Casting all of these into the Hamiltonian gives
formula_97
Energy eigenket degeneracy, symmetry, and conservation laws.
In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the formula_62 direction is a different state from one propagating in the formula_63 direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be "degenerate".
It turns out that degeneracy occurs whenever a nontrivial unitary operator formula_98 commutes with the Hamiltonian. To see this, suppose that formula_99 is an energy eigenket. Then formula_100 is an energy eigenket with the same eigenvalue, since
formula_101
Since formula_98 is nontrivial, at least one pair of formula_99 and formula_100 must represent distinct states. Therefore, formula_1 has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
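A small matrix example makes the connection concrete. In the sketch below (a four-site tight-binding ring with unit hopping, a toy model chosen purely for illustration), the cyclic-shift operator plays the role of the rotation: it commutes with the Hamiltonian, and the left- and right-moving states come out degenerate, in analogy with the counter-propagating plane waves described above.

import numpy as np

# Hamiltonian of a 4-site ring with nearest-neighbour hopping (toy example, hbar = 1)
H = -np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)

# Unitary shift operator sending site j to site j+1 (mod 4)
U = np.roll(np.eye(4), 1, axis=0)

print(np.allclose(U @ H, H @ U))   # True: U commutes with H
print(np.linalg.eigvalsh(H))       # [-2, 0, 0, 2]: a degenerate pair of energy eigenvalues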
The existence of a symmetry operator implies the existence of a conserved observable. Let formula_102 be the Hermitian generator of formula_98:
formula_103
It is straightforward to show that if formula_98 commutes with formula_1, then so does formula_102:
formula_104
Therefore,
formula_105
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
formula_106
Thus, the expected value of the observable formula_102 is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
Hamilton's equations.
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states formula_107, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
formula_108
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time formula_30, formula_109, can be expanded in terms of these basis states:
formula_110
where
formula_111
The coefficients formula_112 are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
formula_113
where the last step was obtained by expanding formula_109 in terms of the basis states.
Each formula_112 actually corresponds to "two" independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use formula_112 and its complex conjugate formula_114. With this choice of independent variables, we can calculate the partial derivative
formula_115
By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to
formula_116
Similarly, one can show that
formula_117
If we define "conjugate momentum" variables formula_118 by
formula_119
then the above equations become
formula_120
which is precisely the form of Hamilton's equations, with the formula_121s as the generalized coordinates, the formula_118s as the conjugate momenta, and formula_122 taking the place of the classical Hamiltonian.
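As a small numerical sanity check (a sketch on a toy finite-dimensional Hamiltonian, not from the article), one can verify that iħ da_n/dt, obtained from the exact time evolution, equals ∂⟨H⟩/∂a_n* = Σ_n' H_{nn'} a_{n'}, i.e. that the Hamilton-like equations above reproduce the Schrödinger equation.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(2)

A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = A + A.conj().T                       # Hermitian Hamiltonian in the chosen basis

a0 = rng.normal(size=3) + 1j * rng.normal(size=3)
a0 /= np.linalg.norm(a0)                 # expansion coefficients a_n(0)

dt = 1e-6
a_dt = expm(-1j * H * dt / hbar) @ a0    # a_n(dt) from exact time evolution
lhs = 1j * hbar * (a_dt - a0) / dt       # i*hbar * (finite-difference) d a/dt
rhs = H @ a0                             # d<H>/d a_n*  evaluated at t = 0

print("max |lhs - rhs| =", np.max(np.abs(lhs - rhs)))   # small: O(dt) error
```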
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\hat{H}"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "\\check{H}"
},
{
"math_id": 3,
"text": " \\hat{H} = \\hat{T} + \\hat{V}, "
},
{
"math_id": 4,
"text": " \\hat{V} = V = V(\\mathbf{r},t) ,"
},
{
"math_id": 5,
"text": "\\hat{T} = \\frac{\\mathbf{\\hat{p}}\\cdot\\mathbf{\\hat{p}}}{2m} = \\frac{\\hat{p}^2}{2m} = -\\frac{\\hbar^2}{2m}\\nabla^2,"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": " \\hat{p} = -i\\hbar\\nabla ,"
},
{
"math_id": 8,
"text": "\\nabla"
},
{
"math_id": 9,
"text": "\\nabla^2"
},
{
"math_id": 10,
"text": "\\nabla^2 = \\frac{\\partial^2}{ {\\partial x}^2} + \\frac{\\partial^2}{ {\\partial y}^2} + \\frac{\\partial^2}{ {\\partial z}^2}"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\hat{H} & = \\hat{T} + \\hat{V} \\\\[6pt]\n & = \\frac{\\mathbf{\\hat{p}}\\cdot\\mathbf{\\hat{p}}}{2m}+ V(\\mathbf{r},t) \\\\[6pt]\n & = -\\frac{\\hbar^2}{2m}\\nabla^2+ V(\\mathbf{r},t)\n\\end{align}"
},
{
"math_id": 12,
"text": "\\Psi(\\mathbf{r}, t)"
},
{
"math_id": 13,
"text": "\\begin{align}\nKE &= -\\frac{\\hbar^2}{2m} \\int_{-\\infty}^{+\\infty} \\psi^* \\left(\\frac{d^2\\psi}{dx^2}\\right) \\, dx \n\\\\ &=-\\frac{\\hbar^2}{2m} \\left( {\\left[ \\psi'(x) \\psi^*(x) \\right]_{-\\infty}^{+\\infty}} - \\int_{-\\infty}^{+\\infty} \\left(\\frac{d\\psi}{dx} \\right)\\left(\\frac{d\\psi}{dx} \\right)^* \\, dx \\right) \n\\\\ &= \\frac{\\hbar^2}{2m} \\int_{-\\infty}^{+\\infty} \\left|\\frac{d\\psi}{dx} \\right|^2 \\, dx \\geq 0\n\\end{align}"
},
{
"math_id": 14,
"text": "E = KE + \\langle V(x) \\rangle = KE + \\int_{-\\infty}^{+\\infty} V(x) |\\psi(x)|^2 \\, dx \\geq V_{\\text{min}}(x) \\int_{-\\infty}^{+\\infty} |\\psi(x)|^2 \\, dx \\geq V_{\\text{min}}(x) "
},
{
"math_id": 15,
"text": "N"
},
{
"math_id": 16,
"text": " \\hat{H} = \\sum_{n=1}^N \\hat{T}_n + \\hat{V} "
},
{
"math_id": 17,
"text": " \\hat{V} = V(\\mathbf{r}_1,\\mathbf{r}_2,\\ldots, \\mathbf{r}_N,t) ,"
},
{
"math_id": 18,
"text": " \\hat{T}_n = \\frac{\\mathbf{\\hat{p}}_n\\cdot\\mathbf{\\hat{p}}_n}{2m_n} = -\\frac{\\hbar^2}{2m_n}\\nabla_n^2"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "\\nabla_n"
},
{
"math_id": 21,
"text": "\\nabla_n^2"
},
{
"math_id": 22,
"text": "\\nabla_n^2 = \\frac{\\partial^2}{\\partial x_n^2} + \\frac{\\partial^2}{\\partial y_n^2} + \\frac{\\partial^2}{\\partial z_n^2},"
},
{
"math_id": 23,
"text": "\\begin{align}\n\\hat{H} & = \\sum_{n=1}^N \\hat{T}_n + \\hat{V} \\\\[6pt]\n & = \\sum_{n=1}^N \\frac{\\mathbf{\\hat{p}}_n\\cdot\\mathbf{\\hat{p}}_n}{2m_n}+ V(\\mathbf{r}_1,\\mathbf{r}_2,\\ldots,\\mathbf{r}_N,t) \\\\[6pt]\n & = -\\frac{\\hbar^2}{2}\\sum_{n=1}^N \\frac{1}{m_n}\\nabla_n^2 + V(\\mathbf{r}_1,\\mathbf{r}_2,\\ldots,\\mathbf{r}_N,t)\n\\end{align} "
},
{
"math_id": 24,
"text": "-\\frac{\\hbar^2}{2M}\\nabla_i\\cdot\\nabla_j "
},
{
"math_id": 25,
"text": "M"
},
{
"math_id": 26,
"text": "V"
},
{
"math_id": 27,
"text": " V = \\sum_{i=1}^N V(\\mathbf{r}_i,t) = V(\\mathbf{r}_1,t) + V(\\mathbf{r}_2,t) + \\cdots + V(\\mathbf{r}_N,t) "
},
{
"math_id": 28,
"text": "\\begin{align}\n \\hat{H} & = -\\frac{\\hbar^2}{2}\\sum_{i=1}^N \\frac{1}{m_i}\\nabla_i^2 + \\sum_{i=1}^N V_i \\\\[6pt]\n & = \\sum_{i=1}^N \\left(-\\frac{\\hbar^2}{2m_i}\\nabla_i^2 + V_i \\right) \\\\[6pt]\n & = \\sum_{i=1}^N \\hat{H}_i\n\\end{align}"
},
{
"math_id": 29,
"text": " \\left| \\psi (t) \\right\\rangle"
},
{
"math_id": 30,
"text": "t"
},
{
"math_id": 31,
"text": " H \\left| \\psi (t) \\right\\rangle = i \\hbar {d\\over\\ d t} \\left| \\psi (t) \\right\\rangle."
},
{
"math_id": 32,
"text": "t = 0"
},
{
"math_id": 33,
"text": " \\left| \\psi (t) \\right\\rangle = e^{-iHt/\\hbar} \\left| \\psi (0) \\right\\rangle."
},
{
"math_id": 34,
"text": " U = e^{-iHt/\\hbar} "
},
{
"math_id": 35,
"text": "\\{U(t)\\}"
},
{
"math_id": 36,
"text": "\\left| a \\right\\rang"
},
{
"math_id": 37,
"text": "\\{ E_a \\}"
},
{
"math_id": 38,
"text": " H \\left| a \\right\\rangle = E_a \\left| a \\right\\rangle."
},
{
"math_id": 39,
"text": "q"
},
{
"math_id": 40,
"text": "\\hat{H} = -\\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial x^2} "
},
{
"math_id": 41,
"text": "\\hat{H} = -\\frac{\\hbar^2}{2m}\\nabla^2 "
},
{
"math_id": 42,
"text": "V = V_0"
},
{
"math_id": 43,
"text": "\\hat{H} = -\\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial x^2} + V_0 "
},
{
"math_id": 44,
"text": "\\hat{H} = -\\frac{\\hbar^2}{2m}\\nabla^2 + V_0 "
},
{
"math_id": 45,
"text": "V = \\frac{k}{2}x^2 = \\frac{m\\omega^2}{2}x^2 "
},
{
"math_id": 46,
"text": "\\omega"
},
{
"math_id": 47,
"text": "k"
},
{
"math_id": 48,
"text": "\\omega^2 = \\frac{k}{m}"
},
{
"math_id": 49,
"text": "\\hat{H} = -\\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial x^2} + \\frac{m\\omega^2}{2}x^2 "
},
{
"math_id": 50,
"text": "\\hat{H} = -\\frac{\\hbar^2}{2m}\\nabla^2 + \\frac{m\\omega^2}{2} r^2 "
},
{
"math_id": 51,
"text": "\\mathbf{r}"
},
{
"math_id": 52,
"text": "(x, y, z)"
},
{
"math_id": 53,
"text": "r^2 = \\mathbf{r}\\cdot\\mathbf{r} = |\\mathbf{r}|^2 = x^2+y^2+z^2 "
},
{
"math_id": 54,
"text": "\\begin{align}\n\\hat{H} & = -\\frac{\\hbar^2}{2m}\\left( \\frac{\\partial^2}{\\partial x^2} + \\frac{\\partial^2}{\\partial y^2} + \\frac{\\partial^2}{\\partial z^2} \\right) + \\frac{m\\omega^2}{2} \\left(x^2 + y^2 + z^2\\right) \\\\[6pt]\n& = \\left(-\\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial x^2} + \\frac{m\\omega^2}{2}x^2\\right) + \\left(-\\frac{\\hbar^2}{2m} \\frac{\\partial^2}{\\partial y^2} + \\frac{m\\omega^2}{2}y^2 \\right ) + \\left(- \\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial z^2} +\\frac{m\\omega^2}{2}z^2 \\right)\n\\end{align}"
},
{
"math_id": 55,
"text": " \\hat{H} = -\\frac{\\hbar^2}{2I_{xx}}\\hat{J}_x^2 -\\frac{\\hbar^2}{2I_{yy}}\\hat{J}_y^2 -\\frac{\\hbar^2}{2I_{zz}}\\hat{J}_z^2 "
},
{
"math_id": 56,
"text": "I_{xx}"
},
{
"math_id": 57,
"text": "I_{yy}"
},
{
"math_id": 58,
"text": "I_{zz}"
},
{
"math_id": 59,
"text": " \\hat{J}_x"
},
{
"math_id": 60,
"text": " \\hat{J}_y"
},
{
"math_id": 61,
"text": " \\hat{J}_z"
},
{
"math_id": 62,
"text": "x"
},
{
"math_id": 63,
"text": "y"
},
{
"math_id": 64,
"text": "z"
},
{
"math_id": 65,
"text": "q_1"
},
{
"math_id": 66,
"text": "q_2"
},
{
"math_id": 67,
"text": "V = \\frac{q_1q_2}{4\\pi\\varepsilon_0 |\\mathbf{r}|}"
},
{
"math_id": 68,
"text": "q_j"
},
{
"math_id": 69,
"text": "V_j = \\frac{1}{2}\\sum_{i\\neq j} q_i \\phi(\\mathbf{r}_i)=\\frac{1}{8\\pi\\varepsilon_0}\\sum_{i\\neq j} \\frac{q_iq_j}{|\\mathbf{r}_i-\\mathbf{r}_j|}"
},
{
"math_id": 70,
"text": "\\phi(\\mathbf{r}_i)"
},
{
"math_id": 71,
"text": "\\mathbf{r}_i"
},
{
"math_id": 72,
"text": "j"
},
{
"math_id": 73,
"text": "V = \\frac{1}{8\\pi\\varepsilon_0}\\sum_{j=1}^N\\sum_{i\\neq j} \\frac{q_iq_j}{|\\mathbf{r}_i-\\mathbf{r}_j|}"
},
{
"math_id": 74,
"text": "\\begin{align}\n\\hat{H} & = -\\frac{\\hbar^2}{2}\\sum_{j=1}^N\\frac{1}{m_j}\\nabla_j^2 + \\frac{1}{8\\pi\\varepsilon_0}\\sum_{j=1}^N\\sum_{i\\neq j} \\frac{q_iq_j}{|\\mathbf{r}_i-\\mathbf{r}_j|} \\\\\n & = \\sum_{j=1}^N \\left ( -\\frac{\\hbar^2}{2m_j}\\nabla_j^2 + \\frac{1}{8\\pi\\varepsilon_0}\\sum_{i\\neq j} \\frac{q_iq_j}{|\\mathbf{r}_i-\\mathbf{r}_j|}\\right) \\\\\n\\end{align}"
},
{
"math_id": 75,
"text": "\\mathbf{d}"
},
{
"math_id": 76,
"text": "\\mathbf{E}"
},
{
"math_id": 77,
"text": "V = -\\mathbf{\\hat{d}}\\cdot\\mathbf{E} "
},
{
"math_id": 78,
"text": "\\mathbf{\\hat{d}} = q\\mathbf{\\hat{r}} "
},
{
"math_id": 79,
"text": "\\hat{H} = -\\mathbf{\\hat{d}}\\cdot\\mathbf{E} = -q\\mathbf{\\hat{r}}\\cdot\\mathbf{E}"
},
{
"math_id": 80,
"text": "\\boldsymbol{\\mu}"
},
{
"math_id": 81,
"text": "\\mathbf{B}"
},
{
"math_id": 82,
"text": "V = -\\boldsymbol{\\mu}\\cdot\\mathbf{B} "
},
{
"math_id": 83,
"text": "\\hat{H} = -\\boldsymbol{\\mu}\\cdot\\mathbf{B} "
},
{
"math_id": 84,
"text": "\\boldsymbol{\\mu}_S = \\frac{g_s e}{2m} \\mathbf{S} "
},
{
"math_id": 85,
"text": "g_s"
},
{
"math_id": 86,
"text": "e"
},
{
"math_id": 87,
"text": "\\mathbf{S}"
},
{
"math_id": 88,
"text": "\\hat{H} = \\frac{g_s e}{2m} \\mathbf{S} \\cdot\\mathbf{B} "
},
{
"math_id": 89,
"text": "\\phi"
},
{
"math_id": 90,
"text": "\\mathbf{A}"
},
{
"math_id": 91,
"text": "\\mathbf{\\hat{p}}"
},
{
"math_id": 92,
"text": "\\mathbf{\\hat{p}} = m\\dot{\\mathbf{r}} + q\\mathbf{A} ,"
},
{
"math_id": 93,
"text": "m\\dot{\\mathbf{r}}"
},
{
"math_id": 94,
"text": "\\mathbf{\\hat{p}} = -i\\hbar\\nabla ,"
},
{
"math_id": 95,
"text": "\\hat{T} = \\frac{1}{2} m\\dot{\\mathbf{r}}\\cdot\\dot{\\mathbf{r}} = \\frac{1}{2m} \\left ( \\mathbf{\\hat{p}} - q\\mathbf{A} \\right)^2 "
},
{
"math_id": 96,
"text": "\\hat{V} = q\\phi ."
},
{
"math_id": 97,
"text": "\\hat{H} = \\frac{1}{2m} \\left ( -i\\hbar\\nabla - q\\mathbf{A} \\right)^2 + q\\phi ."
},
{
"math_id": 98,
"text": "U"
},
{
"math_id": 99,
"text": "|a\\rang"
},
{
"math_id": 100,
"text": "U|a\\rang"
},
{
"math_id": 101,
"text": "UH |a\\rangle = U E_a|a\\rangle = E_a (U|a\\rangle) = H \\; (U|a\\rangle). "
},
{
"math_id": 102,
"text": "G"
},
{
"math_id": 103,
"text": " U = I - i \\varepsilon G + O(\\varepsilon^2) "
},
{
"math_id": 104,
"text": " [H, G] = 0 "
},
{
"math_id": 105,
"text": "\n\\frac{\\partial}{\\partial t} \\langle\\psi(t)|G|\\psi(t)\\rangle\n= \\frac{1}{i\\hbar} \\langle\\psi(t)|[G,H]|\\psi(t)\\rangle\n= 0.\n"
},
{
"math_id": 106,
"text": " \\langle\\psi (t)|H = - i \\hbar {d\\over\\ d t} \\langle\\psi(t)|."
},
{
"math_id": 107,
"text": "\\left\\{\\left| n \\right\\rangle\\right\\}"
},
{
"math_id": 108,
"text": " \\langle n' | n \\rangle = \\delta_{nn'}"
},
{
"math_id": 109,
"text": "\\left| \\psi\\left(t\\right) \\right\\rangle"
},
{
"math_id": 110,
"text": " |\\psi (t)\\rangle = \\sum_{n} a_n(t) |n\\rangle "
},
{
"math_id": 111,
"text": " a_n(t) = \\langle n | \\psi(t) \\rangle. "
},
{
"math_id": 112,
"text": "a_n(t)"
},
{
"math_id": 113,
"text": " \\langle H(t) \\rangle \\mathrel\\stackrel{\\mathrm{def}}{=} \\langle\\psi(t)|H|\\psi(t)\\rangle\n= \\sum_{nn'} a_{n'}^* a_n \\langle n'|H|n \\rangle "
},
{
"math_id": 114,
"text": "a_n^*(t)"
},
{
"math_id": 115,
"text": "\\frac{\\partial \\langle H \\rangle}{\\partial a_{n'}^{*}}\n= \\sum_{n} a_n \\langle n'|H|n \\rangle\n= \\langle n'|H|\\psi\\rangle\n"
},
{
"math_id": 116,
"text": "\\frac{\\partial \\langle H \\rangle}{\\partial a_{n'}^{*}}\n= i \\hbar \\frac{\\partial a_{n'}}{\\partial t} "
},
{
"math_id": 117,
"text": " \\frac{\\partial \\langle H \\rangle}{\\partial a_n}\n= - i \\hbar \\frac{\\partial a_{n}^{*}}{\\partial t} "
},
{
"math_id": 118,
"text": "\\pi_n"
},
{
"math_id": 119,
"text": " \\pi_{n}(t) = i \\hbar a_n^*(t) "
},
{
"math_id": 120,
"text": " \\frac{\\partial \\langle H \\rangle}{\\partial \\pi_n} = \\frac{\\partial a_n}{\\partial t},\\quad \\frac{\\partial \\langle H \\rangle}{\\partial a_n} = - \\frac{\\partial \\pi_n}{\\partial t} "
},
{
"math_id": 121,
"text": "a_n"
},
{
"math_id": 122,
"text": "\\langle H\\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=14381
|
14382068
|
Artificial bee colony algorithm
|
Algorithm in computer science
In computer science and operations research, the artificial bee colony algorithm (ABC) is an optimization algorithm based on the intelligent foraging behaviour of honey bee swarm, proposed by Derviş Karaboğa (Erciyes University) in 2005.
Algorithm.
In the ABC model, the colony consists of three groups of bees: employed bees, onlookers and scouts. It is assumed that there is only one artificial employed bee for each food source. In other words, the number of employed bees in the colony is equal to the number of food sources around the hive. Employed bees go to their food source, come back to the hive, and dance in this area. An employed bee whose food source has been abandoned becomes a scout and starts to search for a new food source. Onlookers watch the dances of employed bees and choose food sources depending on the dances. The main steps of the algorithm are given below:
In ABC, a population-based algorithm, the position of a food source represents a possible solution to the optimization problem and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees is equal to the number of solutions in the population. At the first step, a randomly distributed initial population (food source positions) is generated. After initialization, the population is subjected to repeated cycles of the search processes of the employed, onlooker, and scout bees, respectively. An employed bee produces a modification on the source position in her memory and discovers a new food source position. Provided that the nectar amount of the new one is higher than that of the previous source, the bee memorizes the new source position and forgets the old one. Otherwise she keeps the position of the previous one in her memory. After all employed bees complete the search process, they share the position information of the sources with the onlookers on the dance area. Each onlooker evaluates the nectar information taken from all employed bees and then chooses a food source depending on the nectar amounts of the sources. As in the case of the employed bee, she produces a modification on the source position in her memory and checks its nectar amount. Provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one. The abandoned sources are determined, and new sources are randomly produced by artificial scouts to replace them.
Artificial bee colony algorithm.
Artificial bee colony (ABC) algorithm is an optimization technique that simulates the foraging behavior of honey bees, and has been successfully applied to various practical problems. ABC belongs to the group of swarm intelligence algorithms and was proposed by Karaboga in 2005.
A set of honey bees, called a swarm, can successfully accomplish tasks through social cooperation. In the ABC algorithm, there are three types of bees: employed bees, onlooker bees, and scout bees. The employed bees search for food around the food sources in their memory; meanwhile they share the information about these food sources with the onlooker bees. The onlooker bees tend to select good food sources from those found by the employed bees. A food source of higher quality (fitness) has a larger chance of being selected by the onlooker bees than one of lower quality. The scout bees arise from a few employed bees, which abandon their food sources and search for new ones.
In the ABC algorithm, the first half of the swarm consists of employed bees, and the second half constitutes the onlooker bees.
The number of employed bees (and likewise the number of onlooker bees) is equal to the number of solutions in the swarm. The ABC generates a randomly distributed initial population of SN solutions (food sources), where SN denotes the swarm size.
Let formula_0 represent the formula_1 solution in the swarm, where formula_2 is the dimension size.
Each employed bee formula_3 generates a new candidate solution formula_4 in the neighborhood of its present position, as in the equation below:
formula_5
where formula_6 is a randomly selected candidate solution (formula_7), formula_8 is a random dimension index selected from the set formula_9, and formula_10 is a random number within formula_11. Once the new candidate solution formula_12 is generated, a greedy selection is used. If the fitness value of formula_12 is better than that of its parent formula_13, then update formula_13 with formula_12; otherwise keep formula_13 unchanged. After all employed bees complete the search process, they share the information of their food sources with the onlooker bees through waggle dances. An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. This probabilistic selection is really a roulette wheel selection mechanism, which is described by the equation below:
formula_14
where formula_15 is the fitness value of the formula_1 solution in the swarm. As seen, the better the solution formula_16, the higher the probability that the formula_1 food source is selected. If a position cannot be improved over a predefined number (called the limit) of cycles, then the food source is abandoned. Assume that the abandoned source is formula_13; the scout bee then discovers a new food source to replace it, as in the equation below:
formula_17
where formula_18 is a random number drawn uniformly from formula_19, and formula_20 are the lower and upper boundaries of the formula_21 dimension, respectively.
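The loop described above can be condensed into a short Python sketch. This is not part of the article: the objective (the sphere function), the swarm size, the limit, and the cycle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # simple test objective: f(x) = sum(x_i^2), minimum 0 at the origin
    return float(np.sum(x * x))

def fitness(f_val):
    # common ABC fitness transform for minimisation problems
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)

dim, SN, limit, max_cycles = 5, 20, 50, 200
lb, ub = -5.0, 5.0

X = rng.uniform(lb, ub, size=(SN, dim))          # food sources (solutions)
f_vals = np.array([sphere(x) for x in X])
trials = np.zeros(SN, dtype=int)

def neighbour(i):
    # v_{i,k} = x_{i,k} + phi * (x_{i,k} - x_{j,k}) for one random dimension k
    j = rng.choice([s for s in range(SN) if s != i])
    k = rng.integers(dim)
    v = X[i].copy()
    v[k] = np.clip(X[i, k] + rng.uniform(-1, 1) * (X[i, k] - X[j, k]), lb, ub)
    return v

def greedy(i, v):
    # keep the new candidate only if it improves on the parent
    fv = sphere(v)
    if fv < f_vals[i]:
        X[i], f_vals[i], trials[i] = v, fv, 0
    else:
        trials[i] += 1

for cycle in range(max_cycles):
    for i in range(SN):                          # employed-bee phase
        greedy(i, neighbour(i))
    fit = np.array([fitness(fv) for fv in f_vals])
    p = fit / fit.sum()                          # roulette-wheel probabilities
    for _ in range(SN):                          # onlooker-bee phase
        i = rng.choice(SN, p=p)
        greedy(i, neighbour(i))
    worn = np.argmax(trials)                     # scout-bee phase
    if trials[worn] > limit:
        X[worn] = rng.uniform(lb, ub, size=dim)
        f_vals[worn] = sphere(X[worn])
        trials[worn] = 0

print("best value found:", f_vals.min())
```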
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X_i=\\{x_{i,1},x_{i,2},\\ldots,x_{i,n}\\}"
},
{
"math_id": 1,
"text": "i^{th}"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "X_{i}"
},
{
"math_id": 4,
"text": "V_{i}"
},
{
"math_id": 5,
"text": "v_{i,k} = x_{i,k}+\\Phi_{i,k}\\times (x_{i,k}-x_{j,k})"
},
{
"math_id": 6,
"text": "X_j"
},
{
"math_id": 7,
"text": "i\\neq j"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "\\{1,2,\\ldots,n\\}"
},
{
"math_id": 10,
"text": "\\Phi_{i,k}"
},
{
"math_id": 11,
"text": "[-1,1]"
},
{
"math_id": 12,
"text": "V_i"
},
{
"math_id": 13,
"text": "X_i"
},
{
"math_id": 14,
"text": "P_i=\\frac{\\mathrm {fit}_i}{\\sum_j{\\mathrm {fit}_j}}"
},
{
"math_id": 15,
"text": "\\mathrm {fit}_i"
},
{
"math_id": 16,
"text": "i"
},
{
"math_id": 17,
"text": "x_{i,k}=lb_k+\\Phi_{i,k}\\times(ub_k-lb_k)"
},
{
"math_id": 18,
"text": "\\Phi_{i,k}=\\mathrm {rand}(0,1)"
},
{
"math_id": 19,
"text": "[0,1]"
},
{
"math_id": 20,
"text": "lb_k, ub_k"
},
{
"math_id": 21,
"text": "k^{th}"
}
] |
https://en.wikipedia.org/wiki?curid=14382068
|
14384942
|
Inverse exchange-traded fund
|
Fund traded on a public stock market designed to perform as the inverse of the reference it tracks
An inverse exchange-traded fund is an exchange-traded fund (ETF), traded on a public stock market, which is designed to perform as the "inverse" of whatever index or benchmark it is designed to track. These funds work by using short selling, trading derivatives such as futures contracts, and other leveraged investment techniques.
By providing performance opposite to their benchmark over short investing horizons (and excluding the impact of fees and other costs), inverse ETFs give a result similar to short selling the stocks in the index. An inverse S&P 500 ETF, for example, seeks a daily percentage movement opposite that of the S&P. If the S&P 500 rises by 1%, the inverse ETF is designed to fall by 1%; and if the S&P falls by 1%, the inverse ETF should rise by 1%. Because their value rises in a declining market environment, they are popular investments in bear markets.
Short sales have the potential to expose an investor to unlimited losses, whether or not the sale involves a stock or ETF. An inverse ETF, on the other hand, provides many of the same benefits as shorting, yet it exposes an investor only to the loss of the purchase price. Another advantage of inverse ETFs is that they may be held in IRA accounts, while short sales are not permitted in these accounts.
Systemic impact.
Because inverse ETFs and leveraged ETFs must change their notional every day to replicate daily returns (discussed below), their use generates trading, which is generally done at the end of the day, in the last hour of trading. Some have claimed that this trading causes increased volatility, while others argue that the activity is not significant. In 2015 the three U.S. listing exchanges—the New York Stock Exchange, NASDAQ and BATS Global Markets—resolved to cease accepting stop-loss orders on any traded securities.
Fees and other issues.
Fees.
Inverse and leveraged inverse ETFs tend to have higher expense ratios than standard index ETFs, since the funds are by their nature actively managed; these costs can eat away at performance.
Short-terms vs. long-term.
In a market with a long-term upward bias, profit-making opportunities via inverse funds are limited in long time spans. In addition, a flat or rising market means these funds might struggle to make money. Inverse ETFs are designed to be used for relatively short-term investing as part of a market timing strategy.
Volatility loss.
An inverse ETF, like any leveraged ETF, needs to buy when the market rises and sell when it falls in order to maintain a fixed leverage ratio. This results in a volatility loss proportional to the market variance. Compared to a short position with identical initial exposure, the inverse ETF will therefore usually deliver inferior returns. The exception is if the market declines significantly on low volatility so that the capital gain outweighs the volatility loss. Such large declines benefit the inverse ETF because the relative exposure of the short position drops as the market falls.
Since the risk of the inverse ETF and a fixed short position will differ significantly as the index drifts away from its initial value, differences in realized payoff have no clear interpretation. It may therefore be better to evaluate the performance assuming the index returns to the initial level. In that case an inverse ETF will always incur a volatility loss relative to the short position.
As with synthetic options, leveraged ETFs need to be frequently rebalanced. In financial mathematics terms, they are not Delta One products: they have Gamma.
The volatility loss is also sometimes referred to as a compounding error.
Hypothetical examples.
If one invests $100 in an inverse ETF position in an asset worth $100, and the asset's value changes the first day to $80, and the following day to $60, then the value of the inverse ETF position will increase by 20% (because the asset decreased by 20% from 100 to 80) and then increase by 25% (because the asset decreased by 25% from 80 to 60). So the ETF's value will be $100*1.20*1.25=$150. The gain of an equivalent short position will however be $100–$60=$40, and so we see that the capital gain of the ETF outweighs the volatility loss relative to the short position. However, if the market swings back to $100 again, then the net profit of the short position is zero. However, since the value of the asset increased by 67% (from $60 to $100), the inverse ETF must lose 67%, meaning it will lose $100. Thus the investment in shorts went from $100 to $140 and back to $100. The investment in the inverse ETF, however, went from $100 to $150 to $50.
An investor in an inverse ETF may correctly predict the collapse of an asset and still suffer heavy losses. For example, if one invests $100 in an inverse ETF position in an asset worth $100, and the asset's value drops 99% (to $1) the next day, the inverse asset will gain 99% (to $199) on that day. If the asset then climbs from $1 to $2 on the following day (100% intra-day gain), the inverse ETF position would drop 100% that day, and the investment would be completely lost, despite the underlying asset still being 98% below its initial value. This particular scenario requires both abrupt and profound volatility – by contrast, the S&P 500 index has never increased by more than 12% in one day.
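The first hypothetical path above can be reproduced with a few lines of Python (a sketch, not part of the article), comparing a daily-rebalanced -1x position with a plain short:

```python
# Asset falls from $100 to $60 and recovers to $100.
prices = [100.0, 80.0, 60.0, 100.0]

inverse_etf = 100.0                       # value of the inverse ETF position
for prev, cur in zip(prices, prices[1:]):
    daily_return = (cur - prev) / prev
    inverse_etf *= 1.0 - daily_return     # -1x the daily return, rebalanced daily

short_value = 100.0 + (prices[0] - prices[-1])   # short sold at $100, covered at $100
print(f"inverse ETF value: ${inverse_etf:.2f}")  # $50.00
print(f"short position:    ${short_value:.2f}")  # $100.00
```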
Historical example.
For instance, between the close of November 28, 2008 and December 5, 2008, the iShares Dow Jones US Financial (NYSE: IYF) moved from 44.98 to 45.35 (essentially flat, properly an increase of 0.8%), so a double short would have lost 1.6% over that time. However, it varied greatly during the week (dropping to a low of 37.92 on December 1, a daily drop of 15.7%, before recovering over the week), and thus the ProShares UltraShort Financials (NYSE: SKF), which is a double-short ETF of the IYF, moved from 135.05 to 117.18, a loss of 13.2%.
Furthermore, the BetaShares BEAR fund gained 16.9% in March 2020, compared to a fall of 20.7% in the S&P/ASX 200. For the March 2020 quarter, BEAR was up by 20.1%, versus a 23.1% slump in the index. BBOZ, the BetaShares geared Australian short fund, did even better; it was up by 33% in March 2020, compared to the 20.7% fall in the S&P/ASX 200; for the March quarter, BBOZ rose by 40.6%, against the 23.1% fall in the index. Finally, BBUS, the BetaShares leveraged US fund, surged by 22.6% in March 2020, while its benchmark index, the S&P 500 Total Return Index (which includes dividends, measured in US$), fell. For the March quarter, BBUS gained 47.8%, versus a fall of 19.7% for the index.
Expected loss.
Given that the index follows a geometric Brownian motion and that a fraction formula_0 of the fund formula_1 is invested in the index formula_2, the volatility gain of the log return can be seen from the following relation.
formula_3
where formula_4 is the variance of the index process and the last term on the right hand side constitutes the volatility gain. We see that if formula_5 or formula_6, as is the case with leveraged ETFs, the return of the fund will be less than formula_0 times the index return (the first term on the right hand side).
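A hedged numerical check of this relation (not from the article; the drift, volatility, leverage factor, step and path counts below are illustrative assumptions, and the cash portion of the fund is assumed to earn nothing): simulate a geometric Brownian motion index, rebalance the fund to a fraction x of its value in the index at each step, and compare the average log return of the fund with the right-hand side of the relation.

```python
import numpy as np

rng = np.random.default_rng(0)
x, sigma, mu, T, steps, paths = -1.0, 0.2, 0.05, 1.0, 252, 20000
dt = T / steps

z = rng.normal(size=(paths, steps))
index_rets = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z) - 1.0

fund = np.prod(1.0 + x * index_rets, axis=1)          # rebalanced each step
index = np.prod(1.0 + index_rets, axis=1)

lhs = np.mean(np.log(fund))
rhs = np.mean(x * np.log(index)) + (x - x**2) * sigma**2 * T / 2
print(f"mean log fund return: {lhs:+.4f}   formula: {rhs:+.4f}")   # approximately equal
```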
List of funds.
Some inverse ETFs are:
AdvisorShares
BetaShares Exchange-Traded Funds
Boost ETP
Direxion
ProShares
Horizons BetaPro
Tuttle
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "A_t"
},
{
"math_id": 2,
"text": "S_t"
},
{
"math_id": 3,
"text": "\\Delta \\ln(A_t)=x \\Delta \\ln(S_t)+(x-x^2)\\sigma^2\\frac{\\Delta t}{2}"
},
{
"math_id": 4,
"text": "\\sigma^2"
},
{
"math_id": 5,
"text": "x<0"
},
{
"math_id": 6,
"text": "x>1"
}
] |
https://en.wikipedia.org/wiki?curid=14384942
|
1438524
|
Degree of curvature
|
Measure of a bend's roundness
Degree of curve or degree of curvature is a measure of curvature of a circular arc used in civil engineering for its easy use in layout surveying.
Definition.
The degree of curvature is defined as the central angle to the ends of an agreed length of either an arc or a chord; various lengths are commonly used in different areas of practice. This angle is also the change in forward direction as that portion of the curve is traveled. In an "n"-degree curve, the forward bearing changes by "n" degrees over the standard length of arc or chord.
Usage.
Curvature is usually measured in radius of curvature. A small circle can be easily laid out by just using radius of curvature, but degree of curvature is more convenient for calculating and laying out the curve if the radius is as large as a kilometer or a mile, as is needed for large-scale works like roads and railroads. By using degrees of curvature, curve setting can be easily done with the help of a transit or theodolite and a chain, tape, or rope of a prescribed length.
Length selection.
The usual distance used to compute degree of curvature in North American road work is 100 feet of arc. Conversely, North American railroad work traditionally used 100 feet of chord, which is used in other places for road work. Other lengths may be used—such as 100 metres where SI is favoured, or a shorter length for sharper curves. Where degree of curvature is based on 100 units of arc length, the conversion between degree of curvature and radius is "Dr" = 18000/π ≈ 5729.57795, where "D" is degree and "r" is radius.
Since rail routes have very large radii, they are laid out in chords, as the difference to the arc is inconsequential; this made work easier before electronic calculators became available.
This 100-foot length is called a station, used to define length along a road or other alignment, annotated as stations plus feet 1+00, 2+00, etc. Metric work may use similar notation, such as kilometers plus meters 1+000.
Formulas for radius of curvature.
Degree of curvature can be converted to radius of curvature by the following formulae:
Formula from arc length.
formula_0
where formula_1 is arc length, formula_2 is radius of curvature, and formula_3 is degree of curvature, arc definition
Substitute deflection angle for degree of curvature or make arc length equal to 100 feet.
Formula from chord length.
formula_4
where formula_5 is chord length, formula_2 is radius of curvature and formula_3 is degree of curvature, chord definition
Formula from radius.
formula_6
Example.
As an example, a curve with an arc length of 600 units that has an overall sweep of 6 degrees is a 1-degree curve: for every 100 units of arc, the bearing changes by 1 degree. The radius of such a curve is 5729.57795 units. If the chord definition is used, each 100-unit chord length will sweep 1 degree with a radius of 5729.651 units, and the chord of the whole curve will be slightly shorter than 600 units.
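The two formulas above translate directly into small conversion helpers; the following Python sketch (not part of the article) reproduces the radii quoted in the example, taking the conventional 100-unit standard length as a default.

```python
import math

def radius_from_arc_degree(D_deg, arc_length=100.0):
    # r = (180/pi) * A / D_C   (arc definition)
    return 180.0 * arc_length / (math.pi * D_deg)

def radius_from_chord_degree(D_deg, chord_length=100.0):
    # r = C / (2 * sin(D_C / 2))   (chord definition)
    return chord_length / (2.0 * math.sin(math.radians(D_deg) / 2.0))

print(radius_from_arc_degree(1.0))     # 5729.5779...
print(radius_from_chord_degree(1.0))   # 5729.6508...
```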
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r = \\frac{180^\\circ A}{\\pi D_\\text{C}}"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "D_\\text{C}"
},
{
"math_id": 4,
"text": "r = \\frac{C}{2 \\sin \\left( \\frac{D_\\text{C}}{2} \\right) }"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "D_\\text{C} = 5729.58/r"
}
] |
https://en.wikipedia.org/wiki?curid=1438524
|
143854
|
Interpunct
|
Typographical symbol, variously used as word delimiter, currency decimal delimiter, etc.
An interpunct ⟨·⟩, also known as an interpoint, middle dot, middot, centered dot or centred dot, is a punctuation mark consisting of a vertically centered dot used for interword separation in Classical Latin. (Word-separating spaces did not appear until some time between 600 and 800 CE.) It appears in a variety of uses in some modern languages.
The multiplication dot or "dot operator" is frequently used in mathematical and scientific notation, and it may differ in appearance from the interpunct.
In written language.
Various dictionaries use the interpunct (in this context, sometimes called a hyphenation point) to indicate where to split a word and insert a hyphen if the word doesn't fit on the line. There is also a separate Unicode character, .
English.
In British typography, the space dot was once used as the formal decimal point. Its use was advocated by laws and can still be found in some UK-based academic journals such as "The Lancet". When the pound sterling was decimalised in 1971, the official advice issued was to write decimal amounts with a raised point (for example, ) and to use a decimal point "on the line" only when typesetting constraints made it unavoidable. However, this usage had already been declining since the 1968 ruling by the Ministry of Technology to use the full stop as the decimal point, not only because of that ruling but also because it is the widely-adopted international standard, and because the standard UK keyboard layout (for typewriters and computers) has only the full stop. The space dot is still used by some in handwriting.
In the early modern era, full stops (periods) were sometimes written as interpuncts (for example in the depicted 1646 transcription of the Mayflower Compact).
In the Shavian alphabet, interpuncts replace capitalization as the marker of proper nouns. The dot is placed at the beginning of a word.
Catalan.
The ("flying point") is used in Catalan between two Ls in cases where each belongs to a separate syllable, for example , "cell". This distinguishes such "geminate Ls" (), which are pronounced , from "double L" (), which are written without the flying point and are pronounced . In situations where the flying point is unavailable, periods (as in ) or hyphens (as in ) are frequently used as substitutes, but this is tolerated rather than encouraged.
Historically, medieval Catalan also used the symbol · as a marker for certain elisions, much like the modern apostrophe (see Occitan below) and hyphenations.
There is no separate physical keyboard layout for Catalan: the flying point can be typed using in the Spanish (Spain) layout or with on a US English layout. On a mobile phone with a Catalan keyboard layout, the geminate L with a flying dot appears when holding down the key. It appears in Unicode as the pre-composed letters Ŀ (U+013F) and ŀ (U+0140), but they are compatibility characters and are not frequently used or recommended.
Chinese.
The interpunct is used in Chinese (which generally lacks spacing between characters) to mark divisions in words transliterated from phonogram languages, particularly names. Lacking its own code point in Unicode, the partition sign in Chinese shares the code point U+00B7 (·) with the interpunct, although it is properly (and in Taiwan formally) the full-width U+30FB (・). When the Chinese text is romanized, the partition sign is simply replaced by a standard space or other appropriate punctuation. Thus, William Shakespeare is written as and George W. Bush as . Titles and other translated words are not similarly marked: Genghis Khan and Elizabeth II are simply and without a partition sign.
The partition sign is also used to separate book and chapter titles when they are mentioned consecutively: book first and then chapter.
Hokkien.
In Pe̍h-ōe-jī for Taiwanese Hokkien, middle dot is often used as a workaround for the "dot above" right diacritic, since most early encoding systems did not support this diacritic. This is now encoded as . Unicode did not support this diacritic until June 2005. Newer fonts often support it natively; however, the practice of using middle dot still exists. Historically, it was derived in the late 19th century from an older barred-o with curly tail as an adaptation to the typewriter.
Tibetan.
In Tibetan the interpunct, called (), is used as a morpheme delimiter.
Ethiopic.
The Geʽez (Ethiopic) script traditionally separates words with an interpunct of two vertically aligned dots, like a colon, but with larger dots: . (For example ). Starting in the late 19th century the use of such punctuation has largely fallen out of use in favor of whitespace, except in formal hand-written or liturgical texts. In Eritrea the character may be used as a comma.
Franco-Provençal.
In Franco-Provençal (or Arpitan), the interpunct is used in order to distinguish the following graphemes:
French.
In modern French, the interpunct is sometimes used for gender-neutral writing, as in for ("the employees [both male and female]").
Greek.
Ancient Greek lacked spacing or interpuncts but instead ran all the letters together. By Late Antiquity, various marks were used to separate words, particularly the Greek comma.
In modern Greek, the ano teleia mark (; also known as ) is the infrequently encountered Greek semicolon and is properly romanized as such. It is also used to introduce lists in the manner of an English colon. In Greek text, Unicode provides a distinct code point for it; however, it is also often expressed as an interpunct. In practice, the separate code point for ano teleia canonically decomposes to the interpunct.
The Hellenistic scholars of Alexandria first developed the mark for a function closer to the comma, before it fell out of use and was then repurposed for its present role.
Japanese.
Interpuncts are often used to separate transcribed foreign names or words written in katakana. For example, "Beautiful Sunday" becomes (). A middle dot is also sometimes used to separate lists in Japanese instead of the Japanese comma. Dictionaries and grammar lessons in Japanese sometimes also use a similar symbol to separate a verb suffix from its root. While some fonts may render the Japanese middle dot as a square under great magnification, this is not a defining property of the middle dot that is used in China or Japan.
However, the Japanese writing system usually does not use space or punctuation to separate words (though the mixing of katakana, kanji and hiragana gives some indication of word boundary).
In Japanese typography, there exist two Unicode code points:
The interpunct also has a number of other uses in Japanese, including the following: to separate titles, names and positions: (Assistant Section Head · Suzuki); as a decimal point when writing numbers in kanji: ; as a slash when writing for "or" in abbreviations: ; and in place of hyphens, dashes and colons when writing vertically.
Korean.
Interpuncts are used in written Korean to denote a list of two or more words, similarly to how a slash (/) is used to juxtapose words in many other languages. In this role it also functions in a similar way to the English en dash, as in , "American–Soviet relations". The use of interpuncts has declined over years of digital typography, with slashes often used in their place, but, in the strictest sense, a slash cannot replace a middle dot in Korean typography.
() is used more than a middle dot when an interpunct is to be used in Korean typography, though "araea" is technically not a punctuation symbol but actually an obsolete Hangul "jamo". Because "araea" is a full-width letter, it looks better than middle dot between Hangul. In addition, it is drawn like the middle dot in Windows default Korean fonts such as "Batang".
Latin.
The interpunct () was regularly used in classical Latin to separate words. In addition to the most common round form, inscriptions sometimes use a small equilateral triangle for the interpunct, pointing either up or down. It may also appear as a mid-line comma, similar to the Greek practice of the time. The interpunct fell out of use c. 200 CE, and Latin was then written "scriptio continua" (without word separation) for several centuries.
Occitan.
In Occitan, especially in the Gascon dialect, the interpunct ("punt interior", literally, "inner dot", or "ponch naut" for "high / upper point") is used to distinguish the following graphemes:
Although it is considered to be a spelling error, a period is frequently used when a middle dot is unavailable: "des.har, in.hèrn", which is the case for French keyboard layout.
In Old Occitan, the symbol · was sometimes used to denote certain elisions, much like the modern apostrophe, the only difference being that the word that gets to be elided is always placed after the interpunct, the word before ending either in a vowel sound or the letter "n":
<templatestyles src="Col-begin/styles.css"/>
Old Irish.
In many linguistic works discussing Old Irish (but not in actual Old Irish manuscripts), the interpunct is used to separate a pretonic preverbal element from the stressed syllable of the verb, e.g. "gives". It is also used in citing the verb forms used after such preverbal elements (the prototonic forms), e.g. "carries", to distinguish them from forms used without preverbs, e.g. "carries". In other works, the hyphen (, ) or colon (, ) may be used for this purpose.
Runes.
Runic texts use either an interpunct-like or a colon-like punctuation mark to separate words. There are two Unicode characters dedicated for this:
In mathematics and science.
Up to the middle of the 20th century, and sporadically even much later, the interpunct could be found used as the decimal marker in British publications, such as tables of constants (e.g., "π = 3·14159"). This made expressions such as 15·823 potentially ambiguous, since they could denote either the product 15 × 823 = 12345 or the decimal number 15·823 (that is, 15.823). In situations where the interpunct is used as a decimal point, the multiplication sign used is usually a full stop (period), not an interpunct.
In publications conforming to the standards of the International System of Units, as well as the multiplication sign (×), the centered dot (dot operator) or space (often typographically a non-breaking space) can be used as a multiplication sign. Only a comma or full stop (period) may be used as a decimal marker. The centered dot can be used when multiplying units, as in m·kg·s−2 for the newton expressed in terms of SI base units. However, when the decimal point is used as the decimal marker, as in the United States, the use of a centered dot for the multiplication of numbers or values of quantities is discouraged.
In mathematics, a small middle dot can be used to represent multiplication; for example, formula_0 for multiplying formula_1 by formula_2. When dealing with scalars, it is interchangeable with the multiplication sign (×), as long as the multiplication sign is between numerals such that it would not be mistaken for the variable formula_3. For instance, formula_4 means the same thing as formula_5. However, when dealing with vectors, the dot operator denotes a dot product (e.g. formula_6, a scalar), which is distinct from the cross product (e.g. formula_7, a vector).
Another usage of this symbol in mathematics is with functions, where the dot is used as a placeholder for a function argument, in order to distinguish between the (general form of the) function itself and the value or a specific form of a function evaluated at a given point or with given specifications. For example, formula_8 denotes the function formula_9, and formula_10 denotes a partial application, where the first two arguments are given and the third argument shall take any valid value on its domain.
The bullet operator, ∙, U+2219, is sometimes used to denote the "AND" relationship in formal logic.
In computing, the middle dot is usually displayed (but not printed) to indicate white space in various software applications such as word processing, graphic design, web layout, desktop publishing or software development programs. In some word processors, interpuncts are used to denote not only hard space or space characters, but also sometimes used to indicate a space when put in paragraph format to show indentations and spaces. This allows the user to see where white space is located in the document and what sizes of white space are used, since normally white space is invisible so tabs, spaces, non-breaking spaces and such are indistinguishable from one another.
In chemistry, the middle dot is used to separate the parts of formulas of addition compounds, mixture salts or solvates (typically hydrates), such as of copper(II) sulphate pentahydrate, CuSO4·5H2O. The middle dot should not be surrounded by spaces when indicating a chemical adduct.
The middot as a letter.
A middot may be used as a consonant or modifier letter, rather than as punctuation, in transcription systems and in language orthographies. For such uses Unicode provides the code point .
In Americanist phonetic notation, the middot is a more common variant of the colon ⟨꞉⟩ used to indicate vowel length. It may be called a "half-colon" in such usage. Graphically, it may be high in the letter space (the top dot of the colon) or centered as the interpunct. From Americanist notation, it has been adopted into the orthographies of several languages, such as Washo.
In the writings of Franz Boas, the middot was used for palatal or palatalized consonants, e.g. ⟨kꞏ⟩ for IPA [c].
In the Sinological tradition of the 36 initials, the onset 影 (typically reconstructed as a glottal stop) may be transliterated with a middot ⟨ꞏ⟩, and the onset 喩 (typically reconstructed as a null onset) with an apostrophe ⟨ʼ⟩. Conventions vary, however, and it is common for 影 to be transliterated with the apostrophe. These conventions are used both for Chinese itself and for other scripts of China, such as ʼPhags-pa and Jurchen.
In the Canadian Aboriginal Syllabics, a middle dot ⟨ᐧ⟩ indicates a syllable medial ⟨w⟩ in Cree and Ojibwe, ⟨y⟩ or ⟨yu⟩ in some of the Athapascan languages, and a syllable medial ⟨s⟩ in Blackfoot. However, depending on the writing tradition, the middle dot may appear after the syllable it modifies (which is found in the Western style) or before the syllable it modifies (which is found in the Northern and Eastern styles). In Unicode, the middle dot is encoded both as an independent glyph and as part of a pre-composed letter, such as in . In the Carrier syllabics subset, the middle dot Final indicates a glottal stop, but a centered dot diacritic on -position letters transforms the vowel value to , for example: , .
Keyboard input.
On computers, the interpunct may be available through various key combinations, depending on the operating system and the keyboard layout. Assuming a QWERTY keyboard layout unless otherwise stated:
Similar symbols.
"Characters in the Symbol column above may not render correctly in all browsers."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x\\cdot y"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "2\\cdot3y"
},
{
"math_id": 5,
"text": "2\\times3y"
},
{
"math_id": 6,
"text": "\\vec{x}\\cdot\\vec{y}"
},
{
"math_id": 7,
"text": "\\vec{x}\\times\\vec{y}"
},
{
"math_id": 8,
"text": "f(\\cdot)"
},
{
"math_id": 9,
"text": "x\\mapsto f(x)"
},
{
"math_id": 10,
"text": "\\theta(s,a,\\cdot)"
}
] |
https://en.wikipedia.org/wiki?curid=143854
|
1438635
|
134 (number)
|
Natural number
134 (one hundred [and] thirty-four) is the natural number following 133 and preceding 135.
In mathematics.
134 is a nontotient, since there is no integer with exactly 134 coprimes below it, and a noncototient, since there is no integer with exactly 134 integers below it that share a common factor with it.
134 is formula_0.
In Roman numerals, 134 is a Friedman number since CXXXIV = XV * (XC/X) - I.
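Both identities are easy to verify directly; a quick Python check (not part of the article):

```python
import math

# 8 + 56 + 70 = 134
print(math.comb(8, 1) + math.comb(8, 3) + math.comb(8, 4))
# XV * (XC/X) - I  =  15 * (90/10) - 1  =  134
print(15 * (90 // 10) - 1)
```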
In other fields.
134 is also:
|
[
{
"math_id": 0,
"text": "{}_8C_1 + {}_8C_3 + {}_8C_4"
}
] |
https://en.wikipedia.org/wiki?curid=1438635
|
14386363
|
Ferranti effect
|
Increase in voltage at a long AC power line
In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. It can be stated as a factor, or as a percent increase.
It was first observed during the installation of underground cables in Sebastian Ziani de Ferranti's 10,000-volt AC power distribution system in 1887.
The capacitive line charging current produces a voltage drop across the line inductance that is in-phase with the sending-end voltage, assuming negligible line resistance. Therefore, both line inductance and capacitance are responsible for this phenomenon. This can be analysed by considering the line as a transmission line where the source impedance is lower than the load impedance (unterminated). The effect is similar to an electrically short version of the quarter-wave impedance transformer, but with smaller voltage transformation.
The Ferranti effect is more pronounced the longer the line and the higher the voltage applied. The relative voltage rise is proportional to the square of the line length and the square of frequency.
The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length, and lower electrical impedance.
An equivalent to the Ferranti effect occurs when inductive current flows through a series capacitance. Indeed, a formula_0 lagging current formula_1 flowing through a formula_2 impedance results in a voltage difference formula_3, hence in increased voltage on the receiving side.
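For a rough sense of scale, the sketch below (not from the article) evaluates the standard lossless-line, no-load result Vreceive/Vsend = 1/cos(βl) with β = ω√(LC). The per-kilometre inductance and capacitance are assumed, typical-order values; the small-angle expansion 1/cos(βl) ≈ 1 + (βl)²/2 reproduces the quadratic dependence on line length and frequency noted above.

```python
import math

f = 50.0                 # Hz
L_per_km = 1.0e-3        # H/km  (assumed typical overhead-line inductance)
C_per_km = 12.0e-9       # F/km  (assumed typical overhead-line capacitance)

omega = 2 * math.pi * f
for length_km in (100, 300, 500):
    beta_l = omega * math.sqrt(L_per_km * C_per_km) * length_km
    rise = 1.0 / math.cos(beta_l) - 1.0      # no-load receiving-end voltage rise
    print(f"{length_km:4d} km: rise of about {rise:.2%}")
```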
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "90^\\circ"
},
{
"math_id": 1,
"text": "-jI_L"
},
{
"math_id": 2,
"text": "-jX_c"
},
{
"math_id": 3,
"text": "V_\\text{send} - V_\\text{receive}=(-jI_L)(-jX_c)=-I_L X_c < 0"
}
] |
https://en.wikipedia.org/wiki?curid=14386363
|
1438662
|
Saha ionization equation
|
Relation between the ionization state of a gas and the temperature and pressure
In physics, the Saha ionization equation is an expression that relates the ionization state of a gas in thermal equilibrium to the temperature and pressure. The equation is a result of combining ideas of quantum mechanics and statistical mechanics and is used to explain the spectral classification of stars. The expression was developed by physicist Meghnad Saha in 1920. It is discussed in many textbooks on statistical physics and plasma physics.
Description.
For a gas at a high enough temperature (here measured in energy units, i.e. keV or J) and/or density, the thermal collisions of the atoms will ionize some of the atoms, making an ionized gas. When some of the electrons that are normally bound to the atom in orbits around the atomic nucleus are freed, they form an independent electron gas cloud co-existing with the surrounding gas of atomic ions and neutral atoms. With sufficient ionization, the gas can become the state of matter called plasma.
The Saha equation describes the degree of ionization for any gas in thermal equilibrium as a function of the temperature, density, and ionization energies of the atoms. The Saha equation only holds for weakly ionized plasmas for which the Debye length is small. This means that the screening of the Coulomb interaction of ions and electrons by other ions and electrons is negligible. The subsequent lowering of the ionization potentials and the "cutoff" of the partition function is therefore also negligible.
For a gas composed of a single atomic species, the Saha equation is written:
formula_0
where:
The expression formula_11 is the energy required to remove the formula_12th electron. In the case where only one level of ionization is important, we have formula_13 and defining the total density "n" as formula_14, the Saha equation simplifies to:
formula_15
where formula_16 is the energy of ionization. We can define the degree of ionization formula_17 and find
formula_18
This gives a quadratic equation that can be solved in closed form:
formula_19
For small formula_20, formula_21, so that the ionization decreases with density.
As a simple example, imagine a gas of monatomic hydrogen atoms, set formula_22 and let formula_16 = 13.6 eV, the ionization energy of hydrogen from its ground state. Let formula_23 be the Loschmidt constant, the particle density of Earth's atmosphere at standard pressure and temperature. At room temperature the ionization is essentially zero: formula_24 is so small that there would almost certainly be no ionized atoms in the volume of Earth's atmosphere. formula_24 increases rapidly with formula_9, reaching about 0.35 near 20,000 K (a thermal energy of roughly 1.7 eV). There is substantial ionization even though this formula_9 is much less than the ionization energy (although this depends somewhat on density). This is a common occurrence. Physically, it stems from the fact that at a given temperature, the particles have a distribution of energies, including some with several times formula_9. These high energy particles are much more effective at ionizing atoms. In Earth's atmosphere, ionization is actually governed not by the Saha equation but by very energetic cosmic rays, largely muons. These particles are not in thermal equilibrium with the atmosphere, so they are not at its temperature and the Saha logic does not apply.
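These numbers can be reproduced with a short script implementing the single-ionization form of the equation (a sketch, assuming formula_22, the Loschmidt number density, and SI constants):

```python
import numpy as np

k_B, h, m_e = 1.380649e-23, 6.62607015e-34, 9.1093837015e-31   # SI constants
eV = 1.602176634e-19
eps = 13.6 * eV                   # hydrogen ground-state ionization energy
n = 2.687e25                      # Loschmidt constant, particles per m^3

def ionization_fraction(T):
    lam = h / np.sqrt(2.0 * np.pi * m_e * k_B * T)       # thermal de Broglie wavelength
    A = (2.0 / (n * lam**3)) * np.exp(-eps / (k_B * T))  # with g1/g0 = 1
    return (-A + np.sqrt(A * A + 4.0 * A)) / 2.0         # root of x^2 + A x - A = 0

for T in (300.0, 1.0e4, 2.0e4):                          # temperatures in kelvin
    print(f"T = {T:8.0f} K   x = {ionization_fraction(T):.3g}")
```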
Particle densities.
The Saha equation is useful for determining the ratio of particle densities for two different ionization levels. The most useful form of the Saha equation for this purpose is
formula_25
where "Z" denotes the partition function. The Saha equation can be seen as a restatement of the equilibrium condition for the chemical potentials:
formula_26
This equation simply states that the potential for an atom of ionization state "i" to ionize is the same as the potential for an electron and an atom of ionization state "i" + 1; the potentials are equal, therefore the system is in equilibrium and no "net" change of ionization will occur.
Stellar atmospheres.
In the early twenties Ralph H. Fowler (in collaboration with Charles Galton Darwin) developed a new method in statistical mechanics permitting a systematic calculation of the equilibrium properties of matter. He used this to provide a rigorous derivation of the ionization formula which Saha had obtained, by extending to the ionization of atoms the theorem of Jacobus Henricus van 't Hoff, used in physical chemistry for its application to molecular dissociation. Also, a significant improvement in the Saha equation introduced by Fowler was to include the effect of the excited states of atoms and ions. A further important step forward came in 1923, when Edward Arthur Milne and R.H. Fowler published a paper in the "Monthly Notices of the Royal Astronomical Society", showing that the criterion of the maximum intensity of absorption lines (belonging to subordinate series of a neutral atom) was much more fruitful in giving information about physical parameters of stellar atmospheres than the criterion employed by Saha which consisted in the marginal appearance or disappearance of absorption lines. The latter criterion requires some knowledge of the relevant pressures in the stellar atmospheres, and Saha following the generally accepted view at the time assumed a value of the order of 1 to 0.1 atmosphere. Milne wrote:
Saha had concentrated on the marginal appearances and disappearances of absorption lines in the stellar sequence, assuming an order of magnitude for the pressure in a stellar atmosphere and calculating the temperature where increasing ionization, for example, inhibited further absorption of the line in question owing to the loss of the series electron. As Fowler and I were one day stamping round my rooms in Trinity and discussing this, it suddenly occurred to me that the maximum intensity of the Balmer lines of hydrogen, for example, was readily explained by the consideration that at the lower temperatures there were too few excited atoms to give appreciable absorption, whilst at the higher temperatures there are too few neutral atoms left to give any absorption. ... That evening I did a hasty order of magnitude calculation of the effect and found that to agree with a temperature of 10000° [K] for the stars of type A0, where the Balmer lines have their maximum, a pressure of the order of 10−4 atmosphere was required. This was very exciting, because standard determinations of pressures in stellar atmospheres from line shifts and line widths had been supposed to indicate a pressure of the order of one atmosphere or more, and I had begun on other grounds to disbelieve this.
The generally accepted view at the time assumed that the composition of stars was similar to Earth's. However, in 1925 Cecilia Payne used Saha's ionization theory to calculate that the composition of stellar atmospheres is as we now know it: mostly hydrogen and helium, expanding the knowledge of stars.
Stellar coronae.
Saha equilibrium prevails when the plasma is in local thermodynamic equilibrium, which is not the case in the optically-thin corona.
Here the equilibrium ionization states must be estimated by detailed statistical calculation of collision and recombination rates.
Early universe.
Equilibrium ionization, described by the Saha equation, explains evolution in the early universe. After the Big Bang, all atoms were ionized, leaving mostly protons and electrons. According to Saha's approach, when the universe had expanded and cooled such that the temperature reached about 3,000 K, electrons recombined with protons forming hydrogen atoms. At this point, the universe became transparent to most electromagnetic radiation. That surface, red-shifted by a factor of about 1,000, generates the 3 K cosmic microwave background radiation, which pervades the universe today.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{n_{i+1}n_\\text{e}}{n_i} = \\frac{2}{\\lambda_\\text{th}^{3}}\\frac{g_{i+1}}{g_i}\\exp\\left[-\\frac{(\\varepsilon_{i+1}-\\varepsilon_i)}{k_\\text{B} T}\\right]"
},
{
"math_id": 1,
"text": "n_i"
},
{
"math_id": 2,
"text": "g_i"
},
{
"math_id": 3,
"text": "\\varepsilon_i"
},
{
"math_id": 4,
"text": "n_\\text{e}"
},
{
"math_id": 5,
"text": "k_\\text{B}"
},
{
"math_id": 6,
"text": "\\lambda_\\text{th}"
},
{
"math_id": 7,
"text": "\\lambda_\\text{th} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{h}{\\sqrt{2\\pi m_\\text{e} k_\\text{B} T}}"
},
{
"math_id": 8,
"text": "m_\\text{e}"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "h"
},
{
"math_id": 11,
"text": "(\\varepsilon_{i+1}-\\varepsilon_i)"
},
{
"math_id": 12,
"text": "(i+1)"
},
{
"math_id": 13,
"text": "n_1=n_\\text{e}"
},
{
"math_id": 14,
"text": "n=n_0+n_1"
},
{
"math_id": 15,
"text": "\\frac{n_\\text{e}^2}{n-n_\\text{e}} = \\frac{2}{\\lambda_\\text{th}^3}\\frac{g_1}{g_0}\\exp\\left[\\frac{-\\varepsilon}{k_\\text{B} T}\\right]"
},
{
"math_id": 16,
"text": "\\varepsilon"
},
{
"math_id": 17,
"text": "x=n_1/n"
},
{
"math_id": 18,
"text": "\\frac{x^2}{1-x} = A = \\frac{2}{n\\lambda_\\text{th}^3}\\frac{g_1}{g_0}\\exp\\left[\\frac{-\\varepsilon}{k_\\text{B} T}\\right]"
},
{
"math_id": 19,
"text": "x^2 + A x - A = 0"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "x \\approx A^{1/2} \\propto n^{-1/2}"
},
{
"math_id": 22,
"text": "g_0=g_1"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "x"
},
{
"math_id": 25,
"text": "\\frac{Z_i}{N_i} = \\frac{Z_{i+1}Z_e}{N_{i+1}N_e},"
},
{
"math_id": 26,
"text": "\\mu_i = \\mu_{i+1} + \\mu_e\\,"
}
] |
https://en.wikipedia.org/wiki?curid=1438662
|
14388042
|
Analytic torsion
|
Topological invariant of manifolds that can distinguish homotopy-equivalent manifolds
In mathematics, Reidemeister torsion (or R-torsion, or Reidemeister–Franz torsion) is a topological invariant of manifolds introduced by Kurt Reidemeister for 3-manifolds and generalized to higher dimensions by Wolfgang Franz (1935) and Georges de Rham (1936).
Analytic torsion (or Ray–Singer torsion) is an invariant of Riemannian manifolds defined by Daniel B. Ray and Isadore M. Singer (1971, 1973a, 1973b) as an analytic analogue of Reidemeister torsion. Jeff Cheeger (1977, 1979) and Werner Müller (1978) proved Ray and Singer's conjecture that Reidemeister torsion and analytic torsion are the same for compact Riemannian manifolds.
Reidemeister torsion was the first invariant in algebraic topology that could distinguish between closed manifolds which are homotopy equivalent but not homeomorphic, and can thus be seen as the birth of geometric topology as a distinct field. It can be used to classify lens spaces.
Reidemeister torsion is closely related to Whitehead torsion; see . It has also given some important motivation to arithmetic topology; see . For more recent work on torsion see the books and (Nicolaescu 2002, 2003).
Definition of analytic torsion.
If "M" is a Riemannian manifold and "E" a vector bundle over "M", then there is a Laplacian operator acting on the "k"-forms with values in "E". If the eigenvalues on "k"-forms are λ"j" then the zeta function ζ"k" is defined to be
formula_0
for "s" large, and this is extended to all complex "s" by analytic continuation.
The zeta regularized determinant of the Laplacian acting on "k"-forms is
formula_1
which is formally the product of the positive eigenvalues of the Laplacian acting on "k"-forms.
The analytic torsion "T"("M","E") is defined to be
formula_2
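As a quick illustration of the zeta-regularized determinant entering this definition, here is a standard worked example (a sketch, not taken from Ray and Singer's papers): the Laplacian on 0-forms (functions) on a circle of circumference "L", with the trivial flat line bundle. Its nonzero eigenvalues and zeta function are

\lambda_n = \left(\frac{2\pi n}{L}\right)^{2}, \quad n \in \mathbb{Z}\setminus\{0\}, \qquad \zeta_0(s) = 2\sum_{n \ge 1}\left(\frac{2\pi n}{L}\right)^{-2s} = 2\left(\frac{L}{2\pi}\right)^{2s}\zeta_R(2s),

and using the classical values \zeta_R(0) = -\tfrac{1}{2} and \zeta_R'(0) = -\tfrac{1}{2}\ln 2\pi of the Riemann zeta function,

\zeta_0'(0) = 4\ln\!\left(\frac{L}{2\pi}\right)\zeta_R(0) + 4\,\zeta_R'(0) = -2\ln L, \qquad \Delta_0 = e^{-\zeta_0'(0)} = L^{2},

so the zeta-regularized determinant of the circle Laplacian is the square of the circumference.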
Definition of Reidemeister torsion.
Let formula_3 be a finite connected CW-complex with fundamental group formula_4
and universal cover formula_5, and let formula_6 be an orthogonal finite-dimensional formula_7-representation. Suppose that
formula_8
for all n. If we fix a cellular basis for formula_9 and an orthogonal formula_10-basis for formula_6, then formula_11 is a contractible finite based free formula_10-chain complex. Let formula_12 be any chain contraction of D*, i.e. formula_13 for all formula_14. We obtain an isomorphism formula_15 with formula_16, formula_17. We define the Reidemeister torsion
formula_18
where A is the matrix of formula_19 with respect to the given bases. The Reidemeister torsion formula_20 is independent of the choice of the cellular basis for formula_9, the orthogonal basis for formula_6 and the chain contraction formula_21.
Let formula_22 be a compact smooth manifold, and let formula_23 be a unimodular representation. formula_22 has a smooth triangulation. For any choice of a volume formula_24, we get an invariant formula_25. Then we call the positive real number formula_26 the Reidemeister torsion of the manifold formula_22 with respect to formula_27 and formula_28.
A short history of Reidemeister torsion.
Reidemeister torsion was first used by Reidemeister to combinatorially classify 3-dimensional lens spaces, and by Franz to classify higher-dimensional ones. The classification includes examples of homotopy equivalent 3-dimensional manifolds which are not homeomorphic; at the time (1935) the classification was only up to PL homeomorphism, but later E.J. Brody (1960) showed that this was in fact a classification up to homeomorphism.
J. H. C. Whitehead defined the "torsion" of a homotopy equivalence between finite complexes. This is a direct generalization of the Reidemeister, Franz, and de Rham concept; but is a more delicate invariant. Whitehead torsion provides a key tool for the study of combinatorial or differentiable manifolds with nontrivial fundamental group and is closely related to the concept of "simple homotopy type", see
In 1960 Milnor discovered the duality relation of torsion invariants of manifolds and showed that the (twisted) Alexander polynomial of a knot is the Reidemeister torsion of its knot complement in formula_29. For each "q" the Poincaré duality formula_30 induces
formula_31
and then we obtain
formula_32
The representation of the fundamental group of the knot complement plays a central role in these results. It gives the relation between knot theory and torsion invariants.
Cheeger–Müller theorem.
Let formula_33 be an orientable compact Riemannian manifold of dimension n and formula_34 a representation of the fundamental group of formula_22 on a real vector space of dimension N. Then we can define the de Rham complex
formula_35
and the formal adjoint formula_37 of formula_36, which exists due to the flatness of formula_38. As usual, we also obtain the Hodge Laplacian on p-forms
formula_39
Assuming that formula_40, the Laplacian is then a symmetric positive semi-definite elliptic operator with pure point spectrum
formula_41
As before, we can therefore define a zeta function associated with the Laplacian formula_42 on formula_43 by
formula_44
where formula_45 is the projection of formula_46 onto the kernel space formula_47 of the Laplacian formula_42. It was moreover shown that formula_48 extends to a meromorphic function of formula_49 which is holomorphic at formula_50.
As in the case of an orthogonal representation, we define the analytic torsion formula_51 by
formula_52
In 1971 D.B. Ray and I.M. Singer conjectured that formula_53 for any unitary representation formula_27. This Ray–Singer conjecture was eventually proved, independently, by Cheeger (1977, 1979) and Müller (1978). Both approaches focus on the logarithm of torsions and their traces. This is easier for odd-dimensional manifolds than in the even-dimensional case, which involves additional technical difficulties. This Cheeger–Müller theorem (that the two notions of torsion are equivalent), along with the Atiyah–Patodi–Singer theorem, later provided the basis for Chern–Simons perturbation theory.
A proof of the Cheeger-Müller theorem for arbitrary representations was later given by J. M. Bismut and Weiping Zhang. Their proof uses the Witten deformation.
|
[
{
"math_id": 0,
"text": "\\zeta_k(s) = \\sum_{\\lambda_j>0}\\lambda_j^{-s}"
},
{
"math_id": 1,
"text": "\\Delta_k=\\exp(-\\zeta^\\prime_k(0))"
},
{
"math_id": 2,
"text": "T(M,E) = \\exp\\left(\\sum_k (-1)^kk \\zeta^\\prime_k(0)/2\\right) = \\prod_k\\Delta_k^{-(-1)^kk/2}."
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\pi := \\pi_1(X)"
},
{
"math_id": 5,
"text": "{\\tilde X}"
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "\\pi"
},
{
"math_id": 8,
"text": "H^\\pi_n(X;U) := H_n(U \\otimes_{\\mathbf{Z}[\\pi]} C_*({\\tilde X})) = 0"
},
{
"math_id": 9,
"text": "C_*({\\tilde X})"
},
{
"math_id": 10,
"text": "\\mathbf{R}"
},
{
"math_id": 11,
"text": "D_* := U \\otimes_{\\mathbf{Z}[\\pi]} C_*({\\tilde X})"
},
{
"math_id": 12,
"text": "\\gamma_*: D_* \\to D_{*+1}"
},
{
"math_id": 13,
"text": "d_{n+1} \\circ \\gamma_n + \\gamma_{n-1} \\circ d_n = id_{D_n}"
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "(d_* + \\gamma_*)_\\text{odd}: D_\\text{odd} \\to D_\\text{even}"
},
{
"math_id": 16,
"text": "D_\\text{odd} := \\oplus_{n \\, \\text{odd}} \\, D_n"
},
{
"math_id": 17,
"text": "D_\\text{even} := \\oplus_{n \\, \\text{even}} \\, D_n"
},
{
"math_id": 18,
"text": "\\rho(X;U) := |\\det(A)|^{-1} \\in \\mathbf{R}^{>0}"
},
{
"math_id": 19,
"text": "(d_* + \\gamma_*)_\\text{odd}"
},
{
"math_id": 20,
"text": "\\rho(X;U)"
},
{
"math_id": 21,
"text": "\\gamma_*"
},
{
"math_id": 22,
"text": "M"
},
{
"math_id": 23,
"text": "\\rho\\colon\\pi(M)\\rightarrow GL(E)"
},
{
"math_id": 24,
"text": "\\mu\\in\\det H_*(M)"
},
{
"math_id": 25,
"text": "\\tau_M(\\rho:\\mu)\\in\\mathbf{R}^+"
},
{
"math_id": 26,
"text": "\\tau_M(\\rho:\\mu)"
},
{
"math_id": 27,
"text": "\\rho"
},
{
"math_id": 28,
"text": "\\mu"
},
{
"math_id": 29,
"text": "S^3"
},
{
"math_id": 30,
"text": "P_o"
},
{
"math_id": 31,
"text": "P_o\\colon\\operatorname{det}(H_q(M))\\overset{\\sim}{\\,\\longrightarrow\\,}(\\operatorname{det}(H_{n-q}(M)))^{-1}"
},
{
"math_id": 32,
"text": "\\Delta(t)=\\pm t^n\\Delta(1/t)."
},
{
"math_id": 33,
"text": "(M,g)"
},
{
"math_id": 34,
"text": "\\rho\\colon \\pi(M)\\rightarrow\\mathop{GL}(E)"
},
{
"math_id": 35,
"text": "\\Lambda^0\\stackrel{d_0}{\\longrightarrow}\\Lambda^1\\stackrel{d_1}{\\longrightarrow}\\cdots\\stackrel{d_{n-1}}{\\longrightarrow}\\Lambda^n"
},
{
"math_id": 36,
"text": "d_p"
},
{
"math_id": 37,
"text": "\\delta_p"
},
{
"math_id": 38,
"text": "E_q"
},
{
"math_id": 39,
"text": "\\Delta_p=\\delta_{p+1} d_p+d_{p-1}\\delta_{p}."
},
{
"math_id": 40,
"text": "\\partial M=0"
},
{
"math_id": 41,
"text": "0\\le\\lambda_0\\le\\lambda_1\\le\\cdots\\rightarrow\\infty."
},
{
"math_id": 42,
"text": "\\Delta_q"
},
{
"math_id": 43,
"text": "\\Lambda^q(E)"
},
{
"math_id": 44,
"text": "\\zeta_q(s;\\rho)=\\sum_{\\lambda_j >0}\\lambda_j^{-s}=\\frac{1}{\\Gamma(s)}\\int^\\infty_0 t^{s-1}\\text{Tr}(e^{-t\\Delta_q} - P_q)dt,\\ \\ \\ \\text{Re}(s)>\\frac{n}{2}"
},
{
"math_id": 45,
"text": "P"
},
{
"math_id": 46,
"text": "L^2 \\Lambda(E)"
},
{
"math_id": 47,
"text": "\\mathcal{H}^q(E)"
},
{
"math_id": 48,
"text": "\\zeta_q(s;\\rho)"
},
{
"math_id": 49,
"text": "s\\in\\mathbf{C}"
},
{
"math_id": 50,
"text": "s=0"
},
{
"math_id": 51,
"text": "T_M(\\rho;E)"
},
{
"math_id": 52,
"text": "T_M(\\rho;E) = \\exp\\biggl(\\frac{1}{2}\\sum^n_{q=0}(-l)^qq\\frac{d}{ds}\\zeta_q(s;\\rho)\\biggl|_{s=0}\\biggr)."
},
{
"math_id": 53,
"text": "T_M(\\rho;E)=\\tau_M(\\rho;\\mu)"
}
] |
https://en.wikipedia.org/wiki?curid=14388042
|
143888
|
Perfect fifth
|
Musical interval
<score> { <<
\new Staff \with{ \magnifyStaff #4/3 } \relative c' {
 \key c \major \clef treble \override Score.TimeSignature #'stencil = ##f \time 3/4
 <g' d'> <d, a'>
}
\new Staff \with{ \magnifyStaff #4/3 } \relative c' {
 \key c \major \clef bass \override Score.TimeSignature #'stencil = ##f \time 3/4
 <c, g'> <f' c'>
}
>> } </score>
Examples of perfect fifth intervals
In music theory, a perfect fifth is the musical interval corresponding to a pair of pitches with a frequency ratio of 3:2, or very nearly so.
In classical music from Western culture, a fifth is the interval from the first to the last of the first five consecutive notes in a diatonic scale. The perfect fifth (often abbreviated P5) spans seven semitones, while the diminished fifth spans six and the augmented fifth spans eight semitones. For example, the interval from C to G is a perfect fifth, as the note G lies seven semitones above C.
The perfect fifth may be derived from the harmonic series as the interval between the second and third harmonics. In a diatonic scale, the dominant note is a perfect fifth above the tonic note.
The perfect fifth is more consonant, or stable, than any other interval except the unison and the octave. It occurs above the root of all major and minor chords (triads) and their extensions. Until the late 19th century, it was often referred to by one of its Greek names, "diapente". Its inversion is the perfect fourth. The octave of the fifth is the twelfth.
A perfect fifth is at the start of "Twinkle, Twinkle, Little Star"; the pitch of the first "twinkle" is the root note and the pitch of the second "twinkle" is a perfect fifth above it.
Alternative definitions.
The term "perfect" identifies the perfect fifth as belonging to the group of "perfect intervals" (including the unison, perfect fourth and octave), so called because of their simple pitch relationships and their high degree of consonance. When an instrument with only twelve notes to an octave (such as the piano) is tuned using Pythagorean tuning, one of the twelve fifths (the wolf fifth) sounds severely discordant and can hardly be qualified as "perfect", if this term is interpreted as "highly consonant". However, when using correct enharmonic spelling, the wolf fifth in Pythagorean tuning or meantone temperament is actually not a perfect fifth but a diminished sixth (for instance G♯–E♭).
Perfect intervals are also defined as those natural intervals whose inversions are also natural, where natural, as opposed to altered, designates those intervals between a base note and another note in the major diatonic scale starting at that base note (for example, the intervals from C to C, D, E, F, G, A, B, C, with no sharps or flats); this definition leads to the perfect intervals being only the unison, fourth, fifth, and octave, without appealing to degrees of consonance.
The term "perfect" has also been used as a synonym of "just", to distinguish intervals tuned to ratios of small integers from those that are "tempered" or "imperfect" in various other tuning systems, such as equal temperament. The perfect unison has a pitch ratio 1:1, the perfect octave 2:1, the perfect fourth 4:3, and the perfect fifth 3:2.
Within this definition, other intervals may also be called perfect, for example a perfect third (5:4) or a perfect major sixth (5:3).
Other qualities.
In addition to perfect, there are two other kinds, or qualities, of fifths: the diminished fifth, which is one chromatic semitone smaller, and the augmented fifth, which is one chromatic semitone larger. In terms of semitones, these are equivalent to the tritone (or augmented fourth), and the minor sixth, respectively.
Pitch ratio.
The justly tuned pitch ratio of a perfect fifth is 3:2 (also known, in early music theory, as a "hemiola"), meaning that the upper note makes three vibrations in the same amount of time that the lower note makes two. The just perfect fifth can be heard when a violin is tuned: if adjacent strings are adjusted to the exact ratio of 3:2, the result is a smooth and consonant sound, and the violin sounds in tune.
Keyboard instruments such as the piano normally use an equal-tempered version of the perfect fifth, enabling the instrument to play in all keys. In 12-tone equal temperament, the frequencies of the tempered perfect fifth are in the ratio formula_0 or approximately 1.498307. An equally tempered perfect fifth, defined as 700 cents, is about two cents narrower than a just perfect fifth, which is approximately 701.955 cents.
Kepler explored musical tuning in terms of integer ratios, and defined a "lower imperfect fifth" as a 40:27 pitch ratio, and a "greater imperfect fifth" as a 243:160 pitch ratio. His lower imperfect fifth ratio of 1.48148 (680 cents) is much more "imperfect" than the equal temperament tuning (700 cents) of 1.4983 (relative to the ideal 1.50). Hermann von Helmholtz uses the ratio 301:200 (708 cents) as an example of an imperfect fifth; he contrasts the ratio of a fifth in equal temperament (700 cents) with a "perfect fifth" (3:2), and discusses the audibility of the beats that result from such an "imperfect" tuning.
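The figures quoted above can be checked with a few lines of code (a minimal sketch; the cent values follow from the standard definition of the cent as 1/1200 of an octave).
<syntaxhighlight lang="python">
import math

just_fifth = 3 / 2
et_fifth = 2 ** (7 / 12)        # seven equal-tempered semitones

def cents(ratio):
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200 * math.log2(ratio)

print(f"equal-tempered fifth ratio: {et_fifth:.6f}")            # ~1.498307
print(f"just fifth:      {cents(just_fifth):.3f} cents")        # ~701.955
print(f"tempered fifth:  {cents(et_fifth):.3f} cents")          # exactly 700
print(f"difference:      {cents(just_fifth) - cents(et_fifth):.3f} cents")
</syntaxhighlight>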
Use in harmony.
W. E. Heathcote describes the octave as representing the prime unity within the triad, a higher unity produced from the successive process: "first Octave, then Fifth, then Third, which is the union of the two former". Hermann von Helmholtz argues that some intervals, namely the perfect fourth, fifth, and octave, "are found in all the musical scales known", though the editor of the English translation of his book notes the fourth and fifth may be interchangeable or indeterminate.
The perfect fifth is a basic element in the construction of major and minor triads, and their extensions. Because these chords occur frequently in much music, the perfect fifth occurs just as often. However, since many instruments contain a perfect fifth as an overtone, it is not unusual to omit the fifth of a chord (especially in root position).
The perfect fifth is also present in seventh chords as well as "tall tertian" harmonies (harmonies consisting of more than four tones stacked in thirds above the root). The presence of a perfect fifth can in fact soften the dissonant intervals of these chords, as in the major seventh chord in which the dissonance of a major seventh is softened by the presence of two perfect fifths.
Chords can also be built by stacking fifths, yielding quintal harmonies. Such harmonies are present in more modern music, such as the music of Paul Hindemith. This harmony also appears in Stravinsky's "The Rite of Spring" in the "Dance of the Adolescents" where four C trumpets, a piccolo trumpet, and one horn play a five-tone B-flat quintal chord.
Bare fifth, open fifth, or empty fifth.
A bare fifth, open fifth or empty fifth is a chord containing only a perfect fifth with no third. The closing chords of Pérotin's "Viderunt omnes" and "Sederunt Principes", Guillaume de Machaut's "Messe de Nostre Dame", the Kyrie in Mozart's "Requiem", and the first movement of Bruckner's "Ninth Symphony" are all examples of pieces ending on an open fifth. These chords are common in Medieval music, sacred harp singing, and throughout rock music. In hard rock, metal, and punk music, overdriven or distorted electric guitar can make thirds sound muddy while the bare fifths remain crisp. In addition, fast chord-based passages are made easier to play by combining the four most common guitar hand shapes into one. Rock musicians refer to them as "power chords". Power chords often include octave doubling (i.e., their bass note is doubled one octave higher, e.g. F3–C4–F4).
<templatestyles src="Stack/styles.css"/>
An "empty fifth" is sometimes used in traditional music, e.g., in Asian music and in some Andean music genres of pre-Columbian origin, such as "k'antu" and "sikuri". The same melody is being led by parallel fifths and octaves during all the piece.
Western composers may use the interval to give a passage an exotic flavor. Empty fifths are also sometimes used to give a cadence an ambiguous quality, as the bare fifth does not indicate a major or minor tonality.
Use in tuning and tonal systems.
The just perfect fifth, together with the octave, forms the basis of Pythagorean tuning. A slightly narrowed perfect fifth is likewise the basis for meantone tuning.
The circle of fifths is a model of pitch space for the chromatic scale (chromatic circle), which considers nearness as the number of perfect fifths required to get from one note to another, rather than chromatic adjacency.
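Because the perfect fifth spans seven semitones and 7 is coprime to 12, stacking fifths visits every pitch class exactly once before returning to the starting note. The short sketch below generates the circle of fifths this way; the sharp-only note spellings are just one conventional choice.
<syntaxhighlight lang="python">
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Repeatedly step up a perfect fifth (7 semitones), reducing modulo the octave (12).
circle = [NOTE_NAMES[(7 * i) % 12] for i in range(12)]
print(" -> ".join(circle))
# C -> G -> D -> A -> E -> B -> F# -> C# -> G# -> D# -> A# -> F
</syntaxhighlight>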
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(\\sqrt [12]{2})^7"
}
] |
https://en.wikipedia.org/wiki?curid=143888
|
14390885
|
Effective molarity
|
Ratio used in chemistry
In chemistry, the effective molarity (denoted "EM") is defined as the ratio between the first-order rate constant of an intramolecular reaction and the second-order rate constant of the corresponding intermolecular reaction ("kinetic effective molarity") or the ratio between the equilibrium constant of an intramolecular reaction and the equilibrium constant of the corresponding intermolecular reaction ("thermodynamic effective molarity").
formula_0
formula_1
EM has the dimension of concentration. High EM values always indicate greater ease of intramolecular processes over the corresponding intermolecular ones. Effective molarities can be used to get a deeper understanding of the effects of intramolecularity on reaction courses.
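As a purely illustrative calculation (the rate constants below are hypothetical and not taken from any particular reaction), an intramolecular cyclization with k_{intramolecular} = 10 s^{-1} whose intermolecular counterpart has k_{intermolecular} = 0.1 M^{-1} s^{-1} gives

EM_{kinetic} = \frac{10\ \mathrm{s^{-1}}}{0.1\ \mathrm{M^{-1}\,s^{-1}}} = 100\ \mathrm{M},

that is, the external reagent would have to be present at the physically unattainable concentration of 100 M for the intermolecular reaction to proceed as fast as the intramolecular one.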
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "EM_{kinetic} = {k_{intramolecular} \\over k_{intermolecular}}"
},
{
"math_id": 1,
"text": "EM_{thermo} = {K_{intramolecular} \\over K_{intermolecular}}"
}
] |
https://en.wikipedia.org/wiki?curid=14390885
|
14391787
|
Bayes linear statistics
|
Bayes linear statistics is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. Bayes linear analysis attempts to solve this problem by developing theory and practice for using partially specified probability models. Bayes linear in its current form has been primarily developed by Michael Goldstein. Mathematically and philosophically it extends Bruno de Finetti's Operational Subjective approach to probability and statistics.
Motivation.
Consider first a traditional Bayesian analysis where you expect to shortly know "D" and you would like to know more about some other observable "B". In the traditional Bayesian approach it is required that every possible outcome is enumerated, i.e. every possible outcome is an element of the cross product of the partitions of "B" and "D". If represented on a computer, where "B" requires "n" bits and "D" requires "m" bits, then the number of states required is formula_0. The first step in such an analysis is to determine a person's subjective probabilities, e.g. by asking about their betting behaviour for each of these outcomes. When we learn "D", conditional probabilities for "B" are determined by the application of Bayes' rule.
Practitioners of subjective Bayesian statistics routinely analyse datasets where the size of this set is large enough that subjective probabilities cannot be meaningfully determined for every element of "D" × "B". This is normally accomplished by assuming exchangeability, using parameterized models with prior distributions over the parameters, and appealing to de Finetti's theorem to justify that this produces valid operational subjective probabilities over "D" × "B". The difficulty with such an approach is that the validity of the statistical analysis requires that the subjective probabilities are a good representation of an individual's beliefs; however, this method results in a very precise specification over "D" × "B", and it is often difficult to articulate what it would mean to adopt these belief specifications.
In contrast to the traditional Bayesian paradigm, Bayes linear statistics, following de Finetti, uses prevision, or subjective expectation, as a primitive; probability is then defined as the expectation of an indicator variable. Instead of specifying a subjective probability for every element in the partition "D" × "B", the analyst specifies subjective expectations for just a few quantities that they are interested in or feel knowledgeable about. Then, instead of conditioning, an adjusted expectation is computed by a rule that generalizes Bayes' rule and is based upon expectation.
The use of the word linear in the title refers to de Finetti's arguments that probability theory is a linear theory (de Finetti argued against the more common measure theory approach).
Example.
In Bayes linear statistics, the probability model is only partially specified, and it is not possible to calculate conditional probability by Bayes' rule. Instead, Bayes linear suggests the calculation of an adjusted expectation.
To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements "D" and some future value which you would like to know "B". Here "D" refers to a vector containing data and "B" to a vector containing quantities you would like to predict. For the following example "B" and "D" are taken to be two-dimensional vectors i.e.
formula_1
In order to specify a Bayes linear model it is necessary to supply expectations for the vectors "B" and "D", and to also specify the correlation between each component of "B" and each component of "D".
For example, the expectations are specified as:
formula_2
and the covariance matrix is specified as:
formula_3
The repetition in this matrix has some interesting implications, to be discussed shortly.
An adjusted expectation is a linear estimator of the form
formula_4
where formula_5 and formula_6 are chosen to minimise the prior expected loss for the observations, i.e. formula_7 in this case. That is, for formula_8
formula_9
where
formula_10
are chosen in order to minimise the prior expected loss in estimating formula_8
In general the adjusted expectation is calculated with
formula_11
Setting formula_12 to minimise
formula_13
From a proof provided in (Goldstein and Wooff 2007) it can be shown that:
formula_14
For the case where Var("D") is not invertible the Moore–Penrose pseudoinverse should be used instead.
Furthermore, the adjusted variance of the variable X after observing the data D is given by
formula_15
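The adjusted expectation and adjusted variance for the two-dimensional example above can be computed directly from these formulas. The sketch below uses illustrative values for "u", "γ" and "v" and a hypothetical observation of "D"; none of these numbers are specified in the text.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative (assumed) correlation parameters and the stated prior expectations
u, gamma, v = 0.5, 0.3, 0.4
E_D = np.array([5.0, 3.0])                 # E(X1), E(X2)
E_B = np.array([5.0, 3.0])                 # E(Y1), E(Y2)

var_D = np.array([[1.0, u], [u, 1.0]])     # Var(D)
var_B = np.array([[1.0, v], [v, 1.0]])     # Var(B)
cov_BD = np.full((2, 2), gamma)            # Cov(B, D): every entry equals gamma

D_obs = np.array([6.0, 3.5])               # hypothetical observed values of (X1, X2)

# Adjusted expectation: E_D(B) = E(B) + Cov(B,D) Var(D)^{-1} (D - E(D))
adjusted_expectation = E_B + cov_BD @ np.linalg.solve(var_D, D_obs - E_D)

# Adjusted variance: Var_D(B) = Var(B) - Cov(B,D) Var(D)^{-1} Cov(D,B)
adjusted_variance = var_B - cov_BD @ np.linalg.solve(var_D, cov_BD.T)

print(adjusted_expectation)
print(adjusted_variance)
</syntaxhighlight>
Because every entry of Cov("B", "D") equals "γ", both components of the adjusted expectation shift by the same amount; this is one consequence of the repetition in the covariance matrix noted above.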
- "Foresight: its Logical Laws, Its Subjective Sources," (translation of the 1937 article in French) in H. E. Kyburg and H. E. Smokler (eds), "Studies in Subjective Probability," New York: Wiley, 1964.
|
[
{
"math_id": 0,
"text": "2^{n+m}"
},
{
"math_id": 1,
"text": "B = (Y_1,Y_2),~ D = (X_1,X_2)."
},
{
"math_id": 2,
"text": "E(Y_1)=5,~E(Y_2)=3,~E(X_1)=5,~E(X_2)=3"
},
{
"math_id": 3,
"text": "\n\\begin{array}{c|cccc}\n & X_1 & X_2 & Y_1 & Y_2 \\\\ \\hline\nX_1 & 1 & u & \\gamma & \\gamma \\\\\nX_2 & u & 1 & \\gamma & \\gamma \\\\\nY_1 & \\gamma & \\gamma & 1 & v \\\\\nY_2 & \\gamma & \\gamma & v & 1 \\\\\n\\end{array}.\n"
},
{
"math_id": 4,
"text": "c_0 + c_1X_1 + c_2X_2"
},
{
"math_id": 5,
"text": "c_0, c_1"
},
{
"math_id": 6,
"text": "c_2"
},
{
"math_id": 7,
"text": "Y_1, Y_2"
},
{
"math_id": 8,
"text": "Y_1"
},
{
"math_id": 9,
"text": "E([Y_1 - c_0 - c_1X_1 - c_2X_2]^2)\\,"
},
{
"math_id": 10,
"text": "c_0, c_1, c_2\\,"
},
{
"math_id": 11,
"text": "E_D(X) = \\sum^k_{i=0} h_iD_i ."
},
{
"math_id": 12,
"text": "h_0, \\dots, h_k"
},
{
"math_id": 13,
"text": "E\\left(\\left[X-\\sum^k_{i=0}h_iD_i\\right]^2\\right). "
},
{
"math_id": 14,
"text": "E_D(X) = E(X) + \\mathrm{Cov}(X,D)\\mathrm{Var}(D)^{-1}(D-E(D)) . \\,"
},
{
"math_id": 15,
"text": "\\mathrm{Var}_D(X) = \\mathrm{Var}(X) - \\mathrm{Cov}(X,D)\\mathrm{Var}(D)^{-1}\\mathrm{Cov}(D,X). "
}
] |
https://en.wikipedia.org/wiki?curid=14391787
|
14391804
|
Wilberforce pendulum
|
Coupled mechanical oscillator
A Wilberforce pendulum, invented by British physicist Lionel Robert Wilberforce around 1896, consists of a mass suspended by a long helical spring and free to turn on its vertical axis, twisting the spring. It is an example of a coupled mechanical oscillator, often used as a demonstration in physics education. The mass can both bob up and down on the spring, and rotate back and forth about its vertical axis with torsional vibrations. When correctly adjusted and set in motion, it exhibits a curious motion in which periods of purely rotational oscillation gradually alternate with periods of purely up and down oscillation. The energy stored in the device shifts slowly back and forth between the translational 'up and down' oscillation mode and the torsional 'clockwise and counterclockwise' oscillation mode, until the motion eventually dies away.
Despite the name, in normal operation it does not swing back and forth as ordinary pendulums do. The mass usually has opposing pairs of radial 'arms' sticking out horizontally, threaded with small weights that can be screwed in or out to adjust the moment of inertia to 'tune' the torsional vibration period.
Explanation.
The device's intriguing behavior is caused by a slight coupling between the two motions or degrees of freedom, due to the geometry of the spring. When the weight is moving up and down, each downward excursion of the spring causes it to unwind slightly, giving the weight a slight twist. When the weight moves up, it causes the spring to wind slightly tighter, giving the weight a slight twist in the other direction. So when the weight is moving up and down, each oscillation gives a slight alternating rotational torque to the weight. In other words, during each oscillation some of the energy in the translational mode leaks into the rotational mode. Slowly the up and down movement gets less, and the rotational movement gets greater, until the weight is just rotating and not bobbing.
Similarly, when the weight is rotating back and forth, each twist of the weight in the direction that unwinds the spring also reduces the spring tension slightly, causing the weight to sag a little lower. Conversely, each twist of the weight in the direction of winding the spring tighter causes the tension to increase, pulling the weight up slightly. So each oscillation of the weight back and forth causes it to bob up and down more, until all the energy is transferred back from the rotational mode into the translational mode and it is just bobbing up and down, not rotating.
A Wilberforce pendulum can be designed by approximately equating the frequency of harmonic oscillations of the spring-mass oscillator "f"T, which is dependent on the spring constant "k" of the spring and the mass "m" of the system, and the frequency of the rotating oscillator "f"R, which is dependent on the moment of inertia "I" and the torsional coefficient "κ" of the system.
formula_0
The pendulum is usually adjusted by moving the moment of inertia adjustment weights towards or away from the centre of the mass by equal amounts on each side in order to modify "f"R, until the rotational frequency is close to the translational frequency, so the alternation period will be slow enough to allow the change between the two modes to be clearly seen.
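The energy exchange described above can be reproduced numerically. The sketch below assumes the standard linearly coupled model of the device (not written out in this article), in which a small coupling constant ε links the vertical and torsional coordinates; the parameter values are illustrative and chosen so that the two natural frequencies satisfy the tuning condition formula_0.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

# Assumed coupled equations:
#   m z''     = -k z          - (eps/2) theta
#   I theta'' = -kappa theta  - (eps/2) z
m, k = 0.5, 5.0              # kg, N/m         -> sqrt(k/m)     ~ 3.16 rad/s
I, kappa = 1.0e-4, 1.0e-3    # kg m^2, N m/rad -> sqrt(kappa/I) ~ 3.16 rad/s
eps = 1.0e-2                 # weak coupling (illustrative)

def rhs(t, state):
    z, zdot, theta, thetadot = state
    return [zdot, (-k * z - 0.5 * eps * theta) / m,
            thetadot, (-kappa * theta - 0.5 * eps * z) / I]

# Start with pure vertical bobbing (2 cm) and no rotation
sol = solve_ivp(rhs, (0.0, 120.0), [0.02, 0.0, 0.0, 0.0], max_step=0.01)
z, theta = sol.y[0], sol.y[2]

# The envelope of z collapses and revives as energy moves into the torsional
# mode and back; theta does the opposite, half an alternation period out of step.
print("max |z| =", np.max(np.abs(z)), "m   max |theta| =", np.max(np.abs(theta)), "rad")
</syntaxhighlight>
Plotting "z"("t") and "theta"("t") from this solution shows the slow alternation between the two modes; with the frequencies tuned equal, the alternation period is set by the strength of the coupling.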
Alternation or 'beat' frequency.
The frequency at which the two modes alternate is equal to the difference between the oscillation frequencies of the modes. The closer in frequency the two motions are, the slower will be the alternation between them. This behavior, common to all coupled oscillators, is analogous to the phenomenon of beats in musical instruments, in which two tones combine to produce a 'beat' tone at the difference between their frequencies. For example, if the pendulum bobs up and down at a rate of "f"T = 4 Hz, and rotates back and forth about its axis at a rate of "f"R = 4.1 Hz, the alternation rate "f"alt will be:
formula_1
formula_2
So the motion will change from rotational to translational in 5 seconds and then back to rotational in the next 5 seconds. If the two frequencies are exactly equal, the beat frequency will be zero, and resonance will occur.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f_T=\\sqrt\\frac{k}{m} \\approx \\sqrt\\frac{\\kappa}{I}=f_R"
},
{
"math_id": 1,
"text": "f_{\\rm alt} = f_R - f_T = 0.1\\; \\mathrm{Hz}"
},
{
"math_id": 2,
"text": "T_{\\rm alt} = 1 / f_{\\rm alt} = 10\\; \\mathrm{s}"
}
] |
https://en.wikipedia.org/wiki?curid=14391804
|
14394227
|
Generalized Ozaki cost function
|
In economics the generalized-Ozaki (GO) cost function is a general description of the cost of production proposed by Shinichiro Nakamura.
The GO cost function is notable for explicitly considering nonhomothetic technology, where the proportions of inputs can vary as the output changes. This stands in contrast to the standard production model, which assumes homothetic technology.
The GO function.
For a given output formula_0, at time formula_1 and a vector of formula_2 input prices formula_3, the generalized-Ozaki (GO) cost function formula_4 is expressed as
Here, formula_5 and formula_6 for formula_7. By applying Shephard's lemma, we derive the demand function for input formula_8, formula_9:
The GO cost function is flexible in the price space, and treats scale effects and technical change in a highly general manner.
The concavity condition, which ensures that a cost function is consistent with cost minimization for a given set of formula_10, requires that its Hessian (the matrix of second partial derivatives with respect to formula_3 and formula_11) be negative semidefinite.
Several notable special cases can be identified:
When (HT) holds, the GO function reduces to the Generalized Leontief function of Diewert, a well-known flexible functional form for cost and production functions. When (FL) holds, it reduces to a non-linear version of Leontief's model, which explains the cross-sectional variation of formula_9 when variations in input prices are negligible:
Background.
Cost and production functions.
In economics, production technology is typically represented by the production function formula_16, which, in the case of a single output formula_0 and formula_2 inputs, is written as formula_17. When considering cost minimization for a given set of prices formula_10 and formula_0, the corresponding cost function formula_18 can be expressed as:
The duality theorems of cost and production functions state that once a well-behaved cost function is established, one can derive the corresponding production function, and vice versa.
For a given cost function formula_18, the corresponding production function formula_16 can be obtained as (a more rigorous derivation involves using a distance function instead of a production function) :
In essence, under general conditions, a specific technology can be equally effectively represented by both cost and production functions.
One advantage of using a cost function rather than a production function is that the demand functions for inputs can be easily derived from the former using Shephard's lemma, whereas this process can become cumbersome with the production function.
Homothetic- and Nonhomothetic Technology.
Commonly used forms of production functions, such as the Cobb-Douglas and Constant Elasticity of Substitution (CES) functions, exhibit homotheticity.
This property means that the production function formula_16 can be represented as a positive monotone transformation of a linear-homogeneous function formula_19:
formula_20
where formula_21 for any formula_22.
The Cobb-Douglas function is a special case of the CES function for which the elasticity of substitution between the inputs, formula_23, is one.
For a homothetic technology, the cost function can be represented as
formula_24
where formula_25 is a monotone increasing function, and formula_26 is termed a unit cost function. From Shephard's lemma, we obtain the following expression for the ratio of inputs formula_8 and formula_27:
formula_28,
which implies that for a homothetic technology, the ratio of inputs depends solely on prices and not on the scale of output.
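This scale-independence is easy to verify numerically. The sketch below assumes a homothetic cost function C(p, y) = c(p)d(y) whose unit cost c(p) has the Generalized Leontief form discussed later in this article; the coefficient matrix, the prices, and the scale term d(y) are purely illustrative.
<syntaxhighlight lang="python">
import numpy as np

# Assumed symmetric Generalized Leontief coefficients: c(p) = sum_ij b_ij * sqrt(p_i * p_j)
b = np.array([[0.4, 0.3],
              [0.3, 0.2]])

def unit_cost_gradient(p):
    """dc/dp_i = sum_j b_ij * sqrt(p_j / p_i), valid for symmetric b."""
    return np.array([sum(b[i, j] * np.sqrt(p[j] / p[i]) for j in range(len(p)))
                     for i in range(len(p))])

def d(y):
    """Illustrative monotone scale term."""
    return y ** 1.3

p = np.array([2.0, 5.0])
for y in (1.0, 10.0, 100.0):
    x = d(y) * unit_cost_gradient(p)       # Shephard's lemma: x_i = dC(p, y)/dp_i
    print(f"y = {y:6.1f}   x = {x}   x1/x2 = {x[0] / x[1]:.4f}")
# The ratio x1/x2 is identical for every y: under homotheticity it depends on prices only.
</syntaxhighlight>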
However, empirical studies on the cross-section of establishments show that the FL model (3) effectively explains the data, particularly for heavy industries such as steel mills, paper mills, basic chemical sectors, and power stations, indicating that homotheticity may not be applicable.
Furthermore, in the area of trade, homothetic and monolithic functional models do not accurately predict results. One example is the gravity equation for trade, which relates how much two countries will trade with each other to their GDPs and the distance between them. This led researchers to explore non-homothetic models of production, to fit with cross-sectional analysis of producer behavior, for example, when producers would begin to minimize costs by switching inputs or investing in increased production.
Flexible Functional Forms.
CES functions (note that Cobb-Douglas is a special case of CES) typically involve only two inputs, such as capital and labor.
While they can be extended to include more than two inputs, assuming the same degree of substitutability for all inputs may seem overly restrictive (refer to CES for further details on this topic, including the potential for accommodating diverse elasticities of substitution among inputs, although this capability is somewhat constrained).
To address this limitation, flexible functional forms have been developed.
These general functional forms are called flexible functional forms (FFFs) because they do not impose any restrictions a priori on the degree of substitutability among inputs. These FFFs can provide a second-order approximation to any twice-differentiable function that meets the necessary regularity conditions, including basic technological conditions and those consistent with cost minimization.
Widely used examples of FFFs are the transcendental logarithmic (translog) function and the Generalized Leontief (GL) function.
The translog function extends the Cobb-Douglas function to the second order, while the GL function performs a similar extension to the Leontief production function.
Limitations.
A drawback of the GL function is its inability to be globally concave without sacrificing flexibility in the price space.
This limitation also applies to the GO function, as it is a non-homothetic extension of the GL.
In a subsequent study, Nakamura attempted to address this issue by employing the Generalized McFadden function.
For further advancements in this area, refer to Ryan and Wales.
Moreover, both the GO function and the underlying GL function presume immediate adjustments of inputs in response to changes in
formula_10 and formula_0.
This oversimplifies the reality where technological changes entail significant investments in plant and equipment, thus requiring time, often occurring over years rather than instantaneously.
One way to address this issue will be to resort to a variable cost function that explicitly takes into account differences in the speed of adjustments among inputs.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
See also.
Production function
List of production functions
Constant elasticity of substitution
Shephard's lemma
Returns to scale
|
[
{
"math_id": 0,
"text": "y"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "p_i"
},
{
"math_id": 4,
"text": "C()"
},
{
"math_id": 5,
"text": "b_{ij}=b_{ji}"
},
{
"math_id": 6,
"text": "\\sum_i b_{ij}=1"
},
{
"math_id": 7,
"text": "i,j=1,..,m"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "x_i"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "p_j"
},
{
"math_id": 12,
"text": "b_{yi}=b_y"
},
{
"math_id": 13,
"text": "b_y = 0"
},
{
"math_id": 14,
"text": "b_{yi}=0"
},
{
"math_id": 15,
"text": "b_{ti}=b_t"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "y=f(x)"
},
{
"math_id": 18,
"text": "C(p,y)"
},
{
"math_id": 19,
"text": "h"
},
{
"math_id": 20,
"text": "y = f(x) = \\phi(h(x))"
},
{
"math_id": 21,
"text": "h(\\lambda x) = \\lambda h(x)"
},
{
"math_id": 22,
"text": "\\lambda > 0"
},
{
"math_id": 23,
"text": "\\sigma"
},
{
"math_id": 24,
"text": "C(p,y) = c(p)d(y)"
},
{
"math_id": 25,
"text": "d"
},
{
"math_id": 26,
"text": "c"
},
{
"math_id": 27,
"text": "j\n"
},
{
"math_id": 28,
"text": "\\frac{x_i}{x_j}=\\frac{\\partial c(p)/\\partial p_i}{\\partial c(p)/\\partial p_j}"
}
] |
https://en.wikipedia.org/wiki?curid=14394227
|
14400
|
History of science
|
The history of science covers the development of science from ancient times to the present. It encompasses all three major branches of science: natural, social, and formal. Protoscience, early sciences, and natural philosophies such as alchemy and astrology during the Bronze Age, Iron Age, classical antiquity, and the Middle Ages declined during the early modern period after the establishment of formal disciplines of science in the Age of Enlightenment.
Science's earliest roots can be traced to Ancient Egypt and Mesopotamia around 3000 to 1200 BCE. These civilizations' contributions to mathematics, astronomy, and medicine influenced later Greek natural philosophy of classical antiquity, wherein formal attempts were made to provide explanations of events in the physical world based on natural causes. After the fall of the Western Roman Empire, knowledge of Greek conceptions of the world deteriorated in Latin-speaking Western Europe during the early centuries (400 to 1000 CE) of the Middle Ages, but continued to thrive in the Greek-speaking Byzantine Empire. Aided by translations of Greek texts, the Hellenistic worldview was preserved and absorbed into the Arabic-speaking Muslim world during the Islamic Golden Age. The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th century revived the learning of natural philosophy in the West. Traditions of early science were also developed in ancient India and separately in ancient China, the Chinese model having influenced Vietnam, Korea and Japan before Western exploration. Among the Pre-Columbian peoples of Mesoamerica, the Zapotec civilization established their first known traditions of astronomy and mathematics for producing calendars, followed by other civilizations such as the Maya.
Natural philosophy was transformed during the Scientific Revolution in 16th- to 17th-century Europe, as new ideas and discoveries departed from previous Greek conceptions and traditions. The New Science that emerged was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. More "revolutions" in subsequent centuries soon followed. The chemical revolution of the 18th century, for instance, introduced new quantitative methods and measurements for chemistry. In the 19th century, new perspectives regarding the conservation of energy, age of Earth, and evolution came into focus. And in the 20th century, new discoveries in genetics and physics laid the foundations for new subdisciplines such as molecular biology and particle physics. Moreover, industrial and military concerns as well as the increasing complexity of new research endeavors ushered in the era of "big science," particularly after World War II.
Approaches to history of science.
The nature of the history of science is a topic of debate (as is, by implication, the definition of science itself). The history of science is often seen as a linear story of progress
but historians have come to see the story as more complex.
Alfred Edward Taylor has characterised lean periods in the advance of scientific discovery as "periodical bankruptcies of science".
Science is a human activity, and scientific contributions have come from people from a wide range of different backgrounds and cultures. Historians of science increasingly see their field as part of a global history of exchange, conflict and collaboration.
The relationship between science and religion has been variously characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Events in Europe such as the Galileo affair of the early-17th century – associated with the scientific revolution and the Age of Enlightenment – led scholars such as John William Draper to postulate (c. 1874) a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. The "conflict thesis" has since lost favor among the majority of contemporary scientists and historians of science. However, some contemporary philosophers and scientists, such as Richard Dawkins, still subscribe to this thesis.
Historians have emphasized that trust is necessary for agreement on claims about nature. In this light, the 1660 establishment of the Royal Society and its code of experiment – trustworthy because witnessed by its members – has become an important chapter in the historiography of science. Many people in modern history (typically women and persons of color) were excluded from elite scientific communities and characterized by the science establishment as inferior. Historians in the 1980s and 1990s described the structural barriers to participation and began to recover the contributions of overlooked individuals. Historians have also investigated the mundane practices of science such as fieldwork and specimen collection, correspondence, drawing, record-keeping, and the use of laboratory and field equipment.
Prehistoric times.
In prehistoric times, knowledge and technique were passed from generation to generation in an oral tradition. For instance, the domestication of maize for agriculture has been dated to about 9,000 years ago in southern Mexico, before the development of writing systems. Similarly, archaeological evidence indicates the development of astronomical knowledge in preliterate societies.
The oral tradition of preliterate societies had several features, the first of which was its fluidity. New information was constantly absorbed and adjusted to new circumstances or community needs. There were no archives or reports. This fluidity was closely related to the practical need to explain and justify a present state of affairs. Another feature was the tendency to describe the universe as just sky and earth, with a potential underworld. They were also prone to identify causes with beginnings, thereby providing a historical origin with an explanation. There was also a reliance on a "medicine man" or "wise woman" for healing, knowledge of divine or demonic causes of diseases, and in more extreme cases, for rituals such as exorcism, divination, songs, and incantations. Finally, there was an inclination to unquestioningly accept explanations that might be deemed implausible in more modern times while at the same time not being aware that such credulous behaviors could have posed problems.
The development of writing enabled humans to store and communicate knowledge across generations with much greater accuracy. Its invention was a prerequisite for the development of philosophy and later science in ancient times. Moreover, the extent to which philosophy and science would flourish in ancient times depended on the efficiency of a writing system (e.g., use of alphabets).
Earliest roots in the Ancient Near East.
The earliest roots of science can be traced to the Ancient Near East, in particular Ancient Egypt and Mesopotamia in around 3000 to 1200 BCE.
Ancient Egypt.
Number system and geometry.
Starting in around 3000 BCE, the ancient Egyptians developed a numbering system that was decimal in character and had oriented their knowledge of geometry to solving practical problems such as those of surveyors and builders. Their development of geometry was itself a necessary development of surveying to preserve the layout and ownership of farmland, which was flooded annually by the Nile River. The 3-4-5 right triangle and other rules of geometry were used to build rectilinear structures, and the post and lintel architecture of Egypt.
Disease and healing.
Egypt was also a center of alchemy research for much of the Mediterranean. Based on the medical papyri written between 2500 and 1200 BCE, the ancient Egyptians believed that disease was mainly caused by the invasion of bodies by evil forces or spirits. Thus, in addition to using medicines, their healing therapies included prayer, incantation, and ritual. The Ebers Papyrus, written in around 1600 BCE, contains medical recipes for treating diseases related to the eyes, mouth, skin, internal organs, and extremities, as well as abscesses, wounds, burns, ulcers, swollen glands, tumors, headaches, and even bad breath. The Edwin Smith papyrus, written at about the same time, contains a surgical manual for treating wounds, fractures, and dislocations. The Egyptians believed that the effectiveness of their medicines depended on the preparation and administration under appropriate rituals. Medical historians believe that ancient Egyptian pharmacology, for example, was largely ineffective. Both the Ebers and Edwin Smith papyri applied the following components to the treatment of disease: examination, diagnosis, treatment, and prognosis, which display strong parallels to the basic empirical method of science and, according to G.E.R. Lloyd, played a significant role in the development of this methodology.
Calendar.
The ancient Egyptians even developed an official calendar that contained twelve months, thirty days each, and five days at the end of the year. Unlike the Babylonian calendar or the ones used in Greek city-states at the time, the official Egyptian calendar was much simpler as it was fixed and did not take lunar and solar cycles into consideration.
Mesopotamia.
The ancient Mesopotamians had extensive knowledge about the chemical properties of clay, sand, metal ore, bitumen, stone, and other natural materials, and applied this knowledge to practical use in manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. Metallurgy required knowledge about the properties of metals. Nonetheless, the Mesopotamians seem to have had little interest in gathering information about the natural world for the mere sake of gathering information and were far more interested in studying the manner in which the gods had ordered the universe. Biology of non-human organisms was generally only written about in the context of mainstream academic disciplines. Animal physiology was studied extensively for the purpose of divination; the anatomy of the liver, which was seen as an important organ in haruspicy, was studied in particularly intensive detail. Animal behavior was also studied for divinatory purposes. Most information about the training and domestication of animals was probably transmitted orally without being written down, but one text dealing with the training of horses has survived.
Mesopotamian medicine.
The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors prescribed magical formulas to be recited as well as medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BCE – c. 2004 BCE). The most extensive Babylonian medical text, however, is the "Diagnostic Handbook" written by the "ummânū", or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an "āšipu". The profession was generally passed down from father to son and was held in extremely high regard. Of less frequent recourse was another kind of healer known as an "asu", who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease.
Astronomy and celestial divination.
In Babylonian astronomy, records of the motions of the stars, planets, and the moon are left on thousands of clay tablets created by scribes. Even today, astronomical periods identified by Mesopotamian proto-scientists are still widely used in Western calendars such as the solar year and the lunar month. Using this data, they developed mathematical methods to compute the changing length of daylight in the course of the year, predict the appearances and disappearances of the Moon and planets, and eclipses of the Sun and Moon. Only a few astronomers' names are known, such as that of Kidinnu, a Chaldean astronomer and mathematician. Kiddinu's value for the solar year is in use for today's calendars. Babylonian astronomy was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian A. Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in Islam, and in the West—if not indeed all subsequent endeavour in the exact sciences—depend upon Babylonian astronomy in decisive and fundamental ways."
To the Babylonians and other Near Eastern cultures, messages from the gods or omens were concealed in all natural phenomena that could be deciphered and interpreted by those who are adept. Hence, it was believed that the gods could speak through all terrestrial objects (e.g., animal entrails, dreams, malformed births, or even the color of a dog urinating on a person) and celestial phenomena. Moreover, Babylonian astrology was inseparable from Babylonian astronomy.
Mathematics.
The Mesopotamian cuneiform tablet Plimpton 322, dating to the eighteenth century BCE, records a number of Pythagorean triplets (3,4,5), (5,12,13), ..., hinting that the ancient Mesopotamians might have been aware of the Pythagorean theorem over a millennium before Pythagoras.
Ancient and medieval South Asia and East Asia.
Mathematical achievements from Mesopotamia had some influence on the development of mathematics in India, and there were confirmed transmissions of mathematical ideas between India and China, which were bidirectional. Nevertheless, the mathematical and scientific achievements in India and particularly in China occurred largely independently from those of Europe and the confirmed early influences that these two civilizations had on the development of science in Europe in the pre-modern era were indirect, with Mesopotamia and later the Islamic World acting as intermediaries. The arrival of modern science, which grew out of the Scientific Revolution, in India and China and the greater Asian region in general can be traced to the scientific activities of Jesuit missionaries who were interested in studying the region's flora and fauna during the 16th to 17th century.
India.
Mathematics.
The earliest traces of mathematical knowledge in the Indian subcontinent appear with the Indus Valley Civilisation (c. 4th millennium BCE ~ c. 3rd millennium BCE). The people of this civilization made bricks whose dimensions were in the proportion 4:2:1, which is favorable for the stability of a brick structure. They also tried to standardize measurement of length to a high degree of accuracy. They designed a ruler—the "Mohenjo-daro ruler"—whose unit of length (approximately 1.32 inches or 3.4 centimeters) was divided into ten equal parts. Bricks manufactured in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length.
The Bakhshali manuscript contains problems involving arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree. In the 3rd century BCE, Pingala presents the "Pingala-sutras," the earliest known treatise on Sanskrit prosody. He also presents a numerical system by adding one to the sum of place values. Pingala's work also includes material related to the Fibonacci numbers, called "".
Indian astronomer and mathematician Aryabhata (476–550), in his "Aryabhatiya" (499), introduced the sine function in trigonometry and the number 0. In 628 CE, Brahmagupta suggested that gravity was a force of attraction. He also lucidly explained the use of zero as both a placeholder and a decimal digit, along with the Hindu–Arabic numeral system now used universally throughout the world. Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would become Arabic numerals to the Islamic world by the 9th century.
During the 14th–16th centuries, the Kerala school of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields such as trigonometry and analysis. In particular, Madhava of Sangamagrama is considered the "founder of mathematical analysis". Parameshvara (1380–1460) presents a case of the Mean Value theorem in his commentaries on Govindasvāmi and Bhāskara II. The "Yuktibhāṣā" was written by Jyeshtadeva in 1530.
Astronomy.
The first textual mention of astronomical concepts comes from the Vedas, religious literature of India. According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month.".
The first 12 chapters of the "Siddhanta Shiromani", written by Bhāskara in the 12th century, cover topics such as: mean longitudes of the planets; true longitudes of the planets; the three problems of diurnal rotation; syzygies; lunar eclipses; solar eclipses; latitudes of the planets; risings and settings; the moon's crescent; conjunctions of the planets with each other; conjunctions of the planets with the fixed stars; and the patas of the sun and moon. The 13 chapters of the second part cover the nature of the sphere, as well as significant astronomical and trigonometric calculations based on it.
In the "Tantrasangraha" treatise, Nilakantha Somayaji's updated the Aryabhatan model for the interior planets, Mercury, and Venus and the equation that he specified for the center of these planets was more accurate than the ones in European or Islamic astronomy until the time of Johannes Kepler in the 17th century. Jai Singh II of Jaipur constructed five observatories called Jantar Mantars in total, in New Delhi, Jaipur, Ujjain, Mathura and Varanasi; they were completed between 1724 and 1735.
Grammar.
Some of the earliest linguistic activities can be found in Iron Age India (1st millennium BCE) with the analysis of Sanskrit for the purpose of the correct recitation and interpretation of Vedic texts. The most notable grammarian of Sanskrit was Pāṇini (c. 520–460 BCE), whose grammar formulates close to 4,000 rules for Sanskrit. Inherent in his analytic approach are the concepts of the phoneme, the morpheme and the root. The Tolkāppiyam text, composed in the early centuries of the common era, is a comprehensive text on Tamil grammar, which includes sutras on orthography, phonology, etymology, morphology, semantics, prosody, sentence structure and the significance of context in language.
Medicine.
Findings from Neolithic graveyards in what is now Pakistan show evidence of proto-dentistry among an early farming culture. The ancient text Suśrutasamhitā of Suśruta describes procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and other surgical procedures. The "Charaka Samhita" of Charaka describes ancient theories on human body, etiology, symptomology and therapeutics for a wide range of diseases. It also includes sections on the importance of diet, hygiene, prevention, medical education, and the teamwork of a physician, nurse and patient necessary for recovery to health.
Politics and state.
The "Arthaśāstra", an ancient Indian treatise on statecraft, economic policy and military strategy, is attributed to Kautilya and Viṣṇugupta, who are traditionally identified with Chāṇakya (c. 350–283 BCE). In this treatise, the behaviors and relationships of the people, the King, the State, the Government Superintendents, Courtiers, Enemies, Invaders, and Corporations are analyzed and documented. Roger Boesche describes the "Arthaśāstra" as "a book of political realism, a book analyzing how the political world does work and not very often stating how it ought to work, a book that frequently discloses to a king what calculating and sometimes brutal measures he must carry out to preserve the state and the common good."
China.
Chinese mathematics.
From the earliest times, the Chinese used a positional decimal system on counting boards in order to calculate. To express 10, a single rod is placed in the second box from the right. The spoken language uses a similar system to English: e.g. four thousand two hundred and seven. No symbol was used for zero. By the 1st century BCE, negative numbers and decimal fractions were in use, and "The Nine Chapters on the Mathematical Art" included methods for extracting higher-order roots by Horner's method, for solving systems of linear equations, and for applying Pythagoras' theorem. Cubic equations were solved in the Tang dynasty, and solutions of equations of order higher than 3 appeared in print in 1245 CE in the work of Ch'in Chiu-shao. Pascal's triangle for binomial coefficients was described around 1100 by Jia Xian.
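In modern notation, the nested-multiplication scheme underlying Horner's method rewrites a polynomial so that it can be evaluated, or its roots extracted digit by digit, using only repeated multiplications and additions; for a cubic, for example,
$$a_3x^3 + a_2x^2 + a_1x + a_0 = \big((a_3x + a_2)x + a_1\big)x + a_0.$$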
Although the first attempts at an axiomatization of geometry appear in the Mohist canon in 330 BCE, Liu Hui developed algebraic methods in geometry in the 3rd century CE and also calculated pi to 5 significant figures. In 480, Zu Chongzhi improved this by discovering the ratio 355/113, which remained the most accurate value for 1200 years.
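The accuracy of Zu Chongzhi's ratio can be seen from a direct comparison,
$$\frac{355}{113} = 3.1415929\ldots, \qquad \pi = 3.1415926\ldots,$$
so the approximation is correct to six decimal places.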
Astronomical observations.
Astronomical observations from China constitute the longest continuous sequence from any civilization and include records of sunspots (112 records from 364 BCE), supernovas (1054), and lunar and solar eclipses. By the 12th century, Chinese astronomers could predict eclipses reasonably accurately, but the knowledge of this was lost during the Ming dynasty, so that the Jesuit Matteo Ricci gained much favor in 1601 by his predictions. By 635 Chinese astronomers had observed that the tails of comets always point away from the sun.
From antiquity, the Chinese used an equatorial system for describing the skies, and a star map from 940 was drawn using a cylindrical (Mercator) projection. The use of an armillary sphere is recorded from the 4th century BCE, and a sphere permanently mounted on an equatorial axis from 52 BCE. In 125 CE Zhang Heng used water power to rotate the sphere in real time; his sphere included rings for the meridian and ecliptic. By 1270 Chinese astronomers had incorporated the principles of the Arab torquetum.
In the Song Empire (960–1279) of Imperial China, Chinese scholar-officials unearthed, studied, and cataloged ancient artifacts.
Inventions.
To better prepare for calamities, Zhang Heng invented a seismometer in 132 CE which provided instant alert to authorities in the capital Luoyang that an earthquake had occurred in a location indicated by a specific cardinal or ordinal direction. Although no tremors could be felt in the capital when Zhang told the court that an earthquake had just occurred in the northwest, a message came soon afterwards that an earthquake had indeed struck northwest of Luoyang (in what is now modern Gansu). Zhang called his device the 'instrument for measuring the seasonal winds and the movements of the Earth' (Houfeng didong yi 候风地动仪), so-named because he and others thought that earthquakes were most likely caused by the enormous compression of trapped air.
There are many notable contributors to early Chinese disciplines, inventions, and practices throughout the ages. One of the best examples would be the medieval Song Chinese Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation, discovered the concept of true north, improved the design of the astronomical gnomon, armillary sphere, sight tube, and clepsydra, and described the use of drydocks to repair boats. After observing the natural process of the inundation of silt and the find of marine fossils in the Taihang Mountains (hundreds of miles from the Pacific Ocean), Shen Kuo devised a theory of land formation, or geomorphology. He also adopted a theory of gradual climate change in regions over time, after observing petrified bamboo found underground at Yan'an, Shaanxi province. If not for Shen Kuo's writing, the architectural works of Yu Hao would be little known, along with the inventor of movable type printing, Bi Sheng (990–1051). Shen's contemporary Su Song (1020–1101) was also a brilliant polymath, an astronomer who created a celestial atlas of star maps, wrote a treatise related to botany, zoology, mineralogy, and metallurgy, and had erected a large astronomical clocktower in Kaifeng city in 1088. To operate the crowning armillary sphere, his clocktower featured an escapement mechanism and the world's oldest known use of an endless power-transmitting chain drive.
The Jesuit China missions of the 16th and 17th centuries "learned to appreciate the scientific achievements of this ancient culture and made them known in Europe. Through their correspondence European scientists first learned about the Chinese science and culture." Western academic thought on the history of Chinese technology and science was galvanized by the work of Joseph Needham and the Needham Research Institute. Among the technological accomplishments of China were, according to the British scholar Needham, the water-powered celestial globe (Zhang Heng), dry docks, sliding calipers, the double-action piston pump, the blast furnace, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the winnowing machine, gunpowder, the raised-relief map, toilet paper, the efficient harness, along with contributions in logic, astronomy, medicine, and other fields.
However, cultural factors prevented these Chinese achievements from developing into "modern science". According to Needham, it may have been the religious and philosophical framework of Chinese intellectuals which made them unable to accept the ideas of laws of nature:
<templatestyles src="Template:Blockquote/styles.css" />It was not that there was no order in nature for the Chinese, but rather that it was not an order ordained by a rational personal being, and hence there was no conviction that rational personal beings would be able to spell out in their lesser earthly languages the divine code of laws which he had decreed aforetime. The Taoists, indeed, would have scorned such an idea as being too naïve for the subtlety and complexity of the universe as they intuited it.
Pre-Columbian Mesoamerica.
During the Middle Formative Period (c. 900 BCE – c. 300 BCE) of Pre-Columbian Mesoamerica, the Zapotec civilization, heavily influenced by the Olmec civilization, established the first known full writing system of the region (possibly predated by the Olmec Cascajal Block), as well as the first known astronomical calendar in Mesoamerica. Following a period of initial urban development in the Preclassical period, the Classic Maya civilization (c. 250 CE – c. 900 CE) built on the shared heritage of the Olmecs by developing the most sophisticated systems of writing, astronomy, calendrical science, and mathematics among Mesoamerican peoples. The Maya developed a positional numeral system with a base of 20 that included the use of zero for constructing their calendars. Maya writing, which was developed by 200 BCE, widespread by 100 BCE, and rooted in Olmec and Zapotec scripts, contains easily discernible calendar dates in the form of logographs representing numbers, coefficients, and calendar periods amounting to 20 days and even 20 years for tracking social, religious, political, and economic events in 360-day years.
Classical antiquity and Greco-Roman science.
The contributions of the Ancient Egyptians and Mesopotamians in the areas of astronomy, mathematics, and medicine had entered and shaped Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes. Inquiries were also aimed at such practical goals as establishing a reliable calendar or determining how to cure a variety of illnesses. The ancient people who were considered the first "scientists" may have thought of themselves as "natural philosophers", as practitioners of a skilled profession (for example, physicians), or as followers of a religious tradition (for example, temple healers).
Pre-socratics.
The earliest Greek philosophers, known as the pre-Socratics, provided competing answers to the question found in the myths of their neighbors: "How did the ordered cosmos in which we live come to be?" The pre-Socratic philosopher Thales (640–546 BCE) of Miletus, identified by later authors such as Aristotle as the first of the Ionian philosophers, postulated non-supernatural explanations for natural phenomena, proposing, for example, that land floats on water and that earthquakes are caused by the agitation of the water upon which the land floats, rather than by the god Poseidon. Thales' student Pythagoras of Samos founded the Pythagorean school, which investigated mathematics for its own sake, and was the first to postulate that the Earth is spherical in shape. Leucippus (5th century BCE) introduced atomism, the theory that all matter is made of indivisible, imperishable units called atoms. This was greatly expanded on by his pupil Democritus and later Epicurus.
Natural philosophy.
Plato and Aristotle produced the first systematic discussions of natural philosophy, which did much to shape later investigations of nature. Their development of deductive reasoning was of particular importance and usefulness to later scientific inquiry. Plato founded the Platonic Academy in 387 BCE, whose motto was "Let none unversed in geometry enter here," and also turned out many notable philosophers. Plato's student Aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method. Aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. He made countless observations of nature, especially the habits and attributes of plants and animals on Lesbos, classified more than 540 animal species, and dissected at least 50. Aristotle's writings profoundly influenced subsequent Islamic and European scholarship, though they were eventually superseded in the Scientific Revolution.
Aristotle also contributed to theories of the elements and the cosmos. He believed that the celestial bodies (such as the planets and the Sun) had something called an unmoved mover that put them in motion. Aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as God. Aristotle did not have the technological advancements that would have explained the motion of celestial bodies. In addition, Aristotle had many views on the elements. He believed that everything was derived from the elements earth, water, air, fire, and lastly the Aether. The Aether was a celestial element, and therefore made up the matter of the celestial bodies. The elements of earth, water, air and fire were derived from a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. The motion of these elements begins with earth being the closest to "the Earth," then water, air, fire, and finally Aether. In addition to the makeup of all things, Aristotle came up with theories as to why things did not return to their natural motion. He understood that water sits above earth, air above water, and fire above air in their natural state. He explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements, thus not allowing the elements that make up a living being to return to their natural state.
The important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. In the Hellenistic age scholars frequently employed the principles developed in earlier Greek thought: the application of mathematics and deliberate empirical research, in their scientific investigations. Thus, clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
Neither reason nor inquiry began with the Ancient Greeks, but the Socratic method did, along with the idea of Forms, give great advances in geometry, logic, and the natural sciences. According to Benjamin Farrington, former professor of Classics at Swansea University:
"Men were weighing for thousands of years before Archimedes worked out the laws of equilibrium; they must have had practical and intuitional knowledge of the principals involved. What Archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system."
and again:
"With astonishment we find ourselves on the threshold of modern science. Nor should it be supposed that by some trick of translation the extracts have been given an air of modernity. Far from it. The vocabulary of these writings and their style are the source from which our own vocabulary and style have been derived."
Greek astronomy.
The astronomer Aristarchus of Samos was the first known person to propose a heliocentric model of the Solar System, while the geographer Eratosthenes accurately calculated the circumference of the Earth. Hipparchus (c. 190 – c. 120 BCE) produced the first systematic star catalog. The level of achievement in Hellenistic astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BCE), an analog computer for calculating the position of planets. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.
Hellenistic medicine.
There was not a defined societal structure for healthcare during the age of Hippocrates. At that time, medical knowledge was not well organized, and people still relied on purely religious reasoning to explain illnesses. Hippocrates introduced the first healthcare system based on science and clinical protocols. Hippocrates' theories about physics and medicine helped pave the way in creating an organized medical structure for society. In medicine, Hippocrates (c. 460 – c. 370 BCE) and his followers were the first to describe many diseases and medical conditions and developed the Hippocratic Oath for physicians, still relevant and in use today. Hippocrates' ideas are expressed in the Hippocratic Corpus. The collection contains descriptions of medical philosophies and of how disease and lifestyle choices reflect on the physical body. Hippocrates influenced a Westernized, professional relationship between physician and patient. Hippocrates is also known as "the Father of Medicine". Herophilos (335–280 BCE) was the first to base his conclusions on dissection of the human body and to describe the nervous system. Galen (129 – c. 200 CE) performed many audacious operations—including brain and eye surgeries—that were not tried again for almost two millennia.
Greek mathematics.
In Hellenistic Egypt, the mathematician Euclid laid down the foundations of mathematical rigor and introduced the concepts of definition, axiom, theorem and proof still in use today in his "Elements", considered the most influential textbook ever written. Archimedes, considered one of the greatest mathematicians of all time, is credited with using the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He is also known in physics for laying the foundations of hydrostatics, statics, and the explanation of the principle of the lever.
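In modern notation, two of these results can be stated concisely: the quadrature of the parabola, obtained by summing a geometric series over inscribed triangles, and the bounds on pi derived from inscribed and circumscribed polygons,
$$A = T\left(1 + \frac{1}{4} + \frac{1}{4^2} + \cdots\right) = \frac{4}{3}T, \qquad \frac{223}{71} < \pi < \frac{22}{7},$$
where A is the area of a parabolic segment and T the area of its inscribed triangle.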
Other developments.
Theophrastus wrote some of the earliest descriptions of plants and animals, establishing the first taxonomy and looking at minerals in terms of their properties, such as hardness. Pliny the Elder produced one of the largest encyclopedias of the natural world in 77 CE, and was a successor to Theophrastus. For example, he accurately describes the octahedral shape of the diamond and notes that diamond dust is used by engravers to cut and polish other gems owing to its great hardness. His recognition of the importance of crystal shape is a precursor to modern crystallography, while his notes on other minerals presage mineralogy. He recognizes that other minerals have characteristic crystal shapes, but in one example confuses the crystal habit with the work of lapidaries. Pliny was also the first to show that amber is a resin from pine trees, on the evidence of insects trapped within it.
The development of archaeology has its roots in history and with those who were interested in the past, such as kings and queens who wanted to show past glories of their respective nations. The 5th-century-BCE Greek historian Herodotus was the first scholar to systematically study the past and perhaps the first to examine artifacts.
Greek scholarship under Roman rule.
During the rule of Rome, famous historians such as Polybius, Livy and Plutarch documented the rise of the Roman Republic, and the organization and histories of other nations, while statesmen like Julius Caesar, Cicero, and others provided examples of the politics of the republic and Rome's empire and wars. The study of politics during this age was oriented toward understanding history, understanding methods of governing, and describing the operation of governments.
The Roman conquest of Greece did not diminish learning and culture in the Greek provinces. On the contrary, the appreciation of Greek achievements in literature, philosophy, politics, and the arts by Rome's upper class coincided with the increased prosperity of the Roman Empire. Greek settlements had existed in Italy for centuries and the ability to read and speak Greek was not uncommon in Italian cities such as Rome. Moreover, the settlement of Greek scholars in Rome, whether voluntarily or as slaves, gave Romans access to teachers of Greek literature and philosophy. Conversely, young Roman scholars also studied abroad in Greece and upon their return to Rome, were able to convey Greek achievements to their Latin leadership. And despite the translation of a few Greek texts into Latin, Roman scholars who aspired to the highest level did so using the Greek language. The Roman statesman and philosopher Cicero (106 – 43 BCE) was a prime example. He had studied under Greek teachers in Rome and then in Athens and Rhodes. He mastered considerable portions of Greek philosophy, wrote Latin treatises on several topics, and even wrote Greek commentaries on Plato's "Timaeus" as well as a Latin translation of it, which has not survived.
In the beginning, support for scholarship in Greek knowledge was almost entirely funded by the Roman upper class. There were all sorts of arrangements, ranging from a talented scholar being attached to a wealthy household to owning educated Greek-speaking slaves. In exchange, scholars who succeeded at the highest level had an obligation to provide advice or intellectual companionship to their Roman benefactors, or to even take care of their libraries. The less fortunate or accomplished ones would teach their children or perform menial tasks. The level of detail and sophistication of Greek knowledge was adjusted to suit the interests of their Roman patrons. That meant popularizing Greek knowledge by presenting information that was of practical value such as medicine or logic (for courts and politics) but excluding subtle details of Greek metaphysics and epistemology. Beyond the basics, the Romans did not value natural philosophy and considered it an amusement for leisure time.
Commentaries and encyclopedias were the means by which Greek knowledge was popularized for Roman audiences. The Greek scholar Posidonius (c. 135-c. 51 BCE), a native of Syria, wrote prolifically on history, geography, moral philosophy, and natural philosophy. He greatly influenced Latin writers such as Marcus Terentius Varro (116-27 BCE), who wrote the encyclopedia "Nine Books of Disciplines", which covered nine arts: grammar, rhetoric, logic, arithmetic, geometry, astronomy, musical theory, medicine, and architecture. The "Disciplines" became a model for subsequent Roman encyclopedias and Varro's nine liberal arts were considered suitable education for a Roman gentleman. The first seven of Varro's nine arts would later define the seven liberal arts of medieval schools. The pinnacle of the popularization movement was the Roman scholar Pliny the Elder (23/24–79 CE), a native of northern Italy, who wrote several books on the history of Rome and grammar. His most famous work was his voluminous "Natural History".
After the death of the Roman Emperor Marcus Aurelius in 180 CE, the favorable conditions for scholarship and learning in the Roman Empire were upended by political unrest, civil war, urban decay, and looming economic crisis. In around 250 CE, barbarians began attacking and invading the Roman frontiers. These combined events led to a general decline in political and economic conditions. The living standards of the Roman upper class were severely impacted, and their loss of leisure diminished scholarly pursuits. Moreover, during the 3rd and 4th centuries CE, the Roman Empire was administratively divided into two halves: Greek East and Latin West. These administrative divisions weakened the intellectual contact between the two regions. Eventually, both halves went their separate ways, with the Greek East becoming the Byzantine Empire. Christianity was also steadily expanding during this time and soon became a major patron of education in the Latin West. Initially, the Christian church adopted some of the reasoning tools of Greek philosophy in the 2nd and 3rd centuries CE to defend its faith against sophisticated opponents. Nevertheless, Greek philosophy received a mixed reception from leaders and adherents of the Christian faith. Some such as Tertullian (c. 155-c. 230 CE) were vehemently opposed to philosophy, denouncing it as heretical. Others such as Augustine of Hippo (354-430 CE) were ambivalent and defended Greek philosophy and science as the best ways to understand the natural world and therefore treated it as a handmaiden (or servant) of religion. Education in the West began its gradual decline, along with the rest of the Western Roman Empire, due to invasions by Germanic tribes, civil unrest, and economic collapse. Contact with the classical tradition was lost in specific regions such as Roman Britain and northern Gaul but continued to exist in Rome, northern Italy, southern Gaul, Spain, and North Africa.
Middle Ages.
In the Middle Ages, classical learning continued in three major linguistic cultures and civilizations: Greek (the Byzantine Empire), Arabic (the Islamic world), and Latin (Western Europe).
Byzantine Empire.
Preservation of Greek heritage.
The fall of the Western Roman Empire led to a deterioration of the classical tradition in the western part (or Latin West) of Europe during the 5th century. In contrast, the Byzantine Empire resisted the barbarian attacks and preserved and improved upon this learning.
While the Byzantine Empire still held learning centers such as Constantinople, Alexandria and Antioch, Western Europe's knowledge was concentrated in monasteries until the development of medieval universities in the 12th century. The curriculum of monastic schools included the study of the few available ancient texts and of new works on practical subjects like medicine and timekeeping.
In the sixth century in the Byzantine Empire, Isidore of Miletus compiled Archimedes' mathematical works in the Archimedes Palimpsest, where all Archimedes' mathematical contributions were collected and studied.
John Philoponus, another Byzantine scholar, was the first to question Aristotle's teaching of physics, introducing the theory of impetus. The theory of impetus was an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics. The works of John Philoponus inspired Galileo Galilei ten centuries later.
Collapse.
During the Fall of Constantinople in 1453, a number of Greek scholars fled to northern Italy, where they fueled the era later commonly known as the "Renaissance", bringing with them a great deal of classical learning including an understanding of botany, medicine, and zoology. Byzantium also gave the West important inputs: John Philoponus' criticism of Aristotelian physics, and the works of Dioscorides.
Islamic world.
This was the period (8th–14th century CE) of the Islamic Golden Age, when commerce thrived and new ideas and technologies emerged, such as the importation of papermaking from China, which made the copying of manuscripts inexpensive.
Translations and Hellenization.
The eastward transmission of Greek heritage to Western Asia was a slow and gradual process that spanned over a thousand years, beginning with the Asian conquests of Alexander the Great in 335 BCE to the founding of Islam in the 7th century CE. The birth and expansion of Islam during the 7th century was quickly followed by its Hellenization. Knowledge of Greek conceptions of the world was preserved and absorbed into Islamic theology, law, culture, and commerce, which were aided by the translations of traditional Greek texts and some Syriac intermediary sources into Arabic during the 8th–9th century.
Education and scholarly pursuits.
Madrasas were centers for many different religious and scientific studies and were the culmination of different institutions such as mosques based around religious studies, housing for out-of-town visitors, and finally educational institutions focused on the natural sciences. Unlike Western universities, students at a madrasa would learn from one specific teacher, who would issue a certificate at the completion of their studies called an Ijazah. An Ijazah differs from a western university degree in many ways: one being that it is issued by a single person rather than an institution, and another being that it is not an individual degree declaring adequate knowledge over broad subjects, but rather a license to teach and pass on a very specific set of texts. Women were also allowed to attend madrasas, as both students and teachers, something not seen in higher western education until the 1800s. Madrasas were more than just academic centers. The Suleymaniye Mosque, for example, was one of the earliest and most well-known madrasas, which was built by Suleiman the Magnificent in the 16th century. The Suleymaniye Mosque was home to a hospital and medical college, a kitchen, and a children's school, as well as serving as a temporary home for travelers.
Higher education at a madrasa (or college) was focused on Islamic law and religious science and students had to engage in self-study for everything else. And despite the occasional theological backlash, many Islamic scholars of science were able to conduct their work in relatively tolerant urban centers (e.g., Baghdad and Cairo) and were protected by powerful patrons. They could also travel freely and exchange ideas as there were no political barriers within the unified Islamic state. Islamic science during this time was primarily focused on the correction, extension, articulation, and application of Greek ideas to new problems.
Advancements in mathematics.
Most of the achievements by Islamic scholars during this period were in mathematics. Arabic mathematics was a direct descendant of Greek and Indian mathematics. For instance, what are now known as Arabic numerals originally came from India, but Muslim mathematicians made several key refinements to the number system, such as the introduction of decimal point notation. The mathematician Muhammad ibn Musa al-Khwarizmi (c. 780–850) gave his name to the concept of the algorithm, while the term algebra is derived from "al-jabr", the beginning of the title of one of his publications. Islamic trigonometry continued from the works of Ptolemy's "Almagest" and the Indian "Siddhanta"; Islamic mathematicians added new trigonometric functions, drew up tables, and applied trigonometry to spheres and planes. Many of their engineers, instrument makers, and surveyors contributed books in applied mathematics. It was in astronomy that Islamic mathematicians made their greatest contributions. Al-Battani (c. 858–929) improved the measurements of Hipparchus, which were preserved in Ptolemy's "Hè Megalè Syntaxis" ("The great treatise"), translated as the "Almagest". Al-Battani also improved the precision of the measurement of the precession of the Earth's axis. Corrections were made to Ptolemy's geocentric model by al-Battani, Ibn al-Haytham, Averroes and the Maragha astronomers such as Nasir al-Din al-Tusi, Mu'ayyad al-Din al-Urdi and Ibn al-Shatir.
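In modern notation, the kind of procedure set out in al-Khwarizmi's algebra can be illustrated by his well-known worked example of "squares and roots equal to numbers", solved by completing the square:
$$x^2 + 10x = 39 \;\Rightarrow\; x^2 + 10x + 25 = 64 \;\Rightarrow\; (x+5)^2 = 64 \;\Rightarrow\; x = 3,$$
with only the positive root retained, as in the original treatment.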
Scholars with geometric skills made significant improvements to the earlier classical texts on light and sight by Euclid, Aristotle, and Ptolemy. The earliest surviving Arabic treatises were written in the 9th century by Abū Ishāq al-Kindī, Qustā ibn Lūqā, and (in fragmentary form) Ahmad ibn Isā. Later, in the 11th century, Ibn al-Haytham (known as Alhazen in the West), a mathematician and astronomer, synthesized a new theory of vision based on the works of his predecessors. His new theory included a complete system of geometrical optics, which he set out in great detail in his "Book of Optics". His book was translated into Latin and was relied upon as a principal source on the science of optics in Europe until the 17th century.
Institutionalization of medicine.
The medical sciences were prominently cultivated in the Islamic world. The Greek medical theories, especially those of Galen, were translated into Arabic and there was an outpouring of medical texts by Islamic physicians, which were aimed at organizing, elaborating, and disseminating classical medical knowledge. Medical specialties started to emerge, such as those involved in the treatment of eye diseases such as cataracts. Ibn Sina (known as Avicenna in the West, c. 980–1037) was a prolific Persian medical encyclopedist who wrote extensively on medicine, his two most notable works being the "Kitāb al-shifāʾ" ("Book of Healing") and The Canon of Medicine, both of which were used as standard medicinal texts in both the Muslim world and in Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature of infectious diseases and the introduction of clinical pharmacology. Institutionalization of medicine was another important achievement in the Islamic world. Although hospitals as an institution for the sick emerged in the Byzantine Empire, the model of institutionalized medicine for all social classes was extensive in the Islamic empire and was scattered throughout it. In addition to treating patients, physicians could teach apprentice physicians, as well as write and do research. The discovery of the pulmonary transit of blood in the human body by Ibn al-Nafis occurred in a hospital setting.
Decline.
Islamic science began its decline in the 12th–13th century, before the Renaissance in Europe, due in part to the Christian reconquest of Spain and the Mongol conquests in the East in the 11th–13th century. The Mongols sacked Baghdad, capital of the Abbasid Caliphate, in 1258, which ended the Abbasid empire. Nevertheless, many of the conquerors became patrons of the sciences. Hulagu Khan, for example, who led the siege of Baghdad, became a patron of the Maragheh observatory. Islamic astronomy continued to flourish into the 16th century.
Western Europe.
By the eleventh century, most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made, increasing the food supply and population. Classical Greek texts were translated from Arabic and Greek into Latin, stimulating scientific discussion in Western Europe.
In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection.
As a result of the Pax Mongolica, Europeans, such as Marco Polo, began to venture further and further east. The written accounts of Polo and his fellow travelers inspired other Western European maritime explorers to search for a direct sea route to Asia, ultimately leading to the Age of Discovery.
Technological advances were also made, such as the early flight of Eilmer of Malmesbury (who had studied mathematics in 11th-century England), and the metallurgical achievements of the Cistercian blast furnace at Laskill.
Medieval universities.
An intellectual revitalization of Western Europe started with the birth of medieval universities in the 12th century. These urban institutions grew from the informal scholarly activities of learned friars who visited monasteries, consulted libraries, and conversed with other fellow scholars. A friar who became well-known would attract a following of disciples, giving rise to a brotherhood of scholars (or "collegium" in Latin). A "collegium" might travel to a town or request a monastery to host them. However, if the number of scholars within a "collegium" grew too large, they would opt to settle in a town instead. As the number of "collegia" within a town grew, the "collegia" might request that their king grant them a charter that would convert them into a "universitas". Many universities were chartered during this period, with the first in Bologna in 1088, followed by Paris in 1150, Oxford in 1167, and Cambridge in 1231. The granting of a charter meant that the medieval universities were partially sovereign and independent from local authorities. Their independence allowed them to conduct themselves and judge their own members based on their own rules. Furthermore, as initially religious institutions, their faculties and students were protected from capital punishment (e.g., gallows). Such independence was a matter of custom, which could, in principle, be revoked by their respective rulers if they felt threatened. Discussions of various subjects or claims at these medieval institutions, no matter how controversial, were done in a formalized way so as to declare such discussions as being within the bounds of a university and therefore protected by the privileges of that institution's sovereignty. A claim could be described as "ex cathedra" (literally "from the chair", used within the context of teaching) or "ex hypothesi" (by hypothesis). This meant that the discussions were presented as purely an intellectual exercise that did not require those involved to commit themselves to the truth of a claim or to proselytize. Modern academic concepts and practices such as academic freedom or freedom of inquiry are remnants of these medieval privileges that were tolerated in the past.
The curriculum of these medieval institutions centered on the seven liberal arts, which were aimed at providing beginning students with the skills for reasoning and scholarly language. Students would begin their studies with the first three liberal arts or "Trivium" (grammar, rhetoric, and logic) followed by the next four liberal arts or "Quadrivium" (arithmetic, geometry, astronomy, and music). Those who completed these requirements and received their "baccalaureate" (or Bachelor of Arts) had the option to join the higher faculty (law, medicine, or theology), which would confer an LLD for a lawyer, an MD for a physician, or a ThD for a theologian. Students who chose to remain in the lower faculty (arts) could work towards a "Magister" (or Master's) degree and would study three philosophies: metaphysics, ethics, and natural philosophy. Latin translations of Aristotle's works such as "De anima" ("On the Soul") and the commentaries on them were required readings. As time passed, the lower faculty was allowed to confer its own doctoral degree called the PhD. Many of the Masters were drawn to encyclopedias and had used them as textbooks. But these scholars yearned for the complete original texts of the Ancient Greek philosophers, mathematicians, and physicians such as Aristotle, Euclid, and Galen, which were not available to them at the time. These Ancient Greek texts were to be found in the Byzantine Empire and the Islamic World.
Translations of Greek and Arabic sources.
Contact with the Byzantine Empire, and with the Islamic world during the Reconquista and the Crusades, allowed Latin Europe access to scientific Greek and Arabic texts, including the works of Aristotle, Ptolemy, Isidore of Miletus, John Philoponus, Jābir ibn Hayyān, al-Khwarizmi, Alhazen, Avicenna, and Averroes. European scholars had access to the translation programs of Raymond of Toledo, who sponsored the 12th century Toledo School of Translators from Arabic to Latin. Later translators like Michael Scotus would learn Arabic in order to study these texts directly. The European universities aided materially in the translation and propagation of these texts and started a new infrastructure which was needed for scientific communities. In fact, the European university put many works about the natural world and the study of nature at the center of its curriculum, with the result that the "medieval university laid far greater emphasis on science than does its modern counterpart and descendent."
At the beginning of the 13th century, there were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors, allowing a sound transfer of scientific ideas via both the universities and the monasteries. By then, the natural philosophy in these texts began to be extended by scholastics such as Robert Grosseteste, Roger Bacon, Albertus Magnus and Duns Scotus. Precursors of the modern scientific method, influenced by earlier contributions of the Islamic world, can be seen already in Grosseteste's emphasis on mathematics as a way to understand nature, and in the empirical approach admired by Bacon, particularly in his "Opus Majus". Pierre Duhem's thesis is that the Condemnation of 1277 issued by Stephen Tempier, the Bishop of Paris, led to the study of medieval science as a serious discipline, "but no one in the field any longer endorses his view that modern science started in 1277". However, many scholars agree with Duhem's view that the mid-late Middle Ages saw important scientific developments.
Medieval science.
The first half of the 14th century saw much important scientific work, largely within the framework of scholastic commentaries on Aristotle's scientific writings. William of Ockham emphasized the principle of parsimony: natural philosophers should not postulate unnecessary entities, so that motion is not a distinct thing but is only the moving object and an intermediary "sensible species" is not needed to transmit an image of an object to the eye. Scholars such as Jean Buridan and Nicole Oresme started to reinterpret elements of Aristotle's mechanics. In particular, Buridan developed the theory that impetus was the cause of the motion of projectiles, which was a first step towards the modern concept of inertia. The Oxford Calculators began to mathematically analyze the kinematics of motion, making this analysis without considering the causes of motion.
In 1348, the Black Death and other disasters brought a sudden end to philosophic and scientific development. Yet the rediscovery of ancient texts was stimulated by the Fall of Constantinople in 1453, when many Byzantine scholars sought refuge in the West. Meanwhile, the introduction of printing was to have great effect on European society. The easier dissemination of the printed word democratized learning and allowed ideas such as algebra to propagate more rapidly. These developments paved the way for the Scientific Revolution, in which the scientific inquiry that had halted at the start of the Black Death resumed.
Renaissance.
Revival of learning.
The renewal of learning in Europe began with 12th century Scholasticism. The Northern Renaissance showed a decisive shift in focus from Aristotelian natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine). Thus modern science in Europe resumed in a period of great upheaval: the Protestant Reformation and Catholic Counter-Reformation, the discovery of the Americas by Christopher Columbus, the Fall of Constantinople, and the re-discovery of Aristotle during the Scholastic period all presaged large social and political changes. Thus, a suitable environment was created in which it became possible to question scientific doctrine, in much the same way that Martin Luther and John Calvin questioned religious doctrine. The works of Ptolemy (astronomy) and Galen (medicine) were found not always to match everyday observations. Work by Vesalius on human cadavers found problems with the Galenic view of anatomy.
The appearance of Cristallo out of Venice around 1450 also contributed to the advancement of science in this period. The new glass allowed for better spectacles and eventually for the inventions of the telescope and microscope.
Theophrastus' work on rocks, "Peri lithōn", remained authoritative for millennia: its interpretation of fossils was not overturned until after the Scientific Revolution.
During the Italian Renaissance, Niccolò Machiavelli established the emphasis of modern political science on direct empirical observation of political institutions and actors. Later, the expansion of the scientific paradigm during the Enlightenment further pushed the study of politics beyond normative determinations. In particular, statistics, originally developed to study the subjects of the state, came to be applied to polling and voting.
In archaeology, the 15th and 16th centuries saw the rise of antiquarians in Renaissance Europe who were interested in the collection of artifacts.
Scientific Revolution and birth of New Science.
The early modern period is seen as a flowering of the European Renaissance. There was a willingness to question previously held truths and search for new answers. This resulted in a period of major scientific advancements, now known as the Scientific Revolution, which led to the emergence of a New Science that was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. The Scientific Revolution is a convenient boundary between ancient thought and classical physics, and is traditionally held to have begun in 1543, when the books "De humani corporis fabrica" ("On the Workings of the Human Body") by Andreas Vesalius, and also "De Revolutionibus", by the astronomer Nicolaus Copernicus, were first printed. The period culminated with the publication of the "Philosophiæ Naturalis Principia Mathematica" in 1687 by Isaac Newton, representative of the unprecedented growth of scientific publications throughout Europe.
Other significant scientific advances were made during this time by Galileo Galilei, Johannes Kepler, Edmond Halley, William Harvey, Pierre Fermat, Robert Hooke, Christiaan Huygens, Tycho Brahe, Marin Mersenne, Gottfried Leibniz, Isaac Newton, and Blaise Pascal. In philosophy, major contributions were made by Francis Bacon, Sir Thomas Browne, René Descartes, Baruch Spinoza, Pierre Gassendi, Robert Boyle, and Thomas Hobbes. Christiaan Huygens derived the centripetal and centrifugal forces and was the first to transfer mathematical inquiry to describe unobservable physical phenomena. William Gilbert did some of the earliest experiments with electricity and magnetism, establishing that the Earth itself is magnetic.
Heliocentrism.
The heliocentric astronomical model of the universe was refined by Nicolaus Copernicus. Copernicus proposed the idea that the Earth and all heavenly spheres, containing the planets and other objects in the cosmos, rotated around the Sun. His heliocentric model also proposed that all stars were fixed, did not rotate on an axis, and were not in any motion at all. His theory proposed the yearly rotation of the Earth and the other heavenly spheres around the Sun and was able to calculate the distances of the planets using deferents and epicycles. Although these calculations were not completely accurate, Copernicus was able to understand the distance order of each heavenly sphere. The Copernican heliocentric system was a revival of the hypotheses of Aristarchus of Samos and Seleucus of Seleucia. Aristarchus of Samos did propose that the Earth rotated around the Sun but did not mention anything about the other heavenly spheres' order, motion, or rotation. Seleucus of Seleucia also proposed the rotation of the Earth around the Sun but did not mention anything about the other heavenly spheres. In addition, Seleucus of Seleucia understood that the Moon rotated around the Earth and could be used to explain the tides of the oceans, further demonstrating his understanding of the heliocentric idea.
Age of Enlightenment.
Continuation of Scientific Revolution.
The Scientific Revolution continued into the Age of Enlightenment, which accelerated the development of modern science.
Planets and orbits.
The heliocentric model revived by Nicolaus Copernicus was followed by the model of planetary motion given by Johannes Kepler in the early 17th century, which proposed that the planets follow elliptical orbits, with the Sun at one focus of the ellipse. In "Astronomia Nova" ("A New Astronomy"), the first two of the laws of planetary motion were demonstrated through the analysis of the orbit of Mars. Kepler introduced the revolutionary concept of planetary orbit. Because of his work, astronomical phenomena came to be seen as being governed by physical laws.
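In modern notation, the elliptical orbit of Kepler's first law, with the Sun at one focus, and the harmonic relation he later added as a third law can be written as
$$r(\theta) = \frac{a(1 - e^2)}{1 + e\cos\theta}, \qquad \frac{T^2}{a^3} = \text{constant},$$
where a is the semi-major axis, e the eccentricity, and T the orbital period; the second law states that the line from the Sun to a planet sweeps out equal areas in equal times.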
Emergence of chemistry.
A decisive moment came when "chemistry" was distinguished from alchemy by Robert Boyle in his work "The Sceptical Chymist" in 1661, although the alchemical tradition continued for some time after his work. Other important steps included the gravimetric experimental practices of medical chemists like William Cullen, Joseph Black, Torbern Bergman and Pierre Macquer, and the work of Antoine Lavoisier ("father of modern chemistry") on oxygen and the law of conservation of mass, which refuted phlogiston theory. Modern chemistry emerged from the sixteenth through the eighteenth centuries through the material practices and theories promoted by alchemy, medicine, manufacturing and mining.
Calculus and Newtonian mechanics.
In 1687, Isaac Newton published the "Principia Mathematica", detailing two comprehensive and successful physical theories: Newton's laws of motion, which led to classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity.
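In their familiar modern forms (rather than Newton's original geometric formulations), the second law of motion and the law of universal gravitation read
$$F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2},$$
where G is the gravitational constant and r the distance between the masses m_1 and m_2.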
Circulatory system.
William Harvey published "De Motu Cordis" in 1628, which revealed his conclusions based on his extensive studies of vertebrate circulatory systems. He identified the central role of the heart, arteries, and veins in producing blood movement in a circuit, and failed to find any confirmation of Galen's pre-existing notions of heating and cooling functions. The history of early modern biology and medicine is often told through the search for the seat of the soul. Galen in his descriptions of his foundational work in medicine presents the distinctions between arteries, veins, and nerves using the vocabulary of the soul.
Scientific societies and journals.
A critical innovation was the creation of permanent scientific societies and their scholarly journals, which dramatically sped the diffusion of new ideas. Typical was the founding of the Royal Society in London in 1660 and, in 1665, its journal the "Philosophical Transactions of the Royal Society", the first scientific journal in English. 1665 also saw the first journal in French, the "Journal des sçavans". Drawing on the works of Newton, Descartes, Pascal and Leibniz, science was on a path to modern mathematics, physics and technology by the time of the generation of Benjamin Franklin (1706–1790), Leonhard Euler (1707–1783), Mikhail Lomonosov (1711–1765) and Jean le Rond d'Alembert (1717–1783). Denis Diderot's "Encyclopédie", published between 1751 and 1772, brought this new understanding to a wider audience. The impact of this process was not limited to science and technology, but affected philosophy (Immanuel Kant, David Hume), religion (the increasingly significant impact of science upon religion), and society and politics in general (Adam Smith, Voltaire).
Developments in geology.
Geology did not undergo systematic restructuring during the Scientific Revolution but instead existed as a cloud of isolated, disconnected ideas about rocks, minerals, and landforms long before it became a coherent science. Robert Hooke formulated a theory of earthquakes, and Nicholas Steno developed the theory of superposition and argued that fossils were the remains of once-living creatures. Beginning with Thomas Burnet's "Sacred Theory of the Earth" in 1681, natural philosophers began to explore the idea that the Earth had changed over time. Burnet and his contemporaries interpreted Earth's past in terms of events described in the Bible, but their work laid the intellectual foundations for secular interpretations of Earth history.
Post-Scientific Revolution.
Bioelectricity.
During the late 18th century, researchers such as Hugh Williamson and John Walsh experimented on the effects of electricity on the human body. Further studies by Luigi Galvani and Alessandro Volta established the electrical nature of what Volta called galvanism.
Developments in geology.
Modern geology, like modern chemistry, gradually evolved during the 18th and early 19th centuries. Benoît de Maillet and the Comte de Buffon saw the Earth as much older than the 6,000 years envisioned by biblical scholars. Jean-Étienne Guettard and Nicolas Desmarest hiked central France and recorded their observations on some of the first geological maps. Aided by chemical experimentation, naturalists such as Scotland's John Walker, Sweden's Torbern Bergman, and Germany's Abraham Werner created comprehensive classification systems for rocks and minerals—a collective achievement that transformed geology into a cutting-edge field by the end of the eighteenth century. These early geologists also proposed generalized interpretations of Earth history that led James Hutton, Georges Cuvier and Alexandre Brongniart, following in the steps of Steno, to argue that layers of rock could be dated by the fossils they contained: a principle first applied to the geology of the Paris Basin. The use of index fossils became a powerful tool for making geological maps, because it allowed geologists to correlate the rocks in one locality with those of similar age in other, distant localities.
Birth of modern economics.
Adam Smith's "An Inquiry into the Nature and Causes of the Wealth of Nations", published in 1776, forms the basis for classical economics. Smith criticized mercantilism, advocating a system of free trade with division of labour. He postulated an "invisible hand" that regulated economic systems made up of actors guided only by self-interest. Although the "invisible hand" is mentioned only in passing in the middle of the "Wealth of Nations", it is often advanced as Smith's central message.
Social science.
Anthropology can best be understood as an outgrowth of the Age of Enlightenment. It was during this period that Europeans attempted systematically to study human behavior. Traditions of jurisprudence, history, philology and sociology developed during this time and informed the development of the social sciences of which anthropology was a part.
19th century.
The 19th century saw the birth of science as a profession. William Whewell coined the term "scientist" in 1833, and it soon replaced the older term "natural philosopher".
Developments in physics.
In physics, the behavior of electricity and magnetism was studied by Giovanni Aldini, Alessandro Volta, Michael Faraday, Georg Ohm, and others. The experiments, theories and discoveries of Michael Faraday, Andre-Marie Ampere, James Clerk Maxwell, and their contemporaries led to the unification of the two phenomena into a single theory of electromagnetism, as described by Maxwell's equations. Thermodynamics led to an understanding of heat, and the notion of energy came to be defined.
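In the vector notation later introduced by Oliver Heaviside, this unified theory is summarized by Maxwell's equations,
$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$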
Discovery of Neptune.
In astronomy, advances in observation and in optical systems in the 19th century resulted in the first observation of an asteroid (1 Ceres) in 1801 and the discovery of the planet Neptune in 1846.
Developments in mathematics.
In mathematics, the notion of complex numbers finally matured and led to a subsequent analytical theory; mathematicians also began the use of hypercomplex numbers. Karl Weierstrass and others carried out the arithmetization of analysis for functions of real and complex variables. The century also saw new progress in geometry beyond the classical theories of Euclid, after a period of nearly two thousand years. The mathematical science of logic likewise had revolutionary breakthroughs after a similarly long period of stagnation. But the most important development in science at this time was the set of ideas formulated by the creators of electrical science. Their work changed the face of physics and made possible new technologies such as electric power, electrical telegraphy, the telephone, and radio.
Developments in chemistry.
In chemistry, Dmitri Mendeleev, following the atomic theory of John Dalton, created the first periodic table of elements. Other highlights include the discoveries unveiling the nature of atomic structure and matter, made simultaneously with advances in chemistry, and of new kinds of radiation. The theory that all matter is made of atoms, which are the smallest constituents of matter that cannot be broken down without losing the basic chemical and physical properties of that matter, was provided by John Dalton in 1803, although the question took a hundred years to settle conclusively. Dalton also formulated the law of mass relationships. In 1869, Dmitri Mendeleev composed his periodic table of elements on the basis of Dalton's discoveries. The synthesis of urea by Friedrich Wöhler opened a new research field, organic chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The later part of the 19th century saw the exploitation of the Earth's petrochemicals, after the exhaustion of the oil supply from whaling. By the 20th century, systematic production of refined materials provided a ready supply of products which provided not only energy, but also synthetic materials for clothing, medicine, and everyday disposable resources. Application of the techniques of organic chemistry to living organisms resulted in physiological chemistry, the precursor to biochemistry.
Age of the Earth.
Over the first half of the 19th century, geologists such as Charles Lyell, Adam Sedgwick, and Roderick Murchison applied the new technique to rocks throughout Europe and eastern North America, setting the stage for more detailed, government-funded mapping projects in later decades. Midway through the 19th century, the focus of geology shifted from description and classification to attempts to understand "how" the surface of the Earth had changed. The first comprehensive theories of mountain building were proposed during this period, as were the first modern theories of earthquakes and volcanoes. Louis Agassiz and others established the reality of continent-covering ice ages, and "fluvialists" like Andrew Crombie Ramsay argued that river valleys were formed, over millions of years, by the rivers that flow through them. After the discovery of radioactivity, radiometric dating methods were developed, starting in the 20th century. Alfred Wegener's theory of "continental drift" was widely dismissed when he proposed it in the 1910s, but new data gathered in the 1950s and 1960s led to the theory of plate tectonics, which provided a plausible mechanism for it. Plate tectonics also provided a unified explanation for a wide range of seemingly unrelated geological phenomena. Since the 1960s it has served as the unifying principle in geology.
Evolution and inheritance.
Perhaps the most prominent, controversial, and far-reaching theory in all of science has been the theory of evolution by natural selection, which was independently formulated by Charles Darwin and Alfred Russel Wallace. It was described in detail in Darwin's book "On the Origin of Species", which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Implications of evolution on fields outside of pure science have led to both opposition and support from different parts of society, and profoundly influenced the popular understanding of "man's place in the universe". Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
Germ theory.
Another important landmark in medicine and biology was the successful effort to prove the germ theory of disease. Following this, Louis Pasteur made the first vaccine against rabies, and also made many discoveries in the field of chemistry, including the asymmetry of crystals. In 1847, Hungarian physician Ignác Fülöp Semmelweis dramatically reduced the occurrence of puerperal fever by simply requiring physicians to wash their hands before attending to women in childbirth. This discovery predated the germ theory of disease. However, Semmelweis' findings were not appreciated by his contemporaries and handwashing came into use only with discoveries by British surgeon Joseph Lister, who in 1865 proved the principles of antisepsis. Lister's work was based on the important findings by French biologist Louis Pasteur. Pasteur was able to link microorganisms with disease, revolutionizing medicine. He also devised one of the most important methods in preventive medicine, when in 1885 he produced a vaccine against rabies. Pasteur invented the process of pasteurization, to help prevent the spread of disease through milk and other foods.
Schools of economics.
Karl Marx developed an alternative economic theory, called Marxian economics. Marxian economics is based on the labor theory of value and assumes the value of a good to be based on the amount of labor required to produce it. Under this axiom, capitalism was based on employers not paying the full value of workers' labor to create profit. The Austrian School responded to Marxian economics by viewing entrepreneurship as the driving force of economic development. This replaced the labor theory of value by a system of supply and demand.
Founding of psychology.
Psychology as a scientific enterprise that was independent from philosophy began in 1879 when Wilhelm Wundt founded the first laboratory dedicated exclusively to psychological research (in Leipzig). Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in memory studies), Ivan Pavlov (who discovered classical conditioning), William James, and Sigmund Freud. Freud's influence has been enormous, though more as cultural icon than a force in scientific psychology.
Modern sociology.
Modern sociology emerged in the early 19th century as the academic response to the modernization of the world. Among many early sociologists (e.g., Émile Durkheim), the aim of sociology lay in structuralism: understanding the cohesion of social groups and developing an "antidote" to social disintegration. Max Weber was concerned with the modernization of society through the concept of rationalization, which he believed would trap individuals in an "iron cage" of rational thought. Some sociologists, including Georg Simmel and W. E. B. Du Bois, used more microsociological, qualitative analyses. This microlevel approach played an important role in American sociology, with the theories of George Herbert Mead and his student Herbert Blumer resulting in the creation of the symbolic interactionism approach to sociology. In particular, Auguste Comte illustrated with his work the transition from a theological to a metaphysical stage and, from this, to a positive stage. Comte concerned himself with the classification of the sciences as well as with the transit of humanity towards a situation of progress attributable to a re-examination of nature according to the affirmation of 'sociality' as the basis of the scientifically interpreted society.
Romanticism.
The Romantic Movement of the early 19th century reshaped science by opening up new pursuits unexpected in the classical approaches of the Enlightenment. The decline of Romanticism occurred because a new movement, Positivism, began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. At the same time, the romantic reaction to the Enlightenment produced thinkers such as Johann Gottfried Herder and later Wilhelm Dilthey whose work formed the basis for the culture concept which is central to the discipline. Traditionally, much of the history of the subject was based on colonial encounters between Western Europe and the rest of the world, and much of 18th- and 19th-century anthropology is now classed as scientific racism. During the late 19th century, battles over the "study of man" took place between those of an "anthropological" persuasion (relying on anthropometrical techniques) and those of an "ethnological" persuasion (looking at cultures and traditions), and these distinctions became part of the later divide between physical anthropology and cultural anthropology, the latter ushered in by the students of Franz Boas.
20th century.
Science advanced dramatically during the 20th century. There were new and radical developments in the physical and life sciences, building on the progress from the 19th century.
Theory of relativity and quantum mechanics.
The beginning of the 20th century brought the start of a revolution in physics. The long-held theories of Newton were shown not to be correct in all circumstances. Beginning in 1900, Max Planck, Albert Einstein, Niels Bohr and others developed quantum theories to explain various anomalous experimental results, by introducing discrete energy levels. Not only did quantum mechanics show that the laws of motion did not hold on small scales, but the theory of general relativity, proposed by Einstein in 1915, showed that the fixed background of spacetime, on which both Newtonian mechanics and special relativity depended, could not exist. In 1925, Werner Heisenberg and Erwin Schrödinger formulated quantum mechanics, which explained the preceding quantum theories. Currently, general relativity and quantum mechanics are inconsistent with each other, and efforts are underway to unify the two.
Big Bang.
The observation by Edwin Hubble in 1929 that the speed at which galaxies recede positively correlates with their distance, led to the understanding that the universe is expanding, and the formulation of the Big Bang theory by Georges Lemaître. George Gamow, Ralph Alpher, and Robert Herman had calculated that there should be evidence for a Big Bang in the background temperature of the universe. In 1964, Arno Penzias and Robert Wilson discovered a 3 Kelvin background hiss in their Bell Labs radiotelescope (the Holmdel Horn Antenna), which was evidence for this hypothesis, and formed the basis for a number of results that helped determine the age of the universe.
Big science.
In 1938 Otto Hahn and Fritz Strassmann discovered nuclear fission with radiochemical methods, and in 1939 Lise Meitner and Otto Robert Frisch wrote the first theoretical interpretation of the fission process, which was later improved by Niels Bohr and John A. Wheeler. Further developments took place during World War II, which led to the practical application of radar and the development and use of the atomic bomb. Around this time, Chien-Shiung Wu was recruited by the Manhattan Project to help develop a process for separating uranium metal into U-235 and U-238 isotopes by gaseous diffusion. She was an expert experimentalist in beta decay and weak interaction physics. Wu designed an experiment (see Wu experiment) that enabled theoretical physicists Tsung-Dao Lee and Chen-Ning Yang to disprove the law of conservation of parity experimentally, winning them a Nobel Prize in 1957.
Though the process had begun with the invention of the cyclotron by Ernest O. Lawrence in the 1930s, physics in the postwar period entered into a phase of what historians have called "Big Science", requiring massive machines, budgets, and laboratories in order to test their theories and move into new frontiers. The primary patron of physics became state governments, who recognized that the support of "basic" research could often lead to technologies useful to both military and industrial applications.
Advances in genetics.
In the early 20th century, the study of heredity became a major investigation after the rediscovery in 1900 of the laws of inheritance developed by Mendel. The 20th century also saw the integration of physics and chemistry, with chemical properties explained as the result of the electronic structure of the atom. Linus Pauling's book on "The Nature of the Chemical Bond" used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. Pauling's work culminated in the physical modelling of DNA, "the secret of life" (in the words of Francis Crick, 1953). In the same year, the Miller–Urey experiment demonstrated, in a simulation of primordial processes, that basic constituents of proteins, simple amino acids, could themselves be built up from simpler molecules, kickstarting decades of research into the chemical origins of life. In 1953, James D. Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, clarified the basic structure of DNA, the genetic material for expressing life in all its forms; in their famous paper "Molecular Structure of Nucleic Acids" they suggested that the structure of DNA was a double helix. In the late 20th century, the possibilities of genetic engineering became practical for the first time, and a massive international effort began in 1990 to map out an entire human genome (the Human Genome Project). The discipline of ecology typically traces its origin to the synthesis of Darwinian evolution and Humboldtian biogeography, in the late 19th and early 20th centuries. Equally important in the rise of ecology, however, were microbiology and soil science—particularly the cycle of life concept, prominent in the work of Louis Pasteur and Ferdinand Cohn. The word "ecology" was coined by Ernst Haeckel, whose particularly holistic view of nature in general (and Darwin's theory in particular) was important in the spread of ecological thinking. The field of ecosystem ecology emerged in the Atomic Age with the use of radioisotopes to visualize food webs, and by the 1970s ecosystem ecology deeply influenced global environmental management.
Space exploration.
In 1925, Cecilia Payne-Gaposchkin determined that stars were composed mostly of hydrogen and helium. She was dissuaded by astronomer Henry Norris Russell from publishing this finding in her PhD thesis because of the widely held belief that stars had the same composition as the Earth. However, four years later, in 1929, Henry Norris Russell came to the same conclusion through different reasoning and the discovery was eventually accepted.
In 1987, supernova SN 1987A was observed by astronomers on Earth both visually, and in a triumph for neutrino astronomy, by the solar neutrino detectors at Kamiokande. But the solar neutrino flux was a fraction of its theoretically expected value. This discrepancy forced a change in some values in the standard model for particle physics.
Neuroscience as a distinct discipline.
The understanding of neurons and the nervous system became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model for transmission of electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and how they are initiated and propagated, known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley, in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in "Aplysia". In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. Neuroscience began to be recognized as a distinct academic discipline in its own right. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field.
Plate tectonics.
Geologists' embrace of plate tectonics became part of a broadening of the field from a study of rocks into a study of the Earth as a planet. Other elements of this transformation include: geophysical studies of the interior of the Earth, the grouping of geology with meteorology and oceanography as one of the "earth sciences", and comparisons of Earth and the solar system's other rocky planets.
Applications.
In terms of applications, a massive number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first car was introduced by Karl Benz in 1885. The first airplane flight occurred in 1903, and by the end of the century airliners flew thousands of miles in a matter of hours. The development of the radio, television and computers caused massive changes in the dissemination of information. Advances in biology also led to large increases in food production, as well as the elimination of diseases such as polio by Dr. Jonas Salk. Gene mapping and gene sequencing, invented by Drs. Mark Skolnik and Walter Gilbert, respectively, are the two technologies that made the Human Genome Project feasible. Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering, studies the nature and limits of computation. Subfields include computability, computational complexity, database design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances in computing have contributed to more general scientific development is by facilitating large-scale archiving of scientific data. Contemporary computer science typically distinguishes itself by emphasizing mathematical 'theory' in contrast to the practical emphasis of software engineering.
Einstein's paper "On the Quantum Theory of Radiation" outlined the principles of the stimulated emission of photons. This led to the invention of the Laser (light amplification by the stimulated emission of radiation) and the optical amplifier which ushered in the Information Age. It is optical amplification that allows fiber optic networks to transmit the massive capacity of the Internet.
Based on wireless transmission of electromagnetic radiation and global networks of cellular operation, the mobile phone became a primary means to access the internet.
Developments in political science and economics.
In political science during the 20th century, the study of ideology, behaviouralism and international relations led to a multitude of 'pol-sci' subdisciplines including rational choice theory, voting theory, game theory (also used in economics), psephology, political geography/geopolitics, political anthropology/political psychology/political sociology, political economy, policy analysis, public administration, comparative political analysis and peace studies/conflict analysis. In economics, John Maynard Keynes prompted a division between microeconomics and macroeconomics in the 1920s. Under Keynesian economics, macroeconomic trends can overwhelm economic choices made by individuals. Governments should promote aggregate demand for goods as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism. Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the 1970s, monetarism was adapted into supply-side economics, which advocates reducing taxes as a means to increase the amount of money available for economic expansion. Other modern schools of economic thought are New Classical economics and New Keynesian economics. New Classical economics was developed in the 1970s, emphasizing solid microeconomics as the basis for macroeconomic growth. New Keynesian economics was created partially in response to New Classical economics. It shows how imperfect competition and market rigidities mean that monetary policy has real effects, and it enables the analysis of different policies.
Developments in psychology, sociology, and anthropology.
Psychology in the 20th century saw a rejection of Freud's theories as being too unscientific, and a reaction against Edward Titchener's atomistic approach of the mind. This led to the formulation of behaviorism by John B. Watson, which was popularized by B.F. Skinner. Behaviorism proposed epistemologically limiting psychological study to overt behavior, since that could be reliably measured. Scientific knowledge of the "mind" was considered too metaphysical, hence impossible to achieve. The final decades of the 20th century have seen the rise of cognitive science, which considers the mind as once again a subject for investigation, using the tools of psychology, linguistics, computer science, philosophy, and neurobiology. New methods of visualizing the activity of the brain, such as PET scans and CAT scans, began to exert their influence as well, leading some researchers to investigate the mind by investigating the brain, rather than cognition. These new forms of investigation assume that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. Evolutionary theory was applied to behavior and introduced to anthropology and psychology, through the works of cultural anthropologist Napoleon Chagnon. Physical anthropology would become biological anthropology, incorporating elements of evolutionary biology.
American sociology in the 1940s and 1950s was dominated largely by Talcott Parsons, who argued that aspects of society that promoted structural integration were therefore "functional". This structural functionalism approach was questioned in the 1960s, when sociologists came to see this approach as merely a justification for inequalities present in the status quo. In reaction, conflict theory was developed, which was based in part on the philosophies of Karl Marx. Conflict theorists saw society as an arena in which different groups compete for control over resources. Symbolic interactionism also came to be regarded as central to sociological thinking. Erving Goffman saw social interactions as a stage performance, with individuals preparing "backstage" and attempting to control their audience through impression management. While these theories are currently prominent in sociological thought, other approaches exist, including feminist theory, post-structuralism, rational choice theory, and postmodernism.
In the mid-20th century, much of the methodologies of earlier anthropological and ethnographical study were reevaluated with an eye towards research ethics, while at the same time the scope of investigation has broadened far beyond the traditional study of "primitive cultures".
21st century.
In the early 21st century, some concepts that originated in 20th century physics were proven. On 4 July 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, confirmed as such by the following March. Gravitational waves were first detected on 14 September 2015.
The Human Genome Project was declared complete in 2003. The CRISPR gene editing technique developed in 2012 allowed scientists to precisely and easily modify DNA and led to the development of new medicine. In 2020, xenobots, a new class of living robotics, were invented; reproductive capabilities were introduced the following year.
Positive psychology is a branch of psychology founded in 1998 by Martin Seligman that is concerned with the study of happiness, mental well-being, and positive human functioning, and is a reaction to 20th century psychology's emphasis on mental illness and dysfunction.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tfrac{355}{113}"
}
] |
https://en.wikipedia.org/wiki?curid=14400
|
1440207
|
Symmetric polynomial
|
Polynomial invariant under variable permutations
In mathematics, a symmetric polynomial is a polynomial "P"("X"1, "X"2, ..., "X""n") in "n" variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, "P" is a "symmetric polynomial" if for any permutation σ of the subscripts 1, 2, ..., "n" one has "P"("X"σ(1), "X"σ(2), ..., "X"σ("n")) = "P"("X"1, "X"2, ..., "X""n").
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. Indeed, a theorem called the fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials. This implies that every "symmetric" polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.
Examples.
The following polynomials in two variables "X"1 and "X"2 are symmetric:
formula_0
formula_1
as is the following polynomial in three variables "X"1, "X"2, "X"3:
formula_2
There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is
formula_3
where first a polynomial is constructed that changes sign under every exchange of variables, and taking the square renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant).
On the other hand, the polynomial in two variables
formula_4
is not symmetric, since if one exchanges formula_5 and formula_6 one gets a different polynomial, formula_7. Similarly in three variables
formula_8
has only symmetry under cyclic permutations of the three variables, which is not sufficient to be a symmetric polynomial. However, the following is symmetric:
formula_9
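Such a symmetry check can also be carried out mechanically. The following SymPy sketch (assuming SymPy is available; variable names mirror the examples above and are otherwise illustrative) tests whether a polynomial is unchanged by every permutation of its variables, distinguishing the merely cyclic example from the fully symmetric one.

```python
# Sketch: test full symmetry of the three-variable examples above with SymPy.
from itertools import permutations
from sympy import symbols, expand

X1, X2, X3 = symbols('X1 X2 X3')

cyclic_only = X1**4*X2**2*X3 + X1*X2**4*X3**2 + X1**2*X2*X3**4
fully_symmetric = (cyclic_only
                   + X1**4*X2*X3**2 + X1*X2**2*X3**4 + X1**2*X2**4*X3)

def is_symmetric(poly, variables):
    """True if poly is invariant under every permutation of the variables."""
    return all(
        expand(poly - poly.subs(list(zip(variables, perm)), simultaneous=True)) == 0
        for perm in permutations(variables)
    )

print(is_symmetric(cyclic_only, (X1, X2, X3)))      # False: only cyclic symmetry
print(is_symmetric(fully_symmetric, (X1, X2, X3)))  # True: full symmetry
```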
Applications.
Galois theory.
One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree "n" having "n" roots in a given field. These "n" roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function "f" of the "n" roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if "f" is given by a symmetric polynomial.
This yields the approach to solving polynomial equations by inverting this map, "breaking" the symmetry – given the coefficients of the polynomial (the elementary symmetric polynomials in the roots), how can one recover the roots?
This leads to studying solutions of polynomials using the permutation group of the roots, originally in the form of Lagrange resolvents, later developed in Galois theory.
Relation with the roots of a monic univariate polynomial.
Consider a monic polynomial in "t" of degree "n"
formula_10
with coefficients "a""i" in some field "K". There exist "n" roots "x"1...,"x""n" of "P" in some possibly larger field (for instance if "K" is the field of real numbers, the roots will exist in the field of complex numbers); some of the roots might be equal, but the fact that one has "all" roots is expressed by the relation
formula_11
By comparing coefficients one finds that
formula_12
These are in fact just instances of Vieta's formulas. They show that all coefficients of the polynomial are given in terms of the roots by a symmetric polynomial expression: although for a given polynomial "P" there may be qualitative differences between the roots (like lying in the base field "K" or not, being simple or multiple roots), none of this affects the way the roots occur in these expressions.
Now one may change the point of view, by taking the roots rather than the coefficients as basic parameters for describing "P", and considering them as indeterminates rather than as constants in an appropriate field; the coefficients "a""i" then become just the particular symmetric polynomials given by the above equations. Those polynomials, without the sign formula_13, are known as the elementary symmetric polynomials in "x"1, ..., "x""n". A basic fact, known as the fundamental theorem of symmetric polynomials, states that "any" symmetric polynomial in "n" variables can be given by a polynomial expression in terms of these elementary symmetric polynomials. It follows that any symmetric polynomial expression in the roots of a monic polynomial can be expressed as a polynomial in the "coefficients" of the polynomial, and in particular that its value lies in the base field "K" that contains those coefficients. Thus, when working only with such symmetric polynomial expressions in the roots, it is unnecessary to know anything particular about those roots, or to compute in any larger field than "K" in which those roots may lie. In fact the values of the roots themselves become rather irrelevant, and the necessary relations between coefficients and symmetric polynomial expressions can be found by computations in terms of symmetric polynomials only. An example of such relations are Newton's identities, which express the sum of any fixed power of the roots in terms of the elementary symmetric polynomials.
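As a concrete check of these relations, the short SymPy sketch below (illustrative only) expands (t − x1)(t − x2)(t − x3) and reads off the coefficients, recovering the elementary symmetric polynomials with the alternating signs described above.

```python
# Sketch: Vieta's formulas for n = 3, obtained by expanding the factored form.
from sympy import symbols, expand, Poly

t, x1, x2, x3 = symbols('t x1 x2 x3')

P = expand((t - x1) * (t - x2) * (t - x3))
leading, a2, a1, a0 = Poly(P, t).all_coeffs()   # coefficients of t^3, t^2, t, 1

print(a2)   # -x1 - x2 - x3            = -e1(x1, x2, x3)
print(a1)   #  x1*x2 + x1*x3 + x2*x3   = +e2(x1, x2, x3)
print(a0)   # -x1*x2*x3                = -e3(x1, x2, x3)
```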
Special kinds of symmetric polynomials.
There are a few types of symmetric polynomials in the variables "X"1, "X"2, ..., "X""n" that are fundamental.
Elementary symmetric polynomials.
For each nonnegative integer "k", the elementary symmetric polynomial "e""k"("X"1, ..., "X""n") is the sum of all distinct products of "k" distinct variables. (Some authors denote it by σ"k" instead.) For "k" = 0 there is only the empty product so "e"0("X"1, ..., "X""n") = 1, while for "k" > "n", no products at all can be formed, so "e""k"("X"1, "X"2, ..., "X""n") = 0 in these cases. The remaining "n" elementary symmetric polynomials are building blocks for all symmetric polynomials in these variables: as mentioned above, any symmetric polynomial in the variables considered can be obtained from these elementary symmetric polynomials using multiplications and additions only. In fact one has the following more detailed facts: any symmetric polynomial "P" in "X"1, ..., "X""n" can be written as a polynomial expression in the elementary symmetric polynomials "e""k"("X"1, ..., "X""n") with 1 ≤ "k" ≤ "n"; this expression is unique up to equivalence of polynomial expressions; and if "P" has integral coefficients, then the polynomial expression also has integral coefficients.
For example, for "n" = 2, the relevant elementary symmetric polynomials are "e"1("X"1, "X"2) = "X"1 + "X"2, and "e"2("X"1, "X"2) = "X"1"X"2. The first polynomial in the list of examples above can then be written as
formula_14
(for a proof that this is always possible see the fundamental theorem of symmetric polynomials).
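This particular identity is also easy to verify by direct expansion; the following SymPy sketch (a simple check, not a general algorithm) confirms it.

```python
# Sketch: verify X1^3 + X2^3 - 7 = e1^3 - 3*e2*e1 - 7 for two variables.
from sympy import symbols, expand

X1, X2 = symbols('X1 X2')
e1 = X1 + X2        # elementary symmetric polynomial e1(X1, X2)
e2 = X1 * X2        # elementary symmetric polynomial e2(X1, X2)

lhs = X1**3 + X2**3 - 7
rhs = e1**3 - 3*e2*e1 - 7

print(expand(lhs - rhs) == 0)   # True
```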
Monomial symmetric polynomials.
Powers and products of elementary symmetric polynomials work out to rather complicated expressions. If one seeks basic "additive" building blocks for symmetric polynomials, a more natural choice is to take those symmetric polynomials that contain only one type of monomial, with only those copies required to obtain symmetry. Any monomial in "X"1, ..., "X""n" can be written as "X"1α1..."X""n"α"n" where the exponents α"i" are natural numbers (possibly zero); writing α = (α1...,α"n") this can be abbreviated to "X" α. The monomial symmetric polynomial "m"α("X"1, ..., "X""n") is defined as the sum of all monomials "X"β where β ranges over all "distinct" permutations of (α1...,α"n"). For instance one has
formula_15,
formula_16
Clearly "m"α = "m"β when β is a permutation of α, so one usually considers only those "m"α for which α1 ≥ α2 ≥ ... ≥ α"n", in other words for which α is a partition of an integer.
These monomial symmetric polynomials form a vector space basis: every symmetric polynomial "P" can be written as a linear combination of the monomial symmetric polynomials. To do this it suffices to separate the different types of monomial occurring in "P". In particular if "P" has integer coefficients, then so will the linear combination.
The elementary symmetric polynomials are particular cases of monomial symmetric polynomials: for 0 ≤ "k" ≤ "n" one has
formula_17 where α is the partition of "k" into "k" parts 1 (followed by "n" − "k" zeros).
Power-sum symmetric polynomials.
For each integer "k" ≥ 1, the monomial symmetric polynomial "m"("k",0...,0)("X"1, ..., "X""n") is of special interest. It is the power sum symmetric polynomial, defined as
formula_18
All symmetric polynomials can be obtained from the first "n" power sum symmetric polynomials by additions and multiplications, possibly involving rational coefficients. More precisely,
Any symmetric polynomial in "X"1, ..., "X""n" can be expressed as a polynomial expression with rational coefficients in the power sum symmetric polynomials "p"1("X"1, ..., "X""n"), ..., "p""n"("X"1, ..., "X""n").
In particular, the remaining power sum polynomials "p""k"("X"1, ..., "X""n") for "k" > "n" can be so expressed in the first "n" power sum polynomials; for example
formula_19
In contrast to the situation for the elementary and complete homogeneous polynomials, a symmetric polynomial in "n" variables with "integral" coefficients need not be a polynomial function with integral coefficients of the power sum symmetric polynomials.
For example, for "n" = 2, the symmetric polynomial
formula_20
has the expression
formula_21
Using three variables one gets a different expression
formula_22
The corresponding expression was valid for two variables as well (it suffices to set "X"3 to zero), but since it involves "p"3, it could not be used to illustrate the statement for "n" = 2. The example shows that whether or not the expression for a given monomial symmetric polynomial in terms of the first "n" power sum polynomials involves rational coefficients may depend on "n". But rational coefficients are "always" needed to express elementary symmetric polynomials (except the constant ones, and "e"1 which coincides with the first power sum) in terms of power sum polynomials. The Newton identities provide an explicit method to do this; it involves division by integers up to "n", which explains the rational coefficients. Because of these divisions, the mentioned statement fails in general when coefficients are taken in a field of finite characteristic; however, it is valid with coefficients in any ring containing the rational numbers.
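Both expressions for m(2,1) can be confirmed by direct expansion. The sketch below (SymPy, illustrative) checks the two-variable identity with its rational coefficients and the three-variable identity that avoids them.

```python
# Sketch: check the expressions of m_(2,1) in terms of power sums.
from sympy import symbols, Rational, expand

X1, X2, X3 = symbols('X1 X2 X3')

# Two variables: rational coefficients are required.
p1, p2 = X1 + X2, X1**2 + X2**2
m21 = X1**2*X2 + X1*X2**2
print(expand(m21 - (Rational(1, 2)*p1**3 - Rational(1, 2)*p2*p1)) == 0)  # True

# Three variables: the same monomial symmetric polynomial needs no fractions.
q1 = X1 + X2 + X3
q2 = X1**2 + X2**2 + X3**2
q3 = X1**3 + X2**3 + X3**3
m21_3 = X1**2*X2 + X1*X2**2 + X1**2*X3 + X1*X3**2 + X2**2*X3 + X2*X3**2
print(expand(m21_3 - (q1*q2 - q3)) == 0)  # True
```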
Complete homogeneous symmetric polynomials.
For each nonnegative integer "k", the complete homogeneous symmetric polynomial "h""k"("X"1, ..., "X""n") is the sum of all distinct monomials of degree "k" in the variables "X"1, ..., "X""n". For instance
formula_23
The polynomial "h""k"("X"1, ..., "X""n") is also the sum of all distinct monomial symmetric polynomials of degree "k" in "X"1, ..., "X""n", for instance for the given example
formula_24
All symmetric polynomials in these variables can be built up from complete homogeneous ones: any symmetric polynomial in "X"1, ..., "X""n" can be obtained from the complete homogeneous symmetric polynomials "h"1("X"1, ..., "X""n"), ..., "h""n"("X"1, ..., "X""n") via multiplications and additions. More precisely:
Any symmetric polynomial "P" in "X"1, ..., "X""n" can be written as a polynomial expression in the polynomials "h""k"("X"1, ..., "X""n") with 1 ≤ "k" ≤ "n".
If "P" has integral coefficients, then the polynomial expression also has integral coefficients.
For example, for "n" = 2, the relevant complete homogeneous symmetric polynomials are "h"1("X"1, "X"2) = "X"1 + "X"2 and "h"2("X"1, "X"2) = "X"12 + "X"1"X"2 + "X"22. The first polynomial in the list of examples above can then be written as
formula_25
As in the case of power sums, the given statement applies in particular to the complete homogeneous symmetric polynomials beyond "h""n"("X"1, ..., "X""n"), allowing them to be expressed in terms of the ones up to that point; again the resulting identities become invalid when the number of variables is increased.
An important aspect of complete homogeneous symmetric polynomials is their relation to elementary symmetric polynomials, which can be expressed as the identities
formula_26, for all "k" > 0, and any number of variables "n".
Since "e"0("X"1, ..., "X""n") and "h"0("X"1, ..., "X""n") are both equal to 1, one can isolate either the first or the last term of these summations; the former gives a set of equations that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials, and the latter gives a set of equations that allows doing the inverse. This implicitly shows that any symmetric polynomial can be expressed in terms of the "h""k"("X"1, ..., "X""n") with 1 ≤ "k" ≤ "n": one first expresses the symmetric polynomial in terms of the elementary symmetric polynomials, and then expresses those in terms of the mentioned complete homogeneous ones.
Schur polynomials.
Another class of symmetric polynomials is that of the Schur polynomials, which are of fundamental importance in the applications of symmetric polynomials to representation theory. They are however not as easy to describe as the other kinds of special symmetric polynomials; see the main article for details.
Symmetric polynomials in algebra.
Symmetric polynomials are important to linear algebra, representation theory, and Galois theory. They are also important in combinatorics, where they are mostly studied through the ring of symmetric functions, which avoids having to carry around a fixed number of variables all the time.
Alternating polynomials.
Analogous to symmetric polynomials are alternating polynomials: polynomials that, rather than being "invariant" under permutation of the entries, change according to the sign of the permutation.
These are all products of the Vandermonde polynomial and a symmetric polynomial, and form a quadratic extension of the ring of symmetric polynomials: the Vandermonde polynomial is a square root of the discriminant.
|
[
{
"math_id": 0,
"text": "X_1^3+ X_2^3-7"
},
{
"math_id": 1,
"text": "4 X_1^2X_2^2 +X_1^3X_2 + X_1X_2^3 +(X_1+X_2)^4"
},
{
"math_id": 2,
"text": "X_1 X_2 X_3 - 2 X_1 X_2 - 2 X_1 X_3 - 2 X_2 X_3"
},
{
"math_id": 3,
"text": "\\prod_{1\\leq i<j\\leq n}(X_i-X_j)^2"
},
{
"math_id": 4,
"text": "X_1 - X_2"
},
{
"math_id": 5,
"text": "X_1"
},
{
"math_id": 6,
"text": "X_2"
},
{
"math_id": 7,
"text": "X_2 - X_1"
},
{
"math_id": 8,
"text": "X_1^4X_2^2X_3 + X_1X_2^4X_3^2 + X_1^2X_2X_3^4"
},
{
"math_id": 9,
"text": "X_1^4X_2^2X_3 + X_1X_2^4X_3^2 + X_1^2X_2X_3^4 +\n X_1^4X_2X_3^2 + X_1X_2^2X_3^4 + X_1^2X_2^4X_3"
},
{
"math_id": 10,
"text": "P=t^n+a_{n-1}t^{n-1}+\\cdots+a_2t^2+a_1t+a_0"
},
{
"math_id": 11,
"text": "P = t^n+a_{n-1}t^{n-1}+\\cdots+a_2t^2+a_1t+a_0=(t-x_1)(t-x_2)\\cdots(t-x_n)."
},
{
"math_id": 12,
"text": "\\begin{align}\na_{n-1}&=-x_1-x_2-\\cdots-x_n\\\\\na_{n-2}&=x_1x_2+x_1x_3+\\cdots+x_2x_3+\\cdots+x_{n-1}x_n = \\textstyle\\sum_{1\\leq i<j\\leq n}x_ix_j\\\\\n& {}\\ \\, \\vdots\\\\\na_1&=(-1)^{n-1}(x_2x_3\\cdots x_n+x_1x_3x_4\\cdots x_n+\\cdots+x_1x_2\\cdots x_{n-2}x_n+x_1x_2\\cdots x_{n-1})\n = \\textstyle(-1)^{n-1}\\sum_{i=1}^n\\prod_{j\\neq i}x_j\\\\\na_0&=(-1)^nx_1x_2\\cdots x_n.\n\\end{align}"
},
{
"math_id": 13,
"text": "(-1)^{n-i}"
},
{
"math_id": 14,
"text": "X_1^3+X_2^3-7=e_1(X_1,X_2)^3-3e_2(X_1,X_2)e_1(X_1,X_2)-7"
},
{
"math_id": 15,
"text": "m_{(3,1,1)}(X_1,X_2,X_3)=X_1^3X_2X_3+X_1X_2^3X_3+X_1X_2X_3^3"
},
{
"math_id": 16,
"text": "m_{(3,2,1)}(X_1,X_2,X_3)=X_1^3X_2^2X_3+X_1^3X_2X_3^2+X_1^2X_2^3X_3+X_1^2X_2X_3^3+X_1X_2^3X_3^2+X_1X_2^2X_3^3."
},
{
"math_id": 17,
"text": "e_k(X_1,\\ldots,X_n)=m_\\alpha(X_1,\\ldots,X_n)"
},
{
"math_id": 18,
"text": "p_k(X_1,\\ldots,X_n) = X_1^k + X_2^k + \\cdots + X_n^k ."
},
{
"math_id": 19,
"text": "p_3(X_1,X_2)=\\textstyle\\frac32p_2(X_1,X_2)p_1(X_1,X_2)-\\frac12p_1(X_1,X_2)^3."
},
{
"math_id": 20,
"text": "m_{(2,1)}(X_1,X_2) = X_1^2 X_2 + X_1 X_2^2"
},
{
"math_id": 21,
"text": " m_{(2,1)}(X_1,X_2)= \\textstyle\\frac12p_1(X_1,X_2)^3-\\frac12p_2(X_1,X_2)p_1(X_1,X_2)."
},
{
"math_id": 22,
"text": "\\begin{align}m_{(2,1)}(X_1,X_2,X_3) &= X_1^2 X_2 + X_1 X_2^2 + X_1^2 X_3 + X_1 X_3^2 + X_2^2 X_3 + X_2 X_3^2\\\\\n &= p_1(X_1,X_2,X_3)p_2(X_1,X_2,X_3)-p_3(X_1,X_2,X_3).\n\\end{align}"
},
{
"math_id": 23,
"text": "h_3(X_1,X_2,X_3) = X_1^3+X_1^2X_2+X_1^2X_3+X_1X_2^2+X_1X_2X_3+X_1X_3^2+X_2^3+X_2^2X_3+X_2X_3^2+X_3^3."
},
{
"math_id": 24,
"text": "\\begin{align}\n h_3(X_1,X_2,X_3)&=m_{(3)}(X_1,X_2,X_3)+m_{(2,1)}(X_1,X_2,X_3)+m_{(1,1,1)}(X_1,X_2,X_3)\\\\\n &=(X_1^3+X_2^3+X_3^3)+(X_1^2X_2+X_1^2X_3+X_1X_2^2+X_1X_3^2+X_2^2X_3+X_2X_3^2)+(X_1X_2X_3).\\\\\n\\end{align}"
},
{
"math_id": 25,
"text": "X_1^3+ X_2^3-7 = -2h_1(X_1,X_2)^3+3h_1(X_1,X_2)h_2(X_1,X_2)-7."
},
{
"math_id": 26,
"text": "\\sum_{i=0}^k(-1)^i e_i(X_1,\\ldots,X_n)h_{k-i}(X_1,\\ldots,X_n) = 0"
}
] |
https://en.wikipedia.org/wiki?curid=1440207
|
14402929
|
Generalized Hebbian algorithm
|
Linear feedforward neural network model
The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network for unsupervised learning with applications primarily in principal components analysis. First defined in 1989, it is similar to Oja's rule in its formulation and stability, except it can be applied to networks with multiple outputs. The name originates because of the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons.
Theory.
The GHA combines Oja's rule with the Gram-Schmidt process to produce a learning rule of the form
formula_0,
where "w""ij" defines the synaptic weight or connection strength between the "j"th input and "i"th output neurons, "x" and "y" are the input and output vectors, respectively, and "η" is the "learning rate" parameter.
Derivation.
In matrix form, Oja's rule can be written
formula_1,
and the Gram-Schmidt algorithm is
formula_2,
where "w"("t") is any matrix, in this case representing synaptic weights, "Q"
"η" x xT is the autocorrelation matrix, simply the outer product of inputs, diag is the function that diagonalizes a matrix, and lower is the function that sets all matrix elements on or above the diagonal equal to 0. We can combine these equations to get our original rule in matrix form,
formula_3,
where the function LT sets all matrix elements above the diagonal equal to 0, and note that our output y("t") = "w"("t") x("t") is a linear neuron.
Applications.
The GHA is used in applications where a self-organizing map is necessary, or where a feature or principal components analysis can be used. Examples of such cases include artificial intelligence and speech and image processing.
Its importance comes from the fact that learning is a single-layer process—that is, a synaptic weight changes only depending on the response of the inputs and outputs of that layer, thus avoiding the multi-layer dependence associated with the backpropagation algorithm. It also has a simple and predictable trade-off between learning speed and accuracy of convergence as set by the learning rate parameter η.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\,\\Delta w_{ij} ~ = ~ \\eta\\left(y_i x_j - y_i \\sum_{k=1}^{i} w_{kj} y_k \\right)"
},
{
"math_id": 1,
"text": "\\,\\frac{\\text{d} w(t)}{\\text{d} t} ~ = ~ w(t) Q - \\mathrm{diag} [w(t) Q w(t)^{\\mathrm{T}}] w(t)"
},
{
"math_id": 2,
"text": "\\,\\Delta w(t) ~ = ~ -\\mathrm{lower} [w(t) w(t)^{\\mathrm{T}}] w(t)"
},
{
"math_id": 3,
"text": "\\,\\Delta w(t) ~ = ~ \\eta(t) \\left(\\mathbf{y}(t) \\mathbf{x}(t)^{\\mathrm{T}} - \\mathrm{LT}[\\mathbf{y}(t)\\mathbf{y}(t)^{\\mathrm{T}}] w(t)\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=14402929
|
1440307
|
Dielectrophoresis
|
Particle motion in a non-uniform electric field due to dipole-field interactions
Dielectrophoresis (DEP) is a phenomenon in which a force is exerted on a dielectric particle when it is subjected to a non-uniform electric field. This force does not require the particle to be charged. All particles exhibit dielectrophoretic activity in the presence of electric fields. However, the strength of the force depends strongly on the medium and particles' electrical properties, on the particles' shape and size, as well as on the frequency of the electric field. Consequently, fields of a particular frequency can manipulate particles with great selectivity. This has allowed, for example, the separation of cells or the orientation and manipulation of nanoparticles and nanowires. Furthermore, a study of the change in DEP force as a function of frequency can allow the electrical (or electrophysiological in the case of cells) properties of the particle to be elucidated.
Background and properties.
Although the phenomenon we now call dielectrophoresis was described in passing as far back as the early 20th century, it was only subject to serious study, named and first understood by Herbert Pohl in the 1950s. Recently, dielectrophoresis has been revived due to its potential in the manipulation of microparticles, nanoparticles and cells.
Dielectrophoresis occurs when a polarizable particle is suspended in a non-uniform electric field. The electric field polarizes the particle, and the poles then experience a force along the field lines, which can be either attractive or repulsive according to the orientation of the dipole. Since the field is non-uniform, the pole experiencing the greatest electric field will dominate over the other, and the particle will move. The orientation of the dipole is dependent on the relative polarizability of the particle and medium, in accordance with Maxwell–Wagner–Sillars polarization. Since the direction of the force is dependent on field gradient rather than field direction, DEP will occur in AC as well as DC electric fields; polarization (and hence the direction of the force) will depend on the relative polarizabilities of particle and medium. If the particle moves in the direction of increasing electric field, the behavior is referred to as positive DEP (sometimes pDEP); if acting to move the particle away from high field regions, it is known as negative DEP (or nDEP). As the relative polarizabilities of the particle and medium are frequency-dependent, varying the energizing signal and measuring the way in which the force changes can be used to determine the electrical properties of particles; this also allows the elimination of electrophoretic motion of particles due to inherent particle charge.
Phenomena associated with dielectrophoresis are electrorotation and traveling wave dielectrophoresis (TWDEP). These require complex signal generation equipment in order to create the required rotating or traveling electric fields, and as a result of this complexity have found less favor among researchers than conventional dielectrophoresis.
Dielectrophoretic force.
The simplest theoretical model is that of a homogeneous sphere surrounded by a conducting dielectric medium. For a homogeneous sphere of radius formula_0 and complex permittivity formula_1 in a medium with complex permittivity formula_2 the (time-averaged) DEP force is:
formula_3
The factor in curly brackets is known as the complex Clausius-Mossotti function and contains all the frequency dependence of the DEP force. Where the particle consists of nested spheres – the most common example of which is the approximation of a spherical cell composed of an inner part (the cytoplasm) surrounded by an outer layer (the cell membrane) – then this can be represented by nested expressions for the shells and the way in which they interact, allowing the properties to be elucidated where there are sufficient parameters related to the number of unknowns being sought.
For a more general field-aligned ellipsoid of radius formula_0 and length formula_4 with complex dielectric constant formula_1 in a medium with complex dielectric constant formula_2 the time-dependent dielectrophoretic force is given by:
formula_5
The complex dielectric constant is formula_6, where formula_7 is the dielectric constant, formula_8 is the electrical conductivity, formula_9 is the field frequency, and formula_10 is the imaginary unit. This expression has been useful for approximating the dielectrophoretic behavior of particles such as red blood cells (as oblate spheroids) or long thin tubes (as prolate ellipsoids) allowing the approximation of the dielectrophoretic response of carbon nanotubes or tobacco mosaic viruses in suspension.
These equations are accurate for particles when the electric field gradients are not very large (e.g., close to electrode edges) or when the particle is not moving along an axis in which the field gradient is zero (such as at the center of an axisymmetric electrode array), as the equations only take into account the dipole formed and not higher order polarization. When the electric field gradients are large, or when there is a field null running through the center of the particle, higher order terms become relevant, and result in higher forces.
To be precise, the time-dependent equation only applies to lossless particles, because loss creates a lag between the field and the induced dipole. When averaged, the effect cancels out and the equation holds true for lossy particles as well. An equivalent time-averaged equation can be easily obtained by replacing E with Erms, or, for sinusoidal voltages by dividing the right hand side by 2.
These models ignore the fact that cells have a complex internal structure and are heterogeneous. A multi-shell model in a low conducting medium can be used to obtain information on the membrane conductivity and the permittivity of the cytoplasm.
For a cell with a shell surrounding a homogeneous core with its surrounding medium considered as a layer, as seen in Figure 2, the overall dielectric response is obtained from a combination of the properties of the shell and core.
formula_11
where 1 is the core (in cellular terms, the cytoplasm), 2 is the shell (in a cell, the membrane). r1 is the radius from the centre of the sphere to the inside of the shell, and r2 is the radius from the centre of the sphere to the outside of the shell.
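Because the frequency dependence of the force is carried entirely by the Clausius-Mossotti factor, its real part is often evaluated over a frequency sweep to predict whether a particle experiences positive or negative DEP. The sketch below (Python/NumPy; the particle and medium properties are illustrative assumptions rather than values from the text, and only the homogeneous-sphere model is used) computes this spectrum and estimates the crossover frequency.

```python
# Sketch: real part of the Clausius-Mossotti factor versus frequency for a
# homogeneous sphere. Material parameters are illustrative assumptions.
import numpy as np

EPS0 = 8.854e-12                       # vacuum permittivity, F/m

def complex_permittivity(eps_r, sigma, omega):
    """eps* = eps + i*sigma/omega, following the sign convention in the text."""
    return eps_r * EPS0 + 1j * sigma / omega

def clausius_mossotti(eps_p, eps_m):
    """CM factor (eps_p* - eps_m*) / (eps_p* + 2*eps_m*) for a sphere."""
    return (eps_p - eps_m) / (eps_p + 2 * eps_m)

freqs = np.logspace(3, 9, 400)         # 1 kHz to 1 GHz
omega = 2 * np.pi * freqs

# Hypothetical particle (e.g. a polymer bead) in a dilute aqueous medium.
eps_particle = complex_permittivity(2.55, 1e-2, omega)
eps_medium = complex_permittivity(78.0, 1e-3, omega)

re_cm = clausius_mossotti(eps_particle, eps_medium).real
crossover = freqs[np.argmin(np.abs(re_cm))]

print(f"Re(CM) spans {re_cm.min():.2f} to {re_cm.max():.2f}")
print(f"approximate crossover frequency: {crossover:.3g} Hz")
# Re(CM) > 0 gives positive DEP (motion toward high field regions);
# Re(CM) < 0 gives negative DEP (motion away from them).
```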
Applications.
Dielectrophoresis can be used to manipulate, transport, separate and sort different types of particles. DEP is being applied in fields such as medical diagnostics, drug discovery, cell therapeutics, and particle filtration.
DEP has been also used in conjunction with semiconductor chip technology for the development of DEP array technology for the simultaneous management of thousands of cells in microfluidic devices. Single microelectrodes on the floor of a flow cell are managed by a CMOS chip to form thousands of dielectrophoretic "cages", each capable of capturing and moving one single cell under control of routing software.
As biological cells have dielectric properties, dielectrophoresis has many biological and medical applications. Instruments capable of separating cancer cells from healthy cells have been made as well as isolating single cells from forensic mixed samples. Platelets have been separated from whole blood with a DEP-activated cell sorter.
DEP has made it possible to characterize and manipulate biological particles like blood cells, stem cells, neurons, pancreatic β cells, DNA, chromosomes, proteins and viruses.
DEP can be used to separate particles with different sign polarizabilities as they move in different directions at a given frequency of the AC field applied. DEP has been applied for the separation of live and dead cells, with the remaining live cells still viable after separation or to force contact between selected single cells to study cell-cell interaction.
DEP has been used to separate strains of bacteria and viruses. DEP can also be used to detect apoptosis soon after drug induction measuring the changes in electrophysiological properties.
As a cell characterisation tool.
DEP is mainly used for characterising cells measuring the changes in their electrical properties. To do this, many techniques are available to quantify the dielectrophoretic response, as it is not possible to directly measure the DEP force.
These techniques rely on indirect measures, obtaining a proportional response of the strength and direction of the force that needs to be scaled to the model spectrum. So most models only consider the Clausius-Mossotti factor of a particle.
The most used techniques are collection rate measurements: this is the simplest and most used technique – electrodes are submerged in a suspension with a known concentration of particles and the particles that collect at the electrode are counted; crossover measurements: the crossover frequency between positive and negative DEP is measured to characterise particles – this technique is used for smaller particles (e.g. viruses), that are difficult to count with the previous technique; particle velocity measurements: this technique measures the velocity and direction of the particles in an electric field gradient; measurement of the levitation height: the levitation height of a particle is proportional to the negative DEP force that is applied. Thus, this technique is good for characterising single particles and is mainly used for larger particles such as cells; impedance sensing: particles collecting at the electrode edge have an influence on the impedance of the electrodes – this change can be monitored to quantify DEP.
In order to study larger populations of cells, the properties can be obtained by analysing the dielectrophoretic spectra.
Implementation.
Electrode geometries.
At the start, electrodes were made mainly from wires or metal sheets.
Nowadays, the electric field in DEP is created by means of electrodes which minimize the magnitude of the voltage needed. This has been possible using fabrication techniques such as photolithography, laser ablation and electron beam patterning.
These small electrodes allow the handling of small bioparticles.
The most used electrode geometries are isometric, polynomial, interdigitated, and crossbar. Isometric geometry is effective for particle manipulation with DEP but repelled particles do not collect in well defined areas and so separation into two homogeneous groups is difficult. Polynomial is a new geometry producing well defined differences in regions of high and low forces and so particles could be collected by positive and negative DEP. This electrode geometry showed that the electrical field was highest at the middle of the inter-electrode gaps. Interdigitated geometry comprises alternating electrode fingers of opposing polarities and is mainly used for dielectrophoretic trapping and analysis. Crossbar geometry is potentially useful for networks of interconnects.
DEP-well electrodes.
These electrodes were developed to offer a high-throughput yet low-cost alternative to conventional electrode structures for DEP. Rather than use photolithographic methods or other microengineering approaches, DEP-well electrodes are constructed from stacking successive conductive and insulating layers in a laminate, after which multiple "wells" are drilled through the structure. If one examines the walls of these wells, the layers appear as interdigitated electrodes running continuously around the walls of the tube. When alternating conducting layers are connected to the two phases of an AC signal, a field gradient formed along the walls moves cells by DEP.
DEP-wells can be used in two modes: for analysis or separation. In the first, the dielectrophoretic properties of cells can be monitored by light absorption measurements: positive DEP attracts the cells to the wall of the well, thus when the well is probed with a light beam the transmitted light intensity increases. The opposite is true for negative DEP, in which the light beam becomes obscured by the cells. Alternatively, the approach can be used to build a separator, where mixtures of cells are forced through large numbers (>100) of wells in parallel; those experiencing positive DEP are trapped in the device whilst the rest are flushed. Switching off the field allows release of the trapped cells into a separate container. The highly parallel nature of the approach means that the chip can sort cells at much higher speeds, comparable to those used by MACS and FACS.
This approach offers several advantages over conventional, photolithography-based devices, including reduced cost, an increased amount of sample that can be analysed simultaneously, and the simplicity of cell motion being reduced to one dimension (cells can only move radially towards or away from the centre of the well). Devices manufactured to use the DEP-well principle are marketed under the DEPtech brand.
Dielectrophoresis field-flow fractionation.
The utilization of the difference between dielectrophoretic forces exerted on different particles in nonuniform electric fields is known as DEP separation. The exploitation of DEP forces has been classified into two groups: DEP migration and DEP retention. DEP migration uses DEP forces that exert opposite signs of force on different particle types to attract some of the particles and repel others. DEP retention uses the balance between DEP and fluid-flow forces. Particles experiencing repulsive and weak attractive DEP forces are eluted by fluid flow, whereas particles experiencing strong attractive DEP forces are trapped at electrode edges against flow drag.
Dielectrophoresis field-flow fractionation (DEP-FFF), introduced by Davis and Giddings, is a family of chromatographic-like separation methods. In DEP-FFF, DEP forces are combined with drag flow to fractionate a sample of different types of particles. Particles are injected into a carrier flow that passes through the separation chamber, with an external separating force (a DEP force) being applied perpendicular to the flow. By means of different factors, such as diffusion and steric, hydrodynamic, dielectric and other effects, or a combination thereof, particles (<1 μm in diameter) with different dielectric or diffusive properties attain different positions away from the chamber wall and, in turn, exhibit different characteristic concentration profiles. Particles that move further away from the wall reach higher positions in the parabolic velocity profile of the liquid flowing through the chamber and will be eluted from the chamber at a faster rate.
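The link between equilibrium height and elution order can be illustrated with a minimal sketch that assumes an idealised parabolic flow profile between parallel plates and lets each particle travel at the local fluid velocity at its levitation height; the channel dimensions and heights below are assumed, not taken from a specific device.
<syntaxhighlight lang="python">
# Illustrative sketch of DEP-FFF elution: a particle levitated at height z in a channel of
# height H is carried at the local velocity of the parabolic profile,
# v(z) = 6 * v_mean * (z/H) * (1 - z/H).  All values are assumed for illustration.
H = 200e-6           # channel height, m
L = 0.30             # channel length, m
V_MEAN = 1e-3        # mean carrier velocity, m/s

def local_velocity(z):
    return 6.0 * V_MEAN * (z / H) * (1.0 - z / H)

for name, z_eq in [("particle type A (levitated near the wall)", 10e-6),
                   ("particle type B (levitated higher up)", 60e-6)]:
    v = local_velocity(z_eq)
    print(f"{name}: v = {v * 1e3:.2f} mm/s, elution time = {L / v:.0f} s")
</syntaxhighlight>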
Optical dielectrophoresis.
The use of photoconductive materials (for example, in lab-on-chip devices) allows for localized inducement of dielectrophoretic forces through the application of light. In addition, one can project an image to induce forces in a patterned illumination area, allowing for some complex manipulations. When manipulating living cells, optical dielectrophoresis provides a non-damaging alternative to optical tweezers, as the intensity of light is about 1000 times less.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "\\varepsilon_p^*"
},
{
"math_id": 2,
"text": "\\varepsilon_m^*"
},
{
"math_id": 3,
"text": "\\langle F_\\mathrm{DEP} \\rangle = 2\\pi r^3\\varepsilon_m \\textrm{Re}\\left\\{\\frac{\\varepsilon^*_p - \\varepsilon^*_m}{\\varepsilon^*_p + 2\\varepsilon^*_m}\\right\\}\\nabla \\left|\\vec{E}_{rms}\\right|^2"
},
{
"math_id": 4,
"text": "l"
},
{
"math_id": 5,
"text": "F_\\mathrm{DEP} = \\frac{\\pi r^2 l}{3}\\varepsilon_m \\textrm{Re}\\left\\{\\frac{\\varepsilon^*_p - \\varepsilon^*_m}{\\varepsilon^*_m}\\right\\}\\nabla \\left|\\vec{E}\\right|^2 "
},
{
"math_id": 6,
"text": "\\varepsilon^* = \\varepsilon + \\frac{i\\sigma}{\\omega}"
},
{
"math_id": 7,
"text": "\\varepsilon"
},
{
"math_id": 8,
"text": "\\sigma"
},
{
"math_id": 9,
"text": "\\omega"
},
{
"math_id": 10,
"text": "i"
},
{
"math_id": 11,
"text": " \\varepsilon_{1eff}^*(\\omega)= \\varepsilon_2^*\\frac{(\\frac{r_2}{r_1})^3+2\\frac{\\varepsilon_1^*-\\varepsilon_2^*}{\\varepsilon_1^*+2\\varepsilon_2^*}}{(\\frac{r_2}{r_1})^3-\\frac{\\varepsilon_1^*-\\varepsilon_2^*}{\\varepsilon_1^*+2\\varepsilon_2^*}}"
}
] |
https://en.wikipedia.org/wiki?curid=1440307
|
14404885
|
Grain boundary strengthening
|
Method of strengthening materials by changing grain size
In materials science, grain-boundary strengthening (or Hall–Petch strengthening) is a method of strengthening materials by changing their average crystallite (grain) size. It is based on the observation that grain boundaries are insurmountable borders for dislocations and that the number of dislocations within a grain has an effect on how stress builds up in the adjacent grain, which will eventually activate dislocation sources and thus enable deformation in the neighbouring grain as well. By changing grain size, one can influence the number of dislocations piled up at the grain boundary and the yield strength. For example, heat treatment after plastic deformation and changing the rate of solidification are ways to alter grain size.
Theory.
In grain-boundary strengthening, the grain boundaries act as pinning points impeding further dislocation propagation. Since the lattice structure of adjacent grains differs in orientation, it requires more energy for a dislocation to change directions and move into the adjacent grain. The grain boundary is also much more disordered than inside the grain, which also prevents the dislocations from moving in a continuous slip plane. Impeding this dislocation movement will hinder the onset of plasticity and hence increase the yield strength of the material.
Under an applied stress, existing dislocations and dislocations generated by Frank–Read sources will move through a crystalline lattice until encountering a grain boundary, where the large atomic mismatch between different grains creates a repulsive stress field to oppose continued dislocation motion. As more dislocations propagate to this boundary, dislocation 'pile up' occurs as a cluster of dislocations is unable to move past the boundary. As dislocations generate repulsive stress fields, each successive dislocation will apply a repulsive force to the dislocation incident with the grain boundary. These repulsive forces act as a driving force to reduce the energetic barrier for diffusion across the boundary, such that additional pile up causes dislocation diffusion across the grain boundary, allowing further deformation in the material. Decreasing grain size decreases the amount of possible pile up at the boundary, increasing the amount of applied stress necessary to move a dislocation across a grain boundary. The higher the applied stress needed to move the dislocation, the higher the yield strength. There is thus an inverse relationship between grain size and yield strength, as demonstrated by the Hall–Petch equation. However, when there is a large direction change in the orientation of the two adjacent grains, the dislocation may not necessarily move from one grain to the other but instead create a new source of dislocation in the adjacent grain. The theory remains the same that more grain boundaries create more opposition to dislocation movement and in turn strengthen the material.
There is, however, a limit to this mode of strengthening, as infinitely strong materials do not exist. Grain sizes can range from about (large grains) to (small grains). Lower than this, the size of dislocations begins to approach the size of the grains. At a grain size of about , only one or two dislocations can fit inside a grain (see Figure 1 above). This scheme prohibits dislocation pile-up and instead results in grain boundary diffusion. The lattice resolves the applied stress by grain boundary sliding, resulting in a "decrease" in the material's yield strength.
To understand the mechanism of grain boundary strengthening one must understand the nature of dislocation-dislocation interactions. Dislocations create a stress field around them given by:
formula_0
where G is the material's shear modulus, b is the Burgers vector, and r is the distance from the dislocation. If the dislocations are in the right alignment with respect to each other, the local stress fields they create will repel each other. This helps dislocation movement along grains and across grain boundaries. Hence, the more dislocations are present in a grain, the greater the stress field felt by a dislocation near a grain boundary:
formula_1
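The pile-up argument can be made concrete with a short numerical sketch; the shear modulus, Burgers vector and applied stress below are generic order-of-magnitude values rather than data for a specific alloy, and the 1/(2π) prefactor is one common choice for the proportionality in formula_0.
<syntaxhighlight lang="python">
# Illustrative pile-up estimate: one dislocation contributes a stress ~ G*b/(2*pi*r) at
# distance r, and n piled-up dislocations add their contributions at the boundary
# (tau_felt = tau_applied + n * tau_dislocation).  All numbers are generic and assumed.
from math import pi

G = 80e9              # shear modulus, Pa (order of magnitude for steel, assumed)
b = 2.5e-10           # Burgers vector, m (assumed)
TAU_APPLIED = 50e6    # applied resolved shear stress, Pa (assumed)

def tau_single(r):
    return G * b / (2.0 * pi * r)

r = 50e-9             # distance from the lead dislocation to the boundary, m
for n in (1, 5, 20):
    tau_felt = TAU_APPLIED + n * tau_single(r)
    print(f"n = {n:2d} piled-up dislocations -> stress at the boundary ~ {tau_felt / 1e6:.0f} MPa")
</syntaxhighlight>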
Interphase boundaries can also contribute to grain boundary strengthening, particularly in composite materials and precipitation-hardened alloys. Coherent IPBs, in particular, can provide additional barriers to dislocation motion, similar to grain boundaries. In contrast, non-coherent IPBs and partially coherent IPBs can act as sources of dislocations, which can lead to localized deformation and affect the mechanical properties of the material.
Subgrain strengthening.
A subgrain is a part of the grain that is only slightly disoriented from other parts of the grain. Current research is being done to see the effect of subgrain strengthening in materials. Depending on the processing of the material, subgrains can form within the grains of the material. For example, when Fe-based material is ball-milled for long periods of time (e.g. 100+ hours), subgrains of 60–90 nm are formed. It has been shown that the higher the density of the subgrains, the higher the yield stress of the material due to the increased subgrain boundary. The strength of the metal was found to vary reciprocally with the size of the subgrain, which is analogous to the Hall–Petch equation. The subgrain boundary strengthening also has a breakdown point of around a subgrain size of 0.1 μm, which is the size where any subgrains smaller than that size would decrease yield strength.
Types of Strengthening Boundaries.
Coherent Interphase Boundaries.
Coherent grain boundaries are those in which the crystal lattice of adjacent grains is continuous across the boundary. In other words, the crystallographic orientation of the grains on either side of the boundary is related by a small rotation or translation. Coherent grain boundaries are typically observed in materials with small grain sizes or in highly ordered materials such as single crystals. Because the crystal lattice is continuous across the boundary, there are no defects or dislocations associated with coherent grain boundaries. As a result, they do not act as barriers to the motion of dislocations and have little effect on the strength of a material. However, they can still affect other properties such as diffusion and grain growth.
When solid solutions become supersaturated and precipitation occurs, tiny particles are formed. These particles typically have interphase boundaries that match up with the matrix, despite differences in interatomic spacing between the particle and the matrix. This creates a coherency strain, which causes distortion. Dislocations respond to the stress field of a coherent particle in a way similar to how they interact with solute atoms of different sizes. It is worth noting that the interfacial energy can also influence the kinetics of phase transformations and precipitation processes. For instance, the energy associated with a strained coherent interface can reach a critical level as the precipitate grows, leading to a transition from a coherent to a disordered (non-coherent) interface. This transition occurs when the energy associated with maintaining the coherency becomes too high, and the system seeks a lower energy configuration. This happens when particle dispersion is introduced into a matrix. Dislocations pass through small particles and bow between large particles or particles with disordered interphase boundaries. The predominant slip mechanism determines the contribution to strength, which depends on factors such as particle size and volume fraction.
Partially-coherent Interphase Boundaries.
A partially coherent interphase boundary is an intermediate type of IPB that lies between the completely coherent and non-coherent IPBs. In this type of boundary, there is a partial match between the atomic arrangements of the particle and the matrix, but not a perfect match. As a result, coherency strains are partially relieved, but not completely eliminated. The periodic introduction of dislocations along the boundary plays a key role in partially relieving the coherency strains. These dislocations act as periodic defects that accommodate the lattice mismatch between the particle and the matrix. The dislocations can be introduced during the precipitation process or during subsequent annealing treatments.
Non-coherent Interphase Boundaries.
Incoherent grain boundaries are those in which there is a significant mismatch in crystallographic orientation between adjacent grains. This results in a discontinuity in the crystal lattice across the boundary, and the formation of a variety of defects such as dislocations, stacking faults, and grain boundary ledges. The presence of these defects creates a barrier to the motion of dislocations and leads to a strengthening effect. This effect is more pronounced in materials with smaller grain sizes, as there are more grain boundaries to impede dislocation motion. In addition to the barrier effect, incoherent grain boundaries can also act as sources and sinks for dislocations. This can lead to localized plastic deformation and affect the overall mechanical response of a material.
When small particles are formed through precipitation from supersaturated solid solutions, their interphase boundaries may not be coherent with the matrix. In such cases, the atomic bonds do not match up across the interface and there is a misfit between the particle and the matrix. This misfit gives rise to a non-coherency strain, which can cause the formation of dislocations at the grain boundary. As a result, the properties of the small particle can be different from those of the matrix. The size at which non-coherent grain boundaries form depends on the lattice misfit and the interfacial energy.
Interfacial Energy.
Understanding the interfacial energy of materials with different types of interphase boundaries (IPBs) provides valuable insights into several aspects of their behavior, including thermodynamic stability, deformation behavior, and phase evolution.
Grain Boundary Sliding and Dislocation Transmission.
Interfacial energy affects the mechanisms of grain boundary sliding and dislocation transmission. Higher interfacial energy promotes greater resistance to grain boundary sliding, as the higher energy barriers inhibit the relative movement of adjacent grains. Additionally, dislocations that encounter grain boundaries can either transmit across the boundary or be reflected back into the same grain. The interfacial energy influences the likelihood of dislocation transmission, with higher interfacial energy barriers impeding dislocation motion and enhancing grain boundary strengthening.
Grain Boundary Orientation.
High-angle grain boundaries, which have large misorientations between adjacent grains, tend to have higher interfacial energy and are more effective in impeding dislocation motion. In contrast, low-angle grain boundaries with small misorientations and lower interfacial energy may allow for easier dislocation transmission and exhibit weaker grain boundary strengthening effects.
Grain Boundary Engineering.
Grain boundary engineering involves manipulating the grain boundary structure and energy to enhance mechanical properties. By controlling the interfacial energy, it is possible to engineer materials with desirable grain boundary characteristics, such as increased interfacial area, higher grain boundary density, or specific grain boundary types.
Introducing alloying elements into the material can alter the interfacial energy of grain boundaries. Alloying can result in segregation of solute atoms at the grain boundaries, which can modify the atomic arrangements and bonding, and thereby influence the interfacial energy.
Applying surface treatments or coatings can modify the interfacial energy of grain boundaries. Surface modification techniques, such as chemical treatments or deposition of thin films, can alter the surface energy and consequently affect the grain boundary energy.
Thermal treatments can be employed to modify the interfacial energy of grain boundaries. Annealing at specific temperatures and durations can induce atomic rearrangements, diffusion, and stress relaxation at the grain boundaries, leading to changes in the interfacial energy.
Once the interfacial energy is controlled, grain boundaries can be manipulated to enhance their strengthening effects.
Applying severe plastic deformation techniques, such as equal-channel angular pressing (ECAP) or high-pressure torsion (HPT), can lead to grain refinement and the creation of new grain boundaries with tailored characteristics. These refined grain structures can exhibit a high density of grain boundaries, including high-angle boundaries, which can contribute to enhanced grain boundary strengthening.
Utilizing specific thermomechanical processing routes, such as rolling, forging, or extrusion, can result in the creation of a desired texture and the development of specific grain boundary structures. These processing routes can promote the formation of specific grain boundary types and orientations, leading to improved grain boundary strengthening.
Hall Petch relationship.
There is an inverse relationship between delta yield strength and grain size to some power, "x".
formula_2
where "k" is the strengthening coefficient and both "k" and "x" are material specific. Assuming a narrow monodisperse grain size distribution in a polycrystalline material, the smaller the grain size, the smaller the repulsion stress felt by a grain boundary dislocation and the higher the applied stress needed to propagate dislocations through the material.
The relation between yield stress and grain size is described mathematically by the Hall–Petch equation:
formula_3
where "σy" is the yield stress, "σ0" is a materials constant for the starting stress for dislocation movement (or the resistance of the lattice to dislocation motion), "ky" is the strengthening coefficient (a constant specific to each material), and "d" is the average grain diameter. It is important to note that the H-P relationship is an empirical fit to experimental data, and that the notion that a pileup length of half the grain diameter causes a critical stress for transmission to or generation in an adjacent grain has not been verified by actual observation in the microstructure.
Theoretically, a material could be made infinitely strong if the grains are made infinitely small. This is impossible though, because the lower limit of grain size is a single unit cell of the material. Even then, if the grains of a material are the size of a single unit cell, then the material is in fact amorphous, not crystalline, since there is no long range order, and dislocations can not be defined in an amorphous material. It has been observed experimentally that the microstructure with the highest yield strength is a grain size of about , because grains smaller than this undergo another yielding mechanism, grain boundary sliding. Producing engineering materials with this ideal grain size is difficult because only thin films can be reliably produced with grains of this size. In materials having a bi-disperse grain size distribution, for example those exhibiting abnormal grain growth, hardening mechanisms do not strictly follow the Hall–Petch relationship and divergent behavior is observed.
History.
In the early 1950s two groundbreaking series of papers were written independently on the relationship between grain boundaries and strength.
In 1951, while at the University of Sheffield, E. O. Hall wrote three papers which appeared in volume 64 of the Proceedings of the Physical Society. In his third paper, Hall showed that the length of slip bands or crack lengths correspond to grain sizes and thus a relationship could be established between the two. Hall concentrated on the yielding properties of mild steels.
Based on his experimental work carried out in 1946–1949, N. J. Petch of the University of Leeds, England published a paper in 1953 independent from Hall's. Petch's paper concentrated more on brittle fracture. By measuring the variation in cleavage strength with respect to ferritic grain size at very low temperatures, Petch found a relationship exact to that of Hall's. Thus this important relationship is named after both Hall and Petch.
Reverse or inverse Hall Petch relation.
The Hall–Petch relation predicts that as the grain size decreases the yield strength increases. The Hall–Petch relation was experimentally found to be an effective model for materials with grain sizes ranging from 1 millimeter to 1 micrometer. Consequently, it was believed that if average grain size could be decreased even further to the nanometer length scale the yield strength would increase as well. However, experiments on many nanocrystalline materials demonstrated that if the grains reached a small enough size, the critical grain size which is typically around , the yield strength would either remain constant or decrease with decreasing grains size. This phenomenon has been termed the reverse or inverse Hall–Petch relation. A number of different mechanisms have been proposed for this relation. As suggested by Carlton "et al.", they fall into four categories: (1) dislocation-based, (2) diffusion-based, (3) grain-boundary shearing-based, (4) two-phase-based.
There have been several works done to investigate the mechanism behind the inverse Hall–Petch relationship in numerous materials. In Han's work, a series of molecular dynamics simulations were done to investigate the effect of grain size on the mechanical properties of nanocrystalline graphene under uniaxial tensile loading, with random shapes and random orientations of graphene rings. The simulation was run at grain sizes of nm and at room temperature. It was found that in the grain size range of 3.1 nm to 40 nm, the inverse Hall–Petch relationship was observed. This is because when the grain size decreases at the nm scale, there is an increase in the density of grain boundary junctions, which serves as a source of crack growth or weak bonding. However, it was also observed that at grain sizes below 3.1 nm, a pseudo Hall–Petch relationship was observed, which results in an increase in strength. This is due to a decrease in stress concentration at grain boundary junctions and also due to the stress distribution of 5-7 defects along the grain boundary, where the compressive and tensile stress are produced by the pentagon and heptagon rings, etc. Chen et al. have done research on the inverse Hall–Petch relations of high-entropy CoNiFeAl"x"Cu1–"x" alloys. In the work, polycrystalline models of FCC structured CoNiFeAl0.3Cu0.7 with grain sizes ranging from 7.2 nm to 18.8 nm were constructed to perform uniaxial compression using molecular dynamics simulations. All compression simulations were done after setting the periodic boundary conditions across the three orthogonal directions. It was found that when the grain size is below 12.1 nm the inverse Hall–Petch relation was observed. This is because as the grain size decreases partial dislocations become less prominent, as does deformation twinning. Instead, it was observed that there is a change in the grain orientation and migration of grain boundaries, causing the growth and shrinkage of neighboring grains. These are the mechanisms for the inverse Hall–Petch relations. Sheinerman et al. also studied the inverse Hall–Petch relation for nanocrystalline ceramics. It was found that the critical grain size for the transition from direct Hall–Petch to inverse Hall–Petch fundamentally depends on the activation energy of grain boundary sliding. This is because in direct Hall–Petch the dominant deformation mechanism is intragrain dislocation motion while in inverse Hall–Petch the dominant mechanism is grain boundary sliding. It was concluded that by plotting both the volume fraction of grain boundary sliding and the volume fraction of intragrain dislocation motion as a function of grain size, the critical grain size could be found where the two curves cross.
Other explanations that have been proposed to rationalize the apparent softening of metals with nanosized grains include poor sample quality and the suppression of dislocation pileups.
The pileup of dislocations at grain boundaries is a hallmark mechanism of the Hall–Petch relationship. Once grain sizes drop below the equilibrium distance between dislocations, though, this relationship should no longer be valid. Nevertheless, it is not entirely clear what exactly the dependency of yield stress should be on grain sizes below this point.
Grain refinement.
Grain refinement, also known as "inoculation", is the set of techniques used to implement grain boundary strengthening in metallurgy. The specific techniques and corresponding mechanisms will vary based on what materials are being considered.
One method for controlling grain size in aluminum alloys is by introducing particles to serve as nucleants, such as Al–5%Ti. Grains will grow via heterogeneous nucleation; that is, for a given degree of undercooling beneath the melting temperature, aluminum particles in the melt will nucleate on the surface of the added particles. Grains will grow in the form of dendrites growing radially away from the surface of the nucleant. Solute particles can then be added (called grain refiners) which limit the growth of dendrites, leading to grain refinement. Al-Ti-B alloys are the most common grain refiner for Al alloys; however, novel refiners such as Al3Sc have been suggested.
One common technique is to induce a very small fraction of the melt to solidify at a much higher temperature than the rest; this will generate seed crystals that act as a template when the rest of the material falls to its (lower) melting temperature and begins to solidify. Since a huge number of minuscule seed crystals are present, a nearly equal number of crystallites result, and the size of any one grain is limited.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma \\propto \\frac{Gb} r, "
},
{
"math_id": 1,
"text": "\\tau_\\text{felt} = \\tau_\\text{applied} + n_\\text{dislocation} \\tau_\\text{dislocation} \\, "
},
{
"math_id": 2,
"text": "\\Delta \\tau \\propto {k \\over d^x}"
},
{
"math_id": 3,
"text": "\\sigma_\\text{y} = \\sigma_0 + {k_\\text{y} \\over \\sqrt {d}}"
}
] |
https://en.wikipedia.org/wiki?curid=14404885
|
1440511
|
Isotope geochemistry
|
Aspect of geology studying variations in isotope abundances in the natural environment
Isotope geochemistry is an aspect of geology based upon the study of natural variations in the relative abundances of isotopes of various elements. Variations in isotopic abundance are measured by isotope-ratio mass spectrometry, and can reveal information about the ages and origins of rock, air or water bodies, or processes of mixing between them.
Stable isotope geochemistry is largely concerned with isotopic variations arising from mass-dependent isotope fractionation, whereas radiogenic isotope geochemistry is concerned with the products of natural radioactivity.
Stable isotope geochemistry.
For most stable isotopes, the magnitude of fractionation from kinetic and equilibrium fractionation is very small; for this reason, enrichments are typically reported in "per mil" (‰, parts per thousand). These enrichments (δ) represent the ratio of heavy isotope to light isotope in the sample over the ratio of a standard. That is,
formula_0 ‰
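A minimal numerical illustration of this definition is given below; the standard ratio is an approximate, commonly quoted value and the sample ratio is assumed.
<syntaxhighlight lang="python">
# Per-mil delta notation as defined above: delta = (R_sample / R_standard - 1) * 1000.
def delta_permil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.0112        # approximate 13C/12C ratio of the VPDB standard
r_sample = 0.0109      # assumed measured 13C/12C ratio of an organic sample
print(f"delta13C = {delta_permil(r_sample, R_VPDB):.1f} per mil")   # about -27 per mil
</syntaxhighlight>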
Carbon.
Carbon has two stable isotopes, 12C and 13C, and one radioactive isotope, 14C.
The stable carbon isotope ratio, "δ"13C, is measured against Vienna Pee Dee Belemnite (VPDB). The stable carbon isotopes are fractionated primarily by photosynthesis (Faure, 2004). The 13C/12C ratio is also an indicator of paleoclimate: a change in the ratio in the remains of plants indicates a change in the amount of photosynthetic activity, and thus in how favorable the environment was for the plants. During photosynthesis, organisms using the C3 pathway show different enrichments compared to those using the C4 pathway, allowing scientists not only to distinguish organic matter from abiotic carbon, but also what type of photosynthetic pathway the organic matter was using. Occasional spikes in the global 13C/12C ratio have also been useful as stratigraphic markers for chemostratigraphy, especially during the Paleozoic.
The 14C ratio has been used to track ocean circulation, among other things.
Nitrogen.
Nitrogen has two stable isotopes, 14N and 15N. The ratio between these is measured relative to nitrogen in ambient air. Nitrogen ratios are frequently linked to agricultural activities. Nitrogen isotope data has also been used to measure the amount of exchange of air between the stratosphere and troposphere using data from the greenhouse gas N2O.
Oxygen.
Oxygen has three stable isotopes, 16O, 17O, and 18O. Oxygen ratios are measured relative to Vienna Standard Mean Ocean Water (VSMOW) or Vienna Pee Dee Belemnite (VPDB). Variations in oxygen isotope ratios are used to track water movement, paleoclimate, and atmospheric gases such as ozone and carbon dioxide. Typically, the VPDB oxygen reference is used for paleoclimate, while VSMOW is used for most other applications. Oxygen isotopes appear in anomalous ratios in atmospheric ozone, resulting from mass-independent fractionation. Isotope ratios in fossilized foraminifera have been used to deduce the temperature of ancient seas.
Sulfur.
Sulfur has four stable isotopes, with the following abundances: 32S (0.9502), 33S (0.0075), 34S (0.0421) and 36S (0.0002). These abundances are compared to those found in Cañon Diablo troilite. Variations in sulfur isotope ratios are used to study the origin of sulfur in an orebody and the temperature of formation of sulfur-bearing minerals, as well as to provide a biosignature that can reveal the presence of sulfate-reducing microbes.
Radiogenic isotope geochemistry.
Radiogenic isotopes provide powerful tracers for studying the ages and origins of Earth systems. They are particularly useful to understand mixing processes between different components, because (heavy) radiogenic isotope ratios are not usually fractionated by chemical processes.
Radiogenic isotope tracers are most powerful when used together with other tracers: The more tracers used, the more control on mixing processes. An example of this application is to the evolution of the Earth's crust and Earth's mantle through geological time.
Lead–lead isotope geochemistry.
Lead has four stable isotopes: 204Pb, 206Pb, 207Pb, and 208Pb.
Lead is created in the Earth via decay of actinide elements, primarily uranium and thorium.
Lead isotope geochemistry is useful for providing isotopic dates on a variety of materials. Because the lead isotopes are created by decay of different transuranic elements, the ratios of the four lead isotopes to one another can be very useful in tracking the source of melts in igneous rocks, the source of sediments and even the origin of people via isotopic fingerprinting of their teeth, skin and bones.
It has been used to date ice cores from the Arctic shelf, and provides information on the source of atmospheric lead pollution.
Lead–lead isotope ratios have been successfully used in forensic science to fingerprint bullets, because each batch of ammunition has its own peculiar 204Pb/206Pb vs 207Pb/208Pb ratio.
Samarium–neodymium.
Samarium–neodymium is an isotope system which can be utilised to provide a date as well as isotopic fingerprints of geological materials, and various other materials including archaeological finds (pots, ceramics).
147Sm decays to produce 143Nd with a half-life of 1.06×10^11 years.
Dating is achieved usually by trying to produce an isochron of several minerals within a rock specimen. The initial 143Nd/144Nd ratio is determined.
This initial ratio is modelled relative to CHUR (the Chondritic Uniform Reservoir), which is an approximation of the chondritic material which formed the solar system. CHUR was determined by analysing chondrite and achondrite meteorites.
The difference in the ratio of the sample relative to CHUR can give information on a model age of extraction from the mantle (for which an assumed evolution has been calculated relative to CHUR) and to whether this was extracted from a granitic source (depleted in radiogenic Nd), the mantle, or an enriched source.
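The age determination from an isochron can be sketched numerically as follows; the isochron slope is an assumed illustrative value, and the decay constant is derived from the half-life quoted above.
<syntaxhighlight lang="python">
# Illustrative Sm-Nd isochron age: on a plot of 143Nd/144Nd against 147Sm/144Nd the
# isochron slope equals exp(lambda * t) - 1, so t = ln(1 + slope) / lambda.
from math import log

T_HALF_SM147 = 1.06e11          # years, as quoted above
LAMBDA = log(2) / T_HALF_SM147  # decay constant, 1/yr

slope = 0.0120                  # assumed slope fitted through several minerals of one rock
age = log(1.0 + slope) / LAMBDA
print(f"isochron age ~ {age / 1e9:.2f} billion years")
</syntaxhighlight>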
Rhenium–osmium.
Rhenium and osmium are siderophile elements which are present at very low abundances in the crust. Rhenium undergoes radioactive decay to produce osmium, so the ratio of non-radiogenic to radiogenic osmium varies through time.
Rhenium prefers to enter sulfides more readily than osmium. Hence, during melting of the mantle, rhenium is stripped out, and prevents the osmium–osmium ratio from changing appreciably. This "locks in" an initial osmium ratio of the sample at the time of the melting event. Osmium–osmium initial ratios are used to determine the source characteristic and age of mantle melting events.
Noble gas isotopes.
Natural isotopic variations amongst the noble gases result from both radiogenic and nucleogenic production processes. Because of their unique properties, it is useful to distinguish them from the conventional radiogenic isotope systems described above.
Helium-3.
Helium-3 was trapped in the planet when it formed. Some 3He is being added by meteoric dust, primarily collecting on the bottom of oceans (although due to subduction, all oceanic tectonic plates are younger than continental plates). However, 3He will be degassed from oceanic sediment during subduction, so cosmogenic 3He is not affecting the concentration or noble gas ratios of the mantle.
Helium-3 is created by cosmic ray bombardment, and by lithium spallation reactions which generally occur in the crust. Lithium spallation is the process by which a high-energy neutron bombards a lithium atom, creating a 3He and a 4He ion. This requires significant lithium to adversely affect the 3He/4He ratio.
All degassed helium is lost to space eventually, due to the average speed of helium exceeding the escape velocity for the Earth. Thus, it is assumed the helium content and ratios of Earth's atmosphere have remained essentially stable.
It has been observed that 3He is present in volcano emissions and oceanic ridge samples. How 3He is stored in the planet is under investigation, but it is associated with the mantle and is used as a marker of material of deep origin.
Due to similarities in helium and carbon in magma chemistry, outgassing of helium requires the loss of volatile components (water, carbon dioxide) from the mantle, which happens at depths of less than 60 km. However, 3He is transported to the surface primarily trapped in the crystal lattice of minerals within fluid inclusions.
Helium-4 is created by radiogenic production (by decay of uranium/thorium-series elements). The continental crust has become enriched with those elements relative to the mantle and thus more 4He is produced in the crust than in the mantle.
The ratio (R) of 3He to 4He is often used to represent 3He content. R usually is given as a multiple of the present atmospheric ratio (Ra).
Common values for R/Ra:
3He/4He isotope chemistry is being used to date groundwaters, estimate groundwater flow rates, track water pollution, and provide insights into hydrothermal processes, igneous geology and ore genesis.
Isotopes in actinide decay chains.
Isotopes in the decay chains of actinides are unique amongst radiogenic isotopes because they are both radiogenic and radioactive. Because their abundances are normally quoted as activity ratios rather than atomic ratios, they are best considered separately from the other radiogenic isotope systems.
Protactinium/Thorium – 231Pa/230Th.
Uranium is well mixed in the ocean, and its decay produces 231Pa and 230Th at a constant activity ratio (0.093). The decay products are rapidly removed by adsorption on settling particles, but not at equal rates. 231Pa has a residence time equivalent to that of deep water in the Atlantic basin (around 1000 years), but 230Th is removed more rapidly (within centuries). Thermohaline circulation effectively exports 231Pa from the Atlantic into the Southern Ocean, while most of the 230Th remains in Atlantic sediments. As a result, there is a relationship between 231Pa/230Th in Atlantic sediments and the rate of overturning: faster overturning produces a lower sediment 231Pa/230Th ratio, while slower overturning increases this ratio. The combination of δ13C and 231Pa/230Th can therefore provide a more complete insight into past circulation changes.
Anthropogenic isotopes.
Tritium/helium-3.
Tritium was released to the atmosphere during atmospheric testing of nuclear bombs. Radioactive decay of tritium produces the noble gas helium-3. Comparing the ratio of tritium to helium-3 (3H/3He) allows estimation of the age of recent ground waters. A small amount of tritium is also produced naturally by cosmic ray spallation and spontaneous ternary fission in natural uranium and thorium, but due to the relatively short half-life of tritium and the relatively small quantities (compared to those from anthropogenic sources) those sources of tritium usually play only a secondary role in the analysis of groundwater.
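A minimal sketch of the age estimate is given below; it assumes that all tritiogenic helium-3 is retained in the water and that the two concentrations are expressed in the same units, with the numbers themselves chosen for illustration.
<syntaxhighlight lang="python">
# Illustrative tritium/helium-3 apparent age: with all tritiogenic 3He retained,
# t = (t_half / ln 2) * ln(1 + [3He_trit] / [3H]).
from math import log

T_HALF_TRITIUM = 12.32    # years
tritium = 4.0             # measured 3H concentration (assumed, e.g. in tritium units)
he3_tritiogenic = 12.0    # tritiogenic 3He in the same units (assumed)

age = (T_HALF_TRITIUM / log(2)) * log(1.0 + he3_tritiogenic / tritium)
print(f"apparent groundwater age ~ {age:.1f} years")
</syntaxhighlight>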
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\delta \\ce{^{13}C} = \\left( \\frac{\\left( \\frac{\\ce{^{13}C}}{\\ce{^{12}C}} \\right)_{sample}}{\\left( \\frac{\\ce{^{13}C}}{\\ce{^{12}C}}\\right)_{standard}} -1 \\right) \\times 1000"
}
] |
https://en.wikipedia.org/wiki?curid=1440511
|
14405160
|
Synaptic weight
|
Strength or amplitude of a connection between two nodes in neuroscience and computer science
In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research.
Computation.
In a computational neural network, a vector or set of inputs formula_0 and outputs formula_1, or pre- and post-synaptic neurons respectively, are interconnected with synaptic weights represented by the matrix formula_2, where for a linear neuron
formula_3.
where the rows of the synaptic matrix represent the vector of synaptic weights for the output indexed by formula_4.
The synaptic weight is changed by using a learning rule, the most basic of which is Hebb's rule, which is usually stated in biological terms as
"Neurons that fire together, wire together."
Computationally, this means that if a large signal from one of the input neurons results in a large signal from one of the output neurons, then the synaptic weight between those two neurons will increase. The rule is unstable, however, and is typically modified using such variations as Oja's rule, radial basis functions or the backpropagation algorithm.
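The difference between the plain Hebbian update and Oja's stabilised variant can be seen in a minimal simulation of a single linear neuron; the input statistics and learning rate below are arbitrary illustrative choices.
<syntaxhighlight lang="python">
# Minimal sketch of Hebbian learning and Oja's rule for one linear neuron y = w . x.
# Input statistics and learning rate are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2)) * np.array([3.0, 1.0])   # inputs with unequal variances
eta = 0.01

w_hebb = np.array([0.1, 0.1])
w_oja = np.array([0.1, 0.1])
for x in X:
    y = w_hebb @ x
    w_hebb = w_hebb + eta * y * x                 # plain Hebb: weight norm grows without bound
    y = w_oja @ x
    w_oja = w_oja + eta * y * (x - y * w_oja)     # Oja's rule: keeps the weight vector bounded

print("Hebbian weight norm:", np.linalg.norm(w_hebb))
print("Oja weight (~ leading principal direction):", w_oja / np.linalg.norm(w_oja))
</syntaxhighlight>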
Biology.
For biological networks, the effect of synaptic weights is not as simple as for linear neurons or Hebbian learning. However, biophysical models such as BCM theory have seen some success in mathematically describing these networks.
In the mammalian central nervous system, signal transmission is carried out by interconnected networks of nerve cells, or neurons. For the basic pyramidal neuron, the input signal is carried by the axon, which releases neurotransmitter chemicals into the synapse which is picked up by the dendrites of the next neuron, which can then generate an action potential which is analogous to the output signal in the computational case.
The synaptic weight in this process is determined by several variable factors:
The changes in synaptic weight that occur are known as synaptic plasticity, and the process behind long-term changes (long-term potentiation and depression) is still poorly understood. Hebb's original learning rule was originally applied to biological systems, but has had to undergo many modifications as a number of theoretical and experimental problems came to light.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textbf{x}"
},
{
"math_id": 1,
"text": "\\textbf{y}"
},
{
"math_id": 2,
"text": "w"
},
{
"math_id": 3,
"text": "y_j = \\sum_i w_{ij} x_i ~~\\textrm{or}~~ \\textbf{y} = w\\textbf{x}"
},
{
"math_id": 4,
"text": "j"
}
] |
https://en.wikipedia.org/wiki?curid=14405160
|
144052
|
Torsion subgroup
|
Subgroup of an abelian group consisting of all elements of finite order
In the theory of abelian groups, the torsion subgroup "AT" of an abelian group "A" is the subgroup of "A" consisting of all elements that have finite order (the torsion elements of "A"). An abelian group "A" is called a torsion group (or periodic group) if every element of "A" has finite order and is called torsion-free if every element of "A" except the identity is of infinite order.
The proof that "AT" is closed under the group operation relies on the commutativity of the operation (see examples section).
If "A" is abelian, then the torsion subgroup "T" is a fully characteristic subgroup of "A" and the factor group "A"/"T" is torsion-free. There is a covariant functor from the category of abelian groups to the category of torsion groups that sends every group to its torsion subgroup and every homomorphism to its restriction to the torsion subgroup. There is another covariant functor from the category of abelian groups to the category of torsion-free groups that sends every group to its quotient by its torsion subgroup, and sends every homomorphism to the obvious induced homomorphism (which is easily seen to be well-defined).
If "A" is finitely generated and abelian, then it can be written as the direct sum of its torsion subgroup "T" and a torsion-free subgroup (but this is not true for all infinitely generated abelian groups). In any decomposition of "A" as a direct sum of a torsion subgroup "S" and a torsion-free subgroup, "S" must equal "T" (but the torsion-free subgroup is not uniquely determined). This is a key step in the classification of finitely generated abelian groups.
"p"-power torsion subgroups.
For any abelian group formula_0 and any prime number "p" the set "ATp" of elements of "A" that have order a power of "p" is a subgroup called the "p"-power torsion subgroup or, more loosely, the "p"-torsion subgroup:
formula_1
The torsion subgroup "AT" is isomorphic to the direct sum of its "p"-power torsion subgroups over all prime numbers "p":
formula_2
When "A" is a finite abelian group, "ATp" coincides with the unique Sylow "p"-subgroup of "A".
Each "p"-power torsion subgroup of "A" is a fully characteristic subgroup. More strongly, any homomorphism between abelian groups sends each "p"-power torsion subgroup into the corresponding "p"-power torsion subgroup.
For each prime number "p", this provides a functor from the category of abelian groups to the category of "p"-power torsion groups that sends every group to its "p"-power torsion subgroup, and restricts every homomorphism to the "p"-torsion subgroups. The product over the set of all prime numbers of the restriction of these functors to the category of torsion groups, is a faithful functor from the category of torsion groups to the product over all prime numbers of the categories of "p"-torsion groups. In a sense, this means that studying "p"-torsion groups in isolation tells us everything about torsion groups in general.
⟨ "x", "y" | "x"² = "y"² = 1 ⟩
the element "xy" is a product of two torsion elements, but has infinite order.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(A, +)"
},
{
"math_id": 1,
"text": "A_{T_p}=\\{a\\in A \\;|\\; \\exists n\\in \\mathbb{N}\\;, p^n a = 0\\}.\\;"
},
{
"math_id": 2,
"text": "A_T \\cong \\bigoplus_{p\\in P} A_{T_p}.\\;"
}
] |
https://en.wikipedia.org/wiki?curid=144052
|
1440695
|
Elementary symmetric polynomial
|
Mathematical function
In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial "P" is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree "d" in "n" variables for each positive integer "d" ≤ "n", and it is formed by adding together all distinct products of "d" distinct variables.
Definition.
The elementary symmetric polynomials in "n" variables "X"1, ..., "X""n", written "e""k"("X"1, ..., "X""n") for "k" = 1, ..., "n", are defined by
formula_0
and so forth, ending with
formula_1
In general, for "k" ≥ 0 we define
formula_2
so that "e""k"("X"1, ..., "X""n")
0 if "k" > "n".
Thus, for each positive integer "k" less than or equal to "n" there exists exactly one elementary symmetric polynomial of degree "k" in "n" variables. To form the one that has degree "k", we take the sum of all products of "k"-subsets of the "n" variables. (By contrast, if one performs the same operation using "multisets" of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
Given an integer partition (that is, a finite non-increasing sequence of positive integers) "λ" = ("λ"1, ..., "λ""m"), one defines the symmetric polynomial "eλ"("X"1, ..., "X""n"), also called an elementary symmetric polynomial, by
formula_3.
Sometimes the notation "σ""k" is used instead of "e""k".
Examples.
The following lists the "n" elementary symmetric polynomials for the first four positive values of "n".
For "n"
1:
formula_4
For "n"
2:
formula_5
For "n"
3:
formula_6
For "n"
4:
formula_7
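For concrete numerical values of the variables, the definition can be evaluated directly as sums of products over k-subsets, as in the short sketch below (the sample values are arbitrary).
<syntaxhighlight lang="python">
# Direct evaluation of e_k(X_1, ..., X_n) as the sum of products of all k-subsets.
from itertools import combinations
from math import prod

def elementary_symmetric(values, k):
    return sum(prod(c) for c in combinations(values, k))

X = (2, 3, 5, 7)                      # arbitrary sample values for X_1, ..., X_4
for k in range(1, len(X) + 1):
    print(f"e_{k}{X} = {elementary_symmetric(X, k)}")
# e_1 = 17, e_2 = 101, e_3 = 247, e_4 = 210
</syntaxhighlight>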
Properties.
The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity
formula_8
That is, when we substitute numerical values for the variables "X"1, "X"2, ..., "X""n", we obtain the monic univariate polynomial (with variable "λ") whose roots are the values substituted for "X"1, "X"2, ..., "X""n" and whose coefficients are – up to their sign – the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas.
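This identity is easy to verify numerically: expanding the monic polynomial with prescribed roots reproduces, up to alternating signs, the elementary symmetric polynomials of those roots (the roots below are arbitrary).
<syntaxhighlight lang="python">
# Numerical check of Vieta's formulas: the coefficients of the monic polynomial with the
# given roots are (-1)^k e_k of the roots.
import numpy as np
from itertools import combinations
from math import prod

roots = [2.0, 3.0, 5.0, 7.0]                                  # arbitrary roots
coeffs = np.poly(roots)                                       # highest-degree coefficient first
e = [sum(prod(c) for c in combinations(roots, k)) for k in range(len(roots) + 1)]
print("np.poly coefficients:", coeffs)
print("(-1)^k * e_k:        ", [(-1) ** k * e[k] for k in range(len(roots) + 1)])
</syntaxhighlight>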
The characteristic polynomial of a square matrix is an example of application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain – up to their sign – the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the elements of the diagonal) is the value of "e"1, and thus the sum of the eigenvalues. Similarly, the determinant is – up to the sign – the constant term of the characteristic polynomial, i.e. the value of "e""n". Thus the determinant of a square matrix is the product of the eigenvalues.
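The same relations can be checked for a small matrix, where the elementary symmetric polynomials of the eigenvalues reproduce the trace and the determinant (the matrix is arbitrary).
<syntaxhighlight lang="python">
# Trace and determinant as elementary symmetric polynomials of the eigenvalues.
import numpy as np
from itertools import combinations
from math import prod

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                       # arbitrary 2x2 example
eig = np.linalg.eigvals(A)
e1 = sum(eig)
e2 = sum(prod(c) for c in combinations(eig, 2))
print("trace(A) =", np.trace(A), "  e_1(eigenvalues) =", e1)
print("det(A)   =", np.linalg.det(A), "  e_2(eigenvalues) =", e2)
</syntaxhighlight>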
The set of elementary symmetric polynomials in "n" variables generates the ring of symmetric polynomials in "n" variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring formula_9["e"1("X"1, ..., "X""n"), ..., "en"("X"1, ..., "X""n")]. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For another system of symmetric polynomials with the same property see Complete homogeneous symmetric polynomials, and for a system with a similar, but slightly weaker, property see Power sum symmetric polynomial.
Fundamental theorem of symmetric polynomials.
For any commutative ring "A", denote the ring of symmetric polynomials in the variables "X"1, ..., "X""n" with coefficients in "A" by "A"["X"1, ..., "X""n"]"Sn". This is a polynomial ring in the "n" elementary symmetric polynomials "ek"("X"1, ..., "X""n") for "k"
1, ..., "n".
This means that every symmetric polynomial "P"("X"1, ..., "Xn") ∈ "A"["X"1, ..., "X""n"]"Sn" has a unique representation
formula_10
for some polynomial "Q" ∈ "A"["Y"1, ..., "Yn"]. Another way of saying the same thing is that the ring homomorphism that sends "Yk" to "ek"("X"1, ..., "Xn") for "k"
1, ..., "n" defines an isomorphism between "A"["Y"1, ..., "Yn"] and "A"["X"1, ..., "X""n"]"Sn".
Proof sketch.
The theorem may be proved for symmetric homogeneous polynomials by a double induction with respect to the number of variables "n" and, for fixed "n", with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric).
In the case "n"
1 the result is trivial because every polynomial in one variable is automatically symmetric.
Assume now that the theorem has been proved for all polynomials for "m" < "n" variables and all symmetric polynomials in "n" variables with degree < "d". Every homogeneous symmetric polynomial "P" in "A"["X"1, ..., "X""n"]"Sn" can be decomposed as a sum of homogeneous symmetric polynomials
formula_11
Here the "lacunary part" "P"lacunary is defined as the sum of all monomials in "P" which contain only a proper subset of the "n" variables "X"1, ..., "X""n", i.e., where at least one variable "X""j" is missing.
Because "P" is symmetric, the lacunary part is determined by its terms containing only the variables "X"1, ..., "X""n" − 1, i.e., which do not contain "X""n". More precisely: If "A" and "B" are two homogeneous symmetric polynomials in "X"1, ..., "X""n" having the same degree, and if the coefficient of "A" before each monomial which contains only the variables "X"1, ..., "X""n" − 1 equals the corresponding coefficient of "B", then "A" and "B" have equal lacunary parts. (This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables "X"1, ..., "X""n" − 1.)
But the terms of "P" which contain only the variables "X"1, ..., "X""n" − 1 are precisely the terms that survive the operation of setting "X""n" to 0, so their sum equals "P"("X"1, ..., "X""n" − 1, 0), which is a symmetric polynomial in the variables "X"1, ..., "X""n" − 1 that we shall denote by "P̃"("X"1, ..., "X""n" − 1). By the inductive hypothesis, this polynomial can be written as
formula_12
for some "Q̃". Here the doubly indexed "σ""j","n" − 1 denote the elementary symmetric polynomials in "n" − 1 variables.
Consider now the polynomial
formula_13
Then "R"("X"1, ..., "X""n") is a symmetric polynomial in "X"1, ..., "X""n", of the same degree as "P"lacunary, which satisfies
formula_14
(the first equality holds because setting "X""n" to 0 in "σ""j","n" gives "σ""j","n" − 1, for all "j" < "n"). In other words, the coefficient of "R" before each monomial which contains only the variables "X"1, ..., "X""n" − 1 equals the corresponding coefficient of "P". As we know, this shows that the lacunary part of "R" coincides with that of the original polynomial "P". Therefore the difference "P" − "R" has no lacunary part, and is therefore divisible by the product "X"1···"Xn" of all variables, which equals the elementary symmetric polynomial "σ""n","n". Then writing "P" − "R" = "σ""n","n""Q", the quotient "Q" is a homogeneous symmetric polynomial of degree less than "d" (in fact degree at most "d" − "n") which by the inductive hypothesis can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for "P" − "R" and "R" one finds a polynomial representation for "P".
The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the "n" polynomials "e"1, ..., "en" are algebraically independent over the ring "A".) The fact that the polynomial representation is unique implies that "A"["X"1, ..., "X""n"]"Sn" is isomorphic to "A"["Y"1, ..., "Yn"].
Alternative proof.
The following proof is also inductive, but does not involve other polynomials than those symmetric in "X"1, ..., "X""n", and also leads to a fairly direct procedure to effectively write a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree "d"; different homogeneous components can be decomposed separately. Order the monomials in the variables "X""i" lexicographically, where the individual variables are ordered "X"1 > ... > "X""n", in other words the dominant term of a polynomial is one with the highest occurring power of "X"1, and among those the one with the highest power of "X"2, etc. Furthermore parametrize all products of elementary symmetric polynomials that have degree "d" (they are in fact homogeneous) as follows by partitions of "d". Order the individual elementary symmetric polynomials "e""i"("X"1, ..., "X""n") in the product so that those with larger indices "i" come first, then build for each such factor a column of "i" boxes, and arrange those columns from left to right to form a Young diagram containing "d" boxes in all. The shape of this diagram is a partition of "d", and each partition "λ" of "d" arises for exactly one product of elementary symmetric polynomials, which we shall denote by "e""λ""t" ("X"1, ..., "X""n") (the "t" is present only because traditionally this product is associated to the transpose partition of "λ"). The essential ingredient of the proof is the following simple property, which uses multi-index notation for monomials in the variables "X""i".
Lemma. The leading term of "e""λ""t" ("X"1, ..., "X""n") is "X" "λ".
"Proof". The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor "e""i" ("X"1, ..., "X""n") is clearly "X"1"X"2···"X""i". To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, ..., "i" of the variables, then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is "X" "λ".
Now one proves by induction on the leading monomial in lexicographic order, that any nonzero homogeneous symmetric polynomial "P" of degree "d" can be written as polynomial in the elementary symmetric polynomials. Since "P" is symmetric, its leading monomial has weakly decreasing exponents, so it is some "X" "λ" with "λ" a partition of "d". Let the coefficient of this term be "c", then "P" − "ce""λ""t" ("X"1, ..., "X""n") is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back "ce""λ""t" ("X"1, ..., "X""n") to it, one obtains the sought for polynomial expression for "P".
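The procedure described in this proof is effective and can be implemented directly; the sketch below uses sympy and placeholder symbols E1, ..., En standing for the elementary symmetric polynomials, and is illustrative only (it assumes its input is symmetric).
<syntaxhighlight lang="python">
# Sketch of the reduction in the alternative proof: repeatedly remove the leading monomial
# X^lambda (lex order) by subtracting c * e_{lambda^t}, recorded in placeholder symbols
# E1..En that stand for e_1(X), ..., e_n(X).  Assumes the input polynomial is symmetric.
import sympy as sp
from itertools import combinations

def e_poly(xs, k):
    return sp.Add(*[sp.Mul(*c) for c in combinations(xs, k)])

def to_elementary(P, xs):
    n = len(xs)
    E = sp.symbols(f"E1:{n + 1}")
    poly = sp.Poly(sp.expand(P), *xs)
    out = sp.Integer(0)
    while poly.as_expr() != 0:
        m = poly.monoms()[0]                    # leading exponent vector (lex order)
        c = poly.coeffs()[0]
        # exponent of e_i in e_{lambda^t} is lambda_i - lambda_{i+1}
        exps = [m[i] - (m[i + 1] if i + 1 < n else 0) for i in range(n)]
        out += c * sp.Mul(*[E[i] ** exps[i] for i in range(n)])
        subtract = c * sp.Mul(*[e_poly(xs, i + 1) ** exps[i] for i in range(n)])
        poly = sp.Poly(sp.expand(poly.as_expr() - subtract), *xs)
    return out

x1, x2, x3 = sp.symbols("x1 x2 x3")
print(to_elementary(x1**2 + x2**2 + x3**2, [x1, x2, x3]))   # prints E1**2 - 2*E2
</syntaxhighlight>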
The fact that this expression is unique, or equivalently that all the products (monomials) "e""λ""t" ("X"1, ..., "X""n") of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the "e""λ""t" ("X"1, ..., "X""n") were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables "X""i") the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.
|
[
{
"math_id": 0,
"text": "\\begin{align}\n e_1 (X_1, X_2, \\dots,X_n) &= \\sum_{1 \\leq j \\leq n} X_j,\\\\\n e_2 (X_1, X_2, \\dots,X_n) &= \\sum_{1 \\leq j < k \\leq n} X_j X_k,\\\\\n e_3 (X_1, X_2, \\dots,X_n) &= \\sum_{1 \\leq j < k < l \\leq n} X_j X_k X_l,\\\\\n\\end{align}"
},
{
"math_id": 1,
"text": " e_n (X_1, X_2, \\dots,X_n) = X_1 X_2 \\cdots X_n."
},
{
"math_id": 2,
"text": " e_k (X_1 , \\ldots , X_n )=\\sum_{1\\le j_1 < j_2 < \\cdots < j_k \\le n} X_{j_1} \\dotsm X_{j_k},"
},
{
"math_id": 3,
"text": "e_\\lambda (X_1, \\dots,X_n) = e_{\\lambda_1}(X_1, \\dots, X_n) \\cdot e_{\\lambda_2}(X_1, \\dots, X_n) \\cdots e_{\\lambda_m}(X_1, \\dots, X_n)"
},
{
"math_id": 4,
"text": "e_1(X_1) = X_1."
},
{
"math_id": 5,
"text": "\\begin{align}\n e_1(X_1,X_2) &= X_1 + X_2,\\\\ \n e_2(X_1,X_2) &= X_1X_2.\\,\\\\\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\n e_1(X_1,X_2,X_3) &= X_1 + X_2 + X_3,\\\\ \n e_2(X_1,X_2,X_3) &= X_1X_2 + X_1X_3 + X_2X_3,\\\\\n e_3(X_1,X_2,X_3) &= X_1X_2X_3.\\,\\\\\n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\n e_1(X_1,X_2,X_3,X_4) &= X_1 + X_2 + X_3 + X_4,\\\\\n e_2(X_1,X_2,X_3,X_4) &= X_1X_2 + X_1X_3 + X_1X_4 + X_2X_3 + X_2X_4 + X_3X_4,\\\\\n e_3(X_1,X_2,X_3,X_4) &= X_1X_2X_3 + X_1X_2X_4 + X_1X_3X_4 + X_2X_3X_4,\\\\\n e_4(X_1,X_2,X_3,X_4) &= X_1X_2X_3X_4.\\,\\\\\n\\end{align}"
},
{
"math_id": 8,
"text": "\\prod_{j=1}^n ( \\lambda - X_j)=\\lambda^n - e_1(X_1,\\ldots,X_n)\\lambda^{n-1} + e_2(X_1,\\ldots,X_n)\\lambda^{n-2} + \\cdots +(-1)^n e_n(X_1,\\ldots,X_n)."
},
{
"math_id": 9,
"text": "\\mathbb{Z}"
},
{
"math_id": 10,
"text": " P(X_1,\\ldots, X_n)=Q\\big(e_1(X_1 , \\ldots ,X_n), \\ldots, e_n(X_1 , \\ldots ,X_n)\\big) "
},
{
"math_id": 11,
"text": " P(X_1,\\ldots,X_n)= P_{\\text{lacunary}} (X_1,\\ldots,X_n) + X_1 \\cdots X_n \\cdot Q(X_1,\\ldots,X_n). "
},
{
"math_id": 12,
"text": " \\tilde{P}(X_1, \\ldots, X_{n-1})=\\tilde{Q}(\\sigma_{1,n-1}, \\ldots, \\sigma_{n-1,n-1})"
},
{
"math_id": 13,
"text": "R(X_1, \\ldots, X_{n}):= \\tilde{Q}(\\sigma_{1,n}, \\ldots, \\sigma_{n-1,n}) ."
},
{
"math_id": 14,
"text": "R(X_1, \\ldots, X_{n-1},0) = \\tilde{Q}(\\sigma_{1,n-1}, \\ldots, \\sigma_{n-1,n-1}) = P(X_1, \\ldots,X_{n-1},0)"
}
] |
https://en.wikipedia.org/wiki?curid=1440695
|
14407606
|
Erwin Madelung
|
German physicist (1881–1972)
Erwin Madelung (18 May 1881 – 1 August 1972) was a German physicist.
He was born in 1881 in Bonn. His father was the surgeon Otto Wilhelm Madelung. He earned a doctorate in 1905 from the University of Göttingen, specializing in crystal structure, and eventually became a professor. It was during this time he developed the Madelung constant, which characterizes the net electrostatic effects of all ions in a crystal lattice, and is used to determine the energy of one ion.
In 1921 he succeeded Max Born as the Chair of Theoretical Physics at the Goethe University Frankfurt, which he held until his retirement in 1949. He specialized in atomic physics and quantum mechanics, and it was during this time he developed the Madelung equations, an alternative form of the Schrödinger equation.
He is also known for the Madelung rule, which states that atomic orbitals are filled in order of increasing formula_0 quantum numbers.
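For illustration, the ordering prescribed by the rule can be generated by sorting subshells on "n" + "l", breaking ties by the smaller "n" (a minimal sketch, covering only the s, p, d and f subshells):
```python
# Sort s, p, d, f subshells by (n + l, n) -- the Madelung (aufbau) ordering.
letters = "spdf"
subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
print(" ".join(f"{n}{letters[l]}" for n, l in order))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p ...
```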
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n + l"
}
] |
https://en.wikipedia.org/wiki?curid=14407606
|
1440770
|
Hans Peter Jørgen Julius Thomsen
|
Danish chemist (1826–1909)
Hans Peter Jørgen Julius Thomsen (16 February 1826 – 13 February 1909) was a Danish chemist noted in thermochemistry for the Thomsen–Berthelot principle.
Life and work.
Thomsen was born in Copenhagen, and spent his life in that city. From 1847 to 1856 he taught chemistry at the Polytechnic, where from 1883 to 1892 he was the director. From 1856 to 1866 he was on the staff of the military high school. In 1866 he was appointed professor of chemistry at the university, and retained that chair until his retirement from active work in 1891.
A friend and colleague of Ludwig A. Colding, who was one of the early advocates of the principle of conservation of energy, Thomsen did much to found the field of thermochemistry. In particular, between 1869 and 1882, he carried out a great number of determinations of the heat evolved or absorbed in chemical reactions, such as the formation of salts, oxidation and reduction, and the combustion of organic compounds. His collected results were published from 1882 to 1886 in four volumes under the title , and also a resume in English under the title "Thermochemistry" in 1908. In 1857 he established in Copenhagen a process for manufacturing soda from cryolite, obtained from the west coast of Greenland. Although his efforts at determining the structure of benzene were unsuccessful, the Thomsen graph formula_0 in mathematical graph theory is named after him, from an 1886 paper in which he proposed a benzene structure based on this graph.
Thomsen was elected a member of the Royal Swedish Academy of Sciences in 1880, and a Foreign Honorary Member of the American Academy of Arts and Sciences in 1884. He was awarded the Royal Society's Davy Medal in 1883.
Thomsen served on the Copenhagen City Council from 1861 to 1894, to which he lent his expertise in a number of areas during the city's development.
Katherine Alice Burke translated his book on systematic research in thermochemistry into English. This translation appeared in print in 1905.
Family.
His brother, Carl August Thomsen (1834–1894), was lecturer on technical chemistry at the Copenhagen Polytechnic, and a second brother, Thomas Gottfried Thomsen (1841–1901), was assistant in the chemical laboratory at the university until 1884, when he abandoned science for theology, subsequently becoming minister at Norup and Randers.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_{3,3}"
}
] |
https://en.wikipedia.org/wiki?curid=1440770
|
1440776
|
Adiabatic invariant
|
Property of physical systems that stays somewhat constant through slow changes
A property of a physical system, such as the entropy of a gas, that stays approximately constant when changes occur slowly is called an adiabatic invariant. By this it is meant that if a system is varied between two end points, as the time for the variation between the end points is increased to infinity, the variation of an adiabatic invariant between the two end points goes to zero.
In thermodynamics, an adiabatic process is a change that occurs without heat flow; it may be slow or fast. A reversible adiabatic process is an adiabatic process that occurs slowly compared to the time to reach equilibrium. In a reversible adiabatic process, the system is in equilibrium at all stages and the entropy is constant. In the first half of the 20th century, scientists working in quantum physics used the term "adiabatic" for reversible adiabatic processes, and later for any gradually changing conditions which allow the system to adapt its configuration. The quantum mechanical definition is closer to the thermodynamical concept of a quasistatic process and has no direct relation with adiabatic processes in thermodynamics.
In mechanics, an adiabatic change is a slow deformation of the Hamiltonian, where the fractional rate of change of the energy is much slower than the orbital frequency. The area enclosed by the different motions in phase space are the "adiabatic invariants".
In quantum mechanics, an adiabatic change is one that occurs at a rate much slower than the difference in frequency between energy eigenstates. In this case, the energy states of the system do not make transitions, so that the quantum number is an adiabatic invariant.
The old quantum theory was formulated by equating the quantum number of a system with its classical adiabatic invariant. This determined the form of the Bohr–Sommerfeld quantization rule: the quantum number is the area in phase space of the classical orbit.
Thermodynamics.
In thermodynamics, adiabatic changes are those that do not increase the entropy. They occur slowly in comparison to the other characteristic timescales of the system of interest and allow heat flow only between objects at the same temperature. For isolated systems, an adiabatic change allows no heat to flow in or out.
Adiabatic expansion of an ideal gas.
If a container with an ideal gas is expanded instantaneously, the temperature of the gas doesn't change at all, because none of the molecules slow down. The molecules keep their kinetic energy, but now the gas occupies a bigger volume. If the container expands slowly, however, so that the ideal gas pressure law holds at any time, gas molecules lose energy at the rate that they do work on the expanding wall. The amount of work they do is the pressure times the area of the wall times the outward displacement, which is the pressure times the change in the volume of the gas:
formula_0
If no heat enters the gas, the energy in the gas molecules is decreasing by the same amount. By definition, a gas is ideal when its temperature is only a function of the internal energy per particle, not the volume. So
formula_1
where formula_2 is the specific heat at constant volume. When the change in energy is entirely due to work done on the wall, the change in temperature is given by
formula_3
This gives a differential relationship between the changes in temperature and volume, which can be integrated to find the invariant. The constant formula_4 is just a unit conversion factor, which can be set equal to one:
formula_5
So
formula_6
is an adiabatic invariant, which is related to the entropy
formula_7
Thus entropy is an adiabatic invariant. The "N" log("N") term makes the entropy additive, so the entropy of two volumes of gas is the sum of the entropies of each one.
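A rough numerical check of this invariance (a minimal sketch; the particle number, temperatures and step count are arbitrary choices) integrates the slow-expansion relation step by step:
```python
# Slow, reversible expansion of an ideal gas: verify that
# C_v*N*log(T) + N*log(V) stays constant (units with k_B = 1).
import numpy as np

C_v, N = 1.5, 1.0          # monatomic gas, per-particle heat capacity in k_B
T, V = 300.0, 1.0
invariant_start = C_v * N * np.log(T) + N * np.log(V)

for V_next in np.linspace(1.0, 2.0, 100_000)[1:]:
    dV = V_next - V
    dT = -T / (C_v * V) * dV          # N*C_v*dT = -(N*k_B*T/V)*dV
    T, V = T + dT, V_next

invariant_end = C_v * N * np.log(T) + N * np.log(V)
print(invariant_start, invariant_end)  # nearly identical; T*V**(2/3) is conserved
```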
In a molecular interpretation, "S" is the logarithm of the phase-space volume of all gas states with energy "E"("T") and volume "V".
For a monatomic ideal gas, this can easily be seen by writing down the energy:
formula_8
The different internal motions of the gas with total energy "E" define a sphere, the surface of a 3"N"-dimensional ball with radius formula_9. The volume of the sphere is
formula_10
where formula_11 is the gamma function.
Since each gas molecule can be anywhere within the volume "V", the volume in phase space occupied by the gas states with energy "E" is
formula_12
Since the "N" gas molecules are indistinguishable, the phase-space volume is divided by formula_13, the number of permutations of "N" molecules.
Using Stirling's approximation for the gamma function, and ignoring factors that disappear in the logarithm after taking "N" large,
formula_14
Since the specific heat of a monatomic gas is 3/2, this is the same as the thermodynamic formula for the entropy.
Wien's law – adiabatic expansion of a box of light.
For a box of radiation, ignoring quantum mechanics, the energy of a classical field in thermal equilibrium is infinite, since equipartition demands that each field mode has an equal energy on average, and there are infinitely many modes. This is physically ridiculous, since it means that all energy leaks into high-frequency electromagnetic waves over time.
Still, without quantum mechanics, there are some things that can be said about the equilibrium distribution from thermodynamics alone, because there is still a notion of adiabatic invariance that relates boxes of different size.
When a box is slowly expanded, the frequency of the light recoiling from the wall can be computed from the Doppler shift. If the wall is not moving, the light recoils at the same frequency. If the wall is moving slowly, the recoil frequency is only equal in the frame where the wall is stationary. In the frame where the wall is moving away from the light, the light coming in is bluer than the light coming out by twice the Doppler shift factor "v"/"c":
formula_15
On the other hand, the energy in the light is also decreased when the wall is moving away, because the light is doing work on the wall by radiation pressure. Because the light is reflected, the pressure is equal to twice the momentum carried by light, which is "E"/"c". The rate at which the pressure does work on the wall is found by multiplying by the velocity:
formula_16
This means that the change in frequency of the light is equal to the work done on the wall by the radiation pressure. The light that is reflected is changed both in frequency and in energy by the same amount:
formula_17
Since moving the wall slowly should keep a thermal distribution fixed, the probability that the light has energy "E" at frequency "f" must only be a function of "E"/"f".
This function cannot be determined from thermodynamic reasoning alone, and Wien guessed at the form that was valid at high frequency. He supposed that the average energy in high-frequency modes was suppressed by a Boltzmann-like factor:
formula_18
This is not the expected classical energy in the mode, which is formula_19 by equipartition, but a new and unjustified assumption that fit the high-frequency data.
When the expectation value is added over all modes in a cavity, this is Wien's distribution, and it describes the thermodynamic distribution of energy in a classical gas of photons. Wien's law implicitly assumes that light is statistically composed of packets that change energy and frequency in the same way. The entropy of a Wien gas scales as the volume to the power "N", where "N" is the number of packets. This led Einstein to suggest that light is composed of localizable particles with energy proportional to the frequency. Then the entropy of the Wien gas can be given a statistical interpretation as the number of possible positions that the photons can be in.
Classical mechanics – action variables.
Suppose that a Hamiltonian is slowly time-varying, for example, a one-dimensional harmonic oscillator with a changing frequency:
formula_20
The action "J" of a classical orbit is the area enclosed by the orbit in phase space:
formula_21
Since "J" is an integral over a full period, it is only a function of the energy. When the Hamiltonian is constant in time, and "J" is constant in time, the canonically conjugate variable formula_22 increases in time at a steady rate:
formula_23
So the constant formula_24 can be used to change time derivatives along the orbit to partial derivatives with respect to formula_22 at constant "J". Differentiating the integral for "J" with respect to "J" gives an identity that fixes formula_24:
formula_25
The integrand is the Poisson bracket of "x" and "p". The Poisson bracket of two canonically conjugate quantities, like "x" and "p", is equal to 1 in any canonical coordinate system. So
formula_26
and formula_24 is the inverse period. The variable formula_22 increases by an equal amount in each period for all values of "J" – it is an angle variable.
Adiabatic invariance of "J".
The Hamiltonian is a function of "J" only, and in the simple case of the harmonic oscillator,
formula_27
When "H" has no time dependence, "J" is constant. When "H" is slowly time-varying, the rate of change of "J" can be computed by re-expressing the integral for "J":
formula_28
The time derivative of this quantity is
formula_29
Replacing time derivatives with theta derivatives, using formula_30 and setting formula_31 without loss of generality (formula_32 being a global multiplicative constant in the resulting time derivative of the action) yields
formula_33
So as long as the coordinates "J", formula_22 do not change appreciably over one period, this expression can be integrated by parts to give zero. This means that for slow variations, there is no lowest-order change in the area enclosed by the orbit. This is the adiabatic invariance theorem – the action variables are adiabatic invariants.
For a harmonic oscillator, the area in phase space of an orbit at energy "E" is the area of the ellipse of constant energy,
formula_34
The "x" radius of this ellipse is formula_35 while the "p" radius of the ellipse is formula_9. Multiplying, the area is formula_36. So if a pendulum is slowly drawn in, such that the frequency changes, the energy changes by a proportional amount.
Old quantum theory.
After Planck identified that Wien's law can be extended to all frequencies, even very low ones, by interpolating with the classical equipartition law for radiation, physicists wanted to understand the quantum behavior of other systems.
The Planck radiation law quantized the motion of the field oscillators in units of energy proportional to the frequency:
formula_37
The quantum can only depend on the energy/frequency by adiabatic invariance, and since the energy must be additive when putting boxes end-to-end, the levels must be equally spaced.
Einstein, followed by Debye, extended the domain of quantum mechanics by considering the sound modes in a solid as quantized oscillators. This model explained why the specific heat of solids approached zero at low temperatures, instead of staying fixed at formula_38 as predicted by classical equipartition.
At the Solvay conference, the question of quantizing other motions was raised, and Lorentz pointed out a problem, known as the Rayleigh–Lorentz pendulum. Consider a quantum pendulum whose string is shortened very slowly: the quantum number of the pendulum cannot change, because at no point is there a high enough frequency to cause a transition between the states. But the frequency of the pendulum changes when the string is shortened, so the quantum states change energy.
Einstein responded that for slow pulling, the frequency and energy of the pendulum both change, but the ratio stays fixed. This is analogous to Wien's observation that under slow motion of the wall the energy to frequency ratio of reflected waves is constant. The conclusion was that the quantities to quantize must be adiabatic invariants.
This line of argument was extended by Sommerfeld into a general theory: the quantum number of an arbitrary mechanical system is given by the adiabatic action variable. Since the action variable in the harmonic oscillator is an integer, the general condition is
formula_39
This condition was the foundation of the old quantum theory, which was able to predict the qualitative behavior of atomic systems. The theory is inexact for small quantum numbers, since it mixes classical and quantum concepts. But it was a useful half-way step to the new quantum theory.
Plasma physics.
In plasma physics there are three adiabatic invariants of charged-particle motion.
The first adiabatic invariant, μ.
The magnetic moment of a gyrating particle is
formula_40
which respects special relativity. formula_41 is the relativistic Lorentz factor, formula_42 is the rest mass, formula_43 is the velocity perpendicular to the magnetic field, and formula_44 is the magnitude of the magnetic field.
formula_45 is a constant of the motion to all orders in an expansion in formula_46, where formula_32 is the rate of any changes experienced by the particle, e.g., due to collisions or due to temporal or spatial variations in the magnetic field. Consequently, the magnetic moment remains nearly constant even for changes at rates approaching the gyrofrequency. When formula_45 is constant, the perpendicular particle energy is proportional to formula_44, so the particles can be heated by increasing formula_44, but this is a "one-shot" deal because the field cannot be increased indefinitely. It finds applications in magnetic mirrors and magnetic bottles.
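A minimal numerical illustration of this (with arbitrary field values and speed, and taking the particle to be non-relativistic so the Lorentz factor is essentially 1): with formula_45 held fixed, doubling formula_44 doubles the perpendicular kinetic energy.
```python
# mu = m*v_perp**2 / (2*B) conserved  =>  W_perp = mu*B scales with B.
m_e = 9.109e-31        # electron mass, kg
v_perp = 1.0e6         # m/s  (gamma ~ 1)
B1, B2 = 0.1, 0.2      # tesla

mu = m_e * v_perp**2 / (2 * B1)
print(mu * B2 / (mu * B1))   # 2.0: perpendicular energy doubles with B
```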
There are some important situations in which the magnetic moment is "not" invariant:
The second adiabatic invariant, "J".
The longitudinal invariant of a particle trapped in a magnetic mirror,
formula_47
where the integral is between the two turning points, is also an adiabatic invariant. This guarantees, for example, that a particle in the magnetosphere moving around the Earth always returns to the same line of force. The adiabatic condition is violated in transit-time magnetic pumping, where the length of a magnetic mirror is oscillated at the bounce frequency, resulting in net heating.
The third adiabatic invariant, Φ.
The total magnetic flux formula_48 enclosed by a drift surface is the third adiabatic invariant, associated with the periodic motion of mirror-trapped particles drifting around the axis of the system. Because this drift motion is relatively slow, formula_48 is often not conserved in practical applications.
|
[
{
"math_id": 0,
"text": "\ndW = P \\, dV = \\frac{N k_\\text{B} T}{V} \\, dV.\n"
},
{
"math_id": 1,
"text": "\ndT = \\frac{1}{N C_v} \\, dE,\n"
},
{
"math_id": 2,
"text": "C_v"
},
{
"math_id": 3,
"text": "\nN C_v \\, dT = -dW = -\\frac{N k_\\text{B}T}{V} \\, dV.\n"
},
{
"math_id": 4,
"text": "k_\\text{B}"
},
{
"math_id": 5,
"text": "\nd(C_v N \\log T) = -d(N \\log V).\n"
},
{
"math_id": 6,
"text": "\nC_v N \\log T + N \\log V\n"
},
{
"math_id": 7,
"text": "\nS = C_v N \\log T + N \\log V - N \\log N = N \\log \\left(\\frac{T^{C_v} V}{N}\\right).\n"
},
{
"math_id": 8,
"text": "\nE = \\frac{1}{2m} \\sum_k \\left(p_{k1}^2 + p_{k2}^2 + p_{k3}^2 \\right).\n"
},
{
"math_id": 9,
"text": "\\sqrt{2mE}"
},
{
"math_id": 10,
"text": "\n\\frac{2\\pi^{3N/2}(2mE)^{(3N-1)/2}}{\\Gamma(3N/2)},"
},
{
"math_id": 11,
"text": "\\Gamma"
},
{
"math_id": 12,
"text": "\n\\frac{2\\pi^{3N/2}(2mE)^{(3N-1)/2} V^N}{\\Gamma(3N/2)}.\n"
},
{
"math_id": 13,
"text": "N! = \\Gamma(N + 1)"
},
{
"math_id": 14,
"text": "\n\\begin{align}\nS &= N \\left( \\tfrac{3}{2} \\log(E) - \\tfrac{3}{2} \\log(\\tfrac{3}{2}N) + \\log(V) - \\log(N) \\right) \\\\\n &= N \\left( \\tfrac{3}{2} \\log\\left(\\tfrac{2}{3} E/N\\right) + \\log\\left(\\frac{V}{N}\\right)\\right).\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\n\\Delta f = \\frac{2v}{c} f.\n"
},
{
"math_id": 16,
"text": "\n\\Delta E = v \\frac{2E}{c}.\n"
},
{
"math_id": 17,
"text": "\n\\frac{\\Delta f}{f} = \\frac{\\Delta E}{E}.\n"
},
{
"math_id": 18,
"text": "\n\\langle E_f \\rangle = e^{-\\beta h f}.\n"
},
{
"math_id": 19,
"text": "1/2\\beta"
},
{
"math_id": 20,
"text": "\nH_t(p, x) = \\frac{p^2}{2m} + \\frac{m \\omega(t)^2 x^2}{2}.\n"
},
{
"math_id": 21,
"text": "\nJ = \\int_0^T p(t) \\,\\frac{dx}{dt} \\,dt.\n"
},
{
"math_id": 22,
"text": "\\theta"
},
{
"math_id": 23,
"text": "\n\\frac{d\\theta}{dt} = \\frac{\\partial H}{\\partial J} = H'(J).\n"
},
{
"math_id": 24,
"text": "H'"
},
{
"math_id": 25,
"text": "\n\\frac{dJ}{dJ } = 1 = \\int_0^T \\left(\\frac{\\partial p}{\\partial J} \\frac{dx}{dt} +\n p \\frac{\\partial}{\\partial J} \\frac{dx}{dt}\\right) \\,dt =\nH' \\int_0^T \\left(\\frac{\\partial p}{\\partial J} \\frac{\\partial x}{\\partial \\theta} - \\frac{\\partial p}{\\partial \\theta} \\frac{\\partial x}{\\partial J}\\right) \\,dt.\n"
},
{
"math_id": 26,
"text": "\n1 = H' \\int_0^T \\{x, p\\} \\,dt = H' T,\n"
},
{
"math_id": 27,
"text": "\nH = \\omega J.\n"
},
{
"math_id": 28,
"text": "\nJ = \\int_0^{2\\pi} p \\frac{\\partial x}{\\partial \\theta} \\,d\\theta.\n"
},
{
"math_id": 29,
"text": "\n\\frac{dJ}{dt} = \\int_0^{2\\pi} \\left(\\frac{dp}{dt} \\frac{\\partial x}{\\partial \\theta} +\n p \\frac{d}{dt} \\frac{\\partial x}{\\partial \\theta}\\right) \\,d\\theta."
},
{
"math_id": 30,
"text": "d\\theta = \\omega \\, dt,"
},
{
"math_id": 31,
"text": "\\omega := 1"
},
{
"math_id": 32,
"text": "\\omega"
},
{
"math_id": 33,
"text": "\n\\frac{dJ}{dt} = \\int_0^{2\\pi} \\left(\\frac{\\partial p}{\\partial \\theta} \\frac{\\partial x}{\\partial \\theta} +\n p \\frac{\\partial}{\\partial \\theta} \\frac{\\partial x}{\\partial \\theta}\\right) \\,d\\theta.\n"
},
{
"math_id": 34,
"text": "\nE = \\frac{p^2}{2m} + \\frac{m\\omega^2 x^2}{2}.\n"
},
{
"math_id": 35,
"text": "\\sqrt{2E/\\omega^2m},"
},
{
"math_id": 36,
"text": "2\\pi E/\\omega"
},
{
"math_id": 37,
"text": "\nE = h f = \\hbar \\omega.\n"
},
{
"math_id": 38,
"text": "3k_\\text{B},"
},
{
"math_id": 39,
"text": "\n\\int p \\, dq = n h.\n"
},
{
"math_id": 40,
"text": "\n\\mu = \\frac{\\gamma m_0 v_\\perp^2}{2B},\n"
},
{
"math_id": 41,
"text": "\\gamma"
},
{
"math_id": 42,
"text": "m_0"
},
{
"math_id": 43,
"text": "v_\\perp"
},
{
"math_id": 44,
"text": "B"
},
{
"math_id": 45,
"text": "\\mu"
},
{
"math_id": 46,
"text": "\\omega/\\omega_c"
},
{
"math_id": 47,
"text": "\nJ = \\int_a^b p_\\parallel \\,ds,\n"
},
{
"math_id": 48,
"text": "\\Phi"
}
] |
https://en.wikipedia.org/wiki?curid=1440776
|
14407762
|
Seiberg–Witten invariants
|
4-manifold invariants
In mathematics, and especially gauge theory, Seiberg–Witten invariants are invariants of compact smooth oriented 4-manifolds introduced by Edward Witten (1994), using the Seiberg–Witten theory studied by Nathan Seiberg and Witten (1994a, 1994b) during their investigations of Seiberg–Witten gauge theory.
Seiberg–Witten invariants are similar to Donaldson invariants and can be used to prove similar (but sometimes slightly stronger) results about smooth 4-manifolds. They are technically much easier to work with than Donaldson invariants; for example, the moduli spaces of solutions of the Seiberg–Witten equations tend to be compact, so one avoids the hard problems involved in compactifying the moduli spaces in Donaldson theory.
For detailed descriptions of Seiberg–Witten invariants see , , , , . For the relation to symplectic manifolds and Gromov–Witten invariants see . For the early history see .
Spin"c"-structures.
The Spin"c" group (in dimension 4) is
formula_0
where the formula_1 acts as a sign on both factors. The group has a natural homomorphism to SO(4) = Spin(4)/±1.
Given a compact oriented 4-manifold, choose a smooth Riemannian metric formula_2 with Levi-Civita connection formula_3. This reduces the structure group from the connected component GL(4)+ to SO(4) and is harmless from a homotopical point of view. A Spin"c"-structure or complex spin structure on "M" is a reduction of the structure group to Spin"c", i.e. a lift of the SO(4) structure on the tangent bundle to the group Spin"c". By a theorem of Hirzebruch and Hopf, every smooth oriented compact 4-manifold formula_4 admits a Spin"c" structure. The existence of a Spin"c" structure is equivalent to the existence of a lift of the second Stiefel–Whitney class formula_5 to a class formula_6 Conversely, such a lift determines the Spin"c" structure up to 2-torsion in formula_7 A spin structure proper requires the more restrictive formula_8
A Spin"c" structure determines (and is determined by) a spinor bundle formula_9 coming from the 2 complex dimensional positive and negative spinor representation of Spin(4) on which U(1) acts by multiplication. We have formula_10. The spinor bundle formula_11 comes with a graded Clifford algebra bundle representation i.e. a map formula_12 such that for each 1 form formula_13 we have formula_14 and formula_15. There is a unique hermitian metric formula_16 on formula_11 s.t. formula_17 is skew Hermitian for real 1 forms formula_13. It gives an induced action of the forms formula_18 by anti-symmetrising. In particular this gives an isomorphism of formula_19 of the selfdual two forms with the traceless skew Hermitian endomorphisms of formula_20 which are then identified.
Seiberg–Witten equations.
Let formula_21 be the determinant line bundle with formula_22. For every connection formula_23 with formula_24 on formula_25, there is a unique spinor connection formula_26 on formula_11 i.e. a connection such that formula_27 for every 1-form formula_13 and vector field formula_28. The Clifford connection then defines a Dirac operator formula_29 on formula_11. The group of maps formula_30 acts as a gauge group on the set of all connections on formula_25. The action of formula_31 can be "gauge fixed" e.g. by the condition formula_32, leaving an effective parametrisation of the space of all such connections of formula_33 with a residual formula_34 gauge group action.
Write formula_35 for a spinor field of positive chirality, i.e. a section of formula_20. The Seiberg–Witten equations for formula_36 are now
formula_37
formula_38
Here formula_39 is the closed curvature 2-form of formula_40, formula_41 is its self-dual part, and σ is the squaring map formula_42 from formula_20 to the traceless Hermitian endomorphisms of formula_20, identified with imaginary self-dual 2-forms, and formula_43 is a real self-dual two-form, often taken to be zero or harmonic. The gauge group formula_31 acts on the space of solutions. After adding the gauge fixing condition formula_32 the residual U(1) acts freely, except for "reducible solutions" with formula_44. For technical reasons, the equations are in fact defined in suitable Sobolev spaces of sufficiently high regularity.
An application of the Weitzenböck formula
formula_45
and the identity
formula_46
to solutions of the equations gives an equality
formula_47.
At a point where formula_48 is maximal, formula_49, so this shows that for any solution the sup norm formula_50 is "a priori" bounded, with the bound depending only on the scalar curvature formula_51 of formula_52 and the self-dual form formula_43. After adding the gauge fixing condition, elliptic regularity of the Dirac equation shows that solutions are in fact "a priori" bounded in Sobolev norms of arbitrary regularity, which shows all solutions are smooth, and that the space of all solutions up to gauge equivalence is compact.
The solutions formula_53 of the Seiberg–Witten equations are called monopoles, as these equations are the field equations of massless magnetic monopoles on the manifold formula_4.
The moduli space of solutions.
The space of solutions is acted on by the gauge group, and the quotient by this action is called the moduli space of monopoles.
The moduli space is usually a manifold. For generic metrics, after gauge fixing, the equations cut out the solution space transversely and so define a smooth manifold. The residual "gauge fixed" gauge group U(1) acts freely, except at reducible monopoles, i.e. solutions with formula_44. By the Atiyah–Singer index theorem the moduli space is finite-dimensional and has "virtual dimension"
formula_54
which for generic metrics is the actual dimension away from the reducibles. It means that the moduli space is generically empty if the virtual dimension is negative.
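For illustration, the virtual dimension can be evaluated directly from this formula; the sketch below uses the standard characteristic numbers of the K3 surface (Euler characteristic 24, signature −16, "K" = 0) and of the complex projective plane (Euler characteristic 3, signature 1, "K"2 = 9 for the Spin"c" structure coming from the complex structure):
```python
# Virtual dimension (K^2 - 2*chi - 3*sign)/4 of the Seiberg-Witten moduli space.
def virtual_dimension(K_squared, euler_char, signature):
    d, r = divmod(K_squared - 2 * euler_char - 3 * signature, 4)
    assert r == 0, "K^2 - 2*chi - 3*sign should be divisible by 4"
    return d

print(virtual_dimension(0, 24, -16))   # K3 surface, canonical Spin^c structure: 0
print(virtual_dimension(9, 3, 1))      # CP^2 with c_1 = 3H: 0
```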
For a self-dual 2-form formula_43, the reducible solutions have formula_44, and so are determined by connections formula_55 on formula_25 such that formula_56 for some anti-self-dual 2-form formula_57. By the Hodge decomposition, since formula_58 is closed, the only obstruction to solving this equation for formula_59, given formula_57 and formula_43, is the harmonic part of formula_57 and formula_43, or equivalently the (de Rham) cohomology class of the curvature form, i.e. formula_60. Thus, since formula_61, the necessary and sufficient condition for a reducible solution is
formula_62
where formula_63 is the space of harmonic anti-selfdual 2-forms. A two form formula_43 is formula_64-admissible if this condition is "not" met and solutions are necessarily irreducible. In particular, for formula_65, the moduli space is a (possibly empty) compact manifold for generic metrics and admissible formula_43. Note that, if formula_66 the space of formula_64-admissible two forms is connected, whereas if formula_67 it has two connected components (chambers). The moduli space can be given a natural orientation from an orientation on the space of positive harmonic 2 forms, and the first cohomology.
The "a priori" bound on the solutions, also gives "a priori" bounds on formula_68. There are therefore (for fixed formula_43) only finitely many formula_69, and hence only finitely many Spinc structures, with a non empty moduli space.
Seiberg–Witten invariants.
The Seiberg–Witten invariant of a four-manifold "M" with "b"2+("M") ≥ 2 is a map from the spin"c" structures on "M" to Z. The value of the invariant on a spin"c" structure is easiest to define when the moduli space is zero-dimensional (for a generic metric). In this case the value is the number of elements of the moduli space counted with signs.
The Seiberg–Witten invariant can also be defined when "b"2+("M") = 1, but then it depends on the choice of a chamber.
A manifold "M" is said to be of simple type if the Seiberg–Witten invariant vanishes whenever the expected dimension of the moduli space is nonzero. The simple type conjecture states that if "M" is simply connected and "b"2+("M") ≥ 2 then the manifold is of simple type. This is true for symplectic manifolds.
If the manifold "M" has a metric of positive scalar curvature and "b"2+("M") ≥ 2 then all Seiberg–Witten invariants of "M" vanish.
If the manifold "M" is the connected sum of two manifolds both of which have "b"2+ ≥ 1 then all Seiberg–Witten invariants of "M" vanish.
If the manifold "M" is simply connected and symplectic and "b"2+("M") ≥ 2 then it has a spin"c" structure "s" on which the Seiberg–Witten invariant is 1. In particular it cannot be split as a connected sum of manifolds with "b"2+ ≥ 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(U(1) \\times \\mathrm{Spin}(4))/(\\Z/2\\Z)."
},
{
"math_id": 1,
"text": "\\Z/2\\Z"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "\\nabla^{g}"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "w_2(M) \\in H^2(M,\\Z/2\\Z)"
},
{
"math_id": 6,
"text": "K \\in H^2(M, \\Z)."
},
{
"math_id": 7,
"text": "H^2(M,\\Z)."
},
{
"math_id": 8,
"text": "w_2(M) = 0."
},
{
"math_id": 9,
"text": "W = W^+ \\oplus W^-"
},
{
"math_id": 10,
"text": "K = c_1(W^+) = c_1(W^-)"
},
{
"math_id": 11,
"text": "W"
},
{
"math_id": 12,
"text": "\\gamma:\\mathrm{Cliff}(M, g) \\to \\mathcal{E}\\mathit{nd}(W)"
},
{
"math_id": 13,
"text": "a"
},
{
"math_id": 14,
"text": "\\gamma(a):W^\\pm \\to W^\\mp"
},
{
"math_id": 15,
"text": "\\gamma(a)^2 = - g(a, a)"
},
{
"math_id": 16,
"text": "h"
},
{
"math_id": 17,
"text": "\\gamma(a)"
},
{
"math_id": 18,
"text": "\\wedge^* M"
},
{
"math_id": 19,
"text": "\\wedge^+ M \\cong \\mathcal{E}\\mathit{nd}^{sh}_0(W^+)"
},
{
"math_id": 20,
"text": "W^+"
},
{
"math_id": 21,
"text": "L = \\det(W^+) \\equiv \\det(W^-)"
},
{
"math_id": 22,
"text": "c_1(L) = K"
},
{
"math_id": 23,
"text": "\\nabla_{A} = \\nabla_0 + A"
},
{
"math_id": 24,
"text": "A \\in iA^1_{\\R}(M)"
},
{
"math_id": 25,
"text": "L"
},
{
"math_id": 26,
"text": "\\nabla^{A}"
},
{
"math_id": 27,
"text": "\\nabla^{A}_X(\\gamma(a)) := [\\nabla^{A}_X, \\gamma(a)] = \\gamma(\\nabla^{g}_X a)"
},
{
"math_id": 28,
"text": "X"
},
{
"math_id": 29,
"text": "D^A = \\gamma \\otimes 1 \\circ \\nabla^A = \\gamma(dx^\\mu)\\nabla^A_\\mu"
},
{
"math_id": 30,
"text": "\\mathcal{G} = \\{u: M \\to U(1)\\}"
},
{
"math_id": 31,
"text": "\\mathcal{G}"
},
{
"math_id": 32,
"text": "d^*A = 0"
},
{
"math_id": 33,
"text": "H^1(M,\\R)^{\\mathrm{harm}}/H^1(M,\\Z) \\oplus d^* A^+_{\\R}(M)"
},
{
"math_id": 34,
"text": "U(1)"
},
{
"math_id": 35,
"text": "\\phi"
},
{
"math_id": 36,
"text": "(\\phi, \\nabla^A)"
},
{
"math_id": 37,
"text": "D^A\\phi=0"
},
{
"math_id": 38,
"text": "F^+_A=\\sigma(\\phi) + i\\omega"
},
{
"math_id": 39,
"text": "F^A \\in iA^2_{\\R}(M)"
},
{
"math_id": 40,
"text": "\\nabla^A"
},
{
"math_id": 41,
"text": "F^+_A"
},
{
"math_id": 42,
"text": "\\phi\\mapsto \\left (\\phi h(\\phi, -) -\\tfrac12 h(\\phi, \\phi)1_{W^+} \\right)"
},
{
"math_id": 43,
"text": "\\omega"
},
{
"math_id": 44,
"text": "\\phi = 0"
},
{
"math_id": 45,
"text": "{\\nabla^A}^*\\nabla^A \\phi = (D^A)^2\\phi - (\\tfrac12\\gamma(F_A^+) + s)\\phi"
},
{
"math_id": 46,
"text": " \\Delta_g |\\phi|_h^2 = 2h({\\nabla^A}^*\\nabla^A\\phi, \\phi) - 2|\\nabla^A\\phi|_{g\\otimes h}"
},
{
"math_id": 47,
"text": " \\Delta|\\phi|^2 + |\\nabla^A\\phi|^2 + \\tfrac14|\\phi|^4 = (-s)|\\phi|^2 - \\tfrac12h(\\phi,\\gamma(\\omega)\\phi)"
},
{
"math_id": 48,
"text": "|\\phi|^2"
},
{
"math_id": 49,
"text": "\\Delta|\\phi|^2\\ge 0"
},
{
"math_id": 50,
"text": "\\|\\phi\\|_\\infty"
},
{
"math_id": 51,
"text": "s"
},
{
"math_id": 52,
"text": "(M, g)"
},
{
"math_id": 53,
"text": "(\\phi,\\nabla^A)"
},
{
"math_id": 54,
"text": "(K^2-2\\chi_{\\mathrm{top}}(M)-3\\operatorname{sign}(M))/4"
},
{
"math_id": 55,
"text": "\\nabla_A = \\nabla_0 + A"
},
{
"math_id": 56,
"text": "F_0 + d A = i(\\alpha + \\omega)"
},
{
"math_id": 57,
"text": "\\alpha"
},
{
"math_id": 58,
"text": "F_0"
},
{
"math_id": 59,
"text": "A"
},
{
"math_id": 60,
"text": "[F_0] = F_0^{\\mathrm{harm}} = i (\\omega^{\\mathrm{harm}} + \\alpha^{\\mathrm{harm}}) \\in H^2(M, \\R)"
},
{
"math_id": 61,
"text": "[\\tfrac1{2\\pi i} F_0] = K "
},
{
"math_id": 62,
"text": " \\omega^{\\mathrm{harm}} \\in 2\\pi K + \\mathcal{H}^- \\in H^2(X,\\R)"
},
{
"math_id": 63,
"text": "\\mathcal{H}^-"
},
{
"math_id": 64,
"text": "K"
},
{
"math_id": 65,
"text": "b^+ \\ge 1"
},
{
"math_id": 66,
"text": "b_+ \\ge 2"
},
{
"math_id": 67,
"text": "b_+ = 1"
},
{
"math_id": 68,
"text": "F^{\\mathrm{harm}}"
},
{
"math_id": 69,
"text": "K \\in H^2(M,\\Z)"
}
] |
https://en.wikipedia.org/wiki?curid=14407762
|
14407845
|
Sextuple bond
|
Covalent bond involving 12 bonding electrons
A sextuple bond is a type of covalent bond involving 12 bonding electrons and in which the bond order is 6. The only known molecules with true sextuple bonds are the diatomic dimolybdenum (Mo2) and ditungsten (W2), which exist in the gaseous phase and have boiling points of and respectively.
Theoretical analysis.
Roos "et al" argue that no stable element can form bonds of higher order than a sextuple bond, because the latter corresponds to a hybrid of the "s" orbital and all five "d" orbitals, and "f" orbitals contract too close to the nucleus to bond in the lanthanides. Indeed, quantum mechanical calculations have revealed that the dimolybdenum bond is formed by a combination of two σ bonds, two π bonds and two δ bonds. (Also, the σ and π bonds contribute much more significantly to the sextuple bond than the δ bonds.)
Although no φ bonding has been reported for transition metal dimers, it is predicted that if any sextuply-bonded actinides were to exist, at least one of the bonds would likely be a φ bond as in quintuply-bonded diuranium and dineptunium. No sextuple bond has been observed in lanthanides or actinides.
For the majority of elements, even the possibility of a sextuple bond is foreclosed, because the "d" electrons ferromagnetically couple, instead of bonding. The only known exceptions are dimolybdenum and ditungsten.
Quantum-mechanical treatment.
The formal bond order (FBO) of a molecule is half the number of bonding electrons surplus to antibonding electrons; for a typical molecule, it attains exclusively integer values. A full quantum treatment requires a more nuanced picture, in which electrons may exist in a superposition, contributing fractionally to both bonding and antibonding orbitals. In a formal sextuple bond, there would be "P" = 6 different electron pairs; an effective sextuple bond would then have all six contributing almost entirely to bonding orbitals.
In Roos "et al."'s calculations, the effective bond order (EBO) could be determined by the formula formula_0 where "η"b is the proportion of formal bonding orbital occupation for an electron pair "p", "η"ab is the proportion of formal antibonding orbital occupation, and "c" is a correction factor accounting for deviations from equilibrium geometry. Several metal-metal bonds' EBOs are given in the table at right, compared to their formal bond orders.
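To make the bookkeeping concrete, the following sketch evaluates the formula for six formal electron pairs; the occupation numbers are hypothetical and chosen purely for demonstration, not taken from Roos "et al.":
```python
# EBO = 1/2 * sum_p (eta_bonding - eta_antibonding) - c
def effective_bond_order(pairs, correction=0.0):
    return 0.5 * sum(eta_b - eta_ab for eta_b, eta_ab in pairs) - correction

# hypothetical occupations (2 electrons per formal pair): sigma, pi, delta
pairs = [(1.95, 0.05), (1.95, 0.05),   # sigma-like, almost purely bonding
         (1.90, 0.10), (1.90, 0.10),   # pi-like
         (1.40, 0.60), (1.40, 0.60)]   # delta-like, weakly bonding
print(effective_bond_order(pairs))     # 4.5: well below the formal order of 6
```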
Dimolybdenum and ditungsten are the only molecules with effective bond orders above 5, with a quintuple bond and a partially formed sixth covalent bond. Dichromium, while formally described as having a sextuple bond, is best described as a pair of chromium atoms with all electron spins exchange-coupled to each other.
While diuranium is also formally described as having a sextuple bond, relativistic quantum mechanical calculations have determined it to be a quadruple bond with four electrons ferromagnetically coupled to each other rather than in two formal bonds. Previous calculations on diuranium did not treat the electronic molecular Hamiltonian relativistically and produced higher bond orders of 4.2 with two ferromagnetically coupled electrons.
Known instances: dimolybdenum and ditungsten.
Laser evaporation of a molybdenum sheet at low temperatures (7 K) produces gaseous dimolybdenum (Mo2). The resulting molecules can then be imaged with, for instance, near-infrared spectroscopy or UV spectroscopy.
Both ditungsten and dimolybdenum have very short bond lengths compared to neighboring metal dimers. For example, sextuply-bonded dimolybdenum has an equilibrium bond length of 1.93 Å. This equilibrium internuclear distance is significantly lower than in the dimer of any neighboring 4d transition metal, and suggestive of higher bond orders. However, the bond dissociation energies of ditungsten and dimolybdenum are rather low, because the short internuclear distance introduces geometric strain.
One empirical technique to determine bond order is spectroscopic examination of bond force constants. Linus Pauling investigated the relationships between bonding atoms and developed a formula that predicts that bond order is roughly proportional to the force constant; that is, formula_1 where n is the bond order, "k"e is the force constant of the interatomic interaction and "k"e(1) is the force constant of a single bond between the atoms.
The table at right shows some select force constants for metal-metal dimers compared to their EBOs; consistent with a sextuple bond, molybdenum's summed force constant is substantially more than quintuple the single-bond force constant.
Like dichromium, dimolybdenum and ditungsten are expected to exhibit a 1Σg+ singlet ground state. However, in tungsten, this ground state arises from a hybrid of either two 5D0 ground states or two 7S3 excited states. Only the latter corresponds to the formation of a stable, sextuply-bonded ditungsten dimer.
Ligand effects.
Although sextuple bonding in homodimers is rare, it remains a possibility in larger molecules.
Aromatics.
Theoretical computations suggest that bent dimetallocenes have a higher bond order than their linear counterparts. For this reason, the Schaefer lab has investigated dimetallocenes for natural sextuple bonds. However, such compounds tend to exhibit Jahn-Teller distortion, rather than a true sextuple bond.
For example, dirhenocene is bent. Calculating its frontier molecular orbitals suggests the existence of relatively stable singlet and triplet states, with a sextuple bond in the singlet state. But that state is the excited one; the triplet ground state should exhibit a formal quintuple bond. Similarly, for the dibenzene complexes Cr2(C6H6)2, Mo2(C6H6)2, and W2(C6H6)2, molecular bonding orbitals for the triplet states with symmetries D6h and D6d indicate the possibility of an intermetallic sextuple bond. Quantum chemistry calculations reveal, however, that the corresponding D2h singlet geometry is stabler than the D6h triplet state by , depending on the central metal.
Oxo ligands.
Both quantum mechanical calculations and photoelectron spectroscopy of the tungsten oxide clusters W2On (n = 1-6) indicate that increased oxidation state reduces the bond order in ditungsten. At first, the weak δ bonds break to yield a quadruply-bonded W2O; further oxidation generates the ditungsten complex W2O6 with two bridging oxo ligands and no direct W-W bonds.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "EBO = \\left ( \\frac{1}{2} \\right )\\sum_{p=1}^P(\\eta_{b,p}-\\eta_{ab,p})-c"
},
{
"math_id": 1,
"text": "k_e=n\\cdot k_e^{(1)}"
}
] |
https://en.wikipedia.org/wiki?curid=14407845
|
14408479
|
Biological neuron model
|
Mathematical descriptions of the properties of certain cells in the nervous system
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the conduction of electrical signals in neurons. Neurons (or nerve cells) are electrically excitable cells within the nervous system, able to fire electric signals, called action potentials, across a neural network. These mathematical models describe the role of the biophysical and geometrical characteristics of neurons on the conduction of electrical activity.
Central to these models is the description of how the membrane potential (that is, the difference in electric potential between the interior and the exterior of a biological cell) across the cell membrane changes over time. In an experimental setting, stimulating neurons with an electrical current generates an action potential (or spike) that propagates down the neuron's axon. This axon can branch out and connect to a large number of downstream neurons at sites called synapses. At these synapses, the spike can cause the release of neurotransmitters, which in turn can change the voltage potential of downstream neurons. This change can potentially lead to even more spikes in those downstream neurons, thus passing down the signal. As many as 85% of neurons in the neocortex, the outermost layer of the mammalian brain, are excitatory pyramidal neurons, and each pyramidal neuron receives tens of thousands of inputs from other neurons. Thus, spiking neurons are a major information processing unit of the nervous system.
One such example of a spiking neuron model may be a highly detailed mathematical model that includes spatial morphology. Another may be a conductance-based neuron model that views neurons as points and describes the membrane voltage dynamics as a function of trans-membrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description of ion channel and membrane potential dynamics (initially studied by Lapique in 1907).
Biological background, classification, and aims of neuron models.
Non-spiking cells, spiking cells, and their measurement
Not all the cells of the nervous system produce the type of spike that defines the scope of the spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified as glia.
Neuronal activity can be measured with different experimental techniques, such as the "Whole cell" measurement technique, which captures the spiking activity of a single neuron and produces full amplitude action potentials.
With extracellular measurement techniques, one or more electrodes are placed in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages:
Overview of neuron models
Neuron models can be divided into two categories according to the physical units of the interface of the model. Each category could be further divided according to the abstraction/detail level:
Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradicting, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings, and the difficulty to separate the intrinsic properties of a single neuron from measurement effects and interactions of many cells (network effects).
Aims of neuron models
Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. However, several approaches can be distinguished, from more realistic models (e.g., mechanistic models) to more pragmatic models (e.g., phenomenological models). Modeling helps to analyze experimental data and address questions. Models are also important in the context of restoring lost brain functionality through neuroprosthetic devices.
Electrical input–output membrane voltage models.
The models in this category describe the relationship between neuronal membrane currents at the input stage and membrane voltage at the output stage. This category includes (generalized) integrate-and-fire models and biophysical models inspired by the work of Hodgkin–Huxley in the early 1950s using an experimental setup that punctured the cell membrane and allowed to force a specific membrane voltage/current.
Most modern electrical neural interfaces apply extra-cellular electrical stimulation to avoid membrane puncturing, which can lead to cell death and tissue damage. Hence, it is not clear to what extent the electrical neuron models hold for extra-cellular stimulation (see e.g.).
Hodgkin–Huxley.
The Hodgkin–Huxley model (H&H model) is a model of the relationship between the flow of ionic currents across the neuronal cell membrane and the membrane voltage of the cell. It consists of a set of nonlinear differential equations describing the behavior of ion channels that permeate the cell membrane of the squid giant axon. Hodgkin and Huxley were awarded the 1963 Nobel Prize in Physiology or Medicine for this work.
It is important to note the voltage-current relationship, with multiple voltage-dependent currents charging the cell membrane of capacitance "C"m
formula_0
The above equation is the time derivative of the law of capacitance, "Q" = "CV", where the change of the total charge must be explained as the sum over the currents. Each current is given by
formula_1
where "g"("t","V") is the conductance, or inverse resistance, which can be expanded in terms of its maximal conductance "ḡ" and the activation and inactivation fractions "m" and "h", respectively, that determine how many ions can flow through available membrane channels. This expansion is given by
formula_2
and our fractions follow the first-order kinetics
formula_3
with similar dynamics for "h", where we can use either "τ" and "m"∞ or "α" and "β" to define our gate fractions.
The Hodgkin–Huxley model may be extended to include additional ionic currents. Typically, these include inward Ca2+ and Na+ input currents, as well as several varieties of K+ outward currents, including a "leak" current.
The result can be, at the low end, some 20 parameters which one must estimate or measure for an accurate model. In a model of a complex system of neurons, numerical integration of the equations is computationally expensive. Careful simplifications of the Hodgkin–Huxley model are therefore needed.
The model can be reduced to two dimensions thanks to the dynamic relations which can be established between the gating variables. It is also possible to extend it to take into account the evolution of the concentrations (considered fixed in the original model).
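A compact simulation sketch of the model (using the standard textbook squid-axon parameter set and a plain forward-Euler integrator; the removable singularities of the rate functions at exactly −40 mV and −55 mV are not handled) shows repetitive firing under a step current:
```python
# Minimal Hodgkin-Huxley simulation (forward Euler, standard parameters).
import numpy as np

C_m = 1.0                                  # uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387      # mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                         # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # typical resting-state values
spikes = 0
for i in range(int(T / dt)):
    I_ext = 10.0 if i * dt > 5.0 else 0.0  # step current, uA/cm^2
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V_old = V
    V += dt * (I_ext - I_ion) / C_m
    m += dt * (alpha_m(V_old) * (1 - m) - beta_m(V_old) * m)
    h += dt * (alpha_h(V_old) * (1 - h) - beta_h(V_old) * h)
    n += dt * (alpha_n(V_old) * (1 - n) - beta_n(V_old) * n)
    if V_old < 0.0 <= V:                   # crude spike count on upward crossing
        spikes += 1
print("spikes in 50 ms:", spikes)
```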
Perfect Integrate-and-fire.
One of the earliest models of a neuron is the perfect integrate-and-fire model (also called non-leaky integrate-and-fire), first investigated in 1907 by Louis Lapicque. A neuron is represented by its membrane voltage "V" which evolves in time during stimulation with an input current "I(t)" according to
formula_4
which is just the time derivative of the law of capacitance, "Q" = "CV". When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold "V"th, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The "firing frequency" of the model thus increases linearly without bound as input current increases.
The model can be made more accurate by introducing a refractory period "t"ref that limits the firing frequency of a neuron by preventing it from firing during that period. For constant input "I"("t") = "I", the threshold voltage is reached after an integration time "t"int = "CV"th/"I" after starting from zero. After a reset, the refractory period introduces a dead time so that the total time until the next firing is "t"ref + "t"int. The firing frequency is the inverse of the total inter-spike interval (including dead time). The firing frequency as a function of a constant input current is therefore
formula_5
A shortcoming of this model is that it describes neither adaptation nor leakage. If the model receives a below-threshold short current pulse at some time, it will retain that voltage boost forever - until another input later makes it fire. This characteristic is not in line with observed neuronal behavior. The following extensions make the integrate-and-fire model more plausible from a biological point of view.
Leaky integrate-and-fire.
The leaky integrate-and-fire model, which can be traced back to Louis Lapicque, contains a "leak" term in the membrane potential equation that reflects the diffusion of ions through the membrane, unlike the non-leaky integrate-and-fire model. The model equation looks like
formula_6
where "V"m is the voltage across the cell membrane and "R"m is the membrane resistance. (The non-leaky integrate-and-fire model is retrieved in the limit "R"m to infinity, i.e. if the membrane is a perfect insulator). The model equation is valid for arbitrary time-dependent input until a threshold "V"th is reached; thereafter the membrane potential is reset.
For constant input, the minimum input to reach the threshold is "I"th = "V"th / "R"m. Assuming a reset to zero, the firing frequency thus looks like
formula_7
which converges for large input currents to the previous leak-free model with the refractory period. The model can also be used for inhibitory neurons.
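A minimal simulation sketch of the leaky integrate-and-fire neuron with a refractory period (all parameter values are illustrative choices) measures the firing rate for a few constant input currents:
```python
# Leaky integrate-and-fire with refractory period (forward Euler).
tau_m, R_m = 10.0, 10.0                  # ms, MOhm
V_th, V_reset, t_ref = 15.0, 0.0, 2.0    # mV (relative to rest), mV, ms
dt, T = 0.1, 1000.0                      # ms

def firing_rate(I):                      # constant input current in nA
    V, t_last, n_spikes = 0.0, -1e9, 0
    for step in range(int(T / dt)):
        t = step * dt
        if t - t_last < t_ref:           # absolute refractory period
            V = V_reset
            continue
        V += dt / tau_m * (-V + R_m * I)
        if V >= V_th:
            n_spikes += 1
            t_last = t
            V = V_reset
    return 1000.0 * n_spikes / T         # Hz

for I in (1.0, 2.0, 4.0, 8.0):
    print(I, "nA ->", firing_rate(I), "Hz")   # zero below I_th = V_th/R_m = 1.5 nA
```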
The most significant disadvantage of this model is that it does not contain neuronal adaptation, so that it cannot describe an experimentally measured spike train in response to constant input current. This disadvantage is removed in generalized integrate-and-fire models that also contain one or several adaptation-variables and are able to predict spike times of cortical neurons under current injection to a high degree of accuracy.
Adaptive integrate-and-fire.
Neuronal adaptation refers to the fact that even in the presence of a constant current injection into the soma, the intervals between output spikes increase. An adaptive integrate-and-fire neuron model combines the leaky integration of voltage "V" with one or several adaptation variables "w"k (see Chapter 6.1. in the textbook Neuronal Dynamics)
formula_8
formula_9
where formula_10 is the membrane time constant, "w"k is the adaptation current with index "k", formula_11 is the time constant of adaptation current "w"k, "E"m is the resting potential, "t"f is the firing time of the neuron, and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value "V"r below the firing threshold. The reset value is one of the important parameters of the model. The simplest model of adaptation has only a single adaptation variable "w", and the sum over "k" is removed.
Integrate-and-fire neurons with one or several adaptation variables can account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. Moreover, adaptive integrate-and-fire neurons with several adaptation variables are able to predict spike times of cortical neurons under time-dependent current injection into the soma.
Fractional-order leaky integrate-and-fire.
Recent advances in computational and theoretical fractional calculus lead to a new form of model called Fractional-order leaky integrate-and-fire. An advantage of this model is that it can capture adaptation effects with a single variable. The model has the following form
formula_12
Once the voltage hits the threshold it is reset. Fractional integration has been used to account for neuronal adaptation in experimental data.
'Exponential integrate-and-fire' and 'adaptive exponential integrate-and-fire'.
In the exponential integrate-and-fire model, spike generation is exponential, following the equation:
formula_13
where formula_14 is the membrane potential, formula_15 is the intrinsic membrane potential threshold, formula_10 is the membrane time constant, formula_16 is the resting potential, and formula_17 is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses formula_15, it diverges to infinity in finite time. In numerical simulation the integration is stopped if the membrane potential hits an arbitrary threshold (much larger than formula_15) at which the membrane potential is reset to a value "V"r. The voltage reset value "V"r is one of the important parameters of the model. Importantly, the right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data. In this sense the exponential nonlinearity is strongly supported by experimental evidence.
In the adaptive exponential integrate-and-fire neuron the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w
formula_18
formula_19
where "w" denotes the adaptation current with time scale formula_20. Important model parameters are the voltage reset value "V"r, the intrinsic threshold formula_15, the time constants formula_20 and formula_10 as well as the coupling parameters "a" and "b". The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting. However, since the adaptation is in the form of a current, aberrant hyperpolarization may appear. This problem was solved by expressing it as a conductance.
Adaptive Threshold Neuron Model.
In this model, a time-dependent function formula_21 is added to the fixed threshold, formula_22, after every spike, causing an adaptation of the threshold. The threshold potential, formula_23, gradually returns to its steady state value depending on the threshold adaptation time constant formula_24. This is one of the simpler techniques to achieve spike frequency adaptation. The expression for the adaptive threshold is given by:
formula_25
where formula_21 is defined by: formula_26
When the membrane potential, formula_27, reaches a threshold, it is reset to formula_28:
formula_29
A simpler version of this, with a single time constant for the threshold decay combined with an LIF neuron, has been used to build LSTM-like recurrent spiking neural networks that achieve accuracy close to that of ANNs on some spatio-temporal tasks.
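As an illustrative sketch of the threshold-adaptation mechanism (the simpler single-time-constant variant mentioned above), the following Python fragment raises the threshold after every spike and lets it decay back; the parameter values are assumptions.

```python
import numpy as np

def lif_adaptive_threshold(I, dt=1.0, tau_m=20.0, R=1.0, E_m=0.0,
                           v_th0=1.0, theta0=0.5, tau_theta=200.0, V_r=0.0):
    """Leaky integrate-and-fire with an adaptive firing threshold:
    the threshold jumps by theta0 after each spike and decays back
    to v_th0 with time constant tau_theta (illustrative sketch)."""
    V, theta = E_m, 0.0
    spikes = []
    for n, I_n in enumerate(I):
        V += dt * (-(V - E_m) + R * I_n) / tau_m
        theta += dt * (-theta / tau_theta)   # decay of the extra threshold component
        if V >= v_th0 + theta:               # adaptive threshold v_th(t)
            spikes.append(n * dt)
            V = V_r                           # voltage reset
            theta += theta0                   # threshold increment after the spike
    return spikes

spike_times = lif_adaptive_threshold(np.full(2000, 1.5))
```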
Double Exponential Adaptive Threshold (DEXAT).
The DEXAT neuron model is a variant of the adaptive neuron model in which the threshold voltage decays as a double exponential with two time constants. The double exponential decay is governed by a fast initial decay followed by a slower decay over a longer period of time. When used in SNNs trained with surrogate gradients, this neuron yields higher accuracy, faster convergence, and a flexible long short-term memory compared to existing counterparts in the literature. The membrane potential dynamics are described through equations, and the threshold adaptation rule is:
formula_30
The dynamics of formula_31 and formula_32 are given by
formula_33,
formula_34,
where formula_35 and formula_36.
Further, a multi-time-scale adaptive threshold neuron model showing more complex dynamics has also been proposed.
Stochastic models of membrane voltage and spike timing.
The models in this category are generalized integrate-and-fire models that include a certain level of stochasticity. Cortical neurons in experiments are found to respond reliably to time-dependent input, albeit with a small degree of variations between one trial and the next if the same stimulus is repeated. Stochasticity in neurons has two important sources. First, even in a very controlled experiment where input current is injected directly into the soma, ion channels open and close stochastically and this channel noise leads to a small amount of variability in the exact value of the membrane potential and the exact timing of output spikes. Second, for a neuron embedded in a cortical network, it is hard to control the exact input because most inputs come from unobserved neurons somewhere else in the brain.
Stochasticity has been introduced into spiking neuron models in two fundamentally different forms: either (i) a noisy input current is added to the differential equation of the neuron model; or (ii) the process of spike generation is noisy. In both cases, the mathematical theory can be developed for continuous time, which is then, if desired for the use in computer simulations, transformed into a discrete-time model.
The relation of noise in neuron models to the variability of spike trains and neural codes is discussed in Neural Coding and in Chapter 7 of the textbook Neuronal Dynamics.
Noisy input model (diffusive noise).
A neuron embedded in a network receives spike input from other neurons. Since the spike arrival times are not controlled by an experimentalist they can be considered as stochastic. Thus a (potentially nonlinear) integrate-and-fire model with nonlinearity f(v) receives two inputs: an input formula_37 controlled by the experimentalists and a noisy input current formula_38 that describes the uncontrolled background input.
formula_39
Stein's model is the special case of a leaky integrate-and-fire neuron and a stationary white noise current formula_40 with mean zero and unit variance. In the subthreshold regime, these assumptions yield the equation of the Ornstein–Uhlenbeck process
formula_41
However, in contrast to the standard Ornstein–Uhlenbeck process, the membrane voltage is reset whenever "V" hits the firing threshold "V"th. Calculating the interval distribution of the Ornstein–Uhlenbeck model for constant input with threshold leads to a first-passage time problem. Stein's neuron model and variants thereof have been used to fit interspike interval distributions of spike trains from real neurons under constant input current.
In the mathematical literature, the above equation of the Ornstein–Uhlenbeck process is written in the form
formula_42
where formula_43 is the amplitude of the noise input and "dW" are increments of a Wiener process. For discrete-time implementations with time step dt the voltage updates are
formula_44
where y is drawn from a Gaussian distribution with zero mean unit variance. The voltage is reset when it hits the firing threshold "V"th.
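In a simulation, this discrete-time update is a standard Euler–Maruyama step. The Python sketch below uses the common σ·√Δt scaling of the Gaussian increment (conventions for the noise amplitude differ between texts); parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_lif(I, dt=0.1, tau_m=10.0, R=1.0, E_m=-65.0,
              sigma=2.0, V_th=-50.0, V_r=-70.0):
    """Euler-Maruyama integration of a leaky integrate-and-fire neuron
    driven by deterministic input I plus white noise (diffusive noise)."""
    V = E_m
    spikes = []
    for n, I_n in enumerate(I):
        y = rng.standard_normal()   # zero-mean, unit-variance Gaussian
        V += (E_m - V + R * I_n) * dt / tau_m + sigma * np.sqrt(dt) * y
        if V >= V_th:
            spikes.append(n * dt)
            V = V_r
    return spikes
```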
The noisy input model can also be used in generalized integrate-and-fire models. For example, the exponential integrate-and-fire model with noisy input reads
formula_45
For constant deterministic input formula_46 it is possible to calculate the mean firing rate as a function of formula_47. This is important because the frequency-current relation (f-I-curve) is often used by experimentalists to characterize a neuron.
The leaky integrate-and-fire with noisy input has been widely used in the analysis of networks of spiking neurons. Noisy input is also called 'diffusive noise' because it leads to a diffusion of the subthreshold membrane potential around the noise-free trajectory (Johannesma). The theory of spiking neurons with noisy input is reviewed in Chapter 8.2 of the textbook "Neuronal Dynamics".
Noisy output model (escape noise).
In deterministic integrate-and-fire models, a spike is generated if the membrane potential "V"(t) hits the threshold formula_48. In noisy output models, the strict threshold is replaced by a noisy one as follows. At each moment in time t, a spike is generated stochastically with an instantaneous stochastic intensity or 'escape rate'
formula_49
that depends on the momentary difference between the membrane voltage "V"(t) and the threshold formula_48. A common choice for the 'escape rate' formula_50 (that is consistent with biological data) is
formula_51
where formula_52 is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold and formula_53 is a sharpness parameter. For formula_54 the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is formula_55, which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold.
The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook "Neuronal Dynamics."
For models in discrete time, a spike is generated with probability
formula_56
that depends on the momentary difference between the membrane voltage "V" at time formula_57 and the threshold formula_48. The function F is often taken as a standard sigmoidal formula_58 with steepness parameter formula_59, similar to the update dynamics in artificial neural networks. But the functional form of F can also be derived from the stochastic intensity formula_50 in continuous time introduced above as formula_60 where formula_61 is the threshold distance.
Integrate-and-fire models with output noise can be used to predict the peristimulus time histogram (PSTH) of real neurons under arbitrary time-dependent input. For non-adaptive integrate-and-fire neurons, the interval distribution under constant stimulation can be calculated from stationary renewal theory.
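For a discrete-time simulation, the escape-noise mechanism can be sketched in Python as follows: the membrane potential is integrated deterministically, and in each time step a spike is drawn with probability 1 − exp(−ρΔt), where ρ is the exponential escape rate defined above. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def escape_rate(V, V_th=-50.0, tau0=10.0, beta=0.25):
    """Exponential escape rate f(V - V_th) = (1/tau0) * exp(beta * (V - V_th));
    beta = 0.25/mV corresponds to a sharpness of 1/beta = 4 mV."""
    return np.exp(beta * (V - V_th)) / tau0

def spike_this_step(V, dt=0.1, **kwargs):
    """Spike probability in a bin of length dt: P = 1 - exp(-rho * dt)."""
    p = 1.0 - np.exp(-escape_rate(V, **kwargs) * dt)
    return rng.random() < p
```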
Spike response model (SRM).
"main article": Spike response model
The spike response model (SRM) is a generalized linear model for the subthreshold membrane voltage combined with a nonlinear output noise process for spike generation. The membrane voltage "V"(t) at time "t" is
formula_62
where "t"f is the firing time of spike number f of the neuron, "V"rest is the resting voltage in the absence of input, "I(t-s)" is the input current at time t-s and formula_63 is a linear filter (also called kernel) that describes the contribution of an input current pulse at time t-s to the voltage at time t. The contributions to the voltage caused by a spike at time formula_64 are described by the refractory kernel formula_65. In particular, formula_65 describes the reset after the spike and the time course of the spike-afterpotential following a spike. It therefore expresses the consequences of refractoriness and adaptation. The voltage V(t) can be interpreted as the result of an integration of the differential equation of a leaky integrate-and-fire model coupled to an arbitrary number of spike-triggered adaptation variables.
Spike firing is stochastic and happens with a time-dependent stochastic intensity (instantaneous rate)
formula_66
with parameters formula_52 and formula_53 and a dynamic threshold formula_67 given by
formula_68
Here formula_69 is the firing threshold of an inactive neuron and formula_70 describes the increase of the threshold after a spike at time formula_64. In case of a fixed threshold, one sets formula_71. For formula_72 the threshold process is deterministic.
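A compact Python sketch of the SRM with a fixed threshold follows: the membrane potential is obtained by convolving the input current with a membrane filter κ and adding a refractory kernel η for every past spike, and output spikes are then drawn with the exponential escape rate. The kernel shapes and all parameter values are illustrative assumptions (R = 1 is implied); in practice the kernels would be extracted from data as described below.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.1, 500.0
t = np.arange(0.0, T, dt)

# illustrative exponential kernels; fitted kernels would come from data
tau_m, tau_eta = 10.0, 30.0
kappa = np.exp(-t / tau_m) / tau_m            # membrane filter kappa(s)
def eta(s):                                    # refractory kernel eta(s), s >= 0
    return -5.0 * np.exp(-s / tau_eta)         # hyperpolarizing spike-afterpotential

V_rest, theta0, tau0, beta = -65.0, -50.0, 10.0, 0.25
I = np.full_like(t, 20.0)                      # constant input current (R = 1 assumed)

V_input = V_rest + dt * np.convolve(I, kappa)[:len(t)]   # filtered input contribution
spikes = []
for n in range(len(t)):
    v = V_input[n] + sum(eta(t[n] - tf) for tf in spikes)   # add spike-afterpotentials
    rho = np.exp(beta * (v - theta0)) / tau0                # escape rate, fixed threshold
    if rng.random() < 1.0 - np.exp(-rho * dt):
        spikes.append(t[n])
```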
The time course of the filters formula_73 that characterize the spike response model can be directly extracted from experimental data. With optimized parameters the SRM describes the time course of the subthreshold membrane voltage for time-dependent input with a precision of 2mV and can predict the timing of most output spikes with a precision of 4ms. The SRM is closely related to linear-nonlinear-Poisson cascade models (also called Generalized Linear Model). The estimation of parameters of probabilistic neuron models such as the SRM using methods developed for Generalized Linear Models is discussed in Chapter 10 of the textbook "Neuronal Dynamics".
The name spike response model arises because, in a network, the input current for neuron i is generated by the spikes of other neurons so that in the case of a network the voltage equation becomes
formula_74
where formula_75 are the firing times of neuron j (i.e., its spike train); formula_76 describes the time course of the spike and the spike after-potential for neuron i; and formula_77 and formula_78 describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike formula_75 of the presynaptic neuron j. The time course formula_79 of the PSP results from the convolution of the postsynaptic current formula_37 caused by the arrival of a presynaptic spike from neuron j with the membrane filter formula_63.
SRM0.
The SRM0 is a stochastic neuron model related to time-dependent nonlinear renewal theory and a simplification of the Spike Response Model (SRM). The main difference to the voltage equation of the SRM introduced above is that in the term containing the refractory kernel formula_80 there is no summation sign over past spikes: only the "most recent spike" (denoted as the time formula_81) matters. Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is
formula_82
and the network equations of the SRM are
formula_83
where formula_84 is the last firing time of neuron i. Note that the time course of the postsynaptic potential formula_85 is also allowed to depend on the time since the last spike of neuron i, to describe a change in membrane conductance during refractoriness. The instantaneous firing rate (stochastic intensity) is
formula_86
where formula_48 is a fixed firing threshold. Thus spike firing of neuron i depends only on its input and the time since neuron i has fired its last spike.
With the SRM0, the interspike-interval distribution for constant input can be mathematically linked to the shape of the refractory kernel formula_87. Moreover, the stationary frequency-current relation can be calculated from the escape rate in combination with the refractory kernel formula_87. With an appropriate choice of the kernels, the SRM approximates the dynamics of the Hodgkin-Huxley model to a high degree of accuracy. Moreover, the PSTH response to arbitrary time-dependent input can be predicted.
Galves–Löcherbach model.
The Galves–Löcherbach model is a stochastic neuron model closely related to the spike response model SRM and the leaky integrate-and-fire model. It is inherently stochastic and, just like the SRM, it is linked to time-dependent nonlinear renewal theory. Given the model specifications, the probability that a given neuron formula_88 spikes in a period formula_89 may be described by
formula_90
where formula_91 is a synaptic weight, describing the influence of neuron formula_92 on neuron formula_88, formula_93 expresses the leak, and formula_94 provides the spiking history of neuron formula_88 before formula_89, according to
formula_95
Importantly, the spike probability of neuron formula_88 depends only on its spike input (filtered with a kernel formula_96 and weighted with a factor formula_97) and the timing of its most recent output spike (summarized by formula_98).
Didactic toy models of membrane voltage.
The models in this category are highly simplified toy models that qualitatively describe the membrane voltage as a function of input. They are mainly used for didactic reasons in teaching but are not considered valid neuron models for large-scale simulations or data fitting.
FitzHugh–Nagumo.
Sweeping simplifications to Hodgkin–Huxley were introduced by FitzHugh and Nagumo in 1961 and 1962. Seeking to describe "regenerative self-excitation" by a nonlinear positive-feedback membrane voltage and recovery by a linear negative-feedback gate voltage, they developed the model described by
formula_99
where we again have a membrane-like voltage and input current with a slower general gate voltage "w" and experimentally-determined parameters "a" = −0.7, "b" = 0.8, "τ" = 1/0.08. Although not derivable from biology, the model allows for a simplified, immediately available dynamic, without being a trivial simplification. The experimental support is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook "Methods of Neuronal Modeling".
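For illustration, a few lines of Python integrate the FitzHugh–Nagumo equations with the parameter values quoted above (Euler method; step size and initial conditions are assumptions):

```python
import numpy as np

def fitzhugh_nagumo(I_ext, T=200.0, dt=0.01, a=-0.7, b=0.8, tau=1/0.08):
    """Euler integration of the FitzHugh-Nagumo model:
    dV/dt = V - V^3/3 - w + I_ext,  tau dw/dt = V - a - b w."""
    n = int(T / dt)
    V, w = np.empty(n), np.empty(n)
    V[0], w[0] = -1.0, -0.5          # assumed initial conditions
    for i in range(n - 1):
        V[i+1] = V[i] + dt * (V[i] - V[i]**3 / 3 - w[i] + I_ext)
        w[i+1] = w[i] + dt * (V[i] - a - b * w[i]) / tau
    return V, w

V, w = fitzhugh_nagumo(I_ext=0.5)    # sufficient input produces repetitive spiking
```

Phase-plane analysis of the resulting V and w trajectories is the standard didactic use of this model.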
Morris–Lecar.
In 1981, Morris and Lecar combined the Hodgkin–Huxley and FitzHugh–Nagumo models into a voltage-gated calcium channel model with a delayed-rectifier potassium channel represented by
formula_100
where formula_101. The experimental support of the model is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook "Methods of Neuronal Modeling".
A two-dimensional neuron model very similar to the Morris-Lecar model can be derived step-by-step starting from the Hodgkin-Huxley model. See Chapter 4.2 in the textbook Neuronal Dynamics.
Hindmarsh–Rose.
Building upon the FitzHugh–Nagumo model, Hindmarsh and Rose proposed in 1984 a model of neuronal activity described by three coupled first-order differential equations:
formula_102
with "r"2 = "x"2 + "y"2 + "z"2, and "r" ≈ 10−2 so that the "z" variable only changes very slowly. This extra mathematical complexity allows a great variety of dynamic behaviors for the membrane potential, described by the "x" variable of the model, which includes chaotic dynamics. This makes the Hindmarsh–Rose neuron model very useful because, while still being simple, it allows a good qualitative description of the many different firing patterns of the action potential, in particular bursting, observed in experiments. Nevertheless, it remains a toy model and has not been fitted to experimental data. It is widely used as a reference model for bursting dynamics.
Theta model and quadratic integrate-and-fire.
The theta model, or Ermentrout–Kopell canonical Type I model, is mathematically equivalent to the quadratic integrate-and-fire model which in turn is an approximation to the exponential integrate-and-fire model and the Hodgkin-Huxley model. It is called a canonical model because it is one of the generic models for constant input close to the bifurcation point, which means close to the transition from silent to repetitive firing.
The standard formulation of the theta model is
formula_103
The equation for the quadratic integrate-and-fire model is (see Chapter 5.3 in the textbook Neuronal Dynamics )
formula_104
The equivalence of theta model and quadratic integrate-and-fire is for example reviewed in Chapter 4.1.2.2 of spiking neuron models.
For input formula_37 that changes over time or is far away from the bifurcation point, it is preferable to work with the exponential integrate-and-fire model (if one wants to stay in the class of one-dimensional neuron models), because real neurons exhibit the nonlinearity of the exponential integrate-and-fire model.
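A short Python sketch of the theta model follows: the phase θ is integrated with the Euler method and a spike is registered whenever θ crosses π; the input values and step size are illustrative assumptions.

```python
import numpy as np

def theta_neuron(I, I0=0.0, dt=0.01):
    """Euler integration of the theta model
    dtheta/dt = (I - I0)(1 + cos theta) + (1 - cos theta);
    a spike is counted whenever the phase passes pi."""
    theta = -np.pi / 2               # arbitrary initial phase
    spikes = []
    for n, I_n in enumerate(I):
        dtheta = (I_n - I0) * (1 + np.cos(theta)) + (1 - np.cos(theta))
        theta += dt * dtheta
        if theta >= np.pi:           # phase crossing = spike
            spikes.append(n * dt)
            theta -= 2 * np.pi       # wrap the phase
    return spikes

spike_times = theta_neuron(np.full(20000, 0.3))   # constant suprathreshold drive
```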
Sensory input-stimulus encoding neuron models.
The models in this category were derived following experiments involving natural stimulation such as light, sound, touch, or odor. In these experiments, the spike pattern resulting from each stimulus presentation varies from trial to trial, but the averaged response from several trials often converges to a clear pattern. Consequently, the models in this category generate a probabilistic relationship between the input stimulus and spike occurrences. Importantly, the recorded neurons are often located several processing steps after the sensory neurons, so that these models summarize the effects of the sequence of processing steps in a compact form.
The non-homogeneous Poisson process model (Siebert).
Siebert modeled the neuron spike firing pattern using a non-homogeneous Poisson process model, following experiments involving the auditory system. According to Siebert, the probability of a spiking event in the time interval formula_105 is proportional to a non-negative function formula_106, where formula_107 is the raw stimulus:
formula_108
Siebert considered several functions as formula_106, including formula_109 for low stimulus intensities.
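A toy Python sketch of spike generation from such an inhomogeneous Poisson description is given below, using g[s(t)] ∝ s²(t) as mentioned above; the gain, time step, and stimulus are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_spikes(s, dt=0.001, gain=5.0):
    """Inhomogeneous Poisson spike generation: in each bin of width dt a
    spike occurs with probability dt * g[s(t)], here g[s] = gain * s**2."""
    rate = gain * np.asarray(s) ** 2            # non-negative rate g[s(t)]
    return rng.random(len(rate)) < rate * dt    # boolean spike train

t = np.arange(0.0, 1.0, 0.001)
stimulus = np.sin(2 * np.pi * 2 * t)            # illustrative raw stimulus s(t)
spike_train = poisson_spikes(stimulus)
```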
The main advantage of Siebert's model is its simplicity. The shortcoming of the model is its inability to properly reflect the following phenomena:
These shortcomings are addressed by the age-dependent point process model and the two-state Markov Model.
Refractoriness and age-dependent point process model.
Berry and Meister studied neuronal refractoriness using a stochastic model that predicts spikes as a product of two terms: a function f(s(t)) that depends on the time-dependent stimulus s(t), and a recovery function formula_110 that depends on the time since the last spike
formula_111
The model is also called an "inhomogeneous Markov interval (IMI) process". Similar models have been used for many years in auditory neuroscience. Since the model keeps memory of the last spike time it is non-Poisson and falls in the class of time-dependent renewal models. It is closely related to the model SRM0 with exponential escape rate. Importantly, it is possible to fit parameters of the age-dependent point process model so as to describe not just the PSTH response, but also the interspike-interval statistics.
Linear-nonlinear Poisson cascade model and GLM.
The linear-nonlinear-Poisson cascade model is a cascade of a linear filtering process followed by a nonlinear spike generation step. In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as the Generalized Linear Model (GLM). The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step.
The two-state Markov model (Nossenson & Messer).
The spiking neuron model by Nossenson & Messer produces the probability of the neuron firing a spike as a function of either an external or pharmacological stimulus. The model consists of a cascade of a receptor layer model and a spiking neuron model, as shown in Fig 4. The connection between the external stimulus to the spiking probability is made in two steps: First, a receptor cell model translates the raw external stimulus to neurotransmitter concentration, and then, a spiking neuron model connects neurotransmitter concentration to the firing rate (spiking probability). Thus, the spiking neuron model by itself depends on neurotransmitter concentration at the input stage.
An important feature of this model is the prediction of the neuron's firing rate pattern, which captures, using a low number of free parameters, the characteristic edge-emphasized response of neurons to a stimulus pulse, as shown in Fig. 5. The firing rate is identified both as a normalized probability for neural spike firing and as a quantity proportional to the current of neurotransmitters released by the cell. The expression for the firing rate takes the following form:
formula_112
where,
formula_113
P0 can generally be calculated recursively using the Euler method, but in the case of a stimulus pulse it yields a simple closed-form expression.
formula_114
with formula_115 being a short temporal average of stimulus power (given in Watt or other energy per time unit).
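The recursive (Euler) evaluation of P0 mentioned above can be sketched in a few lines of Python; the values of R0, R1 and the gain below are illustrative assumptions, and the short temporal average of the stimulus power is replaced by the instantaneous value for simplicity.

```python
import numpy as np

def firing_rate(s, dt=0.001, g_gain=1.0, R0=5.0, R1=20.0):
    """Euler integration of dP0/dt = -[y(t)+R0+R1] P0 + R1 together with
    R_fire(t) = [y(t)+R0] * P0(t), where y(t) = g_gain * s(t)**2
    (instantaneous stimulus power used instead of a short-time average)."""
    y = g_gain * np.asarray(s) ** 2
    P0 = 1.0                               # assumed initial state occupation
    R_fire = np.empty(len(y))
    for n, y_n in enumerate(y):
        R_fire[n] = (y_n + R0) * P0
        P0 += dt * (-(y_n + R0 + R1) * P0 + R1)
    return R_fire
```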
Other predictions by this model include:
1) The averaged evoked response potential (ERP) due to the population of many neurons in unfiltered measurements resembles the firing rate.
2) The voltage variance of activity due to multiple neuron activity resembles the firing rate (also known as Multi-Unit-Activity power or MUA).
3) The inter-spike-interval probability distribution takes the form of a gamma-distribution-like function.
Pharmacological input stimulus neuron models.
The models in this category produce predictions for experiments involving pharmacological stimulation.
Synaptic transmission (Koch & Segev).
According to the model by Koch and Segev, the response of a neuron to individual neurotransmitters can be modeled as an extension of the classical Hodgkin–Huxley model with both standard and nonstandard kinetic currents. Four neurotransmitters primarily influence the CNS. AMPA/kainate receptors are fast excitatory mediators, while NMDA receptors mediate considerably slower currents. Fast inhibitory currents go through GABAA receptors, while GABAB receptors mediate via secondary "G"-protein-activated potassium channels. This range of mediation produces the following current dynamics:
formula_116
formula_117
formula_118
formula_119
where "ḡ" is the maximal conductance (around 1S) and "E" is the equilibrium potential of the given ion or transmitter (AMPA, NMDA, Cl, or K), while ["O"] describes the fraction of open receptors. For NMDA, there is a significant effect of "magnesium block" that depends sigmoidally on the concentration of intracellular magnesium by "B"("V"). For GABAB, ["G"] is the concentration of the "G"-protein, and "K"d describes the dissociation of "G" in binding to the potassium gates.
The dynamics of this more complicated model have been well-studied experimentally and produce important results in terms of very quick synaptic potentiation and depression, that is, fast, short-term learning.
The stochastic model by Nossenson and Messer translates neurotransmitter concentration at the input stage to the probability of releasing neurotransmitter at the output stage. For a more detailed description of this model, see the Two state Markov model section above.
HTM neuron model.
The HTM neuron model was developed by Jeff Hawkins and researchers at Numenta and is based on a theory called Hierarchical Temporal Memory, originally described in the book "On Intelligence". It is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the human brain.
Applications.
Spiking neuron models are used in a variety of applications that need encoding into or decoding from neuronal spike trains in the context of neuroprostheses and brain-computer interfaces, such as retinal prostheses or artificial limb control and sensation. Applications are not part of this article; for more information on this topic please refer to the main article.
Relation between artificial and biological neuron models.
The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like
formula_120
where "y""i" is the output of the "i" th neuron, "x""j" is the "j"th input neuron signal, "w""ij" is the synaptic weight (or strength of connection) between the neurons "i" and "j", and "φ" is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons, because it lacks time-dependence in input and output.
When an input is switched on at a time t and kept constant thereafter, biological neurons emit a spike train. Importantly, this spike train is not regular but exhibits a temporal structure characterized by adaptation, bursting, or initial bursting followed by regular spiking. Generalized integrate-and-fire models such as the Adaptive Exponential Integrate-and-Fire model, the spike response model, or the (linear) adaptive integrate-and-fire model can capture these neuronal firing patterns.
Moreover, neuronal input in the brain is time-dependent. Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enables the prediction of the spike train in the output for arbitrary time-dependent input, whereas an artificial neuron or a simple leaky integrate-and-fire model does not.
If we take the Hodgkin-Huxley model as a starting point, generalized integrate-and-fire models can be derived systematically in a step-by-step simplification procedure. This has been shown explicitly for the exponential integrate-and-fire model and the spike response model.
In the case of modeling a biological neuron, physical analogs are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water-containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance "C"m. The firing of a neuron involves the movement of ions into the cell, that occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current "I"("t"). With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest and is given by "V"m("t").
If the input current is constant, most neurons emit after some time of adaptation or initial bursting a regular spike train. The frequency of regular firing in response to a constant current "I" is described by the frequency-current relation, which corresponds to the transfer function formula_121 of artificial neural networks. Similarly, for all spiking neuron models, the transfer function formula_121 can be calculated numerically (or analytically).
Cable theory and compartmental models.
All of the above deterministic models are point-neuron models because they do not consider the spatial structure of a neuron. However, the dendrite contributes to transforming input into output. Point-neuron models are a valid description in three cases. (i) If the input current is directly injected into the soma. (ii) If synaptic input arrives predominantly at or close to the soma (closeness is defined by a length scale formula_122 introduced below). (iii) If synapses arrive anywhere on the dendrite, but the dendrite is completely linear. In the last case, the cable acts as a linear filter; these linear filter properties can be included in the formulation of generalized integrate-and-fire models such as the spike response model.
The filter properties can be calculated from a cable equation.
Let us consider a cell membrane in the form of a cylindrical cable. The position on the cable is denoted by x and the voltage across the cell membrane by V. The cable is characterized by a longitudinal resistance formula_123 per unit length and a membrane resistance formula_124. If everything is linear, the voltage changes as a function of time according to a cable equation. We introduce a length scale formula_125 on the left side and a time constant formula_126 on the right side. The cable equation can now be written in its perhaps best-known form:
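In the notation just introduced, the standard textbook form of the linear cable equation reads as follows (a sketch of the conventional form, with the voltage measured relative to rest and the external current per unit length as assumed notation):

```latex
\tau \, \frac{\partial V(x,t)}{\partial t}
  = \lambda^{2} \, \frac{\partial^{2} V(x,t)}{\partial x^{2}}
  - V(x,t) + r_{m}\, i_{\mathrm{ext}}(x,t)
```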
The above cable equation is valid for a single cylindrical cable.
Linear cable theory describes the dendritic arbor of a neuron as a cylindrical structure undergoing a regular pattern of bifurcation, like branches in a tree. For a single cylinder or an entire tree, the static input conductance at the base (where the tree meets the cell body or any such boundary) is defined as
formula_127,
where "L" is the electrotonic length of the cylinder, which depends on its length, diameter, and resistance. A simple recursive algorithm scales linearly with the number of branches and can be used to calculate the effective conductance of the tree. This is given by
formula_128
where "A""D" = "πld" is the total surface area of the tree of total length "l", and "L""D" is its total electrotonic length. For an entire neuron in which the cell body conductance is "G""S" and the membrane conductance per unit area is "G""md" = "G""m" / "A", we find the total neuron conductance "G""N" for "n" dendrite trees by adding up all tree and soma conductances, given by
formula_129
where we can find the general correction factor "F""dga" experimentally by noting "G""D" = "G""md""A""D""F""dga".
The linear cable model makes several simplifications to give closed analytic results, namely that the dendritic arbor must branch in diminishing pairs in a fixed pattern and that dendrites are linear. A compartmental model allows for any desired tree topology with arbitrary branches and lengths, as well as arbitrary nonlinearities. It is essentially a discretized computational implementation of nonlinear dendrites.
Each piece, or compartment, of a dendrite is modeled by a straight cylinder of arbitrary length "l" and diameter "d" which connects with fixed resistance to any number of branching cylinders. We define the conductance ratio of the "i"th cylinder as "B""i" = "G""i" / "G"∞, where formula_130 and "R""i" is the resistance between the current compartment and the next. We obtain a series of equations for conductance ratios in and out of a compartment by making corrections to the normal dynamic "B"out,"i" = "B"in,"i"+1, as
formula_131
formula_132
formula_133
where the last equation deals with "parents" and "daughters" at branches, and formula_134. We can iterate these equations through the tree until we get the point where the dendrites connect to the cell body (soma), where the conductance ratio is "B"in,stem. Then our total neuron conductance for static input is given by
formula_135
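As a rough Python sketch of this recursion for an unbranched chain of compartments with equal membrane resistivity (branching via the parent/daughter rule is omitted, and all geometric and resistance values are illustrative assumptions), the conductance ratio can be propagated from the sealed distal end toward the soma:

```python
import numpy as np

def chain_input_conductance(lengths, diameters, R_i=100.0, R_m=10000.0,
                            B_end=0.0):
    """Propagate the conductance ratio B from the distal end of an unbranched
    chain of cylindrical compartments toward the soma, using
    B_in,i = (B_out,i + tanh X_i) / (1 + B_out,i * tanh X_i) with
    B_out,i = B_in,i+1 (equal diameters and membrane resistivity assumed)."""
    B = B_end                        # assumed sealed end: no load beyond the last compartment
    for l, d in zip(reversed(lengths), reversed(diameters)):
        X = l * np.sqrt(4.0 * R_i) / np.sqrt(d * R_m)   # electrotonic length of the compartment
        B_out = B
        B = (B_out + np.tanh(X)) / (1.0 + B_out * np.tanh(X))
    G_inf = np.pi * diameters[0] ** 1.5 / (2.0 * np.sqrt(R_i * R_m))
    return B * G_inf                 # input conductance at the proximal end

G_in = chain_input_conductance(lengths=[0.01] * 5, diameters=[1e-4] * 5)
```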
Importantly, static input is a very special case. In biology, inputs are time-dependent. Moreover, dendrites are not always linear.
Compartmental models make it possible to include nonlinearities via ion channels positioned at arbitrary locations along the dendrites. For static inputs, it is sometimes possible to reduce the number of compartments (increasing the computational speed) while retaining the salient electrical characteristics.
Conjectures regarding the role of the neuron in the wider context of the brain principle of operation.
The neurotransmitter-based energy detection scheme.
The neurotransmitter-based energy detection scheme suggests that the neural tissue chemically executes a Radar-like detection procedure.
As shown in Fig. 6, the key idea of the conjecture is to account for neurotransmitter concentration, neurotransmitter generation, and neurotransmitter removal rates as the important quantities in executing the detection task, while referring to the measured electrical potentials as a side effect that only in certain conditions coincides with the functional purpose of each step. The detection scheme is similar to a radar-like "energy detection" because it includes signal squaring, temporal summation, and a threshold switch mechanism, just like the energy detector, but it also includes a unit that emphasizes stimulus edges and a variable memory length (variable memory). According to this conjecture, the physiological equivalent of the energy test statistics is neurotransmitter concentration, and the firing rate corresponds to neurotransmitter current. The advantage of this interpretation is that it leads to a unit-consistent explanation which provides a bridge between electrophysiological measurements, biochemical measurements, and psychophysical results.
The evidence reviewed suggests the following association between functionality and histological classification:
Note that although the electrophysiological signals in Fig. 6 are often similar to the functional signal (signal power / neurotransmitter concentration / muscle force), there are some stages in which the electrical observation differs from the functional purpose of the corresponding step. In particular, Nossenson et al. suggested that glia threshold crossing has a completely different functional operation compared to the radiated electrophysiological signal and that the latter might only be a side effect of glia break.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C_\\mathrm{m} \\frac{d V(t)}{d t} = -\\sum_i I_i (t, V)."
},
{
"math_id": 1,
"text": "I(t,V) = g(t,V)\\cdot(V-V_\\mathrm{eq})"
},
{
"math_id": 2,
"text": "g(t,V)=\\bar{g}\\cdot m(t,V)^p \\cdot h(t,V)^q"
},
{
"math_id": 3,
"text": "\\frac{d m(t,V)}{d t} = \\frac{m_\\infty(V)-m(t,V)}{\\tau_\\mathrm{m} (V)} = \\alpha_\\mathrm{m} (V)\\cdot(1-m) - \\beta_\\mathrm{m} (V)\\cdot m"
},
{
"math_id": 4,
"text": "I(t)=C \\frac{d V(t)}{d t}"
},
{
"math_id": 5,
"text": "\\,\\! f(I)= \\frac{I} {C_\\mathrm{} V_\\mathrm{th} + t_\\mathrm{ref} I}."
},
{
"math_id": 6,
"text": " C_\\mathrm{m} \\frac{d V_\\mathrm{m} (t)}{d t}= I(t)-\\frac{V_\\mathrm{m} (t)}{R_\\mathrm{m}}"
},
{
"math_id": 7,
"text": "f(I) =\n\\begin{cases} \n 0, & I \\le I_\\mathrm{th} \\\\\n \\left[ t_\\mathrm{ref}-R_\\mathrm{m} C_\\mathrm{m} \\log\\left(1-\\tfrac{V_\\mathrm{th}}{I R_\\mathrm{m}}\\right) \\right]^{-1}, & I > I_\\mathrm{th} \n\\end{cases} "
},
{
"math_id": 8,
"text": "\\tau_\\mathrm{m} \\frac{d V_\\mathrm{m} (t)}{d t} = R I(t)- [V_\\mathrm{m} (t) - E_\\mathrm{m} ]- R \\sum_k w_k"
},
{
"math_id": 9,
"text": "\\tau_k \\frac{d w_k (t)}{d t} = - a_k [V_\\mathrm{m} (t) - E_\\mathrm{m} ]- w_k + b_k \\tau_k \\sum_f \\delta (t-t^f)\n"
},
{
"math_id": 10,
"text": "\\tau_m"
},
{
"math_id": 11,
"text": "\\tau_k"
},
{
"math_id": 12,
"text": "I(t)-\\frac{V_\\mathrm{m} (t)}{R_\\mathrm{m}} = C_\\mathrm{m} \\frac{d^{\\alpha} V_\\mathrm{m} (t)}{d^{\\alpha} t}"
},
{
"math_id": 13,
"text": " \\frac{dV}{dt} - \\frac{R} {\\tau_m} I(t)= \\frac{1} {\\tau_m} \\left[ E_m-V+\\Delta_T \\exp \\left( \\frac{V - V_T} {\\Delta_T} \\right) \\right]. "
},
{
"math_id": 14,
"text": "V"
},
{
"math_id": 15,
"text": "V_T"
},
{
"math_id": 16,
"text": "E_m"
},
{
"math_id": 17,
"text": "\\Delta_T"
},
{
"math_id": 18,
"text": " \\tau_m \\frac{dV}{dt} = R I(t) + \\left[ E_m-V+\\Delta_T \\exp \\left( \\frac{V - V_T} {\\Delta_T} \\right) \\right] - R w "
},
{
"math_id": 19,
"text": "\\tau \\frac{d w (t)}{d t} = - a [V_\\mathrm{m} (t) - E_\\mathrm{m} ]- w + b \\tau \\delta (t-t^f)\n"
},
{
"math_id": 20,
"text": "\\tau"
},
{
"math_id": 21,
"text": "\\theta(t)"
},
{
"math_id": 22,
"text": "v_{th0}"
},
{
"math_id": 23,
"text": "v_{th}"
},
{
"math_id": 24,
"text": "\\tau_{\\theta}"
},
{
"math_id": 25,
"text": "v_{th}(t) = v_{th0} + \\frac{\\sum \\theta(t - t_f)}{f} = v_{th0} + \\frac{\\sum \\theta_0 \\exp\\left[-\\frac{(t - t_f)}{\\tau_{\\theta}}\\right]}{f}\n"
},
{
"math_id": 26,
"text": "\\theta(t) = \\theta_0 \\exp\\left[-\\frac{t}{\\tau_{\\theta}}\\right]"
},
{
"math_id": 27,
"text": "u(t)"
},
{
"math_id": 28,
"text": "v_{rest}"
},
{
"math_id": 29,
"text": "u(t) \\geq v_{th}(t) \\Rightarrow v(t) = v_{\\text{rest}}"
},
{
"math_id": 30,
"text": "v_{th}(t) = b_{0} + \\beta_{1}b_{1}(t) + \\beta_{2}b_{2}(t)\n"
},
{
"math_id": 31,
"text": "b_{1}(t)\n"
},
{
"math_id": 32,
"text": "b_2(t)"
},
{
"math_id": 33,
"text": "b_{1}(t + \\delta t) = p_{j1}b_{1}(t) + (1 - p_{j1})z(t)\\delta(t)"
},
{
"math_id": 34,
"text": "b_{2}(t + \\delta t) = p_{j2}b_{2}(t) + (1 - p_{j2})z(t)\\delta(t)"
},
{
"math_id": 35,
"text": "p_{j1} = \\exp\\left[-\\frac{\\delta t}{\\tau_{b1}}\\right]"
},
{
"math_id": 36,
"text": "p_{j2} = \\exp\\left[-\\frac{\\delta t}{\\tau_{b2}}\\right]"
},
{
"math_id": 37,
"text": "I(t)"
},
{
"math_id": 38,
"text": "I^{\\rm noise}(t)"
},
{
"math_id": 39,
"text": " \\tau_m \\frac{dV}{dt} = f(V) + R I(t) + R I^\\text{noise}(t) "
},
{
"math_id": 40,
"text": "I^{\\rm noise}(t) = \\xi(t)\n\n"
},
{
"math_id": 41,
"text": " \\tau_m \\frac{dV}{dt} = [E_m-V] + R I(t) + R \\xi(t) "
},
{
"math_id": 42,
"text": " dV = [E_m-V + R I(t)] \\frac{dt}{\\tau_m} + \\sigma \\, dW "
},
{
"math_id": 43,
"text": "\\sigma\n\n"
},
{
"math_id": 44,
"text": " \\Delta V = [E_m-V + R I(t)] \\frac{\\Delta t}{\\tau_m} + \\sigma \\sqrt{\\tau_m}y "
},
{
"math_id": 45,
"text": " \\tau_m \\frac{dV}{dt} =E_m-V+\\Delta_T \\exp \\left( \\frac{V - V_T} {\\Delta_T} \\right) + R I(t) + R\\xi(t) "
},
{
"math_id": 46,
"text": "I(t)=I_0"
},
{
"math_id": 47,
"text": "I_0"
},
{
"math_id": 48,
"text": "V_{th}"
},
{
"math_id": 49,
"text": "\\rho(t) = f(V(t)-V_{th}) "
},
{
"math_id": 50,
"text": "f"
},
{
"math_id": 51,
"text": " f(V-V_{th}) = \\frac{1}{\\tau_0} \\exp[\\beta(V-V_{th})] "
},
{
"math_id": 52,
"text": "\\tau_0"
},
{
"math_id": 53,
"text": "\\beta"
},
{
"math_id": 54,
"text": "\\beta\\to\\infty"
},
{
"math_id": 55,
"text": "1/\\beta\\approx 4mV"
},
{
"math_id": 56,
"text": "P_F(t_n) = F[V(t_n)-V_{th}] "
},
{
"math_id": 57,
"text": "t_n"
},
{
"math_id": 58,
"text": "F(x) = 0.5[1 + \\tanh(\\gamma x)]"
},
{
"math_id": 59,
"text": "\\gamma"
},
{
"math_id": 60,
"text": "F(y_n)\\approx 1 - \\exp[y_n\\Delta t]"
},
{
"math_id": 61,
"text": "y_n = V(t_n)-V_{th}"
},
{
"math_id": 62,
"text": "V(t)= \\sum_f \\eta(t-t^f) + \\int\\limits_0^\\infty \\kappa(s) I(t-s)\\,ds + V_\\mathrm{rest} "
},
{
"math_id": 63,
"text": "\\kappa(s)"
},
{
"math_id": 64,
"text": "t^f"
},
{
"math_id": 65,
"text": "\\eta(t-t^f)"
},
{
"math_id": 66,
"text": " f(V-\\vartheta(t)) = \\frac{1}{\\tau_0} \\exp[\\beta(V-\\vartheta(t))] "
},
{
"math_id": 67,
"text": "\\vartheta(t)"
},
{
"math_id": 68,
"text": "\\vartheta(t)= \\vartheta_0 + \\sum_f \\theta_1(t-t^f) "
},
{
"math_id": 69,
"text": "\\vartheta_0"
},
{
"math_id": 70,
"text": "\\theta_1(t-t^f)"
},
{
"math_id": 71,
"text": "\\theta_1(t-t^f) = 0"
},
{
"math_id": 72,
"text": "\\beta \\to \\infty"
},
{
"math_id": 73,
"text": "\\eta,\\kappa,\\theta_1"
},
{
"math_id": 74,
"text": "V_i(t)= \\sum_f \\eta_i(t-t_i^f) + \\sum_{j=1}^N w_{ij} \\sum_{f'}\\varepsilon_{ij}(t-t_j^{f'}) + V_\\mathrm{rest} "
},
{
"math_id": 75,
"text": "t_j^{f'}"
},
{
"math_id": 76,
"text": "\\eta_i(t-t^f_i)"
},
{
"math_id": 77,
"text": "w_{ij}"
},
{
"math_id": 78,
"text": "\\varepsilon_{ij}(t-t_j^{f'})"
},
{
"math_id": 79,
"text": "\\varepsilon_{ij}(s)"
},
{
"math_id": 80,
"text": "\\eta(s)"
},
{
"math_id": 81,
"text": "\\hat{t}"
},
{
"math_id": 82,
"text": "V(t)= \\eta(t-\\hat{t}) + \\int_0^\\infty \\kappa(s) I(t-s) \\, ds + V_\\mathrm{rest} "
},
{
"math_id": 83,
"text": "V_i(t\\mid\\hat{t}_i) = \\eta_i(t-\\hat{t}_i) + \\sum_j w_{ij} \\sum_f \\varepsilon_{ij}(t-\\hat{t}_i,t-t^f) + V_\\mathrm{rest} "
},
{
"math_id": 84,
"text": "\\hat{t}_i"
},
{
"math_id": 85,
"text": "\\varepsilon_{ij}"
},
{
"math_id": 86,
"text": " f(V-\\vartheta) = \\frac{1}{\\tau_0} \\exp[\\beta(V-V_{th})] "
},
{
"math_id": 87,
"text": "\\eta"
},
{
"math_id": 88,
"text": "i"
},
{
"math_id": 89,
"text": "t"
},
{
"math_id": 90,
"text": "\\mathop{\\mathrm{Prob}}(X_{t}(i) = 1\\mid \\mathcal{F}_{t-1}) = \\varphi_i \\Biggl( \\sum_{j\\in I} W_{j \\rightarrow i} \\sum_{s=L_t^i}^{t-1} g_j(t-s) X_s(j),~~~ t-L_t^i \\Biggl), "
},
{
"math_id": 91,
"text": "W_{j \\rightarrow i}"
},
{
"math_id": 92,
"text": "j"
},
{
"math_id": 93,
"text": "g_j"
},
{
"math_id": 94,
"text": "L_t^i"
},
{
"math_id": 95,
"text": "L_t^i =\\sup\\{s<t:X_s(i)=1\\}."
},
{
"math_id": 96,
"text": "g_{j}"
},
{
"math_id": 97,
"text": "W_{j\\to i}"
},
{
"math_id": 98,
"text": "t-L_t^i"
},
{
"math_id": 99,
"text": "\\begin{align}{rcl}\n \\dfrac{d V}{d t} &= V-V^3/3 - w + I_\\mathrm{ext} \\\\\n \\tau \\dfrac{d w}{d t} &= V-a-b w\n\\end{align}"
},
{
"math_id": 100,
"text": "\\begin{align}\nC\\frac{d V}{d t} &= -I_\\mathrm{ion}(V,w) + I \\\\\n\\frac{d w}{d t} &= \\varphi \\cdot \\frac{w_\\infty - w}{\\tau_{w}}\n\\end{align}"
},
{
"math_id": 101,
"text": "I_\\mathrm{ion}(V,w) = \\bar{g}_\\mathrm{Ca} m_\\infty \\cdot(V-V_\\mathrm{Ca}) + \\bar{g}_\\mathrm{K} w\\cdot(V-V_\\mathrm{K}) + \\bar{g}_\\mathrm{L}\\cdot(V-V_\\mathrm{L})"
},
{
"math_id": 102,
"text": "\\begin{align}\n\\frac{d x}{d t} &= y+3x^2-x^3-z+I \\\\\n\\frac{d y}{d t} &= 1-5x^2-y \\\\\n\\frac{d z}{d t} &= r\\cdot (4(x + \\tfrac{8}{5})-z)\n\\end{align}"
},
{
"math_id": 103,
"text": "\\frac{d\\theta(t)}{d t} = (I-I_0) [1+ \\cos(\\theta)] + [1- \\cos(\\theta)] "
},
{
"math_id": 104,
"text": "\\tau_\\mathrm{m} \\frac{d V_\\mathrm{m} (t)}{d t} = (I-I_0) R + [V_\\mathrm{m} (t) - E_\\mathrm{m} ][V_\\mathrm{m} (t) - V_\\mathrm{T} ] "
},
{
"math_id": 105,
"text": "[t, t+\\Delta_t]"
},
{
"math_id": 106,
"text": "g[s(t)]"
},
{
"math_id": 107,
"text": "s(t)"
},
{
"math_id": 108,
"text": "P_\\text{spike}(t\\in[t',t'+\\Delta_t])=\\Delta_t \\cdot g[s(t)]"
},
{
"math_id": 109,
"text": "g[s(t)] \\propto s^2(t)"
},
{
"math_id": 110,
"text": "w(t-\\hat{t}) "
},
{
"math_id": 111,
"text": "\\rho(t) = f(s(t))w(t-\\hat{t}) "
},
{
"math_id": 112,
"text": "R_\\text{fire}(t)=\\frac{P_\\text{spike}(t;\\Delta_t)}{\\Delta_t}=[y(t)+R_0] \\cdot P_0(t)"
},
{
"math_id": 113,
"text": "\\dot{P}_0=-[y(t)+R_0+R_1] \\cdot P_0(t) +R_1"
},
{
"math_id": 114,
"text": "y(t) \\simeq g_\\text{gain} \\cdot \\langle s^2(t)\\rangle,"
},
{
"math_id": 115,
"text": "\\langle s^2(t)\\rangle "
},
{
"math_id": 116,
"text": "I_\\mathrm{AMPA}(t,V) = \\bar{g}_\\mathrm{AMPA} \\cdot [O] \\cdot (V(t)-E_\\mathrm{AMPA})"
},
{
"math_id": 117,
"text": "I_\\mathrm{NMDA}(t,V) = \\bar{g}_\\mathrm{NMDA} \\cdot B(V) \\cdot [O] \\cdot (V(t)-E_\\mathrm{NMDA})"
},
{
"math_id": 118,
"text": "I_\\mathrm{GABA_A}(t,V) = \\bar{g}_\\mathrm{GABA_A} \\cdot ([O_1]+[O_2]) \\cdot (V(t)-E_\\mathrm{Cl})"
},
{
"math_id": 119,
"text": "I_\\mathrm{GABA_B}(t,V) = \\bar{g}_\\mathrm{GABA_B} \\cdot \\tfrac{[G]^n}{[G]^n+K_\\mathrm{d}} \\cdot (V(t)-E_\\mathrm{K})"
},
{
"math_id": 120,
"text": "y_i = \\varphi\\left( \\sum_j w_{ij} x_j \\right)"
},
{
"math_id": 121,
"text": "\\varphi"
},
{
"math_id": 122,
"text": "\\lambda"
},
{
"math_id": 123,
"text": "r_l"
},
{
"math_id": 124,
"text": "r_m"
},
{
"math_id": 125,
"text": "\\lambda^2 = {r_m}/{r_l}"
},
{
"math_id": 126,
"text": "\\tau = c_m r_m"
},
{
"math_id": 127,
"text": "G_{in} = \\frac{G_\\infty \\tanh(L) + G_L}{1+(G_L / G_\\infty )\\tanh(L)}"
},
{
"math_id": 128,
"text": "\\,\\! G_D = G_m A_D \\tanh(L_D) / L_D"
},
{
"math_id": 129,
"text": "G_N = G_S + \\sum_{j=1}^n A_{D_j} F_{dga_j},"
},
{
"math_id": 130,
"text": "G_\\infty=\\tfrac{\\pi d^{3/2}}{2\\sqrt{R_i R_m}}"
},
{
"math_id": 131,
"text": "B_{\\mathrm{out},i} = \\frac{B_{\\mathrm{in},i+1}(d_{i+1}/d_i)^{3/2} }{ \\sqrt{R_{\\mathrm{m},i+1}/R_{\\mathrm{m},i}} }"
},
{
"math_id": 132,
"text": "B_{\\mathrm{in},i} = \\frac{ B_{\\mathrm{out},i} + \\tanh X_i }{ 1+B_{\\mathrm{out},i}\\tanh X_i }"
},
{
"math_id": 133,
"text": "B_\\mathrm{out,par} = \\frac{B_\\mathrm{in,dau1} (d_\\mathrm{dau1}/d_\\mathrm{par})^{3/2}} {\\sqrt{R_\\mathrm{m,dau1}/R_\\mathrm{m,par}}} + \\frac{B_\\mathrm{in,dau2} (d_\\mathrm{dau2}/d_\\mathrm{par})^{3/2}} {\\sqrt{R_\\mathrm{m,dau2}/R_\\mathrm{m,par}}} + \\ldots"
},
{
"math_id": 134,
"text": "X_i = \\tfrac{l_i \\sqrt{4R_i}}{\\sqrt{d_i R_m}}"
},
{
"math_id": 135,
"text": "G_N = \\frac{A_\\mathrm{soma}}{R_\\mathrm{m,soma}} + \\sum_j B_{\\mathrm{in,stem},j} G_{\\infty,j}."
}
] |
https://en.wikipedia.org/wiki?curid=14408479
|
1440970
|
Triangulation (geometry)
|
Subdivision of a planar object into triangles
In geometry, a triangulation is a subdivision of a planar object into triangles, and by extension the subdivision of a higher-dimension geometric object into simplices. Triangulations of a three-dimensional volume would involve subdividing it into tetrahedra packed together.
In most instances, the triangles of a triangulation are required to meet edge-to-edge and vertex-to-vertex.
Types.
Different types of triangulations may be defined, depending both on what geometric object is to be subdivided and on how the subdivision is determined.
Generalization.
The concept of a triangulation may also be generalized somewhat to subdivisions into shapes related to triangles. In particular, a pseudotriangulation of a point set is a partition of the convex hull of the points into pseudotriangles—polygons that, like triangles, have exactly three convex vertices. As in point set triangulations, pseudotriangulations are required to have their vertices at the given input points.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "\\mathbb{R}^d"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "\\mathcal{P}\\subset\\mathbb{R}^d"
},
{
"math_id": 4,
"text": "\\mathcal{P}"
},
{
"math_id": 5,
"text": "\\Sigma"
},
{
"math_id": 6,
"text": "T_\\alpha"
},
{
"math_id": 7,
"text": "\\R^2"
},
{
"math_id": 8,
"text": "f_\\alpha"
},
{
"math_id": 9,
"text": "T_\\alpha \\cap T_\\beta"
},
{
"math_id": 10,
"text": "f_{\\alpha}f_{\\beta}^{-1} "
}
] |
https://en.wikipedia.org/wiki?curid=1440970
|
1440976
|
Triangulation (topology)
|
In mathematics, triangulation describes the replacement of topological spaces by piecewise linear spaces, i.e. the choice of a homeomorphism to a suitable simplicial complex. Spaces that are homeomorphic to a simplicial complex are called triangulable. Triangulation has various uses in different branches of mathematics, for instance in algebraic topology, in complex analysis, and in modeling.
Motivation.
On the one hand, it is sometimes useful to forget about superfluous information of topological spaces: The replacement of the original spaces with simplicial complexes may help to recognize crucial properties and to gain a better understanding of the considered object.
On the other hand, simplicial complexes are objects of a combinatorial character and therefore one can assign to them quantities arising from their combinatorial pattern, for instance the Euler characteristic. Triangulation now allows one to assign such quantities to topological spaces.
Investigations concerning the existence and uniqueness of triangulations established a new branch in topology, namely piecewise-linear topology (short: PL-topology). Its main purpose is the study of topological properties of simplicial complexes and their generalization, cell-complexes.
Simplicial complexes.
Abstract simplicial complexes.
An abstract simplicial complex above a set formula_0 is a system formula_1 of non-empty subsets such that:
The elements of formula_8 are called "simplices," the elements of formula_0 are called "vertices." A simplex with formula_9 vertices has "dimension" formula_10 by definition. The dimension of an abstract simplicial complex is defined as formula_11.
Abstract simplicial complexes can be thought of as geometrical objects too. This requires the term of geometric simplex.
Geometric simplices.
Let formula_12 be formula_13 affinely independent points in formula_14, i.e. the vectors formula_15 are linearly independent. The set formula_16 is said to be the "simplex spanned by formula_12". It has "dimension" formula_10 by definition. The points formula_12 are called the vertices of formula_17, the simplices spanned by formula_10 of the formula_9 vertices are called faces and the boundary formula_18 is defined to be the union of its faces.
The formula_10"-dimensional standard-simplex" is the simplex spanned by the unit vectors formula_19
Geometric simplicial complexes.
A geometric simplicial complex formula_20 is a collection of geometric simplices such that
The union of all the simplices in formula_22 gives the set of points of formula_22, denoted formula_24 This set formula_25 is endowed with a topology by choosing the closed sets to be formula_26 "is closed for all" formula_27. Note that, in general, this topology is not the same as the subspace topology that formula_25 inherits from formula_14. The topologies do coincide in the case that each point in the complex lies only in finitely many simplices.
Each geometric complex can be associated with an abstract complex by choosing as a ground set formula_0 the set of vertices that appear in any simplex of formula_22 and as system of subsets the subsets of formula_0 which correspond to vertex sets of simplices in formula_22.
A natural question is if vice versa, any abstract simplicial complex corresponds to a geometric complex. In general, the geometric construction as mentioned here is not flexible enough: consider for instance an abstract simplicial complex of infinite dimension. However, the following more abstract construction provides a topological space for any kind of abstract simplicial complex:
Let formula_8 be an abstract simplicial complex above a set formula_0. Choose a union of simplices formula_28, but each in formula_29 of dimension sufficiently large, such that the geometric simplex formula_30 is of dimension formula_10 if the abstract geometric simplex formula_31 has dimension formula_10. If formula_32, formula_33 can be identified with a face of formula_34 and the resulting topological space is the gluing formula_35 Effectuating the gluing for each inclusion, one ends up with the desired topological space.
As in the previous construction, by the topology induced by gluing, the closed sets in this space are the subsets that are closed in the subspace topology of every simplex formula_30 in the complex.
The simplicial complex formula_36, which consists of all simplices of formula_8 of dimension formula_37, is called the formula_10"-th skeleton" of formula_8.
A natural neighbourhood of a vertex formula_38 in a simplicial complex formula_22 is considered to be given by the star formula_39 of a simplex, whose boundary is the link formula_40.
Simplicial maps.
The maps considered in this category are simplicial maps: Let formula_41, formula_42 be abstract simplicial complexes above sets formula_43, formula_44. A simplicial map is a function formula_45 which maps each simplex in formula_41 onto a simplex in formula_42. By affine-linear extension on the simplices, formula_46 induces a map between the geometric realizations of the complexes.
Definition.
A triangulation of a topological space formula_54 is a homeomorphism formula_55 where formula_8 is a simplicial complex. Topological spaces do not necessarily admit a triangulation and if they do, it is not necessarily unique.
Invariants.
Triangulations of spaces allow assigning combinatorial invariants rising from their dedicated simplicial complexes to spaces. These are characteristics that equal for complexes that are isomorphic via a simplicial map and thus have the same combinatorial pattern.
This data might be useful to classify topological spaces up to homeomorphism but only given that the characteristics are also topological invariants, meaning, they do not depend on the chosen triangulation. For the data listed here, this is the case. For details and the link to singular homology, see topological invariance.
Homology.
Via triangulation, one can assign a chain complex to topological spaces that arise from its simplicial complex and compute its "simplicial homology". Compact spaces always admit finite triangulations and therefore their homology groups are finitely generated and only finitely many of them do not vanish. Other data as Betti-numbers or Euler characteristic can be derived from homology.
Betti-numbers and Euler-characteristics.
Let formula_25 be a finite simplicial complex. The formula_10-th Betti-number formula_64 is defined to be the rank of the formula_10-th simplicial homology group of the space. These numbers encode geometric properties of the space: the Betti-number formula_65, for instance, represents the number of connected components. For a triangulated, closed orientable surface formula_31, formula_66 holds, where formula_67 denotes the genus of the surface: therefore its first Betti-number represents twice the number of handles of the surface.
With the comments above, for compact spaces all Betti-numbers are finite and almost all are zero. Therefore, one can form their alternating sum
formula_68
which is called the "Euler characteristic" of the complex, a catchy topological invariant.
Topological invariance.
To use these invariants for the classification of topological spaces up to homeomorphism one needs invariance of the characteristics regarding homeomorphism.
A famous approach to the question, at the beginning of the 20th century, was the attempt to show that any two triangulations of the same topological space admit a common "subdivision". This assumption is known as the "Hauptvermutung" (German: main assumption). Let formula_69 be a simplicial complex. A complex formula_70 is said to be a subdivision of formula_42 iff:
Those conditions ensure that subdivisions do not change the simplicial complex as a set or as a topological space. A map formula_73 between simplicial complexes is said to be piecewise linear if there is a refinement formula_74 of formula_41 such that formula_75 is piecewise linear on each simplex of formula_41. Two complexes that correspond to one another via a piecewise linear bijection are said to be combinatorially isomorphic. In particular, two complexes that have a common refinement are combinatorially equivalent. Homology groups are invariant under combinatorial equivalence, and therefore the Hauptvermutung would give the topological invariance of simplicial homology groups. In 1918, Alexander introduced the concept of singular homology. Henceforth, most of the invariants arising from triangulation were replaced by invariants arising from singular homology. For those new invariants, it can be shown that they are invariant under homeomorphism and even under homotopy equivalence. Furthermore, it was shown that singular and simplicial homology groups coincide. This workaround has shown the invariance of the data under homeomorphism. The Hauptvermutung lost importance, but it was the starting point for a new branch in topology: piecewise linear topology (short: PL-topology).
Hauptvermutung.
The Hauptvermutung (German for "main conjecture") states that two triangulations always admit a common subdivision. Originally, its purpose was to prove the invariance of combinatorial invariants under homeomorphisms. The assumption that such subdivisions exist in general is intuitive, as subdivisions are easy to construct for simple spaces, for instance for low-dimensional manifolds. Indeed the assumption was proven for manifolds of dimension formula_76 and for differentiable manifolds, but it was disproved in general. An important tool to show that triangulations do not admit a common subdivision, i.e. that their underlying complexes are not combinatorially isomorphic, is the combinatorial invariant of Reidemeister torsion.
Reidemeister-torsion.
To disprove the Hauptvermutung it is helpful to use combinatorial invariants which are not topological invariants. A famous example is Reidemeister-torsion. It can be assigned to a tuple formula_77 of CW-complexes: if formula_78, this characteristic is a topological invariant, but if formula_79, in general it is not. An approach to the Hauptvermutung was to find homeomorphic spaces with different values of Reidemeister-torsion. This invariant was used initially to classify lens-spaces, and the first counterexamples to the Hauptvermutung were built based on lens-spaces:
Classification of lens-spaces.
In its original formulation, lens spaces are 3-manifolds, constructed as quotient spaces of the 3-sphere: Let formula_80 be natural numbers, such that formula_80 are coprime. The lens space formula_81 is defined to be the orbit space of the free group action
formula_82
formula_83.
For different tuples formula_84, lens spaces will be homotopy-equivalent but not homeomorphic. Therefore they cannot be distinguished with the help of classical invariants such as the fundamental group, but they can be distinguished by the use of Reidemeister-torsion.
Two lens spaces formula_85 are homeomorphic, if and only if formula_86. This is the case iff two lens spaces are "simple-homotopy-equivalent". The fact can be used to construct counterexamples for the Hauptvermutung as follows. Suppose there are spaces formula_87 derived from non-homeomorphic lens spaces formula_85having different Reidemeister torsion. Suppose further that the modification into formula_87 does not affect Reidemeister torsion but such that after modification formula_88 and formula_89 are homeomorphic. The resulting spaces will disprove the Hauptvermutung.
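The homeomorphism criterion formula_86 can be checked mechanically. The following Python sketch is an editorial illustration with ad hoc function names; the homotopy-equivalence test it includes uses the classical criterion (the product of the two parameters must be plus or minus a square modulo p), which is background knowledge rather than a statement from the text above.

    # Illustrative sketch: checking the lens space homeomorphism criterion
    # q1 = +/- q2^(+/-1) (mod p) for small examples.
    def homeomorphic(p, q1, q2):
        """L(p,q1) and L(p,q2) are homeomorphic iff q1 = +/- q2^(+/-1) mod p."""
        q2_inv = pow(q2, -1, p)          # modular inverse (Python 3.8+), needs gcd(q2, p) = 1
        candidates = {q2 % p, (-q2) % p, q2_inv, (-q2_inv) % p}
        return q1 % p in candidates

    # Classical homotopy-equivalence criterion (background knowledge, not
    # stated in the article above): q1*q2 must be +/- a square modulo p.
    def homotopy_equivalent(p, q1, q2):
        squares = {(n * n) % p for n in range(p)}
        return (q1 * q2) % p in squares or (-q1 * q2) % p in squares

    print(homeomorphic(7, 1, 2))          # False: L(7,1) and L(7,2) are not homeomorphic
    print(homotopy_equivalent(7, 1, 2))   # True: they are homotopy equivalent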
Existence of triangulation.
Besides the question of concrete triangulations for computational issues, there are statements about spaces that are easier to prove given that the spaces are simplicial complexes. Especially manifolds are of interest. Topological manifolds of dimension formula_76 are always triangulable, but there are non-triangulable manifolds in every dimension formula_10 greater than three. Further, differentiable manifolds always admit triangulations.
Piecewise linear structures.
Manifolds are an important class of spaces. It is natural to require them not only to be triangulable but moreover to admit a piecewise linear atlas, a PL-structure:
Let formula_90 be a simplicial complex such that every point admits an open neighborhood formula_91 such that there is a triangulation of formula_91 and a piecewise linear homeomorphism formula_92. Then formula_90 is said to be a "piecewise linear (PL) manifold of dimension" formula_10 and the triangulation together with the PL-atlas is said to be a "PL-structure on" formula_90.
An important lemma is the following:
Let formula_54 be a topological space. The following statements are equivalent:
The equivalence of the second and the third statement follows from the fact that the link of a vertex is independent of the chosen triangulation, up to combinatorial isomorphism. One can show that differentiable manifolds and manifolds of dimension formula_76 always admit a PL-structure. Counterexamples to the triangulation conjecture are, of course, also counterexamples to the conjecture that a PL-structure always exists.
Moreover, there are examples of triangulated spaces which do not admit a PL-structure. Consider an formula_94-dimensional PL-homology-sphere formula_54. The double suspension formula_95 is a topological formula_10-sphere. Choosing a triangulation formula_96 obtained via the suspension operation on triangulations, the resulting simplicial complex is not a PL-manifold, because there is a vertex formula_97 such that formula_98 is not a formula_93-sphere.
A question arising from the definition is whether PL-structures are always unique: given two PL-structures for the same space formula_99, is there a homeomorphism formula_100 which is piecewise linear with respect to both PL-structures? The assumption is similar to the Hauptvermutung, and indeed there are spaces which have different PL-structures which are not equivalent. Triangulations of PL-equivalent spaces can be transformed into one another via Pachner moves:
Pachner Moves.
Pachner moves are a way to manipulate triangulations: Let formula_101 be a simplicial complex. For two simplices formula_102 the "join"
formula_103 is the set of points lying on straight line segments between points in formula_104 and points in formula_105. Choose formula_106 such that formula_107 for some formula_104 not lying in formula_22. A new complex formula_108 can be obtained by replacing formula_109 by formula_110. This replacement is called a "Pachner move". The theorem of Pachner states that whenever two triangulated manifolds are PL-equivalent, there is a series of Pachner moves transforming one into the other.
Cellular complexes.
A similar but more flexible construction than simplicial complexes is the one of "cellular complexes" (or CW-complexes). Its construction is as follows:
An formula_10-cell is the closed formula_10-dimensional unit ball formula_111; an open formula_10-cell is its interior formula_112. Let formula_54 be a topological space and let formula_113 be a continuous map. The space formula_114 is said to be "obtained from formula_54 by gluing on an formula_10-cell".
A cell complex is a union formula_115 of topological spaces such that formula_116 is a discrete set of points and each formula_117 is obtained from formula_118 by gluing on formula_10-cells.
Each simplicial complex is a CW-complex; the converse is not true. The construction of CW-complexes can be used to define cellular homology, and one can show that cellular homology and simplicial homology coincide. For computational issues, it is sometimes easier to assume spaces to be CW-complexes and determine their homology via a cellular decomposition; an example is the projective plane formula_63: its construction as a CW-complex needs three cells, whereas its simplicial complex consists of 54 simplices.
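As a small illustration of why cellular decompositions ease computation, the Euler characteristic can be read off as the alternating sum of cell counts. The following Python sketch is an editorial illustration; the cell counts used for the torus are standard background knowledge, not taken from the text above.

    # Illustrative sketch: Euler characteristic as the alternating sum of
    # cell counts, chi = c_0 - c_1 + c_2 - ...
    def euler_characteristic(cell_counts):
        return sum((-1) ** k * c for k, c in enumerate(cell_counts))

    # The projective plane with one cell in each of the dimensions 0, 1, 2
    # (the three cells mentioned above); the torus with 1 vertex, 2 edges
    # and 1 face (standard CW-structure, assumed here).
    print(euler_characteristic([1, 1, 1]))   # projective plane: chi = 1
    print(euler_characteristic([1, 2, 1]))   # torus: chi = 0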
Other applications.
Classification of manifolds.
By triangulating 1-dimensional manifolds, one can show that they are always homeomorphic to disjoint copies of the real line and the unit sphere formula_119. Moreover, surfaces, i.e. 2-manifolds, can be classified completely: Let formula_21 be a compact surface.
To prove this theorem one constructs a fundamental polygon of the surface; this can be done by using the simplicial structure obtained from the triangulation.
Maps on simplicial complexes.
Giving spaces a simplicial structure can help in understanding maps defined on them. The maps can often be assumed to be simplicial maps via the simplicial approximation theorem:
Simplicial approximation.
Let formula_41, formula_42 be abstract simplicial complexes above sets formula_43, formula_44. A simplicial map is a function formula_45 which maps each simplex in formula_41 onto a simplex in formula_42. By affine-linear extension on the simplices, formula_46 induces a map between the geometric realizations of the complexes. Each point in a geometric complex lies in the interior of exactly one simplex, its "support". Consider now a "continuous" map formula_122. A simplicial map formula_123 is said to be a "simplicial approximation" of formula_75 if and only if each formula_124 is mapped by formula_67 onto the support of formula_125 in formula_42. If such an approximation exists, one can construct a homotopy formula_126 transforming formula_46 into formula_67 by defining it on each simplex, where it always exists because simplices are contractible.
The simplicial approximation theorem guarantees, for every continuous function formula_45, the existence of a simplicial approximation, at least after a refinement of formula_41, for instance obtained by replacing formula_41 with its iterated barycentric subdivision. The theorem plays an important role for certain statements in algebraic topology, in that it reduces the behavior of continuous maps to that of simplicial maps, for instance in "Lefschetz's fixed-point theorem".
Lefschetz's fixed-point theorem.
The "Lefschetz number" is a useful tool to find out whether a continuous function admits fixed points. It is computed as follows: suppose that formula_54 and formula_99 are topological spaces that admit finite triangulations. A continuous map formula_127 induces homomorphisms formula_128 between its simplicial homology groups with coefficients in a field formula_104. These are linear maps between formula_129-vector spaces, so their traces formula_130 can be determined and their alternating sum
formula_131
is called the "Lefschetz number" of formula_75. If formula_132, this number is the Euler characteristic of formula_54. The fixed-point theorem states that whenever formula_133, formula_75 has a fixed point. In the proof this is first shown only for simplicial maps and then generalized to arbitrary continuous functions via the approximation theorem. Brouwer's fixed-point theorem treats the case where formula_134 is an endomorphism of the unit ball. For formula_135 all its homology groups formula_136 vanish, and formula_137 is always the identity, so formula_138; hence formula_75 has a fixed point.
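The following Python sketch is an editorial illustration of the computation: the Lefschetz number is formed as the alternating sum of traces of matrices representing the induced maps on homology, and the Brouwer case reduces to a single one-dimensional contribution.

    # Illustrative sketch: Lefschetz number as the alternating sum of traces
    # of the maps induced on homology (represented here by matrices).
    import numpy as np

    def lefschetz_number(induced_maps):
        """induced_maps[k] is the matrix of the induced map on H_k with field coefficients."""
        return sum((-1) ** k * np.trace(m) for k, m in enumerate(induced_maps))

    # Brouwer's situation for the unit ball: only H_0 is nonzero (one-dimensional)
    # and the induced map on H_0 is the identity, so L(f) = 1 for any continuous f.
    maps_on_homology = [np.eye(1)]              # H_0 only; all higher groups vanish
    print(lefschetz_number(maps_on_homology))   # 1.0, nonzero, hence a fixed point exists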
Formula of Riemann-Hurwitz.
The Riemann-Hurwitz formula allows one to determine the genus of a compact, connected Riemann surface formula_139 without using an explicit triangulation. The proof needs the existence of triangulations for surfaces in an abstract sense: let formula_140 be a non-constant holomorphic function on a surface with known genus. The relation between the genera formula_141 of the surfaces formula_139 and formula_142 is
formula_143
where formula_144 denotes the degree of the map. The sum is well defined, as it counts only the ramification points of the function.
The background of this formula is that holomorphic functions on Riemann surfaces are ramified coverings. The formula can be found by examining the image of the simplicial structure near the ramification points.
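Solving the formula for the genus of formula_139 is a short computation. The Python sketch below is an editorial illustration; the degree-2 covering of the sphere branched over six points is a hypothetical example, and it yields genus 2.

    # Illustrative sketch: solving Riemann-Hurwitz for the genus of X,
    #   2g(X) - 2 = deg(F) * (2g(Y) - 2) + sum_x (ord_x(F) - 1).
    def genus_of_cover(deg, genus_Y, ramification_orders):
        correction = sum(e - 1 for e in ramification_orders)
        two_g_minus_2 = deg * (2 * genus_Y - 2) + correction
        assert two_g_minus_2 % 2 == 0
        return two_g_minus_2 // 2 + 1

    # Hypothetical example: a degree-2 map to the sphere (genus 0) branched
    # over six points, each of ramification order 2, gives genus 2.
    print(genus_of_cover(2, 0, [2] * 6))   # 2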
|
[
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "\\mathcal{T} \\subset \\mathcal{P} (V)"
},
{
"math_id": 2,
"text": "\\{v_0\\} \\in \\mathcal{T}"
},
{
"math_id": 3,
"text": "v_0\\in V"
},
{
"math_id": 4,
"text": "E \\in \\mathcal{T}"
},
{
"math_id": 5,
"text": "\\emptyset \\neq F\\subset E"
},
{
"math_id": 6,
"text": "\\Rightarrow"
},
{
"math_id": 7,
"text": "F \\in \\mathcal{T}"
},
{
"math_id": 8,
"text": "\\mathcal{T}"
},
{
"math_id": 9,
"text": "n+1"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "\\text{dim}(\\mathcal{T})= \\text{sup}\\;\\{\\text{dim}(F):F \\in \\mathcal{T}\\} \\in \\mathbb{N}\\cup \\infty"
},
{
"math_id": 12,
"text": "p_0,...p_n"
},
{
"math_id": 13,
"text": "n+1\n"
},
{
"math_id": 14,
"text": "\\mathbb{R}^n"
},
{
"math_id": 15,
"text": "(p_1-p_0), (p_2-p_0),\\dots (p_n-p_0)"
},
{
"math_id": 16,
"text": "\\Delta = \\Bigl\\{x \\in \\mathbb{R}^n \\;\\Big|\\; x= \\sum_{i=0}^n t_ip_i\\; with \\; 0\\leq t_i\\leq 1\\; and \\; \\sum_{i=0}^n t_i =1 \\Bigr\\}"
},
{
"math_id": 17,
"text": " \\Delta "
},
{
"math_id": 18,
"text": "\\partial \\Delta"
},
{
"math_id": 19,
"text": " e_0,...e_n"
},
{
"math_id": 20,
"text": "\\mathcal{S}\\subseteq\\mathcal{P}(\\mathbb{R}^n)"
},
{
"math_id": 21,
"text": "S"
},
{
"math_id": 22,
"text": "\\mathcal{S}"
},
{
"math_id": 23,
"text": "S, T"
},
{
"math_id": 24,
"text": "|\\mathcal{S}|=\\bigcup_{S \\in \\mathcal{S}} S."
},
{
"math_id": 25,
"text": "|\\mathcal{S}|"
},
{
"math_id": 26,
"text": "\\{A \\subseteq |\\mathcal{S}| \\;\\mid\\; A \\cap \\Delta "
},
{
"math_id": 27,
"text": " \\Delta \\in \\mathcal{S}\\}"
},
{
"math_id": 28,
"text": "(\\Delta_F)_{F \\in \\mathcal{T}}"
},
{
"math_id": 29,
"text": "\\mathbb {R}^N"
},
{
"math_id": 30,
"text": "\\Delta_F"
},
{
"math_id": 31,
"text": "F"
},
{
"math_id": 32,
"text": "E\\subset F"
},
{
"math_id": 33,
"text": "\\Delta_E\\subset \\mathbb{R}^N"
},
{
"math_id": 34,
"text": "\\Delta_F\\subset\\mathbb{R}^M"
},
{
"math_id": 35,
"text": "\\Delta_E \\cup_{i}\\Delta_F"
},
{
"math_id": 36,
"text": "\\mathcal{T_n}"
},
{
"math_id": 37,
"text": "\\leq n"
},
{
"math_id": 38,
"text": "v \\in V"
},
{
"math_id": 39,
"text": "\\operatorname{star}(v)=\\{ L \\in \\mathcal{S} \\;\\mid\\; v \\in L \\}"
},
{
"math_id": 40,
"text": "\\operatorname{link}(v)"
},
{
"math_id": 41,
"text": "\\mathcal{K}"
},
{
"math_id": 42,
"text": "\\mathcal{L}"
},
{
"math_id": 43,
"text": "V_K"
},
{
"math_id": 44,
"text": "V_L"
},
{
"math_id": 45,
"text": "f:V_K \\rightarrow V_L"
},
{
"math_id": 46,
"text": "f "
},
{
"math_id": 47,
"text": "W =\\{a,b,c,d,e,f\\}"
},
{
"math_id": 48,
"text": "\\mathcal{T} = \\Big\\{ \\{a\\}, \\{b\\},\\{c\\},\\{d\\},\\{e\\},\\{f\\}, \\{a,b\\},\\{a,c\\},\\{a,d\\},\\{a,e\\},\\{a,f\\}\\Big\\}"
},
{
"math_id": 49,
"text": "\\{a\\}"
},
{
"math_id": 50,
"text": "V= \\{A,B,C,D\\}"
},
{
"math_id": 51,
"text": "\\mathcal{S} = \\mathcal{P}(V)"
},
{
"math_id": 52,
"text": "\\mathcal{S}' =\\; \\mathcal{P}(\\mathcal{V})\\setminus \\{A,B,C,D\\}"
},
{
"math_id": 53,
"text": "|\\mathcal{S'}| = \\partial |\\mathcal{S}|"
},
{
"math_id": 54,
"text": "X"
},
{
"math_id": 55,
"text": "t: |\\mathcal{T}|\\rightarrow X"
},
{
"math_id": 56,
"text": "\\mathcal{S}, \\mathcal{S'}"
},
{
"math_id": 57,
"text": "\\mathbb{D}^3"
},
{
"math_id": 58,
"text": "t:|\\mathcal{S}| \\rightarrow \\mathbb{D}^3"
},
{
"math_id": 59,
"text": "t"
},
{
"math_id": 60,
"text": " |\\mathcal{S}'|"
},
{
"math_id": 61,
"text": " t':|\\mathcal{S}'| \\rightarrow \\mathbb{S}^2"
},
{
"math_id": 62,
"text": "\\mathbb{T}^2 = \\mathbb{S}^1 \\times \\mathbb{S}^1"
},
{
"math_id": 63,
"text": "\\mathbb{P}^2"
},
{
"math_id": 64,
"text": "b_n(\\mathcal{S})"
},
{
"math_id": 65,
"text": "b_0(\\mathcal{S})"
},
{
"math_id": 66,
"text": "b_1(F)= 2g"
},
{
"math_id": 67,
"text": "g"
},
{
"math_id": 68,
"text": "\\sum_{k=0}^{\\infty} (-1)^{k}b_k(\\mathcal{S})"
},
{
"math_id": 69,
"text": "|\\mathcal{L}|\\subset \\mathbb{R}^N "
},
{
"math_id": 70,
"text": " |\\mathcal{L'}|\\subset \\mathbb{R}^N"
},
{
"math_id": 71,
"text": "\\mathcal{L'} "
},
{
"math_id": 72,
"text": "\\mathcal{L} "
},
{
"math_id": 73,
"text": "f: \\mathcal{K} \\rightarrow \\mathcal{L}"
},
{
"math_id": 74,
"text": "\\mathcal{K'}"
},
{
"math_id": 75,
"text": "f"
},
{
"math_id": 76,
"text": "\\leq 3"
},
{
"math_id": 77,
"text": "(K,L)"
},
{
"math_id": 78,
"text": "L = \\emptyset"
},
{
"math_id": 79,
"text": "L \\neq \\emptyset"
},
{
"math_id": 80,
"text": "p, q"
},
{
"math_id": 81,
"text": "L(p,q)"
},
{
"math_id": 82,
"text": "\\Z/p\\Z\\times S^{3}\\to S^{3}"
},
{
"math_id": 83,
"text": "(k,(z_1,z_2)) \\mapsto (z_1 \\cdot e^{2\\pi i k/p}, z_2 \\cdot e^{2\\pi i kq/p} )"
},
{
"math_id": 84,
"text": "(p, q)"
},
{
"math_id": 85,
"text": "L(p,q_1), L(p,q_2)"
},
{
"math_id": 86,
"text": "q_1 \\equiv \\pm q_2^{\\pm 1} \\pmod{p} "
},
{
"math_id": 87,
"text": "L'_1, L'_2"
},
{
"math_id": 88,
"text": "L'_1"
},
{
"math_id": 89,
"text": "L'_2"
},
{
"math_id": 90,
"text": "|X|"
},
{
"math_id": 91,
"text": "U"
},
{
"math_id": 92,
"text": "f: U \\rightarrow \\mathbb{R}^n"
},
{
"math_id": 93,
"text": "n-1"
},
{
"math_id": 94,
"text": "n-2"
},
{
"math_id": 95,
"text": "S^2X"
},
{
"math_id": 96,
"text": "t: |\\mathcal{S}| \\rightarrow S^2 X"
},
{
"math_id": 97,
"text": "v"
},
{
"math_id": 98,
"text": "link(v)"
},
{
"math_id": 99,
"text": "Y"
},
{
"math_id": 100,
"text": "F:Y\\rightarrow Y"
},
{
"math_id": 101,
"text": "\\mathcal{S} "
},
{
"math_id": 102,
"text": "K, L"
},
{
"math_id": 103,
"text": "K*L= \\Big\\{ tk+(1-t)l\\;|\\;k\\in K,l\\in L \\;t \\in [0,1]\\Big\\}"
},
{
"math_id": 104,
"text": "K"
},
{
"math_id": 105,
"text": "L"
},
{
"math_id": 106,
"text": "S \\in \\mathcal{S}"
},
{
"math_id": 107,
"text": "lk(S)= \\partial K"
},
{
"math_id": 108,
"text": "\\mathcal{S'}"
},
{
"math_id": 109,
"text": "S * \\partial K"
},
{
"math_id": 110,
"text": "\\partial S * K"
},
{
"math_id": 111,
"text": "B_n= [0,1]^n"
},
{
"math_id": 112,
"text": "B_n= [0,1]^n\\setminus \\mathbb{S}^{n-1}"
},
{
"math_id": 113,
"text": "f: \\mathbb{S}^{n-1}\\rightarrow X"
},
{
"math_id": 114,
"text": "X \\cup_{f}B_n"
},
{
"math_id": 115,
"text": "X=\\cup_{n\\geq 0} X_n"
},
{
"math_id": 116,
"text": "X_0"
},
{
"math_id": 117,
"text": "X_n"
},
{
"math_id": 118,
"text": "X_{n-1}"
},
{
"math_id": 119,
"text": "\\mathbb{S}^1"
},
{
"math_id": 120,
"text": "2"
},
{
"math_id": 121,
"text": "n\\geq 0"
},
{
"math_id": 122,
"text": "f:\\mathcal{K}\\rightarrow \\mathcal{L} "
},
{
"math_id": 123,
"text": "g:\\mathcal{K}\\rightarrow \\mathcal{L} "
},
{
"math_id": 124,
"text": "x \\in \\mathcal{K}"
},
{
"math_id": 125,
"text": "f(x)"
},
{
"math_id": 126,
"text": "H"
},
{
"math_id": 127,
"text": "f: X\\rightarrow Y"
},
{
"math_id": 128,
"text": "f_i: H_i(X,K)\\rightarrow H_i(Y,K)"
},
{
"math_id": 129,
"text": "K "
},
{
"math_id": 130,
"text": "tr_i"
},
{
"math_id": 131,
"text": "L_K(f)= \\sum_i(-1)^itr_i(f) \\in K"
},
{
"math_id": 132,
"text": "f = id"
},
{
"math_id": 133,
"text": "L_K(f)\\neq 0"
},
{
"math_id": 134,
"text": "f:\\mathbb{D}^n \\rightarrow \\mathbb{D}^n"
},
{
"math_id": 135,
"text": "k \\geq 1"
},
{
"math_id": 136,
"text": "H_k(\\mathbb{D}^n)"
},
{
"math_id": 137,
"text": "f_0"
},
{
"math_id": 138,
"text": "L_K(f) = tr_0(f) = 1 \\neq 0"
},
{
"math_id": 139,
"text": "X "
},
{
"math_id": 140,
"text": "F:X \\rightarrow Y "
},
{
"math_id": 141,
"text": "g "
},
{
"math_id": 142,
"text": "Y "
},
{
"math_id": 143,
"text": "2g(X)-2= deg(F)(2g(Y)-2) \\sum_{x\\in X}(ord(F)-1)"
},
{
"math_id": 144,
"text": "deg(F) "
}
] |
https://en.wikipedia.org/wiki?curid=1440976
|
14410687
|
List of National League annual slugging percentage leaders
|
List of National League Slugging Percentage Leaders
The National League slugging percentage leader is the Major League Baseball player in the National League who has the highest slugging percentage in a particular season.
In baseball statistics, slugging percentage (abbreviated SLG) is a measure of the power of a hitter. It is calculated as total bases divided by at bats:
formula_0
where "AB" is the number of at-bats for a given player, and "1B", "2B", "3B", and "HR" are the number of singles, doubles, triples, and home runs, respectively. Walks are specifically excluded from this calculation.
Currently, a player needs to accrue an average of at least 3.1 plate appearances for each game his team plays in order to qualify for the title. An exception to this qualification rule is that if a player falls short of 3.1 plate appearances per game, but would still have the highest slugging percentage if enough hitless at-bats were added to reach the 3.1 average mark, the player still wins the slugging percentage championship.
The latest example of this exception being employed was in 2007, when Ryan Braun had a .634 slugging percentage, but only 492 plate appearances – 10 short of the 502 necessary. The addition of 10 hitless at-bats would have lowered his slugging percentage to a value that was still better than anyone else in the league, so Braun was the National League slugging percentage champion. A similar situation occurred when Tony Gwynn won the NL batting title in 1996.
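The calculation, together with the hitless at-bat adjustment described above, can be sketched as follows in Python; the season line in the example is hypothetical and the function names are ad hoc, not an official MLB implementation.

    # Illustrative sketch of the slugging percentage calculation and of the
    # qualification adjustment (hitless at-bats added to reach 3.1 PA per game).
    def slg(singles, doubles, triples, homers, at_bats):
        total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
        return total_bases / at_bats

    def adjusted_slg(singles, doubles, triples, homers, at_bats,
                     plate_appearances, team_games):
        required_pa = round(3.1 * team_games)               # 502 for a 162-game season
        shortfall = max(0, required_pa - plate_appearances)
        return slg(singles, doubles, triples, homers, at_bats + shortfall)

    # Hypothetical season line: the adjusted value is what is compared with the
    # rest of the league, while the unadjusted value is the one that gets listed.
    print(slg(90, 30, 5, 35, 450))                      # about 0.678
    print(adjusted_slg(90, 30, 5, 35, 450, 492, 162))   # adds 10 hitless at-bats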
Year-by-Year National League Slugging Percentage Leaders
+ Hall of Famer
A ** by the stat's value indicates the player had fewer than the required number of plate appearances for the SLG title that year. In order to rank the player, the necessary number of hitless at bats were added to the player's season total. The value here is their actual value, and not the value used to rank them.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "SLG = \\frac{(\\mathit{1B}) + (2 \\times \\mathit{2B}) + (3 \\times \\mathit{3B}) + (4 \\times \\mathit{HR})}{AB}"
}
] |
https://en.wikipedia.org/wiki?curid=14410687
|
1441111
|
Debye sheath
|
Plasma layer with a positive charge
The Debye sheath (also electrostatic sheath) is a layer in a plasma which has a greater density of positive ions, and hence an overall excess positive charge, that balances an opposite negative charge on the surface of a material with which it is in contact. Such a layer is several Debye lengths thick, a value whose size depends on various characteristics of the plasma (e.g. temperature, density, etc.).
A Debye sheath arises in a plasma because the electrons usually have a temperature of the same order of magnitude as, or greater than, that of the ions and are much lighter. Consequently, they are faster than the ions by at least a factor of formula_0. At the interface to a material surface, therefore, the electrons will fly out of the plasma, charging the surface negative relative to the bulk plasma. Due to Debye shielding, the scale length of the transition region will be the Debye length formula_1. As the potential increases, more and more electrons are reflected by the sheath potential. An equilibrium is finally reached when the potential difference is a few times the electron temperature.
The Debye sheath is the transition from a plasma to a solid surface. Similar physics is involved between two plasma regions that have different characteristics; the transition between these regions is known as a double layer, and features one positive, and one negative layer.
Description.
Sheaths were first described by American physicist Irving Langmuir. In 1923 he wrote:
"Electrons are repelled from the negative electrode while positive ions are drawn towards it. Around each negative electrode there is thus a "sheath" of definite thickness containing only positive ions and neutral atoms. [..] Electrons are reflected from the outside surface of the sheath while all "positive" ions which reach the sheath are attracted to the electrode. [..] it follows directly that no change occurs in the positive ion current reaching the electrode. The electrode is in fact perfectly screened from the discharge by the positive ion sheath, and its potential cannot influence the phenomena occurring in the arc, nor the current flowing to the electrode."
Langmuir and co-author Albert W. Hull further described a sheath formed in a thermionic valve:
"Figure 1 shows graphically the condition that exists in such a tube containing mercury vapor. The space between filament and plate is filled with a mixture of electrons and positive ions, in nearly equal numbers, to which has been given the name "plasma". A wire immersed in the plasma, at zero potential with respect to it, will absorb every ion and electron that strikes it. Since the electrons move about 600 times as fast as the ions, 600 times as many electrons will strike the wire as ions. If the wire is insulated it must assume such a negative potential that it receives equal numbers of electrons and ions, that is, such a potential that it repels all but 1 in 600 of the electrons headed for it."
"Suppose that this wire, which we may take to be part of a grid, is made still more negative with a view to controlling the current through the tube. It will now repel all the electrons headed for it, but will receive all the positive ions that fly toward it. There will thus be a region around the wire which contains positive ions and no electrons, as shown diagrammatically in Fig. 1. The ions are accelerated as they approach the negative wire, and there will exist a potential gradient in this sheath, as we may call it, of positive ions, such that the potential is less and less negative as we recede from the wire, and at a certain distance is equal to the potential of the plasma. This distance we define as the boundary of the sheath. Beyond this distance there is no effect due to the potential of the wire."
Mathematical treatment.
The planar sheath equation.
The quantitative physics of the Debye sheath is determined by four phenomena:
Energy conservation of the ions: If we assume for simplicity cold ions of mass formula_2 entering the sheath with a velocity formula_3, having charge opposite to the electron, conservation of energy in the sheath potential requires
formula_4,
where formula_5 is the charge of the electron taken positively, i.e. formula_6 x formula_7 formula_8.
Ion continuity: In the steady state, the ions do not build up anywhere, so the flux is everywhere the same:
formula_9.
Boltzmann relation for the electrons: Since most of the electrons are reflected, their density is given by
formula_10.
Poisson's equation: The curvature of the electrostatic potential is related to the net charge density as follows:
formula_11.
Combining these equations and writing them in terms of the dimensionless potential, position, and ion speed,
formula_12
formula_13
formula_14
we arrive at the sheath equation:
formula_15.
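The sheath equation generally has to be integrated numerically. The following Python sketch is an editorial illustration using SciPy; since the boundary conditions formula_19, formula_20 also admit the trivial solution, a small assumed seed value of the electric field at the sheath edge is used to start the integration, and the Mach number is an arbitrary choice above the Bohm limit.

    # Illustrative sketch: integrate chi'' = (1 + 2*chi/M**2)**(-1/2) - exp(-chi).
    import numpy as np
    from scipy.integrate import solve_ivp

    M = 1.5        # assumed Mach number at the sheath edge (> 1, Bohm criterion)
    seed = 1e-3    # small assumed chi'(0); chi(0) = chi'(0) = 0 gives the trivial solution

    def rhs(xi, y):
        chi, dchi = y
        return [dchi, (1.0 + 2.0 * chi / M**2) ** -0.5 - np.exp(-chi)]

    sol = solve_ivp(rhs, (0.0, 40.0), [0.0, seed], max_step=0.1)
    print(sol.y[0][-1])    # normalized potential chi at xi = 40 Debye lengths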
The Bohm sheath criterion.
The sheath equation can be integrated once by multiplying by formula_16:
formula_17
At the sheath edge (formula_18), we can define the potential to be zero (formula_19) and assume that the electric field is also zero (formula_20). With these boundary conditions, the integrations yield
formula_21
This is easily rewritten as an integral in closed form, although one that can only be solved numerically. Nevertheless, an important piece of information can be derived analytically. Since the left-hand-side is a square, the right-hand-side must also be non-negative for every value of formula_22, in particular for small values. Looking at the Taylor expansion around formula_19, we see that the first term that does not vanish is the quadratic one, so that we can require
formula_23,
or
formula_24,
or
formula_25.
This inequality is known as the Bohm sheath criterion after its discoverer, David Bohm. If the ions are entering the sheath too slowly, the sheath potential will "eat" its way into the plasma to accelerate them. Ultimately a so-called pre-sheath will develop with a potential drop on the order of formula_26 and a scale determined by the physics of the ion source (often the same as the dimensions of the plasma). Normally the Bohm criterion will hold with equality, but there are some situations where the ions enter the sheath with supersonic speed.
The Child–Langmuir law.
Although the sheath equation must generally be integrated numerically, we can find an approximate solution analytically by neglecting the formula_27 term. This amounts to neglecting the electron density in the sheath, or only analyzing that part of the sheath where there are no electrons. For a "floating" surface, i.e. one that draws no net current from the plasma, this is a useful if rough approximation. For a surface biased strongly negative so that it draws the ion saturation current, the approximation is very good. It is customary, although not strictly necessary, to further simplify the equation by assuming that formula_28 is much larger than unity. Then the sheath equation takes on the simple form
formula_29.
As before, we multiply by formula_16 and integrate to obtain
formula_30,
or
formula_31.
This is easily integrated over ξ to yield
formula_32,
where formula_33 is the (normalized) potential at the wall (relative to the sheath edge), and "d" is the thickness of the sheath. Changing back to the variables formula_3 and formula_34 and noting that the ion current into the wall is formula_35, we have
formula_36.
This equation is known as Child's law, after Clement D. Child (1868–1933), who first published it in 1911, or as the Child–Langmuir law, honoring as well Irving Langmuir, who discovered it independently and published in 1913. It was first used to give the space-charge-limited current in a vacuum diode with electrode spacing "d". It can also be inverted to give the thickness of the Debye sheath as a function of the voltage drop by setting formula_37:
formula_38.
In recent years, the Child-Langmuir (CL) law has been revised, as reported in two review papers.
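In the dimensionless variables used above, the intermediate relation formula_32 can be solved directly for the sheath thickness in units of the Debye length. The short Python sketch below is an editorial illustration; the wall potential of 100 electron temperatures is an arbitrary assumption.

    # Illustrative sketch: sheath thickness d (in Debye lengths) from
    #   (4/3) * chi_w**(3/4) = 2**(3/4) * M**(1/2) * d.
    def sheath_thickness(chi_w, mach=1.0):
        return (4.0 / 3.0) * chi_w ** 0.75 / (2.0 ** 0.75 * mach ** 0.5)

    # Assumed example: a wall biased 100 electron temperatures below the sheath
    # edge, with ions entering at the Bohm speed (M = 1).
    print(sheath_thickness(100.0))   # roughly 25 Debye lengths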
|
[
{
"math_id": 0,
"text": "\\sqrt{m_\\mathrm{i}/m_\\mathrm{e}}"
},
{
"math_id": 1,
"text": "\\lambda_\\mathrm{D}"
},
{
"math_id": 2,
"text": "m_\\mathrm{i}"
},
{
"math_id": 3,
"text": "u_0"
},
{
"math_id": 4,
"text": "\\frac{1}{2}m_\\mathrm{i}\\,u(x)^2 = \\frac{1}{2}m_\\mathrm{i}\\,u_0^2 - e\\,\\varphi(x)"
},
{
"math_id": 5,
"text": "e"
},
{
"math_id": 6,
"text": "e=1.602"
},
{
"math_id": 7,
"text": "10^{-19}"
},
{
"math_id": 8,
"text": "\\mathrm{C}"
},
{
"math_id": 9,
"text": "n_0\\,u_0 = n_\\mathrm{i}(x)\\,u(x)"
},
{
"math_id": 10,
"text": "n_\\mathrm{e}(x) = n_0 \\exp\\Big(\\frac{e\\,\\varphi(x)}{k_\\mathrm{B}T_\\mathrm{e}}\\Big)"
},
{
"math_id": 11,
"text": "\\frac{d^2\\varphi(x)}{dx^2} = \\frac{e (n_\\mathrm{e}(x)-n_\\mathrm{i}(x))}{\\epsilon_0} "
},
{
"math_id": 12,
"text": "\\chi(\\xi) = -\\frac{e\\varphi(\\xi)}{k_\\mathrm{B}T_\\mathrm{e}}"
},
{
"math_id": 13,
"text": "\\xi = \\frac{x}{\\lambda_\\mathrm{D}}"
},
{
"math_id": 14,
"text": "\\mathfrak{M} = \\frac{u_\\mathrm{o}}{(k_\\mathrm{B}T_\\mathrm{e}/m_\\mathrm{i})^{1/2}}"
},
{
"math_id": 15,
"text": "\\chi'' = \\left( 1 + \\frac{2\\chi}{\\mathfrak{M}^2} \\right)^{-1/2} - e^{-\\chi}"
},
{
"math_id": 16,
"text": "\\chi'"
},
{
"math_id": 17,
"text": "\\int_0^\\xi \\chi' \\chi''\\,d\\xi_1 = \n\\int_0^\\xi \\left( 1 + \\frac{2\\chi}{\\mathfrak{M}^2} \\right)^{-1/2} \\chi' \\,d\\xi_1 - \n\\int_0^\\xi e^{-\\chi} \\chi'\\,d\\xi_1 \n"
},
{
"math_id": 18,
"text": "\\xi = 0"
},
{
"math_id": 19,
"text": "\\chi = 0"
},
{
"math_id": 20,
"text": "\\chi'=0"
},
{
"math_id": 21,
"text": "\\frac{1}{2}\\chi'^2 = \\mathfrak{M}^2 \\left[ \\left( 1 + \\frac{2\\chi}{\\mathfrak{M}^2} \\right)^{1/2} - 1 \\right] + e^{-\\chi} - 1"
},
{
"math_id": 22,
"text": "\\chi"
},
{
"math_id": 23,
"text": "\\frac{1}{2}\\chi^2\\left( -\\frac{1}{\\mathfrak{M}^2} + 1 \\right) \\ge 0"
},
{
"math_id": 24,
"text": "\\mathfrak{M}^2 \\ge 1"
},
{
"math_id": 25,
"text": "u_0 \\ge (k_\\mathrm{B}T_\\mathrm{e}/m_\\mathrm{i})^{1/2}"
},
{
"math_id": 26,
"text": "(k_\\mathrm{B}T_\\mathrm{e}/2e)"
},
{
"math_id": 27,
"text": "e^{-\\chi}"
},
{
"math_id": 28,
"text": "2\\chi/\\mathfrak{M}^2"
},
{
"math_id": 29,
"text": "\\chi'' = \\frac{\\mathfrak{M}}{(2\\chi)^{1/2}}"
},
{
"math_id": 30,
"text": "\\frac{1}{2}\\chi'^2 = \\mathfrak{M} (2\\chi)^{1/2}"
},
{
"math_id": 31,
"text": "\\chi^{-1/4}\\chi' = 2^{3/4} \\mathfrak{M}^{1/2}"
},
{
"math_id": 32,
"text": "\\frac{4}{3}\\chi_\\mathrm{w}^{3/4} = 2^{3/4} \\mathfrak{M}^{1/2} d"
},
{
"math_id": 33,
"text": "\\chi_\\mathrm{w}"
},
{
"math_id": 34,
"text": "\\varphi"
},
{
"math_id": 35,
"text": "J=e\\,n_0\\,u_0"
},
{
"math_id": 36,
"text": "J = \\frac{4}{9} \\left(\\frac{2e}{m_i}\\right)^{1/2} \\frac{|\\varphi_w|^{3/2}}{4\\pi d^2}"
},
{
"math_id": 37,
"text": "J=j_\\mathrm{ion}^\\mathrm{sat}"
},
{
"math_id": 38,
"text": "\nd = \\frac{2}{3} \\left(\\frac{2e}{m_\\mathrm{i}}\\right)^{1/4} \\frac{|\\varphi_\\mathrm{w}|^{3/4}}{2\\sqrt{\\pi j_\\mathrm{ion}^\\mathrm{sat}}}"
}
] |
https://en.wikipedia.org/wiki?curid=1441111
|
14411227
|
Critical state soil mechanics
|
Critical state soil mechanics is the area of soil mechanics that encompasses the conceptual models representing the mechanical behavior of saturated remoulded soils based on the "critical state" concept. At the "critical state", the relationship between the forces applied in the soil (stress) and the resulting deformation (strain) becomes constant. The soil will continue to deform, but the stress will no longer increase.
Forces are applied to soils in a number of ways, for example when they are loaded by foundations, or unloaded by excavations. The critical state concept is used to predict the behaviour of soils under various loading conditions, and geotechnical engineers use the critical state model to estimate how soil will behave under different stresses.
The basic concept is that soil and other granular materials, if continuously distorted until they flow as a frictional fluid, will come into a well-defined critical state. In practical terms, the critical state can be considered a failure condition for the soil. It is the point at which the soil cannot sustain any additional load without undergoing continuous deformation, in a manner similar to the behaviour of fluids.
Certain properties of the soil, like porosity, shear strength, and volume, reach characteristic values. These properties are intrinsic to the type of soil and its initial conditions.
Formulation.
The Critical State concept is an idealization of the observed behavior of saturated remoulded clays in triaxial compression tests, and it is assumed to apply to undisturbed soils. It states that soils and other granular materials, if continuously distorted (sheared) until they flow as a frictional fluid, will come into a well-defined critical state. At the onset of the critical state, shear distortions formula_0 occur without any further changes in mean effective stress formula_1, deviatoric stress formula_2 (or yield stress, formula_3, in uniaxial tension according to the von Mises yielding criterion), or specific volume formula_4:
formula_5
where,
formula_6
formula_7
formula_8
However, for triaxial conditions formula_9. Thus,
formula_10
formula_11
All critical states, for a given soil, form a unique line called the "Critical State Line" ("CSL") defined by the following equations in the space formula_12:
formula_13
formula_14
where formula_15, formula_16, and formula_17 are soil constants. The first equation determines the magnitude of the deviatoric stress formula_2 needed to keep the soil flowing continuously as the product of a frictional constant formula_15 (capital formula_18) and the mean effective stress formula_1. The second equation states that the specific volume formula_4 occupied by unit volume of flowing particles will decrease as the logarithm of the mean effective stress increases.
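For a given set of soil constants, the two critical state line equations are straightforward to evaluate. The Python sketch below is an editorial illustration; the values chosen for formula_15, formula_16 and formula_17 are assumed, not measured, constants.

    # Illustrative sketch: evaluating the critical state line,
    #   q = M * p'   and   v = Gamma - lambda * ln(p').
    import math

    M_const = 0.95    # assumed frictional constant M
    Gamma = 3.0       # assumed specific volume on the CSL at p' = 1 kPa
    lam = 0.2         # assumed slope of the CSL in v - ln(p') space

    def critical_state(p_eff):
        q = M_const * p_eff
        v = Gamma - lam * math.log(p_eff)
        return q, v

    print(critical_state(100.0))   # deviatoric stress (kPa) and specific volume at p' = 100 kPa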
History.
In an attempt to advance soil testing techniques, Kenneth Harry Roscoe of Cambridge University developed a simple shear apparatus in the late 1940s and early 1950s, in which his successive students attempted to study the changes in conditions in the shear zone, both in sand and in clay soils. In 1958, a study of the yielding of soil, based on some Cambridge data from the simple shear apparatus tests and on much more extensive data from triaxial tests carried out at Imperial College London under Professor Sir Alec Skempton, led to the publication of the critical state concept.
Roscoe obtained his undergraduate degree in mechanical engineering, and his experiences trying to create tunnels to escape when held as a prisoner of war by the Nazis during WWII introduced him to soil mechanics. Subsequent to this 1958 paper, concepts of plasticity were introduced by Schofield and published in his textbook. Schofield was taught at Cambridge by Prof. John Baker, a structural engineer who was a strong believer in designing structures that would fail "plastically". Prof. Baker's theories strongly influenced Schofield's thinking on soil shear. Prof. Baker's views were developed from his pre-war work on steel structures and further informed by his wartime experiences assessing blast-damaged structures and by the design of the "Morrison Shelter", an air-raid shelter which could be located indoors.
Original Cam-Clay Model.
The name "Cam-Clay" asserts that the plastic volume change typical of clay soil behaviour is due to the mechanical stability of an aggregate of small, rough, frictional, interlocking hard particles.
The Original Cam-Clay model is based on the assumption that the soil is isotropic, elasto-plastic, deforms as a continuum, and it is not affected by creep. The yield surface of the Cam clay model is described by the equation
formula_19
where formula_20 is the equivalent stress, formula_21 is the pressure, formula_22 is the pre-consolidation pressure, and formula_23 is the slope of the critical state line in formula_24 space.
The pre-consolidation pressure evolves as the void ratio (formula_25) (and therefore the specific volume formula_26) of the soil changes. A commonly used relation is
formula_27
where formula_28 is the virgin compression index of the soil. A limitation of this model is the possibility of negative specific volumes at realistic values of stress.
An improvement to the above model for formula_22 is the bilogarithmic form
formula_29
where formula_30 is the appropriate compressibility index of the soil.
Modified Cam-Clay Model.
Professor John Burland of Imperial College who worked with Professor Roscoe is credited with the development of the modified version of the original model. The difference between the Cam Clay and the Modified Cam Clay (MCC) is that the yield surface of the MCC is described by an ellipse and therefore the plastic strain increment vector (which is perpendicular to the yield surface) for the largest value of the mean effective stress is horizontal, and hence no incremental deviatoric plastic strain takes place for a change in mean effective stress (for purely hydrostatic states of stress). This is very convenient for constitutive modelling in numerical analysis, especially finite element analysis, where numerical stability issues are important (as a curve needs to be continuous in order to be differentiable).
The yield surface of the modified Cam-clay model has the form
formula_31
where formula_21 is the pressure, formula_20 is the equivalent stress, formula_22 is the pre-consolidation pressure, and formula_23 is the slope of the critical state line.
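A short Python sketch (an editorial illustration with assumed parameter values) evaluates the modified Cam-Clay yield function and classifies trial stress states as lying inside, on, or outside the elliptical yield surface.

    # Illustrative sketch: modified Cam-Clay yield function,
    #   f(p, q, p_c) = (q / M)**2 + p * (p - p_c).
    def mcc_yield(p, q, p_c, M=0.95):
        return (q / M) ** 2 + p * (p - p_c)

    # Assumed values: pre-consolidation pressure 200 kPa, M = 0.95.
    for p, q in [(100.0, 50.0), (100.0, 95.0), (150.0, 120.0)]:
        f = mcc_yield(p, q, 200.0)
        if f < -1e-6:
            state = "elastic (inside the yield surface)"
        elif f > 1e-6:
            state = "outside the yield surface"
        else:
            state = "on the yield surface (yielding)"
        print(p, q, state)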
Critique.
The basic concepts of the elasto-plastic approach were first proposed by the two mathematicians Daniel C. Drucker and William Prager (Drucker and Prager, 1952) in a short eight-page note. In their note, Drucker and Prager also demonstrated how to use their approach to calculate the critical height of a vertical bank using either a plane or a log-spiral failure surface. Their yield criterion is today called the Drucker-Prager yield criterion. Their approach was subsequently extended by Kenneth H. Roscoe and others in the soil mechanics department of Cambridge University.
Critical state and elasto-plastic soil mechanics have been the subject of criticism ever since they were first introduced. The key factor driving the criticism is the implicit assumption that soils are made of isotropic point particles. Real soils are composed of finite-size particles with anisotropic properties that strongly determine observed behavior. Consequently, models based on a metals-based theory of plasticity are not able to model behavior of soils that is a result of anisotropic particle properties, one example of which is the drop in shear strength after peak strength, i.e., strain-softening behavior. Because of this, elasto-plastic soil models are only able to model "simple stress-strain curves" such as those from isotropic normally or lightly overconsolidated "fat" clays, i.e., CL-ML type soils constituted of very fine-grained particles.
Also, in general, volume change is governed by considerations from elasticity, and this assumption, being largely untrue for real soils, results in very poor matches of these models to volume changes or pore pressure changes. Further, elasto-plastic models describe the entire element as a whole and not specifically the conditions directly on the failure plane; as a consequence, they do not model the stress-strain curve post failure, particularly for soils that exhibit strain-softening post peak. Finally, most models separate out the effects of hydrostatic stress and shear stress, with each assumed to cause only volume change and shear change respectively. In reality, soil structure, being analogous to a "house of cards", shows both shear deformations on the application of pure compression, and volume changes on the application of pure shear.
Additional criticisms are that the theory is "only descriptive," i.e., only describes known behavior and lacking the ability to either explain or predict standard soil behaviors such as, why the void ratio in a one dimensional compression test varies linearly with the logarithm of the vertical effective stress. This behavior, critical state soil mechanics simply assumes as a given.
For these reasons, critical-state and elasto-plastic soil mechanics have been subject to charges of scholasticism; the tests used to demonstrate their validity are usually "confirmation tests" in which only simple stress-strain curves are shown to be modeled satisfactorily. The critical state and the concepts surrounding it have a long history of being called "scholastic", with Sir Alec Skempton, the "founding father" of British soil mechanics, attributing the scholastic nature of CSSM to Roscoe, of whom he said: "…he did little field work and was, I believe, never involved in a practical engineering job." In the 1960s and 1970s, Prof. Alan Bishop at Imperial College used to routinely demonstrate the inability of these theories to match the stress-strain curves of real soils. Joseph (2013) has suggested that critical-state and elasto-plastic soil mechanics meet the criterion of a "degenerate research program", a concept proposed by the philosopher of science Imre Lakatos, for theories where excuses are used to justify an inability of theory to match empirical data.
Response.
The claims that critical state soil mechanics is only descriptive and meets the criterion of a degenerate research program have not been settled. Andrew Jenike used a logarithmic-logarithmic relation to describe the compression test in his theory of critical state and admitted decreases in stress during converging flow and increases in stress during diverging flow. Chris Szalwinski has defined a critical state as a multi-phase state at which the specific volume is the same in both solid and fluid phases. Under his definition the linear-logarithmic relation of the original theory and Jenike's logarithmic-logarithmic relation are special cases of a more general physical phenomenon.
Stress tensor formulations.
Plane stress.
formula_32
Drained conditions.
Plane Strain State of Stress.
"Separation of Plane Strain Stress State Matrix into Distortional and Volumetric Parts":
formula_33
formula_34
After formula_35 loading
formula_36
Drained state of stress.
formula_37
formula_38
formula_39
formula_40
Drained Plane Strain State.
formula_41;formula_42
formula_43; formula_44
By matrix:
formula_45;
Undrained conditions.
Undrained state of stress.
formula_46
formula_47
formula_48
formula_49
formula_50
formula_48
formula_49
formula_51
formula_52
Undrained state of Plane Strain State.
formula_53
formula_54
formula_55
formula_56
Triaxial State of Stress.
"Separation Matrix into Distortional and Volumetric Parts":
formula_57
Undrained state of Triaxial stress.
formula_58formula_59
formula_60formula_61
formula_62
formula_63
formula_64
formula_65
Drained state of Triaxial stress.
Only volumetric in case of drainage:
formula_58formula_59
formula_60formula_61
Example solution in matrix form.
"The following data were obtained from a conventional triaxial compression test on a saturated (B=1), normally consolidated simple clay (Ladd, 1964). The cell pressure was held constant at 10 kPa, while the axial stress was increased to failure (axial compression test).".
Initial phase: formula_66
Step one:
formula_67
formula_68
Steps 2–9 proceed in the same way as step one.
Step seven: formula_69
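The mean effective stress and deviatoric stress along such a stress path follow directly from the triaxial definitions given earlier. The Python sketch below is an editorial illustration; the cell pressure of 10 kPa is taken from the example above, while the sequence of axial stresses is hypothetical.

    # Illustrative sketch: p' and q under triaxial conditions,
    #   p' = (sigma_1 + 2 * sigma_3) / 3,   q = sigma_1 - sigma_3.
    def triaxial_invariants(sigma_axial, sigma_cell):
        p_eff = (sigma_axial + 2.0 * sigma_cell) / 3.0
        q = sigma_axial - sigma_cell
        return p_eff, q

    cell_pressure = 10.0                           # kPa, held constant as in the example
    for sigma_axial in [10.0, 11.0, 12.0, 14.0]:   # hypothetical axial stresses, kPa
        print(triaxial_invariants(sigma_axial, cell_pressure))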
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ \\varepsilon_s"
},
{
"math_id": 1,
"text": "\\ p'"
},
{
"math_id": 2,
"text": "\\ q"
},
{
"math_id": 3,
"text": "\\ \\sigma_y"
},
{
"math_id": 4,
"text": "\\ \\nu"
},
{
"math_id": 5,
"text": "\\ \\frac{\\partial p'}{\\partial \\varepsilon_s}=\\frac{\\partial q}{\\partial \\varepsilon_s}=\\frac{\\partial \\nu}{\\partial \\varepsilon_s}=0"
},
{
"math_id": 6,
"text": "\\ \\nu=1+e"
},
{
"math_id": 7,
"text": "\\ p'=\\frac{1}{3}(\\sigma_1'+\\sigma_2'+\\sigma_3')"
},
{
"math_id": 8,
"text": "\\ q= \\sqrt{\\frac{(\\sigma_1' - \\sigma_2')^2 + (\\sigma_2' - \\sigma_3')^2 + (\\sigma_1' - \\sigma_3')^2}{2}}"
},
{
"math_id": 9,
"text": "\\ \\sigma_2'=\\sigma_3'"
},
{
"math_id": 10,
"text": "\\ p'=\\frac{1}{3}(\\sigma_1'+2\\sigma_3')"
},
{
"math_id": 11,
"text": "\\ q=(\\sigma_1'-\\sigma_3')"
},
{
"math_id": 12,
"text": "\\ (p', q, v)"
},
{
"math_id": 13,
"text": "\\ q=Mp'"
},
{
"math_id": 14,
"text": "\\ \\nu=\\Gamma-\\lambda \\ln(p')"
},
{
"math_id": 15,
"text": "\\ M"
},
{
"math_id": 16,
"text": "\\ \\Gamma"
},
{
"math_id": 17,
"text": "\\ \\lambda"
},
{
"math_id": 18,
"text": "\\ \\mu"
},
{
"math_id": 19,
"text": "\n f(p,q,p_c) = q + M\\,p\\,\\ln\\left[\\frac{p}{p_c}\\right] \\le 0\n "
},
{
"math_id": 20,
"text": "q"
},
{
"math_id": 21,
"text": "p"
},
{
"math_id": 22,
"text": "p_c"
},
{
"math_id": 23,
"text": "M"
},
{
"math_id": 24,
"text": "p-q"
},
{
"math_id": 25,
"text": "e"
},
{
"math_id": 26,
"text": "v"
},
{
"math_id": 27,
"text": "\n e = e_0 - \\lambda \\ln\\left[\\frac{p_c}{p_{c0}}\\right]\n "
},
{
"math_id": 28,
"text": "\\lambda"
},
{
"math_id": 29,
"text": "\n \\ln\\left[\\frac{1+e}{1+e_0}\\right] = \\ln\\left[\\frac{v}{v_0}\\right] = - \\tilde{\\lambda} \\ln\\left[\\frac{p_c}{p_{c0}}\\right]\n "
},
{
"math_id": 30,
"text": "\\tilde{\\lambda}"
},
{
"math_id": 31,
"text": "\n f(p,q,p_c) = \\left[\\frac{q}{M}\\right]^2 + p\\,(p - p_c) \\le 0\n "
},
{
"math_id": 32,
"text": "\\sigma=\\left[\\begin{matrix}\\sigma_{xx}&0&\\tau_{xz}\\\\0&0&0\\\\\\tau_{zx}&0&\\sigma_{zz}\\\\\\end{matrix}\\right] =\\left[\\begin{matrix}\\sigma_{xx}&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}\\\\\\end{matrix}\\right]"
},
{
"math_id": 33,
"text": "\\sigma=\\left[\\begin{matrix}\\sigma_{xx}&0&\\tau_{xz}\\\\0&0&0\\\\\\tau_{zx}&0&\\sigma_{zz}\\\\\\end{matrix}\\right] =\\left[\\begin{matrix}\\sigma_{xx}&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}\\\\\\end{matrix}\\right]=\\left[\\begin{matrix}\\sigma_{xx}-\\sigma_{hydrostatic}&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\sigma_{hydrostatic}&0\\\\0&\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]"
},
{
"math_id": 34,
"text": "\\sigma_{hydrostatic}=p_{mean}=\\frac{\\sigma_{xx}+\\sigma_{zz}}{2}"
},
{
"math_id": 35,
"text": "\\delta\\sigma_z"
},
{
"math_id": 36,
"text": "\\left[\\begin{matrix}\\sigma_{xx}-\\sigma_{hydrostatic}&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\sigma_{hydrostatic}&0\\\\0&\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]\n+\\left[\\begin{matrix}0&0\\\\0&\\sigma_{z}\\ \\\\\\end{matrix}\\right]"
},
{
"math_id": 37,
"text": "\\left[\\begin{matrix}\\sigma_{xx}-\\sigma_{hydrostatic}&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\sigma_{hydrostatic}&0\\\\0&\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]"
},
{
"math_id": 38,
"text": "+\\left[\\begin{matrix}0&0\\\\0&\\mathbf{\\delta z}\\ \\\\\\end{matrix}\\right]=\\left[\\begin{matrix}\\sigma_{xx}-\\sigma_{hydrostatic}&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\sigma_{hydrostatic}&0\\\\0&\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]"
},
{
"math_id": 39,
"text": "+\\left[\\begin{matrix}\\frac{-{\\delta p}_w}{2}\\ &0\\\\0&\\sigma_z-\\frac{{\\delta p}_w}{2}\\ \\\\\\end{matrix}\\right]"
},
{
"math_id": 40,
"text": "+\\left[\\begin{matrix}\\frac{{\\delta p}_w}{2}&0\\\\0&\\frac{{\\delta p}_w}{2}\\ \\\\\\end{matrix}\\right]"
},
{
"math_id": 41,
"text": "\\varepsilon_z=\\frac{\\Delta h}{h_0}"
},
{
"math_id": 42,
"text": "\\ \\varepsilon_x=\\varepsilon_y=0"
},
{
"math_id": 43,
"text": "\\varepsilon_z=\\frac{1}{E}(\\sigma_z-\\nu)(\\sigma_x+\\sigma_z)=\\frac{1}{E}\\sigma_z(1-2\\nu\\varepsilon)"
},
{
"math_id": 44,
"text": "\\varepsilon=\\frac{\\nu}{1-\\nu};\\ \\nu=\\frac{\\varepsilon}{1+\\varepsilon}"
},
{
"math_id": 45,
"text": "\\varepsilon_z=\\frac{1}{E}(1-2\\nu\\varepsilon)\\ \\left[\\left[\\begin{matrix}\\sigma_{xx}-\\rho_w&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\rho_w\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\rho_w&0\\\\0&\\rho_w\\\\\\end{matrix}\\right]\\right]"
},
{
"math_id": 46,
"text": "\\left[\\begin{matrix}\\sigma_{xx}-\\rho_w&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\rho_w\\\\\\end{matrix}\\right]+"
},
{
"math_id": 47,
"text": "\\left[\\begin{matrix}\\rho_w&0\\\\0&\\rho_w\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}0&0\\\\0&\\delta\\sigma_z\\ \\\\\\end{matrix}\\right]="
},
{
"math_id": 48,
"text": "=\\left[\\begin{matrix}\\sigma_{xx}-\\rho_w&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\rho_w\\\\\\end{matrix}\\right]+"
},
{
"math_id": 49,
"text": "\\left[\\begin{matrix}\\rho_w&0\\\\0&\\rho_w\\\\\\end{matrix}\\right]"
},
{
"math_id": 50,
"text": "+\\ \\ \\left[\\begin{matrix}-{p}_w\\ /\\mathbf{2}&0\\\\0&\\sigma_z-{p}_w/\\mathbf{2}\\ \\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\delta p_w/2&0\\\\0&\\delta p_w/\\mathbf{2}\\ \\\\\\end{matrix}\\right]="
},
{
"math_id": 51,
"text": "+\\ \\ \\left[\\begin{matrix}-{p}_w\\ /\\mathbf{2}&0\\\\0&\\sigma_z-{p}_w/\\mathbf{2}\\ \\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\delta p_w/2&0\\\\0&\\delta p_w/\\mathbf{2}\\ \\\\\\end{matrix}\\right]+"
},
{
"math_id": 52,
"text": "\\left[\\begin{matrix}0&\\tau_{xz}\\\\{\\tau}_{zx}&0\\\\\\end{matrix}\\right]-\\left[\\begin{matrix}0&{\\delta p}_{w,int}\\\\{\\delta p}_{w,int}&0\\\\\\end{matrix}\\right]"
},
{
"math_id": 53,
"text": "\\varepsilon_z=\\frac{1}{E}\\left(1-2\\nu\\varepsilon\\right)="
},
{
"math_id": 54,
"text": "=\\left[\\left[\\begin{matrix}\\sigma_{xx}-\\rho_w&\\tau_{xz}\\\\\\tau_{zx}&\\sigma_{zz}-\\rho_w\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\rho_w&0\\\\0&\\rho_w\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}0&\\delta \\tau_{xz}\\\\{\\delta \\tau}_{zx}&0\\\\\\end{matrix}\\right]-\\left[\\begin{matrix}0&{\\delta p}_{w,int}\\\\{\\delta p}_{w,int}&0\\\\\\end{matrix}\\right]\\right]="
},
{
"math_id": 55,
"text": "=\\frac{1}{E}\\left(1-2\\nu\\varepsilon\\right)\\left[\\rho_u+\\rho_w+p\\right]"
},
{
"math_id": 56,
"text": "\\rho_u=K_u\\Delta\\varepsilon_z;\\ \\ \\rho_w=\\frac{K_w}{n}\\Delta\\varepsilon_z;\\ \\ \\rho_=K_\\Delta\\varepsilon_z;"
},
{
"math_id": 57,
"text": "\\sigma=\\left[\\begin{matrix}\\sigma_r&0&0\\\\0&\\sigma_r&0\\\\0&0&\\sigma_z\\\\\\end{matrix}\\right]=\\left[\\begin{matrix}\\sigma_r-\\sigma_{hydrostatic}&0&0\\\\0&\\sigma_r-\\sigma_{hydrostatic}&0\\\\0&0&\\sigma_z-\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}\\sigma_{hydrostatic}&0&0\\\\0&\\sigma_{hydrostatic}&0\\\\0&0&\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]"
},
{
"math_id": 58,
"text": "\\left[\\begin{matrix}\\sigma_r-\\sigma_{hydrostatic}&0&0\\\\0&\\sigma_r-\\sigma_{hydrostatic}&0\\\\0&0&\\sigma_z-\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]"
},
{
"math_id": 59,
"text": "+\\left[\\begin{matrix}\\sigma_{hydrostatic}&0&0\\\\0&\\sigma_{hydrostatic}&0\\\\0&0&\\sigma_{hydrostatic}\\\\\\end{matrix}\\right]"
},
{
"math_id": 60,
"text": "+\\left[\\begin{matrix}-\\left(\\frac{r}{2H\\ast3}\\right){p}_w&0&0\\\\0&-\\left(\\frac{r}{2H\\ast3}\\right){(p}_w&0\\\\0&0&(\\sigma_z-{\\left(\\frac{r}{2H\\ast3}\\right)\\ast\\ p}_w\\\\\\end{matrix}\\right]"
},
{
"math_id": 61,
"text": "-\\ \\ \\left[\\begin{matrix}{\\left(\\frac{r}{2H\\ast3}\\right)p}_w&0&0\\\\0&{\\left(\\frac{r}{2H\\ast3}\\right)p}_w&0\\\\0&0&\\left(\\frac{r}{2H\\ast3}\\right)p_w\\\\\\end{matrix}\\right]+"
},
{
"math_id": 62,
"text": "\\left[\\begin{matrix}0&0&{\\delta \\tau_{xz}}\\\\0&0&0\\\\\\delta {\\tau}_{\\delta {zx}}&0&0\\\\\\end{matrix}\\right]"
},
{
"math_id": 63,
"text": "+\\left[\\begin{matrix}{\\delta p_{w,int}}&0&0\\\\0&{\\delta p_{w,int}}&0\\\\0&0&{\\delta p_{w,int}}\\\\\\end{matrix}\\right]+"
},
{
"math_id": 64,
"text": "\\left[\\begin{matrix}{-\\delta p_{w,int}}&0&0\\\\0&{-\\delta p_{w,int}}&0\\\\0&0&{-\\delta p_{w,int}}\\\\\\end{matrix}\\right]+"
},
{
"math_id": 65,
"text": "\\left[\\begin{matrix}0&0&{-\\delta \\tau_{xz}}\\\\0&0&0\\\\-\\delta {\\tau}_{\\delta {zx}}&0&0\\\\\\end{matrix}\\right]"
},
{
"math_id": 66,
"text": "\\sigma=\\left[\\begin{matrix}\\sigma_r&0&0\\\\0&\\sigma_r&0\\\\0&0&\\sigma_z\\\\\\end{matrix}\\right]=\\left[\\begin{matrix}0&0&0\\\\0&10&0\\\\0&0&10\\\\\\end{matrix}\\right]"
},
{
"math_id": 67,
"text": "\\sigma_1=\\left[\\begin{matrix}0&0&0\\\\0&10&0\\\\0&0&10\\\\\\end{matrix}\\right]+\\mathbf{\\sigma}=\\left[\\begin{matrix}0&0&0\\\\0&10&0\\\\0&0&10\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}1&0&0\\\\0&0&3.5\\\\0&-1&0\\\\\\end{matrix}\\right]"
},
{
"math_id": 68,
"text": "\\left[\\begin{matrix}1-1.9&0&0\\\\0&10-1.9&3.5\\\\0&-1\\ &10-1.9\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}1.9&0&0\\\\0&1.9&0\\\\0&0&1.9\\\\\\end{matrix}\\right]"
},
{
"math_id": 69,
"text": "\\sigma_7=\\left[\\begin{matrix}12-4.4\\ \\ \\ &0&0\\\\0&10-4.4&2.9\\\\0&-2\\ &10-4.4\\\\\\end{matrix}\\right]+\\left[\\begin{matrix}4.4&0&0\\\\0&4.4&0\\\\0&0&4.4\\\\\\end{matrix}\\right]"
}
] |
https://en.wikipedia.org/wiki?curid=14411227
|
14411733
|
Madelung equations
|
Hydrodynamic formulation of the Schrödinger equations
In theoretical physics, the Madelung equations, or the equations of quantum hydrodynamics, are Erwin Madelung's equivalent alternative formulation of the Schrödinger equation, written in terms of hydrodynamical variables, similar to the Navier–Stokes equations of fluid dynamics. The derivation of the Madelung equations is similar to the de Broglie–Bohm formulation, which represents the Schrödinger equation as a quantum Hamilton–Jacobi equation.
Equations.
The Madelung equations are quantum Euler equations:
formula_0
where
The circulation of the flow velocity field along any closed path obeys the auxiliary quantization condition formula_4 for all integers n.
Derivation.
This section derives the hydrodynamic formulation of the quantum mechanics of a single particle.
Continuity equation.
Starting from the Schrödinger equation in the position basis
formula_5
where formula_6 is the potential and formula_7 is the wavefunction, both of which are functions of time formula_8 and position formula_9. By direct computation, we have formula_10
Defining the probability density formula_11 and a vector field formula_12, we obtain the continuity equation
formula_13 This continuity equation is the reason that the quantity formula_14 is named the probability current, since it is analogous to the continuity of conserved quantities in classical fluid dynamics, even though, unlike in classical fluid dynamics, there are no underlying particles carrying microscopic parcels of probability around.
Velocity field.
Given the probability current formula_14, we can define the velocity field formula_15:
formula_16
Expressing the wavefunction in polar form formula_17, the velocity field simplifies to
formula_18 The quantization condition formula_4 comes from requiring formula_19 to be single-valued around any closed loop.
Quantum Hamilton-Jacobi equation.
Substituting formula_20 into the Schrödinger equation and dividing by formula_7, we obtain a complex equation. The imaginary part simplifies to the continuity equation. The real part simplifies to the quantum Hamilton-Jacobi equation (HJE): formula_21
where
formula_22
is the Bohm potential.
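As a numerical illustration (an editorial sketch, not part of the original formulation), the Bohm potential of a one-dimensional Gaussian probability density can be computed by finite differences; in units where the reduced Planck constant and the mass equal one, it agrees with the closed form Q(x) = 1/(4σ²) − x²/(8σ⁴), which follows from the definition above.

    # Illustrative sketch (natural units hbar = m = 1): Bohm potential
    # Q = -(1/2) * (d^2 sqrt(rho)/dx^2) / sqrt(rho) for a Gaussian density,
    # computed by central differences and compared with the analytic result.
    import numpy as np

    sigma = 1.0
    x = np.linspace(-3.0, 3.0, 601)
    dx = x[1] - x[0]
    rho = np.exp(-x**2 / (2.0 * sigma**2))      # normalization cancels out of Q
    amp = np.sqrt(rho)

    lap = (np.roll(amp, -1) - 2.0 * amp + np.roll(amp, 1)) / dx**2
    Q_numeric = -0.5 * lap / amp
    Q_exact = 1.0 / (4.0 * sigma**2) - x**2 / (8.0 * sigma**4)

    # Compare away from the edges, where the finite-difference stencil is valid.
    print(np.max(np.abs(Q_numeric[1:-1] - Q_exact[1:-1])))   # small discretization error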
Material derivative.
The total time derivative of the velocity field, also known as the material derivative, is given by
formula_23
Plugging formula_24 into formula_25, we obtain
formula_26
Consequences.
Quantum stress.
The quantum force, which is the negative of the gradient of the quantum potential, can be cast in a form reminiscent of classical fluid dynamics: formula_27
where we define the quantum stress tensor formula_28 Thus, formula_29 which is similar to the classical Cauchy momentum equation.
Information.
The integral energy stored in the quantum pressure tensor is proportional to the Fisher information, which accounts for the quality of measurements. Thus, according to the Cramér–Rao bound, the Heisenberg uncertainty principle is equivalent to a standard inequality for the efficiency of measurements.
The quantum potential formula_30 has an average value formula_31 This can be interpreted as a form of Fisher information.
Suppose we have a statistical estimation problem where the underlying parameter is formula_32, and the measurement is formula_33, where formula_34 is the measurement noise. In that case, the Fisher information matrix of the measurement for estimating the underlying parameter is formula_35 Further, if we assume that the measurement noise is independent of the underlying parameter, that is, formula_36 for some formula_37, then the Fisher information matrix is independent of formula_32 as well: formula_38 or equivalently, formula_39 Plugging both into the equation for formula_40, we have formula_41
Quantum energies.
The thermodynamic definition of the quantum chemical potential
formula_42
follows from the hydrostatic force balance above:
formula_43
According to thermodynamics, at equilibrium the chemical potential is constant everywhere, which corresponds straightforwardly to the stationary Schrödinger equation. Therefore, the eigenvalues of the Schrödinger equation are free energies, which differ from the internal energies of the system. The particle internal energy is calculated as
formula_44
and is related to the local Carl Friedrich von Weizsäcker correction.
In the case of a quantum harmonic oscillator, for instance, one can easily show that the zero-point energy is the value of the oscillator chemical potential, while the oscillator internal energy is zero in the ground state, formula_45. Hence, the zero point energy represents the energy to place a static oscillator in vacuum, which shows again that the vacuum fluctuations are the reason for quantum mechanics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n& \\partial_t \\rho_m + \\nabla\\cdot(\\rho_m \\mathbf v) = 0, \\\\[4pt]\n& \\frac{d \\mathbf v}{dt} = \\partial_t\\mathbf v + \\mathbf v \\cdot \\nabla\\mathbf v = -\\frac{1}{m} \\mathbf{\\nabla}(Q + V),\n\\end{align}"
},
{
"math_id": 1,
"text": "\\mathbf v"
},
{
"math_id": 2,
"text": "\\rho_m = m \\rho = m |\\psi|^2"
},
{
"math_id": 3,
"text": "Q = -\\frac{\\hbar^2}{2m} \\frac{\\nabla^2 \\sqrt{\\rho}}{\\sqrt{\\rho}} = -\\frac{\\hbar^2}{2m} \\frac{\\nabla^2 \\sqrt{\\rho_m}}{\\sqrt{\\rho_m}}"
},
{
"math_id": 4,
"text": "\\Gamma \\doteq \\oint{m\\mathbf{v} \\cdot d\\mathbf{l}} = 2\\pi n\\hbar"
},
{
"math_id": 5,
"text": "\ni \\hbar \\partial_t \\psi = -\\frac{\\hbar^2}{2m }\\nabla^2 \\psi + V\\psi\n"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "\\psi"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\mathbf{q}"
},
{
"math_id": 10,
"text": "\n\\partial_t(\\psi\\psi^*) = \\frac{\\hbar}{2mi} (\\psi \\nabla^2 \\psi^* - \\psi^* \\nabla^2 \\psi).\n"
},
{
"math_id": 11,
"text": "\\rho = \\psi\\psi^*"
},
{
"math_id": 12,
"text": "\\mathbf{J} = \\frac{\\hbar}{2mi}(\\psi^* \\nabla \\psi - \\psi \\nabla \\psi^*)"
},
{
"math_id": 13,
"text": "\n\\partial_t \\rho + \\nabla \\cdot \\mathbf{J} = 0.\n"
},
{
"math_id": 14,
"text": "\\mathbf{J}"
},
{
"math_id": 15,
"text": "\\mathbf{v} := \\mathbf{J}/\\rho"
},
{
"math_id": 16,
"text": "\n\\mathbf{v} = \\frac{\\hbar}{2mi}( \\nabla \\ln \\psi - \\nabla \\ln \\psi^*).\n"
},
{
"math_id": 17,
"text": "\\psi = \\sqrt{\\rho}e^{iS/\\hbar}"
},
{
"math_id": 18,
"text": "\n\\mathbf{v} = \\frac{1}{m} \\nabla S.\n"
},
{
"math_id": 19,
"text": "e^{iS/\\hbar}"
},
{
"math_id": 20,
"text": "\\psi = \\sqrt{\\rho}e^{iS/\\hbar} = e^{iS/\\hbar + \\ln \\sqrt\\rho}"
},
{
"math_id": 21,
"text": "\n-\\partial_t S = \\frac{\\|m\\mathbf{v} \\|^2}{2m} + V + Q\n"
},
{
"math_id": 22,
"text": "\nQ := - \\frac{\\hbar^2}{2m} \\frac{\\nabla^2 \\sqrt \\rho}{\\sqrt \\rho}\n"
},
{
"math_id": 23,
"text": "\nD_t \\mathbf{v} := \\partial_t \\mathbf{v} + \\mathbf{v} \\cdot \\nabla \\mathbf{v}.\n"
},
{
"math_id": 24,
"text": "\\mathbf{v} = \\nabla S /m"
},
{
"math_id": 25,
"text": "\\nabla(\\text{quantum HJE})"
},
{
"math_id": 26,
"text": "\nD_t \\mathbf{v} = -\\frac{1}{m} \\nabla(V + Q).\n"
},
{
"math_id": 27,
"text": "\\mathbf{F_Q} = -\\mathbf{\\nabla} Q = \\frac{m}{\\rho_m} \\nabla \\cdot \\mathbf{\\sigma_Q} "
},
{
"math_id": 28,
"text": "\\mathbf{\\sigma_Q} = (\\hbar/2m)^2 \\rho_m \\nabla \\otimes \\nabla \\ln \\rho_m."
},
{
"math_id": 29,
"text": "D_t \\mathbf{v} = -\\frac{\\nabla V}{m} + \\frac{1}{\\rho_m} \\nabla \\cdot \\mathbf{\\sigma_Q}"
},
{
"math_id": 30,
"text": "Q"
},
{
"math_id": 31,
"text": "\n \\langle Q\\rangle = \\int \\rho Q = - \\frac{\\hbar^2}{2m} \\int \\rho ( \\nabla^2 \\ln \\sqrt \\rho + \\|\\nabla \\ln \\sqrt \\rho\\|^2)\n "
},
{
"math_id": 32,
"text": "\\theta"
},
{
"math_id": 33,
"text": "y = \\theta + q"
},
{
"math_id": 34,
"text": "q"
},
{
"math_id": 35,
"text": "\n I(\\theta)_{i, j} = \\mathbb{E}_{y \\sim \\rho(\\cdot | \\theta )}[ \\partial_{\\theta_i} \\ln \\rho(y| \\theta ) \\partial_{\\theta_j} \\ln \\rho(y| \\theta ) ]\n "
},
{
"math_id": 36,
"text": "\\rho(y | \\theta) = \\rho(y - \\theta)"
},
{
"math_id": 37,
"text": "\\rho"
},
{
"math_id": 38,
"text": "\n I_{ij} = E_q [ \\partial_{q_i} \\ln \\rho(q) \\partial_{q_j} \\ln \\rho(q)]\n "
},
{
"math_id": 39,
"text": "\n -E_q [ \\rho(q) \\partial_{q_j q_j} \\ln \\rho(q)]\n "
},
{
"math_id": 40,
"text": "\\langle Q\\rangle"
},
{
"math_id": 41,
"text": "\n \\langle Q\\rangle = \\frac{\\hbar^2}{8m} tr(I)\n "
},
{
"math_id": 42,
"text": "\\mu = Q + V = \\frac{1}{\\sqrt{\\rho_m}} \\widehat H \\sqrt{\\rho_m}"
},
{
"math_id": 43,
"text": "\\nabla \\mu = \\frac{m}{\\rho_m} \\nabla \\cdot \\mathbf p_Q + \\nabla V."
},
{
"math_id": 44,
"text": "\\varepsilon = \\mu - \\operatorname{tr}(\\mathbf p_Q) \\frac{m}{\\rho_m} = -\\frac{\\hbar^2}{8m} (\\nabla \\ln \\rho_m)^2 + U"
},
{
"math_id": 45,
"text": "\\varepsilon = 0"
}
] |
https://en.wikipedia.org/wiki?curid=14411733
|
14412213
|
Lester's theorem
|
Several points associated with a scalene triangle lie on the same circle
In Euclidean plane geometry, Lester's theorem states that in any scalene triangle, the two Fermat points, the nine-point center, and the circumcenter lie on the same circle.
The result is named after June Lester, who published it in 1997, and the circle through these points was called the Lester circle by Clark Kimberling.
Lester proved the result by using the properties of complex numbers; subsequent authors have given elementary proofs, proofs using vector arithmetic, and computerized proofs. The center of the Lester circle is also a triangle center, designated as X(1116) in the Encyclopedia of Triangle Centers. Peter Moses later discovered that 21 other triangle centers lie on the Lester circle; these points are numbered X(15535) – X(15555) in the Encyclopedia of Triangle Centers.
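Lester's theorem lends itself to a quick numerical check. The sketch below is an illustration only: the triangle coordinates and helper functions are arbitrary choices, not taken from the cited literature. It constructs the circumcenter, nine-point center and both Fermat (isogonic) points of a scalene triangle with NumPy, then verifies that the nine-point center lies on the circle through the other three points.

```python
import numpy as np

def circumcenter(P, Q, R):
    # Point equidistant from P, Q, R: solve the two linear equations |X-P|^2 = |X-Q|^2 = |X-R|^2.
    A = 2.0 * np.array([Q - P, R - P])
    b = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(A, b)

def line_intersection(P1, d1, P2, d2):
    # Intersection of the lines P1 + t*d1 and P2 + s*d2.
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), P2 - P1)
    return P1 + t * d1

def equilateral_apex(P, Q, opposite, outward=True):
    # Apex of the equilateral triangle erected on PQ, on the side away from (or toward) 'opposite'.
    M, d = (P + Q) / 2.0, Q - P
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    side = np.sign(n @ (opposite - M))
    h = np.sqrt(3.0) / 2.0 * np.linalg.norm(d)
    return M + (-side if outward else side) * h * n

def fermat_point(A, B, C, outward=True):
    # First (outward) or second (inward) isogonic point: the lines from each vertex to the
    # apex of the equilateral triangle on the opposite side are concurrent there.
    Ap = equilateral_apex(B, C, A, outward)
    Bp = equilateral_apex(C, A, B, outward)
    return line_intersection(A, Ap - A, B, Bp - B)

A, B, C = map(np.array, ([0.0, 0.0], [4.0, 0.0], [1.0, 3.0]))   # a scalene triangle
O = circumcenter(A, B, C)           # circumcenter
H = A + B + C - 2.0 * O             # orthocenter
N = (O + H) / 2.0                   # nine-point center
X13 = fermat_point(A, B, C, True)   # first Fermat point
X14 = fermat_point(A, B, C, False)  # second Fermat point

center = circumcenter(X13, X14, O)  # circle through the two Fermat points and the circumcenter
radius = np.linalg.norm(X13 - center)
print(abs(np.linalg.norm(N - center) - radius))   # effectively zero: N lies on the same circle
```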
Gibert's generalization.
In 2000, Bernard Gibert proposed a generalization of the Lester Theorem involving the Kiepert hyperbola of a triangle. His result can be stated as follows: Every circle with a diameter that is a chord of the Kiepert hyperbola and perpendicular to the triangle's Euler line passes through the Fermat points.
Dao's generalizations.
Dao's first generalization.
In 2014, Dao Thanh Oai extended Gibert's result to every rectangular hyperbola. The generalization is as follows: Let formula_0 and formula_1 lie on one branch of a rectangular hyperbola, and let formula_2 and formula_3 be the two points on the hyperbola that are symmetrical about its center (antipodal points), where the tangents at these points are parallel to the line formula_4. Let formula_5 and formula_6 be two points on the hyperbola whose tangents intersect at a point formula_7 on the line formula_4. If the line formula_8 intersects formula_4 at formula_9, and the perpendicular bisector of formula_10 intersects the hyperbola at formula_11 and formula_12, then the six points formula_2, formula_13 formula_14 formula_9, formula_11, and formula_12 lie on a circle. When the rectangular hyperbola is the Kiepert hyperbola and formula_2 and formula_3 are the two Fermat points, Dao's generalization becomes Gibert's generalization.
Dao's second generalization.
In 2015, Dao Thanh Oai proposed another generalization of the Lester circle, this time associated with the Neuberg cubic. It can be stated as follows: Let formula_16 be a point on the Neuberg cubic, and let formula_17 be the reflection of formula_16 in the line formula_18, with formula_19 and formula_20 defined cyclically. The lines formula_21, formula_22, and formula_23 are known to be concurrent at a point denoted as formula_24. The four points formula_25, formula_26, formula_16, and formula_24 lie on a circle. When formula_16 is the point formula_27, it is known that formula_28, making Dao's generalization a restatement of the Lester Theorem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "F_+"
},
{
"math_id": 3,
"text": "F_-"
},
{
"math_id": 4,
"text": "HG"
},
{
"math_id": 5,
"text": "K_+"
},
{
"math_id": 6,
"text": "K_-"
},
{
"math_id": 7,
"text": "E"
},
{
"math_id": 8,
"text": "K_+K_-"
},
{
"math_id": 9,
"text": "D"
},
{
"math_id": 10,
"text": "DE"
},
{
"math_id": 11,
"text": "G_+"
},
{
"math_id": 12,
"text": "G_-"
},
{
"math_id": 13,
"text": "F_-,"
},
{
"math_id": 14,
"text": "E,"
},
{
"math_id": 15,
"text": "F,"
},
{
"math_id": 16,
"text": "P"
},
{
"math_id": 17,
"text": "P_A"
},
{
"math_id": 18,
"text": "BC"
},
{
"math_id": 19,
"text": "P_B"
},
{
"math_id": 20,
"text": "P_C"
},
{
"math_id": 21,
"text": "AP_A"
},
{
"math_id": 22,
"text": "BP_B"
},
{
"math_id": 23,
"text": "CP_C"
},
{
"math_id": 24,
"text": "Q(P)"
},
{
"math_id": 25,
"text": "X_{13}"
},
{
"math_id": 26,
"text": "X_{14}"
},
{
"math_id": 27,
"text": "X(3)"
},
{
"math_id": 28,
"text": "Q(P) = Q(X_3) = X_5"
}
] |
https://en.wikipedia.org/wiki?curid=14412213
|
14413905
|
Debtor days
|
The debtor days ratio measures how quickly cash is being collected from debtors. The longer it takes for a company to collect, the greater the number of debtor days.
Debtor days can also be referred to as debtor collection period. Another common ratio is the creditors days ratio.
Definition.
formula_0
or
formula_1
when
formula_2
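A worked example with hypothetical figures (a company with year-end trade debtors of 150,000 and annual sales of 1,825,000):

```python
year_end_trade_debtors = 150_000
annual_sales = 1_825_000
days_in_financial_year = 365

debtor_days = year_end_trade_debtors / annual_sales * days_in_financial_year
print(debtor_days)   # 30.0 -> on average, cash is collected about 30 days after a sale
```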
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mbox{Debtor days} = \\frac {\\mbox{Year end trade debtors}} {\\mbox{Sales}} \\times {\\mbox{Number of days in financial year}}"
},
{
"math_id": 1,
"text": "\\mbox{Debtor days} = \\frac {\\mbox{Average trade debtors}} {\\mbox{Sales}} \\times {\\mbox{Number of days in financial year}}\n"
},
{
"math_id": 2,
"text": "\\mbox{Average trade debtors} = \\frac {\\mbox{Opening trade debtors} + \\mbox{Closing trade debtors}} {\\mbox{2}}\n"
}
] |
https://en.wikipedia.org/wiki?curid=14413905
|
14414065
|
Polar amplification
|
Polar amplification is the phenomenon that any change in the net radiation balance (for example greenhouse intensification) tends to produce a larger change in temperature near the poles than in the planetary average. It is commonly quantified as the ratio of polar warming to tropical warming. On a planet with an atmosphere that can restrict emission of longwave radiation to space (a greenhouse effect), surface temperatures will be warmer than a simple planetary equilibrium temperature calculation would predict. Where the atmosphere or an extensive ocean is able to transport heat polewards, the poles will be warmer and equatorial regions cooler than their local net radiation balances would predict. Relative to a reference climate, the poles experience the greatest cooling when the global-mean temperature is lower, and the greatest warming when it is higher.
In the extreme, the planet Venus is thought to have experienced a very large increase in greenhouse effect over its lifetime, so much so that its poles have warmed sufficiently to render its surface temperature effectively isothermal (no difference between poles and equator). On Earth, water vapor and trace gasses provide a lesser greenhouse effect, and the atmosphere and extensive oceans provide efficient poleward heat transport. Both palaeoclimate changes and recent global warming changes have exhibited strong polar amplification, as described below.
Arctic amplification is polar amplification of the Earth's North Pole only; Antarctic amplification is that of the South Pole.
History.
An observation-based study related to Arctic amplification was published in 1969 by Mikhail Budyko, and the study conclusion has been summarized as "Sea ice loss affects Arctic temperatures through the surface albedo feedback." The same year, a similar model was published by William D. Sellers. Both studies attracted significant attention since they hinted at the possibility for a runaway positive feedback within the global climate system. In 1975, Manabe and Wetherald published the first somewhat plausible general circulation model that looked at the effects of an increase of greenhouse gas. Although confined to less than one-third of the globe, with a "swamp" ocean and only land surface at high latitudes, it showed an Arctic warming faster than the tropics (as have all subsequent models).
Amplification.
Amplifying mechanisms.
Feedbacks associated with sea ice and snow cover are widely cited as one of the principal causes of terrestrial polar amplification. These feedbacks are particularly noted in local polar amplification, although recent work has shown that the lapse rate feedback is likely equally important to the ice-albedo feedback for Arctic amplification. Supporting this idea, large-scale amplification is also observed in model worlds with no ice or snow. It appears to arise both from a (possibly transient) intensification of poleward heat transport and more directly from changes in the local net radiation balance. Local radiation balance is crucial because an overall decrease in outgoing longwave radiation will produce a larger relative increase in net radiation near the poles than near the equator. Thus, between the lapse rate feedback and changes in the local radiation balance, much of polar amplification can be attributed to changes in outgoing longwave radiation. This is especially true for the Arctic, whereas the elevated terrain in Antarctica limits the influence of the lapse rate feedback.
Some examples of climate system feedbacks thought to contribute to recent polar amplification include the reduction of snow cover and sea ice, changes in atmospheric and ocean circulation, the presence of anthropogenic soot in the Arctic environment, and increases in cloud cover and water vapor. CO2 forcing has also been attributed to polar amplification. Most studies connect sea ice changes to polar amplification. Both ice extent and thickness impact polar amplification. Climate models with smaller baseline sea ice extent and thinner sea ice coverage exhibit stronger polar amplification. Some models of modern climate exhibit Arctic amplification without changes in snow and ice cover.
The individual processes contributing to polar warming are critical to understanding climate sensitivity. Polar warming also affects many ecosystems, including marine and terrestrial ecosystems, climate systems, and human populations. Polar amplification is largely driven by local polar processes with hardly any remote forcing, whereas polar warming is regulated by tropical and midlatitude forcing. These impacts of polar amplification have led to continuous research in the face of global warming.
Ocean circulation.
It has been estimated that 70% of the global wind energy transferred to the ocean is input within the Antarctic Circumpolar Current (ACC). Eventually, wind-stress-driven upwelling transports cold Antarctic waters via the Atlantic surface current toward the Arctic, warming them as they cross the equator. This effect is especially noticeable at high latitudes. Thus, warming in the Arctic depends on the efficiency of the global ocean transport and plays a role in the polar see-saw effect.
Decreased oxygen and low-pH during La Niña are processes that correlate with decreased primary production and a more pronounced poleward flow of ocean currents. It has been proposed that the mechanism of increased Arctic surface air temperature anomalies during La Niña periods of ENSO may be attributed to the Tropically Excited Arctic Warming Mechanism (TEAM), when Rossby waves propagate more poleward, leading to wave dynamics and an increase in downward infrared radiation.
Amplification factor.
Polar amplification is quantified in terms of a polar amplification factor, generally defined as the ratio of some change in a polar temperature to a corresponding change in a broader average temperature:
formula_0,
where formula_1 is a change in polar temperature and formula_2 is, for example, a corresponding change in a global mean temperature.
Common implementations define the temperature changes directly as the anomalies in surface air temperature relative to a recent reference interval (typically 30 years). Others have used the ratio of the variances of surface air temperature over an extended interval.
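As a minimal illustration of the definition, using the Arctic figures quoted later in this article (3.1 °C of Arctic warming versus 1 °C of global-mean warming over 1971–2019):

```python
delta_T_polar = 3.1    # Arctic surface warming, deg C (1971-2019)
delta_T_global = 1.0   # global-mean surface warming over the same period, deg C

paf = delta_T_polar / delta_T_global
print(paf)   # 3.1 -> the Arctic warmed roughly three times as fast as the globe
```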
Amplification phase.
It is observed that Arctic and Antarctic warming commonly proceed out of phase because of orbital forcing, resulting in the so-called polar see-saw effect.
Paleoclimate polar amplification.
The glacial/interglacial cycles of the Pleistocene provide extensive palaeoclimate evidence of polar amplification, both from the Arctic and the Antarctic. In particular, the temperature rise since the Last Glacial Maximum, roughly 20,000 years ago, provides a clear picture. Proxy temperature records from the Arctic (Greenland) and from the Antarctic indicate polar amplification factors on the order of 2.0.
Recent Arctic amplification.
Suggested mechanisms leading to the observed Arctic amplification include Arctic sea ice decline (open water reflects less sunlight than sea ice), atmospheric heat transport from the equator to the Arctic, and the lapse rate feedback.
The Arctic was historically described as warming twice as fast as the global average, but this estimate was based on older observations which missed the more recent acceleration. By 2021, enough data was available to show that the Arctic had warmed three times as fast as the globe - 3.1°C between 1971 and 2019, as opposed to the global warming of 1°C over the same period. Moreover, this estimate defines the Arctic as everything above the 60th parallel north, or a full third of the Northern Hemisphere: in 2021–2022, it was found that since 1979, the warming within the Arctic Circle itself (above the 66th parallel) has been nearly four times faster than the global average. Within the Arctic Circle itself, even greater Arctic amplification occurs in the Barents Sea area, with hotspots around the West Spitsbergen Current: weather stations located on its path record decadal warming up to seven times faster than the global average. This has fuelled concerns that unlike the rest of the Arctic sea ice, ice cover in the Barents Sea may permanently disappear even around 1.5 degrees of global warming.
The acceleration of Arctic amplification has not been linear: a 2022 analysis found that it occurred in two sharp steps, with the former around 1986, and the latter after 2000. The first acceleration is attributed to the increase in anthropogenic radiative forcing in the region, which is in turn likely connected to the reductions in stratospheric sulfur aerosols pollution in Europe in the 1980s in order to combat acid rain. Since sulphate aerosols have a cooling effect, their absence is likely to have increased Arctic temperatures by up to 0.5 degrees Celsius. The second acceleration has no known cause, which is why it did not show up in any climate models. It is likely to be an example of multi-decadal natural variability, like the suggested link between Arctic temperatures and Atlantic Multi-decadal Oscillation (AMO), in which case it can be expected to reverse in the future. However, even the first increase in Arctic amplification was only accurately simulated by a fraction of the current CMIP6 models.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{ PAF }={\\Delta{T}_{p}\\over\\Delta\\overline{T}}"
},
{
"math_id": 1,
"text": "\\Delta{T}_{p}"
},
{
"math_id": 2,
"text": "\\Delta\\overline{T}"
}
] |
https://en.wikipedia.org/wiki?curid=14414065
|
1441435
|
Equilibrium point (mathematics)
|
Constant solution to a differential equation
In mathematics, specifically in differential equations, an equilibrium point is a constant solution to a differential equation.
Formal definition.
The point formula_0 is an equilibrium point for the differential equation
formula_1
if formula_2 for all formula_3.
Similarly, the point formula_0 is an equilibrium point (or fixed point) for the difference equation
formula_4
if formula_5 for formula_6.
Equilibria can be classified by looking at the signs of the eigenvalues of the linearization of the equations about the equilibria. That is to say, by evaluating the Jacobian matrix at each of the equilibrium points of the system, and then finding the resulting eigenvalues, the equilibria can be categorized. Then the behavior of the system in the neighborhood of each equilibrium point can be determined qualitatively (or even quantitatively, in some instances) by finding the eigenvector(s) associated with each eigenvalue.
An equilibrium point is "hyperbolic" if none of the eigenvalues have zero real part. If all eigenvalues have negative real parts, the point is "stable". If at least one has a positive real part, the point is "unstable". If at least one eigenvalue has negative real part and at least one has positive real part, the equilibrium is a saddle point and it is unstable. If all the eigenvalues are real and have the same sign the point is called a "node".
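For illustration, the sketch below classifies the equilibria of a damped pendulum, x' = y, y' = -sin(x) - 0.5y (an arbitrary example system), by evaluating the Jacobian at each equilibrium point and inspecting the eigenvalues:

```python
import numpy as np

# Damped pendulum: x' = y, y' = -sin(x) - 0.5*y; equilibria at (k*pi, 0).
def jacobian(x, y):
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -0.5]])

for x_eq in (0.0, np.pi):
    eigenvalues = np.linalg.eigvals(jacobian(x_eq, 0.0))
    print(f"equilibrium ({x_eq:.2f}, 0): eigenvalues {eigenvalues}")

# At (0, 0): a complex pair with negative real part -> hyperbolic and stable.
# At (pi, 0): one positive and one negative real eigenvalue -> a saddle point, unstable.
```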
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tilde{\\mathbf{x}}\\in \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\frac{d\\mathbf{x}}{dt} = \\mathbf{f}(t,\\mathbf{x})"
},
{
"math_id": 2,
"text": "\\mathbf{f}(t,\\tilde{\\mathbf{x}})=\\mathbf{0}"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "\\mathbf{x}_{k+1} = \\mathbf{f}(k,\\mathbf{x}_k)"
},
{
"math_id": 5,
"text": "\\mathbf{f}(k,\\tilde{\\mathbf{x}})= \\tilde{\\mathbf{x}} "
},
{
"math_id": 6,
"text": "k=0,1,2,\\ldots"
}
] |
https://en.wikipedia.org/wiki?curid=1441435
|
1441464
|
Vacuum solution (general relativity)
|
Lorentzian manifold with vanishing Einstein tensor
In general relativity, a vacuum solution is a Lorentzian manifold whose Einstein tensor vanishes identically. According to the Einstein field equation, this means that the stress–energy tensor also vanishes identically, so that no matter or non-gravitational fields are present. These are distinct from the electrovacuum solutions, which take into account the electromagnetic field in addition to the gravitational field. Vacuum solutions are also distinct from the lambdavacuum solutions, where the only term in the stress–energy tensor is the cosmological constant term (and thus, the lambdavacuums can be taken as cosmological models).
More generally, a vacuum region in a Lorentzian manifold is a region in which the Einstein tensor vanishes.
Vacuum solutions are a special case of the more general exact solutions in general relativity.
Equivalent conditions.
It is a mathematical fact that the Einstein tensor vanishes if and only if the Ricci tensor vanishes. This follows from the fact that these two second rank tensors stand in a kind of dual relationship; they are the trace reverse of each other:
formula_0
where the traces are formula_1.
A third equivalent condition follows from the Ricci decomposition of the Riemann curvature tensor as a sum of the Weyl curvature tensor plus terms built out of the Ricci tensor: the Weyl and Riemann tensors agree, formula_2, in some region if and only if it is a vacuum region.
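As a concrete check of these conditions, the sketch below uses SymPy to verify symbolically that the Ricci tensor of the Schwarzschild metric (written directly in Schwarzschild coordinates) vanishes, so that the metric is indeed a vacuum solution. This is an illustration, not a derivation, and the helper functions are ad hoc.

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r

# Schwarzschild metric, signature (-, +, +, +)
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
def christoffel(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(4))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor: R_bc = d_a Gamma^a_bc - d_c Gamma^a_ba + Gamma^a_ad Gamma^d_bc - Gamma^a_cd Gamma^d_ab
def ricci(b, c):
    total = 0
    for a in range(4):
        total += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        for d in range(4):
            total += Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][a][b]
    return sp.simplify(total)

print(all(ricci(b, c) == 0 for b in range(4) for c in range(4)))   # True: R_ab = 0 everywhere
```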
Gravitational energy.
Since formula_3 in a vacuum region, it might seem that according to general relativity, vacuum regions must contain no energy. But the gravitational field can do work, so we must expect the gravitational field itself to possess energy, and it does. However, determining the precise location of this gravitational field energy is technically problematic in general relativity, because the theory by its very nature does not allow a clean separation into a universal gravitational interaction and "all the rest".
The fact that the gravitational field itself possesses energy yields a way to understand the nonlinearity of the Einstein field equation: this gravitational field energy itself produces more gravity. (This is described as "the gravity of gravity", or by saying that "gravity gravitates".) This means that the gravitational field outside the Sun is a bit "stronger" according to general relativity than it is according to Newton's theory.
Examples.
Well-known examples of explicit vacuum solutions include:
These all belong to one or more general families of solutions:
Several of the families mentioned here, members of which are obtained by solving an appropriate linear or nonlinear, real or complex partial differential equation, turn out to be very closely related, in perhaps surprising ways.
In addition to these, we also have the vacuum pp-wave spacetimes, which include the gravitational plane waves.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G_{ab} = R_{ab} - \\frac{R}{2} \\, g_{ab}, \\; \\; R_{ab} = G_{ab} - \\frac{G}{2} \\, g_{ab}"
},
{
"math_id": 1,
"text": "R = {R^a}_a, \\; \\; G = {G^a}_a = -R"
},
{
"math_id": 2,
"text": "R_{abcd}=C_{abcd}"
},
{
"math_id": 3,
"text": "T^{ab} = 0"
}
] |
https://en.wikipedia.org/wiki?curid=1441464
|
14415338
|
Uniform 5-polytope
|
Five-dimensional geometric shape
In geometry, a uniform 5-polytope is a five-dimensional uniform polytope. By definition, a uniform 5-polytope is vertex-transitive and constructed from uniform 4-polytope facets.
The complete set of convex uniform 5-polytopes has not been determined, but many can be made as Wythoff constructions from a small set of symmetry groups. These construction operations are represented by the permutations of rings of the Coxeter diagrams.
Regular 5-polytopes.
Regular 5-polytopes can be represented by the Schläfli symbol {p,q,r,s}, with s {p,q,r} 4-polytope facets around each face. There are exactly three such regular polytopes, all convex:
There are no nonconvex regular polytopes in 5 dimensions or above.
Convex uniform 5-polytopes.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
What is the complete set of convex uniform 5-polytopes?
There are 104 known convex uniform 5-polytopes, plus a number of infinite families of duoprism prisms, and polygon-polyhedron duoprisms. All except the "grand antiprism prism" are based on Wythoff constructions, reflection symmetry generated with Coxeter groups.
Symmetry of uniform 5-polytopes in four dimensions.
The 5-simplex is the regular form in the A5 family. The 5-cube and 5-orthoplex are the regular forms in the B5 family. The bifurcating graph of the D5 family contains the 5-orthoplex, as well as a 5-demicube which is an alternated 5-cube.
Each reflective uniform 5-polytope can be constructed in one or more reflective point group in 5 dimensions by a Wythoff construction, represented by rings around permutations of nodes in a Coxeter diagram. Mirror hyperplanes can be grouped, as seen by colored nodes, separated by even-branches. Symmetry groups of the form [a,b,b,a], have an extended symmetry, a,b,b,a, like [3,3,3,3], doubling the symmetry order. Uniform polytopes in these group with symmetric rings contain this extended symmetry.
If all mirrors of a given color are unringed (inactive) in a given uniform polytope, it will have a lower symmetry construction by removing all of the inactive mirrors. If all the nodes of a given color are ringed (active), an alternation operation can generate a new 5-polytope with chiral symmetry, shown as "empty" circled nodes, but the geometry is not generally adjustable to create uniform solutions.
There are 5 finite categorical uniform prismatic families of polytopes based on the nonprismatic uniform 4-polytopes. There is one infinite family of 5-polytopes based on prisms of the uniform duoprisms {p}×{q}×{ }.
There are 3 categorical uniform duoprismatic families of polytopes based on Cartesian products of the uniform polyhedra and regular polygons: {"q","r"}×{"p"}.
Enumerating the convex uniform 5-polytopes.
That brings the tally to: 19+31+8+45+1=104
In addition there are:
The A5 family.
There are 19 forms based on all permutations of the Coxeter diagrams with one or more rings. (16+4-1 cases)
They are named by Norman Johnson from the Wythoff construction operations upon regular 5-simplex (hexateron).
The A5 family has symmetry of order 720 (6 factorial). 7 of the 19 figures, with symmetrically ringed Coxeter diagrams have doubled symmetry, order 1440.
The coordinates of uniform 5-polytopes with 5-simplex symmetry can be generated as permutations of simple integers in 6-space, all in hyperplanes with normal vector (1,1,1,1,1,1).
The B5 family.
The B5 family has symmetry of order 3840 (5!×2<sup>5</sup>).
This family has 2<sup>5</sup>−1=31 Wythoffian uniform polytopes generated by marking one or more nodes of the Coxeter diagram. Also added are 8 uniform polytopes generated as alternations with half the symmetry, which form a complete duplicate of the D5 family as ... = ... (There are more alternations that are not listed because they produce only repetitions, as ... = ... and ... = ... These would give a complete duplication of the uniform 5-polytopes numbered 20 through 34 with symmetry broken in half.)
For simplicity it is divided into two subgroups, each with 12 forms, and 7 "middle" forms which equally belong in both.
The 5-cube family of 5-polytopes are given by the convex hulls of the base points listed in the following table, with all permutations of coordinates and sign taken. Each base point generates a distinct uniform 5-polytope. All coordinates correspond with uniform 5-polytopes of edge length 2.
The D5 family.
The D5 family has symmetry of order 1920 (5!×2<sup>4</sup>).
This family has 23 Wythoffian uniform polytopes, from "3×8-1" permutations of the D5 Coxeter diagram with one or more rings. 15 (2×8-1) are repeated from the B5 family and 8 are unique to this family, though even those 8 duplicate the alternations from the B5 family.
In the 15 repeats, both of the nodes terminating the length-1 branches are ringed, so the two kinds of element are identical and the symmetry doubles: the relations are ... = ... and ... = ..., creating a complete duplication of the uniform 5-polytopes 20 through 34 above. The 8 new forms have one such node ringed and one not, with the relation ... = ... duplicating uniform 5-polytopes 51 through 58 above.
Uniform prismatic forms.
There are 5 finite categorical uniform prismatic families of polytopes based on the nonprismatic uniform 4-polytopes. For simplicity, most alternations are not shown.
A4 × A1.
This prismatic family has 9 forms:
The A1 x A4 family has symmetry of order 240 (2*5!).
B4 × A1.
This prismatic family has 16 forms. (Three are shared with [3,4,3]×[ ] family)
The A1×B4 family has symmetry of order 768 (2<sup>5</sup>×4!).
The last three snubs can be realised with equal-length edges, but turn out nonuniform anyway because some of their 4-faces are not uniform 4-polytopes.
F4 × A1.
This prismatic family has 10 forms.
The A1 x F4 family has symmetry of order 2304 (2*1152). Three polytopes 85, 86 and 89 (green background) have double symmetry [[3,4,3],2], order 4608. The last one, snub 24-cell prism, (blue background) has [3+,4,3,2] symmetry, order 1152.
H4 × A1.
This prismatic family has [[Uniform 4-polytope#The H4 .5B5.2C3.2C3.5D family .E2.80.94 .28120-cell.2F600-cell.29|15 forms]]:
The [[Coxeter group#Finite Coxeter groups|A1 x H4 family]] has symmetry of order 28800 (2*14400).
Duoprism prisms.
Uniform duoprism prisms, {"p"}×{"q"}×{ }, form an infinite class for all integers "p","q">2. {4}×{4}×{ } makes a lower symmetry form of the [[5-cube]].
The extended [[f-vector]] of {"p"}×{"q"}×{ } is computed as ("p","p",1)*("q","q",1)*(2,1) = (2"pq",5"pq",4"pq"+2"p"+2"q","pq"+3"p"+3"q","p"+"q"+2,1).
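The product formula can be checked mechanically, since the extended f-vector of a Cartesian product is the convolution of the factors' extended f-vectors. A minimal sketch (the case {4}×{4}×{ } reproduces the 5-cube); the function names are arbitrary:

```python
import numpy as np

def product_fvector(f1, f2):
    # Extended f-vector of a Cartesian product = convolution of the factors' extended f-vectors.
    return np.convolve(f1, f2)

def duoprism_prism_fvector(p, q):
    # {p} x {q} x { }: polygon (p, p, 1) times polygon (q, q, 1) times segment (2, 1).
    return product_fvector(product_fvector([p, p, 1], [q, q, 1]), [2, 1])

print(duoprism_prism_fvector(4, 4))   # 32, 80, 80, 40, 10, 1 -- the f-vector of the 5-cube
print(duoprism_prism_fvector(3, 5))   # 30, 75, 76, 39, 10, 1 = (2pq, 5pq, 4pq+2p+2q, pq+3p+3q, p+q+2, 1)
```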
Grand antiprism prism.
The grand antiprism prism is the only known convex non-Wythoffian uniform 5-polytope. It has 200 vertices, 1100 edges, 1940 faces (40 pentagons, 500 squares, 1400 triangles), 1360 cells (600 [[tetrahedron|tetrahedra]], 40 [[pentagonal antiprism]]s, 700 [[triangular prism]]s, 20 [[pentagonal prism]]s), and 322 hypercells (2 [[grand antiprism]]s [[Image:Grand antiprism.png|50px]], 20 [[pentagonal antiprism]] prisms [[Image:Pentagonal antiprismatic prism.png|50px]], and 300 [[tetrahedral prism]]s [[Image:Tetrahedral prism.png|50px]]).
Notes on the Wythoff construction for the uniform 5-polytopes.
Construction of the reflective 5-dimensional [[uniform polytope]]s are done through a [[Wythoff construction]] process, and represented through a [[Coxeter diagram]], where each node represents a mirror. Nodes are ringed to imply which mirrors are active. The full set of uniform polytopes generated are based on the unique permutations of ringed nodes. Uniform 5-polytopes are named in relation to the [[regular polytope]]s in each family. Some families have two regular constructors and thus may have two ways of naming them.
Here are the primary operators available for constructing and naming the uniform 5-polytopes.
The last operation, the snub, and more generally the alternation, are the operations that can create nonreflective forms. These are drawn with "hollow rings" at the nodes.
The prismatic forms and bifurcating graphs can use the same truncation indexing notation, but require an explicit numbering system on the nodes for clarity.
Regular and uniform honeycombs.
[[File:Coxeter diagram affine rank5 correspondence.png|436px|thumb|Coxeter diagram correspondences between families and higher symmetry within diagrams. Nodes of the same color in each row represent identical mirrors. Black nodes are not active in the correspondence.]]
There are five fundamental affine [[Coxeter groups]], and 13 prismatic groups that generate regular and uniform tessellations in Euclidean 4-space.
There are three [[List of regular polytopes#Higher dimensions 3|regular honeycomb]]s of Euclidean 4-space:
Other families that generate uniform honeycombs:
[[Non-Wythoffian]] uniform tessellations in 4-space also exist by elongation (inserting layers), and gyration (rotating layers) from these reflective forms.
Regular and uniform hyperbolic honeycombs.
There are 5 [[Coxeter-Dynkin diagram#Compact|compact hyperbolic Coxeter groups]] of rank 5, each generating uniform honeycombs in hyperbolic 4-space as permutations of rings of the Coxeter diagrams.
There are 5 regular compact convex hyperbolic honeycombs in H4 space:
There are also 4 regular compact hyperbolic star-honeycombs in H4 space:
There are 9 [[Coxeter-Dynkin diagram#Rank 4 to 10|paracompact hyperbolic Coxeter groups of rank 5]], each generating uniform honeycombs in 4-space as permutations of rings of the Coxeter diagrams. Paracompact groups generate honeycombs with infinite [[Facet (geometry)|facets]] or [[vertex figure]]s.
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
[[Category:5-polytopes]]
[[eo:5-hiperpluredro]]
|
[
{
"math_id": 0,
"text": "{\\tilde{A}}_4"
},
{
"math_id": 1,
"text": "{\\tilde{D}}_4"
}
] |
https://en.wikipedia.org/wiki?curid=14415338
|
1441951
|
Graphics pipeline
|
Procedure to convert 3D scenes to 2D images
The computer graphics pipeline, also known as the rendering pipeline, or graphics pipeline, is a framework within computer graphics that outlines the necessary procedures for transforming a three-dimensional (3D) scene into a two-dimensional (2D) representation on a screen. Once a 3D model is generated, the graphics pipeline converts the model into a visually perceivable format on the computer display. Due to the dependence on specific software, hardware configurations, and desired display attributes, a universally applicable graphics pipeline does not exist. Nevertheless, graphics application programming interfaces (APIs), such as Direct3D, OpenGL and Vulkan were developed to standardize common procedures and oversee the graphics pipeline of a given hardware accelerator. These APIs provide an abstraction layer over the underlying hardware, relieving programmers from the need to write code explicitly targeting various graphics hardware accelerators like AMD, Intel, Nvidia, and others.
The model of the graphics pipeline is usually used in real-time rendering. Often, most of the pipeline steps are implemented in hardware, which allows for special optimizations. The term "pipeline" is used in a similar sense for the pipeline in processors: the individual steps of the pipeline run in parallel as long as any given step has what it needs.
Concept.
The 3D pipeline usually refers to the most common form of computer 3D rendering called 3D polygon rendering, distinct from raytracing and raycasting. In raycasting, a ray originates at the point where the camera resides, and if that ray hits a surface, the color and lighting of the point on the surface where the ray hit is calculated. In 3D polygon rendering the reverse happens: the area that is in view of the camera is calculated, and then rays are created from every part of every surface in view of the camera and traced back to the camera.
Structure.
A graphics pipeline can be divided into three main parts: Application, Geometry, and Rasterization.
Application.
The application step is executed by the software on the main processor (CPU). During the application step, changes are made to the scene as required, for example, by user interaction using input devices or during an animation. The new scene with all its primitives, usually triangles, lines, and points, is then passed on to the next step in the pipeline.
Examples of tasks that are typically done in the application step are collision detection, animation, morphing, and acceleration techniques using spatial subdivision schemes such as Quadtrees or Octrees. These are also used to reduce the amount of main memory required at a given time. The "world" of a modern computer game is far larger than what could fit into memory at once.
Geometry.
The geometry step (with Geometry pipeline), which is responsible for the majority of the operations with polygons and their vertices (with Vertex pipeline), can be divided into the following five tasks. How these tasks are organized as actual parallel pipeline steps depends on the particular implementation.
Definitions.
A "vertex" (plural: vertices) is a point in the world. Many points are used to join the surfaces. In special cases, point clouds are drawn directly, but this is still the exception.
A "triangle" is the most common geometric primitive of computer graphics. It is defined by its three vertices and a normal vector - the normal vector serves to indicate the front face of the triangle and is a vector that is perpendicular to the surface. The triangle may be provided with a color or with a texture (image "glued" on top of it). Triangles are preferred over rectangles because their three points always exist in a single plane.
The World Coordinate System.
The world coordinate system is the coordinate system in which the virtual world is created. This should meet a few conditions for the following mathematics to be easily applicable:
How the unit of the coordinate system is defined, is left to the developer. Whether, therefore, the unit vector of the system corresponds in reality to one meter or an Ångström depends on the application.
"Example:" If we are to develop a flight simulator, we can choose the world coordinate system so that the origin is in the middle of the Earth and the unit is set to one meter. In addition, to make the reference to reality easier, we define that the X axis should intersect the equator on the zero meridian, and the Z axis passes through the poles. In a Right-handed system, the Y-axis runs through the 90°-East meridian (somewhere in the Indian Ocean). Now we have a coordinate system that describes every point on Earth in three-dimensional Cartesian coordinates. In this coordinate system, we are now modeling the principles of our world, mountains, valleys, and oceans.
"Note:" Aside from computer geometry, geographic coordinates are used for the Earth, i.e., latitude and longitude, as well as altitudes above sea level. The approximate conversion - if one does not consider the fact that the Earth is not an exact sphere - is simple:
formula_0 with R = radius of the Earth [6,378,137 m], lat = latitude, long = longitude, hasl = height above sea level.
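A direct transcription of this (spherical) conversion as a small function; latitude and longitude are taken in radians, and the function name is arbitrary:

```python
import numpy as np

R = 6_378_137.0   # Earth radius in metres

def geodetic_to_cartesian(lat, long, hasl):
    r = R + hasl
    return np.array([r * np.cos(lat) * np.cos(long),
                     r * np.cos(lat) * np.sin(long),
                     r * np.sin(lat)])

print(geodetic_to_cartesian(0.0, 0.0, 0.0))          # equator at the zero meridian: (R, 0, 0)
print(geodetic_to_cartesian(np.pi / 2, 0.0, 0.0))    # North Pole: on the +Z axis (up to rounding)
```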
All of the following examples apply in a right-handed system. For a left-handed system, the signs may need to be interchanged.
The objects contained within the scene (houses, trees, cars) are often designed in their object coordinate system (also called model coordinate system or local coordinate system) for reasons of simpler modeling. To assign these objects to coordinates in the world coordinate system or global coordinate system of the entire scene, the object coordinates are transformed using translation, rotation, or scaling. This is done by multiplying the corresponding transformation matrices. In addition, several differently transformed copies can be formed from one object, for example, a forest from a tree; This technique is called instancing.
To place a model of an aircraft in the world, we first determine four matrices. Since we work in three-dimensional space, we need four-dimensional homogeneous matrices for our calculations.
First, we need three rotation matrices, namely one for each of the three aircraft axes (vertical axis, transverse axis, longitudinal axis).
Around the X axis (usually defined as a longitudinal axis in the object coordinate system)
formula_1
Around the Y axis (usually defined as the transverse axis in the object coordinate system)
formula_2
Around the Z axis (usually defined as a vertical axis in the object coordinate system)
formula_3
We also use a translation matrix that moves the aircraft to the desired point in our world:
formula_4.
"Remark": The above matrices are transposed with respect to the ones in the article rotation matrix. See further down for an explanation of why.
Now we could calculate the position of the vertices of the aircraft in world coordinates by multiplying each point successively with these four matrices. Since the multiplication of a matrix with a vector is quite expensive (time-consuming), one usually takes another path and first multiplies the four matrices together. The multiplication of two matrices is even more expensive but must be executed only once for the whole object. The multiplications formula_5 and formula_6 are equivalent. Thereafter, the resulting matrix could be applied to the vertices. In practice, however, the multiplication with the vertices is still not applied, but the camera matrices (see below) are determined first.
For our example from above, however, the translation has to be determined somewhat differently, since the common meaning of "up" - apart from at the North Pole - does not coincide with our definition of the positive Z axis, and therefore the model must also be rotated around the center of the Earth:
formula_7
The first step pushes the origin of the model to the correct height above the Earth's surface; then it is rotated by latitude and longitude.
The order in which the matrices are applied is important because the matrix multiplication is "not" commutative. This also applies to the three rotations, which can be demonstrated by an example: The point (1, 0, 0) lies on the X-axis, if one rotates it first by 90° around the X- and then around The Y-axis, it ends up on the Z-axis (the rotation around the X-axis does not affect a point that is on the axis). If, on the other hand, one rotates around the Y-axis first and then around the X-axis, the resulting point is located on the Y-axis. The sequence itself is arbitrary as long as it is always the same. The sequence with x, then y, then z (roll, pitch, heading) is often the most intuitive because the rotation causes the compass direction to coincide with the direction of the "nose".
There are also two conventions to define these matrices, depending on whether you want to work with column vectors or row vectors. Different graphics libraries have different preferences here. OpenGL prefers column vectors, DirectX row vectors. The decision determines from which side the point vectors are to be multiplied by the transformation matrices.
For column vectors, the multiplication is performed from the right, i.e. formula_8, where vout and vin are 4x1 column vectors. The concatenation of the matrices also is done from the right to left, i.e., for example formula_9, when first rotating and then shifting.
In the case of row vectors, this works exactly the other way around. The multiplication now takes place from the left as formula_10 with 1x4-row vectors and the concatenation is formula_11 when we also first rotate and then move. The matrices shown above are valid for the second case, while those for column vectors are transposed. The rule formula_12 applies, which for multiplication with vectors means that you can switch the multiplication order by transposing the matrix.
In matrix chaining, each transformation defines a new coordinate system, allowing for flexible extensions. For instance, an aircraft's propeller, modeled separately, can be attached to the aircraft's nose by a translation that only shifts between the propeller coordinate system and the aircraft coordinate system. To render the aircraft, its transformation matrix is first computed to transform its points; for the propeller points, the propeller model matrix is additionally multiplied by the aircraft's matrix. The matrix calculated in this way is also called the "world matrix"; it must be determined for each object in the world before rendering. The application can then dynamically alter these matrices, for example changing the position of the aircraft according to its speed after each frame.
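As an illustration of the row-vector convention described above, a minimal NumPy sketch that builds the four matrices exactly as given, concatenates them into a world matrix, and transforms one vertex; the angles and the translation are arbitrary:

```python
import numpy as np

def rot_x(a):     # rotation about the X axis, row-vector convention (v_out = v_in @ M)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]], dtype=float)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s, 0],
                     [0, 1, 0, 0],
                     [s, 0, c, 0],
                     [0, 0, 0, 1]], dtype=float)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0, 0],
                     [-s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]], dtype=float)

def translate(x, y, z):
    m = np.eye(4)
    m[3, :3] = [x, y, z]
    return m

# World matrix: rotate about x, y, z (roll, pitch, heading), then translate;
# with row vectors the concatenation runs left to right.
world = rot_x(0.1) @ rot_y(0.2) @ rot_z(0.3) @ translate(10.0, 20.0, 30.0)

vertex = np.array([1.0, 0.0, 0.0, 1.0])    # homogeneous model-space vertex (row vector)
print(vertex @ world)                      # the same vertex in world coordinates
```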
Camera Transformation.
In addition to the objects, the scene also defines a virtual camera or viewer that indicates the position and direction of view relative to which the scene is rendered. The scene is transformed so that the camera is at the origin looking along the Z-axis. The resulting coordinate system is called the camera coordinate system and the transformation is called "camera transformation" or "View Transformation".
The view matrix is usually determined from the camera position, the target point (where the camera looks), and an "up vector" ("up" from the viewer's viewpoint). First, three auxiliary vectors, the x-, y- and z-axes of the camera coordinate system, are required:
With normal(v) = normalization of the vector v;
cross(v1, v2) = cross product of v1 and v2.
Finally, the matrix: formula_13
with dot(v1, v2) = dot product of v1 and v2.
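The construction of the three auxiliary axis vectors is not spelled out above; the sketch below uses a common choice in which the camera looks along its positive z-axis (this handedness is an assumption) and then fills the matrix exactly as given:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at(camera_position, target, up):
    # Row-vector convention, matching the view matrix given above.
    zaxis = normalize(target - camera_position)     # assumed: camera looks along +z
    xaxis = normalize(np.cross(up, zaxis))
    yaxis = np.cross(zaxis, xaxis)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = xaxis, yaxis, zaxis
    m[3, 0] = -np.dot(xaxis, camera_position)
    m[3, 1] = -np.dot(yaxis, camera_position)
    m[3, 2] = -np.dot(zaxis, camera_position)
    return m

view = look_at(np.array([0.0, 2.0, -5.0]), np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
eye = np.array([0.0, 2.0, -5.0, 1.0])
print(eye @ view)   # the camera position maps to the origin of camera space
```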
Projection.
The 3D projection step transforms the view volume into a cube with the corner point coordinates (-1, -1, 0) and (1, 1, 1); Occasionally other target volumes are also used. This step is called "projection", even though it transforms a volume into another volume, since the resulting Z coordinates are not stored in the image, but are only used in Z-buffering in the later rastering step. In a perspective illustration, a central projection is used. To limit the number of displayed objects, two additional clipping planes are used; The visual volume is therefore a truncated pyramid (frustum). The parallel or orthogonal projection is used, for example, for technical representations because it has the advantage that all parallels in the object space are also parallel in the image space, and the surfaces and volumes are the same size regardless of the distance from the viewer. Maps use, for example, an orthogonal projection (so-called orthophoto), but oblique images of a landscape cannot be used in this way - although they can technically be rendered, they seem so distorted that we cannot make any use of them. The formula for calculating a perspective mapping matrix is:
formula_14
with h = cot(fieldOfView / 2.0), where fieldOfView is the aperture angle of the camera; w = h / aspectRatio, where aspectRatio is the aspect ratio of the target image; near = smallest distance to be visible; far = largest distance to be visible.
The reasons why the smallest and the greatest distance have to be given here are, on the one hand, that this distance is divided to reach the scaling of the scene (more distant objects are smaller in a perspective image than near objects), and on the other hand to scale the Z values to the range 0..1, for filling the Z-buffer. This buffer often has only a resolution of 16 bits, which is why the near and far values should be chosen carefully. A too-large difference between the near and the far value leads to so-called Z-fighting because of the low resolution of the Z-buffer. It can also be seen from the formula that the near value cannot be 0 because this point is the focus point of the projection. There is no picture at this point.
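A direct transcription of the perspective matrix above, in the row-vector convention used throughout this article. Note that, as written, the matrix maps camera-space points at depth -near to 0 and at depth -far to 1 after the perspective divide, i.e., it assumes visible points have negative z (conventions differ between APIs, so treat this as an assumption):

```python
import numpy as np

def perspective(field_of_view, aspect_ratio, near, far):
    # Direct transcription of the matrix above (row-vector convention, v_out = v_in @ M).
    h = 1.0 / np.tan(field_of_view / 2.0)    # cot(fieldOfView / 2)
    w = h / aspect_ratio
    return np.array([[w, 0.0, 0.0, 0.0],
                     [0.0, h, 0.0, 0.0],
                     [0.0, 0.0, far / (near - far), -1.0],
                     [0.0, 0.0, near * far / (near - far), 0.0]])

P = perspective(np.radians(60.0), 16.0 / 9.0, near=0.1, far=100.0)

# A point at depth -near ends up at z = 0 and one at depth -far at z = 1
# after dividing by the resulting w component.
for z in (-0.1, -100.0):
    clip = np.array([0.0, 0.0, z, 1.0]) @ P
    print(clip[2] / clip[3])     # 0.0 and 1.0
```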
For the sake of completeness, the formula for parallel projection (orthogonal projection):
formula_15
with w = width of the target cube (dimension in units of the world coordinate system); h = w / aspectRatio (aspect ratio of the target image); near = smallest distance to be visible; far = largest distance to be visible.
For reasons of efficiency, the camera and projection matrix are usually combined into a transformation matrix so that the camera coordinate system is omitted. The resulting matrix is usually the same for a single image, while the world matrix looks different for each object. In practice, therefore, view and projection are pre-calculated so that only the world matrix has to be adapted during the display. However, more complex transformations such as vertex blending are possible. Freely programmable geometry shaders that modify the geometry can also be executed.
In the actual rendering step, the world matrix * camera matrix * projection matrix is calculated and then finally applied to every single point. Thus, the points of all objects are transferred directly to the screen coordinate system (at least almost, the value range of the axes is still -1..1 for the visible range, see section "Window-Viewport-Transformation").
Lighting.
Often a scene contains light sources placed at different positions to make the lighting of the objects appear more realistic. In this case, a gain factor for the texture is calculated for each vertex based on the light sources and the material properties associated with the corresponding triangle. In the later rasterization step, the vertex values of a triangle are interpolated over its surface. A general lighting (ambient light) is applied to all surfaces. It is the diffuse and thus direction-independent brightness of the scene. The sun is a directed light source, which can be assumed to be infinitely far away. The illumination effected by the sun on a surface is determined by forming the scalar product of the directional vector from the sun and the normal vector of the surface. If the value is negative, the surface is facing the sun.
Clipping.
Only the primitives that are within the visual volume need to be rastered (drawn). This visual volume is defined as the inside of a frustum, a shape in the form of a pyramid with a cut-off top. Primitives that are completely outside the visual volume are discarded; This is called frustum culling. Further culling methods such as back-face culling, which reduces the number of primitives to be considered, can theoretically be executed in any step of the graphics pipeline. Primitives that are only partially inside the cube must be clipped against the cube. The advantage of the previous projection step is that the clipping always takes place against the same cube. Only the - possibly clipped - primitives, which are within the visual volume, are forwarded to the final step.
Window-Viewport transformation.
To output the image to any target area (viewport) of the screen, another transformation, the "Window-Viewport transformation", must be applied. This is a shift, followed by scaling. The resulting coordinates are the device coordinates of the output device. The viewport contains 6 values: the height and width of the window in pixels, the upper left corner of the window in window coordinates (usually 0, 0), and the minimum and maximum values for Z (usually 0 and 1).
Formally: formula_16
With vp=Viewport; v=Point after projection
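A direct transcription of this mapping, with hypothetical viewport values:

```python
import numpy as np

def window_viewport(v, vp_x, vp_y, vp_width, vp_height, vp_minz, vp_maxz):
    # v = (x, y, z) after projection, with x, y in [-1, 1] and z in [0, 1]
    return np.array([vp_x + (1.0 + v[0]) * vp_width / 2.0,
                     vp_y + (1.0 - v[1]) * vp_height / 2.0,
                     vp_minz + v[2] * (vp_maxz - vp_minz)])

print(window_viewport(np.array([0.0, 0.0, 0.5]), 0, 0, 1280, 720, 0.0, 1.0))   # 640, 360, 0.5
```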
On modern hardware, most of the geometry computation steps are performed in the vertex shader. This is, in principle, freely programmable, but generally performs at least the transformation of the points and the illumination calculation. For the DirectX programming interface, the use of a custom vertex shader is necessary from version 10, while older versions still have a standard shader.
Rasterization.
The rasterization step is the final step before the fragment shader pipeline; in it, all primitives are rasterized, i.e., discrete fragments are created from continuous primitives.
In this stage of the graphics pipeline, the grid points are also called fragments, for the sake of greater distinctiveness. Each fragment corresponds to one pixel in the frame buffer and this corresponds to one pixel of the screen. These can be colored (and possibly illuminated). Furthermore, it is necessary to determine the visible, closer to the observer fragment, in the case of overlapping polygons. A Z-buffer is usually used for this so-called hidden surface determination. The color of a fragment depends on the illumination, texture, and other material properties of the visible primitive and is often interpolated using the triangle vertex properties. Where available, a fragment shader (also called Pixel Shader) is run in the rastering step for each fragment of the object. If a fragment is visible, it can now be mixed with already existing color values in the image if transparency or multi-sampling is used. In this step, one or more fragments become a pixel.
To prevent the user from seeing the gradual rasterization of the primitives, double buffering takes place. The rasterization is carried out in a special memory area. Once the image has been completely rasterized, it is copied to the visible area of the image memory.
Inverse.
All matrices used are nonsingular and thus invertible. Since the multiplication of two nonsingular matrices creates another nonsingular matrix, the entire transformation matrix is also invertible. The inverse is required to recalculate world coordinates from screen coordinates - for example, to determine from the mouse pointer position the clicked object. However, since the screen and the mouse have only two dimensions, the third is unknown. Therefore, a ray is projected at the cursor position into the world and then the intersection of this ray with the polygons in the world is determined.
Shader.
Classic graphics cards are still relatively close to the graphics pipeline. With increasing demands on the GPU, restrictions were gradually removed to create more flexibility. Modern graphics cards use a freely programmable, shader-controlled pipeline, which allows direct access to individual processing steps. To relieve the main processor, additional processing steps have been moved to the pipeline and the GPU.
The most important shader units are vertex shaders, geometry shaders, and pixel shaders.
The Unified Shader has been introduced to take full advantage of all units. This gives a single large pool of shader units. As required, the pool is divided into different groups of shaders. A strict separation between the shader types is therefore no longer useful.
It is also possible to use a so-called compute-shader to perform any calculations off the display of a graphic on the GPU. The advantage is that they run very parallel, but there are limitations. These universal calculations are also called general-purpose computing on graphics processing units, or GPGPU for short.
Mesh shaders are a recent addition, aiming to overcome the bottlenecks of the geometry pipeline fixed layout.
|
[
{
"math_id": 0,
"text": "\\begin{pmatrix}\nx\\\\\ny\\\\\nz\n\\end{pmatrix}=\\begin{pmatrix}\n(R+{hasl})*\\cos({lat})*\\cos({long})\\\\\n(R+{hasl})*\\cos({lat})*\\sin({long})\\\\\n(R+{hasl})*\\sin({lat})\n\\end{pmatrix}\n"
},
{
"math_id": 1,
"text": "R_x=\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & \\cos(\\alpha) & \\sin(\\alpha) & 0\\\\\n0 & -\\sin(\\alpha) & \\cos(\\alpha) & 0\\\\\n0 & 0 & 0 & 1\n\\end{pmatrix}"
},
{
"math_id": 2,
"text": "R_y=\\begin{pmatrix}\n\\cos(\\alpha) & 0 & -\\sin(\\alpha) & 0\\\\\n0 & 1 & 0 & 0\\\\\n\\sin(\\alpha) & 0 & \\cos(\\alpha) & 0\\\\\n0 & 0 & 0 & 1\n\\end{pmatrix}"
},
{
"math_id": 3,
"text": "R_z=\\begin{pmatrix}\n\\cos(\\alpha) & \\sin(\\alpha) & 0 & 0\\\\\n-\\sin(\\alpha) & \\cos(\\alpha) & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 1\n\\end{pmatrix}"
},
{
"math_id": 4,
"text": "T_{x,y,z}=\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\nx & y & z & 1\n\\end{pmatrix}"
},
{
"math_id": 5,
"text": "((((v*R_x)*R_y)*R_z)*T)"
},
{
"math_id": 6,
"text": "(v*(((R_x*R_y)*R_z)*T))"
},
{
"math_id": 7,
"text": "T_{Kugel} = T_{x,y,z}(0,0, R+{hasl})*R_y(\\Pi/2-{lat})*R_z({long})"
},
{
"math_id": 8,
"text": "v_{out} = M * v_{in}"
},
{
"math_id": 9,
"text": "M = T_x * R_x"
},
{
"math_id": 10,
"text": "v_{out} = v_{in} * M"
},
{
"math_id": 11,
"text": "M = R_x * T_x"
},
{
"math_id": 12,
"text": "(v*M)^{T} = M^{T}*v^{T}"
},
{
"math_id": 13,
"text": "\\begin{pmatrix}\n{xaxis}.x & {yaxis}.x & {zaxis}.x & 0\\\\\n{xaxis}.y & {yaxis}.y & {zaxis}.y & 0\\\\\n{xaxis}.z & {yaxis}.z & {zaxis}.z & 0\\\\\n-{dot}({xaxis}, {cameraPosition}) & -{dot}({yaxis},{cameraPosition}) & -{dot}({zaxis},{cameraPosition}) & 1\n\\end{pmatrix}"
},
{
"math_id": 14,
"text": "\\begin{pmatrix}\nw & 0 & 0 & 0\\\\\n0 & h & 0 & 0\\\\\n0 & 0 & {far}/({near-far}) & -1\\\\\n0 & 0 & ({near}*{far}) / ({near}-{far}) & 0\n\\end{pmatrix}"
},
{
"math_id": 15,
"text": "\\begin{pmatrix}\n2.0/w & 0 & 0 & 0\\\\\n0 & 2.0/h & 0 & 0\\\\\n0 & 0 & 1.0/({near-far}) & -1\\\\\n0 & 0 & {near} / ({near}-{far}) & 0\n\\end{pmatrix}"
},
{
"math_id": 16,
"text": "\\begin{pmatrix}\nx\\\\\ny\\\\\nz\n\\end{pmatrix}=\\begin{pmatrix}\n{vp}.X+(1.0+v.X)*{vp}.{width}/2.0\\\\\n{vp}.Y+(1.0-v.Y)*{vp}.{height}/2.0\\\\\n{vp}.{minz}+v.Z*({vp}.{maxz} - {vp}.{minz})\n\\end{pmatrix}"
}
] |
https://en.wikipedia.org/wiki?curid=1441951
|