id | title | text | formulas | url
---|---|---|---|---|
14549088
|
DTDP-4-amino-4,6-dideoxygalactose transaminase
|
In enzymology, a dTDP-4-amino-4,6-dideoxygalactose transaminase (EC 2.6.1.59) is an enzyme that catalyzes the chemical reaction
dTDP-4-amino-4,6-dideoxy-D-galactose + 2-oxoglutarate formula_0 dTDP-4-dehydro-6-deoxy-D-galactose + L-glutamate
Thus, the two substrates of this enzyme are dTDP-4-amino-4,6-dideoxy-D-galactose and 2-oxoglutarate, whereas its two products are dTDP-4-dehydro-6-deoxy-D-galactose and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is dTDP-4-amino-4,6-dideoxy-D-galactose:2-oxoglutarate aminotransferase. Other names in common use include thymidine diphosphoaminodideoxygalactose aminotransferase and thymidine diphosphate 4-keto-6-deoxy-D-glucose transaminase. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549088
|
14549158
|
Glutamate—prephenate aminotransferase
|
Prephenate aminotransferase
In enzymology, glutamate-prephenate aminotransferase (EC 2.6.1.79, also known as prephenate transaminase, PAT, and L-glutamate:prephenate aminotransferase) is an enzyme that catalyzes the chemical reaction
L-arogenate + 2-oxoglutarate formula_0 prephenate + L-glutamate
Thus, the two substrates of this enzyme are L-arogenate and 2-oxoglutarate, whereas its two products are prephenate and L-glutamate. However, in most plant species utilizing this enzyme, the left side of the reaction is strongly favored. Therefore, glutamate is used as the amino donor to convert prephenate into arogenate.
Nomenclature.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-arogenate:2-oxoglutarate aminotransferase. Other names in common use include prephenate transaminase (ambiguous), PAT (ambiguous), and L-glutamate:prephenate aminotransferase. It operates in the phenylalanine and tyrosine biosynthesis pathway.
Species distribution.
The gene encoding this enzyme has been identified in various plant species and microorganisms, meaning that all genes in the pathway have now been identified and accounted for. This pathway occurs in many different plant species. Phenylalanine is an essential amino acid: humans (and other animals) cannot produce it themselves and must therefore obtain it from their diet. As such, the activity of this enzyme in various plant species affects the survival of animals as well. In these animals, tyrosine is synthesized from phenylalanine via the enzyme phenylalanine hydroxylase, whereas plants have their own route to tyrosine.
Function.
Glutamate—prephenate aminotransferase catalyzes the reversible reaction shown below:
L-arogenate + 2-oxoglutarate formula_0 prephenate + L-glutamate,
and its primary purpose is to convert prephenate into arogenate via transamination, using glutamate as the amino donor. As stated previously, the left side of the reaction is strongly favored. This is a necessary process for any organism that needs to convert arogenate into phenylalanine or tyrosine, as arogenate is an intermediate in the reactions that synthesize these amino acids, an alternative route to the one involving phenylpyruvate and hydroxyphenylpyruvate. In the absence of glutamate, aspartate can act as the amino donor in the reaction without the need for a different enzyme, but this reaction proceeds more slowly. The details of this enzyme's activity remain poorly characterized.
Structure.
Little is known about the structure of glutamate-prephenate aminotransferase. However, some data indicates that the enzyme may have an α2-β2 subunit structure.
References and further reading.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549158
|
14549190
|
Glutamine—fructose-6-phosphate transaminase (isomerizing)
|
In enzymology, a glutamine-fructose-6-phosphate transaminase (isomerizing) (EC 2.6.1.16) is an enzyme that catalyzes the chemical reaction
L-glutamine + D-fructose 6-phosphate formula_0 L-glutamate + D-glucosamine 6-phosphate
Thus, the two substrates of this enzyme are L-glutamine and D-fructose 6-phosphate, whereas its two products are L-glutamate and D-glucosamine 6-phosphate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-glutamine:D-fructose-6-phosphate isomerase (deaminating). This enzyme participates in glutamate metabolism and aminosugars metabolism.
Structural studies.
As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1JXA, 1MOQ, 1MOR, 1MOS, 1XFF, 1XFG, 2BPL, 2J6H, 2POC, 2PUT, 2PUV, and 2PUW.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549190
|
14549213
|
Glutamine—phenylpyruvate transaminase
|
In enzymology, a glutamine-phenylpyruvate transaminase (EC 2.6.1.64) is an enzyme that catalyzes the chemical reaction
L-glutamine + phenylpyruvate formula_0 2-oxoglutaramate + L-phenylalanine
Thus, the two substrates of this enzyme are L-glutamine and phenylpyruvate, whereas its two products are 2-oxoglutaramate and L-phenylalanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-glutamine:phenylpyruvate aminotransferase. Other names in common use include glutamine transaminase K and glutamine-phenylpyruvate aminotransferase. It employs one cofactor, pyridoxal phosphate.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1YIY and 1YIZ.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549213
|
14549233
|
Glutamine—pyruvate transaminase
|
In enzymology, a glutamine-pyruvate transaminase (EC 2.6.1.15) is an enzyme that catalyzes the chemical reaction
L-glutamine + pyruvate formula_0 2-oxoglutaramate + L-alanine
Thus, the two substrates of this enzyme are L-glutamine and pyruvate, whereas its two products are 2-oxoglutaramate and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-glutamine:pyruvate aminotransferase. Other names in common use include glutaminase II, L-glutamine transaminase L, and glutamine-oxo-acid transaminase. This enzyme participates in glutamate metabolism. It employs one cofactor, pyridoxal phosphate.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1V2D, 1V2E, and 1V2F.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549233
|
14549245
|
Glutamine—scyllo-inositol transaminase
|
In enzymology, a glutamine-scyllo-inositol transaminase (EC 2.6.1.50) is an enzyme that catalyzes the chemical reaction
L-glutamine + 2,4,6/3,5-pentahydroxycyclohexanone formula_0 2-oxoglutaramate + 1-amino-1-deoxy-scyllo-inositol
Thus, the two substrates of this enzyme are L-glutamine and 2,4,6/3,5-pentahydroxycyclohexanone, whereas its two products are 2-oxoglutaramate and 1-amino-1-deoxy-scyllo-inositol.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-glutamine:2,4,6/3,5-pentahydroxycyclohexanone aminotransferase. Other names in common use include glutamine scyllo-inosose aminotransferase, L-glutamine-keto-scyllo-inositol aminotransferase, glutamine-scyllo-inosose transaminase, and L-glutamine-scyllo-inosose transaminase. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549245
|
14549262
|
Glycine—oxaloacetate transaminase
|
In enzymology, a glycine-oxaloacetate transaminase (EC 2.6.1.35) is an enzyme that catalyzes the chemical reaction
glycine + oxaloacetate formula_0 glyoxylate + L-aspartate
Thus, the two substrates of this enzyme are glycine and oxaloacetate, whereas its two products are glyoxylate and L-aspartate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is glycine:oxaloacetate aminotransferase. This enzyme is also called glycine-oxaloacetate aminotransferase. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549262
|
14549280
|
Glycine transaminase
|
In enzymology, a glycine transaminase (EC 2.6.1.4) is an enzyme that catalyzes the chemical reaction
glycine + 2-oxoglutarate formula_0 glyoxylate + L-glutamate
Thus, the two substrates of this enzyme are glycine and 2-oxoglutarate, whereas its two products are glyoxylate and L-glutamate.
This reaction strongly favours the synthesis of glycine. This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is glycine:2-oxoglutarate aminotransferase. Other names in common use include glutamic-glyoxylic transaminase, glycine aminotransferase, glyoxylate-glutamic transaminase, L-glutamate:glyoxylate aminotransferase, and glyoxylate-glutamate aminotransferase. This enzyme participates in glycine, serine and threonine metabolism. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549280
|
14549300
|
Histidine transaminase
|
In enzymology, a histidine transaminase (EC 2.6.1.38) is an enzyme that catalyzes the chemical reaction
L-histidine + 2-oxoglutarate formula_0 (imidazol-5-yl)pyruvate + L-glutamate
Thus, the two substrates of this enzyme are L-histidine and 2-oxoglutarate, whereas its two products are (imidazol-5-yl)pyruvate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-histidine:2-oxoglutarate aminotransferase. Other names in common use include histidine aminotransferase, and histidine-2-oxoglutarate aminotransferase. This enzyme participates in histidine metabolism.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549300
|
14549324
|
Histidinol-phosphate transaminase
|
In enzymology, a histidinol-phosphate transaminase (EC 2.6.1.9) is an enzyme that catalyzes the chemical reaction
L-histidinol phosphate + 2-oxoglutarate formula_0 3-(imidazol-4-yl)-2-oxopropyl phosphate + L-glutamate
Thus, the two substrates of this enzyme are L-histidinol phosphate and 2-oxoglutarate, whereas its two products are 3-(imidazol-4-yl)-2-oxopropyl phosphate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-histidinol-phosphate:2-oxoglutarate aminotransferase. Other names in common use include imidazolylacetolphosphate transaminase, glutamic-imidazoleacetol phosphate transaminase, histidinol phosphate aminotransferase, imidazoleacetol phosphate transaminase, L-histidinol phosphate aminotransferase, histidine:imidazoleacetol phosphate transaminase, IAP transaminase, and imidazolylacetolphosphate aminotransferase. This enzyme participates in 5 metabolic pathways: histidine metabolism, tyrosine metabolism, phenylalanine metabolism, phenylalanine, tyrosine and tryptophan biosynthesis, and novobiocin biosynthesis. It employs one cofactor, pyridoxal phosphate.
Structural studies.
As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes 1FG3, 1FG7, 1GEW, 1GEX, 1GEY, 1H1C, 1IJI, 1UU0, 1UU1, 1UU2, and 2F8J.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549324
|
14549341
|
Kynurenine—glyoxylate transaminase
|
In enzymology, a kynurenine-glyoxylate transaminase (EC 2.6.1.63) is an enzyme that catalyzes the chemical reaction:
L-kynurenine + glyoxylate formula_0 4-(2-aminophenyl)-2,4-dioxobutanoate + glycine
Thus, the two substrates of this enzyme are L-kynurenine and glyoxylate, whereas its two products are 4-(2-aminophenyl)-2,4-dioxobutanoate and glycine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-kynurenine:glyoxylate aminotransferase (cyclizing). This enzyme is also called kynurenine-glyoxylate aminotransferase.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549341
|
14549368
|
Leucine transaminase
|
In enzymology, a leucine transaminase (EC 2.6.1.6) is an enzyme that catalyzes the chemical reaction
L-leucine + 2-oxoglutarate formula_0 4-methyl-2-oxopentanoate + L-glutamate
Thus, the two substrates of this enzyme are L-leucine and 2-oxoglutarate, whereas its two products are 4-methyl-2-oxopentanoate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-leucine:2-oxoglutarate aminotransferase. Other names in common use include L-leucine aminotransferase, leucine 2-oxoglutarate transaminase, leucine aminotransferase, and leucine-alpha-ketoglutarate transaminase. This enzyme participates in 3 metabolic pathways: valine, leucine and isoleucine degradation, valine, leucine and isoleucine biosynthesis, and pantothenate and CoA biosynthesis. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549368
|
14549379
|
L,L-diaminopimelate aminotransferase
|
In enzymology, a L,L-diaminopimelate aminotransferase (EC 2.6.1.83) is an enzyme that catalyzes the chemical reaction
LL-2,6-diaminoheptanedioate + 2-oxoglutarate formula_0 (S)-2,3,4,5-tetrahydropyridine-2,6-dicarboxylate + L-glutamate + H2O
Thus, the two substrates of this enzyme are LL-2,6-diaminoheptanedioate and 2-oxoglutarate, whereas its 3 products are (S)-2,3,4,5-tetrahydropyridine-2,6-dicarboxylate, L-glutamate, and H2O.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is LL-2,6-diaminoheptanedioate:2-oxoglutarate aminotransferase. Other names in common use include LL-diaminopimelate transaminase, LL-DAP aminotransferase, and LL-DAP-AT. This enzyme participates in lysine biosynthesis.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2Z1Z and 2Z20.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549379
|
14549395
|
L-lysine 6-transaminase
|
In enzymology, a L-lysine 6-transaminase (EC 2.6.1.36) is an enzyme that catalyzes the chemical reaction
L-lysine + 2-oxoglutarate formula_0 2-aminoadipate 6-semialdehyde + L-glutamate
Thus, the two substrates of this enzyme are L-lysine and 2-oxoglutarate, whereas its two products are 2-aminoadipate 6-semialdehyde and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. This enzyme participates in lysine biosynthesis. It employs one cofactor, pyridoxal phosphate.
Nomenclature.
The systematic name of this enzyme class is L-lysine:2-oxoglutarate 6-aminotransferase. Other names in common use include lysine 6-aminotransferase and lysine ε-aminotransferase (LAT).
Structure.
L-lysine 6-transaminase belongs to the aminotransferase class-III family. Crystal structures of L-lysine 6-transaminase reveal a Glu243 “switch” through which the enzyme changes substrate specificities.
References.
Further reading.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549395
|
14549489
|
Methionine—glyoxylate transaminase
|
In enzymology, a methionine-glyoxylate transaminase (EC 2.6.1.73) is an enzyme that catalyzes the chemical reaction
L-methionine + glyoxylate formula_0 4-methylthio-2-oxobutanoate + glycine
Thus, the two substrates of this enzyme are L-methionine and glyoxylate, whereas its two products are 4-methylthio-2-oxobutanoate and glycine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-methionine:glyoxylate aminotransferase. Other names in common use include methionine-glyoxylate aminotransferase, and MGAT.
References.
Further reading.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549489
|
14549505
|
N6-acetyl-beta-lysine transaminase
|
In enzymology, a N6-acetyl-beta-lysine transaminase (EC 2.6.1.65) is an enzyme that catalyzes the chemical reaction
6-acetamido-3-aminohexanoate + 2-oxoglutarate formula_0 6-acetamido-3-oxohexanoate + L-glutamate
Thus, the two substrates of this enzyme are 6-acetamido-3-aminohexanoate and 2-oxoglutarate, whereas its two products are 6-acetamido-3-oxohexanoate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is 6-acetamido-3-aminohexanoate:2-oxoglutarate aminotransferase. This enzyme is also called epsilon-acetyl-beta-lysine aminotransferase. This enzyme participates in lysine degradation. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549505
|
14549522
|
Nicotianamine aminotransferase
|
In enzymology, a nicotianamine aminotransferase (EC 2.6.1.80) is an enzyme that catalyzes the chemical reaction
nicotianamine + 2-oxoglutarate formula_0 3″-deamino-3″-oxonicotianamine + L-glutamate
Thus, the two substrates of this enzyme are nicotianamine and 2-oxoglutarate, whereas its two products are 3″-deamino-3″-oxonicotianamine and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is nicotianamine:2-oxoglutarate aminotransferase, also called nicotianamine transaminase. Other names in common use include NAAT, NAAT-I, NAAT-II, and NAAT-III.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549522
|
14549549
|
Ornithine(lysine) transaminase
|
In enzymology, an ornithine(lysine) transaminase (EC 2.6.1.68) is an enzyme that catalyzes the chemical reaction
L-ornithine + 2-oxoglutarate formula_0 3,4-dihydro-2H-pyrrole-2-carboxylate + L-glutamate + H2O
Thus, the two substrates of this enzyme are L-ornithine and 2-oxoglutarate, whereas its 3 products are 3,4-dihydro-2H-pyrrole-2-carboxylate, L-glutamate, and H2O.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-ornithine:2-oxoglutarate-aminotransferase. Other names in common use include ornithine(lysine) aminotransferase, lysine/ornithine:2-oxoglutarate aminotransferase, and L-ornithine(L-lysine):2-oxoglutarate-aminotransferase.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549549
|
14549573
|
Phenylalanine(histidine) transaminase
|
In enzymology, a phenylalanine(histidine) transaminase (EC 2.6.1.58) is an enzyme that catalyzes the chemical reaction
L-phenylalanine + pyruvate formula_0 phenylpyruvate + L-alanine
Thus, the two substrates of this enzyme are L-phenylalanine and pyruvate, whereas its two products are phenylpyruvate and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-phenylalanine:pyruvate aminotransferase. Other names in common use include phenylalanine (histidine) aminotransferase, phenylalanine(histidine):pyruvate aminotransferase, histidine:pyruvate aminotransferase, and L-phenylalanine(L-histidine):pyruvate aminotransferase.
References.
Further reading.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549573
|
14549602
|
Pyridoxamine—oxaloacetate transaminase
|
In enzymology, a pyridoxamine-oxaloacetate transaminase (EC 2.6.1.31) is an enzyme that catalyzes the chemical reaction:
pyridoxamine + oxaloacetate formula_0 pyridoxal + L-aspartate
Thus, the two substrates of this enzyme are pyridoxamine and oxaloacetate, whereas its two products are pyridoxal and L-aspartate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is pyridoxamine:oxaloacetate aminotransferase. This enzyme participates in vitamin B6 metabolism.
References.
Further reading.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549602
|
14549664
|
Pyridoxamine-phosphate transaminase
|
In enzymology, a pyridoxamine-phosphate transaminase (EC 2.6.1.54) is an enzyme that catalyzes the chemical reaction
pyridoxamine 5'-phosphate + 2-oxoglutarate formula_0 pyridoxal 5'-phosphate + D-glutamate
Thus, the two substrates of this enzyme are pyridoxamine 5'-phosphate and 2-oxoglutarate, whereas its two products are pyridoxal 5'-phosphate and D-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is pyridoxamine-5'-phosphate:2-oxoglutarate aminotransferase (D-glutamate-forming). Other names in common use include pyridoxamine phosphate aminotransferase, pyridoxamine 5'-phosphate-alpha-ketoglutarate transaminase, and pyridoxamine 5'-phosphate transaminase. This enzyme participates in vitamin B6 metabolism.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549664
|
14549682
|
Pyridoxamine—pyruvate transaminase
|
In enzymology, a pyridoxamine-pyruvate transaminase (EC 2.6.1.30) is an enzyme that catalyzes the chemical reaction
pyridoxamine + pyruvate formula_0 pyridoxal + L-alanine
Thus, the two substrates of this enzyme are pyridoxamine and pyruvate, whereas its two products are pyridoxal and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is pyridoxamine:pyruvate aminotransferase. This enzyme is also called pyridoxamine-pyruvic transaminase. This enzyme participates in vitamin B6 metabolism.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549682
|
14549700
|
(R)-3-amino-2-methylpropionate—pyruvate transaminase
|
Class of enzymes
In enzymology, a (R)-3-amino-2-methylpropionate—pyruvate transaminase (EC 2.6.1.40) is an enzyme that catalyzes the chemical reaction
(R)-3-amino-2-methylpropanoate + pyruvate formula_0 2-methyl-3-oxopropanoate + L-alanine
Thus, the two substrates of this enzyme are (R)-3-amino-2-methylpropanoate and pyruvate, whereas its two products are 2-methyl-3-oxopropanoate and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is (R)-3-amino-2-methylpropanoate:pyruvate aminotransferase. Other names in common use include D-3-aminoisobutyrate-pyruvate transaminase, beta-aminoisobutyrate-pyruvate aminotransferase, D-3-aminoisobutyrate-pyruvate aminotransferase, (R)-3-amino-2-methylpropionate transaminase, and D-beta-aminoisobutyrate:pyruvate aminotransferase. The enzyme has also been reported to catalyze transamination with the L-isomer, whereas the D-isomer, the naturally occurring form, is inactive as a substrate. Names used for closely related enzymes include L-3-aminoisobutyrate transaminase, beta-aminobutyric transaminase, L-3-aminoisobutyric aminotransferase, and beta-aminoisobutyrate-alpha-ketoglutarate transaminase.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549700
|
14549732
|
(S)-3-amino-2-methylpropionate transaminase
|
Class of enzymes
In enzymology, a (S)-3-amino-2-methylpropionate transaminase (EC 2.6.1.22) is an enzyme that catalyzes the chemical reaction
(S)-3-amino-2-methylpropanoate + 2-oxoglutarate formula_0 2-methyl-3-oxopropanoate + L-glutamate
Thus, the two substrates of this enzyme are (S)-3-amino-2-methylpropanoate and 2-oxoglutarate, whereas its two products are 2-methyl-3-oxopropanoate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is (S)-3-amino-2-methylpropanoate:2-oxoglutarate aminotransferase. Other names in common use include L-3-aminoisobutyrate transaminase, beta-aminobutyric transaminase, L-3-aminoisobutyric aminotransferase, and beta-aminoisobutyrate-alpha-ketoglutarate transaminase. This enzyme participates in valine, leucine and isoleucine degradation.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549732
|
14549760
|
Serine—glyoxylate transaminase
|
In enzymology, a serine-glyoxylate transaminase (EC 2.6.1.45) is an enzyme that catalyzes the chemical reaction
L-serine + glyoxylate formula_0 3-hydroxypyruvate + glycine
Thus, the two substrates of this enzyme are L-serine and glyoxylate, whereas its two products are 3-hydroxypyruvate and glycine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-serine:glyoxylate aminotransferase. This enzyme participates in glycine, serine and threonine metabolism. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549760
|
14549797
|
Serine—pyruvate transaminase
|
In enzymology, a serine-pyruvate transaminase (EC 2.6.1.51) is an enzyme that catalyzes the chemical reaction
L-serine + pyruvate formula_0 3-hydroxypyruvate + L-alanine
Thus, the two substrates of this enzyme are L-serine and pyruvate, whereas its two products are 3-hydroxypyruvate and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-serine:pyruvate aminotransferase. Other names in common use include SPT, and hydroxypyruvate:L-alanine transaminase. This enzyme participates in glycine, serine and threonine metabolism. It employs one cofactor, pyridoxal phosphate.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1J04.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549797
|
14549827
|
Succinyldiaminopimelate transaminase
|
In enzymology, a succinyldiaminopimelate transaminase (EC 2.6.1.17) is an enzyme that catalyzes the chemical reaction
N-succinyl-L-2,6-diaminoheptanedioate + 2-oxoglutarate formula_0 N-succinyl-L-2-amino-6-oxoheptanedioate + L-glutamate
Thus, the two substrates of this enzyme are N-succinyl-L-2,6-diaminoheptanedioate and 2-oxoglutarate, whereas its two products are N-succinyl-L-2-amino-6-oxoheptanedioate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is N-succinyl-L-2,6-diaminoheptanedioate:2-oxoglutarate aminotransferase. Other names in common use include succinyldiaminopimelate aminotransferase, and N-succinyl-L-diaminopimelic glutamic transaminase. This enzyme participates in lysine biosynthesis. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549827
|
14549861
|
Succinylornithine transaminase
|
In enzymology, a succinylornithine transaminase (EC 2.6.1.81) is an enzyme that catalyzes the chemical reaction
N2-succinyl-L-ornithine + 2-oxoglutarate formula_0 N-succinyl-L-glutamate 5-semialdehyde + L-glutamate
Thus, the two substrates of this enzyme are N2-succinyl-L-ornithine and 2-oxoglutarate, whereas its two products are N-succinyl-L-glutamate 5-semialdehyde and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is N2-succinyl-L-ornithine:2-oxoglutarate 5-aminotransferase. Other names in common use include succinylornithine aminotransferase, N2-succinylornithine 5-aminotransferase, AstC, SOAT, and 2-N-succinyl-L-ornithine:2-oxoglutarate 5-aminotransferase. This enzyme participates in arginine and proline metabolism.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549861
|
14549886
|
Taurine—2-oxoglutarate transaminase
|
In enzymology, a taurine-2-oxoglutarate transaminase (EC 2.6.1.55) is an enzyme that catalyzes the chemical reaction.
taurine + 2-oxoglutarate formula_0 sulfoacetaldehyde + L-glutamate
Thus, the two substrates of this enzyme are taurine and 2-oxoglutarate, whereas its two products are sulfoacetaldehyde and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is taurine:2-oxoglutarate aminotransferase. Other names in common use include taurine aminotransferase, taurine transaminase, taurine-alpha-ketoglutarate aminotransferase, and taurine-glutamate transaminase. This enzyme participates in beta-alanine metabolism. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549886
|
14549923
|
Taurine—pyruvate aminotransferase
|
In enzymology, a taurine-pyruvate aminotransferase (EC 2.6.1.77) is an enzyme that catalyzes the chemical reaction.
taurine + pyruvate formula_0 L-alanine + 2-sulfoacetaldehyde
Thus, the two substrates of this enzyme are taurine and pyruvate, whereas its two products are L-alanine and 2-sulfoacetaldehyde.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is taurine:pyruvate aminotransferase. This enzyme is also called Tpa. This enzyme participates in taurine and hypotaurine metabolism.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549923
|
14549958
|
Thyroid-hormone transaminase
|
In enzymology, a thyroid-hormone transaminase (EC 2.6.1.26) is an enzyme that catalyzes the chemical reaction
L-3,5,3'-triiodothyronine + 2-oxoglutarate formula_0 3-[4-(4-hydroxy-3-iodophenoxy)-3,5-diiodophenyl]-2-oxopropanoate + L-glutamate
Thus, the two substrates of this enzyme are L-3,5,3'-triiodothyronine and 2-oxoglutarate, whereas its two products are 3-[4-(4-hydroxy-3-iodophenoxy)-3,5-diiodophenyl]-2-oxopropanoate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-3,5,3'-triiodothyronine:2-oxoglutarate aminotransferase. Other names in common use include 3,5-dinitrotyrosine transaminase, and thyroid hormone aminotransferase. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14549958
|
14550001
|
Tryptophan—phenylpyruvate transaminase
|
In enzymology, a tryptophan-phenylpyruvate transaminase (EC 2.6.1.28) is an enzyme that catalyzes the chemical reaction:
L-tryptophan + phenylpyruvate formula_0 (indol-3-yl)pyruvate + L-phenylalanine
Thus, the two substrates of this enzyme are L-tryptophan and phenylpyruvate, whereas its two products are (indol-3-yl)pyruvate and L-phenylalanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-tryptophan:phenylpyruvate aminotransferase. This enzyme is also called L-tryptophan-alpha-ketoisocaproate aminotransferase.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14550001
|
14550026
|
Quantifier shift
|
Logical fallacy
A quantifier shift is a logical fallacy in which the quantifiers of a statement are erroneously transposed during the rewriting process. The change in the logical nature of the statement may not be obvious when it is stated in a natural language like English.
Definition.
The fallacious deduction is that:
"For every A, there is a B, such that C. Therefore, there is a B, such that for every A, C."
formula_0
However, an inverse switching:
formula_1
is logically valid.
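The contrast can be checked mechanically in a proof assistant. The following Lean 4 sketch (with illustrative names) proves the valid direction; no analogous proof exists for the fallacious direction, since different values of the universally quantified variable may require different witnesses.
```lean
-- Illustrative Lean 4 sketch: R is an arbitrary binary relation between two types.
-- Valid direction: a single witness y that works for every x also works pointwise.
example {α β : Type} (R : α → β → Prop) :
    (∃ y, ∀ x, R x y) → (∀ x, ∃ y, R x y) :=
  fun ⟨y, hy⟩ x => ⟨y, hy x⟩

-- The fallacious direction, (∀ x, ∃ y, R x y) → (∃ y, ∀ x, R x y), has no proof in
-- general, because each x may require a different witness y.
```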
Examples.
1. Every person has a woman that is their mother. Therefore, there is a woman that is the mother of every person.
formula_2
It is fallacious to conclude that there is "one woman" who is the mother of "all people".
However, if the major premise ("every person has a woman that is their mother") is assumed to be true, then it is valid to conclude that there is "some" woman who is "any given person's" mother.
2. Everybody has something to believe in. Therefore, there is something that everybody believes in.
formula_3
It is fallacious to conclude that there is "some particular concept" to which everyone subscribes.
It is valid to conclude that each person believes "a given concept". But it is entirely possible that each person believes in a unique concept.
3. Every natural number formula_4 has a successor formula_5, the smallest of all natural numbers that are greater than formula_4. Therefore, there is a natural number formula_6 that is a successor to all natural numbers.
formula_7
It is fallacious to conclude that there is a single natural number that is the successor of every natural number.
|
[
{
"math_id": 0,
"text": "\\forall x \\,\\exists y \\,Rxy \\vdash \\exists y \\,\\forall x \\,Rxy"
},
{
"math_id": 1,
"text": "\\exist y \\,\\forall x \\,Rxy \\vdash \\forall x \\,\\exist y\\, Rxy"
},
{
"math_id": 2,
"text": "\\forall x \\,\\exists y \\,(Px \\to (Wy \\land M(yx))) \\vdash \\exists y \\,\\forall x \\,(Px \\to (Wy \\land M(yx)))"
},
{
"math_id": 3,
"text": "\\forall x \\,\\exists y \\,Bxy \\vdash \\exists y \\,\\forall x \\,Bxy"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "m = n + 1"
},
{
"math_id": 6,
"text": "{m}"
},
{
"math_id": 7,
"text": "\\forall n \\,\\exists m \\,Snm \\vdash \\exists m \\,\\forall n \\,Snm"
}
] |
https://en.wikipedia.org/wiki?curid=14550026
|
14550029
|
Tryptophan transaminase
|
In enzymology, a tryptophan transaminase (EC 2.6.1.27) is an enzyme that catalyzes the chemical reaction
L-tryptophan + 2-oxoglutarate formula_0 (indol-3-yl)pyruvate + L-glutamate
Thus, the two substrates of this enzyme are L-tryptophan and 2-oxoglutarate, whereas its two products are (indol-3-yl)pyruvate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-tryptophan:2-oxoglutarate aminotransferase. Other names in common use include L-phenylalanine-2-oxoglutarate aminotransferase, tryptophan aminotransferase, 5-hydroxytryptophan-ketoglutaric transaminase, hydroxytryptophan aminotransferase, L-tryptophan aminotransferase, and L-tryptophan transaminase. This enzyme participates in tryptophan metabolism. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14550029
|
14550048
|
UDP-2-acetamido-4-amino-2,4,6-trideoxyglucose transaminase
|
In enzymology, an UDP-2-acetamido-4-amino-2,4,6-trideoxyglucose transaminase (EC 2.6.1.34) is an enzyme that catalyzes the chemical reaction
UDP-2-acetamido-4-amino-2,4,6-trideoxyglucose + 2-oxoglutarate formula_0 UDP-2-acetamido-4-dehydro-2,6-dideoxyglucose + L-glutamate
Thus, the two substrates of this enzyme are UDP-2-acetamido-4-amino-2,4,6-trideoxyglucose and 2-oxoglutarate, whereas its two products are UDP-2-acetamido-4-dehydro-2,6-dideoxyglucose and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is UDP-2-acetamido-4-amino-2,4,6-trideoxyglucose:2-oxoglutarate aminotransferase. Another name in common use is uridine diphospho-4-amino-2-acetamido-2,4,6-trideoxyglucose aminotransferase. It employs one cofactor, pyridoxal phosphate.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14550048
|
14550070
|
Valine—3-methyl-2-oxovalerate transaminase
|
In enzymology, a valine-3-methyl-2-oxovalerate transaminase (EC 2.6.1.32) is an enzyme that catalyzes the chemical reaction
L-valine + (S)-3-methyl-2-oxopentanoate formula_0 3-methyl-2-oxobutanoate + L-isoleucine
Thus, the two substrates of this enzyme are L-valine and (S)-3-methyl-2-oxopentanoate, whereas its two products are 3-methyl-2-oxobutanoate and L-isoleucine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-valine:(S)-3-methyl-2-oxopentanoate aminotransferase. Other names in common use include valine-isoleucine transaminase, valine-3-methyl-2-oxovalerate aminotransferase, alanine-valine transaminase, valine-2-keto-methylvalerate aminotransferase, and valine-isoleucine aminotransferase.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14550070
|
14550101
|
Valine—pyruvate transaminase
|
In enzymology, a valine-pyruvate transaminase (EC 2.6.1.66) is an enzyme that catalyzes the chemical reaction
L-valine + pyruvate formula_0 3-methyl-2-oxobutanoate + L-alanine
Thus, the two substrates of this enzyme are L-valine and pyruvate, whereas its two products are 3-methyl-2-oxobutanoate and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-valine:pyruvate aminotransferase. Other names in common use include transaminase C, valine-pyruvate aminotransferase, and alanine-oxoisovalerate aminotransferase. This enzyme participates in valine, leucine and isoleucine biosynthesis.
References.
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14550101
|
1455062
|
Empirical risk minimization
|
Principle in statistical learning theory
Empirical risk minimization is a principle in statistical learning theory which defines a family of learning algorithms based on evaluating performance over a known and fixed dataset. The core idea is based on an application of the law of large numbers; more specifically, we cannot know exactly how well a predictive algorithm will work in practice (i.e. the true "risk") because we do not know the true distribution of the data, but we can instead estimate and optimize the performance of the algorithm on a known set of training data. The performance over the known set of training data is referred to as the "empirical risk".
Background.
The following situation is a general setting of many supervised learning problems. There are two spaces of objects formula_0 and formula_1, and we would like to learn a function formula_2 (often called a "hypothesis") which outputs an object formula_3, given formula_4. To do so, we have at our disposal a "training set" of formula_5 examples formula_6 where formula_7 is an input and formula_8 is the corresponding response that is desired from formula_9.
More formally, we assume that there is a joint probability distribution formula_10 over formula_0 and formula_1, and that the training set consists of formula_5 instances formula_6 drawn i.i.d. from formula_10. The assumption of a joint probability distribution allows for the modelling of uncertainty in predictions (e.g. from noise in data) because formula_11 is not a deterministic function of formula_12, but rather a random variable with conditional distribution formula_13 for a fixed formula_12.
It is also assumed that there is a non-negative real-valued loss function formula_14 which measures how different the prediction formula_15 of a hypothesis is from the true outcome formula_11. For classification tasks these loss functions can be scoring rules.
The risk associated with hypothesis formula_16 is then defined as the expectation of the loss function:
formula_17
A loss function commonly used in theory is the 0-1 loss function: formula_18.
The ultimate goal of a learning algorithm is to find a hypothesis formula_19 among a fixed class of functions formula_20 for which the risk formula_21 is minimal:
formula_22
For classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function.
Empirical risk minimization.
In general, the risk formula_21 cannot be computed because the distribution formula_10 is unknown to the learning algorithm (this situation is referred to as agnostic learning). However, given a sample of iid training data points, we can compute an estimate, called the "empirical risk", by computing the average of the loss function over the training set; more formally, computing the expectation with respect to the empirical measure:
formula_23
The empirical risk minimization principle states that the learning algorithm should choose a hypothesis formula_24 which minimizes the empirical risk over the hypothesis class formula_25:
formula_26
Thus, the learning algorithm defined by the empirical risk minimization principle consists in solving the above optimization problem.
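As a minimal illustration, the Python sketch below computes the empirical risk under the 0-1 loss and performs empirical risk minimization over a small, finite hypothesis class of one-dimensional threshold classifiers; the toy data and the hypothesis class are invented for the example.
```python
import numpy as np

def zero_one_loss(y_pred, y_true):
    """0-1 loss: 1 if the prediction differs from the true label, else 0."""
    return (y_pred != y_true).astype(float)

def empirical_risk(h, xs, ys):
    """Average loss of hypothesis h over the training sample (x_i, y_i)."""
    preds = np.array([h(x) for x in xs])
    return zero_one_loss(preds, ys).mean()

# Toy training set (illustrative): 1-D inputs with binary labels.
xs = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.7])
ys = np.array([0, 0, 0, 1, 1, 1])

# Finite hypothesis class H: threshold classifiers h_t(x) = 1 if x >= t else 0.
thresholds = np.linspace(0.0, 1.0, 21)
hypotheses = [lambda x, t=t: int(x >= t) for t in thresholds]

# ERM: pick the hypothesis with the smallest empirical risk.
risks = [empirical_risk(h, xs, ys) for h in hypotheses]
best = int(np.argmin(risks))
print(f"best threshold = {thresholds[best]:.2f}, empirical risk = {risks[best]:.3f}")
```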
Properties.
Guarantees for the performance of empirical risk minimization depend strongly on the function class selected as well as the distributional assumptions made. In general, distribution-free methods are too coarse, and do not lead to practical bounds. However, they are still useful in deriving asymptotic properties of learning algorithms, such as consistency. In particular, distribution-free bounds on the performance of empirical risk minimization given a fixed function class can be derived using bounds on the VC complexity of the function class.
For simplicity, consider the case of binary classification tasks. It is possible to bound the probability that the selected classifier formula_27 is much worse than the best possible classifier formula_28. Consider the risk formula_29 defined over the hypothesis class formula_30 with growth function formula_31, given a dataset of size formula_5. Then, for every formula_32:
formula_33
Similar results hold for regression tasks. These results are often based on uniform laws of large numbers, which control the deviation of the empirical risk from the true risk, uniformly over the hypothesis class.
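To see how such a bound behaves numerically, the short sketch below evaluates its right-hand side, 8 * S(C, n) * exp(-n * eps^2 / 32) as stated above, using Sauer's lemma to bound the growth function of a hypothesis class with a given VC dimension; the VC dimension and the value of epsilon are illustrative choices.
```python
import math

def sauer_bound(n, d):
    """Sauer's lemma: a class of VC dimension d has growth function at most sum_{i<=d} C(n, i)."""
    return sum(math.comb(n, i) for i in range(min(d, n) + 1))

def erm_deviation_bound(n, d, eps):
    """Right-hand side of the stated bound: 8 * S(C, n) * exp(-n * eps^2 / 32)."""
    return 8 * sauer_bound(n, d) * math.exp(-n * eps**2 / 32)

# Illustrative numbers: VC dimension 10, accuracy eps = 0.1.
# The bound is vacuous for small n and becomes meaningful only for large samples.
for n in (10_000, 100_000, 1_000_000):
    print(n, erm_deviation_bound(n, d=10, eps=0.1))
```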
Impossibility results.
It is also possible to show lower bounds on algorithm performance if no distributional assumptions are made. This is sometimes referred to as the no-free-lunch theorem. Even though a specific learning algorithm may provide the asymptotically optimal performance for any distribution, the finite sample performance is always poor for at least one data distribution. This means that no classifier can provide a guarantee on the error for a given sample size that holds across all distributions.
Specifically, let formula_32. Then, for any sample size formula_5 and classification rule formula_27, there exists a distribution of formula_34 with risk formula_35 (meaning that perfect prediction is possible) such that:
formula_36
It is further possible to show that the convergence rate of a learning algorithm is poor for some distributions. Specifically, given a sequence of decreasing positive numbers formula_37 converging to zero, it is possible to find a distribution such that:
formula_38
for all formula_5. This result shows that universally good classification rules do not exist, in the sense that any rule must perform poorly on at least one distribution.
Computational complexity.
Empirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers. Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., data is linearly separable.
In practice, machine learning algorithms cope with this issue either by employing a convex approximation to the 0–1 loss function (like hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution formula_10 (and thus stop being agnostic learning algorithms to which the above result applies).
In the case of convexification, Zhang's lemma bounds the excess risk of the original problem in terms of the excess risk of the convexified problem. Minimizing the latter using convex optimization then also controls the former.
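The sketch below illustrates the surrogate-loss approach: instead of minimizing the NP-hard 0-1 empirical risk directly, it minimizes the average hinge loss of a linear classifier by subgradient descent. The data, step size, and iteration count are invented for the example.
```python
import numpy as np

def hinge_erm(X, y, lr=0.1, epochs=200):
    """Minimize the empirical hinge loss mean(max(0, 1 - y * (X @ w + b))) by subgradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1            # points contributing a nonzero subgradient
        grad_w = -(y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data (illustrative), labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = hinge_erm(X, y)
zero_one = np.mean(np.sign(X @ w + b) != y)   # empirical 0-1 risk of the learned classifier
print("training 0-1 error:", zero_one)
```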
Tilted empirical risk minimization.
Tilted empirical risk minimization is a machine learning technique used to modify standard loss functions like squared error, by introducing a tilt parameter. This parameter dynamically adjusts the weight of data points during training, allowing the algorithm to focus on specific regions or characteristics of the data distribution. Tilted empirical risk minimization is particularly useful in scenarios with imbalanced data or when there is a need to emphasize errors in certain parts of the prediction space.
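A minimal sketch of the tilted objective follows, assuming the form used in the tilted-ERM literature in which the plain average of losses is replaced by (1/t) times the log of the average of exp(t * loss): positive t emphasizes the largest losses, negative t suppresses them, and t approaching 0 recovers the ordinary empirical risk. The loss values below are invented for the example.
```python
import numpy as np

def tilted_empirical_risk(losses, t):
    """Assumed form of the t-tilted risk: (1/t) * log(mean(exp(t * losses))).
    As t -> 0 this recovers the ordinary empirical risk (the plain mean)."""
    losses = np.asarray(losses, dtype=float)
    if abs(t) < 1e-12:
        return losses.mean()
    # log-sum-exp shift for numerical stability
    m = (t * losses).max()
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

# Illustrative squared-error losses with one outlier.
losses = np.array([0.1, 0.2, 0.15, 0.12, 5.0])
for t in (-2.0, 0.0, 2.0):
    print(t, round(tilted_empirical_risk(losses, t), 3))
```
With positive t the outlier dominates the objective, while negative t largely ignores it, which is the weighting behaviour described above.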
References.
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\ h: X \\to Y"
},
{
"math_id": 3,
"text": "y \\in Y"
},
{
"math_id": 4,
"text": "x \\in X"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\ (x_1, y_1), \\ldots, (x_n, y_n)"
},
{
"math_id": 7,
"text": "x_i \\in X"
},
{
"math_id": 8,
"text": "y_i \\in Y"
},
{
"math_id": 9,
"text": " h(x_i)"
},
{
"math_id": 10,
"text": "P(x, y)"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "P(y | x)"
},
{
"math_id": 14,
"text": "L(\\hat{y}, y)"
},
{
"math_id": 15,
"text": "\\hat{y}"
},
{
"math_id": 16,
"text": "h(x)"
},
{
"math_id": 17,
"text": "R(h) = \\mathbf{E}[L(h(x), y)] = \\int L(h(x), y)\\,dP(x, y)."
},
{
"math_id": 18,
"text": "L(\\hat{y}, y) = \\begin{cases} 1 & \\mbox{ if }\\quad \\hat{y} \\ne y \\\\ 0 & \\mbox{ if }\\quad \\hat{y} = y \\end{cases}"
},
{
"math_id": 19,
"text": " h^*"
},
{
"math_id": 20,
"text": "\\mathcal{H}"
},
{
"math_id": 21,
"text": "R(h)"
},
{
"math_id": 22,
"text": "h^* = \\underset{h \\in \\mathcal{H}}{\\operatorname{arg\\, min}}\\, {R(h)}."
},
{
"math_id": 23,
"text": "\\! R_\\text{emp}(h) = \\frac{1}{n} \\sum_{i=1}^n L(h(x_i), y_i)."
},
{
"math_id": 24,
"text": "\\hat{h}"
},
{
"math_id": 25,
"text": "\\mathcal H"
},
{
"math_id": 26,
"text": "\\hat{h} = \\underset{h \\in \\mathcal{H}}{\\operatorname{arg\\, min}}\\, R_{\\text{emp}}(h)."
},
{
"math_id": 27,
"text": "\\phi_n"
},
{
"math_id": 28,
"text": "\\phi^*"
},
{
"math_id": 29,
"text": "L"
},
{
"math_id": 30,
"text": "\\mathcal C"
},
{
"math_id": 31,
"text": "\\mathcal S(\\mathcal C, n)"
},
{
"math_id": 32,
"text": "\\epsilon > 0"
},
{
"math_id": 33,
"text": " \\mathbb P \\left (L(\\phi_n) - L(\\phi^*) \\right ) \\leq \\mathcal 8S(\\mathcal C, n) \\exp\\{-n\\epsilon^2 / 32\\} "
},
{
"math_id": 34,
"text": "(X, Y)"
},
{
"math_id": 35,
"text": "L^* =0"
},
{
"math_id": 36,
"text": "\\mathbb E L_n \\geq 1/2 - \\epsilon."
},
{
"math_id": 37,
"text": "a_i"
},
{
"math_id": 38,
"text": " \\mathbb E L_n \\geq a_i"
}
] |
https://en.wikipedia.org/wiki?curid=1455062
|
14552970
|
Stranski–Krastanov growth
|
Stranski–Krastanov growth (SK growth, also Stransky–Krastanov or 'Stranski–Krastanow') is one of the three primary modes by which thin films grow epitaxially at a crystal surface or interface. Also known as 'layer-plus-island growth', the SK mode follows a two step process: initially, complete films of adsorbates, up to several monolayers thick, grow in a layer-by-layer fashion on a crystal substrate. Beyond a critical layer thickness, which depends on strain and the chemical potential of the deposited film, growth continues through the nucleation and coalescence of adsorbate 'islands'. This growth mechanism was first noted by Ivan Stranski and Lyubomir Krastanov in 1938. It wasn't until 1958 however, in a seminal work by Ernst Bauer published in "Zeitschrift für Kristallographie", that the SK, Volmer–Weber, and Frank–van der Merwe mechanisms were systematically classified as the primary thin-film growth processes. Since then, SK growth has been the subject of intense investigation, not only to better understand the complex thermodynamics and kinetics at the core of thin-film formation, but also as a route to fabricating novel nanostructures for application in the microelectronics industry.
Modes of thin-film growth.
The growth of epitaxial (homogeneous or heterogeneous) thin films on a single crystal surface depends critically on the interaction strength between adatoms and the surface. While it is possible to grow epilayers from a liquid solution, most epitaxial growth occurs via a vapor phase technique such as molecular beam epitaxy (MBE). In Volmer–Weber (VW) growth, adatom–adatom interactions are stronger than those of the adatom with the surface, leading to the formation of three-dimensional adatom clusters or islands. Growth of these clusters, along with coarsening, will cause rough multi-layer films to grow on the substrate surface. Antithetically, during Frank–van der Merwe (FM) growth, adatoms attach preferentially to surface sites resulting in atomically smooth, fully formed layers. This layer-by-layer growth is two-dimensional, indicating that complete films form prior to growth of subsequent layers. Stranski–Krastanov growth is an intermediary process characterized by both 2D layer and 3D island growth. Transition from the layer-by-layer to island-based growth occurs at a critical layer thickness which is highly dependent on the chemical and physical properties, such as surface energies and lattice parameters, of the substrate and film. Figure 1 is a schematic representation of the three main growth modes for various surface coverages.
Determining the mechanism by which a thin film grows requires consideration of the chemical potentials of the first few deposited layers. A model for the layer chemical potential per atom has been proposed by Markov as:
formula_0
where formula_1 is the bulk chemical potential of the adsorbate material, formula_2 is the desorption energy of an adsorbate atom from a wetting layer of the same material, formula_3 the desorption energy of an adsorbate atom from the substrate, formula_4 is the per atom misfit dislocation energy, and formula_5 the per atom homogeneous strain energy. In general, the values of formula_2, formula_3, formula_4, and formula_5 depend in a complex way on the thickness of the growing layers and lattice misfit between the substrate and adsorbate film. In the limit of small strains, formula_6, the criterion for a film growth mode is dependent on formula_7.
SK growth can be described by both of these inequalities. While initial film growth follows an FM mechanism, i.e. positive differential μ, nontrivial amounts of strain energy accumulate in the deposited layers. At a critical thickness, this strain induces a sign reversal in the chemical potential, i.e. negative differential μ, leading to a switch in the growth mode. At this point it is energetically favorable to nucleate islands and further growth occurs by a VW type mechanism. A thermodynamic criterion for layer growth similar to the one presented above can be obtained using a force balance of surface tensions and contact angle.
Since the formation of wetting layers occurs in a commensurate fashion at a crystal surface, there is often an associated misfit between the film and the substrate due to the different lattice parameters of each material. Attachment of the thinner film to the thicker substrate induces a misfit strain at the interface given by formula_11. Here formula_12 and formula_13 are the film and substrate lattice constants, respectively. As the wetting layer thickens, the associated strain energy increases rapidly. In order to relieve the strain, island formation can occur in either a dislocated or coherent fashion. In dislocated islands, strain relief arises by forming interfacial misfit dislocations. The reduction in strain energy accommodated by introducing a dislocation is generally greater than the concomitant cost of increased surface energy associated with creating the clusters. The thickness of the wetting layer at which island nucleation initiates, called the critical thickness formula_10, is strongly dependent on the lattice mismatch between the film and substrate, with a greater mismatch leading to smaller critical thicknesses. Values of formula_10 can range from submonolayer coverage up to several monolayers. Figure 2 illustrates a dislocated island during SK growth after reaching a critical layer height. A pure edge dislocation is shown at the island interface to illustrate the relieved structure of the cluster.
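As a quick numerical illustration, assuming the common definition of the misfit strain as (a_film - a_substrate) / a_substrate, the short sketch below evaluates the misfit for two material systems frequently grown in the SK mode; the lattice constants are approximate room-temperature literature values used only for illustration.
```python
# Misfit strain for SK-grown systems, assuming the common definition
# epsilon = (a_film - a_substrate) / a_substrate.
# Lattice constants (angstroms) are approximate literature values, for illustration only.
LATTICE = {"Si": 5.431, "Ge": 5.658, "GaAs": 5.653, "InAs": 6.058}

def misfit_strain(film, substrate):
    a_f, a_s = LATTICE[film], LATTICE[substrate]
    return (a_f - a_s) / a_s

for film, sub in [("Ge", "Si"), ("InAs", "GaAs")]:
    print(f"{film} on {sub}: misfit ~ {100 * misfit_strain(film, sub):.1f}%")
# Roughly 4% for Ge/Si and 7% for InAs/GaAs; a larger misfit implies a smaller critical thickness.
```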
In some cases, most notably the Si/Ge system, nanoscale dislocation-free islands can be formed during SK growth by introducing undulations into the near surface layers of the substrate. These regions of local curvature serve to elastically deform both the substrate and island, relieving accumulated strain and bringing the wetting layer and island lattice constant closer to its bulk value. This elastic instability at formula_10 is known as the Grinfeld instability (formerly Asaro–Tiller–Grinfeld; ATG). The resulting islands are "coherent" and defect-free, garnering them significant interest for use in nanoscale electronic and optoelectronic devices. Such applications are discussed briefly later. A schematic of the resulting epitaxial structure is shown in figure 3 which highlights the induced radius of curvature at the substrate surface and in the island. Finally, strain stabilization indicative of coherent SK growth decreases with decreasing inter-island separation. At large areal island densities (smaller spacing), curvature effects from neighboring clusters will cause dislocation loops to form leading to defected island creation.
Monitoring SK growth.
Wide beam techniques.
Analytical techniques such as Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), and reflection high energy electron diffraction (RHEED), have been extensively used to monitor SK growth. AES data obtained "in situ" during film growth in a number of model systems, such as Pd/W(100), Pb/Cu(110), Ag/W(110), and Ag/Fe(110), show characteristic segmented curves like those presented in figure 4. The height of the film Auger peaks, plotted as a function of surface coverage Θ, initially exhibits a straight line, which is indicative of AES data for FM growth. There is a clear break point at a critical adsorbate surface coverage followed by another linear segment at a reduced slope. The paired break point and shallow line slope is characteristic of island nucleation; a similar plot for FM growth would exhibit many such line and break pairs while a plot of the VW mode would be a single line of low slope. In some systems, reorganization of the 2D wetting layer results in decreasing AES peaks with increasing adsorbate coverage. Such situations arise when many adatoms are required to reach a critical nucleus size on the surface and at nucleation the resulting adsorbed layer constitutes a significant fraction of a monolayer. After nucleation, metastable adatoms on the surface are incorporated into the nuclei, causing the Auger signal to fall. This phenomenon is particularly evident for deposits on a molybdenum substrate.
The evolution of island formation during SK transitions has also been successfully measured using LEED and RHEED techniques. Diffraction data obtained via various LEED experiments have been effectively used in conjunction with AES to measure the critical layer thickness at the onset of island formation. In addition, RHEED oscillations have proven very sensitive to the layer-to-island transition during SK growth, with the diffraction data providing detailed crystallographic information about the nucleated islands. By following the time dependence of LEED, RHEED, and AES signals, extensive information on surface kinetics and thermodynamics has been gathered for a number of technologically relevant systems.
Microscopies.
Unlike the techniques presented in the last section, in which the probe size can be relatively large compared to island size, surface microscopies such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), scanning tunneling microscopy (STM), and atomic force microscopy (AFM) offer the opportunity for direct viewing of deposit/substrate combination events. The extreme magnifications afforded by these techniques, often down to the nanometer length scale, make them particularly applicable for visualizing the strongly 3D islands. UHV-SEM and TEM are routinely used to image island formation during SK growth, enabling a wide range of information to be gathered, ranging from island densities to equilibrium shapes. AFM and STM have become increasingly utilized to correlate island geometry to the surface morphology of the surrounding substrate and wetting layer. These visualization tools are often used to complement quantitative information gathered during wide-beam analyses.
Application to nanotechnology.
As mentioned previously, coherent island formation during SK growth has attracted increased interest as a means for fabricating epitaxial nanoscale structures, particularly quantum dots (QDs). Widely used quantum dots grown in the SK-growth-mode are based on the material combinations Si/Ge or InAs/GaAs. Significant effort has been spent developing methods to control island organization, density, and size on a substrate. Techniques such as surface dimpling with a pulsed laser and control over growth rate have been successfully applied to alter the onset of the SK transition or even suppress it altogether. The ability to control this transition either spatially or temporally enables manipulation of physical parameters of the nanostructures, like geometry and size, which, in turn, can alter their electronic or optoelectronic properties (i.e. band gap). For example, Schwarz–Selinger, "et al." have used surface dimpling to create surface miscuts on Si that provide preferential Ge island nucleation sites surrounded by a denuded zone. In a similar fashion, lithographically patterned substrates have been used as nucleation templates for SiGe clusters. Several studies have also shown that island geometries can be altered during SK growth by controlling substrate relief and growth rate. Bimodal size distributions of Ge islands on Si are a striking example of this phenomenon in which pyramidal and dome-shaped islands coexist after Ge growth on a textured Si substrate. Such ability to control the size, location, and shape of these structures could provide invaluable techniques for 'bottom-up' fabrication schemes of next-generation devices in the microelectronics industry.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu(n) = \\mu_\\infty +[\\varphi_a - \\varphi_a'(n) + \\varepsilon_d(n) + \\varepsilon_e(n)]"
},
{
"math_id": 1,
"text": "\\mu_\\infty"
},
{
"math_id": 2,
"text": "\\varphi_a"
},
{
"math_id": 3,
"text": "\\varphi_a'(n)"
},
{
"math_id": 4,
"text": "\\varepsilon_d(n)"
},
{
"math_id": 5,
"text": "\\varepsilon_e(n)"
},
{
"math_id": 6,
"text": "\\varepsilon_{d,e}(n) \\ll \\mu_\\infty"
},
{
"math_id": 7,
"text": "\\frac{d\\mu}{dn}"
},
{
"math_id": 8,
"text": "\\frac{d\\mu}{dn} < 0"
},
{
"math_id": 9,
"text": "\\frac{d\\mu}{dn} > 0"
},
{
"math_id": 10,
"text": "h_C"
},
{
"math_id": 11,
"text": "\\frac{a_{f} - a_{s}}{ a_{s}}"
},
{
"math_id": 12,
"text": "a_f"
},
{
"math_id": 13,
"text": "a_s"
}
] |
https://en.wikipedia.org/wiki?curid=14552970
|
14553158
|
Cylindrical harmonics
|
In mathematics, the cylindrical harmonics are a set of linearly independent functions that are solutions to Laplace's differential equation, formula_0, expressed in cylindrical coordinates, "ρ" (radial coordinate), "φ" (polar angle), and "z" (height). Each function "V""n"("k") is the product of three terms, each depending on one coordinate alone. The "ρ"-dependent term is given by Bessel functions (which occasionally are also called cylindrical harmonics).
Definition.
Each function formula_1 of this basis consists of the product of three functions:
formula_2
where formula_3 are the cylindrical coordinates, and "n" and "k" constants that differentiate the members of the set. As a result of the superposition principle applied to Laplace's equation, very general solutions to Laplace's equation can be obtained by linear combinations of these functions.
Since all surfaces with constant ρ, φ and "z" are conicoid, Laplace's equation is separable in cylindrical coordinates. Using the technique of the separation of variables, a separated solution to Laplace's equation can be expressed as:
formula_4
and Laplace's equation, divided by "V", is written:
formula_5
The "Z" part of the equation is a function of "z" alone, and must therefore be equal to a constant:
formula_6
where "k" is, in general, a complex number. For a particular "k", the "Z"("z") function has two linearly independent solutions. If "k" is real they are:
formula_7
or by their behavior at infinity:
formula_8
If "k" is imaginary:
formula_9
or:
formula_10
It can be seen that the "Z"("k","z") functions are the kernels of the Fourier transform or Laplace transform of the "Z"("z") function and so "k" may be a discrete variable for periodic boundary conditions, or it may be a continuous variable for non-periodic boundary conditions.
Substituting formula_11 for formula_12 , Laplace's equation may now be written:
formula_13
Multiplying by formula_14, we may now separate the "P" and Φ functions and introduce another constant ("n") to obtain:
formula_15
formula_16
Since formula_17 is periodic, we may take "n" to be a non-negative integer; accordingly, the constants in formula_18 are subscripted with "n". Real solutions for formula_18 are
formula_19
or, equivalently:
formula_20
The differential equation for formula_21 is a form of Bessel's equation.
If "k" is zero, but "n" is not, the solutions are:
formula_22
If both k and n are zero, the solutions are:
formula_23
If "k" is a real number we may write a real solution as:
formula_24
where formula_25 and formula_26 are ordinary Bessel functions.
If "k" is an imaginary number, we may write a real solution as:
formula_27
where formula_28 and formula_29 are modified Bessel functions.
The cylindrical harmonics for (k,n) are now the product of these solutions and the general solution to Laplace's equation is given by a linear combination of these solutions:
formula_30
where the formula_31 are constants with respect to the cylindrical coordinates and the limits of the summation and integration are determined by the boundary conditions of the problem. Note that the integral may be replaced by a sum for appropriate boundary conditions. The orthogonality of the formula_32 is often very useful when finding a solution to a particular problem. The formula_33 and formula_34 functions are essentially Fourier or Laplace expansions, and form a set of orthogonal functions. When formula_35 is simply formula_36 , the orthogonality of formula_37, along with the orthogonality relationships of formula_33 and formula_34 allow the constants to be determined.
If formula_38 is the sequence of the positive zeros of formula_37 then:
formula_39
In solving problems, the space may be divided into any number of pieces, as long as the values of the potential and its derivative match across a boundary which contains no sources.
Example: Point source inside a conducting cylindrical tube.
As an example, consider the problem of determining the potential of a unit source located at formula_40 inside a conducting cylindrical tube (e.g. an empty tin can) which is bounded above and below by the planes formula_41 and formula_42 and on the sides by the cylinder formula_43. (In MKS units, we will assume formula_44). Since the potential is bounded by the planes on the "z" axis, the "Z(k,z)" function can be taken to be periodic. Since the potential must remain finite on the axis, we take the formula_35 function to be the ordinary Bessel function formula_36, and "k" must be chosen so that one of the zeroes of formula_36 lands on the bounding cylinder. For the measurement point below the source point on the "z" axis, the potential will be:
formula_45
where formula_46 is the "r"-th zero of formula_25 and, from the orthogonality relationships for each of the functions:
formula_47
Above the source point:
formula_48
formula_49
It is clear that when formula_43 or formula_50, the above function is zero. It can also be easily shown that the two functions match in value and in the value of their first derivatives at formula_51.
Point source inside cylinder.
Removing the plane ends (i.e. taking the limit as L approaches infinity) gives the field of the point source inside a conducting cylinder:
formula_52
formula_53
Point source in open space.
As the radius of the cylinder ("a") approaches infinity, the sum over the zeroes of "J""n"("z") becomes an integral, and we have the field of a point source in infinite space:
formula_54
formula_55
and "R" is the distance from the point source to the measurement point:
formula_56
Point source in open space at origin.
Finally, when the point source is at the origin, formula_57
formula_58
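The last identity lends itself to a quick numerical check. The following C sketch is a minimal illustration: it assumes a POSIX math library providing the Bessel function j0(), truncates the integral at an arbitrary upper limit, and compares a trapezoidal approximation with the closed form at an arbitrary test point.
#include <stdio.h>
#include <math.h>   /* j0() is POSIX, not ISO C; link with -lm */
int main(void)
{
    const double rho = 1.3, z = 0.7;   /* arbitrary test point            */
    const double kmax = 60.0;          /* truncation of the k integral    */
    const int    n = 200000;           /* number of trapezoids            */
    const double dk = kmax / n;
    double sum = 0.0;
    for (int i = 0; i <= n; ++i) {
        double k = i * dk;
        double f = j0(k * rho) * exp(-k * fabs(z));
        sum += (i == 0 || i == n) ? 0.5 * f : f;   /* trapezoidal weights */
    }
    sum *= dk;
    printf("integral          = %.6f\n", sum);
    printf("1/sqrt(rho^2+z^2) = %.6f\n", 1.0 / sqrt(rho * rho + z * z));
    return 0;
}
The two printed values agree to the accuracy of the quadrature, as the identity requires.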
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\nabla^2 V = 0"
},
{
"math_id": 1,
"text": "V_n(k)"
},
{
"math_id": 2,
"text": "V_n(k;\\rho,\\varphi,z)=P_n(k,\\rho)\\Phi_n(\\varphi)Z(k,z)\\,"
},
{
"math_id": 3,
"text": "(\\rho,\\varphi,z)"
},
{
"math_id": 4,
"text": "V=P(\\rho)\\,\\Phi(\\varphi)\\,Z(z)"
},
{
"math_id": 5,
"text": "\n\\frac{\\ddot{P}}{P}+\\frac{1}{\\rho}\\,\\frac{\\dot{P}}{P}+\\frac{1}{\\rho^2}\\,\\frac{\\ddot{\\Phi}}{\\Phi}+\\frac{\\ddot{Z}}{Z}=0\n"
},
{
"math_id": 6,
"text": "\\frac{\\ddot{Z}}{Z}=k^2"
},
{
"math_id": 7,
"text": "Z(k,z)=\\cosh(kz)\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,\\sinh(kz)\\,"
},
{
"math_id": 8,
"text": "Z(k,z)=e^{kz}\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,e^{-kz}\\,"
},
{
"math_id": 9,
"text": "Z(k,z)=\\cos(|k|z)\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,\\sin(|k|z)\\,"
},
{
"math_id": 10,
"text": "Z(k,z)=e^{i|k|z}\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,e^{-i|k|z}\\,"
},
{
"math_id": 11,
"text": "k^2"
},
{
"math_id": 12,
"text": "\\ddot{Z}/Z"
},
{
"math_id": 13,
"text": "\n\\frac{\\ddot{P}}{P}+\\frac{1}{\\rho}\\,\\frac{\\dot{P}}{P}+\\frac{1}{\\rho^2}\\frac{\\ddot{\\Phi}}{\\Phi}+k^2=0\n"
},
{
"math_id": 14,
"text": "\\rho^2"
},
{
"math_id": 15,
"text": "\\frac{\\ddot{\\Phi}}{\\Phi} =-n^2"
},
{
"math_id": 16,
"text": "\\rho^2\\frac{\\ddot{P}}{P}+\\rho\\frac{\\dot{P}}{P}+k^2\\rho^2=n^2"
},
{
"math_id": 17,
"text": "\\varphi"
},
{
"math_id": 18,
"text": "\\Phi(\\varphi)"
},
{
"math_id": 19,
"text": "\\Phi_n=\\cos(n\\varphi)\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,\\sin(n\\varphi)"
},
{
"math_id": 20,
"text": "\\Phi_n=e^{in\\varphi}\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,e^{-in\\varphi}"
},
{
"math_id": 21,
"text": "\\rho"
},
{
"math_id": 22,
"text": "P_n(0,\\rho)=\\rho^n\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,\\rho^{-n}\\,"
},
{
"math_id": 23,
"text": "P_0(0,\\rho)=\\ln\\rho\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,1\\,"
},
{
"math_id": 24,
"text": "P_n(k,\\rho)=J_n(k\\rho)\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,Y_n(k\\rho)\\,"
},
{
"math_id": 25,
"text": "J_n(z)"
},
{
"math_id": 26,
"text": "Y_n(z)"
},
{
"math_id": 27,
"text": "P_n(k,\\rho)=I_n(|k|\\rho)\\,\\,\\,\\,\\,\\,\\mathrm{or}\\,\\,\\,\\,\\,\\,K_n(|k|\\rho)\\,"
},
{
"math_id": 28,
"text": "I_n(z)"
},
{
"math_id": 29,
"text": "K_n(z)"
},
{
"math_id": 30,
"text": "V(\\rho,\\varphi,z)=\\sum_n \\int d\\left|k\\right|\\,\\, A_n(k) P_n(k,\\rho) \\Phi_n(\\varphi) Z(k,z)\\,"
},
{
"math_id": 31,
"text": "A_n(k)"
},
{
"math_id": 32,
"text": "J_n(x)"
},
{
"math_id": 33,
"text": "\\Phi_n(\\varphi)"
},
{
"math_id": 34,
"text": "Z(k,z)"
},
{
"math_id": 35,
"text": "P_n(k\\rho)"
},
{
"math_id": 36,
"text": "J_n(k\\rho)"
},
{
"math_id": 37,
"text": "J_n"
},
{
"math_id": 38,
"text": "(x)_k"
},
{
"math_id": 39,
"text": "\\int_0^1 J_n(x_k\\rho)J_n(x_k'\\rho)\\rho\\,d\\rho = \\frac{1}{2}J_{n+1}(x_k)^2\\delta_{kk'}"
},
{
"math_id": 40,
"text": "(\\rho_0,\\varphi_0,z_0)"
},
{
"math_id": 41,
"text": "z=-L"
},
{
"math_id": 42,
"text": "z=L"
},
{
"math_id": 43,
"text": "\\rho=a"
},
{
"math_id": 44,
"text": "q/4\\pi\\epsilon_0=1"
},
{
"math_id": 45,
"text": "V(\\rho,\\varphi,z)=\\sum_{n=0}^\\infty \\sum_{r=0}^\\infty\\, A_{nr} J_n(k_{nr}\\rho)\\cos(n(\\varphi-\\varphi_0))\\sinh(k_{nr}(L+z))\\,\\,\\,\\,\\,z\\le z_0"
},
{
"math_id": 46,
"text": "k_{nr}a"
},
{
"math_id": 47,
"text": "A_{nr}=\\frac{4(2-\\delta_{n0})}{a^2}\\,\\,\\frac{\\sinh k_{nr}(L-z_0)}{\\sinh 2k_{nr}L}\\,\\,\\frac{J_n(k_{nr}\\rho_0)}{k_{nr}[J_{n+1}(k_{nr}a)]^2}\\,"
},
{
"math_id": 48,
"text": "V(\\rho,\\varphi,z)=\\sum_{n=0}^\\infty \\sum_{r=0}^\\infty\\, A_{nr} J_n(k_{nr}\\rho)\\cos(n(\\varphi-\\varphi_0))\\sinh(k_{nr}(L-z))\\,\\,\\,\\,\\,z\\ge z_0"
},
{
"math_id": 49,
"text": "A_{nr}=\\frac{4(2-\\delta_{n0})}{a^2}\\,\\,\\frac{\\sinh k_{nr}(L+z_0)}{\\sinh 2k_{nr}L}\\,\\,\\frac{J_n(k_{nr}\\rho_0)}{k_{nr}[J_{n+1}(k_{nr}a)]^2}.\\,"
},
{
"math_id": 50,
"text": "|z|=L"
},
{
"math_id": 51,
"text": "z=z_0"
},
{
"math_id": 52,
"text": "V(\\rho,\\varphi,z)=\\sum_{n=0}^\\infty \\sum_{r=0}^\\infty\\, A_{nr} J_n(k_{nr}\\rho)\\cos(n(\\varphi-\\varphi_0))e^{-k_{nr}|z-z_0|}"
},
{
"math_id": 53,
"text": "A_{nr}=\\frac{2(2-\\delta_{n0})}{a^2}\\,\\,\\frac{J_n(k_{nr}\\rho_0)}{k_{nr}[J_{n+1}(k_{nr}a)]^2}.\\,"
},
{
"math_id": 54,
"text": "V(\\rho,\\varphi,z)\n=\\frac{1}{R}\n=\\sum_{n=0}^\\infty \\int_0^\\infty d\\left|k\\right|\\, A_n(k) J_n(k\\rho)\\cos(n(\\varphi-\\varphi_0))e^{-k|z-z_0|}\n"
},
{
"math_id": 55,
"text": "A_n(k)=(2-\\delta_{n0})J_n(k\\rho_0)\\,"
},
{
"math_id": 56,
"text": "R=\\sqrt{(z-z_0)^2+\\rho^2+\\rho_0^2-2\\rho\\rho_0\\cos(\\varphi-\\varphi_0)}.\\,"
},
{
"math_id": 57,
"text": "\\rho_0=z_0=0"
},
{
"math_id": 58,
"text": "V(\\rho,\\varphi,z)=\\frac{1}{\\sqrt{\\rho^2+z^2}}=\\int_0^\\infty J_0(k\\rho)e^{-k|z|}\\,dk."
}
] |
https://en.wikipedia.org/wiki?curid=14553158
|
1455348
|
Sigma approximation
|
In mathematics, σ-approximation adjusts a Fourier summation to greatly reduce the Gibbs phenomenon, which would otherwise occur at discontinuities.
A σ-approximated summation for a series of period "T" can be written as follows:
formula_0
in terms of the normalized sinc function
formula_1
The term
formula_2
is the Lanczos σ factor, which is responsible for eliminating most of the Gibbs phenomenon. It does not eliminate it entirely, but the factor can be squared or even cubed to further attenuate the Gibbs phenomenon in the most extreme cases.
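As an illustration, the following C sketch compares an ordinary Fourier partial sum with its σ-approximated counterpart for a square wave of period 2π near the discontinuity; the number of terms and the evaluation point are arbitrary choices made only for the demonstration.
#include <stdio.h>
#include <math.h>
/* Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1. */
static double sinc(double x)
{
    const double PI = acos(-1.0);
    return (x == 0.0) ? 1.0 : sin(PI * x) / (PI * x);
}
int main(void)
{
    const double PI = acos(-1.0);
    const int m = 32;                 /* number of terms (arbitrary)      */
    const double theta = PI / m;      /* point near the jump at theta = 0 */
    double plain = 0.0, lanczos = 0.0;
    /* Square wave of period 2*pi: b_k = 4/(pi k) for odd k, 0 otherwise. */
    for (int k = 1; k < m; ++k) {
        double bk = (k % 2) ? 4.0 / (PI * k) : 0.0;
        double term = bk * sin(k * theta);
        plain   += term;                          /* shows the Gibbs overshoot    */
        lanczos += sinc((double)k / m) * term;    /* overshoot largely suppressed */
    }
    printf("plain partial sum      = %.4f\n", plain);
    printf("sigma-approximated sum = %.4f\n", lanczos);
    return 0;
}
The plain sum overshoots the true value of 1 near the jump, while the σ-approximated sum stays close to the jump value without the overshoot.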
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "s(\\theta) = \\frac{1}{2} a_0 + \\sum_{k=1}^{m-1} \\operatorname{sinc} \\frac{k}{m} \\cdot \\left[a_{k} \\cos \\left( \\frac{2 \\pi k}{T} \\theta \\right) + b_k \\sin \\left( \\frac{2 \\pi k}{T} \\theta \\right) \\right],"
},
{
"math_id": 1,
"text": " \\operatorname{sinc} x = \\frac{\\sin \\pi x}{\\pi x}."
},
{
"math_id": 2,
"text": "\\operatorname{sinc} \\frac{k}{m}"
}
] |
https://en.wikipedia.org/wiki?curid=1455348
|
1455358
|
Capnography
|
Monitoring of the concentration of carbon dioxide in respiratory gases
Capnography is the monitoring of the concentration or partial pressure of carbon dioxide (CO2) in the respiratory gases. Its main development has been as a monitoring tool for use during anesthesia and intensive care. It is usually presented as a graph of CO2 (measured in kilopascals, "kPa" or millimeters of mercury, "mmHg") plotted against time, or, less commonly, but more usefully, expired volume (known as volumetric capnography). The plot may also show the inspired CO2, which is of interest when rebreathing systems are being used. When the measurement is taken at the end of a breath (exhaling), it is called "end tidal" CO2 (PETCO2).
The capnogram is a direct monitor of the inhaled and exhaled concentration or partial pressure of CO2, and an indirect monitor of the CO2 partial pressure in the arterial blood. In healthy individuals, the difference between arterial blood and expired gas CO2 partial pressures is very small (normal difference 4-5 mmHg). In the presence of most forms of lung disease, and some forms of congenital heart disease (the cyanotic lesions) the difference between arterial blood and expired gas increases which can be an indication of new pathology or change in the cardiovascular-ventilation system.
Medical use.
Oxygenation and capnography, although related, remain distinct elements in the physiology of respiration. Ventilation refers to the mechanical process by which the lungs expand and exchange volumes of gas, while respiration further describes the exchange of gases (mainly CO2 and O2) at the level of the alveoli. The process of respiration can be divided into two main functions: elimination of CO2 waste and replenishing tissues with fresh O2. Oxygenation (typically measured via pulse oximetry) measures the latter portion of this system. Capnography measures the elimination of CO2, which may be of greater clinical usefulness than oxygenation status.
During the normal cycle of respiration, a single breath can be divided into two phases: inspiration and expiration. At the beginning of inspiration, the lungs expand and CO2-free gas fills the lungs. As the alveoli are filled with this new gas, the concentration of CO2 that fills the alveoli is dependent on the ventilation of the alveoli and the perfusion (blood flow) that is delivering the CO2 for exchange. Once expiration begins to occur, the lung volume decreases as air is forced out of the respiratory tract. The volume of CO2 that is exhaled at the end of exhalation is generated as a by-product of metabolism in tissues throughout the body. The delivery of CO2 to the alveoli for exhalation depends on an intact cardiovascular system to ensure adequate blood flow from the tissue to the alveoli. If cardiac output (the amount of blood that is pumped out of the heart) is decreased, the ability to transport CO2 is also decreased, which is reflected in a decreased expired amount of CO2. The relationship of cardiac output and end tidal CO2 is linear, such that as cardiac output increases or decreases, the amount of expired CO2 changes in the same manner. Therefore, the monitoring of end tidal CO2 can provide vital information on the integrity of the cardiovascular system, specifically how well the heart is able to pump blood.
The amount of CO2 that is measured during each breath requires an intact cardiovascular system to deliver the CO2 to the alveoli, the functional units of the lungs. During phase I of expiration, the exhaled gas comes from a portion of the airway that is not involved in gas exchange, called dead space. Phase II of expiration is when the CO2 within the lungs is forced up the respiratory tract on its way out of the body, which causes mixing of the air from the dead space with the air from the functional alveoli responsible for gas exchange. Phase III is the final portion of expiration, which reflects CO2 only from the alveoli and not the dead space. These three phases are important to understand in clinical scenarios, since a change in the shape and absolute values of the capnogram can indicate respiratory and/or cardiovascular compromise.
Anesthesia.
During anesthesia, there is interplay between two components: the patient and the anesthesia administration device (which is usually a breathing circuit and a ventilator). The critical connection between the two components is either an endotracheal tube or a mask, and CO2 is typically monitored at this junction. Capnography directly reflects the elimination of CO2 by the lungs to the anesthesia device. Indirectly, it reflects the production of CO2 by tissues and the circulatory transport of CO2 to the lungs.
When expired CO2 is related to expired volume rather than time, the area beneath the curve represents the volume of CO2 in the breath, and thus over the course of a minute, this method can yield the CO2 per minute elimination, an important measure of metabolism. Sudden changes in CO2 elimination during lung or heart surgery usually imply important changes in cardiorespiratory function.
Capnography has been shown to be more effective than clinical judgement alone in the early detection of adverse respiratory events such as hypoventilation, esophageal intubation and circuit disconnection; thus allowing patient injury to be prevented. During procedures done under sedation, capnography provides more useful information, e.g. on the frequency and regularity of ventilation, than pulse oximetry.
Capnography provides a rapid and reliable method to detect life-threatening conditions (malposition of tracheal tubes, unsuspected ventilatory failure, circulatory failure and defective breathing circuits) and to circumvent potentially irreversible patient injury.
Capnography and pulse oximetry together could have helped in the prevention of 93% of avoidable anesthesia mishaps according to an ASA (American Society of Anesthesiologists) closed claim study.
Emergency medical services.
Capnography is increasingly being used by EMS personnel to aid in their assessment and treatment of patients in the prehospital environment. These uses include verifying and monitoring the position of an endotracheal tube or a blind insertion airway device. A properly positioned tube in the trachea guards the patient's airway and enables the paramedic to breathe for the patient. A misplaced tube in the esophagus can lead to the patient's death if it goes undetected.
A study in the March 2005 "Annals of Emergency Medicine," comparing field intubations that used continuous capnography to confirm intubations versus non-use showed zero unrecognized misplaced intubations in the monitoring group versus 23% misplaced tubes in the unmonitored group. The American Heart Association (AHA) affirmed the importance of using capnography to verify tube placement in their 2005 CPR and Emergency Cardiovascular Care Guidelines.
The AHA also notes in their new guidelines that capnography, which indirectly measures cardiac output, can also be used to monitor the effectiveness of CPR and as an early indication of return of spontaneous circulation (ROSC). Studies have shown that when a person doing CPR tires, the patient's end-tidal CO2 (PETCO2, the level of carbon dioxide released at the end of expiration) falls, and then rises when a fresh rescuer takes over. Other studies have shown when a patient experiences return of spontaneous circulation, the first indication is often a sudden rise in the PETCO2 as the rush of circulation washes untransported CO2 from the tissues. Likewise, a sudden drop in PETCO2 may indicate the patient has lost pulses and CPR may need to be initiated.
Paramedics are also now beginning to monitor the PETCO2 status of nonintubated patients by using a special nasal cannula that collects the carbon dioxide. A high PETCO2 reading in a patient with altered mental status or severe difficulty breathing may indicate hypoventilation and a possible need for the patient to be intubated. Low PETCO2 readings on patients may indicate hyperventilation.
Capnography, because it provides a breath-by-breath measurement of a patient's ventilation, can quickly reveal a worsening trend in a patient's condition by providing paramedics with an early warning of changes in a patient's respiratory status. Compared with oxygenation as measured by pulse oximetry, capnography can address several shortcomings and provide a more accurate reflection of cardiovascular integrity. One shortcoming of measuring pulse oximetry alone is that administration of supplemental oxygen (i.e. via nasal cannula) can delay desaturation in a patient who has stopped breathing, therefore delaying medical intervention. Capnography provides a rapid way to directly assess ventilation status and indirectly assess cardiac function. Clinical studies are expected to uncover further uses of capnography in asthma, congestive heart failure, diabetes, circulatory shock, pulmonary embolus, acidosis, and other conditions, with potential implications for the prehospital use of capnography.
Registered nurses.
Registered nurses, but more so RRTs (respiratory therapists), in critical care settings may use capnography to determine if a nasogastric tube, which is used for feeding, has been placed in the trachea as opposed to the esophagus. Usually a patient will cough or gag if the tube is misplaced, but most patients in critical care settings are sedated or comatose. If a nasogastric tube is accidentally placed in the trachea instead of the esophagus, the tube feedings will go into the lungs, which is a life-threatening situation. If the monitor displays typical CO2 waveforms then placement should be confirmed.
Diagnostic usage.
Capnography provides information about CO2 production, pulmonary (lung) perfusion, alveolar ventilation, respiratory patterns, and elimination of CO2 from the anesthesia breathing circuit and ventilator. The shape of the curve is affected by some forms of lung disease; in general these are obstructive conditions such as bronchitis, emphysema and asthma, in which the mixing of gases within the lung is affected.
Conditions such as pulmonary embolism and congenital heart disease, which affect perfusion of the lung, do not, in themselves, affect the shape of the curve, but greatly affect the relationship between expired CO2 and arterial blood CO2. Capnography can also be used to measure carbon dioxide production, a measure of metabolism. Increased CO2 production is seen during fever and shivering. Reduced production is seen during anesthesia and hypothermia.
Working mechanism.
Capnographs work on the principle that CO2 is a polyatomic gas and therefore absorbs infrared radiation. A beam of infrared light is passed across the gas sample to fall on a sensor. The presence of CO2 in the gas leads to a reduction in the amount of light falling on the sensor, which changes the voltage in a circuit. The analysis is rapid and accurate, but the presence of nitrous oxide in the gas mix changes the infrared absorption via the phenomenon of collision broadening; this must be corrected for. Measuring the CO2 in human breath by its infrared absorptive power was established as a reliable technique by John Tyndall in 1864, though 19th and early 20th century devices were too cumbersome for everyday clinical use. Technologies have since improved to measure CO2 values nearly instantaneously, and capnography has become a standard practice in medical settings. There are currently two main types of CO2 sensors that are used in clinical practice: main-stream sensors and side-stream sensors. Both effectively serve the same function of quantifying the amount of CO2 that is being exhaled in each breath.
Capnogram model.
The capnogram waveform provides information about various respiratory and cardiac parameters. The capnogram double-exponential model attempts to quantitatively explain the relationship between respiratory parameters and the exhalatory segment of a capnogram waveform. According to the model, each exhalatory segment of capnogram waveform follows the analytical expression:
formula_0
where formula_1 is the CO2 partial pressure in the exhaled gas at time formula_2 after the start of exhalation, formula_3 sets the overall pressure scale of the curve, formula_4 is a dimensionless shape parameter, and formula_5 is a time constant of the exhalation.
In particular, this model explains the rounded "shark-fin" shape of the capnogram observed in patients with obstructive lung disease.
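A minimal numerical sketch of this expression is shown below; the function name and the parameter values are illustrative assumptions, not values drawn from patient data or from the cited model.
#include <stdio.h>
#include <math.h>
/* Double-exponential model for the exhalatory segment of a capnogram:
   p_A sets the pressure scale, alpha is a dimensionless shape parameter,
   tau is a time constant, t is time since the start of exhalation.      */
static double capnogram(double t, double p_A, double alpha, double tau)
{
    return p_A * (1.0 - exp(-alpha) * exp(alpha * exp(-t / tau)));
}
int main(void)
{
    const double p_A   = 40.0;   /* mmHg, illustrative  */
    const double alpha = 2.0;    /* illustrative        */
    const double tau   = 0.5;    /* seconds, illustrative */
    for (double t = 0.0; t <= 3.0; t += 0.5)
        printf("t = %.1f s   pCO2 = %.2f mmHg\n", t, capnogram(t, p_A, alpha, tau));
    return 0;
}
The curve starts at zero at the onset of exhalation and rises toward a rounded plateau, reproducing the qualitative shape described above.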
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p_D(t) = p_A (1 - e ^{-\\alpha}e^{\\alpha e^{-t/\\tau}})"
},
{
"math_id": 1,
"text": "p_D(t)"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "p_A"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\tau"
}
] |
https://en.wikipedia.org/wiki?curid=1455358
|
14554
|
Imaginary number
|
Square root of a non-positive real number
An imaginary number is the product of a real number and the imaginary unit i, which is defined by its property "i"2 = −1. The square of an imaginary number bi is −"b"2. For example, 5"i" is an imaginary number, and its square is −25. The number zero is considered to be both real and imaginary.
Originally coined in the 17th century by René Descartes as a derogatory term and regarded as fictitious or useless, the concept gained wide acceptance following the work of Leonhard Euler (in the 18th century) and Augustin-Louis Cauchy and Carl Friedrich Gauss (in the early 19th century).
An imaginary number "bi" can be added to a real number a to form a complex number of the form "a" + "bi", where the real numbers a and b are called, respectively, the "real part" and the "imaginary part" of the complex number.
History.
Although the Greek mathematician and engineer Heron of Alexandria is noted as the first to present a calculation involving the square root of a negative number, it was Rafael Bombelli who first set down the rules for multiplication of complex numbers in 1572. The concept had appeared in print earlier, such as in work by Gerolamo Cardano. At the time, imaginary numbers and negative numbers were poorly understood and were regarded by some as fictitious or useless, much as zero once was. Many other mathematicians were slow to adopt the use of imaginary numbers, including René Descartes, who wrote about them in his "La Géométrie" in which he coined the term "imaginary" and meant it to be derogatory. The use of imaginary numbers was not widely accepted until the work of Leonhard Euler (1707–1783) and Carl Friedrich Gauss (1777–1855). The geometric significance of complex numbers as points in a plane was first described by Caspar Wessel (1745–1818).
In 1843, William Rowan Hamilton extended the idea of an axis of imaginary numbers in the plane to a four-dimensional space of quaternion imaginaries in which three of the dimensions are analogous to the imaginary numbers in the complex field.
Geometric interpretation.
Geometrically, imaginary numbers are found on the vertical axis of the complex number plane, which allows them to be presented perpendicular to the real axis. One way of viewing imaginary numbers is to consider a standard number line positively increasing in magnitude to the right and negatively increasing in magnitude to the left. At 0 on the x-axis, a y-axis can be drawn with "positive" direction going up; "positive" imaginary numbers then increase in magnitude upwards, and "negative" imaginary numbers increase in magnitude downwards. This vertical axis is often called the "imaginary axis" and is denoted formula_0 formula_1 or ℑ.
In this representation, multiplication by i corresponds to a counterclockwise rotation of 90 degrees about the origin, which is a quarter of a circle. Multiplication by −"i" corresponds to a clockwise rotation of 90 degrees about the origin. Similarly, multiplying by a purely imaginary number bi, with b a real number, both causes a counterclockwise rotation about the origin by 90 degrees and scales the answer by a factor of b. When "b" < 0, this can instead be described as a clockwise rotation by 90 degrees and a scaling by |"b"|.
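This rotation property is easy to check with C's built-in complex arithmetic; the sample point below is arbitrary.
#include <stdio.h>
#include <complex.h>
int main(void)
{
    double complex z = 3.0 + 2.0 * I;   /* an arbitrary point (3, 2) */
    double complex w = I * z;           /* multiply by i             */
    /* (3 + 2i) * i = -2 + 3i: the point (3, 2) rotated 90 degrees
       counterclockwise about the origin.                            */
    printf("i*z = %.1f %+.1fi\n", creal(w), cimag(w));
    return 0;
}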
Square roots of negative numbers.
Care must be used when working with imaginary numbers that are expressed as the principal values of the square roots of negative numbers. For example, if x and y are both positive real numbers, the following chain of equalities appears reasonable at first glance:
formula_2
But the result is clearly nonsense. The step where the square root was broken apart was illegitimate. (See Mathematical fallacy.)
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "i \\mathbb{R},"
},
{
"math_id": 1,
"text": "\\mathbb{I},"
},
{
"math_id": 2,
"text": "\\textstyle\n\\sqrt{x \\cdot y \\vphantom{t}}\n=\\sqrt{(-x) \\cdot (-y)}\n\\mathrel{\\stackrel{\\text{ (fallacy) }}{=}} \\sqrt{-x\\vphantom{ty}} \\cdot \\sqrt{-y\\vphantom{ty}}\n= i\\sqrt{x\\vphantom{ty}} \\cdot i\\sqrt{y\\vphantom{ty}}\n= -\\sqrt{x \\cdot y \\vphantom{ty}}\\,.\n"
}
] |
https://en.wikipedia.org/wiki?curid=14554
|
1455471
|
Pseudotensor
|
Type of physical quantity
In physics and mathematics, a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation) but additionally changes sign under an orientation-reversing coordinate transformation (e.g. an improper rotation), which is a transformation that can be expressed as a proper rotation followed by reflection. This is a generalization of a pseudovector. To evaluate the sign of a tensor or pseudotensor, it has to be contracted with as many vectors as its rank, belonging to the space where the rotation is made, while keeping the tensor coordinates unaffected (differently from what one does in the case of a change of basis). Under an improper rotation, a pseudotensor and a proper tensor of the same rank acquire different signs, depending on whether the rank is even or odd. Sometimes inversion of the axes is used as an example of an improper rotation with which to examine the behaviour of a pseudotensor, but this works only if the dimension of the vector space is odd; otherwise inversion is a proper rotation without an additional reflection.
There is a second meaning for pseudotensor (and likewise for pseudovector), restricted to general relativity. Tensors obey strict transformation laws, but pseudotensors in this sense are not so constrained. Consequently, the form of a pseudotensor will, in general, change as the frame of reference is altered. An equation containing pseudotensors which holds in one frame will not necessarily hold in a different frame. This makes pseudotensors of limited relevance because equations in which they appear are not invariant in form.
Definition.
Two quite different mathematical objects are called a pseudotensor in different contexts.
The first context is essentially a tensor multiplied by an extra sign factor, such that the pseudotensor changes sign under reflections when a normal tensor does not. According to one definition, a pseudotensor P of the type formula_0 is a geometric object whose components in an arbitrary basis are enumerated by formula_1 indices and obey the transformation rule
formula_2
under a change of basis.
Here formula_3 are the components of the pseudotensor in the new and old bases, respectively, formula_4 is the transition matrix for the contravariant indices, formula_5 is the transition matrix for the covariant indices, and formula_6
This transformation rule differs from the rule for an ordinary tensor only by the presence of the factor formula_7
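A familiar low-rank illustration of the extra sign factor, not taken from the sources cited here, is the cross product of two vectors, which transforms as a pseudovector (a rank-1 pseudotensor). The following C sketch applies the inversion x → −x, an improper transformation in three dimensions with determinant −1, and shows that the cross product does not flip sign the way an ordinary vector would:
#include <stdio.h>
static void cross(const double a[3], const double b[3], double c[3])
{
    c[0] = a[1]*b[2] - a[2]*b[1];
    c[1] = a[2]*b[0] - a[0]*b[2];
    c[2] = a[0]*b[1] - a[1]*b[0];
}
int main(void)
{
    /* Arbitrary sample vectors. */
    double a[3] = {1.0, 2.0, 0.5}, b[3] = {-0.5, 1.0, 3.0};
    double c[3], ai[3], bi[3], ci[3];
    cross(a, b, c);                      /* c = a x b */
    /* Apply the inversion x -> -x (determinant -1) to both inputs. */
    for (int i = 0; i < 3; ++i) { ai[i] = -a[i]; bi[i] = -b[i]; }
    cross(ai, bi, ci);
    /* A true (polar) vector would be sent to -c; the cross product
       instead stays equal to c, picking up the extra factor
       sign(det) = -1 relative to an ordinary vector.               */
    printf("a x b       = (%5.2f, %5.2f, %5.2f)\n", c[0], c[1], c[2]);
    printf("(-a) x (-b) = (%5.2f, %5.2f, %5.2f)\n", ci[0], ci[1], ci[2]);
    return 0;
}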
The second context where the word "pseudotensor" is used is general relativity. In that theory, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor.
Examples.
On non-orientable manifolds, one cannot define a volume form globally due to the non-orientability, but one can define a volume element, which is formally a density, and may also be called a "pseudo-volume form", due to the additional sign twist (tensoring with the sign bundle). The volume element is a pseudotensor density according to the first definition.
A change of variables in multi-dimensional integration may be achieved through the incorporation of a factor of the absolute value of the determinant of the Jacobian matrix. The use of the absolute value introduces a sign change for improper coordinate transformations to compensate for the convention of keeping integration (volume) element positive; as such, an integrand is an example of a pseudotensor density according to the first definition.
The Christoffel symbols of an affine connection on a manifold can be thought of as the correction terms to the partial derivatives of a coordinate expression of a vector field with respect to the coordinates to render it the vector field's covariant derivative. While the affine connection itself doesn't depend on the choice of coordinates, its Christoffel symbols do, making them a pseudotensor quantity according to the second definition.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(p, q)"
},
{
"math_id": 1,
"text": "(p + q)"
},
{
"math_id": 2,
"text": "\\hat{P}^{i_1\\ldots i_q}_{\\,j_1\\ldots j_p} =\n(-1)^A A^{i_1} {}_{k_1}\\cdots A^{i_q} {}_{k_q}\nB^{l_1} {}_{j_1}\\cdots B^{l_p} {}_{j_p}\nP^{k_1\\ldots k_q}_{l_1\\ldots l_p}"
},
{
"math_id": 3,
"text": "\\hat{P}^{i_1 \\ldots i_q}_{\\,j_1 \\ldots j_p}, P^{k_1 \\ldots k_q}_{l_1 \\ldots l_p}"
},
{
"math_id": 4,
"text": "A^{i_q} {}_{k_q}"
},
{
"math_id": 5,
"text": "B^{l_p} {}_{j_p}"
},
{
"math_id": 6,
"text": "(-1)^A = \\mathrm{sign}\\left(\\det\\left(A^{i_q} {}_{k_q}\\right)\\right) = \\pm{1}."
},
{
"math_id": 7,
"text": "(-1)^A."
}
] |
https://en.wikipedia.org/wiki?curid=1455471
|
145555
|
XOR swap algorithm
|
Binary arithmetic algorithm
In computer programming, the exclusive or swap (sometimes shortened to XOR swap) is an algorithm that uses the exclusive or bitwise operation to swap the values of two variables without using the temporary variable which is normally required.
The algorithm is primarily a novelty and a way of demonstrating properties of the "exclusive or" operation. It is sometimes discussed as a program optimization, but there are almost no cases where swapping via "exclusive or" provides benefit over the standard, obvious technique.
The algorithm.
Conventional swapping requires the use of a temporary storage variable. Using the XOR swap algorithm, however, no temporary storage is needed. The algorithm is as follows:
X := Y XOR X; // XOR the values and store the result in X
Y := X XOR Y; // XOR the values and store the result in Y
X := Y XOR X; // XOR the values and store the result in X
Since XOR is a commutative operation, either X XOR Y or Y XOR X can be used interchangeably in any of the foregoing three lines. Note that on some architectures the first operand of the XOR instruction specifies the target location at which the result of the operation is stored, preventing this interchangeability. The algorithm typically corresponds to three machine-code instructions, represented by corresponding pseudocode and assembly instructions in the three rows of the following table:
In the above System/370 assembly code sample, R1 and R2 are distinct registers, and each operation leaves its result in the register named in the first argument. In the x86 assembly sample, the values X and Y are in registers eax and ebx (respectively), and the xor instruction places the result of the operation in the first register.
However, in the pseudocode or high-level language version or implementation, the algorithm fails if "x" and "y" use the same storage location, since the value stored in that location will be zeroed out by the first XOR instruction, and then remain zero; it will not be "swapped with itself". This is "not" the same as if "x" and "y" have the same values. The trouble only comes when "x" and "y" use the same storage location, in which case their values must already be equal. That is, if "x" and "y" use the same storage location, then the line:
X := X XOR Y
sets "x" to zero (because "x" = "y" so X XOR Y is zero) "and" sets "y" to zero (since it uses the same storage location), causing "x" and "y" to lose their original values.
Proof of correctness.
The binary operation XOR over bit strings of length formula_0 exhibits the following properties (where formula_1 denotes XOR):
L1 (commutativity): formula_2
L2 (associativity): formula_3
L3 (identity exists): there is a bit string, 0, of length formula_0 such that formula_4 for any formula_5
L4 (each element is its own inverse): for each formula_5, formula_6
Suppose that we have two distinct registers codice_0 and codice_1 as in the table below, with initial values "A" and "B" respectively. We perform the operations below in sequence, and reduce our results using the properties listed above.
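The reduction can also be checked mechanically. The following C sketch traces the three steps for two arbitrary sample values and asserts that the registers end up exchanged:
#include <assert.h>
#include <stdio.h>
int main(void)
{
    unsigned int A = 0xCAFEu, B = 0xBEEFu;   /* arbitrary initial values */
    unsigned int X = A, Y = B;
    X = Y ^ X;   /* X = B ^ A                                 */
    Y = X ^ Y;   /* Y = (B ^ A) ^ B = A   (by L1, L2, L4, L3) */
    X = Y ^ X;   /* X = A ^ (B ^ A) = B   (by L1, L2, L4, L3) */
    assert(X == B && Y == A);                /* values are swapped */
    printf("X = 0x%X, Y = 0x%X\n", X, Y);
    return 0;
}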
Linear algebra interpretation.
As XOR can be interpreted as binary addition and a pair of bits can be interpreted as a vector in a two-dimensional vector space over the field with two elements, the steps in the algorithm can be interpreted as multiplication by 2×2 matrices over the field with two elements. For simplicity, assume initially that "x" and "y" are each single bits, not bit vectors.
For example, the step:
X := X XOR Y
which also has the implicit:
Y := Y
corresponds to the matrix formula_7 as
formula_8
The sequence of operations is then expressed as:
formula_9
(working with binary values, so formula_10), which expresses the elementary matrix of switching two rows (or columns) in terms of the transvections (shears) of adding one element to the other.
To generalize to where X and Y are not single bits, but instead bit vectors of length "n", these 2×2 matrices are replaced by 2"n"×2"n" block matrices such as formula_11
These matrices are operating on "values," not on "variables" (with storage locations), hence this interpretation abstracts away from issues of storage location and the problem of both variables sharing the same storage location.
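The matrix identity above can be verified directly; the following C sketch multiplies the three shear matrices with arithmetic modulo 2 and prints the resulting permutation (swap) matrix:
#include <stdio.h>
int main(void)
{
    /* The three shears corresponding to the steps of the algorithm,
       with entries in GF(2) (arithmetic modulo 2).                  */
    int s1[2][2] = { {1, 1}, {0, 1} };
    int s2[2][2] = { {1, 0}, {1, 1} };
    int s3[2][2] = { {1, 1}, {0, 1} };
    int tmp[2][2] = {{0}}, prod[2][2] = {{0}};
    /* tmp = s1 * s2 (mod 2), then prod = tmp * s3 (mod 2). */
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                tmp[i][j] = (tmp[i][j] + s1[i][k] * s2[k][j]) % 2;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                prod[i][j] = (prod[i][j] + tmp[i][k] * s3[k][j]) % 2;
    /* Prints 0 1 / 1 0, the matrix that swaps the two components. */
    printf("%d %d\n%d %d\n", prod[0][0], prod[0][1], prod[1][0], prod[1][1]);
    return 0;
}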
Code example.
A C function that implements the XOR swap algorithm:
void XorSwap(int *x, int *y)
{
    if (x == y) return;
    *x ^= *y;
    *y ^= *x;
    *x ^= *y;
}
The code first checks if the addresses are distinct and uses a guard clause to exit the function early if they are equal. Without that check, if they were equal, the algorithm would fold to a triple codice_2 resulting in zero.
The XOR swap algorithm can also be defined with a macro:
#define XORSWAP_UNSAFE(a, b) \
  ((a) ^= (b), (b) ^= (a), \
   (a) ^= (b)) /* Doesn't work when a and b are the same object - assigns zero \
                  (0) to the object in that case */
#define XORSWAP(a, b) \
  ((&(a) == &(b)) ? (a) /* Check for distinct addresses */ \
                  : XORSWAP_UNSAFE(a, b))
Reasons for avoidance in practice.
On modern CPU architectures, the XOR technique can be slower than using a temporary variable to do swapping. At least on recent x86 CPUs, both by AMD and Intel, moving between registers regularly incurs zero latency. (This is called MOV-elimination.) Even if there is not any architectural register available to use, the codice_3 instruction will be at least as fast as the three XORs taken together. Another reason is that modern CPUs strive to execute instructions in parallel via instruction pipelines. In the XOR technique, the inputs to each operation depend on the results of the previous operation, so they must be executed in strictly sequential order, negating any benefits of instruction-level parallelism.
Aliasing.
The XOR swap is also complicated in practice by aliasing. If an attempt is made to XOR-swap the contents of some location with itself, the result is that the location is zeroed out and its value lost. Therefore, XOR swapping must not be used blindly in a high-level language if aliasing is possible. This issue does not apply if the technique is used in assembly to swap the contents of two registers.
Similar problems occur with call by name, as in Jensen's Device, where swapping codice_4 and codice_5 via a temporary variable yields incorrect results due to the arguments being related: swapping via codice_6 changes the value for codice_4 in the second statement, which then results in the incorrect codice_4 value for codice_5 in the third statement.
Variations.
The underlying principle of the XOR swap algorithm can be applied to any operation meeting criteria L1 through L4 above. Replacing XOR by addition and subtraction gives various slightly different, but largely equivalent, formulations. For example:
void AddSwap( unsigned int* x, unsigned int* y )
{
    *x = *x + *y;
    *y = *x - *y;
    *x = *x - *y;
}
Unlike the XOR swap, this variation requires that the underlying processor or programming language uses a method such as modular arithmetic or bignums to guarantee that the computation of codice_10 cannot cause an error due to integer overflow. Therefore, it is seen even more rarely in practice than the XOR swap.
However, the implementation of codice_11 above in the C programming language always works even in case of integer overflow, since, according to the C standard, addition and subtraction of unsigned integers follow the rules of modular arithmetic, i. e. are done in the cyclic group formula_12 where formula_13 is the number of bits of codice_12. Indeed, the correctness of the algorithm follows from the fact that the formulas formula_14 and formula_15 hold in any abelian group. This generalizes the proof for the XOR swap algorithm: XOR is both the addition and subtraction in the abelian group formula_16 (which is the direct sum of "s" copies of formula_17).
This doesn't hold when dealing with the codice_13 type (the default for codice_14). Signed integer overflow is an undefined behavior in C and thus modular arithmetic is not guaranteed by the standard, which may lead to incorrect results.
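A short self-contained check that the unsigned variant remains correct even when the intermediate sum wraps around (the sample values are arbitrary):
#include <assert.h>
#include <stdio.h>
#include <limits.h>
int main(void)
{
    unsigned int x = UINT_MAX - 1, y = 7;    /* x + y wraps around          */
    x = x + y;        /* wraps; well defined for unsigned arithmetic        */
    y = x - y;
    x = x - y;
    assert(x == 7 && y == UINT_MAX - 1);     /* the swap still succeeded    */
    printf("x = %u, y = %u\n", x, y);
    return 0;
}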
The sequence of operations in codice_11 can be expressed via matrix multiplication as:
formula_18
Application to register allocation.
Because it avoids the use of an extra temporary register, the XOR swap algorithm is required for optimal register allocation on architectures lacking a dedicated swap instruction. This is particularly important for compilers using static single assignment form for register allocation; these compilers occasionally produce programs that need to swap two registers when no registers are free. The XOR swap algorithm avoids the need to reserve an extra register or to spill any registers to main memory. The addition/subtraction variant can also be used for the same purpose.
This method of register allocation is particularly relevant to GPU shader compilers. On modern GPU architectures, spilling variables is expensive due to limited memory bandwidth and high memory latency, while limiting register usage can improve performance due to dynamic partitioning of the register file. The XOR swap algorithm is therefore required by some GPU compilers.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\oplus"
},
{
"math_id": 2,
"text": "A \\oplus B = B \\oplus A"
},
{
"math_id": 3,
"text": "(A \\oplus B) \\oplus C = A \\oplus (B \\oplus C)"
},
{
"math_id": 4,
"text": "A \\oplus 0 = A"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "A \\oplus A = 0"
},
{
"math_id": 7,
"text": "\\left(\\begin{smallmatrix}1 & 1\\\\0 & 1\\end{smallmatrix}\\right)"
},
{
"math_id": 8,
"text": "\\begin{pmatrix}1 & 1\\\\0 & 1\\end{pmatrix} \\begin{pmatrix}x\\\\y\\end{pmatrix}\n= \\begin{pmatrix}x+y\\\\y\\end{pmatrix}.\n"
},
{
"math_id": 9,
"text": "\n\\begin{pmatrix}1 & 1\\\\0 & 1\\end{pmatrix}\n\\begin{pmatrix}1 & 0\\\\1 & 1\\end{pmatrix}\n\\begin{pmatrix}1 & 1\\\\0 & 1\\end{pmatrix}\n=\n\\begin{pmatrix}0 & 1\\\\1 & 0\\end{pmatrix}\n"
},
{
"math_id": 10,
"text": "1 + 1 = 0"
},
{
"math_id": 11,
"text": "\\left(\\begin{smallmatrix}I_n & I_n\\\\0 & I_n\\end{smallmatrix}\\right)."
},
{
"math_id": 12,
"text": "\\mathbb Z/2^s\\mathbb Z"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "(x + y) - y = x"
},
{
"math_id": 15,
"text": "(x + y) - ((x + y) - y) = y"
},
{
"math_id": 16,
"text": "(\\mathbb Z/2\\mathbb Z)^{s}"
},
{
"math_id": 17,
"text": "\\mathbb Z/2\\mathbb Z"
},
{
"math_id": 18,
"text": "\n\\begin{pmatrix}1 & -1\\\\0 & 1\\end{pmatrix}\n\\begin{pmatrix}1 & 0\\\\1 & -1\\end{pmatrix}\n\\begin{pmatrix}1 & 1\\\\0 & 1\\end{pmatrix}\n=\n\\begin{pmatrix}0 & 1\\\\1 & 0\\end{pmatrix}\n"
}
] |
https://en.wikipedia.org/wiki?curid=145555
|
14555985
|
Runs produced
|
Runs produced is a baseball statistic that can help estimate the number of runs a hitter contributes to his team. The formula adds together the player's runs and runs batted in, and then subtracts the player's home runs.
formula_0
Home runs are subtracted to compensate for the batter getting credit for both one run and at least one RBI when hitting a home run.
Unlike runs created, runs produced is a teammate-dependent stat in that it includes Runs and RBIs, which are affected by which batters bat near a player in the batting order. Also, subtracting home runs seems logical from an individual perspective, but on a team level it double-counts runs that are not home runs.
To counteract the double-counting, some have suggested an alternate formula which is the average of a player's runs scored and runs batted in.
formula_1
Here, when a player scores a run, he shares the credit with the batter who drove him in, so both are credited with half a run produced. The same is true for an RBI, where credit is shared between the batter and runner. In the case of a home run, the batter is responsible for both the run scored and the RBI, so the runs produced are (1 + 1)/2 = 1, as expected.
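For example, under these formulas a hypothetical player with 90 runs, 100 RBIs and 25 home runs would be credited with 90 + 100 − 25 = 165 runs produced by the first formula, and with (90 + 100)/2 = 95 by the alternate formula.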
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "RP = R+RBI-HR"
},
{
"math_id": 1,
"text": "RP = (R + RBI)/2"
}
] |
https://en.wikipedia.org/wiki?curid=14555985
|
1455603
|
BEST theorem
|
Formula used in graph theory
In graph theory, a part of discrete mathematics, the BEST theorem gives a product formula for the number of Eulerian circuits in directed (oriented) graphs. The name is an acronym of the names of people who discovered it: N. G. de Bruijn, Tatyana Ehrenfest, Cedric Smith and W. T. Tutte.
Precise statement.
Let "G" = ("V", "E") be a directed graph. An Eulerian circuit is a directed closed trail that visits each edge exactly once. In 1736, Euler showed that "G" has an Eulerian circuit if and only if "G" is connected and the indegree is equal to outdegree at every vertex. In this case "G" is called Eulerian. We denote the indegree of a vertex "v" by deg("v").
The BEST theorem states that the number ec("G") of Eulerian circuits in a connected Eulerian graph "G" is given by the formula
formula_0
Here "t""w"("G") is the number of arborescences, which are trees directed towards the root at a fixed vertex "w" in "G". The number "tw(G)" can be computed as a determinant, by the version of the matrix tree theorem for directed graphs. It is a property of Eulerian graphs that "t""v"("G") = "t""w"("G") for every two vertices "v" and "w" in a connected Eulerian graph "G".
Applications.
The BEST theorem shows that the number of Eulerian circuits in directed graphs can be computed in polynomial time, a problem which is #P-complete for undirected graphs. It is also used in the asymptotic enumeration of Eulerian circuits of complete and complete bipartite graphs.
History.
The BEST theorem is due to van Aardenne-Ehrenfest and de Bruijn
(1951), §6, Theorem 6.
Their proof is bijective and generalizes the de Bruijn sequences. In a "note added in proof", they refer to an earlier result by Smith and Tutte (1941) which proves the formula for graphs with deg(v)=2 at every vertex.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\operatorname{ec}(G) = t_w(G) \\prod_{v\\in V} \\bigl(\\deg(v)-1\\bigr)!.\n"
}
] |
https://en.wikipedia.org/wiki?curid=1455603
|
14556113
|
Tricubic interpolation
|
Method for obtaining values at arbitrary points in 3D space of a function defined on a regular grid
In the mathematical subfield numerical analysis, tricubic interpolation is a method for obtaining values at arbitrary points in 3D space of a function defined on a regular grid. The approach involves approximating the function locally by an expression of the form
formula_0
This form has 64 coefficients formula_1; requiring the function to have a given value or given directional derivative at a point places one linear constraint on the 64 coefficients.
The term "tricubic interpolation" is used in more than one context; some experiments measure both the value of a function and its spatial derivatives, and it is desirable to interpolate preserving the values and the measured derivatives at the grid points. Those provide 32 constraints on the coefficients, and another 32 constraints can be provided by requiring smoothness of higher derivatives.
In other contexts, we can obtain the 64 coefficients by considering a 3×3×3 grid of small cubes surrounding the cube inside which we evaluate the function, and fitting the function at the 64 points on the corners of this grid.
The cubic interpolation article indicates that the method is equivalent to a sequential application of one-dimensional cubic interpolators. Let formula_2 be the value of a monovariable cubic polynomial (e.g. constrained by values, formula_3, formula_4, formula_5, formula_6 from consecutive grid points) evaluated at formula_7. In many useful cases, these cubic polynomials have the form formula_8 for some vector formula_9 which is a function of formula_7 alone. The tricubic interpolator is equivalent to:
formula_10
where formula_11 and formula_12.
At first glance, it might seem more convenient to use the 21 calls to formula_13 described above instead of the formula_14 matrix described in Lekien and Marsden. However, a proper implementation using a sparse format for the (fairly sparse) matrix makes the latter more efficient. This aspect is even more pronounced when interpolation is needed at several locations inside the same cube. In this case, the formula_14 matrix is used once to compute the interpolation coefficients for the entire cube. The coefficients are then stored and used for interpolation at any location inside the cube. In comparison, sequential use of the one-dimensional interpolators formula_15 performs extremely poorly for repeated interpolations because each computational step must be repeated for each new location.
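The sequential scheme is nonetheless short to implement. The following C sketch uses the Catmull–Rom cubic as the one-dimensional interpolator formula_13 — the article leaves the exact cubic open, so this particular choice is an assumption made for the example — and nests it over the three coordinates (16 + 4 + 1 = 21 calls). The test function in main() is the trilinear product x·y·z, which such an interpolator reproduces exactly.
#include <stdio.h>
/* One common choice for the 1-D interpolator is the Catmull-Rom cubic,
   which passes through p0 at x = 0 and p1 at x = 1 and uses pm1, p2 to
   set the end slopes.  (An assumption for this example only.)          */
static double cint(double pm1, double p0, double p1, double p2, double x)
{
    return p0 + 0.5 * x * (p1 - pm1
            + x * (2.0 * pm1 - 5.0 * p0 + 4.0 * p1 - p2
            + x * (3.0 * (p0 - p1) + p2 - pm1)));
}
/* Sequential tricubic interpolation inside the unit cube, given values
   f[i][j][k] at the 4x4x4 grid points i,j,k in {-1,0,1,2} (indices
   shifted here to 0..3): 16 z-interpolations, 4 in y, then 1 in x.     */
static double tricubic(double f[4][4][4], double x, double y, double z)
{
    double u[4], t[4];
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j)
            t[j] = cint(f[i][j][0], f[i][j][1], f[i][j][2], f[i][j][3], z);
        u[i] = cint(t[0], t[1], t[2], t[3], y);
    }
    return cint(u[0], u[1], u[2], u[3], x);
}
int main(void)
{
    /* Sample the trilinear function g = x*y*z on the grid and check
       that the interpolator reproduces it inside the cube.           */
    double f[4][4][4];
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                f[i][j][k] = (i - 1.0) * (j - 1.0) * (k - 1.0);
    double x = 0.3, y = 0.6, z = 0.8;   /* arbitrary point in [0,1]^3 */
    printf("interpolated = %.6f, exact = %.6f\n",
           tricubic(f, x, y, z), x * y * z);
    return 0;
}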
|
[
{
"math_id": 0,
"text": "f(x,y,z)=\\sum_{i=0}^3 \\sum_{j=0}^3 \\sum_{k=0}^3 a_{ijk} x^i y^j z^k."
},
{
"math_id": 1,
"text": "a_{ijk}"
},
{
"math_id": 2,
"text": "\\mathrm{CINT}_x(a_{-1}, a_0, a_1, a_2)"
},
{
"math_id": 3,
"text": "a_{-1}"
},
{
"math_id": 4,
"text": "a_{0}"
},
{
"math_id": 5,
"text": "a_{1}"
},
{
"math_id": 6,
"text": "a_{2}"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "\\mathrm{CINT}_x(u_{-1}, u_0, u_1, u_2) = \\mathbf{v}_x \\cdot \\left( u_{-1}, u_0, u_1, u_2 \\right)"
},
{
"math_id": 9,
"text": "\\mathbf{v}_x"
},
{
"math_id": 10,
"text": "\n\\begin{align}\ns(i,j,k) & {} = \\text{The value at grid point } (i,j,k)\\\\\nt(i,j,z) & {} = \\mathrm{CINT}_z\\left( s(i,j,-1), s(i,j,0), s(i,j,1), s(i,j,2)\\right) \\\\\nu(i,y,z) & {} = \\mathrm{CINT}_y\\left( t(i,-1,z), t(i,0,z), t(i,1,z), t(i,2,z)\\right) \\\\\nf(x,y,z) & {} = \\mathrm{CINT}_x\\left( u(-1,y,z), u(0,y,z), u(1,y,z), u(2,y,z)\\right)\n\\end{align}\n"
},
{
"math_id": 11,
"text": "i,j,k\\in\\{-1,0,1,2\\}"
},
{
"math_id": 12,
"text": "x,y,z\\in[0,1]"
},
{
"math_id": 13,
"text": "\\mathrm{CINT}"
},
{
"math_id": 14,
"text": "{64 \\times 64}"
},
{
"math_id": 15,
"text": "\\mathrm{CINT}_x"
}
] |
https://en.wikipedia.org/wiki?curid=14556113
|
145570
|
Quadric
|
Locus of the zeros of a polynomial of degree two
In mathematics, a quadric or quadric surface (quadric hypersurface in higher dimensions), is a generalization of conic sections (ellipses, parabolas, and hyperbolas). It is a hypersurface (of dimension "D") in a ("D" + 1)-dimensional space, and it is defined as the zero set of an irreducible polynomial of degree two in "D" + 1 variables; for example, "D" = 1 in the case of conic sections. When the defining polynomial is not absolutely irreducible, the zero set is generally not considered a quadric, although it is often called a "degenerate quadric" or a "reducible quadric".
In coordinates "x"1, "x"2, ..., "x""D"+1, the general quadric is thus defined by the algebraic equation
formula_0
which may be compactly written in vector and matrix notation as:
formula_1
where "x" = ("x"1, "x"2, ..., "x""D"+1) is a row vector, "x"T is the transpose of "x" (a column vector), "Q" is a ("D" + 1) × ("D" + 1) matrix and "P" is a ("D" + 1)-dimensional row vector and "R" a scalar constant. The values "Q", "P" and "R" are often taken to be over real numbers or complex numbers, but a quadric may be defined over any field.
A quadric is an affine algebraic variety, or, if it is reducible, an affine algebraic set. Quadrics may also be defined in projective spaces; see below.
Euclidean plane.
As the dimension of a Euclidean plane is two, quadrics in a Euclidean plane have dimension one and are thus plane curves. They are called "conic sections", or "conics".
Euclidean space.
In three-dimensional Euclidean space, quadrics have dimension two, and are known as quadric surfaces. Their quadratic equations have the form
formula_2
where formula_3 are real numbers, and at least one of the coefficients A, B, C, D, E, and F of the terms of degree two is nonzero.
The quadric surfaces are classified and named by their shape, which corresponds to the orbits under affine transformations. That is, if an affine transformation maps a quadric onto another one, they belong to the same class, and share the same name and many properties.
The principal axis theorem shows that for any (possibly reducible) quadric, a suitable change of Cartesian coordinates or, equivalently, a Euclidean transformation allows putting the equation of the quadric into a unique simple form on which the class of the quadric is immediately visible. This form is called the normal form of the equation, since two quadrics have the same normal form if and only if there is a Euclidean transformation that maps one quadric to the other. The normal forms are as follows:
formula_4
formula_5
formula_6
formula_7
where the formula_8 are either 1, –1 or 0, except formula_9 which takes only the value 0 or 1.
Each of these 17 normal forms corresponds to a single orbit under affine transformations. In three cases there are no real points: formula_10 ("imaginary ellipsoid"), formula_11 ("imaginary elliptic cylinder"), and formula_12 (pair of complex conjugate parallel planes, a reducible quadric). In one case, the "imaginary cone", there is a single point (formula_13). If formula_14 one has a line (in fact two complex conjugate intersecting planes). For formula_15 one has two intersecting planes (reducible quadric). For formula_16 one has a double plane. For formula_17 one has two parallel planes (reducible quadric).
Thus, among the 17 normal forms, there are nine true quadrics: a cone, three cylinders (often called degenerate quadrics) and five non-degenerate quadrics (ellipsoid, paraboloids and hyperboloids), which are detailed in the following tables. The eight remaining quadrics are the imaginary ellipsoid (no real point), the imaginary cylinder (no real point), the imaginary cone (a single real point), and the reducible quadrics, which are decomposed in two planes; there are five such decomposed quadrics, depending whether the planes are distinct or not, parallel or not, real or complex conjugate.
When two or more of the parameters of the canonical equation are equal, one obtains a quadric of revolution, which remains invariant when rotated around an axis (or infinitely many axes, in the case of the sphere).
Definition and basic properties.
An "affine quadric" is the set of zeros of a polynomial of degree two. When not specified otherwise, the polynomial is supposed to have real coefficients, and the zeros are points in a Euclidean space. However, most properties remain true when the coefficients belong to any field and the points belong in an affine space. As usual in algebraic geometry, it is often useful to consider points over an algebraically closed field containing the polynomial coefficients, generally the complex numbers, when the coefficients are real.
Many properties becomes easier to state (and to prove) by extending the quadric to the projective space by projective completion, consisting of adding points at infinity. Technically, if
formula_18
is a polynomial of degree two that defines an affine quadric, then its projective completion is defined by homogenizing p into
formula_19
(this is a polynomial, because the degree of p is two). The points of the projective completion are the points of the projective space whose projective coordinates are zeros of P.
So, a "projective quadric" is the set of zeros in a projective space of a homogeneous polynomial of degree two.
As the above process of homogenization can be reverted by setting "X"0 = 1:
formula_20
it is often useful to not distinguish an affine quadric from its projective completion, and to talk of the "affine equation" or the "projective equation" of a quadric. However, this is not a perfect equivalence; it is generally the case that formula_21 will include points with formula_22, which are not also solutions of formula_23 because these points in projective space correspond to points "at infinity" in affine space.
Equation.
A quadric in an affine space of dimension n is the set of zeros of a polynomial of degree 2. That is, it is the set of the points whose coordinates satisfy an equation
formula_24
where the polynomial p has the form
formula_25
for a matrix formula_26 with formula_27 and formula_28 running from 0 to formula_29. When the characteristic of the field of the coefficients is not two, generally formula_30 is assumed; equivalently formula_31. When the characteristic of the field of the coefficients is two, generally formula_32 is assumed when formula_33; equivalently formula_34 is upper triangular.
The equation may be shortened, as the matrix equation
formula_35
with
formula_36
The equation of the projective completion is almost identical:
formula_37
with
formula_38
These equations define a quadric as an algebraic hypersurface of dimension "n" – 1 and degree two in a space of dimension n.
A quadric is said to be non-degenerate if the matrix formula_34 is invertible.
A non-degenerate quadric is non-singular in the sense that its projective completion has no singular point (a cylinder is non-singular in the affine space, but it is a degenerate quadric that has a singular point at infinity).
The singular points of a degenerate quadric are the points whose projective coordinates belong to the null space of the matrix A.
A quadric is reducible if and only if the rank of A is one (case of a double hyperplane) or two (case of two hyperplanes).
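As an illustration of the matrix form above (over the real numbers, with the symmetric-matrix convention used when the characteristic is not two), the following Python sketch checks whether a point satisfies formula_35 and tests degeneracy through the invertibility of formula_34. The helper names and the unit-circle example are illustrative assumptions, not part of the text above.

```python
import numpy as np

def on_quadric(A, point, tol=1e-9):
    # A is the (n+1) x (n+1) symmetric matrix of the quadric in augmented
    # coordinates x = (1, x_1, ..., x_n); the point lies on the quadric if x^T A x = 0.
    x = np.concatenate(([1.0], np.asarray(point, dtype=float)))
    return abs(x @ A @ x) < tol

def is_degenerate(A, tol=1e-9):
    # The quadric is degenerate exactly when the matrix A is singular.
    return abs(np.linalg.det(A)) < tol

# Unit circle x^2 + y^2 - 1 = 0: constant term in A[0, 0], quadratic part in A[1:, 1:].
A = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0]])
print(on_quadric(A, (0.6, 0.8)), is_degenerate(A))   # True, False
```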
Normal form of projective quadrics.
In real projective space, by Sylvester's law of inertia, a non-singular quadratic form "P"("X") may be put into the normal form
formula_39
by means of a suitable projective transformation (normal forms for singular quadrics can have zeros as well as ±1 as coefficients). For two-dimensional surfaces (dimension "D" = 2) in three-dimensional space, there are exactly three non-degenerate cases:
formula_40
The first case is the empty set.
The second case generates the ellipsoid, the elliptic paraboloid or the hyperboloid of two sheets, depending on whether the chosen plane at infinity cuts the quadric in the empty set, in a point, or in a nondegenerate conic respectively. These all have positive Gaussian curvature.
The third case generates the hyperbolic paraboloid or the hyperboloid of one sheet, depending on whether the plane at infinity cuts it in two lines, or in a nondegenerate conic respectively. These are doubly ruled surfaces of negative Gaussian curvature.
The degenerate form
formula_41
generates the elliptic cylinder, the parabolic cylinder, the hyperbolic cylinder, or the cone, depending on whether the plane at infinity cuts it in a point, a line, two lines, or a nondegenerate conic respectively. These are singly ruled surfaces of zero Gaussian curvature.
We see that projective transformations don't mix Gaussian curvatures of different sign. This is true for general surfaces.
In complex projective space all of the nondegenerate quadrics become indistinguishable from each other.
Rational parametrization.
Given a non-singular point A of a quadric, a line passing through A is either tangent to the quadric, or intersects the quadric in exactly one other point (as usual, a line contained in the quadric is considered as a tangent, since it is contained in the tangent hyperplane). This means that the lines passing through A and not tangent to the quadric are in one to one correspondence with the points of the quadric that do not belong to the tangent hyperplane at A. Expressing the points of the quadric in terms of the direction of the corresponding line provides parametric equations of the following forms.
In the case of conic sections (quadric curves), this parametrization establishes a bijection between a projective conic section and a projective line; this bijection is an isomorphism of algebraic curves. In higher dimensions, the parametrization defines a birational map, which is a bijection between dense open subsets of the quadric and a projective space of the same dimension (the topology that is considered is the usual one in the case of a real or complex quadric, or the Zariski topology in all cases). The points of the quadric that are not in the image of this bijection are the points of intersection of the quadric and its tangent hyperplane at A.
In the affine case, the parametrization is a rational parametrization of the form
formula_42
where formula_43 are the coordinates of a point of the quadric, formula_44 are parameters, and formula_45 are polynomials of degree at most two.
In the projective case, the parametrization has the form
formula_46
where formula_47 are the projective coordinates of a point of the quadric, formula_48 are parameters, and formula_49 are homogeneous polynomials of degree two.
One passes from one parametrization to the other by putting formula_50 and formula_51
formula_52
For computing the parametrization and proving that the degrees are as asserted, one may proceed as follows in the affine case. One can proceed similarly in the projective case.
Let q be the quadratic polynomial that defines the quadric, and formula_53 be the coordinate vector of the given point of the quadric (so, formula_54 Let formula_55 be the coordinate vector of the point of the quadric to be parametrized, and formula_56 be a vector defining the direction used for the parametrization (directions whose last coordinate is zero are not taken into account here; this means that some points of the affine quadric are not parametrized; one often says that they are parametrized by points at infinity in the space of parameters). The points of the intersection of the quadric and the line of direction formula_57 passing through formula_58 are the points formula_59 such that
formula_60
for some value of the scalar formula_61 This is an equation of degree two in formula_62 except for the values of formula_57 such that the line is tangent to the quadric (in this case, the degree is one if the line is not included in the quadric, or the equation becomes formula_63 otherwise). The coefficients of formula_64 and formula_65 are respectively of degree at most one and two in formula_66 As the constant coefficient is formula_67 the equation becomes linear by dividing by formula_62 and its unique solution is the quotient of a polynomial of degree at most one by a polynomial of degree at most two. Substituting this solution into the expression of formula_68 one obtains the desired parametrization as fractions of polynomials of degree at most two.
Example: circle and spheres.
Let us consider the quadric of equation
formula_69
For formula_70 this is the unit circle; for formula_71 this is the unit sphere; in higher dimensions, this is the unit hypersphere.
The point formula_72 belongs to the quadric (the choice of this point among other similar points is only a question of convenience). So, the equation formula_60 of the preceding section becomes
formula_73
By expanding the squares, simplifying the constant terms, dividing by formula_62 and solving in formula_62 one obtains
formula_74
Substituting this into formula_59 and simplifying the expression of the last coordinate, one obtains the parametric equation
formula_75
By homogenizing, one obtains the projective parametrization
formula_76
A straightforward verification shows that this induces a bijection between the points of the quadric such that formula_77 and the points such that formula_78 in the projective space of the parameters. On the other hand, all values of formula_79 such that formula_80 and formula_81 give the point formula_82
In the case of conic sections (formula_83), there is exactly one point with formula_84 and one has a bijection between the circle and the projective line.
For formula_85 there are many points with formula_86 and thus many parameter values for the point formula_82 On the other hand, the other points of the quadric for which formula_87 (and thus formula_88) cannot be obtained for any value of the parameters. These points are the points of the intersection of the quadric and its tangent plane at formula_82 In this specific case, these points have nonreal complex coordinates, but it suffices to change one sign in the equation of the quadric for producing real points that are not obtained with the resulting parametrization.
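The following Python sketch is a minimal numerical check of the affine parametrization above in the case of the unit circle (formula_70): for each parameter value it returns a point and verifies that it lies on the circle. The function name and the sample parameter values are arbitrary illustrations.

```python
from math import isclose

def circle_point(t):
    # Affine parametrization of the unit circle obtained from the point A = (0, -1):
    # x1 = 2 t / (1 + t^2), x2 = (1 - t^2) / (1 + t^2).
    denom = 1.0 + t * t
    return 2.0 * t / denom, (1.0 - t * t) / denom

for t in (-2.0, 0.0, 0.5, 3.0):
    x1, x2 = circle_point(t)
    assert isclose(x1 * x1 + x2 * x2, 1.0)   # every parameter value gives a point on the circle
```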
Rational points.
A quadric is "defined over" a field formula_89 if the coefficients of its equation belong to formula_90 When formula_89 is the field formula_91 of the rational numbers, one can suppose that the coefficients are integers by clearing denominators.
A point of a quadric defined over a field formula_89 is said rational over formula_89 if its coordinates belong to formula_90 A rational point over the field formula_92 of the real numbers, is called a real point.
A rational point over formula_91 is called simply a "rational point". By clearing denominators, one can suppose, and one generally does suppose, that the projective coordinates of a rational point (in a quadric defined over formula_91) are integers. Also, by clearing denominators of the coefficients, one generally supposes that all the coefficients of the equation of the quadric and the polynomials occurring in the parametrization are integers.
Finding the rational points of a projective quadric thus amounts to solving a Diophantine equation.
Given a rational point A on a quadric defined over a field F, the parametrization described in the preceding section provides rational points when the parameters are in F, and, conversely, every rational point of the quadric can be obtained from parameters in F, if the point is not in the tangent hyperplane at A.
It follows that, if a quadric has a rational point, it has many other rational points (infinitely many if F is infinite), and these points can be algorithmically generated as soon as one knows one of them.
As said above, in the case of projective quadrics defined over formula_93 the parametrization takes the form
formula_94
where the formula_95 are homogeneous polynomials of degree two with integer coefficients. Because of the homogeneity, one can consider only parameters that are setwise coprime integers. If formula_96 is the equation of the quadric, a solution of this equation is said "primitive" if its components are setwise coprime integers. The primitive solutions are in one to one correspondence with the rational points of the quadric (up to a change of sign of all components of the solution). The non-primitive integer solutions are obtained by multiplying primitive solutions by arbitrary integers; so they do not deserve a specific study. However, setwise coprime parameters can produce non-primitive solutions, and one may have to divide by a greatest common divisor to arrive at the associated primitive solution.
Pythagorean triples.
This is well illustrated by Pythagorean triples. A Pythagorean triple is a triple formula_97 of positive integers such that formula_98 A Pythagorean triple is "primitive" if formula_99 are setwise coprime, or, equivalently, if any of the three pairs formula_100 formula_101 and formula_102 is coprime.
By choosing formula_103 the above method provides the parametrization
formula_104
for the quadric of equation formula_105 (The names of variables and parameters are being changed from the above ones to those that are common when considering Pythagorean triples).
If m and n are coprime integers such that formula_106 the resulting triple is a Pythagorean triple. If one of m and n is even and the other is odd, this resulting triple is primitive; otherwise, m and n are both odd, and one obtains a primitive triple by dividing by 2.
In summary, the primitive Pythagorean triples with formula_107 even are obtained as
formula_108
with m and n coprime integers such that one is even and formula_109 (this is Euclid's formula). The primitive Pythagorean triples with formula_107 odd are obtained as
formula_110
with m and n coprime odd integers such that formula_111
As the exchange of a and b transforms a Pythagorean triple into another Pythagorean triple, only one of the two cases is sufficient for producing all primitive Pythagorean triples.
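The following Python sketch enumerates primitive Pythagorean triples with b even, using Euclid's formula as stated above; the function name and the bound on m are arbitrary choices for illustration.

```python
from math import gcd

def primitive_triples(limit):
    # Euclid's formula: for coprime m > n > 0 of opposite parity,
    # (m^2 - n^2, 2 m n, m^2 + n^2) is a primitive Pythagorean triple.
    triples = []
    for m in range(2, limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                triples.append((m * m - n * n, 2 * m * n, m * m + n * n))
    return triples

# primitive_triples(5) -> [(3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25)]
```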
Projective quadrics over fields.
The definition of a projective quadric in a real projective space (see above) can be formally adapted by defining a projective quadric in an "n"-dimensional projective space over a field. In order to omit dealing with coordinates, a projective quadric is usually defined by starting with a quadratic form on a vector space.
Quadratic form.
Let formula_112 be a field and formula_113 a vector space over formula_112. A mapping formula_114 from formula_113 to formula_112 such that
(Q1) formula_115 for any formula_116 and formula_117.
(Q2) formula_118 is a bilinear form.
is called a quadratic form. The bilinear form formula_119 is "symmetric".
In case of formula_120 the bilinear form is formula_121, i.e. formula_119 and formula_114 are mutually determined in a unique way.
In case of formula_122 (that means: formula_123) the bilinear form has the property formula_124, i.e. formula_119 is "symplectic".
For formula_125 and formula_126
(formula_127 is a basis of formula_113) formula_128 has the familiar form
formula_129 and
formula_130.
For example:
formula_131
"n"-dimensional projective space over a field.
Let formula_112 be a field, formula_132,
formula_133 an ("n" + 1)-dimensional vector space over the field formula_134
formula_135 the 1-dimensional subspace generated by formula_136,
formula_137 the "set of points" ,
formula_138 the "set of lines".
formula_139 is the n-dimensional projective space over formula_112.
The set of points contained in a formula_140-dimensional subspace of formula_141 is a "formula_142-dimensional subspace" of formula_143. A 2-dimensional subspace is a "plane".
In case of formula_144 a formula_145-dimensional subspace is called "hyperplane".
Projective quadric.
A quadratic form formula_114 on a vector space formula_133 defines a "quadric" formula_146 in the associated projective space formula_147 as the set of the points formula_148 such that formula_149. That is,
formula_150
Examples in formula_151:
(E1): For formula_152 one obtains a conic.
(E2): For formula_153 one obtains the pair of lines with the equations formula_154 and formula_155, respectively. They intersect at the point formula_156.
For the considerations below it is assumed that formula_157.
Polar space.
For point formula_158 the set
formula_159
is called polar space of formula_160 (with respect to formula_114).
If formula_161 for all formula_162, one obtains formula_163.
If formula_164 for at least one formula_162, the equation formula_161 is a non-trivial linear equation which defines a hyperplane. Hence
formula_165 is either a hyperplane or formula_166.
Intersection with a line.
For the intersection of an arbitrary line formula_167 with a quadric formula_168, the following cases may occur:
a) formula_169 and formula_167 is called "exterior line"
b) formula_170 and formula_167 is called a "line in the quadric"
c) formula_171 and formula_167 is called "tangent line"
d) formula_172 and formula_167 is called "secant line".
Proof:
Let formula_167 be a line that intersects formula_173 at the point formula_174, and let formula_175 be a second point on formula_167.
From formula_176 one obtains
formula_177
I) In case of formula_178 the equation formula_179 holds and it is
formula_180 for any formula_181. Hence either formula_182
for "any" formula_181 or formula_183 for "any" formula_181, which proves b) and b').
II) In case of formula_184 one obtains formula_185 and the equation
formula_186 has exactly one solution formula_187.
Hence: formula_188, which proves d).
Additionally the proof shows:
A line formula_167 through a point formula_189 is a "tangent" line if and only if formula_190.
"f"-radical, "q"-radical.
In the classical cases formula_191 or formula_192 there exists only one radical, because formula_120 holds, so that formula_119 and formula_114 are closely connected. In case of formula_122 the quadric formula_146 is not determined by formula_119 (see above) and so one has to deal with two radicals:
a) formula_193 is a projective subspace. formula_194 is called "f"-radical of quadric formula_146.
b) formula_195 is called singular radical or "formula_114-radical" of formula_146.
c) In case of formula_120 one has formula_196.
A quadric is called non-degenerate if formula_197.
Examples in formula_151 (see above):
(E1): For formula_152 (conic) the bilinear form is
formula_198
In case of formula_120 the polar spaces are never formula_199. Hence formula_200.
In case of formula_122 the bilinear form is reduced to
formula_201 and formula_202. Hence formula_203
In this case the "f"-radical is the common point of all tangents, the so called "knot".
In both cases formula_204 and the quadric (conic) is "non-degenerate".
(E2): For formula_153 (pair of lines) the bilinear form is formula_205 and formula_206 the intersection point.
In this example the quadric is "degenerate".
Symmetries.
A quadric is a rather homogeneous object:
For any point formula_207 there exists an involutorial central collineation formula_208 with center formula_160 and formula_209.
Proof:
Due to formula_210 the polar space formula_165 is a hyperplane.
The linear mapping
formula_211
induces an "involutorial central collineation" formula_208 with axis formula_165 and centre formula_160 which leaves formula_146 invariant.
In the case of formula_120, the mapping formula_212 produces the familiar shape formula_213 with formula_214 and formula_215 for any formula_216.
Remark:
a) An exterior line, a tangent line or a secant line is mapped by the involution formula_208 on an exterior, tangent and secant line, respectively.
b) formula_217 is pointwise fixed by formula_208.
"q"-subspaces and index of a quadric.
A subspace formula_218 of formula_143 is called formula_114-subspace if formula_219
For example: points on a sphere or lines on a hyperboloid (s. below).
Any two "maximal" formula_114-subspaces have the same dimension formula_220.
Let formula_220 be the dimension of the maximal formula_114-subspaces of formula_146; then
The integer formula_221 is called index of formula_146.
Theorem: (BUEKENHOUT)
For the index formula_27 of a non-degenerate quadric formula_146 in formula_143 the following is true:
formula_222.
Let formula_146 be a non-degenerate quadric in formula_223, and formula_27 its index.
In case of formula_224 quadric formula_146 is called "sphere" (or oval conic if formula_83).
In case of formula_225 quadric formula_146 is called "hyperboloid" (of one sheet).
Examples:
a) Quadric formula_146 in formula_226 with form formula_152 is non-degenerate with index 1.
b) If the polynomial formula_227 is irreducible over formula_112, the quadratic form formula_228 gives rise to a non-degenerate quadric formula_146 in formula_229 of index 1 (sphere). For example, formula_230 is irreducible over formula_92 (but not over formula_231!).
c) In formula_229 the quadratic form formula_232 generates a "hyperboloid".
Generalization of quadrics: quadratic sets.
It is not reasonable to formally extend the definition of quadrics to spaces over genuine skew fields (division rings), because one would obtain secants bearing more than two points of the quadric, which is totally different from "usual" quadrics. The reason is the following statement.
A division ring formula_112 is commutative if and only if any equation formula_233 has at most two solutions.
There are "generalizations" of quadrics: quadratic sets. A quadratic set is a set of points of a projective space with the same geometric properties as a quadric: every line intersects a quadratic set in at most two points or is contained in the set.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\sum_{i,j=1}^{D+1} x_i Q_{ij} x_j + \\sum_{i=1}^{D+1} P_i x_i + R = 0\n"
},
{
"math_id": 1,
"text": "\nx Q x^\\mathrm{T} + P x^\\mathrm{T} + R = 0\\,\n"
},
{
"math_id": 2,
"text": "A x^2 + B y^2 + C z^2 + D xy + E yz + F xz + G x + H y + I z + J = 0,"
},
{
"math_id": 3,
"text": "A, B, \\ldots, J"
},
{
"math_id": 4,
"text": " {x^2 \\over a^2} + {y^2 \\over b^2} +\\varepsilon_1 {z^2 \\over c^2} + \\varepsilon_2=0,"
},
{
"math_id": 5,
"text": " {x^2 \\over a^2} - {y^2 \\over b^2} + \\varepsilon_3=0"
},
{
"math_id": 6,
"text": "{x^2 \\over a^2} + \\varepsilon_4 =0,"
},
{
"math_id": 7,
"text": "z={x^2 \\over a^2} +\\varepsilon_5 {y^2 \\over b^2}, "
},
{
"math_id": 8,
"text": "\\varepsilon_i"
},
{
"math_id": 9,
"text": " \\varepsilon_3 "
},
{
"math_id": 10,
"text": "\\varepsilon_1=\\varepsilon_2=1"
},
{
"math_id": 11,
"text": "\\varepsilon_1=0, \\varepsilon_2=1"
},
{
"math_id": 12,
"text": "\\varepsilon_4=1"
},
{
"math_id": 13,
"text": "\\varepsilon_1=1, \\varepsilon_2=0"
},
{
"math_id": 14,
"text": "\\varepsilon_1=\\varepsilon_2=0,"
},
{
"math_id": 15,
"text": "\\varepsilon_3=0,"
},
{
"math_id": 16,
"text": "\\varepsilon_4=0,"
},
{
"math_id": 17,
"text": "\\varepsilon_4=-1,"
},
{
"math_id": 18,
"text": "p(x_1, \\ldots,x_n)"
},
{
"math_id": 19,
"text": "P(X_0, \\ldots, X_n)=X_0^2\\,p\\left(\\frac {X_1}{X_0}, \\ldots,\\frac {X_n}{X_0}\\right)"
},
{
"math_id": 20,
"text": "p(x_1, \\ldots, x_n)=P(1, x_1, \\ldots, x_n)\\,,"
},
{
"math_id": 21,
"text": "P(\\mathbf{X}) = 0"
},
{
"math_id": 22,
"text": "X_0 = 0"
},
{
"math_id": 23,
"text": "p(\\mathbf{x}) = 0"
},
{
"math_id": 24,
"text": "p(x_1,\\ldots,x_n)=0,"
},
{
"math_id": 25,
"text": "p(x_1,\\ldots,x_n) = \\sum_{i=1}^n \\sum_{j=1}^n a_{i,j}x_i x_j + \\sum_{i=1}^n (a_{i,0}+a_{0,i})x_i + a_{0,0}\\,,"
},
{
"math_id": 26,
"text": "A = (a_{i,j})"
},
{
"math_id": 27,
"text": "i"
},
{
"math_id": 28,
"text": "j"
},
{
"math_id": 29,
"text": "n"
},
{
"math_id": 30,
"text": "a_{i,j} = a_{j,i}"
},
{
"math_id": 31,
"text": "A = A^{\\mathsf T}"
},
{
"math_id": 32,
"text": "a_{i,j} = 0"
},
{
"math_id": 33,
"text": "j < i"
},
{
"math_id": 34,
"text": "A"
},
{
"math_id": 35,
"text": "\\mathbf x^{\\mathsf T}A\\mathbf x=0\\,,"
},
{
"math_id": 36,
"text": "\\mathbf x = \\begin {pmatrix}1&x_1&\\cdots&x_n\\end{pmatrix}^{\\mathsf T}\\,."
},
{
"math_id": 37,
"text": "\\mathbf X^{\\mathsf T}A\\mathbf X=0,"
},
{
"math_id": 38,
"text": "\\mathbf X = \\begin {pmatrix}X_0&X_1&\\cdots&X_n\\end{pmatrix}^{\\mathsf T}."
},
{
"math_id": 39,
"text": "P(X) = \\pm X_0^2 \\pm X_1^2 \\pm\\cdots\\pm X_{D+1}^2"
},
{
"math_id": 40,
"text": "P(X) = \\begin{cases}\nX_0^2+X_1^2+X_2^2+X_3^2\\\\\nX_0^2+X_1^2+X_2^2-X_3^2\\\\\nX_0^2+X_1^2-X_2^2-X_3^2\n\\end{cases}\n"
},
{
"math_id": 41,
"text": "X_0^2-X_1^2-X_2^2=0. \\, "
},
{
"math_id": 42,
"text": "x_i=\\frac{f_i(t_1,\\ldots, t_{n-1})}{f_0(t_1,\\ldots, t_{n-1})}\\quad\\text{for }i=1, \\ldots, n,"
},
{
"math_id": 43,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 44,
"text": "t_1,\\ldots,t_{n-1}"
},
{
"math_id": 45,
"text": "f_0, f_1, \\ldots, f_n"
},
{
"math_id": 46,
"text": "X_i=F_i(T_1,\\ldots, T_n)\\quad\\text{for }i=0, \\ldots, n,"
},
{
"math_id": 47,
"text": "X_0, \\ldots, X_n"
},
{
"math_id": 48,
"text": "T_1,\\ldots,T_n"
},
{
"math_id": 49,
"text": "F_0, \\ldots, F_n"
},
{
"math_id": 50,
"text": "x_i=X_i/X_0,"
},
{
"math_id": 51,
"text": "t_i=T_i/T_n\\,:"
},
{
"math_id": 52,
"text": "F_i(T_1,\\ldots, T_n)=T_n^2 \\,f_i\\!{\\left(\\frac{T_1}{T_n},\\ldots,\\frac{T_{n-1}}{T_n}\\right)}."
},
{
"math_id": 53,
"text": "\\mathbf a=(a_1,\\ldots a_n)"
},
{
"math_id": 54,
"text": "q(\\mathbf a)=0)."
},
{
"math_id": 55,
"text": "\\mathbf x=(x_1,\\ldots x_n)"
},
{
"math_id": 56,
"text": "\\mathbf t=(t_1,\\ldots, t_{n-1},1)"
},
{
"math_id": 57,
"text": "\\mathbf t"
},
{
"math_id": 58,
"text": "\\mathbf a"
},
{
"math_id": 59,
"text": "\\mathbf x=\\mathbf a +\\lambda \\mathbf t"
},
{
"math_id": 60,
"text": "q(\\mathbf a +\\lambda \\mathbf t)=0"
},
{
"math_id": 61,
"text": "\\lambda."
},
{
"math_id": 62,
"text": "\\lambda,"
},
{
"math_id": 63,
"text": "0=0"
},
{
"math_id": 64,
"text": "\\lambda"
},
{
"math_id": 65,
"text": "\\lambda^2"
},
{
"math_id": 66,
"text": "\\mathbf t."
},
{
"math_id": 67,
"text": "q(\\mathbf a)=0,"
},
{
"math_id": 68,
"text": "\\mathbf x,"
},
{
"math_id": 69,
"text": "x_1^2+ x_2^2+\\cdots x_n^2 -1=0."
},
{
"math_id": 70,
"text": "n=2,"
},
{
"math_id": 71,
"text": "n=3"
},
{
"math_id": 72,
"text": "\\mathbf a=(0, \\ldots, 0, -1)"
},
{
"math_id": 73,
"text": "(\\lambda t_1^2)+\\cdots +(\\lambda t_{n-1})^2+ (1-\\lambda)^2-1=0."
},
{
"math_id": 74,
"text": "\\lambda = \\frac{2}{1+ t_1^2+ \\cdots +t_{n-1}^2}."
},
{
"math_id": 75,
"text": "\\begin{cases}\nx_1=\\frac{2t_1}{1+ t_1^2+ \\cdots +t_{n-1}^2}\\\\\n\\vdots\\\\\nx_{n-1}=\\frac{2 t_{n-1}}{1+ t_1^2+ \\cdots +t_{n-1}^2}\\\\\nx_n =\\frac{1- t_1^2- \\cdots -t_{n-1}^2}{1+ t_1^2+ \\cdots +t_{n-1}^2}.\n\\end{cases}"
},
{
"math_id": 76,
"text": "\\begin{cases}\nX_0=T_1^2+ \\cdots +T_n^2\\\\\nX_1=2T_1 T_n\\\\\n\\vdots\\\\\nX_{n-1}=2T_{n-1}T_n\\\\\nX_n =T_n^2- T_1^2- \\cdots -T_{n-1}^2.\n\\end{cases}"
},
{
"math_id": 77,
"text": "X_n\\neq -X_0"
},
{
"math_id": 78,
"text": "T_n\\neq 0"
},
{
"math_id": 79,
"text": "(T_1,\\ldots, T_n)"
},
{
"math_id": 80,
"text": "T_n=0"
},
{
"math_id": 81,
"text": "T_1^2+ \\cdots +T_{n-1}^2\\neq 0"
},
{
"math_id": 82,
"text": "A."
},
{
"math_id": 83,
"text": "n=2"
},
{
"math_id": 84,
"text": "T_n=0."
},
{
"math_id": 85,
"text": "n>2,"
},
{
"math_id": 86,
"text": "T_n=0,"
},
{
"math_id": 87,
"text": "X_n=-X_0"
},
{
"math_id": 88,
"text": "x_n=-1"
},
{
"math_id": 89,
"text": "F"
},
{
"math_id": 90,
"text": "F."
},
{
"math_id": 91,
"text": "\\Q"
},
{
"math_id": 92,
"text": "\\R"
},
{
"math_id": 93,
"text": "\\Q,"
},
{
"math_id": 94,
"text": "X_i=F_i(T_1, \\ldots, T_n)\\quad \\text{for } i=0,\\ldots,n,"
},
{
"math_id": 95,
"text": "F_i"
},
{
"math_id": 96,
"text": "Q(X_0,\\ldots, X_n)=0"
},
{
"math_id": 97,
"text": "(a,b,c)"
},
{
"math_id": 98,
"text": "a^2+b^2=c^2."
},
{
"math_id": 99,
"text": "a, b, c"
},
{
"math_id": 100,
"text": "(a,b),"
},
{
"math_id": 101,
"text": "(b,c)"
},
{
"math_id": 102,
"text": "(a,c)"
},
{
"math_id": 103,
"text": "A=(-1, 0, 1),"
},
{
"math_id": 104,
"text": "\\begin{cases}\na=m^2-n^2\\\\b=2mn\\\\c=m^2+n^2 \n\\end{cases}"
},
{
"math_id": 105,
"text": "a^2+b^2-c^2=0."
},
{
"math_id": 106,
"text": "m>n>0,"
},
{
"math_id": 107,
"text": "b"
},
{
"math_id": 108,
"text": "a=m^2-n^2,\\quad b=2mn,\\quad c= m^2+n^2,"
},
{
"math_id": 109,
"text": "m>n>0"
},
{
"math_id": 110,
"text": "a=\\frac{m^2-n^2}{2},\\quad b=mn, \\quad c= \\frac{m^2+n^2}2,"
},
{
"math_id": 111,
"text": "m>n>0."
},
{
"math_id": 112,
"text": "K"
},
{
"math_id": 113,
"text": "V"
},
{
"math_id": 114,
"text": "q"
},
{
"math_id": 115,
"text": "\\;q(\\lambda\\vec x)=\\lambda^2q(\\vec x )\\;"
},
{
"math_id": 116,
"text": "\\lambda\\in K"
},
{
"math_id": 117,
"text": "\\vec x \\in V"
},
{
"math_id": 118,
"text": "\\;f(\\vec x,\\vec y ):=q(\\vec x+\\vec y)-q(\\vec x)-q(\\vec y)\\;"
},
{
"math_id": 119,
"text": "f"
},
{
"math_id": 120,
"text": "\\operatorname{char}K\\ne2"
},
{
"math_id": 121,
"text": "f(\\vec x,\\vec x)=2q(\\vec x)"
},
{
"math_id": 122,
"text": "\\operatorname{char}K=2"
},
{
"math_id": 123,
"text": "1+1=0"
},
{
"math_id": 124,
"text": "f(\\vec x,\\vec x)=0"
},
{
"math_id": 125,
"text": "V=K^n\\ "
},
{
"math_id": 126,
"text": "\\ \\vec x=\\sum_{i=1}^{n}x_i\\vec e_i\\quad "
},
{
"math_id": 127,
"text": "\\{\\vec e_1,\\ldots,\\vec e_n\\} "
},
{
"math_id": 128,
"text": "\\ q"
},
{
"math_id": 129,
"text": "\nq(\\vec x)=\\sum_{1=i\\le k}^{n} a_{ik}x_ix_k\\ \\text{ with }\\ a_{ik}:= f(\\vec e_i,\\vec e_k)\\ \\text{ for }\\ i\\ne k\\ \\text{ and }\\ a_{ii}:= q(\\vec e_i)\\ "
},
{
"math_id": 130,
"text": " f(\\vec x,\\vec y)=\\sum_{1=i\\le k}^{n} a_{ik}(x_iy_k+x_ky_i)"
},
{
"math_id": 131,
"text": "n=3,\\quad q(\\vec x)=x_1x_2-x^2_3, \\quad f(\\vec x,\\vec y)=x_1y_2+x_2y_1-2x_3y_3\\; . "
},
{
"math_id": 132,
"text": "2\\le n\\in\\N"
},
{
"math_id": 133,
"text": "V_{n+1}"
},
{
"math_id": 134,
"text": "K,"
},
{
"math_id": 135,
"text": "\\langle\\vec x\\rangle"
},
{
"math_id": 136,
"text": "\\vec 0\\ne \\vec x\\in V_{n+1}"
},
{
"math_id": 137,
"text": "{\\mathcal P}=\\{\\langle \\vec x\\rangle \\mid \\vec x \\in V_{n+1}\\},\\ "
},
{
"math_id": 138,
"text": "{\\mathcal G}=\\{ \\text{2-dimensional subspaces of } V_{n+1}\\},\\ "
},
{
"math_id": 139,
"text": "P_n(K)=({\\mathcal P},{\\mathcal G})\\ "
},
{
"math_id": 140,
"text": "(k+1)"
},
{
"math_id": 141,
"text": " V_{n+1}"
},
{
"math_id": 142,
"text": "k"
},
{
"math_id": 143,
"text": "P_n(K)"
},
{
"math_id": 144,
"text": "\\;n>3\\;"
},
{
"math_id": 145,
"text": "(n-1)"
},
{
"math_id": 146,
"text": "\\mathcal Q"
},
{
"math_id": 147,
"text": "\\mathcal P,"
},
{
"math_id": 148,
"text": "\\langle\\vec x\\rangle \\in {\\mathcal P}"
},
{
"math_id": 149,
"text": "q(\\vec x)=0"
},
{
"math_id": 150,
"text": "\\mathcal Q=\\{\\langle\\vec x\\rangle \\in {\\mathcal P} \\mid q(\\vec x)=0\\}."
},
{
"math_id": 151,
"text": " P_2(K)"
},
{
"math_id": 152,
"text": "\\;q(\\vec x)=x_1x_2-x^2_3\\;"
},
{
"math_id": 153,
"text": "\\;q(\\vec x)=x_1x_2\\;"
},
{
"math_id": 154,
"text": "x_1=0"
},
{
"math_id": 155,
"text": "x_2=0"
},
{
"math_id": 156,
"text": "\\langle(0,0,1)^\\text{T}\\rangle"
},
{
"math_id": 157,
"text": "\\mathcal Q\\ne \\emptyset"
},
{
"math_id": 158,
"text": "P=\\langle\\vec p\\rangle \\in {\\mathcal P}"
},
{
"math_id": 159,
"text": "P^\\perp:=\\{\\langle\\vec x\\rangle\\in {\\mathcal P} \\mid f(\\vec p,\\vec x)=0\\}"
},
{
"math_id": 160,
"text": "P"
},
{
"math_id": 161,
"text": "\\;f(\\vec p,\\vec x)=0\\;"
},
{
"math_id": 162,
"text": "\\vec x "
},
{
"math_id": 163,
"text": "P^\\perp=\\mathcal P"
},
{
"math_id": 164,
"text": "\\;f(\\vec p,\\vec x)\\ne 0\\;"
},
{
"math_id": 165,
"text": "P^\\perp"
},
{
"math_id": 166,
"text": "{\\mathcal P}"
},
{
"math_id": 167,
"text": "g"
},
{
"math_id": 168,
"text": " \\mathcal Q"
},
{
"math_id": 169,
"text": "g\\cap \\mathcal Q=\\emptyset\\;"
},
{
"math_id": 170,
"text": " g \\subset \\mathcal Q\\; "
},
{
"math_id": 171,
"text": "|g\\cap \\mathcal Q|=1\\; "
},
{
"math_id": 172,
"text": "|g\\cap \\mathcal Q|=2\\; "
},
{
"math_id": 173,
"text": "\\mathcal Q "
},
{
"math_id": 174,
"text": "\\;U=\\langle\\vec u\\rangle\\;"
},
{
"math_id": 175,
"text": " \\;V= \\langle\\vec v\\rangle\\;"
},
{
"math_id": 176,
"text": "\\;q(\\vec u)=0\\;"
},
{
"math_id": 177,
"text": "q(x\\vec u+\\vec v)=q(x\\vec u)+q(\\vec v)+f(x\\vec u,\\vec v)=q(\\vec v)+xf(\\vec u,\\vec v)\\; ."
},
{
"math_id": 178,
"text": "g\\subset U^\\perp"
},
{
"math_id": 179,
"text": "f(\\vec u,\\vec v)=0"
},
{
"math_id": 180,
"text": "\\; q(x\\vec u+\\vec v)=q(\\vec v)\\; "
},
{
"math_id": 181,
"text": "x\\in K"
},
{
"math_id": 182,
"text": "\\;q(x\\vec u+\\vec v)=0\\;"
},
{
"math_id": 183,
"text": "\\;q(x\\vec u+\\vec v)\\ne 0\\;"
},
{
"math_id": 184,
"text": "g\\not\\subset U^\\perp"
},
{
"math_id": 185,
"text": "f(\\vec u,\\vec v)\\ne 0"
},
{
"math_id": 186,
"text": "\\;q(x\\vec u+\\vec v)=q(\\vec v)+xf(\\vec u,\\vec v)= 0\\;"
},
{
"math_id": 187,
"text": "x"
},
{
"math_id": 188,
"text": "|g\\cap \\mathcal Q|=2"
},
{
"math_id": 189,
"text": "P\\in \\mathcal Q"
},
{
"math_id": 190,
"text": "g\\subset P^\\perp"
},
{
"math_id": 191,
"text": "K=\\R"
},
{
"math_id": 192,
"text": " \\C"
},
{
"math_id": 193,
"text": "\\mathcal R:=\\{P\\in{\\mathcal P} \\mid P^\\perp=\\mathcal P\\}"
},
{
"math_id": 194,
"text": "\\mathcal R"
},
{
"math_id": 195,
"text": "\\mathcal S:=\\mathcal R\\cap\\mathcal Q"
},
{
"math_id": 196,
"text": "\\mathcal R=\\mathcal S"
},
{
"math_id": 197,
"text": "\\mathcal S=\\emptyset"
},
{
"math_id": 198,
"text": "f(\\vec x,\\vec y)=x_1y_2+x_2y_1-2x_3y_3\\; . "
},
{
"math_id": 199,
"text": "\\mathcal P"
},
{
"math_id": 200,
"text": "\\mathcal R=\\mathcal S=\\empty"
},
{
"math_id": 201,
"text": "f(\\vec x,\\vec y)=x_1y_2+x_2y_1\\; "
},
{
"math_id": 202,
"text": "\\mathcal R=\\langle(0,0,1)^\\text{T}\\rangle\\notin \\mathcal Q"
},
{
"math_id": 203,
"text": "\\mathcal R\\ne \\mathcal S=\\empty \\; ."
},
{
"math_id": 204,
"text": " S=\\empty"
},
{
"math_id": 205,
"text": "f(\\vec x,\\vec y)=x_1y_2+x_2y_1\\; "
},
{
"math_id": 206,
"text": "\\mathcal R=\\langle(0,0,1)^\\text{T}\\rangle=\\mathcal S\\; ,"
},
{
"math_id": 207,
"text": "P\\notin \\mathcal Q\\cup {\\mathcal R}\\;"
},
{
"math_id": 208,
"text": "\\sigma_P"
},
{
"math_id": 209,
"text": "\\sigma_P(\\mathcal Q)=\\mathcal Q"
},
{
"math_id": 210,
"text": "P\\notin \\mathcal Q\\cup {\\mathcal R}"
},
{
"math_id": 211,
"text": "\\varphi: \\vec x \\rightarrow \\vec x-\\frac{f(\\vec p,\\vec x)}{q(\\vec p)}\\vec p"
},
{
"math_id": 212,
"text": "\\varphi"
},
{
"math_id": 213,
"text": "\\; \\varphi: \\vec x \\rightarrow \\vec x-2\\frac{f(\\vec p,\\vec x)}{f(\\vec p,\\vec p)}\\vec p\\; "
},
{
"math_id": 214,
"text": "\\; \\varphi(\\vec p)=-\\vec p"
},
{
"math_id": 215,
"text": "\\; \\varphi(\\vec x)=\\vec x\\; "
},
{
"math_id": 216,
"text": "\\langle\\vec x\\rangle \\in P^\\perp"
},
{
"math_id": 217,
"text": "{\\mathcal R}"
},
{
"math_id": 218,
"text": "\\;\\mathcal U\\;"
},
{
"math_id": 219,
"text": "\\;\\mathcal U\\subset\\mathcal Q\\;"
},
{
"math_id": 220,
"text": "m"
},
{
"math_id": 221,
"text": "\\;i:=m+1\\;"
},
{
"math_id": 222,
"text": "i\\le \\frac{n+1}{2}"
},
{
"math_id": 223,
"text": " P_n(K), n\\ge 2"
},
{
"math_id": 224,
"text": "i=1"
},
{
"math_id": 225,
"text": "i=2"
},
{
"math_id": 226,
"text": "P_2(K)"
},
{
"math_id": 227,
"text": "\\;p(\\xi)=\\xi^2+a_0\\xi+b_0\\;"
},
{
"math_id": 228,
"text": "\\;q(\\vec x)=x^2_1+a_0x_1x_2+b_0x^2_2-x_3x_4\\;"
},
{
"math_id": 229,
"text": "P_3(K)"
},
{
"math_id": 230,
"text": "\\;p(\\xi)=\\xi^2+1\\;"
},
{
"math_id": 231,
"text": "\\C"
},
{
"math_id": 232,
"text": "\\;q(\\vec x)=x_1x_2+x_3x_4\\;"
},
{
"math_id": 233,
"text": "x^2+ax+b=0, \\ a,b \\in K"
}
] |
https://en.wikipedia.org/wiki?curid=145570
|
14557176
|
Preisach model of hysteresis
|
Model of magnetic hysteresis
In electromagnetism, the Preisach model of hysteresis is a model of magnetic hysteresis. Originally, it modeled hysteresis, the relationship between the magnetic field and the magnetization of a magnetic material, as the parallel connection of independent relay "hysterons". It was first suggested in 1935 by Ferenc (Franz) Preisach in the German academic journal "Zeitschrift für Physik". In the field of ferromagnetism, the Preisach model is sometimes thought to describe a ferromagnetic material as a network of small independently acting domains, each magnetized to a value of either formula_0 or formula_1. A sample of iron, for example, may have evenly distributed magnetic domains, resulting in a net magnetic moment of zero.
Mathematically similar models seem to have been independently developed in other fields of science and engineering. One notable example is the model of capillary hysteresis in porous materials developed by Everett and co-workers. Since then, following the work of people like M. Krasnoselkii, A. Pokrovskii, A. Visintin, and I.D. Mayergoyz, the model has become widely accepted as a general mathematical tool for the description of hysteresis phenomena of different kinds.
Nonideal relay.
The relay hysteron is the fundamental building block of the Preisach model. It is described as a two-valued operator denoted by formula_2. Its I/O map takes the form of a loop, as shown:
Above, for a relay of magnitude 1, formula_3 defines the "switch-off" threshold and formula_4 defines the "switch-on" threshold.
Graphically, if formula_5 is less than formula_3, the output formula_6 is "low" or "off". As we increase formula_5, the output remains low until formula_5 reaches formula_4, at which point the output switches "on". Further increasing formula_5 produces no change. When decreasing formula_5, formula_6 does not go low again until formula_5 reaches formula_3. It is apparent that the relay operator formula_2 traces the path of a loop, and its next state depends on its past state.
Mathematically, the output of formula_2 is expressed as:
formula_7
where formula_8 if the last time formula_5 was outside of the boundaries formula_9, it was in the region of formula_10; and formula_11 if the last time formula_5 was outside of the boundaries formula_9, it was in the region of formula_12.
This definition of the hysteron shows that the current value formula_6 of the complete hysteresis loop depends upon the history of the input variable formula_5.
Discrete Preisach model.
The Preisach model consists of many relay hysterons connected in parallel, given weights, and summed. This can be visualized by a block diagram:
Each of these relays has different formula_3 and formula_4 thresholds and is scaled by formula_13. With increasing formula_14, the true hysteresis curve is approximated better.
In the limit as formula_14 approaches infinity, we obtain the continuous Preisach model.
formula_15
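A minimal Python sketch of the discrete model above: each relay formula_2 keeps its own state, and the output is the weighted sum of the relay outputs, following the relay definition given earlier (with output values 0 and 1). Initializing all relays in the "off" state, the function names, and the sample thresholds are illustrative assumptions.

```python
import numpy as np

def relay(x, alpha, beta, state):
    # Non-ideal relay R_{alpha,beta}: switches on at x >= beta, off at x <= alpha,
    # and keeps its previous state in between (alpha < beta).
    if x >= beta:
        return 1
    if x <= alpha:
        return 0
    return state

def preisach(inputs, alphas, betas, weights):
    # Discrete Preisach model: weighted sum of N relays driven by the same input history.
    states = np.zeros(len(alphas), dtype=int)
    outputs = []
    for x in inputs:
        for n in range(len(alphas)):
            states[n] = relay(x, alphas[n], betas[n], states[n])
        outputs.append(float(np.dot(weights, states)))
    return outputs

# Example: 3 relays driven by a triangular input sweep.
alphas = np.array([-0.5, -0.2, 0.1])
betas = np.array([0.2, 0.4, 0.6])
weights = np.array([0.5, 0.3, 0.2])
sweep = np.concatenate([np.linspace(-1, 1, 11), np.linspace(1, -1, 11)])
print(preisach(sweep, alphas, betas, weights))
```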
Preisach plane.
One of the easiest ways to look at the Preisach model is using a geometric interpretation.
Consider a plane of coordinates formula_16. On this plane, each point formula_17 is mapped to a specific relay hysteron formula_18. Each relay can be plotted on this so-called Preisach plane with its formula_16 values. Depending on their distribution on the Preisach plane, the relay hysterons can represent hysteresis with good accuracy.
We consider only the half-plane formula_19 as any other case does not have a physical equivalent in nature.
Next, we take a specific point on the half plane and build a right triangle by drawing two lines parallel to the axes, both from the point to the line formula_20.
We now introduce the Preisach density function, denoted formula_21. This function describes the number of relay hysterons with each distinct pair of values formula_17. By convention, formula_22 outside the right triangle.
A modified formulation of the classical Preisach model has been presented, allowing an analytical expression of the Everett function. This makes the model considerably faster and especially adequate for inclusion in electromagnetic field computation or electric circuit analysis codes.
Vector Preisach model.
The vector Preisach model is constructed as the linear superposition of scalar models. To account for the uniaxial anisotropy of the material, the Everett functions are expanded by Fourier coefficients. In this case, the measured and simulated curves are in very good agreement.
Another approach uses a different type of relay hysteron: closed surfaces defined on the 3D input space. In general, a spherical hysteron is used for vector hysteresis in 3D, and a circular hysteron is used for vector hysteresis in 2D.
Applications.
The Preisach model has been applied to model hysteresis in a wide variety of fields, including the study of irreversible changes in soil hydraulic conductivity as a result of saline and sodic conditions, the modeling of soil water retention, and the effect of stress and strains on soil and rock structures.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "-h"
},
{
"math_id": 2,
"text": "R_{\\alpha,\\beta}"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "y(x)=\\begin{cases}\n 1&\\mbox{ if }x\\geq\\beta\\\\\n 0&\\mbox{ if }x\\leq\\alpha \\\\\n k&\\mbox{ if }\\alpha<x<\\beta\n\\end{cases}"
},
{
"math_id": 8,
"text": "k=0"
},
{
"math_id": 9,
"text": "\\alpha<x<\\beta"
},
{
"math_id": 10,
"text": "x\\leq\\alpha"
},
{
"math_id": 11,
"text": "k=1"
},
{
"math_id": 12,
"text": "x\\geq\\beta"
},
{
"math_id": 13,
"text": "\\mu"
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "y(t)=\\iint_{\\alpha \\geq \\beta}\\mu(\\alpha,\\beta)R_{\\alpha\\beta}x(t) d \\alpha d \\beta"
},
{
"math_id": 16,
"text": "(\\alpha,\\beta)"
},
{
"math_id": 17,
"text": "(\\alpha_{i},\\beta_{i})"
},
{
"math_id": 18,
"text": "R_{\\alpha_{i},\\beta_{i}}"
},
{
"math_id": 19,
"text": "\\alpha<\\beta"
},
{
"math_id": 20,
"text": "\\alpha=\\beta"
},
{
"math_id": 21,
"text": "\\mu(\\alpha,\\beta)"
},
{
"math_id": 22,
"text": "\\mu(\\alpha,\\beta)=0"
}
] |
https://en.wikipedia.org/wiki?curid=14557176
|
14558397
|
Surface phonon
|
In solid state physics, a surface phonon is the quantum of a lattice vibration mode associated with a solid surface. Similar to the ordinary lattice vibrations in a bulk solid (whose quanta are simply called phonons), the nature of surface vibrations depends on details of periodicity and symmetry of a crystal structure. Surface vibrations are however distinct from the bulk vibrations, as they arise from the abrupt termination of a crystal structure at the surface of a solid. Knowledge of surface phonon dispersion gives important information related to the amount of surface relaxation, the existence and distance between an adsorbate and the surface, and information regarding presence, quantity, and type of defects existing on the surface.
In modern semiconductor research, surface vibrations are of interest as they can couple with electrons and thereby affect the electrical and optical properties of semiconductor devices. They are most relevant for devices where the electronic active area is near a surface, as is the case in two-dimensional electron systems and in quantum dots. As a specific example, the decreasing size of CdSe quantum dots was found to result in increasing frequency of the surface vibration resonance, which can couple with electrons and affect their properties.
Two methods are used for modeling surface phonons. One is the "slab method", which approaches the problem using lattice dynamics for a solid with parallel surfaces, and the other is based on Green's functions. Which of these approaches is employed is based upon what type of information is required from the computation. For broad surface phonon phenomena, the conventional lattice dynamics method can be used; for the study of lattice defects, resonances, or phonon state density, the Green's function method yields more useful results.
Quantum description.
Surface phonons are represented by a wave vector along the surface, q, and an energy corresponding to a particular vibrational mode frequency, ω. The surface Brillouin zone (SBZ) for phonons consists of two dimensions, rather than three for bulk. For example, the face-centered cubic (100) surface is described by the directions ΓX and ΓM, referring to the [110] direction and [100] direction, respectively.
The description of the atomic displacements by the harmonic approximation assumes that the force on an atom is a function of its displacement with respect to neighboring atoms, i.e. Hooke's law holds. Higher order anharmonicity terms can be accounted by using perturbative methods.
The positions are then given by the relation
formula_0
where i is the place where the atom would sit if it were in equilibrium, mi is the mass of the atom that should sit at i, α is the direction of its displacement, ui,α is the amount of displacement of the atom from i, and formula_1 are the force constants which come from the crystal potential.
The solution to this gives the atomic displacement due to the phonon, which is given by
formula_2
where the atomic position "i" is described by "l", "m", and "κ", which represent the specific atomic layer, "l", the particular unit cell it is in, "m", and the position of the atom with respect to its own unit cell, "κ". The term "x"("l","m") is the position of the unit cell with respect to some chosen origin.
Normal modes of vibration and types of surfaces phonons.
Phonons can be labeled by the manner in which the vibrations occur. If the vibration occurs lengthwise in the direction of the wave and involves contraction and relaxation of the lattice, the phonon is called a "longitudinal phonon". Alternatively, the atoms may vibrate side-to-side, perpendicular to the direction of wave propagation; this is known as a "transverse phonon". In general, transverse vibrations tend to have smaller frequencies than longitudinal vibrations.
The wavelength of the vibration also lends itself to a second label. "Acoustic" branch phonons have a wavelength of vibration that is much bigger than the atomic separation so that the wave travels in the same manner as a sound wave; "optical" phonons can be excited by optical radiation in the infrared wavelength or longer. Phonons take on both labels such that transverse acoustic and optical phonons are denoted TA and TO, respectively; likewise, longitudinal acoustic and optical phonons are denoted LA and LO.
The type of surface phonon can be characterized by its dispersion in relation to the bulk phonon modes of the crystal. Surface phonon mode branches may occur in specific parts of the SBZ or extend entirely across it. These modes can show up both in the bulk phonon dispersion bands as what is known as a resonance or outside these bands as a pure surface phonon mode. Thus surface phonons can be purely surface existing vibrations, or simply the expression of bulk vibrations in the presence of a surface, known as a surface-excess property.
A particular mode, the Rayleigh phonon mode, exists across the entire BZ and is known by special characteristics, including a linear frequency versus wave number relation near the SBZ center.
Experiment.
Two of the more common methods for studying surface phonons are electron energy loss spectroscopy and helium atom scattering.
Electron energy loss spectroscopy.
The technique of electron energy loss spectroscopy (EELS) is based upon the fact that electron energy decreases upon interaction with matter. Since low energy electrons interact mainly with the surface, the loss is due to scattering by surface phonons, which have energies in the range of 10−3 eV to 1 eV.
In EELS, an electron of known energy is incident upon the crystal, a phonon of some wave number, q, and frequency, ω, is then created, and the outgoing electron's energy and wave number are measured. If the incident electron energy, Ei, and wave number, ki, are chosen for the experiment and the scattered electron energy, Es, and wave number, ks, are known by measurement, as well as the angles with respect to the normal for the incident and scattered electrons, θi and θs, then values of q throughout the BZ can be obtained. Energy and momentum for the electron have the following relation,
formula_3
where "m" is the mass of an electron. Energy and momentum must be conserved, so the following relations must be true of the energy and momentum exchange throughout the encounter:
formula_4
formula_5
where G is a reciprocal lattice vector that ensures that q falls in the first BZ and the angles θi and θs are measured with respect to the normal to the surface.
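As an illustration of these conservation relations, the following Python sketch recovers the phonon energy and the surface wave number from the measured electron energies and angles, using the free-electron relation between energy and wave number given above. The function name, the unit choices, and the default assumption G = 0 (phonon in the first Brillouin zone) are illustrative.

```python
import numpy as np
from scipy.constants import hbar, m_e, e

def phonon_from_eels(E_i_eV, E_s_eV, theta_i_deg, theta_s_deg, G=0.0):
    # Energies in eV, angles in degrees measured from the surface normal.
    # Returns the phonon energy (meV) and the surface wave number q (1/m).
    E_i = E_i_eV * e
    E_s = E_s_eV * e
    k_i = np.sqrt(2.0 * m_e * E_i) / hbar          # E = hbar^2 k^2 / (2 m)
    k_s = np.sqrt(2.0 * m_e * E_s) / hbar
    hbar_omega_meV = abs(E_i - E_s) / e * 1e3      # energy conservation
    delta_K = k_i * np.sin(np.radians(theta_i_deg)) - k_s * np.sin(np.radians(theta_s_deg))
    q = delta_K - G                                # momentum conservation parallel to the surface
    return hbar_omega_meV, q
```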
The dispersion is often shown with q given in units of cm−1, in which 100 cm−1 = 12.41 meV. The electron incident angles for most EELS phonon study chambers can range from 135° − θs to 90° − θf, for θf ranging between 55° and 65°.
Helium atom scattering.
Helium is the best suited atom to be used for surface scattering techniques, as it has a low enough mass that multiple phonon scattering events are unlikely, and its closed valence electron shell makes it inert, unlikely to bond with the surface upon which it impinges. In particular, 4He is used because this isotope allows for very precise velocity control, important for obtaining maximum resolution in the experiment.
There are two main techniques used for helium atom scattering studies. One is a so-called time-of-flight measurement which consists of sending pulses of He atoms at the crystal surface and then measuring the scattered atoms after the pulse. The He beam velocity ranges from 644 to 2037 m/s. The other involves measuring the momentum of the scattered He atoms by a LiF grating monochromator.
It is important to note that the He nozzle beam source used in many He scattering experiments poses some risk of error, as it adds components to the velocity distributions that can mimic phonon peaks; particularly in time-of-flight measurements, these peaks can look very much like inelastic phonon peaks. Thus, these false peaks have come to be known by the names "deceptons" or "phonions".
Comparison of techniques.
EELS and helium scattering techniques each have their own particular merits that warrant the use of either depending on the sample type, resolution desired, etc. Helium scattering has a higher resolution than EELS, with a resolution of 0.5–1 meV compared to 7 meV. However, He scattering is available only for energy differences, Ei−Es, of less than about 30 meV, while EELS can be used for up to 500 meV.
During He scattering, the He atom does not actually penetrate into the material, being scattered only once at the surface; in EELS, the electron can go as deep as a few monolayers, scattering more than once during the course of the interaction. Thus, the resulting data is easier to understand and analyze for He atom scattering than for EELS, since there are no multiple collisions to account for.
He beams can deliver a higher flux than the electron beams used in EELS, but the detection of electrons is easier than the detection of He atoms. He scattering is also more sensitive to very low frequency vibrations, on the order of 1 meV. This is the reason for its high resolution in comparison to EELS.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " m_i \\ddot u_{i \\alpha} = - \\sum _{j, \\beta } \\phi_{i \\alpha , j \\beta } u_{j, \\beta } "
},
{
"math_id": 1,
"text": " \\phi_{i \\alpha,j \\beta} "
},
{
"math_id": 2,
"text": " u_{\\ell,m, \\kappa , \\alpha} = \\sqrt{(m_\\kappa )} v_{\\ell, \\kappa , \\alpha } (\\omega , q) e^{i[\\omega t - q x (\\ell,m)]} "
},
{
"math_id": 3,
"text": " E = \\frac { \\hbar^2 k^2} {2m} "
},
{
"math_id": 4,
"text": " \\Delta E = | E_i - E_s | = \\hbar \\omega "
},
{
"math_id": 5,
"text": " \\Delta K = k_i (\\sin \\theta_i ) - k_s (\\sin \\theta_s) = \\mathbf{G} + \\mathbf{q}"
}
] |
https://en.wikipedia.org/wiki?curid=14558397
|
14559354
|
Artstein's theorem
|
Theorem in control theory
Artstein's theorem states that a nonlinear dynamical system in the control-affine form
formula_0
has a differentiable control-Lyapunov function if and only if it admits a regular stabilizing feedback "u"("x"), that is a locally Lipschitz function on Rn\{0}.
The original 1983 proof by Zvi Artstein proceeds by a nonconstructive argument. In 1989 Eduardo D. Sontag provided a constructive version of this theorem explicitly exhibiting the feedback.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\dot{\\mathbf{x}} = \\mathbf{f(x)} + \\sum_{i=1}^m \\mathbf{g}_i(\\mathbf{x})u_i"
}
] |
https://en.wikipedia.org/wiki?curid=14559354
|
14562492
|
Reptation
|
Movement of entangled polymer chains
A peculiarity of thermal motion of very long linear macromolecules in "entangled" polymer melts or concentrated polymer solutions is reptation. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. Similar phenomena also occur in proteins.
Two closely related concepts are reptons and entanglement. A repton is a mobile point residing in the cells of a lattice, connected by bonds. Entanglement means the topological restriction of molecular motion by other chains.
Theory and mechanism.
Reptation theory describes the effect of polymer chain entanglements on the relationship between molecular mass and chain relaxation time. The theory predicts that, in entangled systems, the relaxation time τ is proportional to the cube of molecular mass, "M": τ ~ "M" 3. The prediction of the theory can be arrived at by a relatively simple argument. First, each polymer chain is envisioned as occupying a tube of length "L", through which it may move with snake-like motion (creating new sections of tube as it moves). Furthermore, if we consider a time scale comparable to τ, we may focus on the overall, global motion of the chain. Thus, we define the tube mobility as
μtube = "v"/"f",
where "v" is the velocity of the chain when it is pulled by a force, "f". μtube will be inversely proportional to the degree of polymerization (and thus also inversely proportional to chain weight).
The diffusivity of the chain through the tube may then be written as
"D"tube = "k"B"T" μtube.
By then recalling that in 1-dimension the mean squared displacement due to Brownian motion is given by
s("t")2 = 2"D"tube "t",
we obtain
s("t")2 = 2"k"B"T" μtube "t".
The time necessary for a polymer chain to displace the length of its original tube is then
"t" = "L"2/(2"k"B"T" μtube).
By noting that this time is comparable to the relaxation time, we establish that τ~"L"2/μtube. Since the length of the tube is proportional to the degree of polymerization, and μtube is inversely proportional to the degree of polymerization, we observe that τ~("DP"n)3 (and so τ~"M" 3).
From the preceding analysis, we see that molecular mass has a very strong effect on relaxation time in entangled polymer systems. Indeed, this is significantly different from the untangled case, where relaxation time is observed to be proportional to molecular mass. This strong effect can be understood by recognizing that, as chain length increases, the number of tangles present will dramatically increase. These tangles serve to reduce chain mobility. The corresponding increase in relaxation time can result in viscoelastic behavior, which is often observed in polymer melts. Note that the polymer’s zero-shear viscosity gives an approximation of the actual observed dependency, τ ~ "M" 3.4; this relaxation time has nothing to do with the reptation relaxation time.
Models.
Entangled polymers are characterized with effective internal scale, commonly known as "the length of macromolecule between adjacent entanglements" formula_0.
Entanglements with other polymer chains restrict polymer chain motion to a thin virtual "tube" passing through the restrictions. Without breaking polymer chains to allow the restricted chain to pass through it, the chain must be pulled or flow through the restrictions. The mechanism for movement of the chain through these restrictions is called reptation.
In the blob model, the polymer chain is made up of formula_1 Kuhn lengths of individual length formula_2. The chain is assumed to form blobs between each entanglement, containing formula_3 Kuhn length segments in each. The mathematics of random walks can show that the average end-to-end distance of a section of a polymer chain, made up of formula_3 Kuhn lengths is
formula_4. Therefore if there are formula_1 total Kuhn lengths, and formula_5 blobs on a particular chain:
formula_6
The total end-to-end length of the restricted chain formula_7 is then:
formula_8
This is the average length a polymer molecule must diffuse to escape from its particular tube, and so the characteristic time for this to happen can be calculated using diffusive equations. A classical derivation gives the reptation time formula_9:
formula_10
where formula_11 is the coefficient of friction on a particular polymer chain, formula_12 is Boltzmann's constant, and formula_13 is the absolute temperature.
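For illustration, the blob-model expressions above can be evaluated with the minimal Python sketch below; the numerical values for the segment count, segment length, friction coefficient and temperature are assumed placeholders rather than data for any particular polymer.

```python
# Minimal sketch of the blob-model expressions for tube length and reptation time.
from math import sqrt

k_B = 1.380649e-23  # Boltzmann constant, J/K

def tube_length(n, n_e, l):
    """Contour length of the restricted chain, L = n * l / sqrt(n_e)."""
    return n * l / sqrt(n_e)

def reptation_time(n, n_e, l, mu, T):
    """Reptation time t = l**2 * n**3 * mu / (n_e * k_B * T)."""
    return l**2 * n**3 * mu / (n_e * k_B * T)

# Illustrative, assumed parameters: 10^4 Kuhn segments of 1 nm,
# 100 segments per entanglement, friction coefficient 1e-9 kg/s, T = 300 K.
n, n_e, l = 1e4, 100, 1e-9
mu, T = 1e-9, 300.0
print(tube_length(n, n_e, l))           # about 1e-6 m
print(reptation_time(n, n_e, l, mu, T))
```

Doubling "n" in this sketch multiplies the reptation time by eight, reflecting the cubic dependence on chain length derived above.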
The linear macromolecules reptate if the length of macromolecule formula_14 is bigger than the critical entanglement molecular weight formula_15. formula_15 is 1.4 to 3.5 times formula_0. There is no reptation motion for polymers with formula_16, so that the point formula_15 is a point of dynamic phase transition.
Due to the reptation motion the coefficient of self-diffusion and conformational relaxation times of macromolecules depend on the length of macromolecule as formula_17 and formula_18, correspondingly.
The conditions of existence of reptation in the thermal motion of macromolecules of complex architecture (macromolecules in the form of branch, star, comb and others) have not been established yet.
The dynamics of shorter chains or of long chains at short times is usually described by the Rouse model.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M_{e}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "n_{e}"
},
{
"math_id": 4,
"text": "d=l \\sqrt{n_{e}}"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "A= \\dfrac{n}{n_{e}}"
},
{
"math_id": 7,
"text": "L"
},
{
"math_id": 8,
"text": "L=Ad = \\dfrac{nl\\sqrt{n_{e}}}{n_{e}} = \\dfrac{nl}{\\sqrt{n_{e}}}"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "t=\\dfrac{l^2 n^3 \\mu}{n_{e} k T}"
},
{
"math_id": 11,
"text": "\\mu"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": "M_{c}"
},
{
"math_id": 16,
"text": "M<M_{c}"
},
{
"math_id": 17,
"text": "M^{-2}"
},
{
"math_id": 18,
"text": "M^3"
}
] |
https://en.wikipedia.org/wiki?curid=14562492
|
14563
|
Integer
|
An integer is the number zero (0), a positive natural number (1, 2, 3, . . .), or the negation of a positive natural number (−1, −2, −3, . . .). The negations or additive inverses of the positive natural numbers are referred to as negative integers. The set of all integers is often denoted by the boldface Z or blackboard bold formula_0.
The set of natural numbers formula_1 is a subset of formula_0, which in turn is a subset of the set of all rational numbers formula_2, itself a subset of the real numbers formula_3. Like the set of natural numbers, the set of integers formula_0 is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5/4 are not.
The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers.
The computer representation of integers normally involves a group of binary digits (bits).
History.
The word integer comes from the Latin "integer" meaning "whole" or (literally) "untouched", from "in" ("not") plus "tangere" ("to touch"). "Entire" derives from the same origin via the French word "entier", which means both "entire" and "integer". Historically the term was used for a number that was a multiple of 1, or to the whole part of a mixed number. Only positive integers were considered, making the term synonymous with the natural numbers. The definition of integer expanded over time to include negative numbers as their usefulness was recognized. For example Leonhard Euler in his 1765 "Elements of Algebra" defined integers to include both positive and negative numbers.
The phrase "the set of the integers" was not used before the end of the 19th century, when Georg Cantor introduced the concept of infinite sets and set theory. The use of the letter Z to denote the set of integers comes from the German word "Zahlen" ("numbers") and has been attributed to David Hilbert. The earliest known use of the notation in a textbook occurs in Algèbre written by the collective Nicolas Bourbaki, dating to 1947. The notation was not adopted immediately, for example another textbook used the letter J and a 1960 paper used Z to denote the non-negative integers. But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers.
The symbol formula_0 is often annotated to denote various sets, with varying usage amongst different authors: formula_4,formula_5 or formula_6 for the positive integers, formula_7 or formula_8 for non-negative integers, and formula_9 for non-zero integers. Some authors use formula_10 for non-zero integers, while others use it for non-negative integers, or for {–1, 1} (the group of units of formula_0). Additionally, formula_11 is used to denote either the set of integers modulo "p" (i.e., the set of congruence classes of integers), or the set of "p"-adic integers.
The "whole numbers" were synonymous with the integers up until the early 1950s. In the late 1950s, as part of the New Math movement, American elementary school teachers began teaching that "whole numbers" referred to the natural numbers, excluding negative numbers, while "integer" included the negative numbers. The "whole numbers" remain ambiguous to the present day.
Algebraic properties.
Like the natural numbers, formula_0 is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers (and importantly, 0), formula_0, unlike the natural numbers, is also closed under subtraction.
The integers form a unital ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring formula_0.
formula_0 is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative).
The following table lists some of the basic properties of addition and multiplication for any integers "a", "b" and "c":
The first five properties listed above for addition say that formula_0, under addition, is an abelian group. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, formula_0 under addition is the "only" infinite cyclic group—in the sense that any infinite cyclic group is isomorphic to formula_0.
The first four properties listed above for multiplication say that formula_0 under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse (as is the case of the number 2), which means that formula_0 under multiplication is not a group.
All the rules from the above property table (except for the last), when taken together, say that formula_0 together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions are true in formula_0 for all values of variables, which are true in any unital commutative ring. Certain non-zero integers map to zero in certain rings.
The lack of zero divisors in the integers (last property in the table) means that the commutative ring formula_0 is an integral domain.
The lack of multiplicative inverses, which is equivalent to the fact that formula_0 is not closed under division, means that formula_0 is "not" a field. The smallest field containing the integers as a subring is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain. And back, starting from an algebraic number field (an extension of rational numbers), its ring of integers can be extracted, which includes formula_0 as its subring.
Although ordinary division is not defined on formula_0, the division "with remainder" is defined on them. It is called Euclidean division, and possesses the following important property: given two integers "a" and "b" with "b" ≠ 0, there exist unique integers "q" and "r" such that "a" = "q" × "b" + "r" and 0 ≤ "r" < |"b"|, where |"b"| denotes the absolute value of "b". The integer "q" is called the "quotient" and "r" is called the "remainder" of the division of "a" by "b". The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions.
The above says that formula_0 is a Euclidean domain. This implies that formula_0 is a principal ideal domain, and any positive integer can be written as the products of primes in an essentially unique way. This is the fundamental theorem of arithmetic.
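For illustration, a minimal Python sketch of the Euclidean division described above is given below (the function name is arbitrary); Python's built-in divmod guarantees 0 ≤ "r" < "b" only for positive "b", so a small adjustment is needed when the divisor is negative.

```python
def euclidean_division(a, b):
    """Return (q, r) with a == q*b + r and 0 <= r < abs(b), for b != 0."""
    if b == 0:
        raise ValueError("the divisor b must be nonzero")
    q, r = divmod(a, b)      # for b < 0, divmod returns a remainder r with b < r <= 0
    if r < 0:                # shift into the range 0 <= r < abs(b)
        q, r = q + 1, r - b
    return q, r

print(euclidean_division(7, -3))   # (-2, 1), since 7 == (-2)*(-3) + 1
print(euclidean_division(-7, 3))   # (-3, 2), since -7 == (-3)*3 + 2
```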
Order-theoretic properties.
formula_0 is a totally ordered set without upper or lower bound. The ordering of formula_0 is given by:
... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ...
An integer is "positive" if it is greater than zero, and "negative" if it is less than zero. Zero is defined as neither negative nor positive.
The ordering of integers is compatible with the algebraic operations in the following way:
Thus it follows that formula_0 together with the above ordering is an ordered ring.
The integers are the only nontrivial totally ordered abelian group whose positive elements are well-ordered. This is equivalent to the statement that any Noetherian valuation ring is either a field—or a discrete valuation ring.
Construction.
Traditional development.
In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers, zero, and the negations of the natural numbers. This can be formalized as follows. First construct the set of natural numbers according to the Peano axioms, call this formula_12. Then construct a set formula_13 which is disjoint from formula_12 and in one-to-one correspondence with formula_12 via a function formula_14. For example, take formula_13 to be the ordered pairs formula_15 with the mapping formula_16. Finally let 0 be some object not in formula_12 or formula_13, for example the ordered pair formula_17. Then the integers are defined to be the union formula_18.
The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example negation is defined as follows:
formula_19
The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic.
Equivalence classes of ordered pairs.
In modern set-theoretic mathematics, a more abstract construction allowing one to define arithmetical operations without any case distinction is often used instead. The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers ("a","b").
The intuition is that ("a","b") stands for the result of subtracting "b" from "a". To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule:
formula_20
precisely when
formula_21
Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers; by using [("a","b")] to denote the equivalence class having ("a","b") as a member, one has:
formula_22
formula_23
The negation (or additive inverse) of an integer is obtained by reversing the order of the pair:
formula_24
Hence subtraction can be defined as the addition of the additive inverse:
formula_25
The standard ordering on the integers is given by:
formula_26 if and only if formula_27
It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes.
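A minimal Python sketch of this construction, with illustrative function names, is shown below; each class is stored by the canonical representative of the form ("n",0) or (0,"n") discussed in the next paragraph, and the operations are implemented exactly as defined above.

```python
# Integers as equivalence classes of pairs (a, b) of naturals, standing for a - b.
def normalise(p):
    """Canonical representative of the class of p: of the form (n, 0) or (0, n)."""
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

def add(p, q):
    (a, b), (c, d) = p, q
    return normalise((a + c, b + d))

def mul(p, q):
    (a, b), (c, d) = p, q
    return normalise((a*c + b*d, a*d + b*c))

def neg(p):
    a, b = p
    return normalise((b, a))

def less(p, q):
    (a, b), (c, d) = p, q
    return a + d < b + c

two  = normalise((5, 3))   # the class of (5, 3), i.e. the integer 2
neg3 = normalise((1, 4))   # the class of (1, 4), i.e. the integer -3
print(add(two, neg3))      # (0, 1)  -> the integer -1
print(mul(two, neg3))      # (0, 6)  -> the integer -6
print(neg(two))            # (0, 2)  -> the integer -2
print(less(neg3, two))     # True, since -3 < 2
```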
Every equivalence class has a unique member that is of the form ("n",0) or (0,"n") (or both at once). The natural number "n" is identified with the class [("n",0)] (i.e., the natural numbers are embedded into the integers by the map sending "n" to [("n",0)]), and the class [(0,"n")] is denoted −"n" (this covers all remaining classes, and gives the class [(0,0)] a second time since −0 = 0).
Thus, [("a","b")] is denoted by
formula_28
If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity.
This notation recovers the familiar representation of the integers as {..., −2, −1, 0, 1, 2, ...}.
Some examples are:
formula_29
Other approaches.
In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines.
Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and, possibly, using natural numbers, which are assumed to be already constructed (using, say, the Peano approach).
There exist at least ten such constructions of signed integers. These constructions differ in several ways: the number of basic operations used for the construction, the number (usually, between 0 and 2) and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations, and the fact that these operations are free constructors or not, i.e., that the same integer can be represented using only one or many algebraic terms.
The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operation pairformula_30 that takes as arguments two natural numbers formula_31 and formula_32, and returns an integer (equal to formula_33). This operation is not free since the integer 0 can be written pair(0,0), or pair(1,1), or pair(2,2), etc. This technique of construction is used by the proof assistant Isabelle; however, many other tools use alternative construction techniques, notably those based upon free constructors, which are simpler and can be implemented more efficiently in computers.
Computer science.
An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation data types (or subsets) are denoted "int" or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.).
Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10).
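As a small illustration of the two's complement convention mentioned above, the following Python sketch converts between an 8-bit pattern and the signed value it represents; the 8-bit width and the helper names are arbitrary choices.

```python
def to_twos_complement(value, bits=8):
    """Bit pattern (as a non-negative int) representing `value` in two's complement."""
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern, bits=8):
    """Signed integer encoded by a two's-complement bit pattern of the given width."""
    sign_bit = 1 << (bits - 1)
    return pattern - (1 << bits) if pattern & sign_bit else pattern

print(format(to_twos_complement(-2), "08b"))  # 11111110
print(from_twos_complement(0b11111110))       # -2
```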
Cardinality.
The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is
(0, 1), (1, 2), (−1, 3), (2, 4), (−2, 5), (3, 6), ..., (1 − "k", 2"k" − 1), ("k", 2"k"), ...
More technically, the cardinality of formula_0 is said to equal ℵ0 (aleph-null). The pairing between elements of formula_0 and formula_1 is called a bijection.
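A short Python sketch of this pairing (with illustrative function names) sends the natural number 2"k" to the integer "k" and 2"k" − 1 to 1 − "k", matching the list above, and checks that the two directions are mutually inverse.

```python
def nat_to_int(n):
    """Image of the natural number n >= 1 under the pairing 1, 2, 3, 4, ... -> 0, 1, -1, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n // 2)

def int_to_nat(z):
    """Inverse direction of the pairing."""
    return 2 * z if z > 0 else 1 - 2 * z

print([nat_to_int(n) for n in range(1, 8)])                          # [0, 1, -1, 2, -2, 3, -3]
print(all(nat_to_int(int_to_nat(z)) == z for z in range(-50, 51)))   # True
```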
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
External links.
"This article incorporates material from Integer on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}"
},
{
"math_id": 1,
"text": "\\mathbb{N}"
},
{
"math_id": 2,
"text": "\\mathbb{Q}"
},
{
"math_id": 3,
"text": "\\mathbb{R}"
},
{
"math_id": 4,
"text": "\\mathbb{Z}^+"
},
{
"math_id": 5,
"text": "\\mathbb{Z}_+"
},
{
"math_id": 6,
"text": "\\mathbb{Z}^{>}"
},
{
"math_id": 7,
"text": "\\mathbb{Z}^{0+}"
},
{
"math_id": 8,
"text": "\\mathbb{Z}^{\\geq}"
},
{
"math_id": 9,
"text": "\\mathbb{Z}^{\\neq}"
},
{
"math_id": 10,
"text": "\\mathbb{Z}^{*}"
},
{
"math_id": 11,
"text": "\\mathbb{Z}_{p}"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "P^-"
},
{
"math_id": 14,
"text": "\\psi"
},
{
"math_id": 15,
"text": "(1,n)"
},
{
"math_id": 16,
"text": "\\psi = n \\mapsto (1,n)"
},
{
"math_id": 17,
"text": "(0,0)"
},
{
"math_id": 18,
"text": "P \\cup P^- \\cup \\{0\\}"
},
{
"math_id": 19,
"text": "\n-x = \\begin{cases}\n \\psi(x), & \\text{if } x \\in P \\\\\n \\psi^{-1}(x), & \\text{if } x \\in P^- \\\\\n 0, & \\text{if } x = 0\n\\end{cases}\n"
},
{
"math_id": 20,
"text": "(a,b) \\sim (c,d) "
},
{
"math_id": 21,
"text": "a + d = b + c. "
},
{
"math_id": 22,
"text": "[(a,b)] + [(c,d)] := [(a+c,b+d)]."
},
{
"math_id": 23,
"text": "[(a,b)]\\cdot[(c,d)] := [(ac+bd,ad+bc)]."
},
{
"math_id": 24,
"text": "-[(a,b)] := [(b,a)]."
},
{
"math_id": 25,
"text": "[(a,b)] - [(c,d)] := [(a+d,b+c)]."
},
{
"math_id": 26,
"text": "[(a,b)] < [(c,d)]"
},
{
"math_id": 27,
"text": "a+d < b+c."
},
{
"math_id": 28,
"text": "\\begin{cases} a - b, & \\mbox{if } a \\ge b \\\\ -(b - a), & \\mbox{if } a < b. \\end{cases}"
},
{
"math_id": 29,
"text": "\\begin{align}\n 0 &= [(0,0)] &= [(1,1)] &= \\cdots & &= [(k,k)] \\\\\n 1 &= [(1,0)] &= [(2,1)] &= \\cdots & &= [(k+1,k)] \\\\\n-1 &= [(0,1)] &= [(1,2)] &= \\cdots & &= [(k,k+1)] \\\\\n 2 &= [(2,0)] &= [(3,1)] &= \\cdots & &= [(k+2,k)] \\\\\n-2 &= [(0,2)] &= [(1,3)] &= \\cdots & &= [(k,k+2)].\n\\end{align}"
},
{
"math_id": 30,
"text": "(x,y)"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": "y"
},
{
"math_id": 33,
"text": "x-y"
}
] |
https://en.wikipedia.org/wiki?curid=14563
|
14563922
|
Nikodym set
|
In mathematics, a Nikodym set is a subset of the unit square in formula_0 with complement of Lebesgue measure zero (i.e. with an area of 1), such that, given any point in the set, there is a straight line that only intersects the set at that point. The existence of a Nikodym set was first proved by Otto Nikodym in 1927. Subsequently, constructions were found of Nikodym sets having continuum many exceptional lines for each point, and Kenneth Falconer found analogues in higher dimensions.
Nikodym sets are closely related to Kakeya sets (also known as Besicovitch sets).
The existence of Nikodym sets is sometimes compared with the Banach–Tarski paradox. There is, however, an important difference between the two: the Banach–Tarski paradox relies on non-measurable sets.
Mathematicians have also researched Nikodym sets over finite fields (as opposed to formula_1).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^2"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
}
] |
https://en.wikipedia.org/wiki?curid=14563922
|
14564979
|
Hypersonic flight
|
Flight at altitudes lower than 90km (56 mi) and at speeds above Mach 5
Hypersonic flight is flight through the atmosphere below altitudes of about 90 km (56 mi) at speeds greater than Mach 5, a speed where dissociation of air begins to become significant and high heat loads exist. Speeds over Mach 25 have been achieved below the thermosphere as of 2020.
Hypersonic vehicles are able to maneuver through the atmosphere in a non-parabolic trajectory, but their aerodynamic heat loads need to be managed (see figure to the right).
<templatestyles src="Template:TOC limit/styles.css" />
History.
The first manufactured object to achieve hypersonic flight was the two-stage Bumper rocket, consisting of a WAC Corporal second stage set on top of a V-2 first stage. In February 1949, at White Sands, the rocket reached a speed of , or about Mach 6.7. The vehicle, however, burned on atmospheric re-entry, and only charred remnants were found. In April 1961, Russian Major Yuri Gagarin became the first human to travel at hypersonic speed, during the world's first piloted orbital flight. Soon after, in May 1961, Alan Shepard became the first American and second person to fly hypersonic when his capsule reentered the atmosphere at a speed above Mach 5 at the end of his suborbital flight over the Atlantic Ocean.
In November 1961, Air Force Major Robert White flew the X-15 research aircraft at speeds over Mach 6.
On 3 October 1967, in California, an X-15 reached Mach 6.7.
The reentry problem of a space vehicle was extensively studied. The NASA X-43A flew on scramjet for 10 seconds, and then glided for 10 minutes on its last flight in 2004. The Boeing X-51 Waverider flew on scramjet for 210 seconds in 2013, finally reaching Mach 5.1 on its fourth flight test. The hypersonic regime has since become the subject for further study during the 21st century, and strategic competition between the United States, India, Russia, and China.
Physics.
Stagnation point.
The stagnation point of air flowing around a body is the point where its local velocity falls to zero; from this point the air divides and flows around the body. A shock wave forms, which deflects the air from the stagnation point and insulates the flight body from the atmosphere. This can affect the lifting ability of a flight surface to counteract its drag and subsequent free fall.
In order to maneuver in the atmosphere at faster speeds than supersonic, the forms of propulsion can still be airbreathing systems, but a ramjet does not suffice for a system to attain Mach 5, as a ramjet slows down the airflow to subsonic. Some systems (waveriders) use a first stage rocket to boost a body into the hypersonic regime. Other systems (boost-glide vehicles) use scramjets after their initial boost, in which the speed of the air passing through the scramjet remains supersonic. Other systems (munitions) use a cannon for their initial boost.
High temperature effect.
Hypersonic flow is a high energy flow. The ratio of kinetic energy to the internal energy of the gas increases as the square of the Mach number. When this flow enters a boundary layer, there are high viscous effects due to the friction between air and the high-speed object. In this case, the high kinetic energy is converted in part to internal energy and gas energy is proportional to the internal energy. Therefore, hypersonic boundary layers are high temperature regions due to the viscous dissipation of the flow's kinetic energy. Another region of high temperature flow is the shock layer behind the strong bow shock wave. In the case of the shock layer, the flow's velocity decreases discontinuously as it passes through the shock wave. This results in a loss of kinetic energy and a gain of internal energy behind the shock wave. Due to high temperatures behind the shock wave, dissociation of molecules in the air becomes thermally active. For example, for air at T > , dissociation of diatomic oxygen into oxygen radicals is active: O2 → 2O. For T > , dissociation of diatomic nitrogen into N radicals is active: N2 → 2N. Consequently, in this temperature range a plasma forms: molecular dissociation followed by recombination of oxygen and nitrogen radicals produces nitric oxide, N2 + O2 → 2NO, which then dissociates and recombines to form ions: N + O → NO+ + e−.
Low density flow.
At standard sea-level conditions for air, the mean free path of air molecules is about formula_0. At higher altitudes the air is far less dense; at an altitude of the mean free path is formula_1. Because of this large mean free path, aerodynamic concepts, equations, and results based on the assumption of a continuum begin to break down, and aerodynamics must instead be considered from kinetic theory. This regime of aerodynamics is called low-density flow.
For a given aerodynamic condition, low-density effects depend on the value of a nondimensional parameter called the Knudsen number formula_2, defined as formula_3 where formula_4 is the typical length scale of the object considered. The value of the Knudsen number based on nose radius, formula_5, can be near one.
Hypersonic vehicles frequently fly at very high altitudes and therefore encounter low-density conditions. Hence, the design and analysis of hypersonic vehicles sometimes require consideration of low-density flow. New generations of hypersonic airplanes may spend a considerable portion of their mission at high altitudes, and for these vehicles, low-density effects will become more significant.
Thin shock layer.
The flow field between the shock wave and the body surface is called the shock layer. As the Mach number M increases, the angle of the resulting shock wave decreases. This Mach angle is described by the equation formula_6 where a is the speed of sound and v is the flow velocity. Since M=v/a, the equation becomes formula_7. Higher Mach numbers position the shock wave closer to the body surface, thus at hypersonic speeds, the shock wave lies extremely close to the body surface, resulting in a thin shock layer. At low Reynolds number, the boundary layer grows quite thick and merges with the shock wave, leading to a fully viscous shock layer.
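As a minimal numerical sketch (with arbitrary sample Mach numbers), the shrinking of the Mach angle with increasing speed can be seen directly:

```python
from math import asin, degrees

# Mach angle mu = asin(1/M), evaluated for a few sample Mach numbers.
for M in (1.2, 2, 5, 10, 25):
    print(f"M = {M:>4}: Mach angle ~ {degrees(asin(1.0 / M)):.1f} degrees")
# M = 5 gives about 11.5 degrees and M = 25 only about 2.3 degrees,
# so at hypersonic speeds the shock lies very close to the body surface.
```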
Viscous interaction.
The compressible flow boundary layer increases proportionately to the square of the Mach number, and inversely to the square root of the Reynolds number.
At hypersonic speeds, this effect becomes much more pronounced, due to the quadratic dependence on the Mach number. Since the boundary layer becomes so large, it interacts more viscously with the surrounding flow. The overall effect of this interaction is to create a much higher skin friction than normal, causing greater surface heat flow. Additionally, the surface pressure spikes, which results in a much larger aerodynamic drag coefficient. This effect is extreme at the leading edge and decreases as a function of length along the surface.
Entropy layer.
The entropy layer is a region of large velocity gradients caused by the strong curvature of the shock wave. The entropy layer begins at the nose of the aircraft and extends downstream close to the body surface. Downstream of the nose, the entropy layer interacts with the boundary layer which causes an increase in aerodynamic heating at the body surface. Although the shock wave at the nose at supersonic speeds is also curved, the entropy layer is only observed at hypersonic speeds because the magnitude of the curve is far greater at hypersonic speeds.
Propulsion.
Controlled detonation.
Researchers in China have used shock waves in a detonation chamber to compress ionized argon plasma waves moving at Mach 14. The waves are directed into magnetohydrodynamic (MHD) generators to create a current pulse that could be scaled up to gigawatt scale, given enough argon gas to feed into the MHD generators.
Rotating detonation.
A rotating detonation engine (RDE) might propel airframes in hypersonic flight; on 14 December 2023 engineers at GE Aerospace demonstrated their test rig, which is to combine an RDE with a ramjet/scramjet, in order to evaluate the regimes of rotating detonation combustion. The goal is to achieve sustainable turbine-based combined cycle (TBCC) propulsion systems, at speeds between Mach 1 and Mach 5.
Applications.
Shipping.
Transport consumes energy for three purposes: overcoming gravity, overcoming air/water friction, and achieving terminal velocity. The reduced trip times and higher flight altitudes reduce the first two, while increasing the third. Proponents claim that the net energy costs of hypersonic transport can be lower than those of conventional transport while slashing journey times.
Stratolaunch Roc can be used to launch hypersonic aircraft.
Hermeus demonstrated transition from turbojet aircraft engine operation to ramjet operation on 17 November 2022, thus avoiding the need to boost aircraft velocities by rocket or scramjet.
"See: SR-72, § Mayhem"
Weapons.
Two main types of hypersonic weapons are hypersonic cruise missiles and hypersonic glide vehicles. Hypersonic weapons, by definition, travel five or more times the speed of sound. Hypersonic cruise missiles, which are powered by scramjets, are limited to below ; hypersonic glide vehicles can travel higher.
Hypersonic vehicles are much slower than ballistic (i.e. sub-orbital or fractional orbital) missiles, because they travel in the atmosphere, and ballistic missiles travel in the vacuum above the atmosphere. However, they can use the atmosphere to manoeuvre, making them capable of large-angle deviations from a ballistic trajectory. A hypersonic glide vehicle is usually launched with a ballistic first stage, then deploys wings and switches to hypersonic flight as it re-enters the atmosphere, allowing the final stage to evade all existing nuclear missile defense systems, which were designed for ballistic-only missiles.
According to a CNBC July 2019 report (and now in a CNN 2022 report), Russia and China lead in hypersonic weapon development, trailed by the United States, and in this case the problem is being addressed in a joint program of the entire Department of Defense. To meet this development need, the US Army is participating in a joint program with the US Navy and Air Force, to develop a hypersonic glide body. India is also developing such weapons. France and Australia may also be pursuing the technology. Japan is acquiring both scramjet (Hypersonic Cruise Missile), and boost-glide weapons (Hyper Velocity Gliding Projectile).
China.
China's XingKong-2 (星空二号, "Starry-sky-2"), a waverider, had its first flight 3 August 2018.
In August 2021 China launched a boost-glide vehicle to low-earth orbit, circling Earth before maneuvering toward its target location, missing its target by two dozen miles. However China has responded that the vehicle was a spacecraft, and not a missile; there was a July 2021 test of a spaceplane, according to Chinese Foreign Ministry Spokesperson Zhao Lijian; Todd Harrison points out that an orbital trajectory would take 90 minutes for a spaceplane to circle Earth (which would defeat the mission of a weapon in hypersonic flight). The US DoD's headquarters (The Pentagon) reported in October 2021 that two such hypersonic launches have occurred; one launch did not demonstrate the accuracy needed for a precision weapon; the second launch by China demonstrated its ability to change trajectories, according to Pentagon reports on the 2021 competition in arms capabilities.
In 2022, China unveiled two more hypersonic models. An AI simulation has revealed that a Mach 11 aircraft can simply outrun a Mach 1.3 fighter attempting to engage it, while firing its missile at the "pursuing" fighter. This strategy entails a fire control system to accomplish an over-the-shoulder missile launch, which does not yet exist (2023).
In February 2023, the DF-27 covered in 12 minutes, according to leaked secret documents. The capability directly threatens Guam, and US Navy aircraft carriers.
Russia.
In 2016, Russia is believed to have conducted two successful tests of Avangard, a hypersonic glide vehicle. The third known test, in 2017, failed. In 2018, an Avangard was launched at the Dombarovskiy missile base, reaching its target at the Kura shooting range, a distance of . Avangard uses new composite materials which are to withstand temperatures of up to . The Avangard's environment at hypersonic speeds reaches such temperatures. Russia considered its carbon fiber solution to be unreliable, and replaced it with new composite materials. Two Avangard hypersonic glide vehicles (HGVs) will first be mounted on SS-19 ICBMs; on 27 December 2019 the weapon was first fielded to the Yasnensky Missile Division, a unit in the Orenburg Oblast. In an earlier report, Franz-Stefan Gady named the unit as the 13th Regiment/Dombarovskiy Division (Strategic Missile Force).
In 2021 Russia launched a 3M22 Zircon antiship missile over the White Sea, as part of a series of tests. "Kinzhal and Zircon (Tsirkon) are standoff strike weapons". A coordinated series of missile exercises, some of them hypersonic, was launched on 18 February 2022 in an apparent display of power projection. The launch platforms ranged from submarines in the Barents Sea in the Arctic to ships on the Black Sea to the south of Russia. The exercise included a RS-24 Yars ICBM, which was launched from the Plesetsk Cosmodrome in Northern Russia until it reached its destination on the Kamchatka Peninsula in Eastern Russia. Ukraine estimated that a 3M22 Zircon was used against it; the missile apparently did not exceed Mach 3 and was shot down over Kyiv on 7 February 2024.
United States.
These tests have prompted US responses in weapons development. By 2018, the AGM-183 and Long-Range Hypersonic Weapon were in development per John Hyten's USSTRATCOM statement on 8 August 2018 (UTC). At least one vendor is developing ceramics to handle the temperatures of hypersonics systems. There are over a dozen US hypersonics projects as of 2018, notes the commander of USSTRATCOM; from which a future hypersonic cruise missile is sought, perhaps by Q4 FY2021. The Long range precision fires (LRPF) CFT is supporting Space and Missile Defense Command's pursuit of hypersonics. Joint programs in hypersonics are informed by Army work; however, at the strategic level, the bulk of the hypersonics work remains at the Joint level. Long Range Precision Fires (LRPF) is an Army priority, and also a DoD joint effort. The Army and Navy's Common Hypersonic Glide Body (C-HGB) had a successful test of a prototype in March 2020. A wind tunnel for testing hypersonic vehicles was completed in Texas (2021). The Army's Land-based Hypersonic Missile "is intended to have a range of ". By adding rocket propulsion to a shell or glide body, the joint effort shaved five years off the likely fielding time for hypersonic weapon systems. Countermeasures against hypersonics will require sensor data fusion: both radar and infrared sensor tracking data will be required to capture the signature of a hypersonic vehicle in the atmosphere. There are also privately developed hypersonic systems, as well as critics.
DoD tested a Common Hypersonic Glide Body (C-HGB) in 2020. The Air Force dropped out of the tri-service hypersonic project in 2020, leaving only the Army and Navy on the C-HGB.
According to Air Force chief scientist, Dr. Greg Zacharias, the US anticipates having hypersonic weapons by the 2020s, hypersonic drones by the 2030s, and recoverable hypersonic drone aircraft by the 2040s. The focus of DoD development will be on air-breathing boost-glide hypersonics systems. Countering hypersonic weapons during their cruise phase will require radar with longer range, as well as space-based sensors, and systems for tracking and fire control. A mid-2021 report from the Congressional Research Service states the United States is "unlikely" to field an operational hypersonic glide vehicle (HGV) until 2023.
On 21 October 2021, the Pentagon stated that a test of a hypersonic glide body failed to complete because its booster failed; according to Lt. Cmdr. Timothy Gorman the booster was not part of the equipment under test, but the booster's failure mode will be reviewed to improve the test setup. The test occurred at Pacific Spaceport Complex – Alaska, on Kodiak island. Three rocketsondes at Wallops Island completed successful tests earlier that week, for the hypersonics effort. On 29 October 2021 the booster rocket for the Long-Range Hypersonic Weapon was successfully tested in a static test; the first-stage thrust vector control system was included. On 26 October 2022 Sandia National Laboratories conducted a successful test of hypersonic technologies at Wallops Island.
On 28 June 2024 DoD announced a successful recent end-to-end test of the US Army's Long-Range Hypersonic Weapon all-up round (AUR) and the US Navy's Conventional Prompt Strike. The missile was launched from the Pacific Missile Range Facility, Kauai, Hawaii.
In September 2021, and in March 2022, US vendors Raytheon/Northrop Grumman, and Lockheed respectively, first successfully tested their air-launched, scramjet-powered hypersonic cruise missiles, which were funded by DARPA. By September 2022 Raytheon was selected for fielding Hypersonic Attack Cruise Missile (HACM), a scramjet-powered hypersonic missile by FY2027.
In March 2024 Stratolaunch Roc launched TA-1, a vehicle which is nearing Mach 5 at in a powered flight, a risk-reduction exercise for TA-2. In a similar development Castelion launched its low-cost hypersonic platform in the Mojave desert, in March 2024.
Iran.
In 2022, Iran was believed to have constructed its first hypersonic missile. Amir Ali Hajizadeh, the commander of the Air Force of the Islamic Republic of Iran's Revolutionary Guards Corps, announced the construction of the Islamic Republic's first hypersonic missile. He noted: "This new missile was produced to counter air defense shields and passes through all missile defense systems and which represents a big leap in the generation of missiles", and said it has a speed above Mach 13, but Col. Rob Lodwick, the Pentagon spokesman for Middle East affairs, said that there are doubts in this regard.
In 2021, DoD was codifying flight test guidelines and the knowledge gained from Conventional Prompt Strike (CPS) and the other hypersonics programs, of which there were some 70 hypersonics R&D programs at the time. In 2021–2023, Heidi Shyu, the Under Secretary of Defense for Research and Engineering (USD(R&E)), pursued a program of annual rapid joint experiments, including hypersonics capabilities, to bring down their cost of development. A hypersonic test bed aims to bring the frequency of tests to one per week.
France, Australia, India, Germany, Japan, South Korea, North Korea, and Iran also have hypersonic weapon research programs.
Australia and the US have begun joint development of air-launched hypersonic missiles, as announced by a Pentagon statement on 30 November 2020. The development will build on the $54 million Hypersonic International Flight Research Experimentation (HIFiRE) under which both nations collaborated on over a 15-year period. Small and large companies will all contribute to the development of these hypersonic missiles, named SCIFIRE in 2022.
Defenses.
In May 2023 Ukraine shot down a Kinzhal with a Patriot. IBCS, or the Integrated Air and Missile Defense Battle Command System is an Integrated Air and Missile Defense (IAMD) capability designed to work with Patriots and other missiles.
Rand Corporation (28 September 2017) estimates there is less than a decade to prevent Hypersonic Missile proliferation.
In the same way that anti-ballistic missiles were developed as countermeasures to ballistic missiles, counter-countermeasures to hypersonics systems were not yet in development, as of 2019. "See the National Defense Space Architecture (2021), above." But by 2019, $157.4 million was allocated in the FY2020 Pentagon budget for hypersonic defense, out of $2.6 billion for all hypersonic-related research. $207 million of the FY2021 budget was allocated to defensive hypersonics, up from the FY2020 budget allocation of $157 million. Both the US and Russia withdrew from the Intermediate-Range Nuclear Forces (INF) Treaty in February 2019. This will spur arms development, including hypersonic weapons, in FY2021 and forward. By 2021 the Missile Defense Agency was funding regional countermeasures against hypersonic weapons in their glide phase. James Acton characterized the proliferation of hypersonic vehicles as never-ending in October 2021; Jeffery Lewis views the proliferation as additional arguments for ending the arms race. Doug Loverro assesses that both missile defense and competition need rethinking. CSIS assesses that hypersonic defense should be the US' priority over hypersonic weapons.
NDSA / PWSA.
As part of their Hypersonic vehicle tracking mission, the Space Development Agency (SDA) launched four satellites and the Missile Defense Agency (MDA) launched two satellites on 14 February 2024 (launch USSF-124). The satellites will share the same orbit, which allows the SDA's wide field of view (WFOV) satellites and the MDA's medium field of view (MFOV) downward-looking satellites to traverse the same terrain of Earth. The SDA's four satellites are part of its Tranche 0 tracking layer (T0TL). The MDA's two satellites are HBTSS or Hypersonic and ballistic tracking space sensors.
Additional capabilities of Tranche 0 of the National defense space architecture (NDSA), also known as the Proliferated warfighting space architecture (PWSA) will be tested over the next two years.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\lambda = 68 \\,\\mathrm{nm}"
},
{
"math_id": 1,
"text": "\\lambda = 1 \\, \\mathrm{ft} = 0.305 \\, \\mathrm{m}"
},
{
"math_id": 2,
"text": "\\mathrm{Kn}"
},
{
"math_id": 3,
"text": "\\mathrm{Kn}=\\frac{\\lambda}{l}"
},
{
"math_id": 4,
"text": "l"
},
{
"math_id": 5,
"text": "\\mathrm{Kn}=\\frac{\\lambda}{R}"
},
{
"math_id": 6,
"text": "\\mu = \\sin^{-1} (a/v)"
},
{
"math_id": 7,
"text": "\\mu = \\sin^{-1} (1/M)"
}
] |
https://en.wikipedia.org/wiki?curid=14564979
|
145661
|
Centralizer and normalizer
|
Special types of subgroups encountered in group theory
In mathematics, especially group theory, the centralizer (also called commutant) of a subset "S" in a group "G" is the set formula_0 of elements of "G" that commute with every element of "S", or equivalently, such that conjugation by formula_1 leaves each element of "S" fixed. The normalizer of "S" in "G" is the set of elements formula_2 of "G" that satisfy the weaker condition of leaving the set formula_3 fixed under conjugation. The centralizer and normalizer of "S" are subgroups of "G". Many techniques in group theory are based on studying the centralizers and normalizers of suitable subsets "S".
Suitably formulated, the definitions also apply to semigroups.
In ring theory, the centralizer of a subset of a ring is defined with respect to the semigroup (multiplication) operation of the ring. The centralizer of a subset of a ring "R" is a subring of "R". This article also deals with centralizers and normalizers in a Lie algebra.
The idealizer in a semigroup or ring is another construction that is in the same vein as the centralizer and normalizer.
Definitions.
Group and semigroup.
The centralizer of a subset "S" of group (or semigroup) "G" is defined as
formula_4
where only the first definition applies to semigroups.
If there is no ambiguity about the group in question, the "G" can be suppressed from the notation. When "S" = {"a"} is a singleton set, we write C"G"("a") instead of C"G"({"a"}). Another less common notation for the centralizer is Z("a"), which parallels the notation for the center. With this latter notation, one must be careful to avoid confusion between the center of a group "G", Z("G"), and the "centralizer" of an "element" "g" in "G", Z("g").
The normalizer of "S" in the group (or semigroup) "G" is defined as
formula_5
where again only the first definition applies to semigroups. If the set formula_6 is a subgroup of formula_7, then the normalizer formula_8 is the largest subgroup formula_9 where formula_6 is a normal subgroup of formula_10. The definitions of "centralizer" and "normalizer" are similar but not identical. If "g" is in the centralizer of "S" and "s" is in "S", then it must be that "gs" = "sg", but if "g" is in the normalizer, then "gs" = "tg" for some "t" in "S", with "t" possibly different from "s". That is, elements of the centralizer of "S" must commute pointwise with "S", but elements of the normalizer of "S" need only commute with "S as a set". The same notational conventions mentioned above for centralizers also apply to normalizers. The normalizer should not be confused with the normal closure.
Clearly formula_11 and both are subgroups of formula_7.
Ring, algebra over a field, Lie ring, and Lie algebra.
If "R" is a ring or an algebra over a field, and "S" is a subset of "R", then the centralizer of "S" is exactly as defined for groups, with "R" in the place of "G".
If formula_12 is a Lie algebra (or Lie ring) with Lie product ["x", "y"], then the centralizer of a subset "S" of formula_12 is defined to be
formula_13
The definition of centralizers for Lie rings is linked to the definition for rings in the following way. If "R" is an associative ring, then "R" can be given the bracket product ["x", "y"] = "xy" − "yx". Of course then "xy" = "yx" if and only if ["x", "y"] = 0. If we denote the set "R" with the bracket product as L"R", then clearly the "ring centralizer" of "S" in "R" is equal to the "Lie ring centralizer" of "S" in L"R".
The normalizer of a subset "S" of a Lie algebra (or Lie ring) formula_12 is given by
formula_14
While this is the standard usage of the term "normalizer" in Lie algebra, this construction is actually the idealizer of the set "S" in formula_12. If "S" is an additive subgroup of formula_12, then formula_15 is the largest Lie subring (or Lie subalgebra, as the case may be) in which "S" is a Lie ideal.
Example.
Consider the group
formula_16 (the symmetric group of permutations of 3 elements).
Take a subset H of the group G:
formula_17
Note that [1, 2, 3] is the identity permutation in G, leaving every element in place, and [1, 3, 2] is the permutation that fixes the first element and swaps the second and third elements.
The normalizer of H with respect to the group G consists of all elements of G that yield the set H (potentially permuted) when the group operation is applied.
Working out the example for each element of G:
formula_18 when applied to H => formula_19; therefore [1, 2, 3] is in the Normalizer(H) with respect to G.
formula_20 when applied to H => formula_21; therefore [1, 3, 2] is in the Normalizer(H) with respect to G.
formula_22 when applied to H => formula_23; therefore [2, 1, 3] is not in the Normalizer(H) with respect to G.
formula_24 when applied to H => formula_25; therefore [2, 3, 1] is not in the Normalizer(H) with respect to G.
formula_26 when applied to H => formula_27; therefore [3, 1, 2] is not in the Normalizer(H) with respect to G.
formula_28 when applied to H => formula_29; therefore [3, 2, 1] is not in the Normalizer(H) with respect to G.
Therefore, the Normalizer(H) with respect to G is formula_30 since both these group elements preserve the set H.
A subgroup is a normal subgroup of G exactly when its normalizer is all of G; here the normalizer of H is H itself rather than all of S3, so H is not a normal subgroup of S3. (S3 is also not a simple group, since it has the proper nontrivial normal subgroup A3 = {[1, 2, 3], [2, 3, 1], [3, 1, 2]}.)
The centralizer of H in G is the set of elements of G that commute with every element of H (equivalently, that leave each element of H fixed under conjugation).
In this example the centralizer is H itself: the identity [1, 2, 3] and the element [1, 3, 2] are the only elements of S3 that commute with [1, 3, 2].
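These two computations can be checked with a short brute-force Python sketch; permutations are written in the same one-line notation as above, and the conjugation-based definitions from the Definitions section are applied directly.

```python
from itertools import permutations

# One-line notation: p[i] is the image of i + 1 under the permutation p.
def compose(p, q):
    """The composition of p and q (apply q first, then p)."""
    return tuple(p[q[i] - 1] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1
    return tuple(inv)

G = set(permutations(range(1, 4)))      # the symmetric group S3
H = {(1, 2, 3), (1, 3, 2)}              # the subset from the example

normalizer  = {g for g in G
               if {compose(compose(g, h), inverse(g)) for h in H} == H}
centralizer = {g for g in G
               if all(compose(g, h) == compose(h, g) for h in H)}

print(sorted(normalizer))    # [(1, 2, 3), (1, 3, 2)]
print(sorted(centralizer))   # [(1, 2, 3), (1, 3, 2)]
```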
Properties.
Semigroups.
Let formula_31 denote the centralizer of formula_6 in the semigroup formula_32; i.e. formula_33 Then formula_31 forms a subsemigroup and formula_34; i.e. a commutant is its own bicommutant.
Groups.
Source:
Rings and algebras over a field.
Source:
|
[
{
"math_id": 0,
"text": "\\operatorname{C}_G(S)"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "\\mathrm{N}_G(S)"
},
{
"math_id": 3,
"text": "S \\subseteq G"
},
{
"math_id": 4,
"text": "\\mathrm{C}_G(S) = \\left\\{g \\in G \\mid gs = sg \\text{ for all } s \\in S\\right\\} = \\left\\{g \\in G \\mid gsg^{-1} = s \\text{ for all } s \\in S\\right\\},"
},
{
"math_id": 5,
"text": "\\mathrm{N}_G(S) = \\left\\{ g \\in G \\mid gS = Sg \\right\\} = \\left\\{g \\in G \\mid gSg^{-1} = S\\right\\},"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "N_G(S)"
},
{
"math_id": 9,
"text": "G' \\subseteq G"
},
{
"math_id": 10,
"text": "G'"
},
{
"math_id": 11,
"text": "C_G(S) \\subseteq N_G(S)"
},
{
"math_id": 12,
"text": "\\mathfrak{L}"
},
{
"math_id": 13,
"text": "\\mathrm{C}_{\\mathfrak{L}}(S) = \\{ x \\in \\mathfrak{L} \\mid [x, s] = 0 \\text{ for all } s \\in S \\}."
},
{
"math_id": 14,
"text": "\\mathrm{N}_\\mathfrak{L}(S) = \\{ x \\in \\mathfrak{L} \\mid [x, s] \\in S \\text{ for all } s \\in S \\}."
},
{
"math_id": 15,
"text": "\\mathrm{N}_{\\mathfrak{L}}(S)"
},
{
"math_id": 16,
"text": "G = S_3 = \\{[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]\\}"
},
{
"math_id": 17,
"text": "H = \\{[1, 2, 3], [1, 3, 2]\\}. "
},
{
"math_id": 18,
"text": "[1, 2, 3]"
},
{
"math_id": 19,
"text": "\\{[1, 2, 3], [1, 3, 2]\\} = H"
},
{
"math_id": 20,
"text": "[1, 3, 2]"
},
{
"math_id": 21,
"text": "\\{[1, 3, 2], [1, 2, 3]\\} = H"
},
{
"math_id": 22,
"text": "[2, 1, 3]"
},
{
"math_id": 23,
"text": "\\{[2, 1, 3], [3, 1, 2]\\} \\neq H"
},
{
"math_id": 24,
"text": "[2, 3, 1]"
},
{
"math_id": 25,
"text": "\\{[2, 3, 1], [3, 2, 1]\\} \\neq H"
},
{
"math_id": 26,
"text": "[3, 1, 2]"
},
{
"math_id": 27,
"text": "\\{[3, 1, 2], [2, 1, 3]\\} \\neq H"
},
{
"math_id": 28,
"text": "[3, 2, 1]"
},
{
"math_id": 29,
"text": "\\{[3, 2, 1], [2, 3, 1]\\} \\neq H"
},
{
"math_id": 30,
"text": "\\{[1, 2, 3], [1, 3, 2]\\}"
},
{
"math_id": 31,
"text": "S'"
},
{
"math_id": 32,
"text": "A"
},
{
"math_id": 33,
"text": "S' = \\{x \\in A \\mid sx = xs \\text{ for every } s \\in S\\}."
},
{
"math_id": 34,
"text": "S' = S''' = S'''''"
}
] |
https://en.wikipedia.org/wiki?curid=145661
|
14566906
|
Ordinal definable set
|
In mathematical set theory, a set "S" is said to be ordinal definable if, informally, it can be defined in terms of a finite number of ordinals by a first-order formula. Ordinal definable sets were introduced by .
Definition.
A drawback to the above informal definition is that it requires quantification over all first-order formulas, which cannot be formalized in the standard language of set theory. However, there is a different, formal such characterization:
A set "S" is ordinal definable if there is some collection of ordinals "α"1, ..., "α""n" and a first-order formula φ taking α2, ..., α"n" as parameters that uniquely defines formula_0 as an element of formula_1, i.e., such that "S" is the unique object validating φ("S", α2...α"n"), with its quantifiers ranging over formula_1.
The latter denotes the set in the von Neumann hierarchy indexed by the ordinal "α"1. The class of all ordinal definable sets is denoted OD; it is not necessarily transitive, and need not be a model of ZFC because it might not satisfy the axiom of extensionality.
A set further is hereditarily ordinal definable if it is ordinal definable and all elements of its transitive closure are ordinal definable. The class of hereditarily ordinal definable sets is denoted by HOD, and is a transitive model of ZFC, with a definable well ordering.
It is consistent with the axioms of set theory that all sets are ordinal definable, and so hereditarily ordinal definable. The assertion that this situation holds is referred to as V = OD or V = HOD. It follows from V = L, and is equivalent to the existence of a (definable) well-ordering of the universe. Note however that the formula expressing V = HOD need not hold true within HOD, as it is not absolute for models of set theory: within HOD, the interpretation of the formula for HOD may yield an even smaller inner model.
HOD has been found to be useful in that it is an inner model that can accommodate essentially all known large cardinals. This is in contrast with the situation for core models, as core models have not yet been constructed that can accommodate supercompact cardinals, for example.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "V_{\\alpha_1}"
}
] |
https://en.wikipedia.org/wiki?curid=14566906
|
14568414
|
Rule of Sarrus
|
Mnemonic device for calculating 3 by 3 matrix determinants
In matrix theory, the rule of Sarrus is a mnemonic device for computing the determinant of a formula_0 matrix named after the French mathematician Pierre Frédéric Sarrus.
Consider a formula_0 matrix
formula_1
then its determinant can be computed by the following scheme.
Write out the first two columns of the matrix to the right of the third column, giving five columns in a row. Then add the products of the diagonals going from top to bottom (solid) and subtract the products of the diagonals going from bottom to top (dashed). This yields
formula_2
A similar scheme based on diagonals works for formula_3 matrices:
formula_4
Both are special cases of the Leibniz formula, which however does not yield similar memorization schemes for larger matrices. Sarrus' rule can also be derived using the Laplace expansion of a formula_0 matrix.
Another way of thinking of Sarrus' rule is to imagine that the matrix is wrapped around a cylinder, such that the right and left edges are joined.
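For illustration, a direct transcription of this scheme into Python is given below (a minimal sketch; the sample matrix is arbitrary).

```python
def det3_sarrus(m):
    """Determinant of a 3x3 matrix m (a list of three rows) by the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

M = [[2, 0, 1],
     [3, 5, -1],
     [0, 4, 2]]
print(det3_sarrus(M))   # 20 + 0 + 12 - 0 - 0 - (-8) = 40
```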
|
[
{
"math_id": 0,
"text": " 3 \\times 3 "
},
{
"math_id": 1,
"text": "M=\\begin{bmatrix}a&b&c\\\\d&e&f\\\\g&h&i\\end{bmatrix} "
},
{
"math_id": 2,
"text": "\n\\begin{align}\n\\det(M)= \\begin{vmatrix}\na&b&c\\\\d&e&f\\\\g&h&i\n\\end{vmatrix}=\naei + bfg + cdh - ceg - bdi - afh.\n\\end{align}\n"
},
{
"math_id": 3,
"text": " 2 \\times 2 "
},
{
"math_id": 4,
"text": "\\begin{vmatrix}\na&b\\\\c&d\n\\end{vmatrix}\n=ad - bc "
}
] |
https://en.wikipedia.org/wiki?curid=14568414
|
1456863
|
Setpoint (control system)
|
Target value for the process variable of a control system
In cybernetics and control theory, a setpoint (SP; also set point) is the desired or target value for an essential variable, or process value (PV) of a control system, which may differ from the actual measured value of the variable. Departure of such a variable from its setpoint is one basis for error-controlled regulation using negative feedback for automatic control. A setpoint can be any physical quantity or parameter that a control system seeks to regulate, such as temperature, pressure, flow rate, position, speed, or any other measurable attribute.
In the context of a PID controller, the setpoint represents the reference or goal for the controlled process variable. It serves as the benchmark against which the actual process variable (PV) is continuously compared. The PID controller calculates an error signal by taking the difference between the setpoint and the current value of the process variable. Mathematically, this error is expressed as:
formula_0
where formula_1 is the error at a given time formula_2, formula_3 is the setpoint, and formula_4 is the process variable at time formula_2.
The PID controller uses this error signal to determine how to adjust the control output to bring the process variable as close as possible to the setpoint while maintaining stability and minimizing overshoot.
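A minimal discrete-time sketch of this comparison is shown below; the gains and the toy first-order process used here are assumed illustrative values, not prescribed by any particular controller.
```python
# Minimal PID sketch: compute e(t) = SP - PV(t) and a control output from it.
# The gains (kp, ki, kd) and the toy process model are assumed example values.
def pid_step(sp, pv, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    e = sp - pv                            # error signal e(t) = SP - PV(t)
    state["i"] += e * dt                   # integral of the error
    d = (e - state["e_prev"]) / dt         # rate of change of the error
    state["e_prev"] = e
    return kp * e + ki * state["i"] + kd * d

state = {"i": 0.0, "e_prev": 0.0}
pv, sp = 20.0, 60.0                        # current process value and its setpoint
for _ in range(500):
    u = pid_step(sp, pv, state)
    pv += (u - 0.5 * (pv - 20.0)) * 0.1    # toy first-order process response
print(round(pv, 1))                        # 60.0 -- PV driven to the setpoint
```
The integral term is what removes the steady-state offset between PV and SP in this sketch.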
Examples.
Cruise control
The formula_5 error can be used to return a system to its norm. An everyday example is the cruise control on a road vehicle, where external influences such as gradients cause speed changes (PV), and the driver can also alter the desired set speed (SP). The automatic control algorithm restores the actual speed to the desired speed in the optimum way, without delay or overshoot, by altering the power output of the vehicle's engine. In this way the formula_5 error is used to control the PV so that it equals the SP. The formula_5 error is the classical input to a PID controller.
Industrial applications
Special consideration must be given in engineering applications. In industrial systems, physical or process constraints may limit the set point that can be chosen. For example, a reactor which operates more efficiently at higher temperatures may be rated to withstand 500 °C. However, for safety reasons, the set point for the reactor temperature control loop would be set well below this limit, even if this means the reactor runs less efficiently.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e(t) = SP - PV(t),"
},
{
"math_id": 1,
"text": "e(t)"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "SP"
},
{
"math_id": 4,
"text": "PV(t)"
},
{
"math_id": 5,
"text": "SP-PV"
}
] |
https://en.wikipedia.org/wiki?curid=1456863
|
14569
|
Interpolation
|
Method for estimating new data within known data points
In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points.
In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable.
A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error and give better performance in the calculation process.
Example.
This table gives some values of an unknown function formula_0:
formula_1: 0, 1, 2, 3, 4, 5, 6
formula_0: 0, 0.8415, 0.9093, 0.1411, −0.7568, −0.9589, −0.2794
Interpolation provides a means of estimating the function at intermediate points, such as formula_2
We describe some methods of interpolation, differing in such properties as: accuracy, cost, number of data points needed, and smoothness of the resulting interpolant function.
Piecewise constant interpolation.
The simplest interpolation method is to locate the nearest data value, and assign the same value. In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, this could be a favourable choice for its speed and simplicity.
Linear interpolation.
One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of estimating "f"(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take "f"(2.5) midway between "f"(2) = 0.9093 and "f"(3) = 0.1411, which yields 0.5252.
Generally, linear interpolation takes two data points, say ("x""a","y""a") and ("x""b","y""b"), and the interpolant is given by:
formula_3
formula_4
formula_5
This previous equation states that the slope of the new line between formula_6 and formula_7 is the same as the slope of the line between formula_6 and formula_8
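As a short illustration (using the two data values quoted above), the interpolant can be evaluated directly:
```python
# Linear interpolation between two data points (xa, ya) and (xb, yb).
def lerp(xa, ya, xb, yb, x):
    return ya + (yb - ya) * (x - xa) / (xb - xa)

print(lerp(2, 0.9093, 3, 0.1411, 2.5))  # 0.5252
```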
Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the point "x""k".
The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by "g", and suppose that "x" lies between "x""a" and "x""b" and that "g" is twice continuously differentiable. Then the linear interpolation error is
formula_9
In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants.
Polynomial interpolation.
Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant with a polynomial of higher degree.
Consider again the problem given above. The following sixth degree polynomial goes through all the seven points:
formula_10
Substituting "x" = 2.5, we find that "f"(2.5) = ~0.59678.
Generally, if we have "n" data points, there is exactly one polynomial of degree at most "n"−1 going through all the data points. The interpolation error is proportional to the distance between the data points to the power "n". Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation overcomes most of the problems of linear interpolation.
However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially at the end points (see Runge's phenomenon).
Polynomial interpolation can estimate local maxima and minima that are outside the range of the samples, unlike linear interpolation. For example, the interpolant above has a local maximum at "x" ≈ 1.566, "f"("x") ≈ 1.003 and a local minimum at "x" ≈ 4.708, "f"("x") ≈ −1.003. However, these maxima and minima may exceed the theoretical range of the function; for example, a function that is always positive may have an interpolant with negative values, and whose inverse therefore contains false vertical asymptotes.
More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to commonsense; that is, to what is known about the experimental system which has generated the data points. These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials.
Spline interpolation.
Linear interpolation uses a linear function for each of the intervals ["x""k","x""k+1"]. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline.
For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the points in the table above is given by
formula_11
In this case we get "f"(2.5) = 0.5972.
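For illustration, one possible implementation builds a natural cubic spline through the tabulated points with SciPy (the "natural" boundary condition imposes zero second derivative at the end points):
```python
# Natural cubic spline through the seven tabulated points, evaluated at x = 2.5.
import numpy as np
from scipy.interpolate import CubicSpline

xs = np.arange(7)
ys = [0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794]
spline = CubicSpline(xs, ys, bc_type="natural")
print(spline(2.5))  # approximately 0.597
```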
Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation, while the interpolant is smoother and easier to evaluate than the high-degree polynomials used in polynomial interpolation. However, the global nature of the basis functions leads to ill-conditioning. This is completely mitigated by using splines of compact support, such as are implemented in Boost.Math and discussed in Kress.
Mimetic interpolation.
Depending on the underlying discretisation of fields, different interpolants may be required. In contrast to other interpolation methods, which estimate functions on target points, mimetic interpolation evaluates the integral of fields on target lines, areas or volumes, depending on the type of field (scalar, vector, pseudo-vector or pseudo-scalar).
A key feature of mimetic interpolation is that vector calculus identities are satisfied, including Stokes' theorem and the divergence theorem. As a result, mimetic interpolation conserves line, area and volume integrals. Conservation of line integrals might be desirable when interpolating the electric field, for instance, since the line integral gives the electric potential difference at the endpoints of the integration path. Mimetic interpolation ensures that the error of estimating the line integral of an electric field is the same as the error obtained by interpolating the potential at the end points of the integration path, regardless of the length of the integration path.
Linear, bilinear and trilinear interpolation are also considered mimetic, even if it is the field values that are conserved (not the integral of the field). Apart from linear interpolation, area weighted interpolation can be considered one of the first mimetic interpolation methods to have been developed.
Function approximation.
Interpolation is a common way to approximate functions. Given a function formula_12 with a set of points formula_13 one can form a function formula_14 such that formula_15 for formula_16 (that is, that formula_17 interpolates formula_18 at these points). In general, an interpolant need not be a good approximation, but there are well-known and often reasonable conditions under which it will be. For example, if formula_19 (four times continuously differentiable) then cubic spline interpolation has an error bound given by formula_20 where formula_21 and formula_22 is a constant.
Via Gaussian processes.
Gaussian processes are a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression; that is, for fitting a curve through noisy data. In the geostatistics community, Gaussian process regression is also known as Kriging.
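A minimal sketch of Gaussian-process interpolation with NumPy is shown below; the squared-exponential kernel and unit length scale are arbitrary choices made for illustration, not canonical settings.
```python
# Gaussian-process interpolation of the tabulated points (assumed RBF kernel).
import numpy as np

def kernel(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

xs = np.arange(7.0)
ys = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])
xq = np.array([2.5])

K = kernel(xs, xs) + 1e-10 * np.eye(len(xs))  # tiny jitter for numerical stability
weights = np.linalg.solve(K, ys)
print(kernel(xq, xs) @ weights)               # posterior mean estimate of f(2.5)
```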
Other forms.
Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions using Padé approximant, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets.
The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite or if the function to be interpolated has compact support.
Sometimes, we know not only the value of the function that we want to interpolate, at some points, but also its derivative. This leads to Hermite interpolation problems.
When each data point is itself a function, it can be useful to see the interpolation problem as a partial advection problem between each data point. This idea leads to the displacement interpolation problem used in transportation theory.
In higher dimensions.
Multivariate interpolation is the interpolation of functions of more than one variable.
Methods include bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions.
They can be applied to gridded or scattered data. Mimetic interpolation generalizes to formula_23 dimensional spaces where formula_24.
In digital signal processing.
In the domain of digital signal processing, the term interpolation refers to the process of converting a sampled digital signal (such as a sampled audio signal) to that of a higher sampling rate (Upsampling) using various digital filtering techniques (for example, convolution with a frequency-limited impulse signal). In this application there is a specific requirement that the harmonic content of the original signal be preserved without creating aliased harmonic content of the original signal above the original Nyquist limit of the signal (that is, above fs/2 of the original signal sample rate). An early and fairly elementary discussion on this subject can be found in Rabiner and Crochiere's book "Multirate Digital Signal Processing".
Related concepts.
The term "extrapolation" is used to find data points outside the range of known data points.
In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation.
Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function.
Generalization.
If we consider formula_1 as a variable in a topological space, and the function formula_0 mapping to a Banach space, then the problem is treated as "interpolation of operators". The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other subsequent results.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "x=2.5."
},
{
"math_id": 3,
"text": " y = y_a + \\left( y_b-y_a \\right) \\frac{x-x_a}{x_b-x_a} \\text{ at the point } \\left( x,y \\right) "
},
{
"math_id": 4,
"text": " \\frac{y-y_a}{y_b-y_a} = \\frac{x-x_a}{x_b-x_a} "
},
{
"math_id": 5,
"text": " \\frac{y-y_a}{x-x_a} = \\frac{y_b-y_a}{x_b-x_a} "
},
{
"math_id": 6,
"text": " (x_a,y_a) "
},
{
"math_id": 7,
"text": " (x,y) "
},
{
"math_id": 8,
"text": " (x_b,y_b) "
},
{
"math_id": 9,
"text": " |f(x)-g(x)| \\le C(x_b-x_a)^2 \\quad\\text{where}\\quad C = \\frac18 \\max_{r\\in[x_a,x_b]} |g''(r)|. "
},
{
"math_id": 10,
"text": " f(x) = -0.0001521 x^6 - 0.003130 x^5 + 0.07321 x^4 - 0.3577 x^3 + 0.2255 x^2 + 0.9038 x. "
},
{
"math_id": 11,
"text": " f(x) = \\begin{cases}\n-0.1522 x^3 + 0.9937 x, & \\text{if } x \\in [0,1], \\\\\n-0.01258 x^3 - 0.4189 x^2 + 1.4126 x - 0.1396, & \\text{if } x \\in [1,2], \\\\\n0.1403 x^3 - 1.3359 x^2 + 3.2467 x - 1.3623, & \\text{if } x \\in [2,3], \\\\\n0.1579 x^3 - 1.4945 x^2 + 3.7225 x - 1.8381, & \\text{if } x \\in [3,4], \\\\\n0.05375 x^3 -0.2450 x^2 - 1.2756 x + 4.8259, & \\text{if } x \\in [4,5], \\\\\n-0.1871 x^3 + 3.3673 x^2 - 19.3370 x + 34.9282, & \\text{if } x \\in [5,6].\n\\end{cases} "
},
{
"math_id": 12,
"text": "f:[a,b] \\to \\mathbb{R}"
},
{
"math_id": 13,
"text": "x_1, x_2, \\dots, x_n \\in [a, b]"
},
{
"math_id": 14,
"text": "s: [a,b] \\to \\mathbb{R}"
},
{
"math_id": 15,
"text": "f(x_i)=s(x_i)"
},
{
"math_id": 16,
"text": "i=1, 2, \\dots, n"
},
{
"math_id": 17,
"text": "s"
},
{
"math_id": 18,
"text": "f"
},
{
"math_id": 19,
"text": "f\\in C^4([a,b])"
},
{
"math_id": 20,
"text": "\\|f-s\\|_\\infty \\leq C \\|f^{(4)}\\|_\\infty h^4"
},
{
"math_id": 21,
"text": "h \\max_{i=1,2, \\dots, n-1} |x_{i+1}-x_i|"
},
{
"math_id": 22,
"text": "C"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "n > 3"
}
] |
https://en.wikipedia.org/wiki?curid=14569
|
145698
|
Seafloor spreading
|
Geological process at mid-ocean ridges
Seafloor spreading, or seafloor spread, is a process that occurs at mid-ocean ridges, where new oceanic crust is formed through volcanic activity and then gradually moves away from the ridge.
History of study.
Earlier theories by Alfred Wegener and Alexander du Toit of continental drift postulated that continents in motion "plowed" through the fixed and immovable seafloor. The idea that the seafloor itself moves and also carries the continents with it as it spreads from a central rift axis was proposed by Harold Hammond Hess from Princeton University and Robert Dietz of the U.S. Naval Electronics Laboratory in San Diego in the 1960s. The phenomenon is known today as plate tectonics. In locations where two plates move apart, at mid-ocean ridges, new seafloor is continually formed during seafloor spreading.
Significance.
Seafloor spreading helps explain continental drift in the theory of plate tectonics. When oceanic plates diverge, tensional stress causes fractures to occur in the lithosphere. The motivating force for seafloor spreading ridges is tectonic plate slab pull at subduction zones, rather than magma pressure, although there is typically significant magma activity at spreading ridges. Plates that are not subducting are driven by gravity sliding off the elevated mid-ocean ridges, a process called ridge push. At a spreading center, basaltic magma rises up the fractures and cools on the ocean floor to form new seabed. Hydrothermal vents are common at spreading centers. Older rocks will be found farther away from the spreading zone while younger rocks will be found nearer to the spreading zone.
"Spreading rate" is the rate at which an ocean basin widens due to seafloor spreading. (The rate at which new oceanic lithosphere is added to each tectonic plate on either side of a mid-ocean ridge is the "spreading half-rate" and is equal to half of the spreading rate). Spreading rates determine if the ridge is fast, intermediate, or slow. As a general rule, fast ridges have spreading (opening) rates of more than 90 mm/year. Intermediate ridges have a spreading rate of 40–90 mm/year while slow spreading ridges have a rate less than 40 mm/year. The highest known rate was over 200 mm/yr during the Miocene on the East Pacific Rise.
In the 1960s, the past record of geomagnetic reversals of Earth's magnetic field was noticed by observing magnetic stripe "anomalies" on the ocean floor. This results in broadly evident "stripes" from which the past magnetic field polarity can be inferred from data gathered with a magnetometer towed on the sea surface or from an aircraft. The stripes on one side of the mid-ocean ridge were the mirror image of those on the other side. By identifying a reversal with a known age and measuring the distance of that reversal from the spreading center, the spreading half-rate could be computed.
In some locations spreading rates have been found to be asymmetric; the half rates differ on each side of the ridge crest by about five percent. This is thought to be due to temperature gradients in the asthenosphere from mantle plumes near the spreading center.
Spreading center.
Seafloor spreading occurs at spreading centers, distributed along the crests of mid-ocean ridges. Spreading centers end in transform faults or in overlapping spreading center offsets. A spreading center includes a seismically active plate boundary zone a few kilometers to tens of kilometers wide, a crustal accretion zone within the boundary zone where the ocean crust is youngest, and an instantaneous plate boundary - a line within the crustal accretion zone demarcating the two separating plates. Within the crustal accretion zone is a 1–2 km-wide neovolcanic zone where active volcanism occurs.
Incipient spreading.
In the general case, seafloor spreading starts as a rift in a continental land mass, similar to the Red Sea-East Africa Rift System today. The process starts by heating at the base of the continental crust which causes it to become more plastic and less dense. Because less dense objects rise in relation to denser objects, the area being heated becomes a broad dome (see isostasy). As the crust bows upward, fractures occur that gradually grow into rifts. The typical rift system consists of three rift arms at approximately 120-degree angles. These areas are named triple junctions and can be found in several places across the world today. The separated margins of the continents evolve to form passive margins. Hess' theory was that new seafloor is formed when magma is forced upward toward the surface at a mid-ocean ridge.
If spreading continues past the incipient stage described above, two of the rift arms will open while the third arm stops opening and becomes a 'failed rift' or aulacogen. As the two active rifts continue to open, eventually the continental crust is attenuated as far as it will stretch. At this point basaltic oceanic crust and upper mantle lithosphere begin to form between the separating continental fragments. When one of the rifts opens into the existing ocean, the rift system is flooded with seawater and becomes a new sea. The Red Sea is an example of a new arm of the sea. The East African rift was thought to be a failed arm that was opening more slowly than the other two arms, but in 2005 the Ethiopian Afar Geophysical Lithospheric Experiment reported that in September 2005 a fissure about 60 km long opened in the Afar region, reaching a width of as much as eight meters. During this period of initial flooding the new sea is sensitive to changes in climate and eustasy. As a result, the new sea will evaporate (partially or completely) several times before the elevation of the rift valley has been lowered to the point that the sea becomes stable. During this period of evaporation, large evaporite deposits will form in the rift valley. Later these deposits have the potential to become hydrocarbon seals and are of particular interest to petroleum geologists.
Seafloor spreading can stop during the process, but if it continues to the point that the continent is completely severed, then a new ocean basin is created. The Red Sea has not yet completely split Arabia from Africa, but a similar feature can be found on the other side of Africa that has broken completely free. South America once fit into the area of the Niger Delta. The Niger River has formed in the failed rift arm of the triple junction.
Continued spreading and subduction.
As new seafloor forms and spreads apart from the mid-ocean ridge it slowly cools over time. Older seafloor is, therefore, colder than new seafloor, and older oceanic basins deeper than new oceanic basins due to isostasy. If the diameter of the earth remains relatively constant despite the production of new crust, a mechanism must exist by which crust is also destroyed. The destruction of oceanic crust occurs at subduction zones where oceanic crust is forced under either continental crust or oceanic crust. Today, the Atlantic basin is actively spreading at the Mid-Atlantic Ridge. Only a small portion of the oceanic crust produced in the Atlantic is subducted. However, the plates making up the Pacific Ocean are experiencing subduction along many of their boundaries which causes the volcanic activity in what has been termed the Ring of Fire of the Pacific Ocean. The Pacific is also home to one of the world's most active spreading centers (the East Pacific Rise) with spreading rates of up to 145 ± 4 mm/yr between the Pacific and Nazca plates. The Mid-Atlantic Ridge is a slow-spreading center, while the East Pacific Rise is an example of fast spreading. Spreading centers at slow and intermediate rates exhibit a rift valley while at fast rates an axial high is found within the crustal accretion zone. The differences in spreading rates affect not only the geometries of the ridges but also the geochemistry of the basalts that are produced.
Since the new oceanic basins are shallower than the old oceanic basins, the total capacity of the world's ocean basins decreases during times of active sea floor spreading. During the opening of the Atlantic Ocean, sea level was so high that a Western Interior Seaway formed across North America from the Gulf of Mexico to the Arctic Ocean.
Debate and search for mechanism.
At the Mid-Atlantic Ridge (and in other mid-ocean ridges), material from the upper mantle rises through the faults between oceanic plates to form new crust as the plates move away from each other, a phenomenon first observed as continental drift. When Alfred Wegener first presented a hypothesis of continental drift in 1912, he suggested that continents plowed through the ocean crust. This was impossible: oceanic crust is both more dense and more rigid than continental crust. Accordingly, Wegener's theory wasn't taken very seriously, especially in the United States.
At first the driving force for spreading was argued to be convection currents in the mantle. Since then, it has been shown that the motion of the continents is linked to seafloor spreading by the theory of plate tectonics, which is driven by convection that includes the crust itself as well.
The driver for seafloor spreading in plates with active margins is the weight of the cool, dense, subducting slabs that pull them along, or slab pull. The magmatism at the ridge is considered to be passive upwelling, which is caused by the plates being pulled apart under the weight of their own slabs. This can be thought of as analogous to a rug on a table with little friction: when part of the rug is off of the table, its weight pulls the rest of the rug down with it. However, the Mid-Atlantic ridge itself is not bordered by plates that are being pulled into subduction zones, except the minor subduction in the Lesser Antilles and Scotia Arc. In this case the plates are sliding apart over the mantle upwelling in the process of ridge push.
Seafloor global topography: cooling models.
The depth of the seafloor (or the height of a location on a mid-ocean ridge above a base-level) is closely correlated with its age (age of the lithosphere where depth is measured). The age-depth relation can be modeled by the cooling of a lithosphere plate or mantle half-space in areas without significant subduction.
Cooling mantle model.
In the mantle half-space model, the seabed height is determined by the oceanic lithosphere and mantle temperature, due to thermal expansion. The simple result is that the ridge height or ocean depth is proportional to the square root of its age. Oceanic lithosphere is continuously formed at a constant rate at the mid-ocean ridges. The source of the lithosphere has a half-plane shape ("x" = 0, "z" < 0) and a constant temperature "T"1. Due to its continuous creation, the lithosphere at "x" > 0 is moving away from the ridge at a constant velocity "v", which is assumed large compared to other typical scales in the problem. The temperature at the upper boundary of the lithosphere ("z" = 0) is a constant "T"0 = 0. Thus at "x" = 0 the temperature is the Heaviside step function formula_0. The system is assumed to be at a quasi-steady state, so that the temperature distribution is constant in time, i.e. formula_1
Calculating in the frame of reference of the moving lithosphere (velocity "v"), which has spatial coordinate formula_2, the temperature is a function formula_3 and the heat equation is:
formula_4
where formula_5 is the thermal diffusivity of the mantle lithosphere.
Since "T" depends on "x"' and "t" only through the combination formula_6:
formula_7
Thus:
formula_8
It is assumed that formula_9 is large compared to other scales in the problem; therefore the last term in the equation is neglected, giving a 1-dimensional diffusion equation:
formula_10
with the initial conditions
formula_11
The solution for formula_12 is given by the error function:
formula_13.
Due to the large velocity, the temperature dependence on the horizontal direction is negligible, and the height at time "t" (i.e. of sea floor of age "t") can be calculated by integrating the thermal expansion over "z":
formula_14
where formula_15 is the effective volumetric thermal expansion coefficient, and "h0" is the mid-ocean ridge height (compared to some reference).
The assumption that "v" is relatively large is equivalent to the assumption that the thermal diffusivity formula_5 is small compared to formula_16, where "L" is the ocean width (from mid-ocean ridges to continental shelf) and "A" is the age of the ocean basin.
The effective thermal expansion coefficient formula_15 is different from the usual thermal expansion coefficient formula_17 due to isostasic effect of the change in water column height above the lithosphere as it expands or retracts. Both coefficients are related by:
formula_18
where formula_19 is the rock density and formula_20 is the density of water.
By substituting the parameters by their rough estimates:
formula_21
gives:
formula_22
where the height is in meters and time is in millions of years. To get the dependence on "x", one must substitute "t" = "x"/"v" ~ "Ax"/"L", where "L" is the distance between the ridge to the continental shelf (roughly half the ocean width), and "A" is the ocean basin age.
Rather than the height of the ocean floor formula_23 above a base or reference level formula_24, the depth of the ocean formula_25 is of interest. Because formula_26 (with formula_24 measured from the ocean surface):
formula_27 for the eastern Pacific, for example, where formula_28 is the depth at the ridge crest, typically 2600 m.
Cooling plate model.
The depth predicted by the square root of seafloor age derived above is too deep for seafloor older than 80 million years. Depth is better explained by a cooling lithosphere plate model rather than the cooling mantle half-space. The plate has a constant temperature at its base and spreading edge. Analysis of depth versus age and depth versus square root of age data allowed Parsons and Sclater to estimate model parameters (for the North Pacific):
~125 km for lithosphere thickness
formula_29 at base and young edge of plate
formula_30 for the thermal expansion coefficient
Assuming isostatic equilibrium everywhere beneath the cooling plate yields a revised age depth relationship for older sea floor that is approximately correct for ages as young as 20 million years:
formula_31 meters
Thus older seafloor deepens more slowly than younger and in fact can be assumed almost constant at ~6400 m depth. Parsons and Sclater concluded that some style of mantle convection must apply heat to the base of the plate everywhere to prevent cooling down below 125 km and lithosphere contraction (seafloor deepening) at older ages. Their plate model also allowed an expression for conductive heat flow, "q(t)" from the ocean floor, which is approximately constant at formula_32 beyond 120 million years:
formula_33
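The two age–depth relations can be compared numerically. The short sketch below uses the eastern-Pacific half-space coefficients quoted above (a 2600 m ridge-crest depth and the 350√t term) together with the plate-model expression, with age in millions of years and depth in meters.
```python
# Compare the half-space and plate cooling models for seafloor depth vs. age.
import math

def depth_half_space(t, ridge_depth=2600.0):
    return ridge_depth + 350.0 * math.sqrt(t)      # d(t) = (h_b - h_0) + 350*sqrt(t)

def depth_plate(t):
    return 6400.0 - 3200.0 * math.exp(-t / 62.8)   # Parsons-Sclater plate model

for age in (10, 40, 80, 120, 160):                 # age in millions of years
    print(age, round(depth_half_space(age)), round(depth_plate(age)))
# The two models agree for young seafloor, but the half-space depth keeps
# growing as sqrt(age) while the plate model flattens out near ~6400 m.
```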
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T_1\\cdot\\Theta(-z)"
},
{
"math_id": 1,
"text": "T=T(x,z)."
},
{
"math_id": 2,
"text": "x' = x-vt,"
},
{
"math_id": 3,
"text": "T=T(x',z, t)."
},
{
"math_id": 4,
"text": "\\frac{\\partial T}{\\partial t} = \\kappa \\nabla^2 T = \\kappa\\frac{\\partial^2 T}{\\partial^2 z} + \\kappa\\frac{\\partial^2 T}{\\partial^2 x'}"
},
{
"math_id": 5,
"text": "\\kappa"
},
{
"math_id": 6,
"text": "x = x'+vt,"
},
{
"math_id": 7,
"text": "\\frac{\\partial T}{\\partial x'} = \\frac{1}{v}\\cdot\\frac{\\partial T}{\\partial t}"
},
{
"math_id": 8,
"text": "\\frac{\\partial T}{\\partial t} = \\kappa \\nabla^2 T = \\kappa\\frac{\\partial^2 T}{\\partial^2 z} + \\frac{\\kappa}{v^2} \\frac{\\partial^2 T}{\\partial^2 t}"
},
{
"math_id": 9,
"text": "v"
},
{
"math_id": 10,
"text": "\\frac{\\partial T}{\\partial t} = \\kappa\\frac{\\partial^2 T}{\\partial^2 z}"
},
{
"math_id": 11,
"text": "T(t=0) = T_1\\cdot\\Theta(-z)."
},
{
"math_id": 12,
"text": "z\\le 0"
},
{
"math_id": 13,
"text": "T(x',z,t) = T_1 \\cdot \\operatorname{erf} \\left(\\frac{z}{2\\sqrt{\\kappa t}}\\right)"
},
{
"math_id": 14,
"text": "h(t) = h_0 + \\alpha_\\mathrm{eff} \\int_0^{\\infty} [T(z)-T_1]dz = h_0 - \\frac{2}{\\sqrt{\\pi}}\\alpha_\\mathrm{eff}T_1\\sqrt{\\kappa t} "
},
{
"math_id": 15,
"text": "\\alpha_\\mathrm{eff}"
},
{
"math_id": 16,
"text": "L^2/A"
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": " \\alpha_\\mathrm{eff} = \\alpha \\cdot \\frac{\\rho}{\\rho-\\rho_w}"
},
{
"math_id": 19,
"text": "\\rho \\sim 3.3 \\ \\mathrm{g}\\cdot \\mathrm{cm}^{-3}"
},
{
"math_id": 20,
"text": "\\rho_0 = 1 \\ \\mathrm{g} \\cdot \\mathrm{cm}^{-3}"
},
{
"math_id": 21,
"text": "\\begin{align}\n\\kappa &\\sim 8\\cdot 10^{-7} \\ \\mathrm{m}^2\\cdot \\mathrm{s}^{-1} \\\\ \n\\alpha &\\sim 4\\cdot 10^{-5} \\ {}^{\\circ}\\mathrm{C}^{-1} \\\\\nT_1 &\\sim 1220 \\ {}^{\\circ}\\mathrm{C} && \\text{for the Atlantic and Indian oceans} \\\\\nT_1 &\\sim 1120 \\ {}^{\\circ}\\mathrm{C} && \\text{for the eastern Pacific} \n\\end{align}"
},
{
"math_id": 22,
"text": "h(t) \\sim \\begin{cases} h_0 - 390 \\sqrt{t} & \\text{for the Atlantic and Indian oceans} \\\\ h_0 - 350 \\sqrt{t} & \\text{for the eastern Pacific} \\end{cases}"
},
{
"math_id": 23,
"text": "h(t)"
},
{
"math_id": 24,
"text": "h_b"
},
{
"math_id": 25,
"text": "d(t)"
},
{
"math_id": 26,
"text": "d(t)+h(t)=h_b"
},
{
"math_id": 27,
"text": "d(t)=h_b-h_0+350\\sqrt{t}"
},
{
"math_id": 28,
"text": "h_b-h_0"
},
{
"math_id": 29,
"text": "T_1\\thicksim1350\\ {}^{\\circ}\\mathrm{C}"
},
{
"math_id": 30,
"text": "\\alpha\\thicksim3.2\\cdot 10^{-5} \\ {}^{\\circ}\\mathrm{C}^{-1}"
},
{
"math_id": 31,
"text": "d(t)=6400-3200\\exp\\bigl(-t/62.8\\bigr)"
},
{
"math_id": 32,
"text": "1\\cdot 10^{-6}\\mathrm{cal}\\, \\mathrm{cm}^{-2} \\mathrm{sec}^{-1}"
},
{
"math_id": 33,
"text": "q(t)=11.3/\\sqrt{t}"
}
] |
https://en.wikipedia.org/wiki?curid=145698
|
1457116
|
Numéraire
|
The numéraire (or numeraire) is a basic standard by which value is computed. In mathematical economics it is a tradable economic entity in terms of whose price the relative prices of all other tradables are expressed. In a monetary economy, one of the functions of money is to act as the numéraire, i.e. to serve as a unit of account and therefore provide a common benchmark relative to which the values of various goods and services can be measured.
Using a numeraire, whether monetary or some consumable good, facilitates value comparisons when only the relative prices are relevant, as in general equilibrium theory. When economic analysis refers to a particular good as the numéraire, one says that all other prices are normalized by the price of that good. For example, if a unit of good "g" has twice the market value of a unit of the numeraire, then the (relative) price of "g" is 2. Since the value of one unit of the numeraire relative to one unit of itself is 1, the price of the numeraire is always 1.
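As a trivial numerical illustration (the money prices below are assumed example values), normalizing every price by that of the numéraire leaves the numéraire with price 1 and the good "g" of the example above with relative price 2:
```python
# Relative prices obtained by normalizing money prices by the numeraire.
money_prices = {"numeraire": 5.0, "g": 10.0, "h": 2.5}   # assumed example values
relative = {k: v / money_prices["numeraire"] for k, v in money_prices.items()}
print(relative)   # {'numeraire': 1.0, 'g': 2.0, 'h': 0.5}
```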
Change of numéraire.
In a financial market with traded securities, one may use a numéraire to price assets. For instance, let formula_0 be the price at time "t" of $1 that was invested in the money market at time 0. The fundamental theorem of asset pricing says that all assets formula_1, priced in terms of the numéraire (in this case, "M"), are martingales with respect to a risk-neutral measure, say formula_2. That is:
formula_3
Now, suppose that formula_4 is another strictly positive traded asset (and hence a martingale when priced in terms of the money market). Then we can define a new probability measure formula_5 by the Radon–Nikodym derivative
formula_6
Then it can be shown that formula_1 is a martingale under formula_5 when priced in terms of the new numéraire formula_7:
formula_8
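The identity can be illustrated numerically. The Monte Carlo sketch below assumes a Black–Scholes market (an assumption not required by the general argument above) and prices a call option twice: once under formula_2 with the money market as numéraire, and once under the stock measure with the stock itself as numéraire, where by Girsanov's theorem the stock drift becomes r + σ². The two estimates agree up to sampling error.
```python
# Change-of-numeraire check under an assumed Black-Scholes model.
import numpy as np

rng = np.random.default_rng(0)
s0, r, sigma, T, K, n = 100.0, 0.02, 0.2, 1.0, 105.0, 1_000_000
z = rng.standard_normal(n)

# Under Q the stock drifts at r; discount the payoff with M(T) = exp(r*T).
s_q = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price_q = np.exp(-r * T) * np.maximum(s_q - K, 0.0).mean()

# Under the stock measure the drift is r + sigma^2; the payoff is divided by
# the new numeraire S(T) and multiplied by its time-0 value S(0).
s_qs = s0 * np.exp((r + 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price_qs = s0 * (np.maximum(s_qs - K, 0.0) / s_qs).mean()

print(price_q, price_qs)  # the two estimates agree up to Monte Carlo error
```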
This technique has many important applications in LIBOR and swap market models, as well as commodity markets. Jamshidian (1989) first used it in the context of the Vasicek model for interest rates in order to calculate bond options prices. Geman, El Karoui and Rochet (1995) introduced the general formal framework for the change of numéraire technique. See for example Brigo and Mercurio (2001) for a change of numéraire toolkit.
Numéraire in financial pricing.
Determining an appropriate numéraire is foundational in several financial pricing models, such as those for options and certain other assets. Choosing a risky asset as the numéraire reduces the number of underlying assets that need to be modelled. Underlying shifts are modelled by the following:
formula_9
formula_10
where "1" defines the numéraire.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M\n(t)"
},
{
"math_id": 1,
"text": "S(t)"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "\\frac{S(t)}{M(t)} = E_Q\\left[\\frac{S(T)}{M(T)}\\right]"
},
{
"math_id": 4,
"text": "N(t)>0"
},
{
"math_id": 5,
"text": "Q^N"
},
{
"math_id": 6,
"text": "\\frac{dQ^N}{dQ} = \\frac{M(0)}{M(T)}\\frac{N(T)}{N(0)} = \\frac{N(T)}{M(T)}"
},
{
"math_id": 7,
"text": "N(t)"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n& {} \\quad E_{Q^N}\\left[\\frac{S(T)}{N(T)}\\right] \\\\\n& = E_{Q}\\left[\\frac{N(T)}{M(T)}\\frac{S(T)}{N(T)}\\right]/ E_Q\\left[\\frac{N(T)}{M(T)}\\right] \\\\\n& = \\frac{M(t)}{N(t)}E_{Q}\\left[\\frac{S(T)}{M(T)}\\right]\\\\\n&= \\frac{M(t)}{N(t)}\\frac{S(t)}{M(t)}\\\\\n& = \\frac{S(t)}{N(t)}\n\\end{align}\n"
},
{
"math_id": 9,
"text": " Z_i:= \\frac{X_i}{X_0} "
},
{
"math_id": 10,
"text": " X = (X_0, X_1, ..., X_n) \\to Z = (1, Z_1, ..., Z_n) "
}
] |
https://en.wikipedia.org/wiki?curid=1457116
|
145716
|
Lithosphere
|
Outermost shell of a terrestrial-type planet or natural satellite
A lithosphere (from Ancient Greek "líthos" 'rocky' and "sphaíra" 'sphere') is the rigid, outermost rocky shell of a terrestrial planet or natural satellite. On Earth, it is composed of the crust and the lithospheric mantle, the topmost portion of the upper mantle that behaves elastically on time scales of up to thousands of years or more. The crust and upper mantle are distinguished on the basis of chemistry and mineralogy.
Earth's lithosphere.
Earth's lithosphere, which constitutes the hard and rigid outer vertical layer of the Earth, includes the crust and the lithospheric mantle (or mantle lithosphere), the uppermost part of the mantle that is not convecting. The lithosphere is underlain by the asthenosphere which is the weaker, hotter, and deeper part of the upper mantle that is able to convect. The lithosphere–asthenosphere boundary is defined by a difference in response to stress. The lithosphere remains rigid for very long periods of geologic time in which it deforms elastically and through brittle failure, while the asthenosphere deforms viscously and accommodates strain through plastic deformation.
The thickness of the lithosphere is thus considered to be the depth to the isotherm associated with the transition between brittle and viscous behavior. The temperature at which olivine becomes ductile (~) is often used to set this isotherm because olivine is generally the weakest mineral in the upper mantle.
The lithosphere is subdivided horizontally into tectonic plates, which often include terranes accreted from other plates.
History of the concept.
The concept of the lithosphere as Earth's strong outer layer was described by the English mathematician A. E. H. Love in his 1911 monograph "Some problems of Geodynamics" and further developed by the American geologist Joseph Barrell, who wrote a series of papers about the concept and introduced the term "lithosphere". The concept was based on the presence of significant gravity anomalies over continental crust, from which he inferred that there must exist a strong, solid upper layer (which he called the lithosphere) above a weaker layer which could flow (which he called the asthenosphere). These ideas were expanded by the Canadian geologist Reginald Aldworth Daly in 1940 with his seminal work "Strength and Structure of the Earth." They have been broadly accepted by geologists and geophysicists. These concepts of a strong lithosphere resting on a weak asthenosphere are essential to the theory of plate tectonics.
Types.
The lithosphere can be divided into oceanic and continental lithosphere. Oceanic lithosphere is associated with oceanic crust (having a mean density of about ) and exists in the ocean basins. Continental lithosphere is associated with continental crust (having a mean density of about ) and underlies the continents and continental shelves.
Oceanic lithosphere.
Oceanic lithosphere consists mainly of mafic crust and ultramafic mantle (peridotite) and is denser than continental lithosphere. Young oceanic lithosphere, found at mid-ocean ridges, is no thicker than the crust, but oceanic lithosphere thickens as it ages and moves away from the mid-ocean ridge. The oldest oceanic lithosphere is typically about thick. This thickening occurs by conductive cooling, which converts hot asthenosphere into lithospheric mantle and causes the oceanic lithosphere to become increasingly thick and dense with age. In fact, oceanic lithosphere is a thermal boundary layer for the convection in the mantle. The thickness of the mantle part of the oceanic lithosphere can be approximated as a thermal boundary layer that thickens as the square root of time.
formula_0
Here, formula_1 is the thickness of the oceanic mantle lithosphere, formula_2 is the thermal diffusivity (approximately ) for silicate rocks, and formula_3 is the age of the given part of the lithosphere. The age is often equal to L/V, where L is the distance from the spreading centre of the mid-ocean ridge, and V is the velocity of the lithospheric plate.
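As a rough numerical illustration of this square-root growth, the sketch below uses a typical silicate-rock thermal diffusivity of about 8×10−7 m²/s, an assumed value chosen only for the example:
```python
# Thermal boundary layer estimate h ~ 2*sqrt(kappa*t) for oceanic lithosphere.
import math

KAPPA = 8.0e-7                      # thermal diffusivity, m^2/s (assumed typical value)
SECONDS_PER_MYR = 3.156e13          # seconds in one million years

def lithosphere_thickness_km(age_myr):
    t = age_myr * SECONDS_PER_MYR
    return 2.0 * math.sqrt(KAPPA * t) / 1000.0

for age in (1, 10, 50, 100):
    print(age, round(lithosphere_thickness_km(age)))   # thickness grows as sqrt(age)
```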
Oceanic lithosphere is less dense than asthenosphere for a few tens of millions of years but after this becomes increasingly denser than asthenosphere. While chemically differentiated oceanic crust is lighter than asthenosphere, thermal contraction of the mantle lithosphere makes it more dense than the asthenosphere. The gravitational instability of mature oceanic lithosphere has the effect that at subduction zones, oceanic lithosphere invariably sinks underneath the overriding lithosphere, which can be oceanic or continental. New oceanic lithosphere is constantly being produced at mid-ocean ridges and is recycled back to the mantle at subduction zones. As a result, oceanic lithosphere is much younger than continental lithosphere: the oldest oceanic lithosphere is about 170 million years old, while parts of the continental lithosphere are billions of years old.
Subducted lithosphere.
Geophysical studies in the early 21st century posit that large pieces of the lithosphere have been subducted into the mantle as deep as to near the core-mantle boundary, while others "float" in the upper mantle. Yet others stick down into the mantle as far as but remain "attached" to the continental plate above, similar to the extent of the old concept of "tectosphere" revisited by Jordan in 1988. Subducting lithosphere remains rigid (as demonstrated by deep earthquakes along Wadati–Benioff zone) to a depth of about .
Continental lithosphere.
Continental lithosphere has a range in thickness from about to perhaps ; the upper approximately of typical continental lithosphere is crust. The crust is distinguished from the upper mantle by the change in chemical composition that takes place at the Moho discontinuity. The oldest parts of continental lithosphere underlie cratons, and the mantle lithosphere there is thicker and less dense than typical; the relatively low density of such mantle "roots of cratons" helps to stabilize these regions.
Because of its relatively low density, continental lithosphere that arrives at a subduction zone cannot subduct much further than about before resurfacing. As a result, continental lithosphere is not recycled at subduction zones the way oceanic lithosphere is recycled. Instead, continental lithosphere is a nearly permanent feature of the Earth.
Mantle xenoliths.
Geoscientists can directly study the nature of the subcontinental mantle by examining mantle xenoliths brought up in kimberlite, lamproite, and other volcanic pipes. The histories of these xenoliths have been investigated by many methods, including analyses of abundances of isotopes of osmium and rhenium. Such studies have confirmed that mantle lithospheres below some cratons have persisted for periods in excess of 3 billion years, despite the mantle flow that accompanies plate tectonics.
Microorganisms.
The upper part of the lithosphere is a large habitat for microorganisms, with some found more than below Earth's surface.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " h \\, \\sim \\, 2\\, \\sqrt{ \\kappa t } "
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "\\kappa"
},
{
"math_id": 3,
"text": "t"
}
] |
https://en.wikipedia.org/wiki?curid=145716
|
1457254
|
Small-world network
|
Graph where most nodes are reachable in a small number of steps
A small-world network is a graph characterized by a high clustering coefficient and low distances. In the example of a social network, high clustering implies a high probability that two friends of one person are friends themselves. The low distances, on the other hand, mean that there is a short chain of social connections between any two people (this effect is known as six degrees of separation). Specifically, a small-world network is defined to be a network where the typical distance "L" between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes "N" in the network, that is:
formula_0
while the global clustering coefficient is not small.
In the context of a social network, this results in the small world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the small-world effect, including social networks, wikis such as Wikipedia, gene networks, and even the underlying architecture of the Internet. It is the inspiration for many network-on-chip architectures in contemporary computer hardware.
A certain category of small-world networks was identified as a class of random graphs by Duncan Watts and Steven Strogatz in 1998. They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient and the average node-to-node distance (also known as the average shortest path length). Purely random graphs, built according to the Erdős–Rényi (ER) model, exhibit a small average shortest path length (varying typically as the logarithm of the number of nodes) along with a small clustering coefficient. Watts and Strogatz found that in fact many real-world networks have a small average shortest path length, but also a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, now known as the Watts–Strogatz model, with (i) a small average shortest path length, and (ii) a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" (such as a lattice) and a small world was first described by Barthelemy and Amaral in 1999. This work was followed by many studies, including exact results (Barrat and Weigt, 1999; Dorogovtsev and Mendes; Barmpoutis and Murray, 2010).
Properties of small-world networks.
Small-world networks tend to contain cliques, and near-cliques, meaning sub-networks which have connections between almost any two nodes within them. This follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path. This follows from the defining property that the mean-shortest path length be small. Several other properties are often associated with small-world networks. Typically there is an over-abundance of "hubs" – nodes in the network with a high number of connections (known as high degree nodes). These hubs serve as the common connections mediating the short path lengths between other edges. By analogy, the small-world network of airline flights has a small mean-path length (i.e. between any two cities you are likely to have to take three or fewer flights) because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network). Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above.
Network small-worldness has been quantified by a small-world coefficient, formula_1, calculated by comparing the clustering and path length of a given network to an equivalent Erdős–Rényi random graph with the same average degree.
formula_2
If formula_3 (formula_4 and formula_5), the network is small-world. However, this metric is known to perform poorly because it is heavily influenced by the network's size.
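The coefficient formula_1 can be estimated directly with the networkx library, as in the sketch below, which compares a Watts–Strogatz graph with an Erdős–Rényi graph of the same size and average degree; the graph parameters are arbitrary example values.
```python
# Estimate the small-world coefficient (C/C_r) / (L/L_r) with networkx.
import networkx as nx

n, k, p = 1000, 10, 0.1
G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)

R = nx.erdos_renyi_graph(n, k / (n - 1), seed=42)
R = R.subgraph(max(nx.connected_components(R), key=len))  # keep the giant component

C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
C_r, L_r = nx.average_clustering(R), nx.average_shortest_path_length(R)

sigma = (C / C_r) / (L / L_r)
print(round(C, 3), round(L, 2), round(sigma, 1))   # sigma >> 1 indicates a small world
```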
Another method for quantifying network small-worldness utilizes the original definition of the small-world network comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure (formula_6) is defined as
formula_7
where the characteristic path length "L" and clustering coefficient "C" are calculated from the network being tested, "C""ℓ" is the clustering coefficient for an equivalent lattice network, and "L""r" is the characteristic path length for an equivalent random network.
Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks. The Small World Index (SWI) is defined as
formula_8
Both "ω"′ and SWI range between 0 and 1, and have been shown to capture aspects of small-worldness. However, they adopt slightly different conceptions of ideal small-worldness. For a given set of constraints (e.g. size, density, degree distribution), there exists a network for which "ω"′ = 1, and thus "ω" aims to capture the extent to which a network with given constraints as small worldly as possible. In contrast, there may not exist a network for which SWI = 1, the thus SWI aims to capture the extent to which a network with given constraints approaches the theoretical small world ideal of a network where "C" ≈ "C""ℓ" and "L" ≈ "L""r".
Examples of small-world networks.
Small-world properties are found in many real-world phenomena, including websites with navigation menus, food webs, electric power grids, metabolite processing networks, networks of brain neurons, voter networks, telephone call graphs, and airport networks. Cultural networks and word co-occurrence networks have also been shown to be small-world networks.
Networks of connected proteins have small world properties such as power-law obeying degree distributions. Similarly transcriptional networks, in which the nodes are genes, and they are linked if one gene has an up or down-regulatory genetic influence on the other, have small world network properties.
Examples of non-small-world networks.
For example, the famous theory of "six degrees of separation" between people tacitly presumes that the domain of discourse is the set of people alive at any one time. The number of degrees of separation between Albert Einstein and Alexander the Great is almost certainly greater than 30, and this network does not have small-world properties. A similarly constrained network would be the "went to school with" network: if two people went to the same college ten years apart from one another, it is unlikely that they have acquaintances in common amongst the student body.
Similarly, the number of relay stations through which a message must pass was not always small. In the days when the post was carried by hand or on horseback, the number of times a letter changed hands between its source and destination would have been much greater than it is today. The number of times a message changed hands in the days of the visual telegraph (circa 1800–1850) was determined by the requirement that two stations be connected by line-of-sight.
Tacit assumptions, if not examined, can cause a bias in the literature on graphs in favor of finding small-world networks (an example of the file drawer effect resulting from the publication bias).
Network robustness.
It is hypothesized by some researchers, such as Albert-László Barabási, that the prevalence of small world networks in biological systems may reflect an evolutionary advantage of such an architecture. One possibility is that small-world networks are more robust to perturbations than other network architectures. If this were the case, it would provide an advantage to biological systems that are subject to damage by mutation or viral infection.
In a small world network with a degree distribution following a power-law, deletion of a random node rarely causes a dramatic increase in mean-shortest path length (or a dramatic decrease in the clustering coefficient). This follows from the fact that most shortest paths between nodes flow through hubs, and if a peripheral node is deleted it is unlikely to interfere with passage between other peripheral nodes. As the fraction of peripheral nodes in a small world network is much higher than the fraction of hubs, the probability of deleting an important node is very low. For example, if the small airport in Sun Valley, Idaho was shut down, it would not increase the average number of flights that other passengers traveling in the United States would have to take to arrive at their respective destinations. However, if random deletion of a node hits a hub by chance, the average path length can increase dramatically. This can be observed annually when northern hub airports, such as Chicago's O'Hare airport, are shut down because of snow; many people have to take additional flights.
By contrast, in a random network, in which all nodes have roughly the same number of connections, deleting a random node is likely to increase the mean-shortest path length slightly but significantly for almost any node deleted. In this sense, random networks are vulnerable to random perturbations, whereas small-world networks are robust. However, small-world networks are vulnerable to targeted attack of hubs, whereas random networks cannot be targeted for catastrophic failure.
Construction of small-world networks.
The main mechanism to construct small-world networks is the Watts–Strogatz mechanism.
Small-world networks can also be introduced with time-delay, which will not only produce fractals but also chaos under the right conditions, or transition to chaos in dynamics networks.
Soon after the publication of Watts–Strogatz mechanism, approaches have been developed by Mashaghi and co-workers to generate network models that exhibit high degree correlations, while preserving the desired degree distribution and small-world properties. These approaches are based on edge-dual transformation and can be used to generate analytically solvable small-world network models for research into these systems.
Degree–diameter graphs are constructed such that the number of neighbors each vertex in the network has is bounded, while the distance from any given vertex in the network to any other vertex (the diameter of the network) is minimized. Constructing such small-world networks is done as part of the effort to find graphs of order close to the Moore bound.
Another way to construct a small world network from scratch is given in Barmpoutis "et al.", where a network with very small average distance and very large average clustering is constructed. A fast algorithm of constant complexity is given, along with measurements of the robustness of the resulting graphs. Depending on the application of each network, one can start with one such "ultra small-world" network, and then rewire some edges, or use several small such networks as subgraphs to a larger graph.
Small-world properties can arise naturally in social networks and other real-world systems via the process of dual-phase evolution. This is particularly common where time or spatial constraints limit the addition of connections between vertices. The mechanism generally involves periodic shifts between phases, with connections being added during a "global" phase and being reinforced or removed during a "local" phase.
Small-world networks can change from scale-free class to broad-scale class whose connectivity distribution has a sharp cutoff following a power law regime due to constraints limiting the addition of new links. For strong enough constraints, scale-free networks can even become single-scale networks whose connectivity distribution is characterized as fast decaying. It was also shown analytically that scale-free networks are ultra-small, meaning that the distance scales according to formula_9.
Applications.
Applications to sociology.
The advantages of small world networking for social movement groups are its resistance to change, owing to the filtering apparatus of highly connected nodes, and its effectiveness in relaying information while keeping the number of links required to connect a network to a minimum.
The small world network model is directly applicable to affinity group theory represented in sociological arguments by William Finnegan. Affinity groups are social movement groups that are small and semi-independent pledged to a larger goal or function. Though largely unaffiliated at the node level, a few members of high connectivity function as connectivity nodes, linking the different groups through networking. This small world model has proven an extremely effective protest organization tactic against police action. Clay Shirky argues that the larger the social network created through small world networking, the more valuable the nodes of high connectivity within the network. The same can be said for the affinity group model, where the few people within each group connected to outside groups allowed for a large amount of mobilization and adaptation. A practical example of this is small world networking through affinity groups that William Finnegan outlines in reference to the 1999 Seattle WTO protests.
Applications to earth sciences.
Many networks studied in geology and geophysics have been shown to have characteristics of small-world networks. Networks defined in fracture systems and porous substances have demonstrated these characteristics. The seismic network in the Southern California region may be a small-world network. The examples above occur on very different spatial scales, demonstrating the scale invariance of the phenomenon in the earth sciences.
Applications to computing.
Small-world networks have been used to estimate the usability of information stored in large databases. The measure is termed the Small World Data Transformation Measure. The more closely a database's links align with a small-world network, the more likely a user is to be able to extract information in the future. This usability typically comes at the cost of the amount of information that can be stored in the same repository.
The Freenet peer-to-peer network has been shown to form a small-world network in simulation, allowing information to be stored and retrieved in a manner that scales efficiently as the network grows.
Nearest neighbour search solutions such as HNSW (Hierarchical Navigable Small World) use small-world networks to efficiently find information in large item corpora.
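As a hedged illustration of this idea, the snippet below uses the hnswlib library (one widely used HNSW implementation); the parameter values are arbitrary examples rather than recommendations.

```python
import numpy as np
import hnswlib

dim, num_elements = 64, 10000
data = np.random.random((num_elements, dim)).astype(np.float32)

# Build a navigable small-world graph over the vectors.
index = hnswlib.Index(space='l2', dim=dim)
index.init_index(max_elements=num_elements, ef_construction=200, M=16)
index.add_items(data, np.arange(num_elements))
index.set_ef(50)  # trade-off between query speed and recall

# Approximate 5 nearest neighbours of the first vector.
labels, distances = index.knn_query(data[:1], k=5)
```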
Small-world neural networks in the brain.
Both anatomical connections in the brain and the synchronization networks of cortical neurons exhibit small-world topology.
Structural and functional connectivity in the brain has also been found to reflect the small-world topology of short path length and high clustering. The network structure has been found in the mammalian cortex across species as well as in large scale imaging studies in humans. Advances in connectomics and network neuroscience have found the small-worldness of neural networks to be associated with efficient communication.
In neural networks, short path length between nodes and high clustering at network hubs support efficient communication between brain regions at the lowest energetic cost. The brain is constantly processing and adapting to new information, and the small-world network model supports the intense communication demands of neural networks. High clustering of nodes forms local networks which are often functionally related. Short path length between these hubs supports efficient global communication. This balance enables the efficiency of the global network while simultaneously equipping the brain to handle disruptions and maintain homeostasis, due to local subsystems being isolated from the global network. Loss of small-world network structure has been found to indicate changes in cognition and increased risk of psychological disorders.
In addition to characterizing whole-brain functional and structural connectivity, specific neural systems, such as the visual system, exhibit small-world network properties.
A small-world network of neurons can exhibit short-term memory. A computer model developed by Sara Solla had two stable states, a property (called bistability) thought to be important in memory storage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a "memory"), and stasis (holding it). Small world neuronal networks have also been used as models to understand seizures.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
Books.
<templatestyles src="Refbegin/styles.css" />
Journal articles.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "L \\propto \\log N"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\sigma = \\frac \\frac C {C_r} \\frac L {L_r}"
},
{
"math_id": 3,
"text": "\\sigma > 1"
},
{
"math_id": 4,
"text": "C \\gg C_r"
},
{
"math_id": 5,
"text": "L \\approx {L_r}"
},
{
"math_id": 6,
"text": "\\omega"
},
{
"math_id": 7,
"text": "\\omega = \\frac{L_r} L - \\frac C {C_\\ell}"
},
{
"math_id": 8,
"text": " \\text{SWI} = \\frac{L-L_\\ell}{L_r-L_\\ell}\\times\\frac{C-C_r}{C_\\ell-C_r}"
},
{
"math_id": 9,
"text": "L \\propto \\log \\log N"
}
] |
https://en.wikipedia.org/wiki?curid=1457254
|
14573391
|
Algebraic-group factorisation algorithm
|
Algebraic-group factorisation algorithms are algorithms for factoring an integer "N" by working in an algebraic group defined modulo "N" whose group structure is the direct sum of the 'reduced groups' obtained by performing the equations defining the group arithmetic modulo the unknown prime factors "p"1, "p"2, ... By the Chinese remainder theorem, arithmetic modulo "N" corresponds to arithmetic in all the reduced groups simultaneously.
The aim is to find an element which is not the identity of the group modulo "N", but is the identity modulo one of the factors, so a method for recognising such "one-sided identities" is required. In general, one finds them by performing operations that move elements around and leave the identities in the reduced groups unchanged. Once the algorithm finds a one-sided identity all future terms will also be one-sided identities, so checking periodically suffices.
Computation proceeds by picking an arbitrary element "x" of the group modulo "N" and computing a large and smooth multiple "Ax" of it; if the order of at least one but not all of the reduced groups is a divisor of A, this yields a factorisation. It need not be a prime factorisation, as the element might be an identity in more than one of the reduced groups.
Generally, A is taken as a product of the primes below some limit K, and "Ax" is computed by successive multiplication of "x" by these primes; after each multiplication, or every few multiplications, the check is made for a one-sided identity.
The two-step procedure.
It is often possible to multiply a group element by several small integers more quickly than by their product, generally by difference-based methods; one calculates differences between consecutive primes and adds consecutively by the formula_0. This means that a two-step procedure becomes sensible, first computing "Ax" by multiplying "x" by all the primes below a limit B1, and then examining "p Ax" for all the primes between B1 and a larger limit B2.
Methods corresponding to particular algebraic groups.
If the algebraic group is the multiplicative group mod "N", the one-sided identities are recognised by computing greatest common divisors with "N", and the result is the "p" − 1 method.
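The following is a minimal Python sketch of the "p" − 1 method under the conventions above: the working element is a residue modulo "N", the smooth multiple is built up by exponentiation by small prime powers, and one-sided identities are detected with a gcd against "N". The bound, the base, and the final example are illustrative choices.

```python
from math import gcd

def p_minus_1(N, bound=100000, base=2):
    """Pollard's p-1: returns a nontrivial factor of N, or None."""
    a = base
    for q in primes_up_to(bound):
        # Raise a to the largest power of q not exceeding the bound.
        qk = q
        while qk * q <= bound:
            qk *= q
        a = pow(a, qk, N)
        d = gcd(a - 1, N)          # check for a one-sided identity
        if d == N:
            return None            # identity in every reduced group; retry with another base
        if d > 1:
            return d
    return None

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = b"\x00" * len(range(i*i, limit + 1, i))
    return [i for i in range(2, limit + 1) if sieve[i]]

# Example: 2**67 - 1 = 193707721 * 761838257287, and 193707720 factors entirely
# into primes below the bound, so the smaller factor is found.
print(p_minus_1(2**67 - 1))  # prints 193707721
```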
If the algebraic group is the multiplicative group of a quadratic extension of "N", the result is the "p" + 1 method; the calculation involves pairs of numbers modulo "N". It is not possible to tell whether formula_1 is actually a quadratic extension of formula_2 without knowing the factorisation of "N", since this requires knowing whether "t" is a quadratic residue modulo "N", and there are no known methods for doing this without knowledge of the factorisation. However, provided "N" does not have a very large number of factors (in which case another method should be used first), picking a random "t" (or rather picking "A" with "t" = "A"2 − 4) will hit a quadratic non-residue fairly quickly. If "t" is a quadratic residue, the "p" + 1 method degenerates to a slower form of the "p" − 1 method.
If the algebraic group is an elliptic curve, the one-sided identities can be recognised by failure of inversion in the elliptic-curve point addition procedure, and the result is the elliptic curve method; Hasse's theorem states that the number of points on an elliptic curve modulo "p" is always within formula_3 of "p" + 1.
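The sketch below illustrates how a failed inversion exposes a factor in a bare-bones stage 1 of the elliptic curve method; it reuses primes_up_to from the "p" − 1 sketch above, and the bound and curve count are illustrative, not tuned values.

```python
from math import gcd
from random import randrange

class FoundFactor(Exception):
    """Raised when inversion fails modulo N, exposing a one-sided identity."""
    def __init__(self, d):
        self.d = d

def ec_add(P, Q, a, N):
    """Add points on y^2 = x^3 + a*x + b (mod N); b never appears in the formulas."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % N == 0:
        return None                       # point at infinity
    if P == Q:
        num, den = (3 * x1 * x1 + a) % N, (2 * y1) % N
    else:
        num, den = (y2 - y1) % N, (x2 - x1) % N
    d = gcd(den, N)
    if d > 1:
        raise FoundFactor(d)              # inversion failed: identity in some reduced group
    s = num * pow(den, -1, N) % N
    x3 = (s * s - x1 - x2) % N
    return x3, (s * (x1 - x3) - y1) % N

def ec_mul(k, P, a, N):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, N)
        P = ec_add(P, P, a, N)
        k >>= 1
    return R

def ecm_stage1(N, B1=10000, curves=50):
    """Lenstra ECM, stage 1 only: returns a nontrivial factor of N or None."""
    for _ in range(curves):
        a, x, y = randrange(N), randrange(N), randrange(N)  # random curve through (x, y)
        P = (x, y)
        try:
            for q in primes_up_to(B1):
                qk = q
                while qk * q <= B1:
                    qk *= q
                P = ec_mul(qk, P, a, N)
        except FoundFactor as f:
            if 1 < f.d < N:
                return f.d
    return None
```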
All three of the above algebraic groups are used by the GMP-ECM package, which includes efficient implementations of the two-stage procedure, and an implementation of the PRAC group-exponentiation algorithm which is rather more efficient than the standard binary exponentiation approach.
The use of other algebraic groups—higher-order extensions of "N" or groups corresponding to algebraic curves of higher genus—is occasionally proposed, but almost always impractical. These methods end up with smoothness constraints on numbers of the order of "p""d" for some "d" > 1, which are much less likely to be smooth than numbers of the order of "p".
|
[
{
"math_id": 0,
"text": "d_i r"
},
{
"math_id": 1,
"text": "\\mathbb Z/N\\mathbb Z [ \\sqrt t]"
},
{
"math_id": 2,
"text": "\\mathbb Z/N\\mathbb Z "
},
{
"math_id": 3,
"text": "2 \\sqrt p"
}
] |
https://en.wikipedia.org/wiki?curid=14573391
|
1457636
|
Clustering coefficient
|
Measure of how connected and clustered a node is in its graph
In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971; Watts and Strogatz, 1998).
Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the extent of "clustering" of a single node.
Local clustering coefficient.
The local clustering coefficient of a vertex (node) in a graph quantifies how close its neighbours are to being a clique (complete graph). Duncan J. Watts and Steven Strogatz introduced the measure in 1998 to determine whether a graph is a small-world network.
A graph formula_0 formally consists of a set of vertices formula_1 and a set of edges formula_2 between them. An edge formula_3 connects vertex formula_4 with vertex formula_5.
The neighbourhood formula_6 for a vertex formula_4 is defined as its immediately connected neighbours as follows:
formula_7
We define formula_8 as the number of vertices, formula_9, in the neighbourhood, formula_10, of vertex formula_4.
The local clustering coefficient formula_11 for a vertex formula_4 is then given by a proportion of the number of links between the vertices within its neighbourhood divided by the number of links that could possibly exist between them. For a directed graph, formula_3 is distinct from formula_12, and therefore for each neighbourhood formula_10 there are formula_13 links that could exist among the vertices within the neighbourhood (formula_8 is the number of neighbours of a vertex). Thus, the local clustering coefficient for directed graphs is given as
formula_14
An undirected graph has the property that formula_3 and formula_12 are considered identical. Therefore, if a vertex formula_4 has formula_8 neighbours, formula_15 edges could exist among the vertices within the neighbourhood. Thus, the local clustering coefficient for undirected graphs can be defined as
formula_16
Let formula_17 be the number of triangles on formula_18 for undirected graph formula_19. That is, formula_17 is the number of subgraphs of formula_19 with 3 edges and 3 vertices, one of which is formula_20. Let formula_21 be the number of triples on formula_22. That is, formula_21 is the number of subgraphs (not necessarily induced) with 2 edges and 3 vertices, one of which is formula_20 and such that formula_20 is incident to both edges. Then we can also define the clustering coefficient as
formula_23
It is simple to show that the two preceding definitions are the same, since
formula_24
These measures are 1 if every neighbour connected to formula_4 is also connected to every other vertex within the neighbourhood, and 0 if no vertex that is connected to formula_4 connects to any other vertex that is connected to formula_4.
Since any graph is fully specified by its adjacency matrix "A", the local clustering coefficient for a simple undirected graph can be expressed in terms of "A" as:
formula_25
where:
formula_26
and "Ci"=0 when "ki" is zero or one. In the above expression, the numerator counts twice the number of complete triangles that vertex "i" is involved in. In the denominator, "ki2" counts the number of edge pairs that vertex "i" is involved in plus the number of single edges traversed twice. "ki" is the number of edges connected to vertex i, and subtracting "ki" then removes the latter, leaving only a set of edge pairs that could conceivably be connected into triangles. For every such edge pair, there will be another edge pair which could form the same triangle, so the denominator counts twice the number of conceivable triangles that vertex "i" could be involved in.
Global clustering coefficient.
The global clustering coefficient is based on triplets of nodes. A triplet is three nodes that are connected by either two (open triplet) or three (closed triplet) undirected ties. A triangle graph therefore includes three closed triplets, one centred on each of the nodes (n.b. this means the three triplets in a triangle come from overlapping selections of nodes). The global clustering coefficient is the number of closed triplets (or 3 x triangles) over the total number of triplets (both open and closed). The first attempt to measure it was made by Luce and Perry (1949). This measure gives an indication of the clustering in the whole network (global), and can be applied to both undirected and directed networks (often called transitivity, see Wasserman and Faust, 1994, page 243).
The global clustering coefficient is defined as:
formula_27.
The number of closed triplets has also been referred to as 3 × triangles in the literature, so:
formula_28.
A generalisation to weighted networks was proposed by Opsahl and Panzarasa (2009), and a redefinition to two-mode networks (both binary and weighted) by Opsahl (2009).
Since any simple graph is fully specified by its adjacency matrix "A", the global clustering coefficient for an undirected graph can be expressed in terms of "A" as:
formula_29
where:
formula_26
and "C"=0 when the denominator is zero.
Network average clustering coefficient.
As an alternative to the global clustering coefficient, the overall level of clustering in a network is measured by Watts and Strogatz as the average of the local clustering coefficients of all the vertices formula_30 :
formula_31
It is worth noting that this metric places more weight on the low degree nodes, while the transitivity ratio places more weight on the high degree nodes.
A generalisation to weighted networks was proposed by Barrat et al. (2004), and a redefinition to bipartite graphs (also called two-mode networks) by Latapy et al. (2008) and Opsahl (2009).
Alternative generalisations to weighted and directed graphs have been provided by Fagiolo (2007) and Clemente and Grassi (2018).
This formula is not, by default, defined for graphs with isolated vertices; see Kaiser (2008) and Barmpoutis et al. The networks with the largest possible average clustering coefficient are found to have a modular structure, and at the same time, they have the smallest possible average distance among the different nodes.
Percolation of clustered networks.
For a random tree-like network without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by formula_32, where formula_33 is the generating function corresponding to the excess degree distribution.
In networks with low clustering, formula_34, the critical point gets scaled by formula_35 such that:
formula_36
This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network with the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable.
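As an illustrative sketch of the tree-like result and its low-clustering correction quoted above, the threshold can be estimated from a degree sequence using the mean excess degree; all names and values below are examples.

```python
import numpy as np

def percolation_threshold(degrees, C=0.0):
    """Estimate the bond-percolation threshold from a degree sequence.

    Uses p_c = 1 / g1'(1) with g1'(1) = (<k^2> - <k>) / <k> (mean excess degree),
    together with the low-clustering correction p_c -> p_c / (1 - C).
    """
    k = np.asarray(degrees, dtype=float)
    g1_prime = (np.mean(k**2) - np.mean(k)) / np.mean(k)
    return 1.0 / ((1.0 - C) * g1_prime)

# Example: Poisson-distributed degrees with mean 4 give p_c close to 1/4.
rng = np.random.default_rng(0)
degrees = rng.poisson(4, size=100000)
print(percolation_threshold(degrees))          # ~0.25 for the tree-like network
print(percolation_threshold(degrees, C=0.1))   # slightly larger with clustering
```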
For studying the robustness of clustered networks, a percolation approach has been developed.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "e_{ij}"
},
{
"math_id": 4,
"text": "v_i"
},
{
"math_id": 5,
"text": "v_j"
},
{
"math_id": 6,
"text": " N_i "
},
{
"math_id": 7,
"text": "N_i = \\{v_j : e_{ij} \\in E \\lor e_{ji} \\in E\\}."
},
{
"math_id": 8,
"text": "k_i"
},
{
"math_id": 9,
"text": "|N_i|"
},
{
"math_id": 10,
"text": "N_i"
},
{
"math_id": 11,
"text": "C_i"
},
{
"math_id": 12,
"text": "e_{ji}"
},
{
"math_id": 13,
"text": "k_i(k_i-1)"
},
{
"math_id": 14,
"text": "C_i = \\frac{|\\{e_{jk}: v_j,v_k \\in N_i, e_{jk} \\in E\\}|}{k_i(k_i-1)}."
},
{
"math_id": 15,
"text": "\\frac{k_i(k_i-1)}{2}"
},
{
"math_id": 16,
"text": "C_i = \\frac{2|\\{e_{jk}: v_j,v_k \\in N_i, e_{jk} \\in E\\}|}{k_i(k_i-1)}."
},
{
"math_id": 17,
"text": "\\lambda_G(v)"
},
{
"math_id": 18,
"text": "v \\in V(G)"
},
{
"math_id": 19,
"text": "G"
},
{
"math_id": 20,
"text": "v"
},
{
"math_id": 21,
"text": "\\tau_G(v)"
},
{
"math_id": 22,
"text": "v \\in G"
},
{
"math_id": 23,
"text": "C_i = \\frac{\\lambda_G(v)}{\\tau_G(v)}."
},
{
"math_id": 24,
"text": "\\tau_G(v) = C({k_i},2) = \\frac{1}{2}k_i(k_i-1)."
},
{
"math_id": 25,
"text": "\nC_i=\\frac{1}{k_i(k_i-1)}\\sum_{j,k} A_{ij}A_{jk}A_{ki}\n"
},
{
"math_id": 26,
"text": "\nk_i=\\sum_j A_{ij}\n"
},
{
"math_id": 27,
"text": "C = \\frac{\\mbox{number of closed triplets}}{\\mbox{number of all triplets (open and closed)}}"
},
{
"math_id": 28,
"text": "C = \\frac{3 \\times \\mbox{number of triangles}}{\\mbox{number of all triplets}}"
},
{
"math_id": 29,
"text": "\nC=\\frac{\\sum_{i,j,k} A_{ij}A_{jk}A_{ki}}{\\frac{1}{2}\\sum_i k_i(k_i-1)}\n"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "\\bar{C} = \\frac{1}{n}\\sum_{i=1}^{n} C_i."
},
{
"math_id": 32,
"text": "p_c = \\frac{1}{g_1'(1)}"
},
{
"math_id": 33,
"text": "g_1(z)"
},
{
"math_id": 34,
"text": "\n0 < C \\ll 1\n"
},
{
"math_id": 35,
"text": "\n(1-C)^{-1}\n"
},
{
"math_id": 36,
"text": "p_c = \\frac{1}{1-C}\\frac{1}{g_1'(1)}."
}
] |
https://en.wikipedia.org/wiki?curid=1457636
|
14576408
|
Radiative transfer equation and diffusion theory for photon transport in biological tissue
|
Photon transport in biological tissue can be equivalently modeled numerically with Monte Carlo simulations or analytically by the radiative transfer equation (RTE). However, the RTE is difficult to solve without introducing approximations. A common approximation summarized here is the diffusion approximation. Overall, solutions to the diffusion equation for photon transport are more computationally efficient, but less accurate than Monte Carlo simulations.
Definitions.
The RTE can mathematically model the transfer of energy as photons move inside a tissue. The flow of radiation energy through a small area element in the radiation field can be characterized by radiance formula_2 with units
formula_3. Radiance is defined as energy flow per unit normal area per unit solid angle per unit time. Here, formula_0 denotes position, formula_4 denotes unit direction vector and formula_5 denotes time (Figure 1).
Several other important physical quantities are based on the definition of radiance:
Fluence rate or intensity, formula_6, is the radiance integrated over the full solid angle at a given position and time.
Fluence, formula_7, is the fluence rate integrated over time.
Current density (energy flux), formula_8, is the net energy flow per unit area per unit time.
Radiative transfer equation.
The RTE is a differential equation describing radiance formula_2. It can be derived via conservation of energy. Briefly, the RTE states that a beam of light loses energy through divergence and extinction (including both absorption and scattering away from the beam) and gains energy from light sources in the medium and scattering directed towards the beam. Coherence, polarization and non-linearity are neglected. Optical properties such as refractive index formula_9, absorption coefficient μa, scattering coefficient μs, and scattering anisotropy formula_10 are taken as time-invariant but may vary spatially. Scattering is assumed to be elastic.
The RTE (Boltzmann equation) is thus written as:
formula_11
where
formula_12 is the speed of light in the tissue, as determined by the relative refractive index;
μt formula_13 μa + μs is the extinction coefficient;
formula_14 is the phase function, representing the probability of light with propagation direction formula_15 being scattered into solid angle formula_1 around formula_4 (in most cases of practical interest, the phase function depends only on the angle between the scattered formula_15 and incident formula_4 directions, i.e. formula_16);
the scattering anisotropy formula_10 is then given by formula_17;
and formula_18 describes the light source.
Diffusion theory.
Assumptions.
In the RTE, six different independent variables define the radiance at any spatial and temporal point (formula_19, formula_20, and formula_21 from formula_0, polar angle formula_22 and azimuthal angle formula_23 from formula_4, and formula_5). By making appropriate assumptions about the behavior of photons in a scattering medium, the number of independent variables can be reduced. These assumptions lead to the diffusion theory (and diffusion equation) for photon transport.
Two assumptions permit the application of diffusion theory to the RTE:
Relative to scattering events, absorption events are rare, so after many scattering events the radiance becomes nearly isotropic.
In a primarily scattering medium, the time for a substantial change in the current density is much longer than the time needed to traverse one transport mean free path, so the fractional change in current density over one transport mean free path is negligible.
Both of these assumptions require a high-albedo (predominantly scattering) medium.
The RTE in the diffusion approximation.
Radiance can be expanded on a basis set of spherical harmonics formula_24n, m. In diffusion theory, radiance is taken to be largely isotropic, so only the isotropic and first-order anisotropic terms are used:
formula_25
where formula_26n, m are the expansion coefficients. Radiance is expressed with 4 terms: one for n = 0 (the isotropic term) and 3 terms for n = 1 (the anisotropic terms). Using properties of spherical harmonics and the definitions of fluence rate formula_27 and current density formula_28, the isotropic and anisotropic terms can respectively be expressed as follows:
formula_29
formula_30
Hence, we can approximate radiance as
formula_31
Substituting the above expression for radiance, the RTE can be respectively rewritten in scalar and vector forms as follows (The scattering term of the RTE is integrated over the complete formula_32 solid angle. For the vector form, the RTE is multiplied by direction formula_4 before evaluation.):
formula_33
formula_34
The diffusion approximation is limited to systems where the reduced scattering coefficient is much larger than the absorption coefficient and where the layer thickness is at least of the order of a few transport mean free paths.
The diffusion equation.
Using the second assumption of diffusion theory, we note that the fractional change in current density formula_28 over one transport mean free path is negligible. The vector representation of the diffusion theory RTE reduces to Fick's law formula_35, which defines current density in terms of the gradient of fluence rate. Substituting Fick's law into the scalar representation of the RTE gives the diffusion equation:
formula_36
formula_37 is the diffusion coefficient and μ'sformula_38μs is the reduced scattering coefficient.
Notably, there is no explicit dependence on the scattering coefficient in the diffusion equation. Instead, only the reduced scattering coefficient appears in the expression for formula_39. This leads to an important relationship; diffusion is unaffected if the anisotropy of the scattering medium is changed while the reduced scattering coefficient stays constant.
Solutions to the diffusion equation.
For various configurations of boundaries (e.g. layers of tissue) and light sources, the diffusion equation may be solved by applying appropriate boundary conditions and defining the source term formula_40 as the situation demands.
Point sources in infinite homogeneous media.
A solution to the diffusion equation for the simple case of a short-pulsed point source in an infinite homogeneous medium is presented in this section. The source term in the diffusion equation becomes formula_41, where formula_0 is the position at which fluence rate is measured and formula_42 is the position of the source. The pulse peaks at time formula_43. The diffusion equation is solved for fluence rate to yield the Green function for the diffusion equation:
formula_44
The term formula_45 represents the exponential decay in fluence rate due to absorption in accordance with Beer's law. The other terms represent broadening due to scattering. Given the above solution, an arbitrary source can be characterized as a superposition of short-pulsed point sources.
Taking time variation out of the diffusion equation gives the following for a time-independent point source formula_46:
formula_47
formula_48 is the effective attenuation coefficient and indicates the rate of spatial decay in fluence.
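As a minimal numerical sketch (function and variable names are illustrative, and the tissue values are only typical near-infrared figures), the time-independent point-source solution above can be evaluated directly from the optical properties:

```python
import numpy as np

def cw_point_source_fluence(r, mu_a, mu_s_prime):
    """Fluence rate at distance r (cm) from a steady point source in an infinite medium.

    mu_a and mu_s_prime are the absorption and reduced scattering coefficients (1/cm).
    """
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient
    mu_eff = np.sqrt(mu_a / D)              # effective attenuation coefficient
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# Illustrative soft-tissue values in the near infrared: mu_a = 0.1/cm, mu_s' = 10/cm.
r = np.linspace(0.1, 3.0, 30)
phi = cw_point_source_fluence(r, mu_a=0.1, mu_s_prime=10.0)
```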
Boundary conditions.
Fluence rate at a boundary.
Consideration of boundary conditions permits use of the diffusion equation to characterize light propagation in media of limited size (where interfaces between the medium and the ambient environment must be considered). To begin to address a boundary, one can consider what happens when photons in the medium reach a boundary (i.e. a surface). The direction-integrated radiance at the boundary and directed into the medium is equal to the direction-integrated radiance at the boundary and directed out of the medium multiplied by reflectance formula_49:
formula_50
where formula_51 is normal to and pointing away from the boundary. The diffusion approximation gives an expression for radiance formula_26 in terms of fluence rate formula_52 and current density formula_53. Evaluating the above integrals after substitution gives:
formula_54
where formula_55 and formula_56.
Substituting Fick's law (formula_57) gives, at a distance from the boundary z=0,
formula_58
where formula_59 and formula_60.
The extrapolated boundary.
It is desirable to identify a zero-fluence boundary. However, the fluence rate formula_61 at a physical boundary is, in general, not zero. An extrapolated boundary, at formula_21b for which fluence rate is zero, can be determined to establish image sources. Using a first order Taylor series approximation,
formula_62
which evaluates to zero since formula_58. Thus, by definition, formula_21b must be formula_63z as defined above. Notably, when the index of refraction is the same on both sides of the boundary, formula_64F is zero and the extrapolated boundary is at formula_21bformula_65.
Pencil beam normally incident on a semi-infinite medium.
Using boundary conditions, one may approximately characterize diffuse reflectance for a pencil beam normally incident on a semi-infinite medium. The beam will be represented as two point sources in an infinite medium as follows (Figure 2):
The two point sources can be characterized as point sources in an infinite medium via
formula_72
formula_73 is the distance from observation point formula_74 to source location formula_75 in cylindrical coordinates. The linear combination of the fluence rate contributions from the two image sources is
formula_76
This can be used to get diffuse reflectance formula_64dformula_77 via Fick's law:
formula_78
formula_79 is the distance from the observation point formula_80 to the source at formula_81 and formula_82 is the distance from the observation point to the image source at formula_83bformula_68.
Properties of diffusion equation.
Scaling.
Let formula_84 be the Green function solution to the diffusion equation for a homogeneous medium with optical properties formula_85 and formula_86. Then the Green function solution for a homogeneous medium which differs from the former only in its optical properties formula_87, formula_88, such that formula_89, can be obtained with the following rescaling:
formula_90
where formula_91 and formula_92.
Such a property can also be extended to the radiance in the more general framework of the RTE, by replacing the transport coefficients formula_88, formula_86 with the extinction coefficients formula_93, formula_94.
The usefulness of this property lies in taking results obtained for a given geometry and set of optical properties, typical of a laboratory-scale setting, rescaling them, and extending them to contexts in which it would be complicated to perform measurements because of their sheer extent or inaccessibility.
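The scaling property can be checked numerically against the time-resolved point-source Green function given earlier (formula_44). The sketch below, with arbitrary illustrative parameter values, verifies that scaling both coefficients by the same factor and rescaling position and time reproduces the stated relation:

```python
import numpy as np

def greens_function(r, t, mu_a, mu_s_prime, c=3e10 / 1.4):
    """Time-resolved fluence rate for a short-pulsed point source at the origin (t' = 0)."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    return (c / (4.0 * np.pi * D * c * t) ** 1.5
            * np.exp(-r**2 / (4.0 * D * c * t))
            * np.exp(-mu_a * c * t))

mu_a, mu_s_prime, scale = 0.05, 8.0, 2.0   # example optical properties and scale factor
r, t = 1.5, 2e-10                          # cm, s
lhs = greens_function(r / scale, t / scale, scale * mu_a, scale * mu_s_prime)
rhs = scale**3 * greens_function(r, t, mu_a, mu_s_prime)
print(np.isclose(lhs, rhs))                # True: the rescaled solutions agree
```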
Dependence on absorption.
Let formula_95 be the Green function solution to the diffusion equation for a non-absorbing homogeneous medium. Then, the Green function solution for the medium when its absorption coefficient is formula_85 can be obtained as:
formula_96
Again, the same property also holds for radiance within the RTE.
Diffusion theory solutions vs. Monte Carlo simulations.
Monte Carlo simulations of photon transport, though time consuming, will accurately predict photon behavior in a scattering medium. The assumptions involved in characterizing photon behavior with the diffusion equation generate inaccuracies. Generally, the diffusion approximation is less accurate as the absorption coefficient μa increases and the scattering coefficient μs decreases.
For a photon beam incident on a medium of limited depth, error due to the diffusion approximation is most prominent within one transport mean free path of the location of photon incidence (where radiance is not yet isotropic) (Figure 3).
Among the steps in describing a pencil beam incident on a semi-infinite medium with the diffusion equation, converting the medium from anisotropic to isotropic (step 1) (Figure 4) and converting the beam to a source (step 2) (Figure 5) generate more error than converting from a single source to a pair of image sources (step 3) (Figure 6). Step 2 generates the most significant error.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\vec{r}"
},
{
"math_id": 1,
"text": "d\\Omega"
},
{
"math_id": 2,
"text": "L(\\vec{r},\\hat{s},t)"
},
{
"math_id": 3,
"text": "\\frac{\\mathrm{W}}{\\mathrm{m}^2 \\mathrm{sr}}"
},
{
"math_id": 4,
"text": "\\hat{s}"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "\\Phi(\\vec{r},t)=\\int_{4\\pi}L(\\vec{r},\\hat{s},t)d\\Omega \\quad\\left[\\frac{\\mathrm{W}}{\\mathrm{m}^2 \\mathrm{sr}}\\right]"
},
{
"math_id": 7,
"text": "F(\\vec{r})=\\int_{-\\infty}^{+\\infty}\\Phi(\\vec{r},t)dt \\quad\\left[\\frac{\\mathrm{J}}{\\mathrm{m}^2}\\right]"
},
{
"math_id": 8,
"text": "\\vec{J}(\\vec{r},t)=\\int_{4\\pi}\\hat{s}L(\\vec{r},\\hat{s},t)d\\Omega \\quad\\left[\\frac{\\mathrm{W}}{\\mathrm{m}^2}\\right]"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "g"
},
{
"math_id": 11,
"text": "\\frac{\\partial L(\\vec{r},\\hat{s},t)/c}{\\partial t} = -\\hat{s}\\cdot \\nabla L(\\vec{r},\\hat{s},t)-\\mu_tL(\\vec{r},\\hat{s},t)+\\mu_s\\int_{4\\pi}L(\\vec{r},\\hat{s}',t)P(\\hat{s}',\\hat{s})d\\Omega' + S(\\vec{r},\\hat{s},t)"
},
{
"math_id": 12,
"text": "c"
},
{
"math_id": 13,
"text": "="
},
{
"math_id": 14,
"text": "P(\\hat{s}',\\hat{s})"
},
{
"math_id": 15,
"text": "\\hat{s}'"
},
{
"math_id": 16,
"text": "P(\\hat{s}',\\hat{s})=P(\\hat{s}'\\cdot\\hat{s})"
},
{
"math_id": 17,
"text": "g=\\int_{4\\pi}(\\hat{s}'\\cdot\\hat{s})P(\\hat{s}'\\cdot\\hat{s})d\\Omega"
},
{
"math_id": 18,
"text": "S(\\vec{r},\\hat{s},t)"
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "y"
},
{
"math_id": 21,
"text": "z"
},
{
"math_id": 22,
"text": "\\theta"
},
{
"math_id": 23,
"text": "\\phi"
},
{
"math_id": 24,
"text": "Y"
},
{
"math_id": 25,
"text": "L(\\vec{r},\\hat{s},t) \\approx\\ \\sum_{n=0}^{1} \\sum_{m=-n}^{n}L_{n,m}(\\vec{r},t)Y_{n,m}(\\hat{s})"
},
{
"math_id": 26,
"text": "L"
},
{
"math_id": 27,
"text": "\\Phi(\\vec{r},t)"
},
{
"math_id": 28,
"text": "\\vec{J}(\\vec{r},t)"
},
{
"math_id": 29,
"text": "L_{0,0}(\\vec{r},t)Y_{0,0}(\\hat{s})=\\frac{\\Phi(\\vec{r},t)}{4\\pi}"
},
{
"math_id": 30,
"text": "\\sum_{m=-1}^{1}L_{1,m}(\\vec{r},t)Y_{1,m}(\\hat{s})=\\frac{3}{4\\pi}\\vec{J}(\\vec{r},t)\\cdot \\hat{s}"
},
{
"math_id": 31,
"text": "L(\\vec{r},\\hat{s},t)=\\frac{1}{4\\pi}\\Phi(\\vec{r},t)+\\frac{3}{4\\pi}\\vec{J}(\\vec{r},t)\\cdot \\hat{s}"
},
{
"math_id": 32,
"text": "4\\pi"
},
{
"math_id": 33,
"text": " \\frac{\\partial \\Phi(\\vec{r},t)}{c\\partial t} + \\mu_a\\Phi(\\vec{r},t) + \\nabla \\cdot \\vec{J}(\\vec{r},t) = S(\\vec{r},t)"
},
{
"math_id": 34,
"text": " \\frac{\\partial \\vec{J}(\\vec{r},t)}{c\\partial t} + (\\mu_a+\\mu_s')\\vec{J}(\\vec{r},t) + \\frac{1}{3}\\nabla \\Phi(\\vec{r},t) = 0"
},
{
"math_id": 35,
"text": "\\vec{J}(\\vec{r},t)=\\frac{-\\nabla \\Phi(\\vec{r},t)}{3(\\mu_a+\\mu_s')}"
},
{
"math_id": 36,
"text": " \\frac{1}{c}\\frac{\\partial \\Phi(\\vec{r},t)}{\\partial t} + \\mu_a\\Phi(\\vec{r},t) - \\nabla \\cdot [D\\nabla\\Phi(\\vec{r},t)] = S(\\vec{r},t)"
},
{
"math_id": 37,
"text": "D=\\frac{1}{3(\\mu_a+\\mu_s')}"
},
{
"math_id": 38,
"text": "=(1-g)"
},
{
"math_id": 39,
"text": "D"
},
{
"math_id": 40,
"text": "S(\\vec{r},t)"
},
{
"math_id": 41,
"text": "S(\\vec{r},t, \\vec{r}',t')=\\delta(\\vec{r}-\\vec{r}')\\delta(t-t')"
},
{
"math_id": 42,
"text": "\\vec{r}'"
},
{
"math_id": 43,
"text": "t'"
},
{
"math_id": 44,
"text": "\\Phi(\\vec{r},t;\\vec{r}',t)=\\frac{c}{[4\\pi Dc(t-t')]^{3/2}}\\exp\\left[-\\frac{\\mid \\vec{r}-\\vec{r}' \\mid ^2}{4Dc(t-t')}\\right]\\exp[-\\mu_ac(t-t')]"
},
{
"math_id": 45,
"text": "\\exp\\left[-\\mu_ac(t-t')\\right]"
},
{
"math_id": 46,
"text": "S(\\vec{r})=\\delta(\\vec{r})"
},
{
"math_id": 47,
"text": "\\Phi(\\vec{r})=\\frac{1}{4\\pi Dr}\\exp(-\\mu_{\\mathrm{eff}}r)"
},
{
"math_id": 48,
"text": "\\mu_{\\mathrm{eff}}=\\sqrt{\\frac{\\mu_a}{D}}"
},
{
"math_id": 49,
"text": "R_F"
},
{
"math_id": 50,
"text": "\\int_{\\hat{s}\\cdot \\hat{n}<0}L(\\vec{r},\\hat{s},t)\\hat{s}\\cdot \\hat{n} d\\Omega=\\int_{\\hat{s}\\cdot \\hat{n}>0}R_F(\\hat{s}\\cdot \\hat{n})L(\\vec{r},\\hat{s},t)\\hat{s}\\cdot \\hat{n}d\\Omega"
},
{
"math_id": 51,
"text": "\\hat{n}"
},
{
"math_id": 52,
"text": "\\Phi"
},
{
"math_id": 53,
"text": "\\vec{J}"
},
{
"math_id": 54,
"text": "\\frac{\\Phi(\\vec{r},t)}{4}+\\vec{J}(\\vec{r},t)\\cdot \\frac{\\hat{n}}{2}=R_{\\Phi}\\frac{\\Phi(\\vec{r},t)}{4}-R_{J}\\vec{J}(\\vec{r},t)\\cdot \\frac{\\hat{n}}{2}"
},
{
"math_id": 55,
"text": "R_{\\Phi}=\\int_{0}^{\\pi/2}2\\sin \\theta \\cos \\theta R_F(\\cos \\theta)d\\theta"
},
{
"math_id": 56,
"text": "R_{J}=\\int_{0}^{\\pi/2}3\\sin \\theta (\\cos \\theta)^2 R_F(\\cos \\theta)d\\theta"
},
{
"math_id": 57,
"text": "\\vec{J}(\\vec{r},t)=-D\\nabla \\Phi(\\vec{r},t)"
},
{
"math_id": 58,
"text": "\\Phi(\\vec{r},t)=A_z\\frac{\\partial \\Phi(\\vec{r},t)}{\\partial z}"
},
{
"math_id": 59,
"text": "A_z=2D\\frac{1+R_{\\mathrm{eff}}}{1-R_{\\mathrm{eff}}}"
},
{
"math_id": 60,
"text": "R_{\\mathrm{eff}}=\\frac{R_{\\Phi}+R_{J}}{2-R_{\\Phi}+R_J}"
},
{
"math_id": 61,
"text": "\\Phi(z=0, t)"
},
{
"math_id": 62,
"text": "\\left.\\Phi(z=-A_z,t)\\approx \\Phi(z=0,t)-A_z\\frac{\\partial \\Phi(\\vec{r},t)}{\\partial z}\\right|_{z=0}"
},
{
"math_id": 63,
"text": "-A"
},
{
"math_id": 64,
"text": "R"
},
{
"math_id": 65,
"text": "=-2D"
},
{
"math_id": 66,
"text": "=0"
},
{
"math_id": 67,
"text": "(1-g"
},
{
"math_id": 68,
"text": ")"
},
{
"math_id": 69,
"text": "l"
},
{
"math_id": 70,
"text": "a"
},
{
"math_id": 71,
"text": "+2z"
},
{
"math_id": 72,
"text": "\\Phi_{\\infty}(r,\\theta,z; r',\\theta',z')=\\frac{1}{4\\pi D\\rho}\\exp(-\\mu_{\\mathrm{eff}}\\rho)"
},
{
"math_id": 73,
"text": "\\rho"
},
{
"math_id": 74,
"text": "(r,\\theta,z)"
},
{
"math_id": 75,
"text": "(r ',\\theta ',z')"
},
{
"math_id": 76,
"text": "\\Phi(r,\\theta,z; r',\\theta',z')=a'\\Phi_{\\infty}(r,\\theta,z; r',\\theta',z')-a'\\Phi_{\\infty}(r,\\theta,z; r',\\theta',-z'-2z_b)"
},
{
"math_id": 77,
"text": "(r)"
},
{
"math_id": 78,
"text": "\\left.R_d(r)=D\\frac{\\partial \\Phi}{\\partial z}\\right|_{z=0}= \\frac{a 'z '(1+\\mu_{\\mathrm{eff}}\\rho_1)\\exp(-\\mu_{\\mathrm{eff}}\\rho_1)}{4\\pi \\rho_1^3} + \\frac{a '(z '+4D)(1+\\mu_{\\mathrm{eff}}\\rho_2)\\exp(-\\mu_{\\mathrm{eff}}\\rho_2)}{4\\pi \\rho_2^3}"
},
{
"math_id": 79,
"text": "\\rho_1"
},
{
"math_id": 80,
"text": "(r,0,0)"
},
{
"math_id": 81,
"text": "(0,0,z ')"
},
{
"math_id": 82,
"text": "\\rho_2"
},
{
"math_id": 83,
"text": "(0,0,-z '-2z"
},
{
"math_id": 84,
"text": "\\Phi(\\vec{r}, t)"
},
{
"math_id": 85,
"text": "\\mu_a"
},
{
"math_id": 86,
"text": "\\mu_s'"
},
{
"math_id": 87,
"text": "\\bar{\\mu}_a"
},
{
"math_id": 88,
"text": "\\bar{\\mu}_s'"
},
{
"math_id": 89,
"text": "\\bar{\\mu}_a/\\mu_a=\\bar{\\mu}_s'/\\mu_s'"
},
{
"math_id": 90,
"text": "\\bar{\\Phi}(\\bar{\\vec{r}}, \\bar{t})=\\left(\\frac{\\bar{\\mu}_s'}{\\mu_s'}\\right)^3\\Phi(\\vec{r}, t)"
},
{
"math_id": 91,
"text": "\\bar{\\vec{r}}=\\vec{r}\\frac{\\mu_s'}{\\bar{\\mu}_s'}"
},
{
"math_id": 92,
"text": "\\bar{t}=t\\frac{\\mu_s'}{\\bar{\\mu}_s'}"
},
{
"math_id": 93,
"text": "\\bar{\\mu}_t"
},
{
"math_id": 94,
"text": "\\mu_t"
},
{
"math_id": 95,
"text": "\\Phi(\\vec{r}, t, \\mu_a=0)"
},
{
"math_id": 96,
"text": "\\Phi(\\vec{r}, t, \\mu_a) = \\Phi(\\vec{r}, t, \\mu_a=0)\\exp(-\\mu_avt)"
}
] |
https://en.wikipedia.org/wiki?curid=14576408
|
145772
|
Food web
|
Natural interconnection of food chains
A food web is the natural interconnection of food chains and a graphical representation of what-eats-what in an ecological community. Ecologists can broadly define all life forms as either autotrophs or heterotrophs, based on their trophic levels, the position that they occupy in the food web. To maintain their bodies, grow, develop, and to reproduce, autotrophs produce organic matter from inorganic substances, including both minerals and gases such as carbon dioxide. These chemical reactions require energy, which mainly comes from the Sun and largely by photosynthesis, although a very small amount comes from bioelectrogenesis in wetlands, and mineral electron donors in hydrothermal vents and hot springs. These trophic levels are not binary, but form a gradient that includes complete autotrophs, which obtain their sole source of carbon from the atmosphere, mixotrophs (such as carnivorous plants), which are autotrophic organisms that partially obtain organic matter from sources other than the atmosphere, and complete heterotrophs that must feed to obtain organic matter.
The linkages in a food web illustrate the feeding pathways, such as where heterotrophs obtain organic matter by feeding on autotrophs and other heterotrophs. The food web is a simplified illustration of the various methods of feeding that link an ecosystem into a unified system of exchange. There are different kinds of consumer–resource interactions that can be roughly divided into herbivory, carnivory, scavenging, and parasitism. Some of the organic matter eaten by heterotrophs, such as sugars, provides energy. Autotrophs and heterotrophs come in all sizes, from microscopic to many tonnes - from cyanobacteria to giant redwoods, and from viruses and bdellovibrio to blue whales.
Charles Elton pioneered the concept of food cycles, food chains, and food size in his classical 1927 book "Animal Ecology"; Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Elton organized species into functional groups, which was the basis for Raymond Lindeman's classic and landmark paper in 1942 on trophic dynamics. Lindeman emphasized the important role of decomposer organisms in a trophic system of classification. The notion of a food web has a historical foothold in the writings of Charles Darwin and his terminology, including an "entangled bank", "web of life", "web of complex relations", and in reference to the decomposition actions of earthworms he talked about "the continued movement of the particles of earth". Even earlier, in 1768 John Bruckner described nature as "one continued web of life".
Food webs are limited representations of real ecosystems as they necessarily aggregate many species into trophic species, which are functional groups of species that have the same predators and prey in a food web. Ecologists use these simplifications in quantitative (or mathematical representation) models of trophic or consumer-resource systems dynamics. Using these models they can measure and test for generalized patterns in the structure of real food web networks. Ecologists have identified non-random properties in the topological structure of food webs. Published examples that are used in meta analysis are of variable quality with omissions. However, the number of empirical studies on community webs is on the rise and the mathematical treatment of food webs using network theory had identified patterns that are common to all. Scaling laws, for example, predict a relationship between the topology of food web predator-prey linkages and levels of species richness.
Taxonomy of a food web.
<templatestyles src="Template:Quote_box/styles.css" />
Food webs are the road-maps through Darwin's famous 'entangled bank' and have a long history in ecology. Like maps of unfamiliar ground, food webs appear bewilderingly complex. They were often published to make just that point. Yet recent studies have shown that food webs from a wide range of terrestrial, freshwater, and marine communities share a remarkable list of patterns.
Links in food webs map the feeding connections (who eats whom) in an ecological community. "Food cycle" is an obsolete term that is synonymous with food web. Ecologists can broadly group all life forms into one of two trophic layers, the autotrophs and the heterotrophs. Autotrophs produce more biomass energy, either chemically without the sun's energy or by capturing the sun's energy in photosynthesis, than they use during metabolic respiration. Heterotrophs consume rather than produce biomass energy as they metabolize, grow, and add to levels of secondary production. A food web depicts a collection of polyphagous heterotrophic consumers that network and cycle the flow of energy and nutrients from a productive base of self-feeding autotrophs.
The base or basal species in a food web are those species without prey and can include autotrophs or saprophytic detritivores (i.e., the community of decomposers in soil, biofilms, and periphyton). Feeding connections in the web are called trophic links. The number of trophic links per consumer is a measure of food web connectance. Food chains are nested within the trophic links of food webs. Food chains are linear (noncyclic) feeding pathways that trace monophagous consumers from a base species up to the top consumer, which is usually a larger predatory carnivore.
Linkages connect to nodes in a food web, which are aggregates of biological taxa called trophic species. Trophic species are functional groups that have the same predators and prey in a food web. Common examples of an aggregated node in a food web might include parasites, microbes, decomposers, saprotrophs, consumers, or predators, each containing many species in a web that can otherwise be connected to other trophic species.
Trophic levels.
Food webs have trophic levels and positions. Basal species, such as plants, form the first level and are the resource limited species that feed on no other living creature in the web. Basal species can be autotrophs or detritivores, including "decomposing organic material and its associated microorganisms which we defined as detritus, micro-inorganic material and associated microorganisms (MIP), and vascular plant material." Most autotrophs capture the sun's energy in chlorophyll, but some autotrophs (the chemolithotrophs) obtain energy by the chemical oxidation of inorganic compounds and can grow in dark environments, such as the sulfur bacterium "Thiobacillus", which lives in hot sulfur springs. The top level has top (or apex) predators which no other species kills directly for its food resource needs. The intermediate levels are filled with omnivores that feed on more than one trophic level and cause energy to flow through a number of food pathways starting from a basal species.
In the simplest scheme, the first trophic level (level 1) is plants, then herbivores (level 2), and then carnivores (level 3). The trophic level is equal to one more than the chain length, which is the number of links connecting to the base. The base of the food chain (primary producers or detritivores) is set at zero. Ecologists identify feeding relations and organize species into trophic species through extensive gut content analysis of different species. The technique has been improved through the use of stable isotopes to better trace energy flow through the web. It was once thought that omnivory was rare, but recent evidence suggests otherwise. This realization has made trophic classifications more complex.
Trophic dynamics and multitrophic interactions.
The trophic level concept was introduced in a historical landmark paper on trophic dynamics in 1942 by Raymond L. Lindeman. The basis of trophic dynamics is the transfer of energy from one part of the ecosystem to another. The trophic dynamic concept has served as a useful quantitative heuristic, but it has several major limitations including the precision by which an organism can be allocated to a specific trophic level. Omnivores, for example, are not restricted to any single level. Nonetheless, recent research has found that discrete trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
A central question in the trophic dynamic literature is the nature of control and regulation over resources and production. Ecologists use simplified one trophic position food chain models (producer, carnivore, decomposer). Using these models, ecologists have tested various types of ecological control mechanisms. For example, herbivores generally have an abundance of vegetative resources, which meant that their populations were largely controlled or regulated by predators. This is known as the top-down hypothesis or 'green-world' hypothesis. Alternatively to the top-down hypothesis, not all plant material is edible and the nutritional quality or antiherbivore defenses of plants (structural and chemical) suggests a bottom-up form of regulation or control. Recent studies have concluded that both "top-down" and "bottom-up" forces can influence community structure and the strength of the influence is environmentally context dependent. These complex multitrophic interactions involve more than two trophic levels in a food web. For example, such interactions have been discovered in the context of arbuscular mycorrhizal fungi and aphid herbivores that utilize the same plant species.
Another example of a multitrophic interaction is a trophic cascade, in which predators help to increase plant growth and prevent overgrazing by suppressing herbivores. Links in a food-web illustrate direct trophic relations among species, but there are also indirect effects that can alter the abundance, distribution, or biomass in the trophic levels. For example, predators eating herbivores indirectly influence the control and regulation of primary production in plants. Although the predators do not eat the plants directly, they regulate the population of herbivores that are directly linked to plant trophism. The net effect of direct and indirect relations is called trophic cascades. Trophic cascades are separated into species-level cascades, where only a subset of the food-web dynamic is impacted by a change in population numbers, and community-level cascades, where a change in population numbers has a dramatic effect on the entire food-web, such as the distribution of plant biomass.
The field of chemical ecology has elucidated multitrophic interactions that entail the transfer of defensive compounds across multiple trophic levels. For example, certain plant species in the "Castilleja" and "Plantago" genera have been found to produce defensive compounds called iridoid glycosides that are sequestered in the tissues of the Taylor's checkerspot butterfly larvae that have developed a tolerance for these compounds and are able to consume the foliage of these plants. These sequestered iridoid glycosides then confer chemical protection against bird predators to the butterfly larvae. Another example of this sort of multitrophic interaction in plants is the transfer of defensive alkaloids produced by endophytes living within a grass host to a hemiparasitic plant that is also using the grass as a host.
Energy flow and biomass.
<templatestyles src="Template:Quote_box/styles.css" />
The Law of Conservation of Mass dates from Antoine Lavoisier's 1789 discovery that mass is neither created nor destroyed in chemical reactions. In other words, the mass of any one element at the beginning of a reaction will equal the mass of that element at the end of the reaction.
Food webs depict energy flow via trophic linkages. Energy flow is directional, which contrasts against the cyclic flows of material through the food web systems. Energy flow "typically includes production, consumption, assimilation, non-assimilation losses (feces), and respiration (maintenance costs)." In a very general sense, energy flow (E) can be defined as the sum of metabolic production (P) and respiration (R), such that E=P+R.
Biomass represents stored energy. However, concentration and quality of nutrients and energy are variable. Many plant fibers, for example, are indigestible to many herbivores, leaving grazer community food webs more nutrient limited than detrital food webs where bacteria are able to access and release the nutrient and energy stores. "Organisms usually extract energy in the form of carbohydrates, lipids, and proteins. These polymers have a dual role as supplies of energy as well as building blocks; the part that functions as energy supply results in the production of nutrients (and carbon dioxide, water, and heat). Excretion of nutrients is, therefore, basic to metabolism." The units in energy flow webs are typically a measure of mass or energy per m2 per unit time. Different consumers are going to have different metabolic assimilation efficiencies in their diets. Each trophic level transforms energy into biomass. Energy flow diagrams illustrate the rates and efficiency of transfer from one trophic level into another and up through the hierarchy.
The biomass of each trophic level generally decreases from the base of the chain to the top. This is because energy is lost to the environment with each transfer as entropy increases. About eighty to ninety percent of an organism's energy is expended for its life processes or is lost as heat or waste. Only about ten to twenty percent of the organism's energy is generally passed to the next organism. The amount can be less than one percent in animals consuming less digestible plants, and it can be as high as forty percent in zooplankton consuming phytoplankton. Graphic representations of the biomass or productivity at each trophic level are called ecological pyramids or trophic pyramids. The transfer of energy from primary producers to top consumers can also be characterized by energy flow diagrams.
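As an illustrative back-of-the-envelope calculation (the ten percent figure is a rough rule of thumb rather than a fixed constant, and the starting value is hypothetical), the energy remaining after successive trophic transfers can be tabulated as follows:

```python
# Energy (kcal) available at each trophic level, assuming a 10% transfer efficiency.
primary_production = 10000  # hypothetical net primary production, in kcal
efficiency = 0.10
levels = ["producers", "herbivores", "primary carnivores", "secondary carnivores"]
energy = primary_production
for level in levels:
    print(f"{level}: {energy:.0f} kcal")
    energy *= efficiency
```

The rapid decay (10,000 to 1,000 to 100 to 10 kcal in this example) illustrates why food chains rarely exceed four or five links.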
Food chain.
A common metric used to quantify food web trophic structure is food chain length. Food chain length is another way of describing food webs as a measure of the number of species encountered as energy or nutrients move from the plants to top predators. There are different ways of calculating food chain length depending on what parameters of the food web dynamic are being considered: connectance, energy, or interaction. In its simplest form, the length of a chain is the number of links between a trophic consumer and the base of the web. The mean chain length of an entire web is the arithmetic average of the lengths of all chains in a food web.
In a simple predator-prey example, a deer is one step removed from the plants it eats (chain length = 1) and a wolf that eats the deer is two steps removed from the plants (chain length = 2). The relative amount or strength of influence that these parameters have on the food web addresses questions about:
Ecological pyramids.
In a pyramid of numbers, the number of consumers at each level decreases significantly, so that a single top consumer, (e.g., a polar bear or a human), will be supported by a much larger number of separate producers. There is usually a maximum of four or five links in a food chain, although food chains in aquatic ecosystems are more often longer than those on land. Eventually, all the energy in a food chain is dispersed as heat.
Ecological pyramids place the primary producers at the base. They can depict different numerical properties of ecosystems, including numbers of individuals per unit of area, biomass (g/m2), and energy (kcal m−2 yr−1). The emergent pyramidal arrangement of trophic levels with amounts of energy transfer decreasing as species become further removed from the source of production is one of several patterns that is repeated amongst the planet's ecosystems. The size of each level in the pyramid generally represents biomass, which can be measured as the dry weight of an organism. Autotrophs may have the highest global proportion of biomass, but they are closely rivaled or surpassed by microbes.
Pyramid structure can vary across ecosystems and across time. In some instances biomass pyramids can be inverted. This pattern is often identified in aquatic and coral reef ecosystems. The pattern of biomass inversion is attributed to different sizes of producers. Aquatic communities are often dominated by producers that are smaller than the consumers and that have high growth rates. Aquatic producers, such as planktonic algae or aquatic plants, lack the large accumulation of secondary growth that exists in the woody trees of terrestrial ecosystems. However, they are able to reproduce quickly enough to support a larger biomass of grazers. This inverts the pyramid. Primary consumers have longer lifespans and slower growth rates, and so accumulate more biomass than the producers they consume. Phytoplankton live just a few days, whereas the zooplankton eating the phytoplankton live for several weeks and the fish eating the zooplankton live for several consecutive years. Aquatic predators also tend to have a lower death rate than the smaller consumers, which contributes to the inverted pyramidal pattern. Population structure, migration rates, and environmental refuge for prey are other possible causes for pyramids with biomass inverted. Energy pyramids, however, will always have an upright pyramid shape if all sources of food energy are included, and this is dictated by the second law of thermodynamics.
Material flux and recycling.
Many of the Earth's elements and minerals (or mineral nutrients) are contained within the tissues and diets of organisms. Hence, mineral and nutrient cycles trace food web energy pathways. Ecologists employ stoichiometry to analyze the ratios of the main elements found in all organisms: carbon (C), nitrogen (N), phosphorus (P). There is a large transitional difference between many terrestrial and aquatic systems as C:P and C:N ratios are much higher in terrestrial systems while N:P ratios are equal between the two systems. Mineral nutrients are the material resources that organisms need for growth, development, and vitality. Food webs depict the pathways of mineral nutrient cycling as they flow through organisms. Most of the primary production in an ecosystem is not consumed, but is recycled by detritus back into useful nutrients. Many of the Earth's microorganisms are involved in the formation of minerals in a process called biomineralization. Bacteria that live in detrital sediments create and cycle nutrients and biominerals. Food web models and nutrient cycles have traditionally been treated separately, but there is a strong functional connection between the two in terms of stability, flux, sources, sinks, and recycling of mineral nutrients.
Kinds of food webs.
Food webs are necessarily aggregated and only illustrate a tiny portion of the complexity of real ecosystems. For example, the number of species on the planet is likely in the general order of 10^7, over 95% of these species consist of microbes and invertebrates, and relatively few have been named or classified by taxonomists. It is explicitly understood that natural systems are 'sloppy' and that food web trophic positions simplify the complexity of real systems that sometimes overemphasize many rare interactions. Most studies focus on the larger influences where the bulk of energy transfer occurs. "These omissions and problems are causes for concern, but on present evidence do not present insurmountable difficulties."
There are different kinds or categories of food webs:
Within these categories, food webs can be further organized according to the different kinds of ecosystems being investigated. For example, human food webs, agricultural food webs, detrital food webs, marine food webs, aquatic food webs, soil food webs, Arctic (or polar) food webs, terrestrial food webs, and microbial food webs. These characterizations stem from the ecosystem concept, which assumes that the phenomena under investigation (interactions and feedback loops) are sufficient to explain patterns within boundaries, such as the edge of a forest, an island, a shoreline, or some other pronounced physical characteristic.
Detrital web.
In a detrital web, plant and animal matter is broken down by decomposers, e.g., bacteria and fungi, and moves to detritivores and then carnivores. There are often relationships between the detrital web and the grazing web. Mushrooms produced by decomposers in the detrital web become a food source for deer, squirrels, and mice in the grazing web. Earthworms eaten by robins are detritivores consuming decaying leaves.
"Detritus can be broadly defined as any form of non-living organic matter, including different types of plant tissue (e.g. leaf litter, dead wood, aquatic macrophytes, algae), animal tissue (carrion), dead microbes, faeces (manure, dung, faecal pellets, guano, frass), as well as products secreted, excreted or exuded from organisms (e.g. extra-cellular polymers, nectar, root exudates and leachates, dissolved organic matter, extra-cellular matrix, mucilage). The relative importance of these forms of detritus, in terms of origin, size and chemical composition, varies across ecosystems."
Quantitative food webs.
Ecologists collect data on trophic levels and food webs to statistically model and mathematically calculate parameters, such as those used in other kinds of network analysis (e.g., graph theory), to study emergent patterns and properties shared among ecosystems. There are different ecological dimensions that can be mapped to create more complicated food webs, including: species composition (type of species), richness (number of species), biomass (the dry weight of plants and animals), productivity (rates of conversion of energy and nutrients into growth), and stability (food webs over time). A food web diagram illustrating species composition shows how change in a single species can directly and indirectly influence many others. Microcosms are used to simplify food web research into semi-isolated units such as small springs, decaying logs, and laboratory experiments using organisms that reproduce quickly, such as daphnia feeding on algae grown under controlled environments in jars of water.
While the complexity of real food web connections is difficult to decipher, ecologists have found mathematical models of networks an invaluable tool for gaining insight into the structure, stability, and laws of food web behaviours relative to observable outcomes. "Food web theory centers around the idea of connectance." Quantitative formulas simplify the complexity of food web structure. The number of trophic links (tL), for example, is converted into a connectance value:
formula_0,
where, S(S-1)/2 is the maximum number of binary connections among S species. "Connectance (C) is the fraction of all possible links that are realized (L/S2) and represents a standard measure of food web complexity..." The distance (d) between every species pair in a web is averaged to compute the mean distance between all nodes in a web (D) and multiplied by the total number of links (L) to obtain link-density (LD), which is influenced by scale-dependent variables such as species richness. These formulas are the basis for comparing and investigating the nature of non-random patterns in the structure of food web networks among many different types of ecosystems.
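As an illustrative sketch only (the species and links below are invented, not taken from any cited study), these quantities can be computed directly from a list of trophic links using the connectance formula above:

```python
# Connectance C = tL / (S(S-1)/2) and link density LD = tL/S for a small
# hypothetical food web; species names and links are invented for the example.
links = {
    ("algae", "zooplankton"),
    ("algae", "snail"),
    ("zooplankton", "minnow"),
    ("snail", "minnow"),
    ("minnow", "heron"),
}
species = {s for pair in links for s in pair}

S = len(species)                 # species richness
tL = len(links)                  # number of trophic links
C = tL / (S * (S - 1) / 2)       # fraction of possible links that are realized
LD = tL / S                      # links per species (link density)

print(f"S = {S}, links = {tL}, connectance = {C:.2f}, link density = {LD:.2f}")
```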
Scaling laws, complexity, chaos, and pattern correlates are common features attributed to food web structure.
Complexity and stability.
Food webs are extremely complex. Complexity is a term that conveys the mental intractability of understanding all possible higher-order effects in a food web. Sometimes in food web terminology, complexity is defined as the product of the number of species and connectance, though there have been criticisms of this definition and other proposed methods for measuring network complexity. Connectance is "the fraction of all possible links that are realized in a network". These concepts were derived and stimulated through the suggestion that complexity leads to stability in food webs, such as increasing the number of trophic levels in more species-rich ecosystems. This hypothesis was challenged through mathematical models suggesting otherwise, but subsequent studies have shown that the premise holds in real systems.
At different levels in the hierarchy of life, such as the stability of a food web, "the same overall structure is maintained in spite of an ongoing flow and change of components." The farther a living system (e.g., ecosystem) sways from equilibrium, the greater its complexity. Complexity has multiple meanings in the life sciences and in the public sphere that confuse its application as a precise term for analytical purposes in science. Complexity in the life sciences (or biocomplexity) is defined by the "properties emerging from the interplay of behavioral, biological, physical, and social interactions that affect, sustain, or are modified by living organisms, including humans".
Several concepts have emerged from the study of complexity in food webs. Complexity explains many principles pertaining to self-organization, non-linearity, interaction, cybernetic feedback, discontinuity, emergence, and stability in food webs. Nestedness, for example, is defined as "a pattern of interaction in which specialists interact with species that form perfect subsets of the species with which generalists interact", "—that is, the diet of the most specialized species is a subset of the diet of the next more generalized species, and its diet a subset of the next more generalized, and so on." Until recently, it was thought that food webs had little nested structure, but empirical evidence shows that many published webs have nested subwebs in their assembly.
Food webs are complex networks. As networks, they exhibit similar structural properties and mathematical laws that have been used to describe other complex systems, such as small world and scale free properties. The small world attribute refers to the many loosely connected nodes, non-random dense clustering of a few nodes (i.e., trophic or keystone species in ecology), and small path length compared to a regular lattice. "Ecological networks, especially mutualistic networks, are generally very heterogeneous, consisting of areas with sparse links among species and distinct areas of tightly linked species. These regions of high link density are often referred to as cliques, hubs, compartments, cohesive sub-groups, or modules...Within food webs, especially in aquatic systems, nestedness appears to be related to body size because the diets of smaller predators tend to be nested subsets of those of larger predators (Woodward & Warren 2007; YvonDurocher et al. 2008), and phylogenetic constraints, whereby related taxa are nested based on their common evolutionary history, are also evident (Cattin et al. 2004)." "Compartments in food webs are subgroups of taxa in which many strong interactions occur within the subgroups and few weak interactions occur between the subgroups. Theoretically, compartments increase the stability in networks, such as food webs."
Food webs are also complex in the way that they change in scale, seasonally, and geographically. The components of food webs, including organisms and mineral nutrients, cross the thresholds of ecosystem boundaries. This has led to the concept or area of study known as cross-boundary subsidy. "This leads to anomalies, such as food web calculations determining that an ecosystem can support one half of a top carnivore, without specifying which end." Nonetheless, real differences in structure and function have been identified when comparing different kinds of ecological food webs, such as terrestrial vs. aquatic food webs.
History of food webs.
Food webs serve as a framework to help ecologists organize the complex network of interactions among species observed in nature and around the world. One of the earliest descriptions of a food chain was given by a medieval Afro-Arab scholar named Al-Jahiz: "All animals, in short, cannot exist without food, neither can the hunting animal escape being hunted in his turn." The earliest graphical depiction of a food web was by Lorenzo Camerano in 1880, followed independently by those of Pierce and colleagues in 1912 and Victor Shelford in 1913. Two food webs about herring were produced by Victor Summerhayes and Charles Elton and Alister Hardy in 1923 and 1924. Charles Elton subsequently pioneered the concept of food cycles, food chains, and food size in his classical 1927 book "Animal Ecology"; Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. After Charles Elton's use of food webs in his 1927 synthesis, they became a central concept in the field of ecology. Elton organized species into functional groups, which formed the basis for the trophic system of classification in Raymond Lindeman's classic and landmark paper in 1942 on trophic dynamics. The notion of a food web has a historical foothold in the writings of Charles Darwin and his terminology, including an "entangled bank", "web of life", "web of complex relations", and in reference to the decomposition actions of earthworms he talked about "the continued movement of the particles of earth". Even earlier, in 1768 John Bruckner described nature as "one continued web of life".
Interest in food webs increased after Robert Paine's experimental and descriptive study of intertidal shores suggesting that food web complexity was key to maintaining species diversity and ecological stability. Many theoretical ecologists, including Sir Robert May and Stuart Pimm, were prompted by this discovery and others to examine the mathematical properties of food webs.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C= \\cfrac{t_L}{S(S-1)/2}"
}
] |
https://en.wikipedia.org/wiki?curid=145772
|
1457846
|
Nesbitt's inequality
|
Mathematical inequality
In mathematics, Nesbitt's inequality, named after Alfred Nesbitt, states that for positive real numbers "a", "b" and "c",
formula_0
with equality only when formula_1 (i.e. in an equilateral triangle).
There is no corresponding upper bound as any of the 3 fractions in the inequality can be made arbitrarily large.
It is the three-variable case of the rather more difficult Shapiro inequality, and was published at least 50 years earlier.
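A quick numerical check (not a proof, and using arbitrary sample values) illustrates the lower bound, the equality case, and the absence of an upper bound:

```python
import random

def nesbitt(a, b, c):
    """Left-hand side of Nesbitt's inequality."""
    return a / (b + c) + b / (a + c) + c / (a + b)

print(nesbitt(1.0, 1.0, 1.0))          # equality case a = b = c gives exactly 1.5
print(min(nesbitt(random.uniform(0.1, 10),
                  random.uniform(0.1, 10),
                  random.uniform(0.1, 10)) for _ in range(10_000)) >= 1.5)  # True
print(nesbitt(1e6, 1.0, 1.0))          # one dominant variable: the sum grows without bound
```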
Proof.
First proof: AM-HM inequality.
By the AM-HM inequality on formula_2,
formula_3
Clearing denominators yields
formula_4
from which we obtain
formula_5
by expanding the product and collecting like denominators. This then simplifies directly to the final result.
Second proof: Rearrangement.
Supposing formula_6, we have that
formula_7
Define
formula_8 and formula_9.
By the rearrangement inequality, the dot product of the two sequences is maximized when the terms are arranged to be both increasing or both decreasing. The order here is both decreasing. Let formula_10 and formula_11 be the vector formula_12 cyclically shifted by one and by two places; then
formula_13
formula_14
Addition then yields Nesbitt's inequality.
Third proof: Sum of Squares.
The following identity is true for all formula_15
formula_16
This clearly proves that the left side is no less than formula_17 for positive "a", "b" and "c".
Note: every rational inequality can be demonstrated by transforming it to the appropriate sum-of-squares identity—see Hilbert's seventeenth problem.
Fourth proof: Cauchy–Schwarz.
Invoking the Cauchy–Schwarz inequality on the vectors formula_18 yields
formula_19
which can be transformed into the final result as we did in the first proof.
Fifth proof: AM-GM.
Let formula_20. We then apply the AM-GM inequality to obtain
formula_21
because formula_22
Substituting out the formula_23 in favor of formula_24 yields
formula_25
formula_26
which then simplifies to the final result.
Sixth proof: Titu's lemma.
Titu's lemma, a direct consequence of the Cauchy–Schwarz inequality, states that for any sequence of formula_27 real numbers formula_28 and any sequence of formula_27 positive numbers formula_29, formula_30
We use the lemma on formula_31 and formula_32. This gives
formula_33
which results in
formula_34 i.e.,
formula_35
Seventh proof: Using homogeneity.
As the left side of the inequality is homogeneous, we may assume formula_36. Now define formula_37, formula_38, and formula_39. The desired inequality turns into formula_40, or, equivalently, formula_41. This is clearly true by Titu's Lemma.
Eighth proof: Jensen's inequality.
Let formula_42 and consider the function formula_43. This function can be shown to be convex in formula_44 and, invoking Jensen's inequality, we get
formula_45
A straightforward computation then yields
formula_46
Ninth proof: Reduction to a two-variable inequality.
By clearing denominators,
formula_47
It therefore suffices to prove that formula_48 for formula_49, as summing this three times for formula_50 and formula_51 completes the proof.
As formula_52 we are done.
|
[
{
"math_id": 0,
"text": "\\frac{a}{b+c} + \\frac{b}{a+c} + \\frac{c}{a+b} \\geq \\frac{3}{2},"
},
{
"math_id": 1,
"text": "a=b=c"
},
{
"math_id": 2,
"text": "(a+b),(b+c),(c+a)"
},
{
"math_id": 3,
"text": "\\frac{(a+b)+(a+c)+(b+c)}{3} \\geq \\frac{3}{\\displaystyle\\frac{1}{a+b} + \\frac{1}{a+c} + \\frac{1}{b+c}}."
},
{
"math_id": 4,
"text": "((a+b)+(a+c)+(b+c))\\left(\\frac{1}{a+b} + \\frac{1}{a+c} + \\frac{1}{b+c}\\right)\\geq 9,"
},
{
"math_id": 5,
"text": "2\\frac{a+b+c}{b+c} + 2\\frac{a+b+c}{a+c} + 2\\frac{a+b+c}{a+b} \\geq 9"
},
{
"math_id": 6,
"text": "a \\ge b \\ge c"
},
{
"math_id": 7,
"text": "\\frac{1}{b+c} \\ge \\frac{1}{a+c} \\ge \\frac{1}{a+b}."
},
{
"math_id": 8,
"text": "\\vec{x} = (a,b,c)\\quad"
},
{
"math_id": 9,
"text": "\\quad\\vec{y} = \\left(\\frac{1}{b+c} , \\frac{1}{a+c} , \\frac{1}{a+b}\\right) "
},
{
"math_id": 10,
"text": "\\vec y_1"
},
{
"math_id": 11,
"text": "\\vec y_2"
},
{
"math_id": 12,
"text": "\\vec y"
},
{
"math_id": 13,
"text": "\\vec{x} \\cdot \\vec{y} \\ge \\vec{x} \\cdot \\vec y_1"
},
{
"math_id": 14,
"text": "\\vec{x} \\cdot \\vec{y} \\ge \\vec{x} \\cdot \\vec y_2"
},
{
"math_id": 15,
"text": "a,b,c:"
},
{
"math_id": 16,
"text": "\\frac{a}{b+c} + \\frac{b}{a+c} + \\frac{c}{a+b} \\ge \\frac{3}{2}."
},
{
"math_id": 17,
"text": "3/2"
},
{
"math_id": 18,
"text": "\\displaystyle\\left\\langle\\sqrt{a+b},\\sqrt{b+c},\\sqrt{c+a}\\right\\rangle,\\left\\langle\\frac{1}{\\sqrt{a+b}},\\frac{1}{\\sqrt{b+c}},\\frac{1}{\\sqrt{c+a}}\\right\\rangle"
},
{
"math_id": 19,
"text": "((b+c)+(a+c)+(a+b))\\left(\\frac{1}{b+c} + \\frac{1}{a+c} + \\frac{1}{a+b}\\right) \\geq 9,"
},
{
"math_id": 20,
"text": "x=a+b, y=b+c, z=c+a"
},
{
"math_id": 21,
"text": "\\frac{x+z}{y} + \\frac{y+z}{x} + \\frac{x+y}{z} \\geq 6,"
},
{
"math_id": 22,
"text": "\\frac{x}{y} + \\frac{z}{y} + \\frac{y}{x} + \\frac{z}{x} + \\frac{x}{z} + \\frac{y}{z} \\geq 6\\sqrt[6]{\\frac{x}{y} \\cdot \\frac{z}{y} \\cdot \\frac{y}{x} \\cdot \\frac{z}{x} \\cdot \\frac{x}{z} \\cdot \\frac{y}{z}} = 6."
},
{
"math_id": 23,
"text": "x,y,z"
},
{
"math_id": 24,
"text": "a,b,c"
},
{
"math_id": 25,
"text": "\\frac{2a+b+c}{b+c} + \\frac{a+b+2c}{a+b} + \\frac{a+2b+c}{c+a} \\geq 6"
},
{
"math_id": 26,
"text": "\\frac{2a}{b+c} + \\frac{2c}{a+b} + \\frac{2b}{a+c} + 3 \\geq 6,"
},
{
"math_id": 27,
"text": "n"
},
{
"math_id": 28,
"text": "(x_k)"
},
{
"math_id": 29,
"text": "(a_k)"
},
{
"math_id": 30,
"text": "\\displaystyle\\sum_{k=1}^n\\frac{x_k^2}{a_k}\\geq\\frac{(\\sum_{k=1}^n x_k)^2}{\\sum_{k=1}^n a_k}."
},
{
"math_id": 31,
"text": "(x_k)=(1,1,1)"
},
{
"math_id": 32,
"text": "(a_k)=(b+c,a+c,a+b)"
},
{
"math_id": 33,
"text": "\\frac{1}{b+c} + \\frac{1}{c+a} + \\frac{1}{a+b} \\geq \\frac{3^2}{2(a+b+c)},"
},
{
"math_id": 34,
"text": "\\frac{a+b+c}{b+c} + \\frac{a+b+c}{c+a} + \\frac{a+b+c}{a+b} \\geq \\frac{9}{2}"
},
{
"math_id": 35,
"text": "\\frac{a}{b+c} + \\frac{b}{c+a} + \\frac{c}{a+b} \\geq \\frac{9}{2} - 3 = \\frac{3}{2}."
},
{
"math_id": 36,
"text": "a+b+c=1"
},
{
"math_id": 37,
"text": "x=a+b"
},
{
"math_id": 38,
"text": "y=b+c"
},
{
"math_id": 39,
"text": "z=c+a"
},
{
"math_id": 40,
"text": "\\frac{1-x}{x} + \\frac{1-y}{y} + \\frac{1-z}{z} \\ge \\frac{3}{2}"
},
{
"math_id": 41,
"text": "\\frac{1}{x} + \\frac{1}{y} + \\frac{1}{z} \\ge \\frac{9}{2}"
},
{
"math_id": 42,
"text": "S=a+b+c"
},
{
"math_id": 43,
"text": "f(x)=\\frac{x}{S-x}"
},
{
"math_id": 44,
"text": "[0,S]"
},
{
"math_id": 45,
"text": "\\displaystyle \\frac{\\frac{a}{S-a} + \\frac{b}{S-b} + \\frac{c}{S-c}}{3} \\geq \\frac{S/3}{S-S/3}."
},
{
"math_id": 46,
"text": "\\frac{a}{b+c} + \\frac{b}{c+a} + \\frac{c}{a+b} \\geq \\frac{3}{2}."
},
{
"math_id": 47,
"text": "\\frac{a}{b+c} + \\frac{b}{a+c} + \\frac{c}{a+b} \\geq \\frac{3}{2} \\iff 2(a^3+b^3+c^3) \\geq ab^2 + a^2b + ac^2 + a^2c + bc^2 + b^2c."
},
{
"math_id": 48,
"text": "x^3+y^3 \\geq xy^2+x^2y"
},
{
"math_id": 49,
"text": "(x,y) \\in \\mathbb{R}^2_+"
},
{
"math_id": 50,
"text": "(x,y) = (a,b),\\ (a,c),"
},
{
"math_id": 51,
"text": "(b,c)"
},
{
"math_id": 52,
"text": "x^3+y^3 \\geq xy^2+x^2y \\iff (x-y)(x^2-y^2) \\geq 0"
}
] |
https://en.wikipedia.org/wiki?curid=1457846
|
1458081
|
Yield (chemistry)
|
Amount of product formed in a reaction
In chemistry, yield, also known as reaction yield or chemical yield, refers to the amount of product obtained in a chemical reaction. Yield is one of the primary factors that scientists must consider in organic and inorganic chemical synthesis processes. In chemical reaction engineering, "yield", "conversion" and "selectivity" are terms used to describe ratios of how much of a reactant was consumed (conversion), how much desired product was formed (yield) in relation to the undesired product (selectivity), represented as X, Y, and S.
The term yield also plays an important role in analytical chemistry, as individual compounds are recovered in purification processes in a range from quantitative yield (100 %) to low yield (< 50 %).
Definitions.
In chemical reaction engineering, "yield", "conversion" and "selectivity" are terms used to describe ratios of how much of a reactant has reacted—conversion, how much of a desired product was formed—yield, and how much desired product was formed in ratio to the undesired product—selectivity, represented as X, Y, and S.
According to the "Elements of Chemical Reaction Engineering" manual, yield refers to the amount of a specific product formed per mole of reactant consumed. In chemistry, mole is used to describe quantities of reactants and products in chemical reactions.
The Compendium of Chemical Terminology defined yield as the "ratio expressing the efficiency of a mass conversion process. The yield coefficient is defined as the amount of cell mass (kg) or product formed (kg,mol) related to the consumed substrate (carbon or nitrogen source or oxygen in kg or moles) or to the intracellular ATP production (moles)."
In the section "Calculations of yields in the monitoring of reactions" in the 1996 4th edition of "Vogel's Textbook of Practical Organic Chemistry" (1978), the authors write that, "theoretical yield in an organic reaction is the weight of product which would be obtained if the reaction has proceeded to completion according to the chemical equation. The yield is the weight of the pure product which is isolated from the reaction." In 'the 1996 edition of "Vogel's Textbook", percentage yield is expressed as,
formula_0
According to the 1996 edition of "Vogel's Textbook", yields close to 100% are called "quantitative", yields above 90% are called "excellent", yields above 80% are "very good", yields above 70% are "good", yields above 50% are "fair", and yields below 40% are called "poor". In their 2002 publication, Petrucci, Harwood, and Herring wrote that "Vogel's Textbook" names were arbitrary, and not universally accepted, and depending on the nature of the reaction in question, these expectations may be unrealistically high. Yields may appear to be 100% or above when products are impure, as the measured weight of the product will include the weight of any impurities.
In their 2016 laboratory manual, "Experimental Organic Chemistry", the authors described the "reaction yield" or "absolute yield" of a chemical reaction as the "amount of pure and dry product yielded in a reaction". They wrote that knowing the stoichiometry of a chemical reaction—the numbers and types of atoms in the reactants and products, in a balanced equation "make it possible to compare different elements through stoichiometric factors." Ratios obtained by these quantitative relationships are useful in data analysis.
Theoretical, actual, and percent yields.
The percent yield is a comparison between the actual yield—which is the weight of the intended product of a chemical reaction in a laboratory setting—and the theoretical yield—the measurement of pure intended isolated product, based on the chemical equation of a flawless chemical reaction, and is defined as,
formula_0
The ideal relationship between products and reactants in a chemical reaction can be obtained by using a chemical reaction equation. Stoichiometry is used to run calculations about chemical reactions, for example, the stoichiometric mole ratio between reactants and products. The stoichiometry of a chemical reaction is based on chemical formulas and equations that provide the quantitative relation between the number of moles of various products and reactants, including yields. Stoichiometric equations are used to determine the limiting reagent or reactant—the reactant that is completely consumed in a reaction. The limiting reagent determines the theoretical yield—the relative quantity of moles of reactants and the product formed in a chemical reaction. Other reactants are said to be present in excess. The actual yield—the quantity physically obtained from a chemical reaction conducted in a laboratory—is often less than the theoretical yield. The theoretical yield is what would be obtained if all of the limiting reagent reacted to give the product in question. A more accurate yield is measured based on how much product was actually produced versus how much could be produced. The ratio of the actual yield to the theoretical yield gives the percent yield.
When more than one reactant participates in a reaction, the yield is usually calculated based on the amount of the limiting reactant, whose amount is less than stoichiometrically equivalent (or just equivalent) to the amounts of all other reactants present. Other reagents present in amounts greater than required to react with all the limiting reagent present are considered excess. As a result, the yield should not be automatically taken as a measure for reaction efficiency.
In their 1992 publication "General Chemistry", Whitten, Gailey, and Davis described the theoretical yield as the amount predicted by a stoichiometric calculation based on the number of moles of all reactants present. This calculation assumes that only one reaction occurs and that the limiting reactant reacts completely.
According to Whitten, the actual yield is always smaller (the percent yield is less than 100%), often very much so, for several reasons. For example, many reactions are incomplete and the reactants are not completely converted to products. If a reverse reaction occurs, the final state contains both reactants and products in a state of chemical equilibrium. Two or more reactions may occur simultaneously, so that some reactant is converted to undesired side products. Losses occur in the separation and purification of the desired product from the reaction mixture. Impurities are present in the starting material which do not react to give the desired product.
Example.
This is an example of an esterification reaction where one molecule acetic acid (also called ethanoic acid) reacts with one molecule ethanol, yielding one molecule ethyl acetate (a bimolecular second-order reaction of the type A + B → C):
120 g acetic acid (60 g/mol, 2.0 mol) was reacted with 230 g ethanol (46 g/mol, 5.0 mol), yielding 132 g ethyl acetate (88 g/mol, 1.5 mol). The yield was 75%.
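Restating the arithmetic of this example as a short calculation (illustrative only, using the numbers given above):

```python
# Percent-yield arithmetic for the esterification example above
# (acetic acid + ethanol -> ethyl acetate, 1:1:1 stoichiometry).
masses = {"acetic acid": 120.0, "ethanol": 230.0}                          # grams used
molar = {"acetic acid": 60.0, "ethanol": 46.0, "ethyl acetate": 88.0}      # g/mol

moles = {name: masses[name] / molar[name] for name in masses}
limiting = min(moles, key=moles.get)            # fewest moles wins for 1:1 stoichiometry

theoretical_g = moles[limiting] * molar["ethyl acetate"]   # 2.0 mol -> 176 g
actual_g = 132.0                                           # isolated product from the example
percent_yield = actual_g / theoretical_g * 100

print(f"limiting reagent: {limiting}")
print(f"theoretical yield: {theoretical_g:.0f} g, percent yield: {percent_yield:.0f}%")
```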
Purification of products.
In his 2016 "Handbook of Synthetic Organic Chemistry", Michael Pirrung wrote that yield is one of the primary factors synthetic chemists must consider in evaluating a synthetic method or a particular transformation in "multistep syntheses." He wrote that a yield based on recovered starting material (BRSM) or (BORSM) does not provide the theoretical yield or the "100% of the amount of product calculated", that is necessary in order to take the next step in the multistep systhesis.
Purification steps always lower the yield, through losses incurred during the transfer of material between reaction vessels and purification apparatus or imperfect separation of the product from impurities, which may necessitate the discarding of fractions deemed insufficiently pure. The yield of the product measured after purification (typically to >95% spectroscopic purity, or to sufficient purity to pass combustion analysis) is called the "isolated yield" of the reaction.
Internal standard yield.
Yields can also be calculated by measuring the amount of product formed (typically in the crude, unpurified reaction mixture) relative to a known amount of an added internal standard, using techniques like Gas chromatography (GC), High-performance liquid chromatography, or Nuclear magnetic resonance spectroscopy (NMR spectroscopy) or magnetic resonance spectroscopy (MRS). A yield determined using this approach is known as an "internal standard yield". Yields are typically obtained in this manner to accurately determine the quantity of product produced by a reaction, irrespective of potential isolation problems. Additionally, they can be useful when isolation of the product is challenging or tedious, or when the rapid determination of an approximate yield is desired. Unless otherwise indicated, yields reported in the synthetic organic and inorganic chemistry literature refer to isolated yields, which better reflect the amount of pure product one is likely to obtain under the reported conditions, upon repeating the experimental procedure.
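The sketch below shows the general form of such a calculation; the signal areas, proton counts, and amounts are invented for illustration and do not describe any particular instrument or published procedure:

```python
# Internal-standard yield (qNMR-style sketch): the crude mixture is spiked with a known
# amount of standard, and per-proton signal areas of product and standard are compared.
std_mmol = 0.50          # internal standard added, mmol (assumed)
std_integral = 1.00      # integrated area of the standard signal (normalized)
std_protons = 9          # protons contributing to the standard signal

prod_integral = 0.60     # integrated area of a product signal
prod_protons = 3         # protons contributing to that product signal

theoretical_mmol = 1.00  # product expected from the limiting reagent (assumed)

prod_mmol = std_mmol * (prod_integral / prod_protons) / (std_integral / std_protons)
print(f"internal standard yield: {prod_mmol / theoretical_mmol * 100:.0f}%")   # 90%
```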
Reporting of yields.
In their 2010 "Synlett" article, Martina Wernerova and organic chemist, Tomáš Hudlický, raised concerns about inaccurate reporting of yields, and offered solutions—including the proper characterization of compounds. After performing careful control experiments, Wernerova and Hudlický said that each physical manipulation (including extraction/washing, drying over desiccant, filtration, and column chromatography) results in a loss of yield of about 2%. Thus, isolated yields measured after standard aqueous workup and chromatographic purification should seldom exceed 94%. They called this phenomenon "yield inflation" and said that yield inflation had gradually crept upward in recent decades in chemistry literature. They attributed yield inflation to careless measurement of yield on reactions conducted on small scale, wishful thinking and a desire to report higher numbers for publication purposes.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mbox{percent yield} = \\frac{\\mbox{actual yield}}{\\mbox{theoretical yield}} \\times 100"
}
] |
https://en.wikipedia.org/wiki?curid=1458081
|
14581057
|
Heat flux
|
Vector representing the energy passing through a given area per unit time
In physics and engineering, heat flux or thermal flux, sometimes also referred to as heat flux density, heat-flow density or heat-flow rate intensity, is a flow of energy per unit area per unit time. Its SI units are watts per square metre (W/m2). It has both a direction and a magnitude, and so it is a vector quantity. To define the heat flux at a certain point in space, one takes the limiting case where the size of the surface becomes infinitesimally small.
Heat flux is often denoted formula_0, the subscript q specifying "heat" flux, as opposed to "mass" or "momentum" flux. Fourier's law is an important application of these concepts.
Fourier's law.
For most solids in usual conditions, heat is transported mainly by conduction and the heat flux is adequately described by Fourier's law.
Fourier's law in one dimension.
formula_1
where formula_2 is the thermal conductivity. The negative sign shows that heat flux moves from higher temperature regions to lower temperature regions.
Multi-dimensional extension.
The multi-dimensional case is similar, the heat flux goes "down" and hence the temperature gradient has the negative sign:
formula_3
where formula_4 is the gradient operator.
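A minimal numerical illustration of the one-dimensional form (the conductivity, thickness, and temperatures below are assumed round values, not reference data):

```python
# One-dimensional Fourier's law: phi_q = -k * dT/dx through a flat slab.
k = 1.0                            # thermal conductivity, W/(m K)  (assumed, roughly glass-like)
T_hot, T_cold = 293.15, 273.15     # K, temperatures of the two faces
thickness = 0.010                  # m

dT_dx = (T_cold - T_hot) / thickness    # temperature gradient along x, K/m
phi_q = -k * dT_dx                      # heat flux, W/m^2 (positive: from hot face to cold face)

print(f"heat flux = {phi_q:.0f} W/m^2")  # 2000 W/m^2
```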
Measurement.
The measurement of heat flux can be performed in a few different manners.
With a given thermal conductivity.
A commonly known, but often impractical, method is performed by measuring a temperature difference over a piece of material with a well-known thermal conductivity. This method is analogous to a standard way to measure an electric current, where one measures the voltage drop over a known resistor. Usually this method is difficult to perform since the thermal resistance of the material being tested is often not known. Accurate values for the material's thickness and thermal conductivity would be required in order to determine thermal resistance. Using the thermal resistance, along with temperature measurements on either side of the material, heat flux can then be indirectly calculated.
With unknown thermal conductivity.
A second method of measuring heat flux is by using a heat flux sensor, or heat flux transducer, to directly measure the amount of heat being transferred to/from the surface that the heat flux sensor is mounted to. The most common type of heat flux sensor is a differential temperature thermopile which operates on essentially the same principle as the first measurement method that was mentioned, except that it has the advantage that the thermal resistance/conductivity does not need to be a known parameter. These parameters do not have to be known since the heat flux sensor enables an in-situ measurement of the existing heat flux by using the Seebeck effect. However, differential thermopile heat flux sensors have to be calibrated in order to relate their output signals [μV] to heat flux values [W/m2]. Once the heat flux sensor is calibrated it can then be used to directly measure heat flux without requiring the rarely known value of thermal resistance or thermal conductivity.
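As an illustration of how such a calibration is used (the sensitivity and signal below are purely hypothetical values, not the specification of any real sensor), the measured thermopile voltage is simply divided by the calibration constant:

```python
# Converting a differential-thermopile reading to heat flux with a calibration constant.
sensitivity = 60.0      # microvolts per (W/m^2), hypothetical calibration value
signal_uV = 150.0       # measured thermopile output, microvolts

heat_flux = signal_uV / sensitivity           # W/m^2
print(f"heat flux = {heat_flux:.1f} W/m^2")   # 2.5 W/m^2
```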
Science and engineering.
One of the tools in a scientist's or engineer's toolbox is the energy balance. Such a balance can be set up for any physical system, from chemical reactors to living organisms, and generally takes the following form
formula_5
where the three formula_6 terms stand for the time rate of change of respectively the total amount of incoming energy, the total amount of outgoing energy and the total amount of accumulated energy.
Now, if the only way the system exchanges energy with its surroundings is through heat transfer, the heat rate can be used to calculate the energy balance, since
formula_7
where we have integrated the heat flux formula_0 over the surface formula_8 of the system.
In real-world applications one cannot know the exact heat flux at every point on the surface, but approximation schemes can be used to calculate the integral, for example Monte Carlo integration.
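A minimal sketch of such an approximation, using an invented flux distribution over a flat one-square-metre plate, might look as follows:

```python
import random

# Monte Carlo estimate of the total heat rate through a 1 m x 1 m flat plate with a
# made-up, position-dependent normal heat flux phi_q(x, y) in W/m^2.
def phi_q(x, y):
    return 100.0 + 50.0 * (1 - (x - 0.5) ** 2 - (y - 0.5) ** 2)   # hypothetical distribution

area = 1.0        # m^2
N = 100_000
heat_rate = area * sum(phi_q(random.random(), random.random()) for _ in range(N)) / N

print(f"estimated heat rate ~ {heat_rate:.1f} W")   # approximates the surface integral
```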
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\vec{\\phi}_\\mathrm{q}"
},
{
"math_id": 1,
"text": "\\phi_\\text{q} = -k \\frac{\\mathrm{d}T(x)}{\\mathrm{d}x}"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "\\vec{\\phi}_\\mathrm{q} = - k \\nabla T"
},
{
"math_id": 4,
"text": "{\\nabla}"
},
{
"math_id": 5,
"text": "\\big. \\frac{\\partial E_\\mathrm{in}}{\\partial t} - \\frac{\\partial E_\\mathrm{out}}{\\partial t} - \\frac{\\partial E_\\mathrm{accumulated}}{\\partial t} = 0"
},
{
"math_id": 6,
"text": "\\big. \\frac{\\partial E}{\\partial t}"
},
{
"math_id": 7,
"text": "\\frac{\\partial E_\\mathrm{in}}{\\partial t} - \\frac{\\partial E_\\mathrm{out}}{\\partial t} = \\oint_S \\vec{\\phi}_\\mathrm{q} \\cdot \\, \\mathrm{d} \\vec{S}"
},
{
"math_id": 8,
"text": "S"
}
] |
https://en.wikipedia.org/wiki?curid=14581057
|
1458192
|
Green–Kubo relations
|
Equation relating transport coefficients to correlation functions
The Green–Kubo relations (Melville S. Green 1954, Ryogo Kubo 1957) give the exact mathematical expression for a transport coefficient formula_0 in terms of the integral of the equilibrium time correlation function of the time derivative of a corresponding microscopic variable formula_1 (sometimes termed a "gross variable"):
formula_2
One intuitive way to understand this relation is that relaxations resulting from random fluctuations in equilibrium are indistinguishable from those due to an external perturbation in linear response.
Green-Kubo relations are important because they relate a macroscopic transport coefficient to the correlation function of a microscopic variable. In addition, they allow one to measure the transport coefficient without perturbing the system out of equilibrium, which has found much use in molecular dynamics simulations.
Thermal and mechanical transport processes.
Thermodynamic systems may be prevented from relaxing to equilibrium because of the application of a field (e.g. electric or magnetic field), or because the boundaries of the system are in relative motion (shear) or maintained at different temperatures, etc. This generates two classes of nonequilibrium system: mechanical nonequilibrium systems and thermal nonequilibrium systems.
The standard example of an electrical transport process is Ohm's law, which states that, at least for sufficiently small applied voltages, the current "I" is linearly proportional to the applied voltage "V",
formula_3
As the applied voltage increases one expects to see deviations from linear behavior. The coefficient of proportionality is the electrical conductance which is the reciprocal of the electrical resistance.
The standard example of a mechanical transport process is Newton's law of viscosity, which states that the shear stress formula_4 is linearly proportional to the strain rate. The strain rate formula_5 is the rate of change of the streaming velocity in the x-direction with respect to the y-coordinate, formula_6. Newton's law of viscosity states
formula_7
As the strain rate increases we expect to see deviations from linear behavior
formula_8
Another well known thermal transport process is Fourier's law of heat conduction, stating that the heat flux between two bodies maintained at different temperatures is proportional to the temperature gradient (the temperature difference divided by the spatial separation).
Linear constitutive relation.
Regardless of whether transport processes are stimulated thermally or mechanically, in the small field limit it is expected that a flux will be linearly proportional to an applied field. In the linear case the flux and the force are said to be conjugate to each other. The relation between a thermodynamic force "F" and its conjugate thermodynamic flux "J" is called a linear constitutive relation,
formula_9
"L"(0) is called a linear transport coefficient. In the case of multiple forces and fluxes acting simultaneously, the fluxes and forces will be related by a linear transport coefficient matrix. Except in special cases, this matrix is symmetric as expressed in the Onsager reciprocal relations.
In the 1950s Green and Kubo proved an exact expression for linear transport coefficients which is valid for systems of arbitrary temperature T, and density. They proved that linear transport coefficients are exactly related to the time dependence of equilibrium fluctuations in the conjugate flux,
formula_10
where formula_11 (with "k" the Boltzmann constant), and "V" is the system volume. The integral is over the equilibrium flux autocovariance function. At zero time the autocovariance is positive since it is the mean square value of the flux at equilibrium. Note that at equilibrium the mean value of the flux is zero by definition. At long times the flux at time "t", "J"("t"), is uncorrelated with its value a long time earlier "J"(0) and the autocorrelation function decays to zero. This remarkable relation is frequently used in molecular dynamics computer simulation to compute linear transport coefficients; see Evans and Morriss, "Statistical Mechanics of Nonequilibrium Liquids", Academic Press 1990.
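The following sketch shows the general shape of such a calculation. The "flux" here is a synthetic, exponentially correlated random signal standing in for the output of a simulation, and the inverse temperature and volume are placeholders, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, tau = 0.01, 200_000, 0.5      # time step, series length, correlation time
a = np.exp(-dt / tau)
J = np.zeros(n_steps)
for i in range(1, n_steps):                # equilibrium "flux" with <J> = 0 and ACF ~ exp(-t/tau)
    J[i] = a * J[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

max_lag = int(5 * tau / dt)                # integrate out to several correlation times
acf = np.array([np.mean(J[:n_steps - k] * J[k:]) for k in range(max_lag)])

beta, V = 1.0, 1.0                         # placeholder 1/(kT) and system volume
L = beta * V * dt * np.sum(acf)            # rectangle-rule estimate of the time integral
print(f"Green-Kubo estimate: {L:.3f} (exact for this toy flux: {beta * V * tau:.3f})")
```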
Nonlinear response and transient time correlation functions.
In 1985 Denis Evans and Morriss derived two exact fluctuation expressions for nonlinear transport coefficients—see Evans and Morriss in Mol. Phys, 54, 629(1985). Evans later argued that these are consequences of the extremization of free energy in Response theory as a free energy minimum.
Evans and Morriss proved that in a thermostatted system that is at equilibrium at "t" = 0, the nonlinear transport coefficient can be calculated from the so-called transient time correlation function expression:
formula_12
where the equilibrium (formula_13) flux autocorrelation function is replaced by a thermostatted, field-dependent transient autocorrelation function. At time zero formula_14, but at later times, since the field is applied, formula_15.
Another exact fluctuation expression derived by Evans and Morriss is the so-called Kawasaki expression for the nonlinear response:
formula_16
The ensemble average of the right hand side of the Kawasaki expression is to be evaluated under the application of both the thermostat and the external field. At first sight the transient time correlation function (TTCF) and Kawasaki expression might appear to be of limited use because of their innate complexity. However, the TTCF is quite useful in computer simulations for calculating transport coefficients. Both expressions can be used to derive new and useful fluctuation expressions for quantities, such as specific heats, in nonequilibrium steady states. Thus they can be used as a kind of partition function for nonequilibrium steady states.
Derivation from the fluctuation theorem and the central limit theorem.
For a thermostatted steady state, time integrals of the dissipation function are related to the dissipative flux, J, by the equation
formula_17
We note in passing that the long time average of the dissipation function is a product of the thermodynamic force and the average conjugate thermodynamic flux. It is therefore equal to the spontaneous entropy production in the system. The spontaneous entropy production plays a key role in linear irreversible thermodynamics – see de Groot and Mazur "Non-equilibrium thermodynamics" Dover.
The fluctuation theorem (FT) is valid for arbitrary averaging times, t. Let's apply the FT in the long time limit while simultaneously reducing the field so that the product formula_18 is held constant,
formula_19
Because of the particular way we take the double limit, the negative of the mean value of the flux remains a fixed number of standard deviations away from the mean as the averaging time increases (narrowing the distribution) and the field decreases. This means that, as the averaging time gets longer, the distribution near the mean flux and its negative is accurately described by the central limit theorem: the distribution is Gaussian near the mean and its negative, so that
formula_20
Combining these two relations yields (after some tedious algebra!) the exact Green–Kubo relation for the linear zero field transport coefficient, namely,
formula_21
Here are the details of the proof of Green–Kubo relations from the FT.
A proof using only elementary quantum mechanics was given by Robert Zwanzig.
Summary.
This shows the fundamental importance of the fluctuation theorem (FT) in nonequilibrium statistical mechanics.
The FT gives a generalisation of the second law of thermodynamics. It is then easy to prove the second law inequality and the Kawasaki identity. When combined with the central limit theorem, the FT also implies the Green–Kubo relations for linear transport coefficients close to equilibrium. The FT is, however, more general than the Green–Kubo Relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, no one has yet been able to derive the equations for nonlinear response theory from the FT.
The FT does "not" imply or require that the distribution of time-averaged dissipation is Gaussian. There are many examples known when the distribution is non-Gaussian and yet the FT still correctly describes the probability ratios.
|
[
{
"math_id": 0,
"text": "\\gamma"
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": "\\gamma = \\int_0^\\infty \\left\\langle \\dot{A}(t) \\dot{A}(0) \\right\\rangle \\;{\\mathrm d}t."
},
{
"math_id": 3,
"text": " I = G V, "
},
{
"math_id": 4,
"text": " S_{xy} "
},
{
"math_id": 5,
"text": " \\gamma "
},
{
"math_id": 6,
"text": " \\gamma \\mathrel\\stackrel{\\mathrm{def}}{=} \\partial u_x /\\partial y "
},
{
"math_id": 7,
"text": " S_{xy} = \\eta \\gamma.\\, "
},
{
"math_id": 8,
"text": " S_{xy} = \\eta (\\gamma )\\gamma.\\, "
},
{
"math_id": 9,
"text": "J = L(F_e = 0)F_e. \\,"
},
{
"math_id": 10,
"text": "\nL(F_e = 0) = \\beta V\\;\\int_0^\\infty {\\mathrm d}s \\, \\left\\langle J(0)J(s) \\right\\rangle _{F_e = 0}, \n\\, "
},
{
"math_id": 11,
"text": "\\beta = \\frac{1}{kT}"
},
{
"math_id": 12,
"text": "\nL(F_e ) = \\beta V\\;\\int_0^\\infty {\\mathrm d}s \\, \\left\\langle J(0)J(s) \\right\\rangle_{F_e},\n"
},
{
"math_id": 13,
"text": " F_e = 0 "
},
{
"math_id": 14,
"text": " \\left\\langle J(0) \\right\\rangle_{F_e} = 0 "
},
{
"math_id": 15,
"text": " \\left\\langle J(t) \\right\\rangle_{F_e} \\ne 0 "
},
{
"math_id": 16,
"text": "\n\\left\\langle J(t;F_e ) \\right\\rangle = \\left\\langle J(0)\\exp \\left[ -\\beta V\\int_0^t J(-s)F_e \\, {\\mathrm d}s \\right] \\right\\rangle _{F_e}. \n\\,"
},
{
"math_id": 17,
"text": " \\bar \\Omega _t = - \\beta \\overline J _t VF_e.\\, "
},
{
"math_id": 18,
"text": " F_e^2 t "
},
{
"math_id": 19,
"text": "\n\\lim_{t \\to \\infty, \\, F_e \\to 0}\\frac{1}{t} \\ln \\left( \\frac{p\\left(\\beta \\overline J _t = A\\right)}{p\\left(\\beta \\overline J_t = -A\\right)} \\right) = -\\lim_{t \\to \\infty, \\, F_e \\to 0} AVF_e,\\quad F_e^2 t = c.\n"
},
{
"math_id": 20,
"text": " \n\\lim_{t \\to \\infty, \\, F_e \\to 0} \\frac{1}{t} \\ln \\left( \\frac{p\\left(\\overline J _t\\right) = A}{p\\left(\\overline J _t\\right) = -A} \\right) = \\lim_{t \\to \\infty, \\, F_e \\to 0} \\frac{2A\\left\\langle J \\right\\rangle_{F_e}}{t\\sigma_{\\overline J (t)}^2 }.\n"
},
{
"math_id": 21,
"text": " \nL(0) = \\beta V\\;\\int_0^\\infty {\\mathrm d}t \\, \\left\\langle J(0)J(t) \\right\\rangle_{F_e = 0}.\n"
}
] |
https://en.wikipedia.org/wiki?curid=1458192
|
14582412
|
Gladstone–Dale relation
|
Equation in optical analysis of liquids
The Gladstone–Dale relation is a mathematical relation used for optical analysis of liquids, the determination of composition from optical measurements. It can also be used to calculate the density of a liquid for use in fluid dynamics (e.g., flow visualization). The relation has also been used to calculate refractive index of glass and minerals in optical mineralogy.
Uses.
In the Gladstone–Dale relation, formula_0, the index of refraction ("n") or the density ("ρ" in g/cm3) of miscible liquids that are mixed in mass fraction ("m") can be calculated from characteristic optical constants (the molar refractivity "k" in cm3/g) of pure molecular end-members. For example, for any mass ("m") of ethanol added to a mass of water, the alcohol content is determined by measuring density or index of refraction (Brix refractometer). Mass ("m") per unit volume ("V") is the density "m"/"V". Mass is conserved on mixing, but the volume of 1 cm3 of ethanol mixed with 1 cm3 of water is reduced to less than 2 cm3 due to the formation of ethanol-water bonds. The plot of volume or density versus molecular fraction of ethanol in water is a quadratic curve. However, the plot of index of refraction versus molecular fraction of ethanol in water is linear, and the weight fraction equals the fractional density.
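A small sketch of this mixing rule is given below; the pure-component refractive indices and densities are common textbook figures, while the composition and the mixture density are assumed purely for illustration:

```python
# Gladstone-Dale mixing rule: (n - 1)/rho = sum of k_i * m_i over the components,
# with k_i = (n_i - 1)/rho_i for each pure liquid and m_i the mass fractions.
pure = {
    "water":   (1.333, 0.998),   # (refractive index, density g/cm^3)
    "ethanol": (1.361, 0.789),
}
k = {name: (n - 1) / rho for name, (n, rho) in pure.items()}   # cm^3/g

mass_fraction = {"water": 0.60, "ethanol": 0.40}   # example composition
rho_mixture = 0.93                                 # g/cm^3, assumed measured value

n_mixture = 1 + rho_mixture * sum(k[c] * mass_fraction[c] for c in k)
print(f"predicted refractive index of the mixture: {n_mixture:.3f}")
```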
In the 1900s, the Gladstone–Dale relation was applied to glass, synthetic crystals and minerals. Average values for the refractivity of oxides such as MgO or SiO2 give good to excellent agreement between the calculated and measured average indices of refraction of minerals. However, specific values of refractivity are required to deal with different structure-types, and the relation required modification to deal with structural polymorphs and the birefringence of anisotropic crystal structures.
In recent optical crystallography, Gladstone–Dale constants for the refractivity of ions were related to the inter-ionic distances and angles of the crystal structure. The ionic refractivity depends on 1/"d"2, where "d" is the inter-ionic distance, indicating that a particle-like photon refracts locally due to the electrostatic Coulomb force between ions.
Expression.
The Gladstone–Dale relation can be expressed as an equation of state by re-arranging the terms to formula_1. formula_2
where "n" is the index of refraction, "D" = density and constant = Gladstone-Dale constant.
The macroscopic values ("n") and ("V") determined on bulk material are now calculated as a sum of atomic or molecular properties. Each molecule has a characteristic mass (due to the atomic weights of the elements) and atomic or molecular volume that contributes to the bulk density, and a characteristic refractivity due to a characteristic electric structure that contributes to the net index of refraction.
The refractivity of a single molecule is the refractive volume "k"(MW)/"N"A in nm3, where MW is the molecular weight and "N"A is the Avogadro constant. To calculate the optical properties of materials using the polarizability or refractivity volumes in nm3, the Gladstone–Dale relation competes with the Kramers–Kronig relation and Lorentz–Lorenz relation but differs in optical theory.
The index of refraction ("n") is calculated from the change of angle of a collimated monochromatic beam of light from vacuum into liquid using Snell's law for refraction. Using the theory of light as an electromagnetic wave, light takes a straight-line path through water at reduced speed ("v") and wavelength ("λ"). The ratio "v"/"λ" is a constant equal to the frequency ("ν") of the light, as is the quantized (photon) energy using the Planck constant and "E" = "hν". Compared to the constant speed of light in vacuum ("c"), the index of refraction of water is "n" = "c"/"v".
The Gladstone–Dale term ("n" − 1) is the non-linear optical path length or time delay. Using Isaac Newton's theory of light as a stream of particles refracted locally by (electric) forces acting between atoms, the optic path length is due to refraction at constant speed by displacement about each atom. For light passing through 1 m of water with "n" = 1.33, light traveled an extra 0.33 m compared to light that traveled 1 m in a straight line in vacuum. As the speed of light is a ratio (distance per unit time in m/s), light also took an extra 0.33 s to travel through water compared to light traveling 1 s in vacuum.
Compatibility index.
Mandarino, in his review of the Gladstone–Dale relationship in minerals proposed the concept of the Compatibility Index in comparing the physical and optical properties of minerals. This compatibility index is a required calculation for approval as a new mineral species (see IMA guidelines).
The compatibility index (CI) is defined as follows:
formula_3
where KP is the Gladstone–Dale constant derived from the physical properties (mean refractive index and measured or calculated density) and KC is the constant derived from the chemical composition.
Requirements.
The Gladstone–Dale relation requires a particle model of light because the continuous wave-front required by wave theory cannot be maintained if light encounters atoms or molecules that maintain a local electric structure with a characteristic refractivity. Similarly, the wave theory cannot explain the photoelectric effect or absorption by individual atoms and one requires a local particle of light (see "Wave–particle duality").
A local model of light consistent with these electrostatic refraction calculations occurs if the electromagnetic energy is restricted to a finite region of space. An electric-charge monopole must occur perpendicular to dipole loops of magnetic flux, but if local mechanisms for propagation are required, a periodic oscillatory exchange of electromagnetic energy occurs with transient mass. In the same manner, a change of mass occurs as an electron binds to a proton. This local photon has zero rest mass and no net charge, but has wave properties with spin-1 symmetry on trace over time. In this modern version of Newton's corpuscular theory of light, the local photon acts as a probe of the molecular or crystal structure.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(n-1)/\\rho = \\sum km"
},
{
"math_id": 1,
"text": "(n-1)V = \\sum kdm"
},
{
"math_id": 2,
"text": "(n - 1) / d = \\mathrm{constant}\n"
},
{
"math_id": 3,
"text": "\\mathrm{CI}_\\text{meas} = (1 - \\mathrm{KPD}_\\text{meas} / \\mathrm{KC} )\n\\quad \\mathrm{CI}_\\text{calc} = (1 - \\mathrm{KPD}_\\text{calc} / \\mathrm{KC} )"
}
] |
https://en.wikipedia.org/wiki?curid=14582412
|
1458409
|
Comma (music)
|
Very small interval arising from discrepancies in tuning
In music theory, a comma is a very small interval, the difference resulting from tuning one note two different ways. Strictly speaking, there are only two kinds of comma, the syntonic comma, "the difference between a just major 3rd and four just perfect 5ths less two octaves", and the Pythagorean comma, "the difference between twelve 5ths and seven octaves". The word "comma" used without qualification refers to the syntonic comma, which can be defined, for instance, as the difference between an F♯ tuned using the D-based Pythagorean tuning system, and another F♯ tuned using the D-based quarter-comma meantone tuning system. Intervals separated by the ratio 81:80 are considered the same note because the 12-note Western chromatic scale does not distinguish Pythagorean intervals from 5-limit intervals in its notation. Other intervals are considered commas because of the enharmonic equivalences of a tuning system. For example, in 53TET, B♭ and A♯ are both approximated by the same interval although they are a septimal kleisma apart.
Etymology.
Translated in this context, "comma" means "a hair" as in "off by just a hair". The word "comma" came via Latin from Greek "κόμμα", from earlier *κοπ-μα: "the result or effect of cutting". A more complete etymology is given in the article κόμμα (Ancient Greek) in the Wiktionary.
Description.
Within the same tuning system, two enharmonically equivalent notes (such as G♯ and A♭) may have a slightly different frequency, and the interval between them is a comma. For example, in extended scales produced with five-limit tuning an A♭ tuned as a major third below C5 and a G♯ tuned as two major thirds above C4 are not exactly the same note, as they would be in equal temperament. The interval between those notes, the diesis, is an easily audible comma (its size is more than 40% of a semitone).
Commas are often defined as the difference in size between two semitones. Each meantone temperament tuning system produces a 12-tone scale characterized by two different kinds of semitones (diatonic and chromatic), and hence by a comma of unique size. The same is true for Pythagorean tuning.
In just intonation, more than two kinds of semitones may be produced. Thus, a single tuning system may be characterized by several different commas. For instance, a commonly used version of five-limit tuning produces a 12-tone scale with four kinds of semitones and four commas.
The size of commas is commonly expressed and compared in terms of cents – 1⁄1200 fractions of an octave on a logarithmic scale.
Commas in different contexts.
In the column below labeled "Difference between semitones", min2 is the minor second (diatonic semitone), aug1 is the augmented unison (chromatic semitone), and S1, S2, S3, S4 are semitones as defined here. In the columns labeled "Interval 1" and "Interval 2", all intervals are presumed to be tuned in just intonation. Notice that the Pythagorean comma (κ𝜋) and the syntonic comma (κS) are basic intervals that can be used as yardsticks to define some of the other commas. For instance, the difference between them is a small comma called schisma. A schisma is not audible in many contexts, as its size is narrower than the smallest audible difference between tones (which is around six cents, also known as just-noticeable difference, or JND).
Many other commas have been enumerated and named by microtonalists.
The syntonic comma has a crucial role in the history of music. It is the amount by which some of the notes produced in Pythagorean tuning were flattened or sharpened to produce just minor and major thirds. In Pythagorean tuning, the only highly consonant intervals were the perfect fifth and its inversion, the perfect fourth. The Pythagorean major third (81:64) and minor third (32:27) were dissonant, and this prevented musicians from freely using triads and chords, forcing them to write music with relatively simple texture. Musicians in the late Middle Ages recognized that by slightly tempering the pitch of some notes, the Pythagorean thirds could be made consonant. For instance, if you decrease the frequency of E by a syntonic comma (81:80), C–E (a major third) and E–G (a minor third) become just: C–E is flattened to a just ratio of
formula_0
and at the same time E–G is sharpened to the just ratio of
formula_1
This led to the creation of a new tuning system, known as quarter-comma meantone, which permitted the full development of music with complex texture, such as polyphonic music, or melodies with instrumental accompaniment. Since then, other tuning systems were developed, and the syntonic comma was used as a reference value to temper the perfect fifths throughout the family of syntonic temperaments, including meantone temperaments.
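The sizes involved are easy to verify numerically; the short sketch below converts the defining frequency ratios to cents and recovers the schisma as the difference between the two commas:

```python
from fractions import Fraction
from math import log2

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * log2(float(ratio))

syntonic = Fraction(81, 80)                              # four just fifths less two octaves vs. a just major third
pythagorean = Fraction(3, 2) ** 12 / Fraction(2) ** 7    # twelve fifths vs. seven octaves
schisma = pythagorean / syntonic

print(f"syntonic comma    {syntonic}: {cents(syntonic):.2f} cents")
print(f"Pythagorean comma {pythagorean}: {cents(pythagorean):.2f} cents")
print(f"schisma           {schisma}: {cents(schisma):.2f} cents")
```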
Alternative definitions.
In quarter-comma meantone, and any kind of meantone temperament tuning system that tempers the fifth to a size smaller than 700 cents, the comma is a diminished second, which can be equivalently defined as the difference between:
In Pythagorean tuning, and any kind of meantone temperament tuning system that tempers the fifth to a size larger than 700 cents (such as comma meantone), the comma is the opposite of a diminished second, and therefore the opposite of the above-listed differences. More exactly, in these tuning systems the diminished second is a descending interval, while the comma is its ascending opposite. For instance, the Pythagorean comma (531441:524288, or about 23.5 cents) can be computed as the difference between a chromatic and a diatonic semitone, which is the opposite of a Pythagorean diminished second (524288:531441, or about −23.5 cents).
In each of the above-mentioned tuning systems, the above-listed differences have all the same size. For instance, in Pythagorean tuning they are all equal to the opposite of a Pythagorean comma, and in quarter comma meantone they are all equal to a diesis.
Notation.
In the years 2000–2004, Marc Sabat and Wolfgang von Schweinitz worked together in Berlin to develop a method to exactly indicate pitches in staff notation. This method was called the extended Helmholtz-Ellis JI pitch notation. Sabat and Schweinitz take the "conventional" flats, naturals and sharps as a Pythagorean series of perfect fifths. Thus, a series of perfect fifths beginning with F proceeds C G D A E B F♯ and so on. The advantage for musicians is that conventional reading of the basic fourths and fifths remains familiar. Such an approach has also been advocated by Daniel James Wolf and by Joe Monzo, who refers to it by the acronym HEWM (Helmholtz-Ellis-Wolf-Monzo). In the Sabat-Schweinitz design, syntonic commas are marked by arrows attached to the flat, natural or sharp sign, septimal commas using Giuseppe Tartini's symbol, and undecimal quartertones using the common practice quartertone signs (a single cross and backwards flat). For higher primes, additional signs have been designed. To facilitate quick estimation of pitches, cents indications may be added (downward deviations below and upward deviations above the respective accidental). The convention used is that the cents written refer to the tempered pitch implied by the flat, natural, or sharp sign and the note name. One of the great advantages of any such notation is that it allows the natural harmonic series to be precisely notated. A complete legend and fonts for the notation (see samples) are open source and available from Plainsound Music Edition. Thus a Pythagorean scale is C D E F G A B C, while a just scale is C D E F G A B C.
Composer Ben Johnston uses a "−" as an accidental to indicate a note is lowered a syntonic comma, or a "+" to indicate a note is raised a syntonic comma; however, Johnston's "basic scale" (the plain nominals A B C D E F G) is tuned to just-intonation and thus already includes the syntonic comma. Thus a Pythagorean scale is C D E+ F G A+ B+ C, while a just scale is C D E F G A B.
Tempering of commas.
Commas are frequently used in the description of musical temperaments, where they describe distinctions between musical intervals that are eliminated by that tuning system. A comma can be viewed as the distance between two musical intervals. When a given comma is tempered out in a tuning system, the ability to distinguish between those two intervals in that tuning is eliminated. For example, the difference between the diatonic semitone and chromatic semitone is called the diesis. The widely used 12 tone equal temperament "tempers out" the diesis, and thus does not distinguish between the two different types of semitones. On the other hand, 19 tone equal temperament does not temper out this comma, and thus it distinguishes between the two semitones.
Examples:
The following table lists the number of steps that correspond to various just intervals in various tuning systems. Zeros indicate that the interval is a comma (i.e. it is tempered out) in that particular equal temperament.
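One simple way to test whether an equal temperament tempers out a given comma is to round the comma's size to the nearest whole number of steps; zero steps means the comma vanishes in that tuning. The Python sketch below illustrates the idea (a rigorous treatment would map each prime factor through a consistent val, which can differ from plain rounding in borderline cases).

```python
import math
from fractions import Fraction

def steps(ratio, edo):
    """Nearest number of steps of `edo`-tone equal temperament approximating `ratio`."""
    return round(edo * math.log2(float(ratio)))

syntonic_comma    = Fraction(81, 80)
pythagorean_comma = Fraction(3**12, 2**19)
diesis            = Fraction(128, 125)

for edo in (12, 19, 31, 53):
    print(edo, steps(syntonic_comma, edo), steps(pythagorean_comma, edo), steps(diesis, edo))
# In 12-tone equal temperament all three commas round to 0 steps (they are tempered out);
# 19-tone equal temperament keeps the diesis as 1 step, so the diesis is not tempered out there.
```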
The comma can also be considered to be the fractional interval that remains after a "full circle" of some repeated chosen interval; the repeated intervals are all the same size, in relative pitch, and all the tones produced are reduced or raised by whole octaves back to the octave surrounding the starting pitch. The Pythagorean comma, for instance, is the difference obtained, say, between A♭ and G♯ after a circle of twelve just fifths. A circle of three just major thirds produces the "small diesis" of 128:125 (about 41.1 cents) between G♯ and A♭. A circle of four just minor thirds produces an interval of 648:625 (about 62.6 cents) between A♭ and G♯, etc. An interesting property of temperaments is that this difference remains whatever the tuning of the intervals forming the circle.
In this sense, commas and similar minute intervals can never be completely tempered out, whatever the tuning.
Comma sequence.
A comma sequence defines a musical temperament through a unique sequence of commas at increasing prime limits.
The first comma of the comma sequence is in the q-limit, where q is the n‑th odd prime (prime 2 being ignored because it represents the octave) and n is the number of generators. Subsequent commas are in prime limits, each the next prime in sequence above the last.
Other intervals called commas.
There are also several intervals called commas, which are not technically commas because they are not rational fractions like those above, but are irrational approximations of them. These include the Holdrian and Mercator's commas, and the pitch-to-pitch step size in 53 equal temperament.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\frac{\\ 81\\ }{ 64 } \\cdot \\frac{\\ 80\\ }{ 81 } = \\frac{\\ 1 \\cdot 5\\ }{ 4 \\cdot 1 } = \\frac{\\ 5\\ }{ 4 }"
},
{
"math_id": 1,
"text": " \\frac{ 32 }{\\ 27\\ } \\cdot \\frac{ 81 }{\\ 80\\ } = \\frac{ 2 \\cdot 3 }{\\ 1 \\cdot 5\\ } = \\frac{ 6 }{\\ 5\\ }"
}
] |
https://en.wikipedia.org/wiki?curid=1458409
|
145844
|
Hyperboloid
|
Unbounded quadric surface
In geometry, a hyperboloid of revolution, sometimes called a circular hyperboloid, is the surface generated by rotating a hyperbola around one of its principal axes. A hyperboloid is the surface obtained from a hyperboloid of revolution by deforming it by means of directional scalings, or more generally, of an affine transformation.
A hyperboloid is a quadric surface, that is, a surface defined as the zero set of a polynomial of degree two in three variables. Among quadric surfaces, a hyperboloid is characterized by not being a cone or a cylinder, having a center of symmetry, and intersecting many planes into hyperbolas. A hyperboloid has three pairwise perpendicular axes of symmetry, and three pairwise perpendicular planes of symmetry.
Given a hyperboloid, one can choose a Cartesian coordinate system such that the hyperboloid is defined by one of the following equations:
formula_0
or
formula_1
The coordinate axes are axes of symmetry of the hyperboloid and the origin is the center of symmetry of the hyperboloid. In any case, the hyperboloid is asymptotic to the cone of the equations:
formula_2
One has a hyperboloid of revolution if and only if formula_3 Otherwise, the axes are uniquely defined (up to the exchange of the "x"-axis and the "y"-axis).
There are two kinds of hyperboloids. In the first case (+1 in the right-hand side of the equation): a one-sheet hyperboloid, also called a hyperbolic hyperboloid. It is a connected surface, which has a negative Gaussian curvature at every point. This implies that, near every point, the intersection of the hyperboloid and its tangent plane at the point consists of two branches of curve that have distinct tangents at the point. In the case of the one-sheet hyperboloid, these branches of curves are lines and thus the one-sheet hyperboloid is a doubly ruled surface.
In the second case (−1 in the right-hand side of the equation): a two-sheet hyperboloid, also called an elliptic hyperboloid. The surface has two connected components and a positive Gaussian curvature at every point. The surface is "convex" in the sense that the tangent plane at every point intersects the surface only in this point.
Parametric representations.
Cartesian coordinates for the hyperboloids can be defined, similar to spherical coordinates, keeping the azimuth angle "θ" ∈ [0, 2"π"), but changing inclination "v" into hyperbolic trigonometric functions:
One-surface hyperboloid: "v" ∈ (−∞, ∞)
formula_4
Two-surface hyperboloid: "v" ∈ [0, ∞)
formula_5
The following parametric representation includes hyperboloids of one sheet, two sheets, and their common boundary cone, each with the formula_6-axis as the axis of symmetry:
formula_7
The parametrization describes a hyperboloid of one sheet if formula_8, a hyperboloid of two sheets if formula_9, and their common boundary cone if formula_10.
One can obtain a parametric representation of a hyperboloid with a different coordinate axis as the axis of symmetry by shuffling the position of the formula_11 term to the appropriate component in the equation above.
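As a quick numerical illustration (not from the article; the parameter ranges and names are arbitrary), the following NumPy sketch generates points for the three cases formula_8, formula_9 and formula_10 and verifies that each point satisfies the corresponding implicit equation.

```python
import numpy as np

def hyperboloid_points(a, b, c, d, n=50):
    """Points (a*sqrt(s^2+d)*cos t, b*sqrt(s^2+d)*sin t, c*s) of the parametrization.
    For d < 0 only the sheet with s > 0 is sampled, so that s^2 + d stays nonnegative."""
    s0 = np.sqrt(max(-d, 0.0))
    s, t = np.meshgrid(np.linspace(s0, s0 + 2.0, n), np.linspace(0.0, 2.0 * np.pi, n))
    r = np.sqrt(s**2 + d)
    return a * r * np.cos(t), b * r * np.sin(t), c * s

a, b, c = 1.0, 2.0, 0.5
for d in (1.0, -1.0, 0.0):      # one sheet, two sheets, boundary cone
    x, y, z = hyperboloid_points(a, b, c, d)
    print(d, np.allclose(x**2 / a**2 + y**2 / b**2 - z**2 / c**2, d))   # all True
```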
Generalised equations.
More generally, an arbitrarily oriented hyperboloid, centered at v, is defined by the equation
formula_12
where "A" is a matrix and x, v are vectors.
The eigenvectors of "A" define the principal directions of the hyperboloid and the eigenvalues of A are the reciprocals of the squares of the semi-axes: formula_13, formula_14 and formula_15. The one-sheet hyperboloid has two positive eigenvalues and one negative eigenvalue. The two-sheet hyperboloid has one positive eigenvalue and two negative eigenvalues.
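This eigenvalue criterion can be applied directly in a numerical setting. A minimal sketch (NumPy; the classification labels and the assumption that "A" is symmetric are mine):

```python
import numpy as np

def classify_quadric(A):
    """Classify (x - v)^T A (x - v) = 1 by the eigenvalue signs of the symmetric matrix A."""
    eigenvalues = np.linalg.eigvalsh(A)
    positive = int(np.sum(eigenvalues > 0))
    negative = int(np.sum(eigenvalues < 0))
    if positive == 2 and negative == 1:
        return "hyperboloid of one sheet"
    if positive == 1 and negative == 2:
        return "hyperboloid of two sheets"
    if positive == 3:
        return "ellipsoid"
    return "other quadric"

# Axis-aligned example: x^2/a^2 + y^2/b^2 - z^2/c^2 = 1
a, b, c = 1.0, 2.0, 3.0
A = np.diag([1 / a**2, 1 / b**2, -1 / c**2])
print(classify_quadric(A))    # hyperboloid of one sheet
print(classify_quadric(-A))   # hyperboloid of two sheets
```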
Properties.
Hyperboloid of one sheet.
Lines on the surface.
If the hyperboloid has the equation
formula_16 then the lines
formula_17
are contained in the surface.
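This containment is easy to verify numerically; a brief NumPy sketch (with arbitrarily chosen "a", "b", "c" and α) checks that points of both lines satisfy the equation of the surface.

```python
import numpy as np

a, b, c = 1.0, 2.0, 3.0
alpha = 0.7
t = np.linspace(-5.0, 5.0, 11)
for sign in (+1.0, -1.0):                       # the two lines g^+_alpha and g^-_alpha
    x = a * np.cos(alpha) - t * a * np.sin(alpha)
    y = b * np.sin(alpha) + t * b * np.cos(alpha)
    z = sign * c * t
    print(np.allclose(x**2 / a**2 + y**2 / b**2 - z**2 / c**2, 1.0))   # True, True
```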
In case formula_18 the hyperboloid is a surface of revolution and can be generated by rotating one of the two lines formula_19 or formula_20, which are skew to the rotation axis (see picture). This property is called "Wren's theorem". The more common generation of a one-sheet hyperboloid of revolution is rotating a hyperbola around its semi-minor axis (see picture; rotating the hyperbola around its other axis gives a two-sheet hyperboloid of revolution).
A hyperboloid of one sheet is "projectively" equivalent to a hyperbolic paraboloid.
Plane sections.
For simplicity the plane sections of the "unit hyperboloid" with equation formula_21 are considered. Because a hyperboloid in general position is an affine image of the unit hyperboloid, the result applies to the general case, too.
Obviously, any one-sheet hyperboloid of revolution contains circles. This is also true, but less obvious, in the general case (see circular section).
Hyperboloid of two sheets.
The hyperboloid of two sheets does "not" contain lines. The discussion of plane sections can be performed for the "unit hyperboloid of two sheets" with equation
formula_23
which can be generated by rotating a hyperbola around one of its axes (the one that cuts the hyperbola).
Obviously, any two-sheet hyperboloid of revolution contains circles. This is also true, but less obvious, in the general case (see circular section).
"Remark:" A hyperboloid of two sheets is "projectively" equivalent to a sphere.
Other properties.
Symmetries.
The hyperboloids with equations
formula_25
are pointwise symmetric to the coordinate planes, to the coordinate axes, and to the origin; in case formula_26 they are additionally rotationally symmetric about the "z"-axis (surfaces of revolution).
Curvature.
Whereas the Gaussian curvature of a hyperboloid of one sheet is negative, that of a two-sheet hyperboloid is positive. In spite of its positive curvature, the hyperboloid of two sheets with another suitably chosen metric can also be used as a model for hyperbolic geometry.
In more than three dimensions.
Imaginary hyperboloids are frequently found in mathematics of higher dimensions. For example, in a pseudo-Euclidean space one has the use of a quadratic form:
formula_27
When "c" is any constant, then the part of the space given by
formula_28
is called a "hyperboloid". The degenerate case corresponds to "c" = 0.
As an example, consider the following passage:
... the velocity vectors always lie on a surface which Minkowski calls a four-dimensional hyperboloid since, expressed in terms of purely real coordinates ("y"1, ..., "y"4), its equation is "y"₁² + "y"₂² + "y"₃² − "y"₄² = −1, analogous to the hyperboloid "y"₁² + "y"₂² − "y"₃² = −1 of three-dimensional space.
However, the term quasi-sphere is also used in this context since the sphere and hyperboloid have some commonality (See below).
Hyperboloid structures.
One-sheeted hyperboloids are used in construction, with the structures called hyperboloid structures. A hyperboloid is a doubly ruled surface; thus, it can be built with straight steel beams, producing a strong structure at a lower cost than other methods. Examples include cooling towers, especially of power stations, and many other structures.
Relation to the sphere.
In 1853 William Rowan Hamilton published his "Lectures on Quaternions" which included presentation of biquaternions. The following passage from page 673 shows how Hamilton uses biquaternion algebra and vectors from quaternions to produce hyperboloids from the equation of a sphere:
... the "equation of the unit sphere" "ρ"2 + 1 = 0, and change the vector "ρ" to a "bivector form", such as "σ" + "τ" √−1. The equation of the sphere then breaks up into the system of the two following,
<templatestyles src="Block indent/styles.css"/>"σ"2 − "τ"2 + 1 = 0, S."στ" = 0;
and suggests our considering "σ" and "τ" as two real and rectangular vectors, such that
<templatestyles src="Block indent/styles.css"/>T"τ" = (T"σ"2 − 1)1/2.
Hence it is easy to infer that if we assume "σ" || "λ", where "λ" is a vector in a given position, the "new real vector" "σ" + "τ" will terminate on the surface of a "double-sheeted and equilateral hyperboloid"; and that if, on the other hand, we assume "τ" || "λ", then the locus of the extremity of the real vector "σ" + "τ" will be an "equilateral but single-sheeted hyperboloid". The study of these two hyperboloids is, therefore, in this way connected very simply, through biquaternions, with the study of the sphere; ...
In this passage S is the operator giving the scalar part of a quaternion, and T is the "tensor", now called norm, of a quaternion.
A modern view of the unification of the sphere and hyperboloid uses the idea of a conic section as a slice of a quadratic form. Instead of a conical surface, one requires conical hypersurfaces in four-dimensional space with points "p" = ("w", "x", "y", "z") ∈ R4 determined by quadratic forms. First consider the conical hypersurface
<templatestyles src="Block indent/styles.css"/>formula_29 and the hyperplane
<templatestyles src="Block indent/styles.css"/>formula_30
Then formula_31 is the sphere with radius "r". On the other hand, the conical hypersurface
<templatestyles src="Block indent/styles.css"/>formula_32 provides that formula_33 is a hyperboloid.
In the theory of quadratic forms, a unit quasi-sphere is the subset of a quadratic space "X" consisting of the "x" ∈ "X" such that the quadratic norm of "x" is one.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " {x^2 \\over a^2} + {y^2 \\over b^2} - {z^2 \\over c^2} = 1,"
},
{
"math_id": 1,
"text": " {x^2 \\over a^2} + {y^2 \\over b^2} - {z^2 \\over c^2} = -1."
},
{
"math_id": 2,
"text": " {x^2 \\over a^2} + {y^2 \\over b^2} - {z^2 \\over c^2} = 0 ."
},
{
"math_id": 3,
"text": "a^2=b^2."
},
{
"math_id": 4,
"text": "\\begin{align} x&=a \\cosh v \\cos\\theta \\\\ y&=b \\cosh v \\sin\\theta \\\\ z&=c \\sinh v \\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align} x&=a \\sinh v \\cos\\theta \\\\ y&=b \\sinh v \\sin\\theta \\\\ z&=\\pm c \\cosh v \\end{align}"
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": "\\mathbf x(s,t) =\n\\left( \\begin{array}{lll}\na \\sqrt{s^2+d} \\cos t\\\\\nb \\sqrt{s^2+d} \\sin t\\\\\nc s\n\\end{array} \\right)\n"
},
{
"math_id": 8,
"text": "d>0"
},
{
"math_id": 9,
"text": "d<0"
},
{
"math_id": 10,
"text": "d=0"
},
{
"math_id": 11,
"text": "c s"
},
{
"math_id": 12,
"text": "(\\mathbf{x}-\\mathbf{v})^\\mathrm{T} A (\\mathbf{x}-\\mathbf{v}) = 1,"
},
{
"math_id": 13,
"text": "{1/a^2}"
},
{
"math_id": 14,
"text": "{1/b^2} "
},
{
"math_id": 15,
"text": "{1/c^2}"
},
{
"math_id": 16,
"text": " {x^2 \\over a^2} + {y^2 \\over b^2} - {z^2 \\over c^2}= 1"
},
{
"math_id": 17,
"text": "g^{\\pm}_{\\alpha}:\n \\mathbf{x}(t) = \\begin{pmatrix} a\\cos\\alpha \\\\ b\\sin\\alpha \\\\ 0\\end{pmatrix}\n + t\\cdot \\begin{pmatrix} -a\\sin\\alpha\\\\ b\\cos\\alpha\\\\ \\pm c\\end{pmatrix}\\ ,\\quad t\\in \\R,\\ 0\\le \\alpha\\le 2\\pi\\ "
},
{
"math_id": 18,
"text": "a = b"
},
{
"math_id": 19,
"text": "g^{+}_{0}"
},
{
"math_id": 20,
"text": "g^{-}_{0}"
},
{
"math_id": 21,
"text": " \\ H_1: x^2+y^2-z^2=1"
},
{
"math_id": 22,
"text": "H_1"
},
{
"math_id": 23,
"text": "H_2: \\ x^2+y^2-z^2 = -1."
},
{
"math_id": 24,
"text": "H_2"
},
{
"math_id": 25,
"text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} - \\frac{z^2}{c^2} = 1 , \\quad \\frac{x^2}{a^2} + \\frac{y^2}{b^2} - \\frac{z^2}{c^2} = -1 "
},
{
"math_id": 26,
"text": "a=b"
},
{
"math_id": 27,
"text": "q(x) = \\left(x_1^2+\\cdots + x_k^2\\right)-\\left(x_{k+1}^2+\\cdots + x_n^2\\right), \\quad k < n ."
},
{
"math_id": 28,
"text": "\\lbrace x \\ :\\ q(x) = c \\rbrace "
},
{
"math_id": 29,
"text": "P = \\left\\{ p \\; : \\; w^2 = x^2 + y^2 + z^2 \\right\\} "
},
{
"math_id": 30,
"text": "H_r = \\lbrace p \\ :\\ w = r \\rbrace ,"
},
{
"math_id": 31,
"text": "P \\cap H_r"
},
{
"math_id": 32,
"text": "Q = \\lbrace p \\ :\\ w^2 + z^2 = x^2 + y^2 \\rbrace"
},
{
"math_id": 33,
"text": "Q \\cap H_r"
}
] |
https://en.wikipedia.org/wiki?curid=145844
|
145845
|
Paraboloid
|
Quadric surface with one axis of symmetry and no center of symmetry
In geometry, a paraboloid is a quadric surface that has exactly one axis of symmetry and no center of symmetry. The term "paraboloid" is derived from parabola, which refers to a conic section that has a similar property of symmetry.
Every plane section of a paraboloid by a plane parallel to the axis of symmetry is a parabola. The paraboloid is hyperbolic if every other plane section is either a hyperbola, or two crossing lines (in the case of a section by a tangent plane). The paraboloid is elliptic if every other nonempty plane section is either an ellipse, or a single point (in the case of a section by a tangent plane). A paraboloid is either elliptic or hyperbolic.
Equivalently, a paraboloid may be defined as a quadric surface that is not a cylinder, and has an implicit equation whose part of degree two may be factored over the complex numbers into two different linear factors. The paraboloid is hyperbolic if the factors are real; elliptic if the factors are complex conjugate.
An elliptic paraboloid is shaped like an oval cup and has a maximum or minimum point when its axis is vertical. In a suitable coordinate system with three axes "x", "y", and "z", it can be represented by the equation
formula_0
where "a" and "b" are constants that dictate the level of curvature in the "xz" and "yz" planes respectively. In this position, the elliptic paraboloid opens upward.
A hyperbolic paraboloid (not to be confused with a hyperboloid) is a doubly ruled surface shaped like a saddle. In a suitable coordinate system, a hyperbolic paraboloid can be represented by the equation
formula_1
In this position, the hyperbolic paraboloid opens downward along the "x"-axis and upward along the "y"-axis (that is, the parabola in the plane "x" = 0 opens upward and the parabola in the plane "y" = 0 opens downward).
Any paraboloid (elliptic or hyperbolic) is a translation surface, as it can be generated by a moving parabola directed by a second parabola.
Properties and applications.
Elliptic paraboloid.
In a suitable Cartesian coordinate system, an elliptic paraboloid has the equation
formula_2
If "a" = "b", an elliptic paraboloid is a "circular paraboloid" or "paraboloid of revolution". It is a surface of revolution obtained by revolving a parabola around its axis.
A circular paraboloid contains circles. This is also true in the general case (see Circular section).
From the point of view of projective geometry, an elliptic paraboloid is an ellipsoid that is tangent to the plane at infinity.
The plane sections of an elliptic paraboloid can be:
Parabolic reflector.
On the axis of a circular paraboloid, there is a point called the "focus" (or "focal point"), such that, if the paraboloid is a mirror, light (or other waves) from a point source at the focus is reflected into a parallel beam, parallel to the axis of the paraboloid. This also works the other way around: a parallel beam of light that is parallel to the axis of the paraboloid is concentrated at the focal point. For a proof, see .
Therefore, the shape of a circular paraboloid is widely used in astronomy for parabolic reflectors and parabolic antennas.
The surface of a rotating liquid is also a circular paraboloid. This is used in liquid-mirror telescopes and in making solid telescope mirrors (see rotating furnace).
Hyperbolic paraboloid.
The hyperbolic paraboloid is a doubly ruled surface: it contains two families of mutually skew lines. The lines in each family are parallel to a common plane, but not to each other. Hence the hyperbolic paraboloid is a conoid.
These properties characterize hyperbolic paraboloids and are used in one of the oldest definitions of hyperbolic paraboloids: "a hyperbolic paraboloid is a surface that may be generated by a moving line that is parallel to a fixed plane and crosses two fixed skew lines".
This property makes it simple to manufacture a hyperbolic paraboloid from a variety of materials and for a variety of purposes, from concrete roofs to snack foods. In particular, Pringles fried snacks resemble a truncated hyperbolic paraboloid.
A hyperbolic paraboloid is a saddle surface, as its Gauss curvature is negative at every point. Therefore, although it is a ruled surface, it is not developable.
From the point of view of projective geometry, a hyperbolic paraboloid is one-sheet hyperboloid that is tangent to the plane at infinity.
A hyperbolic paraboloid of equation formula_3 or formula_4 (this is the same up to a rotation of axes) may be called a "rectangular hyperbolic paraboloid", by analogy with rectangular hyperbolas.
A plane section of a hyperbolic paraboloid with equation
formula_5
can be
Examples in architecture.
Saddle roofs are often hyperbolic paraboloids as they are easily constructed from straight sections of material. Some examples:
Cylinder between pencils of elliptic and hyperbolic paraboloids.
The pencil of elliptic paraboloids
formula_7
and the pencil of hyperbolic paraboloids
formula_8
approach the same surface
formula_9
for formula_10,
which is a "parabolic cylinder" (see image).
Curvature.
The elliptic paraboloid, parametrized simply as
formula_11
has Gaussian curvature
formula_12
and mean curvature
formula_13
which are both always positive, have their maximum at the origin, become smaller as a point on the surface moves further away from the origin, and tend asymptotically to zero as the said point moves infinitely away from the origin.
The hyperbolic paraboloid, when parametrized as
formula_14
has Gaussian curvature
formula_15
and mean curvature
formula_16
Geometric representation of multiplication table.
If the hyperbolic paraboloid
formula_5
is rotated by an angle of 45° in the +"z" direction (according to the right hand rule), the result is the surface
formula_17
and if "a" = "b" then this simplifies to
formula_18
Finally, letting "a" = √2, we see that the hyperbolic paraboloid
formula_19
is congruent to the surface
formula_20
which can be thought of as the geometric representation (a three-dimensional nomograph, as it were) of a multiplication table.
The two paraboloidal R2 → R functions
formula_21
and
formula_22
are harmonic conjugates, and together form the analytic function
formula_23
which is the analytic continuation of the R → R parabolic function "f"("x") = "x"2/2.
Dimensions of a paraboloidal dish.
The dimensions of a symmetrical paraboloidal dish are related by the equation
formula_24
where "F" is the focal length, "D" is the depth of the dish (measured along the axis of symmetry from the vertex to the plane of the rim), and "R" is the radius of the rim. They must all be in the same unit of length. If two of these three lengths are known, this equation can be used to calculate the third.
A more complex calculation is needed to find the diameter of the dish "measured along its surface". This is sometimes called the "linear diameter", and equals the diameter of a flat, circular sheet of material, usually metal, which is the right size to be cut and bent to make the dish. Two intermediate results are useful in the calculation: "P" = 2"F" (or the equivalent: "P" = "R"2/(2"D")) and "Q" = √("P"2 + "R"2), where "F", "D", and "R" are defined as above. The diameter of the dish, measured along the surface, is then given by
formula_25
where ln "x" means the natural logarithm of "x", i.e. its logarithm to base "e".
The volume of the dish, the amount of liquid it could hold if the rim were horizontal and the vertex at the bottom (e.g. the capacity of a paraboloidal wok), is given by
formula_26
where the symbols are defined as above. This can be compared with the formulae for the volumes of a cylinder (π"R"2"D"), a hemisphere (2π/3 "R"2"D", where "D" = "R"), and a cone (π/3 "R"2"D"). π"R"2 is the aperture area of the dish, the area enclosed by the rim, which is proportional to the amount of sunlight a reflector dish can intercept. The surface area of a parabolic dish can be found using the area formula for a surface of revolution which gives
formula_27
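These relations are simple to evaluate. The sketch below (Python; the function name and the sample values "R" = 1, "D" = 0.25 are arbitrary) computes the focal length from formula_24 and then the linear diameter, volume and surface area from the expressions above.

```python
import math

def dish_properties(R, D):
    """Focal length, linear diameter, volume and surface area of a paraboloidal dish
    with rim radius R and depth D (all lengths in the same unit)."""
    F = R**2 / (4 * D)                 # from 4 F D = R^2
    P = 2 * F
    Q = math.sqrt(P**2 + R**2)
    linear_diameter = R * Q / P + P * math.log((R + Q) / P)
    volume = math.pi / 2 * R**2 * D
    area = math.pi * R * (math.sqrt((R**2 + 4 * D**2)**3) - R**3) / (6 * D**2)
    return F, linear_diameter, volume, area

F, ld, vol, area = dish_properties(R=1.0, D=0.25)
print(F, ld, vol, area)   # 1.0, about 2.08, about 0.39, about 3.33 (aperture area is pi ~ 3.14)
```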
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z = \\frac{x^2}{a^2} + \\frac{y^2}{b^2}."
},
{
"math_id": 1,
"text": "z = \\frac{y^2}{b^2} - \\frac{x^2}{a^2}."
},
{
"math_id": 2,
"text": "z = \\frac{x^2}{a^2}+\\frac{y^2}{b^2}."
},
{
"math_id": 3,
"text": "z=axy"
},
{
"math_id": 4,
"text": "z=\\tfrac a 2(x^2-y^2)"
},
{
"math_id": 5,
"text": "z = \\frac{x^2}{a^2} - \\frac{y^2}{b^2}"
},
{
"math_id": 6,
"text": " bx \\pm ay+b=0"
},
{
"math_id": 7,
"text": "z=x^2 + \\frac{y^2}{b^2}, \\ b>0, "
},
{
"math_id": 8,
"text": "z=x^2 - \\frac{y^2}{b^2}, \\ b>0, "
},
{
"math_id": 9,
"text": " z=x^2"
},
{
"math_id": 10,
"text": " b \\rightarrow \\infty"
},
{
"math_id": 11,
"text": "\\vec \\sigma(u,v) = \\left(u, v, \\frac{u^2}{a^2} + \\frac{v^2}{b^2}\\right) "
},
{
"math_id": 12,
"text": "K(u,v) = \\frac{4}{a^2 b^2 \\left(1 + \\frac{4u^2}{a^4} + \\frac{4v^2}{b^4}\\right)^2}"
},
{
"math_id": 13,
"text": "H(u,v) = \\frac{a^2 + b^2 + \\frac{4u^2}{a^2} + \\frac{4v^2}{b^2}}{a^2 b^2 \\sqrt{\\left(1 + \\frac{4u^2}{a^4} + \\frac{4v^2}{b^4}\\right)^3}}"
},
{
"math_id": 14,
"text": "\\vec \\sigma (u,v) = \\left(u, v, \\frac{u^2}{a^2} - \\frac{v^2}{b^2}\\right) "
},
{
"math_id": 15,
"text": "K(u,v) = \\frac{-4}{a^2 b^2 \\left(1 + \\frac{4u^2}{a^4} + \\frac{4v^2}{b^4}\\right)^2} "
},
{
"math_id": 16,
"text": "H(u,v) = \\frac{-a^2 + b^2 - \\frac{4u^2}{a^2} + \\frac{4v^2}{b^2}}{a^2 b^2 \\sqrt{\\left(1 + \\frac{4u^2}{a^4} + \\frac{4v^2}{b^4}\\right)^3}}. "
},
{
"math_id": 17,
"text": "z = \\left(\\frac{x^2 + y^2}{2}\\right) \\left(\\frac{1}{a^2} - \\frac{1}{b^2}\\right) + xy \\left(\\frac{1}{a^2} + \\frac{1}{b^2}\\right)"
},
{
"math_id": 18,
"text": "z = \\frac{2xy}{a^2}."
},
{
"math_id": 19,
"text": "z = \\frac{x^2 - y^2}{2}."
},
{
"math_id": 20,
"text": "z = xy"
},
{
"math_id": 21,
"text": "z_1 (x,y) = \\frac{x^2 - y^2}{2}"
},
{
"math_id": 22,
"text": "z_2 (x,y) = xy"
},
{
"math_id": 23,
"text": "f(z) = \\frac{z^2}{2} = f(x + yi) = z_1 (x,y) + i z_2 (x,y)"
},
{
"math_id": 24,
"text": "4FD = R^2,"
},
{
"math_id": 25,
"text": "\\frac{RQ}{P} + P \\ln\\left(\\frac{R+Q}{P}\\right),"
},
{
"math_id": 26,
"text": "\\frac{\\pi}{2} R^2 D,"
},
{
"math_id": 27,
"text": "A = \\frac{\\pi R\\left(\\sqrt{(R^2+4D^2)^3}-R^3\\right)}{6D^2}."
}
] |
https://en.wikipedia.org/wiki?curid=145845
|
1458651
|
Filter bank
|
Tool for Digital Signal Processing
In signal processing, a filter bank (or filterbank) is an array of bandpass filters that separates the input signal into multiple components, each one carrying a sub-band of the original signal. One application of a filter bank is a graphic equalizer, which can attenuate the components differently and recombine them into a modified version of the original signal. The process of decomposition performed by the filter bank is called "analysis" (meaning analysis of the signal in terms of its components in each sub-band); the output of analysis is referred to as a subband signal with as many subbands as there are filters in the filter bank. The reconstruction process is called "synthesis", meaning reconstitution of a complete signal resulting from the filtering process.
In digital signal processing, the term "filter bank" is also commonly applied to a bank of receivers. The difference is that receivers also down-convert the subbands to a low center frequency that can be re-sampled at a reduced rate. The same result can sometimes be achieved by undersampling the bandpass subbands.
Another application of filter banks is signal compression when some frequencies are more important than others. After decomposition, the important frequencies can be coded with a fine resolution. Small differences at these frequencies are significant and a coding scheme that preserves these differences must be used. On the other hand, less important frequencies do not have to be exact. A coarser coding scheme can be used, even though some of the finer (but less important) details will be lost in the coding.
The vocoder uses a filter bank to determine the amplitude information of the subbands of a modulator signal (such as a voice) and uses them to control the amplitude of the subbands of a carrier signal (such as the output of a guitar or synthesizer), thus imposing the dynamic characteristics of the modulator on the carrier.
Some filter banks work almost entirely in the time domain, using a series of filters such as quadrature mirror filters or the Goertzel algorithm to divide the signal into smaller bands.
Other filter banks use a fast Fourier transform (FFT).
FFT filter banks.
A bank of receivers can be created by performing a sequence of FFTs on overlapping "segments" of the input data stream. A weighting function (aka window function) is applied to each segment to control the shape of the frequency responses of the filters. The wider the shape, the more often the FFTs have to be done to satisfy the Nyquist sampling criteria. For a fixed segment length, the amount of overlap determines how often the FFTs are done (and vice versa). Also, the wider the shape of the filters, the fewer filters that are needed to span the input bandwidth. Eliminating unnecessary filters (i.e. decimation in frequency) is efficiently done by treating each weighted segment as a sequence of smaller "blocks", and the FFT is performed on only the sum of the blocks. This has been referred to as "weight overlap-add (WOLA)" and "weighted pre-sum FFT". (see )
A special case occurs when, by design, the length of the blocks is an integer multiple of the interval between FFTs. Then the FFT filter bank can be described in terms of one or more polyphase filter structures where the phases are recombined by an FFT instead of a simple summation. The number of blocks per segment is the impulse response length (or "depth") of each filter. The computational efficiencies of the FFT and polyphase structures, on a general purpose processor, are identical.
Synthesis (i.e. recombining the outputs of multiple receivers) is basically a matter of upsampling each one at a rate commensurate with the total bandwidth to be created, translating each channel to its new center frequency, and summing the streams of samples. In that context, the interpolation filter associated with upsampling is called "synthesis filter". The net frequency response of each channel is the product of the synthesis filter with the frequency response of the filter bank ("analysis filter"). Ideally, the frequency responses of adjacent channels sum to a constant value at every frequency between the channel centers. That condition is known as "perfect reconstruction".
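A minimal illustration of the analysis side, leaving out the block-summing (WOLA) optimization described above: weight overlapping segments of the input with a window and take an FFT of each, so that every FFT bin behaves as one down-converted, down-sampled channel. The NumPy sketch below uses an arbitrary segment length, hop size and Hann weighting.

```python
import numpy as np

def fft_analysis_bank(x, n_channels=64, hop=16):
    """Window overlapping segments of x and FFT each one.
    Rows of the result are time steps; columns are sub-band (channel) samples."""
    window = np.hanning(n_channels)           # weighting function shaping the channel response
    n_frames = (len(x) - n_channels) // hop + 1
    frames = np.stack([x[i * hop : i * hop + n_channels] * window for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)        # one complex sample per channel per hop

fs = 8000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1000.0 * t)            # a 1 kHz test tone
subbands = fft_analysis_bank(x)
# The channel spacing is fs / n_channels = 125 Hz, so the tone falls in channel 1000/125 = 8.
print(np.argmax(np.mean(np.abs(subbands), axis=0)))   # 8
```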
Filter banks as time–frequency distributions.
In time–frequency signal processing, a filter bank is a special quadratic time–frequency distribution (TFD) that represents the signal in a joint time–frequency domain. It is related to the Wigner–Ville distribution by a two-dimensional filtering that defines the class of quadratic (or bilinear) time–frequency distributions. The filter bank and the spectrogram are the two simplest ways of producing a quadratic TFD; they are in essence similar as one (the spectrogram) is obtained by dividing the time domain into slices and then taking a Fourier transform, while the other (the filter bank) is obtained by dividing the frequency domain in slices forming bandpass filters that are excited by the signal under analysis.
Multirate filter bank.
A multirate filter bank divides a signal into a number of subbands, which can be analysed at different rates corresponding to the bandwidth of the frequency bands. The implementation makes use of downsampling (decimation) and upsampling (expansion). See and for additional insight into the effects of those operations in the transform domains.
Narrow lowpass filter.
One can define a narrow lowpass filter as a lowpass filter with a narrow passband.
In order to create a multirate narrow lowpass FIR filter, one can replace the time-invariant FIR filter with a lowpass antialiasing filter and a decimator, along with an interpolator and lowpass anti-imaging filter.
In this way, the resulting multirate system is a time-varying linear-phase filter via the decimator and interpolator.
The lowpass filter consists of two polyphase filters, one for the decimator and one for the interpolator.
A filter bank divides the input signal formula_0 into a set of signals formula_1. In this way each of the generated signals corresponds to a different region in the spectrum of formula_0.
In this process the regions may overlap (or not), depending on the application.
The signals formula_1 can be generated via a collection of bandpass filters with bandwidths formula_2 and center frequencies formula_3 (respectively).
A multirate filter bank uses a single input signal and then produces multiple outputs of the signal by filtering and subsampling.
In order to split the input signal into two or more signals, an analysis-synthesis system can be used.
The signal is split with the help of four filters formula_4 for "k" = 0,1,2,3 into 4 bands of the same bandwidth (in the analysis bank), and then each sub-signal is decimated by a factor of 4.
By dividing the signal into bands, each band exhibits different signal characteristics.
In the synthesis section the filters reconstruct the original signal:
first, the 4 sub-signals at the output of the processing unit are upsampled by a factor of 4 and then filtered by 4 synthesis filters formula_5 for "k" = 0,1,2,3.
Finally, the outputs of these four filters are added.
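The analysis/decimation and upsampling/synthesis structure just described can be illustrated with the simplest possible choice of filters: length-4 complex modulated (DFT) filters, for which analysing each block of 4 samples reduces to a DFT and synthesis to the inverse DFT. This is only a toy stand-in for the general filters formula_4 and formula_5 (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

M = 4                                    # number of channels = decimation factor
blocks = x.reshape(-1, M)

# Analysis: channel k holds the k-th DFT coefficient of every block of M samples,
# i.e. filtering with a length-M complex bandpass filter followed by M-fold decimation.
subbands = np.fft.fft(blocks, axis=1)    # shape (len(x) // M, M), one column per channel

# Synthesis: upsample, filter and sum; for the DFT bank this is the inverse DFT per block.
x_hat = np.fft.ifft(subbands, axis=1).reshape(-1).real

print(np.allclose(x, x_hat))             # True: the original signal is reconstructed
```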
Statistically optimized filter bank (Eigen filter bank).
A discrete-time filter bank framework allows inclusion of desired input signal dependent features in the design in addition to the more traditional perfect reconstruction property. The information theoretic features like maximized energy compaction, perfect de-correlation of sub-band signals and other characteristics for the given input covariance/correlation structure are incorporated in the design of optimal filter banks. These filter banks resemble the signal dependent Karhunen–Loève transform (KLT) that is the optimal block transform where the length L of basis functions (filters) and the subspace dimension M are the same.
Multidimensional filter banks.
Multidimensional filtering, downsampling, and upsampling are the main parts of multirate systems and filter banks.
A complete filter bank consists of the analysis and synthesis side.
The analysis filter bank divides an input signal to different subbands with different frequency spectra.
The synthesis part reassembles the different subband signals and generates a reconstructed signal.
Two of the basic building blocks are the decimator and expander. For example, the input divides into four directional sub bands, each of which covers one of the wedge-shaped frequency regions. In 1D systems, M-fold decimators keep only those samples that are multiples of M and discard the rest, while in multidimensional systems the decimators are "D" × "D" nonsingular integer matrices; they keep only those samples that lie on the lattice generated by the decimator. A commonly used decimator is the quincunx decimator, whose lattice is generated from the quincunx matrix, which is defined by formula_6
The quincunx lattice generated by quincunx matrix is as shown; the synthesis part is dual to the analysis part.
Filter banks can be analyzed from a frequency-domain perspective in terms of subband decomposition and reconstruction. However, equally important is the Hilbert-space interpretation of filter banks, which plays a key role in geometrical signal representations.
Consider a generic "K"-channel filter bank, with analysis filters formula_7, synthesis filters formula_8, and sampling matrices formula_9.
In the analysis side, we can define vectors in "formula_10" as
formula_11,
each indexed by two parameters: formula_12 and formula_13.
Similarly, for the synthesis filters formula_14 we can define formula_15.
Considering the definition of analysis/synthesis sides we can verify that formula_16 and for reconstruction part:
formula_17.
In other words, the analysis filter bank calculates the inner products of the input signal and the vectors from the analysis set. Moreover, the reconstructed signal is a combination of the vectors from the synthesis set, with the computed inner products as combination coefficients, meaning that
formula_18
If there is no loss in the decomposition and the subsequent reconstruction, the filter bank is called "perfect reconstruction" (in that case we would have formula_19).
Figure shows a general multidimensional filter bank with "N" channels and a common sampling matrix "M".
The analysis part transforms the input signal formula_20 into "N" filtered and downsampled outputs formula_21 formula_22.
The synthesis part recovers the original signal from formula_23 by upsampling and filtering.
This kind of setup is used in many applications such as subband coding, multichannel acquisition, and discrete wavelet transforms.
Perfect reconstruction filter banks.
We can use polyphase representation, so input signal formula_20 can be represented by a vector of its polyphase components formula_24. Denote formula_25
So we would have formula_26, where formula_27 denotes the "j"-th polyphase component of the filter formula_28.
Similarly, for the output signal we would have formula_29, where formula_30. Also, "G" is a matrix where formula_31 denotes the "i"th polyphase component of the "j"th synthesis filter Gj(z).
The filter bank has perfect reconstruction if formula_32 for any input, or equivalently formula_33, which means that G(z) is a left inverse of H(z).
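For a concrete check of this condition, consider the two-channel Haar filter bank: its filters have length 2, so every polyphase component is a constant and H(z), G(z) reduce to ordinary 2 × 2 matrices. A NumPy sketch (the even/odd polyphase split used here is one common convention):

```python
import numpy as np

# Two-channel Haar filter bank: h0 = [1, 1]/sqrt(2), h1 = [1, -1]/sqrt(2).
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)   # analysis polyphase matrix
G = H.T                                      # synthesis polyphase matrix (orthonormal case)

print(np.allclose(G @ H, np.eye(2)))         # True: G(z) is a (left) inverse of H(z)

# Equivalently, in the signal domain: split x into its two polyphase components,
# analyse, then resynthesize, and recover the input exactly.
x = np.arange(8, dtype=float)
polyphase_x = x.reshape(-1, 2).T             # row 0: even samples, row 1: odd samples
x_hat = (G @ (H @ polyphase_x)).T.reshape(-1)
print(np.allclose(x, x_hat))                 # True
```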
Multidimensional filter design.
1-D filter banks have been developed extensively to date. However, many signals, such as image, video, 3D sound, radar, and sonar signals, are multidimensional and require the design of multidimensional filter banks.
With the fast development of communication technology, signal processing system needs more room to store data during the processing, transmission and reception. In order to reduce the data to be processed, save storage and lower the complexity, multirate sampling techniques were introduced to achieve these goals. Filter banks can be used in various areas, such as image coding, voice coding, radar and so on.
Many 1-D filter issues have been well studied, and researchers have proposed many 1-D filter bank design approaches. But there are still many multidimensional filter bank design problems that need to be solved: some methods may not reconstruct the signal well, and some are complex and hard to implement.
The simplest approach to design a multi-dimensional filter bank is to cascade 1D filter banks in the form of a tree structure where the decimation matrix is diagonal and data is processed in each dimension separately. Such systems are referred to as separable systems. However, the region of support for the filter banks might not be separable. In that case designing of filter bank gets complex. In most cases we deal with non-separable systems.
A filter bank consists of an analysis stage and a synthesis stage. Each stage consists of a set of filters in parallel. The filter bank design is the design of the filters in the analysis and synthesis stages. The analysis filters divide the signal into overlapping or non-overlapping subbands depending on the application requirements. The synthesis filters should be designed to reconstruct the input signal back from the subbands when the outputs of these filters are combined. Processing is typically performed after the analysis stage. These filter banks can be designed as Infinite impulse response (IIR) or Finite impulse response (FIR).
In order to reduce the data rate, downsampling and upsampling are performed in the analysis and synthesis stages, respectively.
Existing approaches.
Below are several approaches to the design of multidimensional filter banks. For more details, see the original references.
Multidimensional perfect-reconstruction filter banks.
When it is necessary to reconstruct the divided signal back to the original one, perfect-reconstruction (PR) filter banks may be used.
Let H(z) be the transfer function of a filter. The size of the filter is defined as the order of corresponding polynomial in every dimension. The symmetry or anti-symmetry of a polynomial determines the linear phase property of the corresponding filter and is related to its size.
Like the 1D case, the aliasing term A(z) and transfer function T(z) for a 2 channel filter bank are:
A(z) = 1/2 (H0(−z)F0(z) + H1(−z)F1(z));
T(z) = 1/2 (H0(z)F0(z) + H1(z)F1(z)),
where H0 and H1 are decomposition filters, and F0 and F1 are reconstruction filters.
The input signal can be perfectly reconstructed if the alias term is cancelled and T(z) is equal to a monomial. So the necessary condition is that T(z) is generally symmetric and of an odd-by-odd size.
Linear phase PR filters are very useful for image processing. This two-channel filter bank is relatively easy to implement. But two channels sometimes are not enough. Two-channel filter banks can be cascaded to generate multi-channel filter banks.
Multidimensional directional filter banks and surfacelets.
M-dimensional directional filter banks (MDFB) are a family of filter banks that can achieve the directional decomposition of arbitrary M-dimensional signals with a simple and efficient tree-structured construction. It has many distinctive properties like: directional decomposition, efficient tree construction, angular resolution and perfect reconstruction.
In the general M-dimensional case, the ideal frequency supports of the MDFB are hypercube-based hyperpyramids. The first level of decomposition for the MDFB is achieved by an N-channel undecimated filter bank, whose component filters are M-D "hourglass"-shaped filters aligned with the w1, ..., wM axes respectively. After that, the input signal is further decomposed by a series of 2-D iteratively resampled checkerboard filter banks "IRC""li"("Li") (i = 2, 3, ..., M), where "IRC""li"("Li") operates on 2-D slices of the input signal represented by the dimension pair (n1, ni), and the superscript (Li) means the level of decomposition for the ith level filter bank. Note that, starting from the second level, we attach an IRC filter bank to each output channel from the previous level, and hence the entire filter bank has a total of 2("L"1+...+"L"N) output channels.
Multidimensional oversampled filter banks.
Oversampled filter banks are multirate filter banks where the number of output samples at the analysis stage is larger than the number of input samples. It is proposed for robust applications. One particular class of oversampled filter banks is nonsubsampled filter banks without downsampling or upsampling. The perfect reconstruction condition for an oversampled filter bank can be stated as a matrix inverse problem in the polyphase domain.
For IIR oversampled filter banks, perfect reconstruction has been studied by Wolovich and Kailath in the context of control theory, while for FIR oversampled filter banks different strategies have to be used for the 1-D and M-D cases.
FIR filters are more popular since they are easier to implement. For 1-D oversampled FIR filter banks, the Euclidean algorithm plays a key role in the matrix inverse problem.
However, the Euclidean algorithm fails for multidimensional (MD) filters. For MD filter, we can convert the FIR representation into a polynomial representation. And then use Algebraic geometry and Gröbner bases to get the framework and the reconstruction condition of the multidimensional oversampled filter banks.
Multidimensional nonsubsampled FIR filter banks.
Nonsubsampled filter banks are particular oversampled filter banks without downsampling or upsampling.
The perfect reconstruction condition for nonsubsampled FIR filter banks leads to a vector inverse problem: the analysis filters formula_34 are given and FIR, and the goal is to find a set of FIR synthesis filters formula_35 satisfying it.
Using Gröbner bases.
As multidimensional filter banks can be represented by multivariate rational matrices, this method is a very effective tool that can be used to deal with the multidimensional filter banks.
In Charo, a multivariate polynomial matrix-factorization algorithm is introduced and discussed. The most common problem is the multidimensional filter banks for perfect reconstruction. This paper talks about the method to achieve this goal that satisfies the constrained condition of linear phase.
According to the description of the paper, some new results in factorization are discussed and being applied to issues of multidimensional linear phase perfect reconstruction finite-impulse response filter banks. The basic concept of Gröbner bases is given in Adams.
This approach based on multivariate matrix factorization can be used in different areas. The algorithmic theory of polynomial ideals and modules can be modified to address problems in processing, compression, transmission, and decoding of multidimensional signals.
The general multidimensional filter bank (Figure 7) can be represented by a pair of analysis and synthesis polyphase matrices formula_36 and formula_37 of size formula_38 and formula_39, where "N" is the number of channels and formula_40 is the absolute value of the determinant of the sampling matrix. Also formula_36 and formula_37 are the z-transform of the polyphase components of the analysis and synthesis filters. Therefore, they are "multivariate Laurent polynomials", which have the general form:
formula_41.
The following Laurent polynomial matrix equation needs to be solved to design perfect reconstruction filter banks:
formula_42.
In the multidimensional case with multivariate polynomials we need to use the theory and algorithms of Gröbner bases.
Gröbner bases can be used to characterize perfect reconstruction multidimensional filter banks, but the theory first needs to be extended from polynomial matrices to Laurent polynomial matrices.
The Gröbner-basis computation can be considered equivalently as Gaussian elimination for solving the polynomial matrix equation formula_42.
Suppose we have a set of polynomial vectors
formula_43
where formula_44 are polynomials.
The Module is analogous to the "span" of a set of vectors in linear algebra. The theory of Gröbner bases implies that the Module has a unique reduced Gröbner basis for a given order of power products in polynomials.
If we define the Gröbner basis as formula_45, it can be obtained from formula_46 by a finite sequence of reduction (division) steps.
Using reverse engineering, we can compute the basis vectors formula_47 in terms of the original vectors formula_48 through a formula_49 transformation matrix formula_50 as:
formula_51
Mapping-based multidimensional filter banks.
Designing filters with good frequency responses is challenging via the Gröbner-basis approach.
Mapping-based design is popularly used to design nonseparable multidimensional filter banks with good frequency responses.
The mapping approaches have certain restrictions on the kind of filters; however, they bring many important advantages, such as efficient implementation via lifting/ladder structures.
Here we provide an example of two-channel filter banks in 2D with sampling matrix
formula_52
We would have several possible choices of ideal frequency responses of the channel filter formula_53 and formula_54. (Note that the other two filters formula_55 and formula_56 are supported on complementary regions.)
All the frequency regions in Figure can be critically sampled by the rectangular lattice spanned by formula_57.
So imagine that the filter bank achieves perfect reconstruction with FIR filters. Then from the polyphase domain characterization it follows that the filters H1(z) and G1(z) are completely specified by H0(z) and G0(z), respectively. Therefore, we need to design H0(z) and G0(z), which have the desired frequency responses and satisfy the polyphase-domain conditions.
formula_58
There are different mapping techniques that can be used to obtain the above result.
Filter-bank design in the frequency domain.
When perfect reconstruction is not needed, the design problem can be simplified by working in the frequency domain instead of using FIR filters.
Note that the frequency domain method is not limited to the design of nonsubsampled filter banks (read ).
Direct frequency-domain optimization.
Many of the existing methods for designing 2-channel filter banks are based on the transformation-of-variable technique. For example, the McClellan transform can be used to design 1-D 2-channel filter banks. Though 2-D filter banks have many properties similar to the 1-D prototype, it is difficult to extend them to more than 2-channel cases.
In Nguyen, the authors discuss the design of multidimensional filter banks by direct optimization in the frequency domain. The method proposed here is mainly focused on M-channel 2D filter bank design. The method is flexible towards frequency support configurations. 2D filter banks designed by optimization in the frequency domain have been used in Wei and Lu. In Nguyen's paper, the proposed method is not limited to two-channel 2D filter bank design; the approach is generalized to M-channel filter banks with any critical subsampling matrix. According to the implementation in the paper, it can be used to design up to 8-channel 2D filter banks.
Reverse jacket matrix.
In Lee's 1999 paper, the authors discuss the multidimensional filter bank design using a reverse jacket matrix. Let "H" be a Hadamard matrix of order "n"; the transpose of "H" is closely related to its inverse: formula_59, where In is the n×n identity matrix and "H"T is the transpose of "H". In the 1999 paper, the authors generalize the reverse jacket matrix [RJ]N using Hadamard matrices and weighted Hadamard matrices.
In this paper, the authors proposed that an FIR filter with 128 taps be used as a basic filter, and the decimation factor is computed for RJ matrices. They performed simulations based on different parameters and achieved good quality performance at low decimation factors.
Directional filter banks.
Bamberger and Smith proposed a 2D directional filter bank (DFB).
The DFB is efficiently implemented via an "l"-level tree-structured decomposition that leads to formula_60 subbands with wedge-shaped frequency partition (see Figure).
The original construction of the DFB involves modulating the input signal and using diamond-shaped filters.
Moreover, in order to obtain the desired frequency partition, a complicated tree expanding rule has to be followed. As a result, the frequency regions for the resulting subbands do not follow a simple ordering as shown in Figure 9 based on the channel indices.
The first advantage of the DFB is that it is not a redundant transform and it offers perfect reconstruction.
Another advantage of DFB is its directional-selectivity and efficient structure.
This advantage makes the DFB an appropriate approach for many signal and image processing applications (e.g., Laplacian pyramids, construction of contourlets, sparse image representation, medical imaging, etc.).
Directional filter banks can be developed in higher dimensions. They can be used in 3-D to achieve frequency sectioning.
Filter-bank transceiver.
Filter banks are important elements for the physical layer in wideband wireless communication, where the problem is efficient base-band processing of multiple channels. A filter-bank-based transceiver architecture eliminates the scalability and efficiency issues observed by previous schemes in case of non-contiguous channels. Appropriate filter design is necessary to reduce performance degradation caused by the filter bank. In order to obtain universally applicable designs, mild assumptions can be made about waveform format, channel statistics and the coding/decoding scheme. Both heuristic and optimal design methodologies can be used, and excellent performance is possible with low complexity as long as the transceiver operates with a reasonably large oversampling factor. A practical application is OFDM transmission, where they provide very good performance with small additional complexity.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x\\left(n\\right)"
},
{
"math_id": 1,
"text": "x_{1}(n),x_{2}(n),x_{3}(n),..."
},
{
"math_id": 2,
"text": "\\rm BW_{1},BW_{2},BW_{3},..."
},
{
"math_id": 3,
"text": "f_{c1},f_{c2},f_{c3},..."
},
{
"math_id": 4,
"text": "H_{k}(z)"
},
{
"math_id": 5,
"text": "F_{k}(z)"
},
{
"math_id": 6,
"text": "\\begin{bmatrix}\\;\\;\\,1 & 1 \\\\-1 & 1 \\end{bmatrix}"
},
{
"math_id": 7,
"text": "\\left\\{ h_{k}[n]\\right\\} _{k=1}^{K}\n"
},
{
"math_id": 8,
"text": "\\left\\{ g_{k}[n]\\right\\} _{k=1}^{K}"
},
{
"math_id": 9,
"text": "\\left\\{ M_{k}[n]\\right\\} _{k=1}^{K}\n"
},
{
"math_id": 10,
"text": "\\ell^{2}(\\mathbf{Z}^{d})\n"
},
{
"math_id": 11,
"text": "\\varphi_{k,m}[n]\\stackrel{\\rm def}{=}h_{k}^{*}[M_{k}m-n]"
},
{
"math_id": 12,
"text": "1\\leq k\\leq K"
},
{
"math_id": 13,
"text": "m\\in \\mathbf{Z}^{2}"
},
{
"math_id": 14,
"text": "g_{k}[n]"
},
{
"math_id": 15,
"text": "\\psi_{k,m}[n]\\stackrel{\\rm def}{=}g_{k}^{*}[M_{k}m-n]"
},
{
"math_id": 16,
"text": "c_{k}[m]=\\langle x[n],\\varphi_{k,m}[n] \\rangle"
},
{
"math_id": 17,
"text": "\\hat{x}[n]=\\sum_{1\\leq k\\leq K,m\\in \\mathbf{Z}^{2}}c_{k}[m]\\psi_{k,m}[n]"
},
{
"math_id": 18,
"text": "\\hat{x}[n]=\\sum_{1\\leq k\\leq K,m\\in \\mathbf{Z}^{2}}\\langle x[n],\\varphi_{k,m}[n] \\rangle\\psi_{k,m}[n]"
},
{
"math_id": 19,
"text": "x[n]=\\hat{x[n]}"
},
{
"math_id": 20,
"text": "x[n]"
},
{
"math_id": 21,
"text": "y_{j}[n],"
},
{
"math_id": 22,
"text": "j=0,1,...,N-1"
},
{
"math_id": 23,
"text": "y_{j}[n]"
},
{
"math_id": 24,
"text": "x(z)\\stackrel{\\rm def}{=}(X_{0}(z),...,X_{|M|-1}(z))^{T}\n"
},
{
"math_id": 25,
"text": "y(z)\\stackrel{\\rm def}{=}(Y_{0}(z),...,Y_{|N|-1}(z))^{T}."
},
{
"math_id": 26,
"text": "y(z)=H(z)x(z)"
},
{
"math_id": 27,
"text": "H_{i,j}(z)"
},
{
"math_id": 28,
"text": "H_{i}(z)"
},
{
"math_id": 29,
"text": "\\hat{x}(z)=G(z)y(z)"
},
{
"math_id": 30,
"text": "\\hat{x}(z)\\stackrel{\\rm def}{=}(\\hat{X}_{0}(z),...,\\hat{X}_{|M|-1}(z))^{T}\n"
},
{
"math_id": 31,
"text": "G_{i,j}(z)"
},
{
"math_id": 32,
"text": "x(z)= \\hat{x}(z)"
},
{
"math_id": 33,
"text": "I_{|M|}=G(z)H(z)"
},
{
"math_id": 34,
"text": "\\{H_{1},...,H_{N}\\}"
},
{
"math_id": 35,
"text": "\\{G_{1},...,G_{N}\\}"
},
{
"math_id": 36,
"text": "H(z)"
},
{
"math_id": 37,
"text": "G(z)"
},
{
"math_id": 38,
"text": "N\\times M\n"
},
{
"math_id": 39,
"text": "M\\times N"
},
{
"math_id": 40,
"text": "M\\stackrel{\\rm def}{=}|M|\n"
},
{
"math_id": 41,
"text": "F(z)=\\sum_{k\\in \\mathbf{Z}^{d}}f[k]z^{k}=\\sum_{k\\in \\mathbf{Z}^{d}}f[k_{1},...,k_{d}]z_{1}^{k_{1}}...z_{d}^{k_{d}}"
},
{
"math_id": 42,
"text": "G(z)H(z)=I_{|M|}"
},
{
"math_id": 43,
"text": "\\mathrm{Module}\\left\\{ h_{1}(z),...,h_{N}(z)\\right\\} \\stackrel{\\rm def}{=}\\{c_{1}(z)h_{1}(z)+...+c_{N}(z)h_{N}(z)\\}"
},
{
"math_id": 44,
"text": "c_{1}(z),...,c_{N}(z)"
},
{
"math_id": 45,
"text": "\\left\\{ b_{1}(z),...,b_{N}(z)\\right\\}"
},
{
"math_id": 46,
"text": "\\left\\{ h_{1}(z),...,h_{N}(z)\\right\\} "
},
{
"math_id": 47,
"text": "b_{i}(z)"
},
{
"math_id": 48,
"text": "h_{j}(z)"
},
{
"math_id": 49,
"text": "K\\times N"
},
{
"math_id": 50,
"text": "W_{ij}(z)"
},
{
"math_id": 51,
"text": "b_{i}(z)=\\sum_{j=1}^{N}W_{ij}(z)h_{j}(z),i=1,...,K"
},
{
"math_id": 52,
"text": "D_{1}=\\left[\\begin{array}{cc}\n2 & 0\\\\\n0 & 1\n\\end{array}\\right]"
},
{
"math_id": 53,
"text": "H_{0}(\\xi)\n"
},
{
"math_id": 54,
"text": "G_{0}(\\xi)"
},
{
"math_id": 55,
"text": "H_{1}(\\xi)\n"
},
{
"math_id": 56,
"text": "G_{1}(\\xi)"
},
{
"math_id": 57,
"text": "D_1"
},
{
"math_id": 58,
"text": "H_{0}(z_{1},z_{2})G_{0}(z_{1},z_{2})+H_{0}(-z_{1},z_{2})G_{0}(-z_{1},z_{2})=2"
},
{
"math_id": 59,
"text": "HH^T=I_n"
},
{
"math_id": 60,
"text": "2^{l}"
}
] |
https://en.wikipedia.org/wiki?curid=1458651
|
1458875
|
Rigidity (mathematics)
|
In mathematics, a rigid collection "C" of mathematical objects (for instance sets or functions) is one in which every "c" ∈ "C" is uniquely determined by less information about "c" than one would expect.
The above statement does not define a mathematical property; instead, it describes in what sense the adjective "rigid" is typically used in mathematics, by mathematicians.
Examples.
Some examples include:
Combinatorial use.
In combinatorics, the term rigid is also used to define the notion of a rigid surjection, which is a surjection formula_0 for which the following equivalent conditions hold:
This relates to the above definition of rigid, in that each rigid surjection formula_3 uniquely defines, and is uniquely defined by, a partition of formula_4 into formula_6 pieces. Given a rigid surjection formula_3, the partition is defined by formula_7. Conversely, given a partition of formula_8, order the formula_9 by letting formula_10. If formula_11 is now the formula_12-ordered partition, the function formula_0 defined by formula_13 is a rigid surjection.
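The correspondence can be made concrete with a short sketch (Python; the function names are illustrative): one routine checks the rigidity condition, the other builds the rigid surjection determined by a partition of formula_4.

```python
def is_rigid(f, n, m):
    """Check that the surjection f : {0,...,n-1} -> {0,...,m-1} is rigid,
    i.e. min f^{-1}(i) < min f^{-1}(j) whenever i < j."""
    first = [min(k for k in range(n) if f(k) == i) for i in range(m)]
    return all(first[i] < first[i + 1] for i in range(m - 1))

def partition_to_rigid_surjection(blocks, n):
    """Order the blocks of a partition of {0,...,n-1} by their minima and
    return the corresponding rigid surjection as a list of values."""
    f = [None] * n
    for j, block in enumerate(sorted(blocks, key=min)):
        for k in block:
            f[k] = j
    return f

f = partition_to_rigid_surjection([{1, 3}, {0, 2, 4}], n=5)
print(f)                                      # [0, 1, 0, 1, 0]
print(is_rigid(lambda k: f[k], n=5, m=2))     # True
```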
References.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from rigid on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "f: n \\to m"
},
{
"math_id": 1,
"text": "i, j \\in m"
},
{
"math_id": 2,
"text": "i < j \\implies \\min f^{-1}(i) < \\min f^{-1}(j)"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\big( f(0), f(1), \\ldots, f(n-1) \\big)"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "n = f^{-1}(0) \\sqcup \\cdots \\sqcup f^{-1}(m-1)"
},
{
"math_id": 8,
"text": "n = A_0 \\sqcup \\cdots \\sqcup A_{m-1}"
},
{
"math_id": 9,
"text": "A_i"
},
{
"math_id": 10,
"text": "A_i \\prec A_j \\iff \\min A_i < \\min A_j"
},
{
"math_id": 11,
"text": "n = B_0 \\sqcup \\cdots \\sqcup B_{m-1}"
},
{
"math_id": 12,
"text": "\\prec"
},
{
"math_id": 13,
"text": "f(i) = j \\iff i \\in B_j"
}
] |
https://en.wikipedia.org/wiki?curid=1458875
|
1459010
|
Stationary phase approximation
|
Asymptotic analysis used when integrating rapidly-varying complex exponentials
In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly-varying complex exponential.
This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.
It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.
Basics.
The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
Formula.
Letting formula_0 denote the set of critical points of the function formula_1 (i.e. points where formula_2), under the assumption that formula_3 is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. formula_4 for formula_5) we have the following asymptotic formula, as formula_6:
formula_7
Here formula_8 denotes the Hessian of formula_1, and formula_9 denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.
For formula_10, this reduces to:
formula_11
In this case the assumptions on formula_1 reduce to all the critical points being non-degenerate.
This is just the Wick-rotated version of the formula for the method of steepest descent.
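For a concrete check of the one-dimensional formula, the leading term can be compared against a case with a closed form. The sketch below is a minimal illustration; the choice f(x) = x^2 with g(x) = exp(-x^2), for which the integral equals sqrt(pi/(1 - ik)), is an assumption made only for the example.

```python
import numpy as np

# Minimal check of the one-dimensional stationary phase formula for f(x) = x**2
# (a single non-degenerate critical point at x0 = 0 with f''(0) = 2) and
# amplitude g(x) = exp(-x**2). For this choice the integral has the closed form
# sqrt(pi / (1 - 1j*k)), so no numerical quadrature is needed.
def exact_integral(k):
    return np.sqrt(np.pi / (1.0 - 1j * k))

def stationary_phase_estimate(k, x0=0.0, fpp=2.0):
    # g(x0) * exp(i*k*f(x0) + i*pi/4*sign(f''(x0))) * sqrt(2*pi / (k*|f''(x0)|))
    g_x0, f_x0 = np.exp(-x0**2), x0**2
    return g_x0 * np.exp(1j * (k * f_x0 + np.pi / 4.0)) * np.sqrt(2.0 * np.pi / (k * abs(fpp)))

for k in (10.0, 100.0, 1000.0, 10000.0):
    err = abs(exact_integral(k) - stationary_phase_estimate(k))
    print(k, err, err * np.sqrt(k))   # err * sqrt(k) tends to 0, consistent with o(k**-1/2)
```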
An example.
Consider a function
formula_12.
The phase term in this function, formula_13, is stationary when
formula_14
or equivalently,
formula_15.
Solutions to this equation yield dominant frequencies formula_16 for some formula_17 and formula_18. If we expand formula_19 as a Taylor series about formula_16 and neglect terms of order higher than formula_20, we have
formula_21
where formula_22 denotes the second derivative of formula_23. When formula_17 is relatively large, even a small difference formula_24 will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the range of validity of the Taylor expansion. If we use the formula,
formula_25.
formula_26.
This integrates to
formula_27.
Reduction steps.
The first major general statement of the principle involved is that the asymptotic behaviour of "I"("k") depends only on the critical points of "f". If by choice of "g" the integral is localised to a region of space where "f" has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See for example Riemann–Lebesgue lemma.
The second statement is that when "f" is a Morse function, so that the singular points of "f" are non-degenerate and isolated, then the question can be reduced to the case "n" = 1. In fact, then, a choice of "g" can be made to split the integral into cases with just one critical point "P" in each. At that point, because the Hessian determinant at "P" is by assumption not 0, the Morse lemma applies. By a change of co-ordinates "f" may be replaced by
formula_28.
The value of "j" is given by the signature of the Hessian matrix of "f" at "P". As for "g", the essential case is that "g" is a product of bump functions of "x""i". Assuming now without loss of generality that "P" is the origin, take a smooth bump function "h" with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take
formula_29,
then Fubini's theorem reduces "I"("k") to a product of integrals over the real line like
formula_30
with "f"("x") = ±"x"2. The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.
In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function).
One-dimensional case.
The essential statement is this one:
formula_31.
In fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side, extended over the range formula_32 (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, formula_33.
This is the model for all one-dimensional integrals formula_34 with formula_1 having a single non-degenerate critical point at which formula_1 has second derivative formula_35. In fact the model case has second derivative 2 at 0. In order to scale using formula_23, observe that replacing formula_23 by formula_36
where formula_37 is constant is the same as scaling formula_17 by formula_38. It follows that for general values of formula_39, the factor formula_40 becomes
formula_41.
For formula_42 one uses the complex conjugate formula, as mentioned before.
Lower-order terms.
As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum over Feynman diagrams with various weighting factors, for well-behaved formula_1.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Sigma"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "\\nabla f =0"
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "\\det(\\mathrm{Hess}(f(x_0)))\\neq 0"
},
{
"math_id": 5,
"text": "x_0 \\in \\Sigma"
},
{
"math_id": 6,
"text": "k\\to \\infty"
},
{
"math_id": 7,
"text": "\\int_{\\mathbb{R}^n}g(x)e^{ikf(x)} dx=\\sum_{x_0\\in \\Sigma} e^{ik f(x_0)}|\\det({\\mathrm{Hess}}(f(x_0)))|^{-1/2}e^{\\frac{i\\pi}{4} \\mathrm{sgn}(\\mathrm{Hess}(f(x_0)))}(2\\pi/k)^{n/2}g(x_0)+o(k^{-n/2})"
},
{
"math_id": 8,
"text": "\\mathrm{Hess}(f)"
},
{
"math_id": 9,
"text": "\\mathrm{sgn}(\\mathrm{Hess}(f))"
},
{
"math_id": 10,
"text": "n=1"
},
{
"math_id": 11,
"text": "\\int_\\mathbb{R}g(x)e^{ikf(x)}dx=\\sum_{x_0\\in \\Sigma} g(x_0)e^{ik f(x_0)+\\mathrm{sign}(f''(x_0))i\\pi/4}\\left(\\frac{2\\pi}{k |f''(x_0)|}\\right)^{1/2}+o(k^{-1/2})"
},
{
"math_id": 12,
"text": "f(x,t) = \\frac{1}{2\\pi} \\int_{\\mathbb R} F(\\omega) e^{i [k(\\omega) x - \\omega t]} \\, d\\omega"
},
{
"math_id": 13,
"text": "\\phi = k(\\omega) x - \\omega t"
},
{
"math_id": 14,
"text": "\\frac{d}{d\\omega}\\mathopen{}\\left(k(\\omega) x - \\omega t\\right)\\mathclose{} = 0"
},
{
"math_id": 15,
"text": "\\frac{d k(\\omega)}{d\\omega}\\Big|_{\\omega = \\omega_0} = \\frac{t}{x}"
},
{
"math_id": 16,
"text": "\\omega_0"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "t"
},
{
"math_id": 19,
"text": "\\phi"
},
{
"math_id": 20,
"text": "(\\omega-\\omega_0)^2"
},
{
"math_id": 21,
"text": "\\phi = \\left[k(\\omega_0) x - \\omega_0 t\\right] + \\frac{1}{2} x k''(\\omega_0) (\\omega - \\omega_0)^2 + \\cdots"
},
{
"math_id": 22,
"text": "k''"
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "(\\omega-\\omega_0)"
},
{
"math_id": 25,
"text": "\\int_{\\mathbb R} e^{\\frac{1}{2}ic x^2} d x=\\sqrt{\\frac{2i\\pi}{c}}=\\sqrt{\\frac{2\\pi}{|c|}}e^{\\pm i\\frac{\\pi}{4}}"
},
{
"math_id": 26,
"text": "f(x, t) \\approx \\frac{1}{2\\pi} e^{i \\left[k(\\omega_0) x - \\omega_0 t\\right]} \\left|F(\\omega_0)\\right| \\int_{\\mathbb R} e^{\\frac{1}{2} i x k''(\\omega_0) (\\omega - \\omega_0)^2} \\, d\\omega "
},
{
"math_id": 27,
"text": "f(x, t) \\approx \\frac{\\left|F(\\omega_0)\\right|}{2\\pi} \\sqrt{\\frac{2\\pi}{x \\left|k''(\\omega_0)\\right|}} \\cos\\left[k(\\omega_0) x - \\omega_0 t \\pm \\frac{\\pi}{4}\\right]"
},
{
"math_id": 28,
"text": "(x_1^2 + x_2^2 + \\cdots + x_j^2) - (x_{j + 1}^2 + x_{j + 2}^2 + \\cdots + x_n^2)"
},
{
"math_id": 29,
"text": "g(x) = \\prod_i h(x_i)"
},
{
"math_id": 30,
"text": "J(k) = \\int h(x) e^{i k f(x)} \\, dx"
},
{
"math_id": 31,
"text": "\\int_{-1}^1 e^{i k x^2} \\, dx = \\sqrt{\\frac{\\pi}{k}} e^{i \\pi / 4} + \\mathcal O \\mathopen{}\\left(\\frac{1}{k}\\right)\\mathclose{}"
},
{
"math_id": 32,
"text": "[-\\infty, \\infty]"
},
{
"math_id": 33,
"text": "[1,\\infty]"
},
{
"math_id": 34,
"text": "I(k)"
},
{
"math_id": 35,
"text": ">0"
},
{
"math_id": 36,
"text": "ck"
},
{
"math_id": 37,
"text": "c"
},
{
"math_id": 38,
"text": "\\sqrt{c}"
},
{
"math_id": 39,
"text": "f''(0)>0"
},
{
"math_id": 40,
"text": "\\sqrt{\\pi/k}"
},
{
"math_id": 41,
"text": "\\sqrt{\\frac{2 \\pi}{k f''(0)}}"
},
{
"math_id": 42,
"text": "f''(0)<0"
}
] |
https://en.wikipedia.org/wiki?curid=1459010
|
1459075
|
Parallel tempering
|
Parallel tempering, in physics and statistics, is a computer simulation method typically used to find the lowest energy state of a system of many interacting particles. It addresses the problem that at high temperatures, one may have a stable state different from low temperature, whereas simulations at low temperatures may become "stuck" in a metastable state. It does this by using the fact that the high temperature simulation may visit states typical of both stable and metastable low temperature states.
More specifically, parallel tempering (also known as replica exchange MCMC sampling), is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Robert Swendsen and J. S. Wang, then extended by Charles J. Geyer, and later developed further by Giorgio Parisi,
Koji Hukushima and Koji Nemoto,
and others.
Y. Sugita and Y. Okamoto also formulated a molecular dynamics version of parallel tempering; this is usually known as replica-exchange molecular dynamics or REMD.
Essentially, one runs "N" copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion one exchanges configurations at different temperatures. The idea of this method
is to make configurations at high temperatures available to the simulations at low temperatures and vice versa.
This results in a very robust ensemble which is able to sample both low and high energy configurations.
In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
Background.
Typically a Monte Carlo simulation using a Metropolis–Hastings update consists of a single stochastic process that evaluates the energy of the system and accepts/rejects updates based on the temperature "T". At high temperatures updates that change the energy of the system are comparatively more probable. When the system is highly correlated, updates are rejected and the simulation is said to suffer from critical slowing down.
If we were to run two simulations at temperatures separated by a Δ"T", we would find that if Δ"T" is small enough, then the energy histograms obtained by collecting the values of the energies over a set of Monte Carlo steps N will create two distributions that will somewhat overlap. The overlap can be defined by the area of the histograms that falls over the same interval of energy values, normalized by the total number of samples. For Δ"T" = 0 the overlap should approach 1.
Another way to interpret this overlap is to say that system configurations sampled at temperature "T"1 are likely to appear during a simulation at "T"2. Because the Markov chain should have no memory of its past, we can create a new update for the system composed of the two systems at "T"1 and "T"2. At a given Monte Carlo step we can update the global system by swapping the configuration of the two systems, or alternatively trading the two temperatures. The update is accepted according to the Metropolis–Hastings criterion with probability
formula_0
and otherwise the update is rejected. The detailed balance condition is satisfied by ensuring that the reverse update is equally likely, all else being equal. This can be ensured by appropriately choosing regular Monte Carlo updates or parallel tempering updates with probabilities that are independent of the configurations of the two systems or of the Monte Carlo step.
This update can be generalized to more than two systems.
By a careful choice of temperatures and number of systems one can achieve an improvement in the mixing properties of a set of Monte Carlo simulations that exceeds the extra computational cost of running parallel simulations.
Other considerations to be made: increasing the number of different temperatures can have a detrimental effect, as one can think of the 'lateral' movement of a given system across temperatures as a diffusion process.
Set up is important as there must be a practical histogram overlap to achieve a reasonable probability of lateral moves.
The parallel tempering method can be used as a super simulated annealing that does not need restart, since a system at high temperature can feed new local optimizers to a system at low temperature, allowing tunneling between metastable states and improving convergence to a global optimum.
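The scheme described above is straightforward to sketch in code. The following is a minimal illustration (not a production sampler): a single particle in an assumed one-dimensional double-well potential E(x) = (x^2 - 1)^2 with k_B = 1; the temperature ladder, step size and number of sweeps are arbitrary choices. The coldest replica, which would otherwise remain stuck in one well, visits both wells thanks to the swap moves.

```python
import numpy as np

# Minimal parallel-tempering sketch: one particle in the assumed 1-D double-well
# potential E(x) = (x**2 - 1)**2, with k_B = 1.
rng = np.random.default_rng(0)

def energy(x):
    return (x**2 - 1.0)**2

temperatures = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])   # assumed ladder
betas = 1.0 / temperatures
x = rng.normal(size=temperatures.size)                      # one replica per temperature

def metropolis_step(xi, beta, step=0.5):
    """Ordinary Metropolis update of a single replica."""
    proposal = xi + step * rng.normal()
    if rng.random() < np.exp(min(0.0, -beta * (energy(proposal) - energy(xi)))):
        return proposal
    return xi

cold_samples = []
for sweep in range(5000):
    for i, beta in enumerate(betas):          # local updates at each temperature
        x[i] = metropolis_step(x[i], beta)
    for i in range(len(betas) - 1):           # swap attempts between neighbouring temperatures
        delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
        if rng.random() < np.exp(min(0.0, delta)):
            x[i], x[i + 1] = x[i + 1], x[i]
    cold_samples.append(x[0])

# The coldest replica now samples both wells instead of staying stuck in one.
print("fraction of cold samples with x > 0:", np.mean(np.array(cold_samples) > 0))
```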
|
[
{
"math_id": 0,
"text": " p = \\min \\left( 1, \\frac{ \\exp \\left( -\\frac{E_j}{kT_i} - \\frac{E_i}{kT_j} \\right) }{ \\exp \\left( -\\frac{E_i}{kT_i} - \\frac{E_j}{kT_j} \\right) } \\right) = \\min \\left( 1, e^{(E_i - E_j) \\left( \\frac{1}{kT_i} - \\frac{1}{kT_j} \\right)} \\right) ,"
}
] |
https://en.wikipedia.org/wiki?curid=1459075
|
14593084
|
Nested set collection
|
A nested set collection or nested set family is a collection of sets that consists of chains of subsets forming a hierarchical structure, like Russian dolls.
It is used as reference concept in scientific hierarchy definitions, and many technical approaches, like the tree in computational data structures or nested set model of relational databases.
Sometimes the concept is confused with a collection of sets with a hereditary property (like finiteness in a hereditarily finite set).
Formal definition.
Some authors regard a nested set collection as a family of sets. Others prefer to classify the relation as an inclusion order.
Let "B" be a non-empty set and C a collection of subsets of "B". Then C is a nested set collection if:
The first condition states that the whole set "B", which contains all the elements of every subset, must belong to the nested set collection. Some authors do not assume that "B" is nonempty.
The second condition states that the intersection of every couple of sets in the nested set collection is not the empty set only if one set is a subset of the other.
In particular, when checking the second condition over all pairs of subsets, it holds automatically for any pair involving "B", since every member of C is a subset of "B".
Example.
Using a set of atomic elements, as the set of the playing card suits:
"B" = {♠, ♥, ♦, ♣}; "B"1 = {♠, ♥}; "B"2 = {♦, ♣}; "B"3 = {♣}; C = {"B", "B"1, "B"2, "B"3}.
The second condition of the formal definition can be checked by combining all pairs:
"B"1 ∩ "B"2 = ∅; "B"1 ∩ "B"3 = ∅; "B"3 ⊂ "B"2.
There is a hierarchy that can be expressed by two branches and its nested order: "B"3 ⊂ "B"2 ⊂ "B"; "B"1 ⊂ "B".
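The definition can be checked mechanically on the example above. The sketch below is a minimal illustration using Python frozensets; the helper name is an arbitrary choice.

```python
# Checks that B is in C, the empty set is not in C, and any two members of C
# that intersect are comparable by inclusion (the nested set conditions).
def is_nested_set_collection(B, C):
    if frozenset(B) not in C or frozenset() in C:
        return False
    for H in C:
        for K in C:
            if H & K and not (H <= K or K <= H):
                return False
    return True

B  = frozenset({'♠', '♥', '♦', '♣'})
B1 = frozenset({'♠', '♥'})
B2 = frozenset({'♦', '♣'})
B3 = frozenset({'♣'})
print(is_nested_set_collection(B, {B, B1, B2, B3}))   # True
```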
Derived concepts.
As sets, that are general abstraction and foundations for many concepts, the "nested set" is the foundation for "nested hierarchy", "containment hierarchy" and others.
Nested hierarchy.
A nested hierarchy or "inclusion hierarchy" is a hierarchical ordering of "nested set"s. The concept of nesting is exemplified in Russian matryoshka dolls. Each doll is encompassed by another doll, all the way to the outer doll. The outer doll holds all of the inner dolls, the next outer doll holds all the remaining inner dolls, and so on. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level. Illustrating the general concept:
formula_3
A square can always also be referred to as a quadrilateral, polygon or shape. In this way, it is a hierarchy. However, consider the set of polygons using this classification. A square can "only" be a quadrilateral; it can never be a triangle, hexagon, etc.
Nested hierarchies are the organizational schemes behind taxonomies and systematic classifications. For example, using the original Linnaean taxonomy (the version he laid out in the 10th edition of "Systema Naturae"), a human can be formulated as:
formula_4
Taxonomies may change frequently (as seen in biological taxonomy), but the underlying concept of nested hierarchies is always the same.
Containment hierarchy.
A containment hierarchy is a direct extrapolation of the nested hierarchy concept. All of the ordered sets are still nested, but every set must be "strict" — no two sets can be identical. The shapes example above can be modified to demonstrate this:
formula_5
The notation formula_6 means "x" is a subset of "y" but is not equal to "y".
Containment hierarchy is used in class inheritance of object-oriented programming.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "B \\in \\mathbf{C}"
},
{
"math_id": 1,
"text": "\\empty \\notin \\mathbf{C}"
},
{
"math_id": 2,
"text": "\\forall H,K \\in \\mathbf{C} ~:~ H \\cap K \\neq \\empty \\implies H \\subset K ~\\lor~ K \\subset H"
},
{
"math_id": 3,
"text": " \\text{square} \\subset \\text{quadrilateral} \\subset \\text{polygon} \\subset \\text{shape} \\, "
},
{
"math_id": 4,
"text": "\\text{H. sapiens} \\subset \\text{Homo} \\subset \\text{Primates} \\subset \\text{Mammalia} \\subset \\text{Animalia}"
},
{
"math_id": 5,
"text": " \\text{square} \\subsetneq \\text{quadrilateral} \\subsetneq \\text{polygon} \\subsetneq \\text{shape} \\, "
},
{
"math_id": 6,
"text": " x \\subsetneq y \\, "
}
] |
https://en.wikipedia.org/wiki?curid=14593084
|
14593201
|
Principal root of unity
|
In mathematics, a principal "n"-th root of unity (where "n" is a positive integer) of a ring is an element formula_0 satisfying the equations
formula_1
In an integral domain, every primitive "n"-th root of unity is also a principal formula_2-th root of unity. In any ring, if "n" is a power of 2, then any "n"/2-th root of −1 is a principal "n"-th root of unity.
A non-example is formula_3 in the ring of integers modulo formula_4; while formula_5 and thus formula_3 is a cube root of unity, formula_6 meaning that it is not a principal cube root of unity.
The significance of a root of unity being "principal" is that it is a necessary condition for the theory of the discrete Fourier transform to work out correctly.
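The two defining conditions can be tested directly. The sketch below checks the non-example above, and also a principal 4th root of unity modulo 17; the modulus 17 and the candidate root 4 are illustrative choices of the kind used in number-theoretic transforms.

```python
# Minimal sketch: test the two defining conditions for a principal n-th root of
# unity in the ring of integers modulo m.
def is_principal_root(alpha, n, m):
    if pow(alpha, n, m) != 1:                     # alpha**n must equal 1
        return False
    # Every geometric sum  sum_{j=0}^{n-1} alpha**(j*k),  1 <= k < n,  must vanish mod m.
    return all(sum(pow(alpha, j * k, m) for j in range(n)) % m == 0
               for k in range(1, n))

print(is_principal_root(3, 3, 26))   # False: 1 + 3 + 3**2 = 13, not 0 mod 26
print(is_principal_root(4, 4, 17))   # True: a principal 4th root of unity mod 17
```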
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\begin{align}\n& \\alpha^n = 1 \\\\\n& \\sum_{j=0}^{n-1} \\alpha^{jk} = 0 \\text{ for } 1 \\leq k < n\n\\end{align}"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "3"
},
{
"math_id": 4,
"text": "26"
},
{
"math_id": 5,
"text": "3^3 \\equiv 1 \\pmod{26}"
},
{
"math_id": 6,
"text": "1 + 3 + 3^2 \\equiv 13 \\pmod{26}"
}
] |
https://en.wikipedia.org/wiki?curid=14593201
|
14593776
|
De Longchamps point
|
Orthocenter of a triangle's anticomplementary triangle
In geometry, the de Longchamps point of a triangle is a triangle center named after French mathematician Gaston Albert Gohierre de Longchamps. It is the reflection of the orthocenter of the triangle about the circumcenter.
Definition.
Let the given triangle have vertices formula_0, formula_1, and formula_2, opposite the respective sides formula_3, formula_4, and formula_5, as is the standard notation in triangle geometry. In the 1886 paper in which he introduced this point, de Longchamps initially defined it as the center of a circle formula_6 orthogonal to the three circles formula_7, formula_8, and formula_9, where formula_7 is centered at formula_0 with radius formula_3 and the other two circles are defined symmetrically. De Longchamps then also showed that the same point, now known as the de Longchamps point, may be equivalently defined as the orthocenter of the anticomplementary triangle of formula_10, and that it is the reflection of the orthocenter of formula_10 around the circumcenter.
The Steiner circle of a triangle is concentric with the nine-point circle and has radius 3/2 the circumradius of the triangle; the de Longchamps point is the homothetic center of the Steiner circle and the circumcircle.
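These equivalent characterizations are easy to verify numerically. The sketch below, for an arbitrarily chosen triangle, computes the point as the reflection of the orthocenter about the circumcenter and checks that it coincides with the orthocenter of the anticomplementary triangle.

```python
import numpy as np

# Sketch: de Longchamps point of an arbitrarily chosen triangle.
def circumcenter(A, B, C):
    # Solve the two linear equations |P - A|^2 = |P - B|^2 = |P - C|^2.
    M = 2.0 * np.array([B - A, C - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A])
    return np.linalg.solve(M, rhs)

def orthocenter(A, B, C):
    # With the circumcenter O as origin, H = A + B + C, i.e. H = A + B + C - 2*O.
    O = circumcenter(A, B, C)
    return A + B + C - 2.0 * O

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
O, H = circumcenter(A, B, C), orthocenter(A, B, C)
L = 2.0 * O - H                            # reflection of the orthocenter about the circumcenter

# Anticomplementary triangle has vertices B + C - A, C + A - B, A + B - C.
L_anti = orthocenter(B + C - A, C + A - B, A + B - C)
print(np.allclose(L, L_anti))              # True
```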
Additional properties.
As the reflection of the orthocenter around the circumcenter, the de Longchamps point belongs to the line through both of these points, which is the Euler line of the given triangle. Thus, it is collinear with all the other triangle centers on the Euler line, which along with the orthocenter and circumcenter include the centroid and the center of the nine-point circle.
The de Longchamp point is also collinear, along a different line, with the incenter and the Gergonne point of its triangle. The three circles centered at formula_0, formula_1, and formula_2, with radii formula_11, formula_12, and formula_13 respectively (where formula_14 is the semiperimeter) are mutually tangent, and there are two more circles tangent to all three of them, the inner and outer Soddy circles; the centers of these two circles also lie on the same line with the de Longchamp point and the incenter. The de Longchamp point is the point of concurrence of this line with the Euler line, and with three other lines defined in a similar way as the line through the incenter but using instead the three excenters of the triangle.
The Darboux cubic may be defined from the de Longchamps point, as the locus of points formula_15 such that formula_15, the isogonal conjugate of formula_15, and the de Longchamps point are collinear. It is the only cubic curve invariant of a triangle that is both isogonally self-conjugate and centrally symmetric; its center of symmetry is the circumcenter of the triangle. The de Longchamps point itself lies on this curve, as does its reflection the orthocenter.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "\\Delta"
},
{
"math_id": 7,
"text": "\\Delta_a"
},
{
"math_id": 8,
"text": "\\Delta_b"
},
{
"math_id": 9,
"text": "\\Delta_c"
},
{
"math_id": 10,
"text": "ABC"
},
{
"math_id": 11,
"text": "s-a"
},
{
"math_id": 12,
"text": "s-b"
},
{
"math_id": 13,
"text": "s-c"
},
{
"math_id": 14,
"text": "s"
},
{
"math_id": 15,
"text": "X"
}
] |
https://en.wikipedia.org/wiki?curid=14593776
|
1459427
|
Non-circular gear
|
Gear in a shape other than a circle
A non-circular gear (NCG) is a special gear design with special characteristics and purpose. While a regular gear is optimized to transmit torque to another engaged member with minimum noise and wear and with maximum efficiency, a non-circular gear's main objective might be ratio variations, axle displacement oscillations and more. Common applications include textile machines, potentiometers, CVTs (continuously variable transmissions), window shade panel drives, mechanical presses and high torque hydraulic engines.
A regular gear pair can be represented as two circles rolling together without slip. In the case of non-circular gears, those circles are replaced with anything different from a circle. For this reason NCGs in most cases are not round, but round NCGs that look like regular gears are also possible (small ratio variations result from meshing area modifications).
Generally, NCGs should meet all the requirements of regular gearing, but in some cases, for example a variable axle distance, this could prove impossible to support; such gears require very tight manufacturing tolerances, and assembly problems arise. Because of their complicated geometry, NCGs are most often spur gears, and molding or electrical discharge machining technology is used instead of generation.
Mathematical description.
Ignoring the gear teeth for the moment (i.e. assuming the gear teeth are very small), let formula_0 be the radius of the first gear wheel as a function of angle from the axis of rotation formula_1, and let formula_2 be the radius of the second gear wheel as a function of angle from its axis of rotation formula_3. If the axles remain fixed, the distance between the axles is also fixed:
formula_4
Assuming that the point of contact lies on the line connecting the axles, in order for the gears to touch without slipping, the velocity of each wheel must be equal at the point of contact and perpendicular to the line connecting the axles, which implies that:
formula_5
Each wheel must be cyclic in its angular coordinates. If the shape of the first wheel is known, the shape of the second can often be found using the above equations. If the relationship between the angles is specified, the shapes of both wheels can often be determined analytically as well.
It is more convenient to use the circular variable formula_6 when analyzing this problem. Assuming the radius of the first gear wheel is known as a function of "z", and using the relationship formula_7, the above two equations can be combined to yield the differential equation:
formula_8
where formula_9 and formula_10 describe the rotation of the first and second gears respectively. This equation can be formally solved as:
formula_11
where formula_12 is a constant of integration.
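As an illustration, the relation can be integrated numerically for a given first-gear pitch curve. In the sketch below, the offset-cosine profile and the 1:1 closure condition are assumptions made only for the example; the axle distance is then chosen so that the second gear also closes after one revolution.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import cumulative_trapezoid, trapezoid

# Sketch: given the pitch curve r1(theta1) of the first gear, recover the mating
# pitch curve from  r1 + r2 = a  and  r1 dtheta1 = r2 dtheta2.
theta1 = np.linspace(0.0, 2.0 * np.pi, 2001)
r1 = 1.0 + 0.3 * np.cos(theta1)            # assumed profile of the first gear

def rotation_of_gear_2(a):
    """Total rotation of the second gear while the first makes one full turn."""
    return trapezoid(r1 / (a - r1), theta1)

# Choose the axle distance a so that the second gear also closes after one turn (1:1 ratio).
a = brentq(lambda a: rotation_of_gear_2(a) - 2.0 * np.pi, 1.4, 5.0)

theta2 = cumulative_trapezoid(r1 / (a - r1), theta1, initial=0.0)
r2 = a - r1      # radius of the second gear at angle theta2 (parametrized by theta1)
print("axle distance:", a, " final rotation of gear 2:", theta2[-1])
```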
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r_1(\\theta_1)"
},
{
"math_id": 1,
"text": "\\theta_1"
},
{
"math_id": 2,
"text": "r_2(\\theta_2)"
},
{
"math_id": 3,
"text": "\\theta_2"
},
{
"math_id": 4,
"text": "r_1(\\theta_1)+r_2(\\theta_2)=a\\,"
},
{
"math_id": 5,
"text": "r_1\\,d\\theta_1=r_2\\,d\\theta_2"
},
{
"math_id": 6,
"text": "z=e^{i\\theta}"
},
{
"math_id": 7,
"text": "dz=iz\\,d\\theta"
},
{
"math_id": 8,
"text": "\\frac{dz_2}{z_2}=\\frac{r_1(z_1)}{a-r_1(z_1)}\\,\\frac{dz_1}{z_1}"
},
{
"math_id": 9,
"text": "z_1"
},
{
"math_id": 10,
"text": "z_2"
},
{
"math_id": 11,
"text": "\\ln(z_2)=\\ln(K)+\\int\\frac{r_1(z_1)}{a-r_1(z_1)}\\,\\frac{dz_1}{z_1}"
},
{
"math_id": 12,
"text": "\\ln(K)"
}
] |
https://en.wikipedia.org/wiki?curid=1459427
|
14595008
|
Lagrange invariant
|
Measure of the light propagating through an optical system
In optics the Lagrange invariant is a measure of the light propagating through an optical system. It is defined by
formula_0,
where y and u are the marginal ray height and angle respectively, and ȳ and ū are the chief ray height and angle. n is the ambient refractive index. In order to reduce confusion with other quantities, the symbol Ж may be used in place of H. Ж2 is proportional to the throughput of the optical system (related to étendue). For a given optical system, the Lagrange invariant is a constant throughout all space, that is, it is invariant upon refraction and transfer.
The optical invariant is a generalization of the Lagrange invariant which is formed using the ray heights and angles of any two rays. For these rays, the optical invariant is a constant throughout all space.
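The invariance is easy to verify with a paraxial ray trace. The sketch below propagates an assumed marginal ray and chief ray through free space and an arbitrary thin lens in air and prints H after each step.

```python
# Minimal paraxial check that H = n*(ubar*y - u*ybar) is unchanged by transfer
# and refraction. The lens power, distances and starting rays are assumptions.
def transfer(y, u, d):
    return y + u * d, u              # free-space propagation over distance d

def refract_thin_lens(y, u, power):
    return y, u - y * power          # thin lens in air (n = 1)

def lagrange_invariant(y, u, ybar, ubar, n=1.0):
    return n * (ubar * y - u * ybar)

y, u = 10.0, 0.0                     # marginal ray
ybar, ubar = 0.0, 0.1                # chief ray
print(lagrange_invariant(y, u, ybar, ubar))

for step in (lambda y, u: transfer(y, u, 50.0),
             lambda y, u: refract_thin_lens(y, u, 0.02),
             lambda y, u: transfer(y, u, 40.0)):
    y, u = step(y, u)
    ybar, ubar = step(ybar, ubar)
    print(lagrange_invariant(y, u, ybar, ubar))   # stays constant
```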
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H = n\\overline{u}y - nu\\overline{y}"
}
] |
https://en.wikipedia.org/wiki?curid=14595008
|
14597289
|
Pythagorean addition
|
Defined for two real numbers as the square root of the sum of their squares
In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. According to the Pythagorean theorem, for a triangle with sides formula_0 and formula_1, this length can be calculated as
formula_2
where formula_3 denotes the Pythagorean addition operation.
This operation can be used in the conversion of Cartesian coordinates to polar coordinates. It also provides a simple notation and terminology for some formulas when its summands are complicated; for example, the energy-momentum relation in physics becomes
formula_4
It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature; it is related to the "quadratic mean" or "root mean square".
Applications.
Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function to convert from Cartesian coordinates formula_5 to polar coordinates formula_6:
formula_7
If measurements formula_8 have independent errors formula_9 respectively, the quadrature method gives the overall error,
formula_10
whereas the upper limit of the overall error is
formula_11
if the errors were not independent.
This is equivalent to finding the magnitude of the resultant of adding orthogonal vectors, each with magnitude equal to the uncertainty, using the Pythagorean theorem.
In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives six digital numbers of shot noise, three of dark current noise and two of Johnson–Nyquist noise under a specific condition, the overall noise is
formula_12
digital numbers, showing the dominance of larger sources of noise.
The root mean square of a finite set of formula_13 numbers is just their Pythagorean sum, normalized to form a generalized mean by dividing by formula_14.
Properties.
The operation formula_3 is associative and commutative, and
formula_15
This means that the real numbers under formula_3 form a commutative semigroup.
The real numbers under formula_3 are not a group, because formula_3 can never produce a negative number as its result, whereas each element of a group must be the result of applying the group operation to itself and the identity element. On the non-negative numbers, it is still not a group, because Pythagorean addition of one number by a second positive number can only increase the first number, so no positive number can have an inverse element. Instead, it forms a commutative monoid on the non-negative numbers, with zero as its identity.
Implementation.
Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right-angle triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square root function on the sum of two squares, but hypot avoids problems that occur when squaring very large or very small numbers. If calculated using the natural formula,
formula_16
the squares of very large or small values of formula_17 and formula_18 may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and overflow. The hypot function was designed to calculate the result without causing this problem.
If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).
Since C++17, there has been an additional hypot function for 3D calculations:
formula_19
Calculation order.
The difficulty with the naive implementation is that formula_20 may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that formula_21, and then to use the equivalent form
formula_22
The computation of formula_23 cannot overflow unless both formula_17 and formula_18 are zero. If formula_23 underflows, the final result is equal to formula_24, which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by formula_24 cannot underflow, and overflows only when the result is too large to represent. This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root. Typically, the implementation is slower by a factor of 2.5 to 3.
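A minimal sketch of this exchange-and-scale technique is shown below (Python's math.hypot is the corresponding library routine); the test value is chosen so that the naive formula overflows.

```python
import math

# Sketch of the exchange-and-scale evaluation described above.
def hypot_scaled(x, y):
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x                  # ensure x >= y
    if x == 0.0:
        return 0.0
    t = y / x                        # in [0, 1]; cannot overflow, may underflow harmlessly
    return x * math.sqrt(1.0 + t * t)

big = 1e200
print(math.sqrt(big * big + big * big))   # inf: the squares overflow
print(hypot_scaled(big, big))             # ~1.4142135623730951e+200
print(math.hypot(big, big))               # the library function agrees
```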
More complex implementations avoid this by dividing the inputs into more cases:
However, this implementation can be extremely slow when it causes incorrect branch predictions due to the different cases. Additional techniques allow the result to be computed more accurately, e.g. to less than one ulp.
Programming language support.
The function is present in many programming languages and libraries, including
CSS,
C++11,
D,
Fortran (since Fortran 2008),
Go,
JavaScript (since ES2015),
Julia,
Java (since version 1.5),
Kotlin,
MATLAB,
PHP,
Python,
Ruby,
Rust,
and Scala.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "a \\oplus b = \\sqrt{a^2+b^2},"
},
{
"math_id": 3,
"text": "\\oplus"
},
{
"math_id": 4,
"text": "E = mc^2 \\oplus pc."
},
{
"math_id": 5,
"text": "(x,y)"
},
{
"math_id": 6,
"text": "(r,\\theta)"
},
{
"math_id": 7,
"text": "\n\\begin{align}\nr&=x\\oplus y=\\operatorname{hypot}(x,y)\\\\\n\\theta&=\\operatorname{atan2}(y,x).\\\\\n\\end{align}\n"
},
{
"math_id": 8,
"text": "X,Y,Z,\\dots"
},
{
"math_id": 9,
"text": "\\Delta_X, \\Delta_Y, \\Delta_Z, \\dots"
},
{
"math_id": 10,
"text": "\\varDelta_o = \\sqrt{{\\varDelta_X}^2 + {\\varDelta_Y}^2 + {\\varDelta_Z}^2 + \\cdots}"
},
{
"math_id": 11,
"text": "\\varDelta_u = \\varDelta_X + \\varDelta_Y + \\varDelta_Z + \\cdots"
},
{
"math_id": 12,
"text": "\\sigma = 6 \\oplus 3 \\oplus 2 = \\sqrt{6^2 + 3^2 + 2^2} = 7"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\sqrt n"
},
{
"math_id": 15,
"text": "\\sqrt{x_1^2 + x_2^2 + \\cdots + x_n^2} = x_1 \\oplus x_2 \\oplus \\cdots \\oplus x_n."
},
{
"math_id": 16,
"text": "r = \\sqrt{x^2 + y^2},"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "y"
},
{
"math_id": 19,
"text": "r = \\sqrt{x^2 + y^2 + z^2}."
},
{
"math_id": 20,
"text": "x^2+y^2"
},
{
"math_id": 21,
"text": "|x|\\ge|y|"
},
{
"math_id": 22,
"text": "\nr = |x| \\sqrt{1 + \\left(\\frac{y}{x}\\right)^2}.\n"
},
{
"math_id": 23,
"text": "y/x"
},
{
"math_id": 24,
"text": "|x|"
},
{
"math_id": 25,
"text": "x\\oplus y\\approx|x|"
},
{
"math_id": 26,
"text": "x^2"
},
{
"math_id": 27,
"text": "y^2"
}
] |
https://en.wikipedia.org/wiki?curid=14597289
|
14597919
|
High Energy Astronomy Observatory 3
|
The last of NASA's three High Energy Astronomy Observatories, HEAO 3 was launched 20 September 1979 on an Atlas-Centaur launch vehicle, into a nearly circular, 43.6 degree inclination low Earth orbit with an initial perigee of 486.4 km.
The normal operating mode was a continuous celestial scan, spinning approximately once every 20 min about the spacecraft z-axis, which was nominally pointed at the Sun.
Total mass of the observatory at launch was .
HEAO 3 included three scientific instruments: the first a cryogenic high-resolution germanium gamma-ray spectrometer, and two devoted to cosmic-ray observations.
The scientific objectives of the mission's three experiments were:
(1) to study intensity, spectrum, and time behavior of X-ray and gamma-ray sources between 0.06 and 10 MeV; measure isotropy of the diffuse X-ray and gamma-ray background; and perform an exploratory search for X-and gamma-ray line emissions;
(2) to determine the isotopic composition of the most abundant components of the cosmic-ray flux with atomic mass between 7 and 56, and the flux of each element with atomic number (Z) between Z = 4 and Z = 50;
(3) to search for super-heavy nuclei up to Z = 120 and measure the composition of the nuclei with Z >20.
The Gamma-ray Line Spectrometer Experiment.
The HEAO "C-1" instrument (as it was known before launch) was a sky-survey experiment, operating in the hard X-ray and low-energy gamma-ray bands.
The gamma-ray spectrometer was especially designed to search for the 511 keV gamma-ray line produced by the annihilation of positrons in stars, galaxies, and the interstellar medium (ISM), nuclear gamma-ray line emission expected from the interactions of cosmic rays in the ISM, the radioactive products of cosmic nucleosynthesis, and nuclear reactions due to low-energy cosmic rays.
In addition, careful study was made of the spectral and time variations of known hard X-ray sources.
The experimental package contained four cooled, p-type high-purity Ge gamma-ray detectors with a total volume of about 100 cmformula_0, enclosed in a thick (6.6 cm average) caesium iodide (CsI) scintillation shield in active anti-coincidence to suppress extraneous background.
The experiment was capable of measuring gamma-ray energies falling within the energy interval from 0.045 to 10 MeV. The Ge detector system had an initial energy resolution better than 2.5 keV at 1.33 MeV and a line sensitivity from 1.E-4 to 1.E-5 photons/cm2-s, depending on the energy. Key experimental parameters were (1) a geometry factor of 11.1 cm2-sr, (2) effective area ~75 cmformula_1 at 100 keV, (3) a field of view of ~30 deg FWHM at 45 keV, and (4) a time resolution of less than 0.1 ms for the germanium detectors and 10 s for the CsI detectors. The gamma-ray spectrometer operated until 1 June 1980, when its cryogen was exhausted. The energy resolution of the Ge detectors was subject to degradation (roughly proportional to energy and time) due to radiation damage. The primary data are available from the NASA HEASARC and at JPL. They include instrument, orbit, and aspect data plus some spacecraft housekeeping information on 1600-bpi binary tapes. Some of this material has subsequently been archived on more modern media. The experiment was proposed, developed, and managed by the Jet Propulsion Laboratory of the California Institute of Technology, under the direction of Dr. Allan S. Jacobson.
The Isotopic Composition of Primary Cosmic Rays Experiment.
The HEAO C-2 experiment measured the relative composition of the isotopes of the primary cosmic rays between beryllium and iron (Z from 4 to 26) and the elemental abundances up to tin (Z=50). Cerenkov counters and hodoscopes, together with the Earth's magnetic field, formed a spectrometer. They determined charge and mass of cosmic rays to a precision of 10% for the most abundant elements over the momentum range from 2 to 25 GeV/c (c=speed of light). Scientific direction was by Principal Investigators Prof. Bernard Peters and Dr. Lyoie Koch-Miramond. The primary data base has been archived at the Centre d'Etudes Nucléaires de Saclay and the Danish Space Research Institute. Information on the data products is given by Engelman et al. 1985.
The Heavy Nuclei Experiment.
The purpose of the HEAO C-3 experiment was to measure the charge spectrum of cosmic-ray nuclei over the nuclear charge (Z) range from 17 to 120, in the energy interval 0.3 to 10 GeV/nucleon; to characterize cosmic ray sources; processes of nucleosynthesis, and propagation modes. The detector consisted of a double-ended instrument of upper and lower hodoscopes and three dual-gap ion chambers. The two ends were separated by a Cerenkov radiator. The geometrical factor was 4 cm2-sr. The ion chambers could resolve charge to 0.24 charge units at low energy and 0.39 charge units at high energy and high Z. The Cerenkov counter could resolve 0.3 to 0.4 charge units. Binns "et al." give more details.
The experiment was proposed and managed by the Space Radiation Laboratory of the California Institute of Technology (Caltech), under the direction of Principal Investigator Prof. Edward C. Stone, Jr. of Caltech, and Dr. Martin H. Israel, and Dr. Cecil J. Waddington.
Project.
The HEAO 3 Project was the final mission in the High Energy Astronomy Observatory series, which was managed by the NASA Marshall Space Flight Center (MSFC), where the project scientist was Dr. Thomas A. Parnell, and the project manager was Dr. John F. Stone. The prime contractor was TRW.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "^3"
},
{
"math_id": 1,
"text": "^2"
}
] |
https://en.wikipedia.org/wiki?curid=14597919
|
14598622
|
Toeplitz algebra
|
In operator algebras, the Toeplitz algebra is the C*-algebra generated by the unilateral shift on the Hilbert space "l"2(N). Taking "l"2(N) to be the Hardy space "H"2, the Toeplitz algebra consists of elements of the form
formula_0
where "Tf" is a Toeplitz operator with continuous symbol and "K" is a compact operator.
Toeplitz operators with continuous symbols commute modulo the compact operators. So the Toeplitz algebra can be viewed as the C*-algebra extension of continuous functions on the circle by the compact operators. This extension is called the Toeplitz extension.
By Atkinson's theorem, an element of the Toeplitz algebra "Tf" + "K" is a Fredholm operator if and only if the symbol "f" of "Tf" is invertible. In that case, the Fredholm index of "Tf" + "K" is precisely the winding number of "f", the equivalence class of "f" in the fundamental group of the circle. This is a special case of the Atiyah-Singer index theorem.
Wold decomposition characterizes proper isometries acting on a Hilbert space. From this, together with properties of Toeplitz operators, one can conclude that the Toeplitz algebra is the universal C*-algebra generated by a proper isometry; this is "Coburn's theorem".
|
[
{
"math_id": 0,
"text": "T_f + K\\;"
}
] |
https://en.wikipedia.org/wiki?curid=14598622
|
14598730
|
Toeplitz operator
|
In operator theory, a Toeplitz operator is the compression of a multiplication operator on the circle to the Hardy space.
Details.
Let formula_0 be the complex unit circle, with the standard Lebesgue measure, and formula_1 be the Hilbert space of square-integrable functions. A bounded measurable function formula_2 on formula_0 defines a multiplication operator formula_3 on "formula_1" . Let formula_4 be the projection from "formula_1" onto the Hardy space formula_5. The "Toeplitz operator with symbol formula_2" is defined by
formula_6
where " | " means restriction.
A bounded operator on formula_5 is Toeplitz if and only if its matrix representation, in the basis formula_7, has constant diagonals.
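This matrix characterization can be made concrete. The sketch below builds the truncated matrix of a Toeplitz operator from the Fourier coefficients of its symbol; the symbol, truncation size and sampling rule are illustrative choices.

```python
import numpy as np
from scipy.linalg import toeplitz

# Sketch: truncated matrix of T_g in the basis 1, z, z**2, ...; the entries satisfy
# (T_g)_{jk} = ghat(j - k), so the matrix is constant along diagonals.
def symbol_fourier_coefficient(g, m, n_samples=4096):
    theta = 2.0 * np.pi * np.arange(n_samples) / n_samples
    return np.mean(g(np.exp(1j * theta)) * np.exp(-1j * m * theta))

def toeplitz_matrix(g, size):
    c = [symbol_fourier_coefficient(g, j) for j in range(size)]    # first column: ghat(j)
    r = [symbol_fourier_coefficient(g, -k) for k in range(size)]   # first row: ghat(-k)
    return toeplitz(c, r)

g = lambda z: z + 2.0 + 1.0 / z       # symbol g(e^{it}) = 2 + 2 cos t
T = toeplitz_matrix(g, 6)
print(np.round(T.real, 3))            # tridiagonal, with constant diagonals
```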
Theorems.
For a proof, see . He attributes the theorem to Mark Krein, Harold Widom, and Allen Devinatz. This can be thought of as an important special case of the Atiyah-Singer index theorem.
Here, formula_13 denotes the closed subalgebra of formula_14 of analytic functions (functions with vanishing negative Fourier coefficients), formula_15 is the closed subalgebra of formula_14 generated by formula_16 and formula_17, and formula_18 is the space (as an algebraic set) of continuous functions on the circle. See .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S^1"
},
{
"math_id": 1,
"text": "L^2(S^1)"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "M_g"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "H^2"
},
{
"math_id": 6,
"text": "T_g = P M_g \\vert_{H^2},"
},
{
"math_id": 7,
"text": "\\{z^n, z \\in \\mathbb{C}, n \\geq 0\\}"
},
{
"math_id": 8,
"text": "T_g - \\lambda"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "g(S^1)"
},
{
"math_id": 11,
"text": "T_f T_g - T_{fg}"
},
{
"math_id": 12,
"text": "H^\\infty[\\bar f] \\cap H^\\infty [g] \\subseteq H^\\infty + C^0(S^1)"
},
{
"math_id": 13,
"text": "H^\\infty"
},
{
"math_id": 14,
"text": "L^\\infty (S^1)"
},
{
"math_id": 15,
"text": "H^\\infty [f]"
},
{
"math_id": 16,
"text": "f "
},
{
"math_id": 17,
"text": " H^\\infty"
},
{
"math_id": 18,
"text": "C^0(S^1)"
}
] |
https://en.wikipedia.org/wiki?curid=14598730
|
14599476
|
Casey's theorem
|
On four non-intersecting circles that lie inside a bigger circle and tangent to it
In mathematics, Casey's theorem, also known as the generalized Ptolemy's theorem, is a theorem in Euclidean geometry named after the Irish mathematician John Casey.
Formulation of the theorem.
Let formula_0 be a circle of radius formula_1. Let formula_2 be (in that order) four non-intersecting circles that lie inside formula_0 and are tangent to it. Denote by formula_3 the length of the exterior common bitangent of the circles formula_4. Then:
formula_5
Note that in the degenerate case, where all four circles reduce to points, this is exactly Ptolemy's theorem.
Proof.
The following proof is attributable to Zacharias. Denote the radius of circle formula_6 by formula_7 and its tangency point with the circle formula_0 by formula_8. We will use the notation formula_9 for the centers of the circles.
Note that from Pythagorean theorem,
formula_10
We will try to express this length in terms of the points formula_11. By the law of cosines in triangle formula_12,
formula_13
Since the circles formula_14 are tangent to each other:
formula_15
Let formula_16 be a point on the circle formula_0. According to the law of sines in triangle formula_17:
formula_18
Therefore,
formula_19
and substituting these in the formula above:
formula_20
formula_21
formula_22
And finally, the length we seek is
formula_23
We can now evaluate the left hand side, with the help of the original Ptolemy's theorem applied to the inscribed quadrilateral formula_24:
formula_25
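The identity is also easy to check numerically. The sketch below uses an arbitrarily chosen configuration of four circles internally tangent to the unit circle at cyclically ordered points.

```python
import numpy as np

# Numerical check of Casey's theorem for arbitrarily chosen data: four circles
# internally tangent to the unit circle at cyclically ordered tangency points.
R = 1.0
angles = np.deg2rad([20.0, 100.0, 200.0, 300.0])     # tangency points, in order
radii = np.array([0.10, 0.15, 0.20, 0.12])           # radii of the four circles
centers = (R - radii)[:, None] * np.column_stack([np.cos(angles), np.sin(angles)])

def ext_tangent(i, j):
    """Length of the exterior common tangent of circles i and j."""
    d = np.linalg.norm(centers[i] - centers[j])
    return np.sqrt(d**2 - (radii[i] - radii[j])**2)

t = {(i, j): ext_tangent(i, j) for i in range(4) for j in range(i + 1, 4)}
lhs = t[(0, 1)] * t[(2, 3)] + t[(0, 3)] * t[(1, 2)]
rhs = t[(0, 2)] * t[(1, 3)]
print(lhs, rhs, np.isclose(lhs, rhs))                # the two sides agree
```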
Further generalizations.
It can be seen that the four circles need not lie inside the big circle. In fact, they may be tangent to it from the outside as well. In that case, the following change should be made:
If formula_4 are both tangent from the same side of formula_0 (both in or both out), formula_3 is the length of the exterior common tangent.
If formula_4 are tangent from different sides of formula_0 (one in and one out), formula_3 is the length of the interior common tangent.
The converse of Casey's theorem is also true. That is, if equality holds, the circles are tangent to a common circle.
Applications.
Casey's theorem and its converse can be used to prove a variety of statements in Euclidean geometry. For example, the shortest known proof of Feuerbach's theorem uses the converse theorem.
|
[
{
"math_id": 0,
"text": "\\,O"
},
{
"math_id": 1,
"text": "\\,R"
},
{
"math_id": 2,
"text": "\\,O_1, O_2, O_3, O_4"
},
{
"math_id": 3,
"text": "\\,t_{ij}"
},
{
"math_id": 4,
"text": "\\,O_i, O_j"
},
{
"math_id": 5,
"text": "\\,t_{12} \\cdot t_{34}+t_{14} \\cdot t_{23}=t_{13}\\cdot t_{24}."
},
{
"math_id": 6,
"text": "\\,O_i"
},
{
"math_id": 7,
"text": "\\,R_i"
},
{
"math_id": 8,
"text": "\\,K_i"
},
{
"math_id": 9,
"text": "\\,O, O_i"
},
{
"math_id": 10,
"text": "\\,t_{ij}^2=\\overline{O_iO_j}^2-(R_i-R_j)^2."
},
{
"math_id": 11,
"text": "\\,K_i,K_j"
},
{
"math_id": 12,
"text": "\\,O_iOO_j"
},
{
"math_id": 13,
"text": "\\overline{O_iO_j}^2=\\overline{OO_i}^2+\\overline{OO_j}^2-2\\overline{OO_i}\\cdot \\overline{OO_j}\\cdot \\cos\\angle O_iOO_j"
},
{
"math_id": 14,
"text": "\\,O,O_i"
},
{
"math_id": 15,
"text": "\\overline{OO_i} = R - R_i,\\, \\angle O_iOO_j = \\angle K_iOK_j"
},
{
"math_id": 16,
"text": "\\,C"
},
{
"math_id": 17,
"text": "\\,K_iCK_j"
},
{
"math_id": 18,
"text": "\\overline{K_iK_j} = 2R\\cdot \\sin\\angle K_iCK_j = 2R\\cdot \\sin\\frac{\\angle K_iOK_j}{2}"
},
{
"math_id": 19,
"text": "\\cos\\angle K_iOK_j = 1-2\\sin^2\\frac{\\angle K_iOK_j}{2}=1-2\\cdot \\left(\\frac{\\overline{K_iK_j}}{2R}\\right)^2 = 1 - \\frac{\\overline{K_iK_j}^2}{2R^2}"
},
{
"math_id": 20,
"text": "\\overline{O_iO_j}^2=(R-R_i)^2+(R-R_j)^2-2(R-R_i)(R-R_j)\\left(1-\\frac{\\overline{K_iK_j}^2}{2R^2}\\right)"
},
{
"math_id": 21,
"text": "\\overline{O_iO_j}^2=(R-R_i)^2+(R-R_j)^2-2(R-R_i)(R-R_j)+(R-R_i)(R-R_j)\\cdot \\frac{\\overline{K_iK_j}^2}{R^2}"
},
{
"math_id": 22,
"text": "\\overline{O_iO_j}^2=((R-R_i)-(R-R_j))^2+(R-R_i)(R-R_j)\\cdot \\frac{\\overline{K_iK_j}^2}{R^2}"
},
{
"math_id": 23,
"text": "t_{ij}=\\sqrt{\\overline{O_iO_j}^2-(R_i-R_j)^2}=\\frac{\\sqrt{R-R_i}\\cdot \\sqrt{R-R_j}\\cdot \\overline{K_iK_j}}{R}"
},
{
"math_id": 24,
"text": "\\,K_1K_2K_3K_4"
},
{
"math_id": 25,
"text": "\n\\begin{align}\n& t_{12}t_{34}+t_{14}t_{23} \\\\[4pt]\n= {} & \\frac{1}{R^2}\\cdot \\sqrt{R-R_1}\\sqrt{R-R_2}\\sqrt{R-R_3}\\sqrt{R-R_4} \\left(\\overline{K_1K_2} \\cdot \\overline{K_3K_4}+\\overline{K_1K_4}\\cdot \\overline{K_2K_3}\\right) \\\\[4pt]\n= {} & \\frac{1}{R^2}\\cdot \\sqrt{R-R_1}\\sqrt{R-R_2}\\sqrt{R-R_3}\\sqrt{R-R_4}\\left(\\overline{K_1K_3}\\cdot \\overline{K_2K_4}\\right) \\\\[4pt]\n= {} & t_{13}t_{24}\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=14599476
|
14599790
|
Quantaloid
|
In mathematics, a quantaloid is a category enriched over the category Sup of "suplattices". In other words, for any objects "a" and "b" the morphism object between them is not just a set but a complete lattice, in such a way that composition of morphisms preserves all joins:
formula_0
The endomorphism lattice formula_1 of any object formula_2 in a quantaloid is a quantale, whence the name.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(\\bigvee_i f_i) \\circ (\\bigvee_j g_j) = \\bigvee_{i,j} (f_i \\circ g_j) "
},
{
"math_id": 1,
"text": "\\mathrm{Hom}(X,X)"
},
{
"math_id": 2,
"text": "X"
}
] |
https://en.wikipedia.org/wiki?curid=14599790
|
14601018
|
Ogden hyperelastic model
|
The Ogden material model is a hyperelastic material model used to describe the non-linear stress–strain behaviour of complex materials such as rubbers, polymers, and biological tissue. The model was developed by Raymond Ogden in 1972. The Ogden model, like other hyperelastic material models, assumes that the material behaviour can be described by means of a strain energy density function, from which the stress–strain relationships can be derived.
Ogden material model.
In the Ogden material model, the strain energy density is expressed in terms of the principal stretches formula_0, formula_1 as:
formula_2
where formula_3, formula_4 and formula_5 are material constants. Under the assumption of incompressibility one can rewrite this as
formula_6
In general the shear modulus results from
formula_7
With formula_8 and by fitting the material parameters, the material behaviour of rubbers can be described very accurately. For particular values of material constants the Ogden model will reduce to either the Neo-Hookean solid (formula_9, formula_10) or the Mooney-Rivlin material (formula_11, formula_12, formula_13, with the constraint condition formula_14).
Using the Ogden material model, the three principal values of the Cauchy stresses can now be computed as
formula_15.
Uniaxial tension.
We now consider an incompressible material under uniaxial tension, with the stretch ratio given as formula_16, where formula_17 is the stretched length and formula_18 is the original unstretched length. The pressure formula_19 is determined from incompressibility and boundary condition formula_20, yielding:
formula_21.
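As an illustration, the uniaxial stress above can be evaluated directly for a three-term model. The parameters in the sketch below are placeholders of the order of magnitude typical for rubber, not fitted values for any particular material.

```python
import numpy as np

# Sketch: incompressible Ogden uniaxial Cauchy stress
#   sigma(lam) = sum_p mu_p * (lam**alpha_p - lam**(-alpha_p/2))
# for an illustrative (assumed) three-term parameter set.
mu    = np.array([0.63, 0.0012, -0.01])    # MPa, assumed
alpha = np.array([1.3, 5.0, -2.0])         # dimensionless, assumed

def ogden_uniaxial_stress(stretch):
    stretch = np.asarray(stretch, dtype=float)
    return sum(m * (stretch**a - stretch**(-a / 2.0)) for m, a in zip(mu, alpha))

stretches = np.linspace(1.0, 7.0, 7)
print(ogden_uniaxial_stress(stretches))    # Cauchy stress at each stretch ratio
```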
Equi-biaxial tension.
Consider an incompressible material under equi-biaxial tension, with formula_22. The pressure formula_19 is determined from incompressibility and the boundary condition formula_23, yielding:
formula_24.
Other hyperelastic models.
For rubber and biological materials, more sophisticated models are necessary. Such materials may exhibit a non-linear stress–strain behaviour at modest strains, or are elastic up to huge strains. These complex non-linear stress–strain behaviours need to be accommodated by specifically tailored strain-energy density functions.
The simplest of these hyperelastic models, is the Neo-Hookean solid.
formula_25
where formula_26 is the shear modulus, which can be determined by experiments. From experiments it is known that for rubbery materials under moderate straining up to 30–70%, the Neo-Hookean model usually fits the material behaviour with sufficient accuracy. To model rubber at high strains, the one-parametric Neo-Hookean model is replaced by more general models, such as the Mooney-Rivlin solid where the strain energy formula_27 is a linear combination of two invariants
formula_28
The Mooney-Rivlin material was originally also developed for rubber, but is today often applied to model (incompressible) biological tissue. For modeling rubbery and biological materials at even higher strains, the more sophisticated Ogden material model has been developed.
|
[
{
"math_id": 0,
"text": "\\,\\!\\lambda_j"
},
{
"math_id": 1,
"text": "\\,\\!j=1,2,3"
},
{
"math_id": 2,
"text": "\nW\\left( \\lambda_1,\\lambda_2,\\lambda_3 \\right) = \\sum_{p=1}^N \\frac{\\mu_p}{\\alpha_p}\\left( \\lambda_1^{\\alpha_p} + \\lambda_2^{\\alpha_p} + \\lambda_3^{\\alpha_p} -3 \\right)\n"
},
{
"math_id": 3,
"text": "N"
},
{
"math_id": 4,
"text": "\\,\\!\\mu_p"
},
{
"math_id": 5,
"text": "\\,\\!\\alpha_p"
},
{
"math_id": 6,
"text": "\nW\\left( \\lambda_1,\\lambda_2 \\right) = \\sum_{p=1}^N \\frac{\\mu_p}{\\alpha_p}\\left( \\lambda_1^{\\alpha_p} + \\lambda_2^{\\alpha_p} + \\lambda_1^{-\\alpha_p}\\lambda_2^{-\\alpha_p} -3 \\right)"
},
{
"math_id": 7,
"text": "\n2\\mu = \\sum_{p=1}^{N} \\mu_p \\alpha_{p}.\n"
},
{
"math_id": 8,
"text": "N=3"
},
{
"math_id": 9,
"text": "N=1"
},
{
"math_id": 10,
"text": "\\alpha = 2"
},
{
"math_id": 11,
"text": "N=2"
},
{
"math_id": 12,
"text": "\\alpha_1=2"
},
{
"math_id": 13,
"text": "\\alpha_2=-2"
},
{
"math_id": 14,
"text": "\\lambda_1\\lambda_2\\lambda_3=1"
},
{
"math_id": 15,
"text": "\n\\sigma_{j} = -p + \\lambda_{j}\\frac{\\partial W}{\\partial \\lambda_{j}} = -p + \\sum_{p=1}^N \\mu_{p} \\lambda_{j}^{\\alpha_p}\n"
},
{
"math_id": 16,
"text": "\\lambda=\\frac{l}{l_0}"
},
{
"math_id": 17,
"text": "l"
},
{
"math_id": 18,
"text": "{l_0}"
},
{
"math_id": 19,
"text": "p"
},
{
"math_id": 20,
"text": "\\sigma_2=\\sigma_3=0"
},
{
"math_id": 21,
"text": "\n\\sigma_{1} = \\sum_{p=1}^N\\mu_{p} \\left(\\lambda^{\\alpha_p} - \\lambda^{-\\frac{1}{2}\\alpha_p} \\right)\n"
},
{
"math_id": 22,
"text": "\\lambda_1 = \\lambda_2 =\\frac{l}{l_0}"
},
{
"math_id": 23,
"text": "\\sigma_3=0"
},
{
"math_id": 24,
"text": "\n\\sigma_{1} = \\sigma_{2} = \\sum_{p=1}^N\\mu_{p} \\left(\\lambda^{\\alpha_p} - \\lambda^{-2\\alpha_p} \\right)\n"
},
{
"math_id": 25,
"text": "\nW(\\mathbf{C})=\\frac{\\mu}{2}(I_1^C-3)\n"
},
{
"math_id": 26,
"text": "\\mu"
},
{
"math_id": 27,
"text": "W"
},
{
"math_id": 28,
"text": "\nW(\\mathbf{C})=\\frac{\\mu_1}{2}\\left(I_1^C -3 \\right) -\\frac{\\mu_2}{2}\\left(I_2^C - 3\\right)\n"
}
] |
https://en.wikipedia.org/wiki?curid=14601018
|
1460126
|
Chromatic polynomial
|
Function in algebraic graph theory
The chromatic polynomial is a graph polynomial studied in algebraic graph theory, a branch of mathematics. It counts the number of graph colorings as a function of the number of colors and was originally defined by George David Birkhoff to study the four color problem. It was generalised to the Tutte polynomial by Hassler Whitney and W. T. Tutte, linking it to the Potts model of statistical physics.
History.
George David Birkhoff introduced the chromatic polynomial in 1912, defining it only for planar graphs, in an attempt to prove the four color theorem. If formula_0 denotes the number of proper colorings of "G" with "k" colors then one could establish the four color theorem by showing formula_1 for all planar graphs "G". In this way he hoped to apply the powerful tools of analysis and algebra for studying the roots of polynomials to the combinatorial coloring problem.
Hassler Whitney generalised Birkhoff’s polynomial from the planar case to general graphs in 1932. In 1968, Ronald C. Read asked which polynomials are the chromatic polynomials of some graph, a question that remains open, and introduced the concept of chromatically equivalent graphs. Today, chromatic polynomials are one of the central objects of algebraic graph theory.
Definition.
For a graph "G", formula_2 counts the number of its (proper) vertex "k"-colorings.
Other commonly used notations include formula_3, formula_4, or formula_5.
There is a unique polynomial formula_6 which evaluated at any integer "k" ≥ 0 coincides with formula_2; it is called the chromatic polynomial of "G".
For example, to color the path graph formula_7 on 3 vertices with "k" colors, one may choose any of the "k" colors for the first vertex, any of the formula_8 remaining colors for the second vertex, and lastly for the third vertex, any of the formula_8 colors that are different from the second vertex's choice.
Therefore, formula_9 is the number of "k"-colorings of formula_7.
For a variable "x" (not necessarily integer), we thus have formula_10.
Deletion–contraction.
The fact that the number of "k"-colorings is a polynomial in "k" follows from a recurrence relation called the deletion–contraction recurrence or Fundamental Reduction Theorem. It is based on edge contraction: for a pair of vertices formula_11 and formula_12 the graph formula_13 is obtained by merging the two vertices and removing any edges between them.
If formula_11 and formula_12 are adjacent in "G", let formula_14 denote the graph obtained by removing the edge formula_15.
Then the numbers of "k"-colorings of these graphs satisfy:
formula_16
Equivalently, if formula_11 and formula_12 are not adjacent in "G" and formula_17 is the graph with the edge formula_15 added, then
formula_18
This follows from the observation that every "k"-coloring of "G" either gives different colors to formula_11 and formula_12, or the same colors. In the first case this gives a (proper) "k"-coloring of formula_17, while in the second case it gives a coloring of formula_13.
Conversely, every "k"-coloring of "G" can be uniquely obtained from a "k"-coloring of formula_17 or formula_13 (if formula_11 and formula_12 are not adjacent in "G").
The chromatic polynomial can hence be recursively defined as
formula_19 for the edgeless graph on "n" vertices, and
formula_20 for a graph "G" with an edge formula_15 (arbitrarily chosen).
Since the number of "k"-colorings of the edgeless graph is indeed formula_21, it follows by induction on the number of edges that for all "G", the polynomial formula_6 coincides with the number of "k"-colorings at every integer point "x" = "k". In particular, the chromatic polynomial is the unique interpolating polynomial of degree at most "n" through the points
formula_22
Tutte’s curiosity about which other graph invariants satisfied such recurrences led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial formula_23.
Properties.
For fixed "G" on "n" vertices, the chromatic polynomial formula_25 is a monic polynomial of degree exactly "n", with integer coefficients.
The chromatic polynomial includes at least as much information about the colorability of "G" as does the chromatic number. Indeed, the chromatic number is the smallest positive integer that is not a zero of the chromatic polynomial,
formula_26
The polynomial evaluated at formula_27, that is formula_28, yields formula_29 times the number of acyclic orientations of "G".
The derivative evaluated at 1, formula_30 equals the chromatic invariant formula_31 up to sign.
If "G" has "n" vertices and "c" components formula_32, then
We prove this via induction on the number of edges of a simple graph "G" with formula_37 vertices and formula_38 edges. When formula_39, "G" is an empty graph, hence by definition formula_40. So the coefficient of formula_35 is formula_41, which implies the statement is true for an empty graph. When formula_42, that is, "G" has exactly one edge, formula_43. Thus the coefficient of formula_35 is formula_44, so the statement holds for "k" = 1. Using strong induction, assume the statement is true for formula_45. Let "G" have formula_38 edges. By the contraction-deletion principle, formula_46 Let formula_47 and formula_48 <br>Hence formula_49.<br>Since formula_50 is obtained from "G" by removal of just one edge "e", formula_51, so formula_52 and thus the statement is true for "k".
If formula_32 are the components of "G", then the chromatic polynomial factors as formula_55. This property is generalized by the fact that if "G" is a "k"-clique-sum of formula_56 and formula_57 (i.e., a graph obtained by gluing the two at a clique on "k" vertices), then
formula_58
A graph "G" with "n" vertices is a tree if and only if
formula_59
Chromatic equivalence.
Two graphs are said to be "chromatically equivalent" if they have the same chromatic polynomial. Isomorphic graphs have the same chromatic polynomial, but non-isomorphic graphs can be chromatically equivalent. For example, all trees on "n" vertices have the same chromatic polynomial.
In particular, formula_60 is the chromatic polynomial of both the claw graph and the path graph on 4 vertices.
A graph is "chromatically unique" if it is determined by its chromatic polynomial, up to isomorphism. In other words, "G" is chromatically unique, then formula_61 would imply that "G" and "H" are isomorphic.
All cycle graphs are chromatically unique.
Chromatic roots.
A root (or "zero") of a chromatic polynomial, called a “chromatic root”, is a value "x" where formula_62. Chromatic roots have been very well studied, in fact, Birkhoff’s original motivation for defining the chromatic polynomial was to show that for planar graphs, formula_63 for "x" ≥ 4. This would have established the four color theorem.
No graph can be 0-colored, so 0 is always a chromatic root. Only edgeless graphs can be 1-colored, so 1 is a chromatic root of every graph with at least one edge. On the other hand, except for these two points, no graph can have a chromatic root at a real number smaller than or equal to 32/27. A result of Tutte connects the golden ratio formula_64 with the study of chromatic roots, showing that chromatic roots exist very close to formula_65:
If formula_66 is a planar triangulation of a sphere with "n" vertices, then
formula_67
While the real line thus has large parts that contain no chromatic roots for any graph, every point in the complex plane is arbitrarily close to a chromatic root in the sense that there exists an infinite family of graphs whose chromatic roots are dense in the complex plane.
Colorings using all colors.
For a graph "G" on "n" vertices, let formula_68 denote the number of colorings using exactly "k" colors "up to renaming colors" (so colorings that can be obtained from one another by permuting colors are counted as one; colorings obtained by automorphisms of "G" are still counted separately).
In other words, formula_68 counts the number of partitions of the vertex set into "k" (non-empty) independent sets.
Then formula_69 counts the number of colorings using exactly "k" colors (with distinguishable colors).
For an integer "x", all "x"-colorings of "G" can be uniquely obtained by choosing an integer "k ≤ x", choosing "k" colors to be used out of "x" available, and a coloring using exactly those "k" (distinguishable) colors.
Therefore:
formula_70
where formula_71 denotes the falling factorial.
Thus the numbers formula_68 are the coefficients of the polynomial formula_6 in the basis formula_72 of falling factorials.
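To illustrate, consider again the path graph formula_7: its vertex set admits no partition into a single independent set (the two edges prevent this), exactly one partition into two independent sets (the middle vertex, and the two endpoints together), and one partition into three singletons. Hence formula_68 takes the values 0, 1 and 1 for "k" = 1, 2, 3, and the expansion in falling factorials gives x(x-1) + x(x-1)(x-2) = x(x-1)^2, in agreement with the chromatic polynomial computed above.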
Let formula_73 be the "k"-th coefficient of formula_6 in the standard basis formula_74, that is:
formula_75
Stirling numbers give a change of basis between the standard basis and the basis of falling factorials.
This implies:
formula_76 and formula_77
Categorification.
The chromatic polynomial is categorified by a homology theory closely related to Khovanov homology.
Algorithms.
Computational problems associated with the chromatic polynomial include finding the chromatic polynomial formula_25 of a given graph "G", and evaluating formula_25 at a fixed point "x" for a given graph "G".
The first problem is more general because if we knew the coefficients of formula_25 we could evaluate it at any point in polynomial time because the degree is "n". The difficulty of the second type of problem depends strongly on the value of "x" and has been intensively studied in computational complexity. When "x" is a natural number, this problem is normally viewed as computing the number of "x"-colorings of a given graph. For example, this includes the problem #3-coloring of counting the number of 3-colorings, a canonical problem in the study of complexity of counting, complete for the counting class #P.
Efficient algorithms.
For some basic graph classes, closed formulas for the chromatic polynomial are known; for instance, this is true for trees (whose chromatic polynomial is given above) and for complete graphs.
Polynomial time algorithms are known for computing the chromatic polynomial for wider classes of graphs, including chordal graphs and graphs of bounded clique-width. The latter class includes cographs and graphs of bounded tree-width, such as outerplanar graphs.
Deletion–contraction.
The deletion-contraction recurrence gives a way of computing the chromatic polynomial, called the "deletion–contraction algorithm". In the first form (with a minus), the recurrence terminates in a collection of empty graphs. In the second form (with a plus), it terminates in a collection of complete graphs.
This forms the basis of many algorithms for graph coloring. The ChromaticPolynomial function in the Combinatorica package of the computer algebra system Mathematica uses the second recurrence if the graph is dense, and the first recurrence if the graph is sparse. The worst case running time of either formula satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case, the algorithm runs in time within a polynomial factor of
formula_80
on a graph with "n" vertices and "m" edges. The analysis can be improved to within a polynomial factor of the number formula_81 of spanning trees of the input graph. In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls, the running time depends on the heuristic used to pick the vertex pair.
Cube method.
There is a natural geometric perspective on graph colorings by observing that, as an assignment of natural numbers to each vertex, a graph coloring is a vector in the integer lattice.
Since two vertices formula_82 and formula_83 being given the same color is equivalent to the formula_82’th and formula_83’th coordinate in the coloring vector being equal, each edge can be associated with a hyperplane of the form formula_84. The collection of such hyperplanes for a given graph is called its graphic arrangement. The proper colorings of a graph are those lattice points which avoid forbidden hyperplanes.
Restricting to a set of formula_38 colors, the lattice points are contained in the cube formula_85. In this context the chromatic polynomial counts the number of lattice points in the formula_86-cube that avoid the graphic arrangement.
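In this picture, evaluating the chromatic polynomial at an integer "k" amounts to counting lattice points by brute force, as in the following Python sketch (illustrative only; the enumeration is exponential in the number of vertices):

```python
from itertools import product

def lattice_point_count(n, edges, k):
    """Count points of {1,...,k}^n that avoid every hyperplane x_i = x_j with ij an edge."""
    return sum(
        all(x[i] != x[j] for i, j in edges)
        for x in product(range(1, k + 1), repeat=n)
    )

# Path on 3 vertices (vertices 0, 1, 2; edges 01 and 12): k*(k-1)^2 = 12 for k = 3.
print(lattice_point_count(3, [(0, 1), (1, 2)], 3))
```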
Computational complexity.
The problem of computing the number of 3-colorings of a given graph is a canonical example of a #P-complete problem, so the problem of computing the coefficients of the chromatic polynomial is #P-hard. Similarly, evaluating formula_87 for given "G" is #P-complete. On the other hand, for formula_78 it is easy to compute formula_0, so the corresponding problems are polynomial-time computable. For integers formula_88 the problem is #P-hard, which is established similarly to the case formula_79. In fact, it is known that formula_25 is #P-hard for all "x" (including negative integers and even all complex numbers) except for the three “easy points”. Thus, from the perspective of #P-hardness, the complexity of computing the chromatic polynomial is completely understood.
In the expansion
formula_89
the coefficient formula_90 is always equal to 1, and several other properties of the coefficients are known. This raises the question of whether some of the coefficients are easy to compute. However, the computational problem of computing "a_r" for a fixed "r" ≥ 1 and a given graph "G" is #P-hard, even for bipartite planar graphs.
No approximation algorithms for computing formula_25 are known for any "x" except for the three easy points. At the integer points formula_91, the corresponding decision problem of deciding if a given graph can be "k"-colored is NP-hard. Such problems cannot be approximated to any multiplicative factor by a bounded-error probabilistic algorithm unless NP = RP, because any multiplicative approximation would distinguish the values 0 and 1, effectively solving the decision version in bounded-error probabilistic polynomial time. In particular, under the same assumption, this rules out the possibility of a fully polynomial time randomised approximation scheme (FPRAS). There is no FPRAS for computing formula_25 for any "x" > 2, unless NP = RP holds.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "P(G, k)"
},
{
"math_id": 1,
"text": "P(G, 4)>0"
},
{
"math_id": 2,
"text": "P(G,k)"
},
{
"math_id": 3,
"text": "P_G(k)"
},
{
"math_id": 4,
"text": "\\chi_G(k)"
},
{
"math_id": 5,
"text": "\\pi_G(k)"
},
{
"math_id": 6,
"text": "P(G,x)"
},
{
"math_id": 7,
"text": "P_3"
},
{
"math_id": 8,
"text": "k - 1"
},
{
"math_id": 9,
"text": "P(P_3,k) = k \\cdot (k-1) \\cdot (k-1)"
},
{
"math_id": 10,
"text": "P(P_3,x)=x(x-1)^2=x^3-2x^2+x"
},
{
"math_id": 11,
"text": "u"
},
{
"math_id": 12,
"text": "v"
},
{
"math_id": 13,
"text": "G/uv"
},
{
"math_id": 14,
"text": "G-uv"
},
{
"math_id": 15,
"text": "uv"
},
{
"math_id": 16,
"text": "P(G,k)=P(G-uv, k)- P(G/uv,k)"
},
{
"math_id": 17,
"text": "G+uv"
},
{
"math_id": 18,
"text": "P(G,k)= P(G+uv, k) + P(G/uv,k)"
},
{
"math_id": 19,
"text": "P(G,x)=x^n"
},
{
"math_id": 20,
"text": "P(G,x)=P(G-uv, x)- P(G/uv,x)"
},
{
"math_id": 21,
"text": "k^n"
},
{
"math_id": 22,
"text": "\\left \\{ (0, P(G, 0)), (1, P(G, 1)), \\ldots, (n, P(G, n)) \\right \\}."
},
{
"math_id": 23,
"text": "T_G(x,y)"
},
{
"math_id": 24,
"text": "x^n"
},
{
"math_id": 25,
"text": "P(G, x)"
},
{
"math_id": 26,
"text": "\\chi (G)=\\min\\{ k\\in\\mathbb{N} : P(G, k) > 0 \\}."
},
{
"math_id": 27,
"text": "-1"
},
{
"math_id": 28,
"text": "P(G,-1)"
},
{
"math_id": 29,
"text": "(-1)^{|V(G)|}"
},
{
"math_id": 30,
"text": "P'(G, 1)"
},
{
"math_id": 31,
"text": "\\theta(G)"
},
{
"math_id": 32,
"text": "G_1, \\ldots, G_c"
},
{
"math_id": 33,
"text": " x^0, \\ldots, x^{c-1}"
},
{
"math_id": 34,
"text": " x^c, \\ldots, x^n"
},
{
"math_id": 35,
"text": "x^{n-1}"
},
{
"math_id": 36,
"text": "-|E(G)|. "
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "k"
},
{
"math_id": 39,
"text": "k = 0"
},
{
"math_id": 40,
"text": "P(G, x)= x^n"
},
{
"math_id": 41,
"text": "0"
},
{
"math_id": 42,
"text": "k = 1"
},
{
"math_id": 43,
"text": "P(G, x) = x^n - x^{n-1}"
},
{
"math_id": 44,
"text": "-1 = -|E(G)|"
},
{
"math_id": 45,
"text": "k = 0,1,2,\\ldots,(k-1)"
},
{
"math_id": 46,
"text": " P(G, x) = P(G-e, x) - P(G/e, x),"
},
{
"math_id": 47,
"text": "P(G-e, x) = x^n - a_{n-1}x^{n-1} + a_{n-2}x^{n-2}-\\cdots,"
},
{
"math_id": 48,
"text": "P(G/e, x) = x^{n-1} - b_{n-2} x^{n-2} + b_{n-3}x^{n-3}-\\cdots."
},
{
"math_id": 49,
"text": "P(G, x) = x^n - (a_{n-1} +1)x^{n-1} +\\cdots"
},
{
"math_id": 50,
"text": "G-e"
},
{
"math_id": 51,
"text": "a_{n-1} = k - 1"
},
{
"math_id": 52,
"text": "a_{n-1} + 1 = k"
},
{
"math_id": 53,
"text": "x^1"
},
{
"math_id": 54,
"text": "(-1)^{n-1}"
},
{
"math_id": 55,
"text": "\\scriptstyle P(G, x) = P(G_1, x)P(G_2,x) \\cdots P(G_c,x)"
},
{
"math_id": 56,
"text": "G_1"
},
{
"math_id": 57,
"text": "G_2"
},
{
"math_id": 58,
"text": "P(G, x) = \\frac{P(G_1,x)P(G_2,x)}{x(x-1)\\cdots(x-k+1)}."
},
{
"math_id": 59,
"text": "P(G, x) = x(x-1)^{n-1}."
},
{
"math_id": 60,
"text": "(x-1)^3x"
},
{
"math_id": 61,
"text": "P(G, x) = P(H, x)"
},
{
"math_id": 62,
"text": "P(G, x)=0"
},
{
"math_id": 63,
"text": "P(G, x)>0"
},
{
"math_id": 64,
"text": "\\varphi"
},
{
"math_id": 65,
"text": "\\varphi^2"
},
{
"math_id": 66,
"text": "G_n"
},
{
"math_id": 67,
"text": "P(G_n,\\varphi^2) \\leq \\varphi^{5-n}."
},
{
"math_id": 68,
"text": "e_k"
},
{
"math_id": 69,
"text": "k! \\cdot e_k"
},
{
"math_id": 70,
"text": "P(G,x) = \\sum_{k=0}^x \\binom{x}{k} k! \\cdot e_k = \\sum_{k=0}^x (x)_k \\cdot e_k,"
},
{
"math_id": 71,
"text": "(x)_k = x(x-1)(x-2)\\cdots(x-k+1)"
},
{
"math_id": 72,
"text": "1,(x)_1,(x)_2,(x)_3,\\ldots"
},
{
"math_id": 73,
"text": "a_k"
},
{
"math_id": 74,
"text": "1,x,x^2,x^3,\\ldots"
},
{
"math_id": 75,
"text": "P(G,x) = \\sum_{k=0}^n a_k x^k"
},
{
"math_id": 76,
"text": "a_k = \\sum_{j=0}^n (-1)^{j-k} \\begin{bmatrix}j\\\\k\\end{bmatrix} e_j"
},
{
"math_id": 77,
"text": "e_k = \\sum_{j=0}^n \\begin{Bmatrix}j\\\\k\\end{Bmatrix} a_j. "
},
{
"math_id": 78,
"text": "k=0,1,2"
},
{
"math_id": 79,
"text": "k=3"
},
{
"math_id": 80,
"text": "\\varphi^{n+m}=\\left (\\frac{1+\\sqrt{5}}{2} \\right)^{n+m}\\in O\\left(1.62^{n+m}\\right),"
},
{
"math_id": 81,
"text": "t(G)"
},
{
"math_id": 82,
"text": "i"
},
{
"math_id": 83,
"text": "j"
},
{
"math_id": 84,
"text": "\\{x\\in\\mathbb R^d:x_i=x_j\\}"
},
{
"math_id": 85,
"text": "[0,k]^n"
},
{
"math_id": 86,
"text": "[0,k]"
},
{
"math_id": 87,
"text": "P(G, 3)"
},
{
"math_id": 88,
"text": "k>3"
},
{
"math_id": 89,
"text": "P(G, x)= a_1 x + a_2x^2+\\cdots +a_nx^n,"
},
{
"math_id": 90,
"text": "a_n"
},
{
"math_id": 91,
"text": "k=3,4,\\ldots"
}
] |
https://en.wikipedia.org/wiki?curid=1460126
|
14601271
|
Autoacceleration
|
Autoacceleration (gel effect, Trommsdorff–Norrish effect) is a dangerous reaction behavior that can occur in free-radical polymerization systems. It is due to localized increases in the viscosity of the polymerizing system that slow termination reactions. Because the termination reactions no longer act as a brake on chain growth, the overall rate of reaction increases rapidly, leading to possible reaction runaway and altering the characteristics of the polymers produced.
Background.
Autoacceleration of the overall rate of a free-radical polymerization system has been noted in many bulk polymerization systems. The polymerization of methyl methacrylate, for example, deviates strongly from classical mechanism behavior around 20% conversion; in this region the conversion and molecular mass of the polymer produced increase rapidly. This increase in the rate of polymerization is usually accompanied by a large rise in temperature if heat dissipation is not adequate. Without proper precautions, autoacceleration of polymerization systems could cause metallurgic failure of the reaction vessel or, worse, explosion.
To avoid the occurrence of thermal runaway due to autoacceleration, suspension polymerization techniques are employed to make polymers such as polystyrene. The droplets dispersed in the water are small reaction vessels, but the heat capacity of the water lowers the temperature rise, thus moderating the reaction.
Causes.
Norrish and Smith, Trommsdorff, and later, Schultz and Harborth, concluded that autoacceleration must be caused by a totally different polymerization mechanism. They rationalized through experiment that a decrease in the termination rate was the basis of the phenomenon. This decrease in termination rate, "kt", is caused by the raised viscosity of the polymerization region when the concentration of previously formed polymer molecules increases. Before autoacceleration, chain termination by combination of two free-radical chains is a very rapid reaction that occurs at very high frequency (about one in 10^4 collisions). However, when the growing polymer molecules – with active free-radical ends – are surrounded in the highly viscous mixture consisting of a growing concentration of "dead" polymer, the rate of termination becomes limited by diffusion. The Brownian motion of the larger molecules in the polymer "soup" is restricted, therefore limiting the frequency of their effective (termination) collisions.
Results.
With termination collisions restricted, the concentration of active polymerizing chains and simultaneously the consumption of monomer rises rapidly. Assuming abundant unreacted monomer, viscosity changes affect the macromolecules but do not prove high enough to prevent smaller molecules – such as the monomer – from moving relatively freely. Therefore, the propagation reaction of the free-radical polymerization process is relatively insensitive to changes in viscosity. This also implies that at the onset of autoacceleration the overall rate of reaction increases relative to the rate of unautoaccelerated reaction given by the overall rate of reaction equation for free-radical polymerization:
formula_0
Approximately, as the termination decreases by a factor of 4, the overall rate of reaction will double. The decrease of termination reactions also allows radical chains to add monomer for longer time periods, raising the mass-average molecular mass dramatically. However, the number-average molecular mass only increases slightly, leading to broadening of the molecular mass distribution (high dispersity, very polydispersed product).
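The square-root dependence on the termination rate constant can be illustrated numerically; in the following Python sketch the rate constants and concentrations are arbitrary placeholder values, chosen only to show the scaling:

```python
import math

def overall_rate(kp, kd, kt, f, M, I):
    """Overall free-radical polymerization rate R_p = kp*[M]*sqrt(f*kd*[I]/kt)."""
    return kp * M * math.sqrt(f * kd * I / kt)

# Placeholder values chosen only to illustrate the scaling with k_t.
base = overall_rate(kp=1.0e3, kd=1.0e-5, kt=1.0e7, f=0.5, M=5.0, I=1.0e-2)
slow_termination = overall_rate(kp=1.0e3, kd=1.0e-5, kt=1.0e7 / 4, f=0.5, M=5.0, I=1.0e-2)
print(slow_termination / base)  # 2.0: a fourfold drop in k_t doubles R_p
```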
|
[
{
"math_id": 0,
"text": "R_p = k_p \\cdot [M] \\left( \\frac{f \\cdot k_d \\cdot [I]}{k_t} \\right)^{1/2}."
}
] |
https://en.wikipedia.org/wiki?curid=14601271
|
14601332
|
Social value orientations
|
In social psychology, social value orientation (SVO) is a person's preference about how to allocate resources (e.g. money) between the self and another person. SVO corresponds to how much weight a person attaches to the welfare of others in relation to their own. Since people are assumed to vary in the weight they attach to other people's outcomes in relation to their own, SVO is an individual difference variable. The general concept underlying SVO has become widely studied in a variety of different scientific disciplines, such as economics, sociology, and biology under a multitude of different names (e.g. "social preferences", "other-regarding preferences", "welfare tradeoff ratios", "social motives", etc.).
Historical background.
The SVO construct has its history in the study of interdependent decision making, i.e. strategic interactions between two or more people. The advent of Game theory in the 1940s provided a formal language for describing and analyzing situations of interdependence based on utility theory. As a simplifying assumption for analyzing strategic interactions, it was generally presumed that people only consider their own outcomes when making decisions in interdependent situations, rather than taking into account the interaction partners' outcomes as well. However, the study of human behavior in social dilemma situations, such as the prisoner's dilemma, revealed that some people do in fact appear to have concerns for others.
In the Prisoner's dilemma, participants are asked to take the role of two criminals. In this situation, they are to pretend that they are a pair of criminals being interrogated by detectives in separate rooms. Both participants are being offered a deal and have two options. That is, the participant may remain silent or confess and implicate his or her partner. If both participants choose to remain silent, they will each receive only a minor sentence. If both participants confess they will receive a moderate sentence. Conversely, if one participant remains silent while the other confesses, the person who confesses will receive a minimal sentence while the person who remained silent (and was implicated by their partner) will receive a maximum sentence. Thus, participants have to make the decision to cooperate with or compete with their partner.
When used in the lab, the dynamics of this situation are simulated as participants play for points or for money. Participants are given one of two choices, labeled option C or D. Option C would be the cooperative choice and if both participants choose to cooperate then they will both earn points or money. On the other hand, Option D is the competitive choice. If just one participant chooses option D, that participant will earn points or money while the other player will lose money. However, if both participants pick D, then both of them will lose money. In addition to displaying participants' social value orientations, it also displays the dynamics of a mixed-motives situation.
From behavior in strategic situations it is not possible, though, to infer people's motives, i.e. the joint outcome they would choose if they alone could determine it. The reason is that behavior in a strategic situation is always a function of both people's preferences about joint outcomes "and" their beliefs about the intentions and behavior of their interaction partners.
In an attempt to assess people's preferences over joint outcomes alone, disentangled from their beliefs about the other person's behavior, David M. Messick and Charles G. McClintock in 1968 devised what has become known as the "decomposed game technique". Basically, any task where one decision maker can alone determine which one out of at least two own-other resource allocation options will be realized is a "decomposed game" (also often referred to as dictator game, especially in economics, where it is often implemented as a constant-sum situation).
By observing which own-other resource allocation a person chooses in a "decomposed game", it is possible to infer that person's preferences over own-other resource allocations, i.e. "social value orientation". Since there is no other person making a decision that affects the joint outcome, there is no interdependence, and therefore a potential effect of beliefs on behavior is ruled out.
To give an example, consider two options, A and B. If you choose option A, you will receive $100, and another (unknown) person will receive $10. If you choose option B, you will receive $85, and the other (unknown) person will also receive $85. This is a "decomposed game". If a person chooses option B, we can infer that this person does not only consider the outcome for the self when making a decision, but also takes into account the outcome for the other.
Conceptualization.
When people seek to maximize their gains, they are said to be proself. But when people are also concerned with others' gains and losses, they are said to be prosocial. There are four categories within SVO, of which individualistic and competitive SVOs are proself while cooperative and altruistic SVOs are prosocial: individualists aim at maximizing their own outcome regardless of the other's, competitors aim at maximizing their relative advantage over the other, cooperators aim at maximizing joint outcomes, and altruists aim at maximizing the other person's outcome.
However, in 1973 Griesinger and Livingston provided a geometric framework of SVO (the "SVO ring", see "Figure 1") with which they could show that SVO is in principle not a categorical, but a continuous construct that allows for an infinite number of social value orientations.
The basic idea was to represent outcomes for the self (on the "x-axis") and for the other (on the "y-axis") on a Cartesian plane, and represent own-other payoff allocation options as coordinates on a circle centered at the origin of the plane. If a person chooses a particular own-other outcome allocation on the ring, that person's SVO can be represented by the angle of the line starting at the origin of the Cartesian plane and intersecting the coordinates of the respective chosen own-other outcome allocation.
If, for instance, a person would choose the option on the circle that maximizes the own outcome, this would refer to an "SVO angle" of formula_0, indicating a perfectly individualistic SVO. An angle of formula_1 would indicate a perfectly cooperative (maximizing joint outcomes) SVO, while an angle of formula_2 would indicate a perfectly competitive (maximizing relative gain) SVO. This conceptualization indicates that SVO is a continuous construct, since there is an infinite number of possible SVOs, because angular degrees are continuous.
This advancement in the conceptualization of the SVO construct also clarified that SVO as originally conceptualized can be represented in terms of a utility function of the following form
formula_3,
where formula_4 is the outcome for the self, formula_5 is the outcome for the other, and the parameters indicate the weight a person attaches to their own outcome (formula_6) and to the outcome for the other (formula_7).
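A minimal Python sketch of the ring framework and this utility function (the ring radius, allocation points and weights below are illustrative choices, not items from a published instrument):

```python
import math

def svo_angle(own, other):
    """Angle (in degrees) of an own-other allocation, measured from the 'own' axis."""
    return math.degrees(math.atan2(other, own))

def utility(own, other, a, b):
    """Linear SVO utility U = a*own + b*other, as in the text's utility function."""
    return a * own + b * other

# Points on a ring of radius 100 (illustrative allocations):
print(svo_angle(100, 0))                                   # 0.0    -> perfectly individualistic
print(svo_angle(100 / math.sqrt(2), 100 / math.sqrt(2)))   # ~45.0  -> perfectly cooperative
print(svo_angle(100 / math.sqrt(2), -100 / math.sqrt(2)))  # ~-45.0 -> perfectly competitive

# The decomposed game from the text: A = ($100 self, $10 other), B = ($85, $85).
print(utility(100, 10, a=1, b=0), utility(85, 85, a=1, b=0))  # 100 85  -> a pure individualist picks A
print(utility(100, 10, a=1, b=1), utility(85, 85, a=1, b=1))  # 110 170 -> a prosocial decision maker picks B
```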
Measurement.
Several different measurement methods exist for assessing SVO. The basis for any of these measures is the "decomposed game technique", i.e. a set of non-constant-sum dictator games. The most commonly used SVO measures are the following.
Ring Measure.
The Ring measure was devised by Wim B. G. Liebrand in 1984 and is based on the geometric SVO framework proposed by Griesinger and Livingston in 1973. In the Ring measure, subjects are asked to choose between 24 pairs of options that allocate money to the subject and the "other". The 24 pairs of outcomes correspond to equally spaced adjacent own-other-payoff allocations on an "SVO ring", i.e. a circle with a certain radius centered at the origin of the Cartesian plane. The vertical axis (y) measures the number of points or amount of money allocated to the other and the horizontal axis (x) measures the amount allocated to the self. Each pair of outcomes corresponds to two adjacent points on the circle. Adding up a subject's 24 choices yields a "motivational vector" with a certain length and angle. The length of the vector indicates the consistency of a subject's choice behavior, while the angle indicates that subject's SVO. Subjects are then categorized into one out of eight SVO categories according to their SVO angle, given a sufficiently consistent choice pattern. This measure allows for the detection of uncommon pathological SVOs, such as masochism, sadomasochism, or martyrdom, which would indicate that a subject attaches a negative weight (formula_8) to the outcome for the self given the utility function described above.
Triple-Dominance Measure.
The triple-dominance measure is directly based on the use of "decomposed games" as suggested by Messick and McClintock (1968). Concretely, the triple-dominance measure consists of nine items, each of which asks a subject to choose one out of three own-other-outcome allocations. The three options do have the same characteristics in each of the items. One option maximizes the outcome for the self, a second option maximizes the sum of the outcomes for the self and the other (joint outcome), and the third option maximizes the relative gain (i.e. the difference between the outcome for the self and the outcome for the other). If a subject chooses an option indicating a particular SVO in at least six out of the nine items, the subject is categorized accordingly. That is, a subject is categorized as "cooperative/prosocial", "individualistic", or "competitive".
Slider Measure.
The Slider measure assesses SVO on a continuous scale, rather than categorizing subjects into nominal motivational groups. The instrument consists of 6 primary and 9 secondary items. In each item of the paper-based version of the Slider measure, a subject has to indicate her most preferred own-other outcome allocation out of nine options. From a subject's choices in the primary items, the "SVO angle" can be computed. There is also an online version of the Slider measure, where subjects can "slide" along a continuum of own-other payoff allocations in the items, allowing for a very precise assessment of a person's SVO. The secondary items can be used for differentiating between the motivations to maximize the joint outcome and to minimize the difference in outcomes (inequality aversion) among prosocial subjects. The SVO Slider Measure has been shown to be more reliable than previously used measures, and yields SVO scores on a continuous scale.
Neuroscience and Social Value Orientation.
Some recent papers have explored whether social value orientation is somehow reflected in human brain activity. The first functional magnetic resonance imaging study of social value orientation revealed that the response of the amygdala to economic inequity (i.e., the absolute value of the reward difference between self and other) is correlated with the degree of prosocial orientation. A functional magnetic resonance imaging study found that responses of the medial prefrontal cortex, an area that is typically associated with social cognition, mirrored preferences over competitive, individualistic and cooperative allocations. Similar findings in this or neighboring areas (ventromedial and dorsomedial prefrontal cortex) have been reported elsewhere.
Stylized facts.
SVO has been shown to be predictive of important behavioral variables, such as cooperative behavior in social dilemmas.
Furthermore, it has been shown that individualism is prevalent among very young children, and that the frequency of expressions of prosocial and competitive SVOs increases with age. Among adults, it has been shown repeatedly that prosocial SVOs are most frequently observed (up to 60 percent), followed by individualistic SVOs (about 30-40 percent), and competitive SVOs (about 5-10 percent). Evidence also suggests that SVO is first and foremost determined by socialization, and that genetic predisposition plays a minor role in SVO development.
Broader perspectives.
The SVO construct is rooted in social psychology, but has also been studied in other disciplines, such as economics. However, the general concept underlying SVO is inherently interdisciplinary, and has been studied under different names in a variety of different scientific fields; it is the concept of distributive preferences. Originally, the SVO construct as conceptualized by the "SVO ring framework" did not include preferences such as inequality aversion, which is a distributive preference heavily studied in experimental economics. This particular motivation can also not be assessed with commonly used measures of SVO, except with the "SVO Slider Measure". The original SVO concept can be extended, though, by representing people's distributive preferences in terms of utility functions, as is standard in economics. For instance, a representation of SVO that includes the expression of a motivation to minimize differences between outcomes could be formalized as follows.
formula_9.
Several utility functions as representations of people's concerns for the welfare of others have been devised and used in economics (for a very prominent example, see Fehr & Schmidt, 1999). It is a challenge for future interdisciplinary research to combine the findings from different scientific disciplines and arrive at a unifying theory of SVO. Representing SVO in terms of a utility function and going beyond the construct's original conceptualization may facilitate the achievement of this ambitious goal.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "0^{\\circ}"
},
{
"math_id": 1,
"text": "45^{\\circ}"
},
{
"math_id": 2,
"text": "-45^{\\circ}"
},
{
"math_id": 3,
"text": " U_{(\\pi_{s}, \\pi_{o})} = a*\\pi_{s} + b*\\pi_{o}"
},
{
"math_id": 4,
"text": "\\pi_{s}"
},
{
"math_id": 5,
"text": "\\pi_{o}"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": " a "
},
{
"math_id": 9,
"text": " U_{(\\pi_{s}, \\pi_{o})} = a*\\pi_{s} + b*\\pi_{o} - c*|\\pi_{s} - \\pi_{o}| "
}
] |
https://en.wikipedia.org/wiki?curid=14601332
|
14601469
|
Gradient copolymer
|
Copolymer in which the transition between different types of monomer is gradual
In polymer chemistry, gradient copolymers are copolymers in which the change in monomer composition is gradual from predominantly one species to predominantly the other, unlike with block copolymers, which have an abrupt change in composition, and random copolymers, which have no continuous change in composition (see Figure 1).
In the gradient copolymer, as a result of the gradual compositional change along the length of the polymer chain, less intrachain and interchain repulsion is observed.
The development of controlled radical polymerization as a synthetic methodology in the 1990s allowed for increased study of the concepts and properties of gradient copolymers because the synthesis of this group of novel polymers was now straightforward.
Because gradient copolymers have properties similar to those of block copolymers, they have been considered a cost-effective alternative to other preexisting copolymers in various applications.
Polymer Composition.
In the gradient copolymer, there is a continuous change in monomer composition along the polymer chain (see Figure 2). This change in composition can be expressed mathematically. The local composition gradient fraction formula_0 is described by the molar fraction of monomer 1 in the copolymer, formula_1, and the degree of polymerization, formula_2, as follows:
formula_3
The above equation assumes that the local monomer composition varies continuously along the chain. To compensate for this assumption, a second equation giving an ensemble average is used:
formula_4
Here formula_5 refers to the ensemble average of the local chain composition, formula_6 refers to the degree of polymerization, formula_7 refers to the number of polymer chains in the sample, and formula_8 refers to the composition of polymer chain "i" at position formula_6.
This second equation identifies the average composition over all present polymer chains at a given position, formula_6.
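These two expressions can be illustrated numerically. In the following Python sketch the per-chain composition profiles are made-up illustrative values, not experimental data; the ensemble average is taken over the chains and the local gradient is approximated by a finite difference:

```python
import numpy as np

# Composition of monomer 1 for a few hypothetical chains
# (rows = chains, columns = position X along the chain).
F1_chains = np.array([
    [0.95, 0.90, 0.80, 0.65, 0.45, 0.25, 0.10],
    [0.92, 0.88, 0.78, 0.62, 0.48, 0.28, 0.12],
    [0.97, 0.91, 0.82, 0.66, 0.44, 0.24, 0.09],
])

# Ensemble-average local composition: mean over the N chains at each position X.
F1_local = F1_chains.mean(axis=0)

# Local gradient g(X) = dF1/dX, approximated by a finite difference.
g = np.gradient(F1_local)

print(F1_local)
print(g)
```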
Synthesis.
Prior to the development of controlled radical polymerization (CRP), gradient copolymers (as distinguished from statistical copolymers) were not synthetically possible. While a "gradient" can be achieved through compositional drift due to a difference in reactivity of the two monomers, this drift will not encompass the entire possible compositional range. All of the common CRP methods including atom transfer radical polymerization and Reversible addition−fragmentation chain transfer polymerization as well as other living polymerization techniques including anionic addition polymerization and ring-opening polymerization have been used to synthesize gradient copolymers.
The gradient can be formed through either a spontaneous or a forced gradient. Spontaneous gradient polymerization is due to a difference in reactivity of the monomers. The resulting change in composition throughout the polymerization creates an inconsistent gradient along the polymer. Forced gradient polymerization involves varying the comonomer composition of the feed throughout the reaction time. Because the rate of addition of the second monomer influences the polymerization and therefore the properties of the formed polymer, continuous information about the polymer composition is vital. The online compositional information is often gathered through automatic continuous online monitoring of polymerization reactions, a process which provides "in situ" information allowing for constant composition adjustment to achieve the desired gradient composition.
Properties.
The wide range of composition possible in a gradient polymer due to the variety of monomers incorporated and the change of the composition results in a large variety of properties. In general, the glass transition temperature (Tg) is broad in comparison with the homopolymers. Micelles of the gradient copolymer can form when the gradient copolymer concentration is too high in a block copolymer solution. As the micelles form, the micelle diameter actually shrinks creating a "reel in" effect. The general structure of these copolymers in solution is not yet well established.
The composition can be determined by gel permeation chromatography (GPC) and nuclear magnetic resonance (NMR). Generally the composition has a narrow polydispersity index (PDI) and the molecular weight increases with time as the polymer forms.
Applications.
Compatibilizing phase-separated polymer blends.
For the compatibilization of immiscible blends, the gradient copolymer can be used to improve the mechanical and optical properties of immiscible polymer blends and to decrease the droplet size of the dispersed phase. The compatibilization has been tested by reduction in interfacial tension and steric hindrance against coalescence. This application is not available for block and graft copolymers because of their very low critical micelle concentration (cmc). However, the gradient copolymer, which has a higher cmc and exhibits a broader interfacial coverage, can be applied as an effective blend compatibilizer.
A small amount of gradient copolymer (e.g. styrene/4-hydroxystyrene) is added to a polymer blend (e.g. polystyrene/polycaprolactone) during melt processing. The resulting interfacial copolymer helps to stabilize the dispersed phase due to the hydrogen-bonding effects of hydroxystyrene with the polycaprolactone ester group.
Impact modifiers and sound or vibration dampers.
Gradient copolymers have a very broad glass transition temperature (Tg) in comparison with other copolymers, at least four times broader than that of a random copolymer. This broad glass transition is one of the important features for vibration and acoustic damping applications. The broad Tg gives the material a wide range of mechanical properties. The glass transition breadth can be adjusted by selection of monomers with different degrees of reactivity in their controlled radical polymerization (CRP). The strongly segregated styrene/4-hydroxystyrene (S/HS) gradient copolymer has been used to study damping properties due to its unusually broad glass transition breadth.
Potential applications.
There are many possible applications for gradient copolymers, such as pressure-sensitive adhesives, wetting agents, coatings, or dispersants. However, the practical performance and stability of gradient copolymers in these applications have not yet been demonstrated.
|
[
{
"math_id": 0,
"text": "g(X)"
},
{
"math_id": 1,
"text": "(F_1)"
},
{
"math_id": 2,
"text": "(X)"
},
{
"math_id": 3,
"text": "g(X)=\\frac{dF_1(X)}{dX}"
},
{
"math_id": 4,
"text": "F^{(loc)}_1(X)=\\frac{1}{N}\\sum_{i=1}^NF_{1,i}(X)"
},
{
"math_id": 5,
"text": "F^{(loc)}_1(X)"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "N"
},
{
"math_id": 8,
"text": "F_{1,i}(X)"
}
] |
https://en.wikipedia.org/wiki?curid=14601469
|
1460172
|
Cyclic homology
|
In noncommutative geometry and related branches of mathematics, cyclic homology and cyclic cohomology are certain (co)homology theories for associative algebras which generalize the de Rham (co)homology of manifolds. These notions were independently introduced by Boris Tsygan (homology) and Alain Connes (cohomology) in the 1980s. These invariants have many interesting relationships with several older branches of mathematics, including de Rham theory, Hochschild (co)homology, group cohomology, and the K-theory. Contributors to the development of the theory include Max Karoubi, Yuri L. Daletskii, Boris Feigin, Jean-Luc Brylinski, Mariusz Wodzicki, Jean-Louis Loday, Victor Nistor, Daniel Quillen, Joachim Cuntz, Ryszard Nest, Ralf Meyer, and Michael Puschnigg.
Hints about definition.
The first definition of the cyclic homology of a ring "A" over a field of characteristic zero, denoted
"HC""n"("A") or "H""n"λ("A"),
proceeded by the means of the following explicit chain complex related to the Hochschild homology complex of "A", called the Connes complex:
For any natural number "n ≥ 0", define the operator formula_0 which generates the natural cyclic action of formula_1 on the "n"-th tensor product of "A":
formula_2
Recall that the Hochschild complex groups of "A" with coefficients in "A" itself are given by setting formula_3 for all "n ≥ 0". Then the components of the Connes complex are defined as formula_4, and the differential formula_5 is the restriction of the Hochschild differential to this quotient. One can check that the Hochschild differential does indeed factor through to this space of coinvariants.
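A small illustration in Python (ad hoc symbols, not part of the algebraic formalism): representing an elementary tensor by its tuple of factors together with a sign, the operator rotates the tuple and multiplies the sign by (-1)^("n"-1); applying it "n" times returns the original tensor, as required for an action of formula_1.

```python
def t(sign, factors):
    """Cyclic operator: a_1 (x) ... (x) a_n  ->  (-1)^(n-1) a_n (x) a_1 (x) ... (x) a_(n-1)."""
    n = len(factors)
    return sign * (-1) ** (n - 1), (factors[-1],) + factors[:-1]

x = (1, ('a1', 'a2', 'a3', 'a4'))   # the elementary tensor a1 (x) a2 (x) a3 (x) a4, with sign +1
y = x
for _ in range(4):                  # apply t four times (n = 4)
    y = t(*y)
print(y == x)                       # True: the signs cancel and the rotation returns to the start
```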
Connes later found a more categorical approach to cyclic homology using a notion of cyclic object in an abelian category, which is analogous to the notion of simplicial object. In this way, cyclic homology (and cohomology) may be interpreted as a derived functor, which can be explicitly computed by the means of the ("b", "B")-bicomplex. If the field "k" contains the rational numbers, the definition in terms of the Connes complex calculates the same homology.
One of the striking features of cyclic homology is the existence of a long exact sequence connecting
Hochschild and cyclic homology. This long exact sequence is referred to as the periodicity sequence.
Case of commutative rings.
Cyclic cohomology of the commutative algebra "A" of regular functions on an affine algebraic variety over a field "k" of characteristic zero can be computed in terms of Grothendieck's algebraic de Rham complex. In particular, if the variety "V"=Spec "A" is smooth, cyclic cohomology of "A" are expressed in terms of the de Rham cohomology of "V" as follows:
formula_6
This formula suggests a way to define de Rham cohomology for a 'noncommutative spectrum' of a noncommutative algebra "A", which was extensively developed by Connes.
Variants of cyclic homology.
One motivation of cyclic homology was the need for an approximation of K-theory that is defined, unlike K-theory, as the homology of a chain complex. Cyclic cohomology is in fact endowed with a pairing with K-theory, and one hopes this pairing to be non-degenerate.
There has been defined a number of variants whose purpose is to fit better with algebras with topology, such as Fréchet algebras, formula_7-algebras, etc. The reason is that K-theory behaves much better on topological algebras such as Banach algebras or C*-algebras than on algebras without additional structure. Since, on the other hand, cyclic homology degenerates on C*-algebras, there came up the need to define modified theories. Among them are entire cyclic homology due to Alain Connes, analytic cyclic homology due to Ralf Meyer or asymptotic and local cyclic homology due to Michael Puschnigg. The last one is very close to K-theory as it is endowed with a bivariant Chern character from KK-theory.
Applications.
One of the applications of cyclic homology is to find new proofs and generalizations of the Atiyah-Singer index theorem. Among these generalizations are index theorems based on spectral triples and deformation quantization of Poisson structures.
An elliptic operator D on a compact smooth manifold defines a class in K homology. One invariant of this class is the analytic index of the operator. This is seen as the pairing of the class [D], with the element 1 in HC(C(M)). Cyclic cohomology can be seen as a way to get higher invariants of elliptic differential operators not only for smooth manifolds, but also for foliations, orbifolds, and singular spaces that appear in noncommutative geometry.
Computations of algebraic K-theory.
The cyclotomic trace map is a map from algebraic K-theory (of a ring "A", say), to cyclic homology:
formula_8
In some situations, this map can be used to compute K-theory by means of this map. A pioneering result in this direction is a theorem of : it asserts that the map
formula_9
between the relative K-theory of "A" with respect to a "nilpotent" two-sided ideal "I" to the relative cyclic homology (measuring the difference between K-theory or cyclic homology of "A" and of "A"/"I") is an isomorphism for "n"≥1.
While Goodwillie's result holds for arbitrary rings, a quick reduction shows that it is in essence only a statement about formula_10. For rings not containing Q, cyclic homology must be replaced by topological cyclic homology in order to keep a close connection to K-theory. (If Q is contained in "A", then cyclic homology and topological cyclic homology of "A" agree.) This is in line with the fact that (classical) Hochschild homology is less well-behaved than topological Hochschild homology for rings not containing Q. proved a far-reaching generalization of Goodwillie's result, stating that for a commutative ring "A" so that the Henselian lemma holds with respect to the ideal "I", the relative K-theory is isomorphic to relative topological cyclic homology (without tensoring both with Q). Their result also encompasses a theorem of , asserting that in this situation the relative K-theory spectrum modulo an integer "n" which is invertible in "A" vanishes. used Gabber's result and Suslin rigidity to reprove Quillen's computation of the K-theory of finite fields.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " t_n "
},
{
"math_id": 1,
"text": " \\mathbb{Z}/ n \\mathbb{Z} "
},
{
"math_id": 2,
"text": "\\begin{align} \nt_n : A^{\\otimes n} \\to A^{\\otimes n}, \\quad a_1 \\otimes \\dots \\otimes a_n \\mapsto (-1)^{n-1} a_n \\otimes a_1 \\otimes \\dots \\otimes a_{n-1}.\n\\end{align}"
},
{
"math_id": 3,
"text": " HC_n(A) := A^{\\otimes n+1} "
},
{
"math_id": 4,
"text": " C^\\lambda_n(A) := HC_n(A)/ \\langle 1 - t_{n+1} \\rangle "
},
{
"math_id": 5,
"text": " d : C^\\lambda_n(A) \\to C^\\lambda_{n-1}(A)"
},
{
"math_id": 6,
"text": " HC_n(A)\\simeq \\Omega^n\\!A/d\\Omega^{n-1}\\!A\\oplus \\bigoplus_{i\\geq 1}H^{n-2i}_{\\text{dR}}(V)."
},
{
"math_id": 7,
"text": "C^*"
},
{
"math_id": 8,
"text": "tr: K_n (A) \\to HC_{n-1} (A)."
},
{
"math_id": 9,
"text": "K_n(A, I) \\otimes \\mathbf Q \\to HC_{n-1} (A, I) \\otimes \\mathbf Q "
},
{
"math_id": 10,
"text": "A \\otimes_{\\mathbf Z} \\mathbf Q"
}
] |
https://en.wikipedia.org/wiki?curid=1460172
|
14602178
|
Coherent topology
|
Topology determined by family of subspaces
In topology, a coherent topology is a topology that is uniquely determined by a family of subspaces. Loosely speaking, a topological space is coherent with a family of subspaces if it is a "topological union" of those subspaces. It is also sometimes called the weak topology generated by the family of subspaces, a notion that is quite different from the notion of a weak topology generated by a set of maps.
Definition.
Let formula_0 be a topological space and let formula_1 be a family of subsets of formula_2 each with its induced subspace topology. (Typically formula_3 will be a cover of formula_0.) Then formula_0 is said to be coherent with formula_3 (or determined by formula_3) if the topology of formula_0 is recovered as the one coming from the final topology coinduced by the inclusion maps
formula_5
By definition, this is the finest topology on (the underlying set of) formula_0 for which the inclusion maps are continuous.
formula_0 is coherent with formula_3 if either of the following two equivalent conditions holds: a subset formula_6 of formula_0 is open if and only if formula_7 is open in formula_8 for each formula_9 Equivalently, a subset formula_6 of formula_0 is closed if and only if formula_7 is closed in formula_8 for each formula_9
Given a topological space formula_0 and any family of subspaces formula_3 there is a unique topology on (the underlying set of) formula_0 that is coherent with formula_4 This topology will, in general, be finer than the given topology on formula_10
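On a finite space the open-set criterion above can be checked by brute force. The following Python sketch (the example spaces are arbitrary illustrative choices, and the helper names are ad hoc) tests whether a finite topology is coherent with a given family of subspaces:

```python
from itertools import chain, combinations

def powerset(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def subspace_topology(topology, C):
    """Topology induced on the subset C: traces of the open sets of X."""
    return {U & C for U in topology}

def is_coherent(X, topology, subspaces):
    """U is open in X  <=>  U ∩ C is open in C, for every C in the family."""
    for U in map(frozenset, powerset(X)):
        traces_open = all(U & C in subspace_topology(topology, C) for C in subspaces)
        if (U in topology) != traces_open:
            return False
    return True

# X = {1, 2, 3} with the discrete topology, covered by two subspaces: coherent.
X = frozenset({1, 2, 3})
discrete = {frozenset(s) for s in powerset(X)}
print(is_coherent(X, discrete, [frozenset({1, 2}), frozenset({2, 3})]))  # True

# A non-example: the indiscrete topology on {1, 2} is not coherent with its singletons.
X2 = frozenset({1, 2})
indiscrete = {frozenset(), X2}
print(is_coherent(X2, indiscrete, [frozenset({1}), frozenset({2})]))  # False
```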
Topological union.
Let formula_13 be a family of (not necessarily disjoint) topological spaces such that the induced topologies agree on each intersection formula_14
Assume further that formula_15 is closed in formula_16 for each formula_17 Then the topological union formula_0 is the set-theoretic union
formula_18
endowed with the final topology coinduced by the inclusion maps formula_19. The inclusion maps will then be topological embeddings and formula_0 will be coherent with the subspaces formula_20
Conversely, if formula_0 is a topological space and is coherent with a family of subspaces formula_21 that cover formula_2 then formula_0 is homeomorphic to the topological union of the family formula_22
One can form the topological union of an arbitrary family of topological spaces as above, but if the topologies do not agree on the intersections then the inclusions will not necessarily be embeddings.
One can also describe the topological union by means of the disjoint union. Specifically, if formula_0 is a topological union of the family formula_23 then formula_0 is homeomorphic to the quotient of the disjoint union of the family formula_24 by the equivalence relation
formula_25
for all formula_17; that is,
formula_26
If the spaces formula_24 are all disjoint then the topological union is just the disjoint union.
Assume now that the set A is directed, in a way compatible with inclusion: formula_27 whenever
formula_28. Then there is a unique map from formula_29 to formula_2 which is in fact a homeomorphism. Here formula_29 is the direct (inductive) limit (colimit)
of formula_24 in the category Top.
Properties.
Let formula_0 be coherent with a family of subspaces formula_22 A function formula_30 from formula_0 to a topological space formula_31 is continuous if and only if the restrictions
formula_32
are continuous for each formula_9 This universal property characterizes coherent topologies in the sense that a space formula_0 is coherent with formula_3 if and only if this property holds for all spaces formula_31 and all functions formula_33
Let formula_0 be determined by a cover formula_34 Then
Let formula_30 be a surjective map and suppose formula_31 is determined by formula_41 For each formula_42 let formula_43 be the restriction of formula_44 to formula_45 Then formula_44 is continuous if and only if each formula_46 is continuous, and formula_44 is a quotient map if and only if each formula_46 is a quotient map.
Given a topological space formula_47 and a family of subspaces formula_48 there is a unique topology formula_49 on formula_0 that is coherent with formula_4 The topology formula_49 is finer than the original topology formula_50 and strictly finer if formula_51 was not coherent with formula_4 But the topologies formula_51 and formula_49 induce the same subspace topology on each of the formula_52 in the family formula_4 And the topology formula_49 is always coherent with formula_4
As an example of this last construction, if formula_3 is the collection of all compact subspaces of a topological space formula_53 the resulting topology formula_49 defines the k-ification formula_54 of formula_10 The spaces formula_0 and formula_54 have the same compact sets, with the same induced subspace topologies on them. And the k-ification formula_54 is compactly generated.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "C = \\left\\{ C_{\\alpha} : \\alpha \\in A \\right\\}"
},
{
"math_id": 2,
"text": "X,"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "C."
},
{
"math_id": 5,
"text": "i_\\alpha : C_\\alpha \\to X \\qquad \\alpha \\in A."
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "U \\cap C_{\\alpha}"
},
{
"math_id": 8,
"text": "C_{\\alpha}"
},
{
"math_id": 9,
"text": "\\alpha \\in A."
},
{
"math_id": 10,
"text": "X."
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "X_n."
},
{
"math_id": 13,
"text": "\\left\\{ X_\\alpha : \\alpha \\in A \\right\\}"
},
{
"math_id": 14,
"text": "X_{\\alpha} \\cap X_{\\beta}."
},
{
"math_id": 15,
"text": "X_{\\alpha} \\cap X_{\\beta}"
},
{
"math_id": 16,
"text": "X_{\\alpha}"
},
{
"math_id": 17,
"text": "\\alpha, \\beta \\in A."
},
{
"math_id": 18,
"text": "X^{set} = \\bigcup_{\\alpha\\in A} X_\\alpha"
},
{
"math_id": 19,
"text": "i_\\alpha : X_\\alpha \\to X^{set}"
},
{
"math_id": 20,
"text": "\\left\\{ X_{\\alpha} \\right\\}."
},
{
"math_id": 21,
"text": "\\left\\{ C_{\\alpha} \\right\\}"
},
{
"math_id": 22,
"text": "\\left\\{ C_{\\alpha} \\right\\}."
},
{
"math_id": 23,
"text": "\\left\\{ X_{\\alpha} \\right\\},"
},
{
"math_id": 24,
"text": "\\left\\{ X_{\\alpha} \\right\\}"
},
{
"math_id": 25,
"text": "(x,\\alpha) \\sim (y,\\beta) \\Leftrightarrow x = y"
},
{
"math_id": 26,
"text": "X \\cong \\coprod_{\\alpha\\in A}X_\\alpha / \\sim ."
},
{
"math_id": 27,
"text": "\\alpha \\leq \\beta"
},
{
"math_id": 28,
"text": "X_\\alpha\\subset X_{\\beta}"
},
{
"math_id": 29,
"text": "\\varinjlim X_\\alpha"
},
{
"math_id": 30,
"text": "f : X \\to Y"
},
{
"math_id": 31,
"text": "Y"
},
{
"math_id": 32,
"text": "f\\big\\vert_{C_{\\alpha}} : C_{\\alpha} \\to Y\\,"
},
{
"math_id": 33,
"text": "f : X \\to Y."
},
{
"math_id": 34,
"text": "C = \\{ C_{\\alpha} \\}."
},
{
"math_id": 35,
"text": "D,"
},
{
"math_id": 36,
"text": "D."
},
{
"math_id": 37,
"text": "D=\\{D_\\beta\\}"
},
{
"math_id": 38,
"text": "D_{\\beta}"
},
{
"math_id": 39,
"text": "\\left\\{ Y \\cap C_{\\alpha} \\right\\}."
},
{
"math_id": 40,
"text": "\\left\\{ f(C_{\\alpha}) \\right\\}."
},
{
"math_id": 41,
"text": "\\left\\{ D_{\\alpha} : \\alpha \\in A \\right\\}."
},
{
"math_id": 42,
"text": "\\alpha \\in A"
},
{
"math_id": 43,
"text": "f_\\alpha : f^{-1}(D_\\alpha) \\to D_\\alpha\\,"
},
{
"math_id": 44,
"text": "f"
},
{
"math_id": 45,
"text": "f^{-1}(D_{\\alpha})."
},
{
"math_id": 46,
"text": "f_{\\alpha}"
},
{
"math_id": 47,
"text": "(X,\\tau)"
},
{
"math_id": 48,
"text": "C=\\{C_\\alpha\\}"
},
{
"math_id": 49,
"text": "\\tau_C"
},
{
"math_id": 50,
"text": "\\tau,"
},
{
"math_id": 51,
"text": "\\tau"
},
{
"math_id": 52,
"text": "C_\\alpha"
},
{
"math_id": 53,
"text": "(X,\\tau),"
},
{
"math_id": 54,
"text": "kX"
}
] |
https://en.wikipedia.org/wiki?curid=14602178
|
1460235
|
Indeterminate (variable)
|
Symbol treated as a mathematical variable
In mathematics, an indeterminate is a variable that is used formally, without reference to any value. In other words, it is just a symbol used in a formal way.
Indeterminates occur in polynomials, formal power series, and, more generally, in expressions that are viewed as independent mathematical objects.
A fundamental property of an indeterminate is that it can be substituted with any mathematical expression to which the same operations apply as the operations applied to the indeterminate.
The concept of an "indeterminate" is relatively recent, and was initially introduced for distinguishing a polynomial from its associated polynomial function. Indeterminates resemble free variables. The main difference is that a free variable is intended to represent a unspecified element of some domain, often the real numbers, while indeterminates do not represent anything.
Many authors do not distinguish indeterminates from other sorts of variables.
Some authors of abstract algebra textbooks define an "indeterminate" over a ring R as an element of a larger ring that is transcendental over R. This uncommon definition implies that every transcendental number and every nonconstant polynomial must be considered as indeterminates.
Polynomials.
A polynomial in an indeterminate formula_0 is an expression of the form formula_1, where the "formula_2" are called the coefficients of the polynomial. Two such polynomials are equal if and only if the corresponding coefficients are equal. In contrast, two polynomial functions in a variable "formula_3" may take the same value at a particular value of "formula_3" without being the same function.
For example, the functions
formula_4
are equal when "formula_5" and not equal otherwise. But the two polynomials
formula_6
are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact,
formula_7
does not hold "unless" "formula_8" and "formula_9". This is because "formula_0" is not, and does not designate, a number.
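The distinction can be made concrete in a computer algebra system. The following is a minimal sketch using SymPy (assumed to be available; the variable names are illustrative): the two polynomials compare as unequal objects, even though the corresponding functions agree at "formula_5".

```python
# A minimal sketch using SymPy (assumed available): 2 + 3X and 5 + 2X are distinct
# polynomials, even though the corresponding functions happen to agree at x = 3.
from sympy import symbols, Poly

X = symbols('X')
p = Poly(2 + 3*X, X)
q = Poly(5 + 2*X, X)

print(p == q)                  # False: the coefficient lists differ
print(p.eval(3), q.eval(3))    # 11 11 -> the associated functions agree at x = 3
```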
The distinction is subtle, since a polynomial in "formula_0" can be changed into a function of "formula_3" by substitution. But the distinction is important because information may be lost when this substitution is made. For example, when working modulo 2, we have:
formula_10
so the polynomial function "formula_11" is identically equal to 0 as "formula_3" ranges over the values of the modulo-2 system. However, the polynomial "formula_12" is not the zero polynomial, since its coefficients, 0, 1 and −1, respectively, are not all zero.
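The same point can be checked symbolically. A minimal sketch, again assuming SymPy is available: the polynomial has nonzero coefficients even though the associated function vanishes on every residue modulo 2.

```python
# A minimal sketch using SymPy (assumed available) to contrast the polynomial
# X - X**2 with the function x - x**2 evaluated on the residues modulo 2.
from sympy import symbols, Poly

X = symbols('X')
p = Poly(X - X**2, X)

# As a polynomial, X - X**2 has nonzero coefficients, so it is not the zero polynomial.
print(p.all_coeffs())                     # [-1, 1, 0]
print(p.is_zero)                          # False

# As a function on the two residues modulo 2, it vanishes everywhere.
print([(x - x**2) % 2 for x in (0, 1)])   # [0, 0]
```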
Formal power series.
A formal power series in an indeterminate "formula_0" is an expression of the form formula_13, where no value is assigned to the symbol "formula_0". This is similar to the definition of a polynomial, except that infinitely many of the coefficients may be nonzero. Unlike with the power series encountered in calculus, questions of convergence are irrelevant (since no function is involved). So power series that would diverge for every nonzero value of "formula_3", such as "formula_14", are allowed.
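One way to see that convergence plays no role is to represent a formal power series purely by its coefficient sequence. The sketch below is plain Python (the function names are illustrative, not a standard API); it multiplies the "divergent" series above by the geometric series using the Cauchy product, which only ever manipulates coefficients.

```python
# A minimal sketch (illustrative names): a formal power series represented purely by
# its coefficient function n -> a_n. No value is ever substituted for X, so the
# "divergent" series sum n! X**n is as legitimate as any other.
from math import factorial

def cauchy_product(a, b, n):
    """n-th coefficient of the product of two formal power series."""
    return sum(a(k) * b(n - k) for k in range(n + 1))

divergent = factorial            # a_n = n!, diverges for every nonzero x
geometric = lambda n: 1          # a_n = 1, the series 1 + X + X**2 + ...

print([cauchy_product(divergent, geometric, n) for n in range(5)])
# [1, 2, 4, 10, 34] -- partial sums of factorials; purely formal arithmetic
```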
As generators.
Indeterminates are useful in abstract algebra for generating mathematical structures. For example, given a field "formula_15", the set of polynomials with coefficients in "formula_15" is the polynomial ring with polynomial addition and multiplication as operations. In particular, if two indeterminates "formula_0" and "formula_16" are used, then the polynomial ring "formula_17" also uses these operations, and convention holds that "formula_18".
Indeterminates may also be used to generate a free algebra over a commutative ring "formula_19". For instance, with two indeterminates "formula_0" and "formula_16", the free algebra "formula_20" includes sums of strings in "formula_0" and "formula_16", with coefficients in "formula_19", and with the understanding that "formula_21" and "formula_22" are not necessarily identical (since the free algebra is by definition non-commutative).
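The non-commutative case can also be modelled directly in SymPy (assumed available): declaring symbols with commutative=False keeps "formula_21" and "formula_22" distinct, in contrast with the polynomial ring, where they are identified. A minimal sketch:

```python
# A minimal sketch using SymPy (assumed available): non-commutative indeterminates,
# as used when generating a free algebra, do not satisfy XY = YX.
from sympy import Symbol, expand

X = Symbol('X', commutative=False)
Y = Symbol('Y', commutative=False)

print(X*Y == Y*X)            # False: XY and YX are distinct words
print(expand((X + Y)**2))    # X**2 + X*Y + Y*X + Y**2, not X**2 + 2*X*Y + Y**2
```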
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "a_0 + a_1X + a_2X^2 + \\ldots + a_nX^n"
},
{
"math_id": 2,
"text": "a_i"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "f(x) = 2 + 3x, \\quad g(x) = 5 + 2x"
},
{
"math_id": 5,
"text": "x = 3"
},
{
"math_id": 6,
"text": "2 + 3X, \\quad 5 + 2X"
},
{
"math_id": 7,
"text": "2 + 3X = a + bX"
},
{
"math_id": 8,
"text": "a = 2"
},
{
"math_id": 9,
"text": "b = 3"
},
{
"math_id": 10,
"text": "0 - 0^2 = 0, \\quad 1 - 1^2 = 0,"
},
{
"math_id": 11,
"text": "x - x^2"
},
{
"math_id": 12,
"text": "X - X^2"
},
{
"math_id": 13,
"text": "a_0 + a_1X + a_2X^2 + \\ldots"
},
{
"math_id": 14,
"text": "1 + x + 2x^2 + 6x^3 + \\ldots + n!x^n + \\ldots\\,"
},
{
"math_id": 15,
"text": "K"
},
{
"math_id": 16,
"text": "Y"
},
{
"math_id": 17,
"text": "K[X,Y]"
},
{
"math_id": 18,
"text": "XY=YX"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "A\\langle X,Y \\rangle"
},
{
"math_id": 21,
"text": "XY"
},
{
"math_id": 22,
"text": "YX"
}
] |
https://en.wikipedia.org/wiki?curid=1460235
|