id | title | text | formulas | url
---|---|---|---|---
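Each row below pairs an article's text with a `formulas` list that maps every `formula_N` placeholder to its LaTeX source. As a minimal sketch of how such a row can be modeled and resolved (the `ArticleRow` class and its method are hypothetical conveniences, not part of the dataset):

```python
from dataclasses import dataclass


@dataclass
class ArticleRow:
    """One row of the dump, with field names mirroring the dataset columns."""
    id: str
    title: str
    text: str
    formulas: list  # e.g. [{"math_id": 0, "text": "\\rightleftharpoons"}]
    url: str

    def resolve_formulas(self) -> str:
        """Replace each formula_N placeholder in `text` with its LaTeX source."""
        out = self.text
        for f in self.formulas:
            out = out.replace(f"formula_{f['math_id']}", f["text"])
        return out


row = ArticleRow(
    id="13902144",
    title="21-hydroxysteroid dehydrogenase (NAD+)",
    text="pregnan-21-ol + NAD+ formula_0 pregnan-21-al + NADH + H+",
    formulas=[{"math_id": 0, "text": "\\rightleftharpoons"}],
    url="https://en.wikipedia.org/wiki?curid=13902144",
)
print(row.resolve_formulas())
# → pregnan-21-ol + NAD+ \rightleftharpoons pregnan-21-al + NADH + H+
```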
13902144 | 21-hydroxysteroid dehydrogenase (NAD+) | Enzyme
In enzymology, a 21-hydroxysteroid dehydrogenase (NAD+) (EC 1.1.1.150) is an enzyme that catalyzes the chemical reaction
pregnan-21-ol + NAD+ ⇌ pregnan-21-al + NADH + H+
Thus, the two substrates of this enzyme are pregnan-21-ol and NAD+, whereas its 3 products are pregnan-21-al, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 21-hydroxysteroid:NAD+ 21-oxidoreductase. This enzyme is also called 21-hydroxysteroid dehydrogenase (NAD+).
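In the record's `formulas` field, `math_id` 0 maps to `\rightleftharpoons`, the reversible-reaction arrow, so the catalyzed reaction renders as:

$$\text{pregnan-21-ol} + \text{NAD}^{+} \rightleftharpoons \text{pregnan-21-al} + \text{NADH} + \text{H}^{+}$$

The same placeholder convention applies to every reaction equation in the rows below.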
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902144 |
13902153 | 21-hydroxysteroid dehydrogenase (NADP+) | Enzyme
In enzymology, a 21-hydroxysteroid dehydrogenase (NADP+) (EC 1.1.1.151) is an enzyme that catalyzes the chemical reaction
pregnan-21-ol + NADP+ ⇌ pregnan-21-al + NADPH + H+
Thus, the two substrates of this enzyme are pregnan-21-ol and NADP+, whereas its 3 products are pregnan-21-al, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 21-hydroxysteroid:NADP+ 21-oxidoreductase. Other names in common use include 21-hydroxy steroid dehydrogenase, 21-hydroxy steroid dehydrogenase (nicotinamide adenine dinucleotide phosphate), NADP+-21-hydroxysteroid dehydrogenase, and 21-hydroxysteroid dehydrogenase (NADP+).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902153 |
13902172 | 2,5-didehydrogluconate reductase | Class of enzymes
In enzymology, a 2,5-didehydrogluconate reductase (EC 1.1.1.274) is an enzyme that catalyzes the chemical reaction
2-dehydro-D-gluconate + NADP+ ⇌ 2,5-didehydro-D-gluconate + NADPH + H+
Thus, the two substrates of this enzyme are 2-dehydro-D-gluconate and NADP+, whereas its 3 products are 2,5-didehydro-D-gluconate, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-dehydro-D-gluconate:NADP+ 2-oxidoreductase. Other names in common use include 2,5-diketo-D-gluconate reductase, and YqhE reductase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1VP5.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902172 |
13902182 | 2-alkyn-1-ol dehydrogenase | Class of enzymes
In enzymology, a 2-alkyn-1-ol dehydrogenase (EC 1.1.1.165) is an enzyme that catalyzes the chemical reaction below:
2-butyne-1,4-diol + NAD+ ⇌ 4-hydroxy-2-butynal + NADH + H+
The two substrates of this enzyme are 2-butyne-1,4-diol and NAD+, whereas its 3 products are 4-hydroxy-2-butynal, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-butyne-1,4-diol:NAD+ 1-oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902182 |
13902187 | Glycerol-3-phosphate dehydrogenase (NAD(P)+) | In enzymology, a glycerol-3-phosphate dehydrogenase [NAD(P)+] (EC 1.1.1.94) is an enzyme that catalyzes the chemical reaction
"sn"-glycerol 3-phosphate + NAD(P)+ ⇌ glycerone phosphate + NAD(P)H + H+
The 3 substrates of this enzyme are sn-glycerol 3-phosphate, NAD+, and NADP+, whereas its 4 products are glycerone phosphate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is "sn"-glycerol-3-phosphate:NAD(P)+ 2-oxidoreductase. Other names in common use include L-glycerol-3-phosphate:NAD(P)+ oxidoreductase, glycerol phosphate dehydrogenase (nicotinamide adenine dinucleotide (phosphate)), glycerol 3-phosphate dehydrogenase (NADP+), and glycerol-3-phosphate dehydrogenase [NAD(P)+]. This enzyme participates in glycerophospholipid metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1TXG.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902187 |
13902192 | 2-dehydro-3-deoxy-D-gluconate 5-dehydrogenase | Class of enzymes
In enzymology, a 2-dehydro-3-deoxy-D-gluconate 5-dehydrogenase (EC 1.1.1.127) is an enzyme that catalyzes the chemical reaction
2-dehydro-3-deoxy-D-gluconate + NAD+ ⇌ (4S)-4,6-dihydroxy-2,5-dioxohexanoate + NADH + H+
Thus, the two substrates of this enzyme are 2-dehydro-3-deoxy-D-gluconate and NAD+, whereas its 3 products are (4S)-4,6-dihydroxy-2,5-dioxohexanoate, NADH, and H+.
This enzyme participates in pentose and glucuronate interconversions.
Nomenclature.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-dehydro-3-deoxy-D-gluconate:NAD+ 5-oxidoreductase. Other names in common use include 2-keto-3-deoxygluconate 5-dehydrogenase, 2-keto-3-deoxy-D-gluconate dehydrogenase, 2-keto-3-deoxygluconate (nicotinamide adenine dinucleotide (phosphate)) dehydrogenase, and 2-keto-3-deoxy-D-gluconate (3-deoxy-D-glycero-2,5-hexodiulosonic acid) dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902192 |
13902201 | 2-dehydro-3-deoxy-D-gluconate 6-dehydrogenase | Class of enzymes
In enzymology, a 2-dehydro-3-deoxy-D-gluconate 6-dehydrogenase (EC 1.1.1.126) is an enzyme that catalyzes the chemical reaction
2-dehydro-3-deoxy-D-gluconate + NADP+ ⇌ (4S,5S)-4,5-dihydroxy-2,6-dioxohexanoate + NADPH + H+
Thus, the two substrates of this enzyme are 2-dehydro-3-deoxy-D-gluconate and NADP+, whereas its 3 products are (4S,5S)-4,5-dihydroxy-2,6-dioxohexanoate, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-dehydro-3-deoxy-D-gluconate:NADP+ 6-oxidoreductase. Other names in common use include 2-keto-3-deoxy-D-gluconate dehydrogenase, and 2-keto-3-deoxygluconate dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902201 |
13902214 | 2-dehydropantoate 2-reductase | Class of enzymes
In enzymology, a 2-dehydropantoate 2-reductase (EC 1.1.1.169) is an enzyme that catalyzes the chemical reaction
(R)-pantoate + NADP+ ⇌ 2-dehydropantoate + NADPH + H+
Thus, the two substrates of this enzyme are (R)-pantoate and NADP+, whereas its 3 products are 2-dehydropantoate, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-pantoate:NADP+ 2-oxidoreductase. Other names in common use include 2-oxopantoate reductase, 2-ketopantoate reductase, 2-ketopantoic acid reductase, ketopantoate reductase, and ketopantoic acid reductase. This enzyme participates in pantothenate and CoA biosynthesis.
Structural studies.
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1KS9, 1YJQ, 1YON, 2EW2, and 2OFP.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902214 |
13902228 | 2-dehydropantolactone reductase (A-specific) | Class of enzymes
In enzymology, a 2-dehydropantolactone reductase (A-specific) (EC 1.1.1.168) is an enzyme that catalyzes the chemical reaction
(R)-pantolactone + NADP+ ⇌ 2-dehydropantolactone + NADPH + H+
Thus, the two substrates of this enzyme are (R)-pantolactone and NADP+, whereas its 3 products are 2-dehydropantolactone, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-pantolactone:NADP+ oxidoreductase (A-specific). Other names in common use include 2-oxopantoyl lactone reductase, ketopantoyl lactone reductase, 2-ketopantoyl lactone reductase, and 2-dehydropantoyl-lactone reductase (A-specific).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902228 |
13902245 | 2-dehydropantolactone reductase (B-specific) | Class of enzymes
In enzymology, a 2-dehydropantolactone reductase (B-specific) (EC 1.1.1.214) is an enzyme that catalyzes the chemical reaction
(R)-pantolactone + NADP+ ⇌ 2-dehydropantolactone + NADPH + H+
Thus, the two substrates of this enzyme are (R)-pantolactone and NADP+, whereas its 3 products are 2-dehydropantolactone, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-pantolactone:NADP+ oxidoreductase (B-specific). Other names in common use include 2-oxopantoyl lactone reductase, 2-ketopantoyl lactone reductase, ketopantoyl lactone reductase, and 2-dehydropantoyl-lactone reductase (B-specific).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902245 |
13902261 | 2-deoxy-D-gluconate 3-dehydrogenase | Class of enzymes
2-deoxy-D-gluconate 3-dehydrogenase (EC 1.1.1.125) is an enzyme that catalyzes the chemical reaction
2-deoxy-D-gluconate + NAD+ ⇌ 3-dehydro-2-deoxy-D-gluconate + NADH + H+
Thus, the two substrates of this enzyme are 2-deoxy-D-gluconate and NAD+, whereas its 3 products are 3-dehydro-2-deoxy-D-gluconate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-deoxy-D-gluconate:NAD+ 3-oxidoreductase. This enzyme is also called 2-deoxygluconate dehydrogenase. This enzyme participates in pentose and glucuronate interconversions.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1X1E.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902261 |
13902272 | 2-hydroxy-3-oxopropionate reductase | InterPro Family
In enzymology, a 2-hydroxy-3-oxopropionate reductase (EC 1.1.1.60) is an enzyme that catalyzes the chemical reaction
(R)-glycerate + NAD(P)+ ⇌ 2-hydroxy-3-oxopropanoate + NAD(P)H + H+
The 3 substrates of this enzyme are (R)-glycerate, NAD+, and NADP+, whereas its 4 products are 2-hydroxy-3-oxopropanoate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-glycerate:NAD(P)+ oxidoreductase. This enzyme is also called tartronate semialdehyde reductase. This enzyme participates in glyoxylate and dicarboxylate metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1YB4.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902272 |
13902284 | 2-hydroxymethylglutarate dehydrogenase | Class of enzymes
In enzymology, a 2-hydroxymethylglutarate dehydrogenase (EC 1.1.1.291) is an enzyme that catalyzes the chemical reaction
(S)-2-hydroxymethylglutarate + NAD+ ⇌ 2-formylglutarate + NADH + H+
Thus, the two substrates of this enzyme are (S)-2-hydroxymethylglutarate and NAD+, whereas its 3 products are 2-formylglutarate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (S)-2-hydroxymethylglutarate:NAD+ oxidoreductase. This enzyme is also called HgD.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902284 |
13902292 | 2-oxoadipate reductase | Class of enzymes
In enzymology, a 2-oxoadipate reductase (EC 1.1.1.172) is an enzyme that catalyzes the chemical reaction
2-hydroxyadipate + NAD+ ⇌ 2-oxoadipate + NADH + H+
Thus, the two substrates of this enzyme are 2-hydroxyadipate and NAD+, whereas its 3 products are 2-oxoadipate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-hydroxyadipate:NAD+ 2-oxidoreductase. Other names in common use include 2-ketoadipate reductase and alpha-ketoadipate reductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902292 |
13902305 | 2-(R)-hydroxypropyl-CoM dehydrogenase | Class of enzymes
In enzymology, a 2-(R)-hydroxypropyl-CoM dehydrogenase (EC 1.1.1.268) is an enzyme that catalyzes the chemical reaction
2-(R)-hydroxypropyl-CoM + NAD+ ⇌ 2-oxopropyl-CoM + NADH + H+
Thus, the two substrates of this enzyme are 2-(R)-hydroxypropyl-CoM and NAD+, whereas its 3 products are 2-oxopropyl-CoM, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-[2-(R)-hydroxypropylthio]ethanesulfonate:NAD+ oxidoreductase. This enzyme is also called 2-(2-(R)-hydroxypropylthio)ethanesulfonate dehydrogenase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2CFC.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902305 |
13902324 | 2-(S)-hydroxypropyl-CoM dehydrogenase | Class of enzymes
In enzymology, a 2-(S)-hydroxypropyl-CoM dehydrogenase (EC 1.1.1.269) is an enzyme that catalyzes the chemical reaction
2-(S)-hydroxypropyl-CoM + NAD+ ⇌ 2-oxopropyl-CoM + NADH + H+
Thus, the two substrates of this enzyme are 2-(S)-hydroxypropyl-CoM and NAD+, whereas its 3 products are 2-oxopropyl-CoM, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-[2-(S)-hydroxypropylthio]ethanesulfonate:NAD+ oxidoreductase. This enzyme is also called 2-(2-(S)-hydroxypropylthio)ethanesulfonate dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902324 |
13902357 | 3alpha(17beta)-hydroxysteroid dehydrogenase (NAD+) | Enzyme
In enzymology, a 3alpha(17beta)-hydroxysteroid dehydrogenase (NAD+) (EC 1.1.1.239) is an enzyme that catalyzes the chemical reaction:
testosterone + NAD+ ⇌ androst-4-ene-3,17-dione + NADH + H+
Thus, the two substrates of this enzyme are testosterone and NAD+, whereas its 3 products are androst-4-ene-3,17-dione, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3alpha(or 17beta)-hydroxysteroid:NAD+ oxidoreductase. Other names in common use include 3alpha,17beta-hydroxy steroid dehydrogenase, 3alpha(17beta)-HSD, and 3alpha(17beta)-hydroxysteroid dehydrogenase (NAD+). This enzyme participates in androgen and estrogen metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902357 |
13902375 | 3alpha-hydroxy-5beta-androstane-17-one 3alpha-dehydrogenase | Enzyme
In enzymology, a 3alpha-hydroxy-5beta-androstane-17-one 3alpha-dehydrogenase (EC 1.1.1.152) is an enzyme that catalyzes the chemical reaction
3alpha-hydroxy-5beta-androstane-17-one + NAD+ ⇌ 5beta-androstane-3,17-dione + NADH + H+
Thus, the two substrates of this enzyme are 3alpha-hydroxy-5beta-androstane-17-one and NAD+, whereas its 3 products are 5beta-androstane-3,17-dione, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3alpha-hydroxy-5beta-steroid:NAD+ 3-oxidoreductase. Other names in common use include etiocholanolone 3alpha-dehydrogenase and 3alpha-hydroxy-5beta-steroid dehydrogenase. This enzyme participates in androgen and estrogen metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902375 |
13902386 | 3alpha-hydroxycholanate dehydrogenase | Class of enzymes
In enzymology, a 3alpha-hydroxycholanate dehydrogenase (EC 1.1.1.52) is an enzyme that catalyzes the chemical reaction
3alpha-hydroxy-5beta-cholanate + NAD+ ⇌ 3-oxo-5beta-cholanate + NADH + H+
Thus, the two substrates of this enzyme are 3alpha-hydroxy-5beta-cholanate and NAD+, whereas its 3 products are 3-oxo-5beta-cholanate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3alpha-hydroxy-5beta-cholanate:NAD+ oxidoreductase. This enzyme is also called alpha-hydroxy-cholanate dehydrogenase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1IHI.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902386 |
13902407 | 3alpha-hydroxyglycyrrhetinate dehydrogenase | Enzyme
In enzymology, a 3alpha-hydroxyglycyrrhetinate dehydrogenase (EC 1.1.1.230) is an enzyme that catalyzes the chemical reaction
3alpha-hydroxyglycyrrhetinate + NADP+ ⇌ 3-oxoglycyrrhetinate + NADPH + H+
Thus, the two substrates of this enzyme are 3alpha-hydroxyglycyrrhetinate and NADP+, whereas its 3 products are 3-oxoglycyrrhetinate, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3alpha-hydroxyglycyrrhetinate:NADP+ 3-oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902407 |
13902424 | 3alpha-hydroxysteroid dehydrogenase (A-specific) | Enzyme
In enzymology, a 3alpha-hydroxysteroid dehydrogenase (A-specific) (EC 1.1.1.213) is an enzyme that catalyzes the chemical reaction
androsterone + NAD(P)+ ⇌ 5alpha-androstane-3,17-dione + NAD(P)H + H+
The 3 substrates of this enzyme are androsterone, NAD+, and NADP+, whereas its 4 products are 5alpha-androstane-3,17-dione, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor, more specifically it is part of the group of hydroxysteroid dehydrogenases. The systematic name of this enzyme class is 3alpha-hydroxysteroid:NAD(P)+ oxidoreductase (A-specific).
Structural studies.
As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes 1J96, 1LWI, 1S1P, 1S1R, 1S2A, 1S2C, 1XJB, 2F38, 2FGB, 2HDJ, and 2IPJ.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902424 |
13902437 | 3alpha-hydroxysteroid dehydrogenase (B-specific) | Enzyme
In enzymology, a 3alpha-hydroxysteroid dehydrogenase (B-specific) (EC 1.1.1.50) is an enzyme that catalyzes the chemical reaction
androsterone + NAD(P)+ ⇌ 5alpha-androstane-3,17-dione + NAD(P)H + H+
The 3 substrates of this enzyme are androsterone, NAD+, and NADP+, whereas its 4 products are 5alpha-androstane-3,17-dione, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor, more specifically it is part of the group of hydroxysteroid dehydrogenases. The systematic name of this enzyme class is 3alpha-hydroxysteroid:NAD(P)+ oxidoreductase (B-specific). Other names in common use include hydroxyprostaglandin dehydrogenase, 3alpha-hydroxysteroid oxidoreductase, and sterognost 3alpha. This enzyme participates in 3 metabolic pathways: bile acid biosynthesis, C21-steroid hormone metabolism, and androgen and estrogen metabolism.
Structural studies.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1AFS, 1FJH, 1FK8, 1LWI, 1RAL, 2DKN, and 2FVL.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902437 |
13902450 | 3alpha(or 20beta)-hydroxysteroid dehydrogenase | Class of enzymes
In enzymology, a 3alpha(or 20beta)-hydroxysteroid dehydrogenase (EC 1.1.1.53) is an enzyme that catalyzes the chemical reaction
androstan-3alpha,17beta-diol + NAD+ ⇌ 17beta-hydroxyandrostan-3-one + NADH + H+
Thus, the two substrates of this enzyme are androstan-3alpha,17beta-diol and NAD+, whereas its 3 products are 17beta-hydroxyandrostan-3-one, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3alpha(or 20beta)-hydroxysteroid:NAD+ oxidoreductase. Other names in common use include cortisone reductase, (R)-20-hydroxysteroid dehydrogenase, 20beta-hydroxy steroid dehydrogenase, Delta4-3-ketosteroid hydrogenase, 3alpha,20beta-hydroxysteroid:NAD+-oxidoreductase, NADH-20beta-hydroxysteroid dehydrogenase, and 20beta-HSD. This enzyme participates in bile acid biosynthesis and C21-steroid hormone metabolism.
Structural studies.
As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1HDC, 1N5D, 1NFF, 1NFQ, 1NFR, and 2HSD.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902450 |
13902465 | 3beta-hydroxy-5alpha-steroid dehydrogenase | Enzyme
In enzymology, a 3β-hydroxy-5α-steroid dehydrogenase (EC 1.1.1.278) is an enzyme that catalyzes the chemical reaction
3β-hydroxy-5α-pregnane-20-one + NADP+ ⇌ 5α-pregnan-3,20-dione + NADPH + H+
Thus, the two substrates of this enzyme are 3β-hydroxy-5α-pregnane-20-one (allopregnanolone) and NADP+, whereas its 3 products are 5α-pregnan-3,20-dione, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3β-hydroxy-5α-steroid:NADP+ 3-oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902465 |
13902476 | 3beta-hydroxy-5beta-steroid dehydrogenase | Enzyme
In enzymology, a 3beta-hydroxy-5beta-steroid dehydrogenase (EC 1.1.1.277) is an enzyme that catalyzes the chemical reaction
3beta-hydroxy-5beta-pregnane-20-one + NADP+ ⇌ 5beta-pregnan-3,20-dione + NADPH + H+
Thus, the two substrates of this enzyme are 3beta-hydroxy-5beta-pregnane-20-one and NADP+, whereas its 3 products are 5beta-pregnan-3,20-dione, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3beta-hydroxy-5beta-steroid:NADP+ 3-oxidoreductase. Other names in common use include 3beta-hydroxysteroid 5beta-oxidoreductase, and 3beta-hydroxysteroid 5beta-progesterone oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902476 |
13902495 | 3beta(or 20alpha)-hydroxysteroid dehydrogenase | Enzyme
In enzymology, a 3-β(or 20-α)-hydroxysteroid dehydrogenase (EC 1.1.1.210) is an enzyme that catalyzes the chemical reaction
5α-androstan-3β,17β-diol + NADP+ ⇌ 17β-hydroxy-5α-androstan-3-one + NADPH + H+
This enzyme possesses the combined activities of the 3-β-hydroxysteroid dehydrogenase/Δ-5-4 isomerase and 20-α-hydroxysteroid dehydrogenase enzymes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902495 |
13902506 | 3''-deamino-3''-oxonicotianamine reductase | Class of enzymes
In enzymology, a 3''-deamino-3''-oxonicotianamine reductase (EC 1.1.1.285) is an enzyme that catalyzes the chemical reaction
2'-deoxymugineic acid + NAD(P)+ ⇌ 3''-deamino-3''-oxonicotianamine + NAD(P)H + H+
The 3 substrates of this enzyme are 2'-deoxymugineic acid, NAD+, and NADP+, whereas its 4 products are 3"-deamino-3"-oxonicotianamine, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2"-deoxymugineic acid:NAD(P)+ 3"-oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902506 |
13902521 | 3-dehydro-L-gulonate 2-dehydrogenase | InterPro Family
In enzymology, a 3-dehydro-L-gulonate 2-dehydrogenase (EC 1.1.1.130) is an enzyme that catalyzes the chemical reaction:
3-dehydro-L-gulonate + NAD(P)+ ⇌ (4R,5S)-4,5,6-trihydroxy-2,3-dioxohexanoate + NAD(P)H + H+
The 3 substrates of this enzyme are 3-dehydro-L-gulonate, NAD+, and NADP+, whereas its 4 products are (4R,5S)-4,5,6-trihydroxy-2,3-dioxohexanoate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-dehydro-L-gulonate:NAD(P)+ 2-oxidoreductase. Other names in common use include 3-keto-L-gulonate dehydrogenase and 3-ketogulonate dehydrogenase. This enzyme participates in pentose and glucuronate interconversions and ascorbate and aldarate metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902521 |
13902536 | 3-dehydrosphinganine reductase | Protein-coding gene in the species Homo sapiens
3-dehydrosphinganine reductase (EC 1.1.1.102) also known as 3-ketodihydrosphingosine reductase (KDSR) or follicular variant translocation protein 1 (FVT1) is an enzyme that in humans is encoded by the "KDSR" gene.
Function.
3-dehydrosphinganine reductase catalyzes the chemical reaction:
sphinganine + NADP+ ⇌ 3-dehydrosphinganine + NADPH + H+
Thus, the two substrates of this enzyme are sphinganine and NADP+, whereas its 3 products are 3-dehydrosphinganine, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. This enzyme participates in sphingolipid metabolism.
Tissue distribution.
Follicular lymphoma variant translocation 1 is a secreted protein which is weakly expressed in hematopoietic tissue.
Clinical significance.
FVT1 shows a high rate of transcription in some T cell malignancies and in phytohemagglutinin-stimulated lymphocytes. The proximity of FVT1 to BCL2 suggests that it may participate in the tumoral process.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902536 |
13902549 | 3-hydroxy-2-methylbutyryl-CoA dehydrogenase | Class of enzymes
In enzymology, a 3-hydroxy-2-methylbutyryl-CoA dehydrogenase (EC 1.1.1.178) is an enzyme that catalyzes the chemical reaction
(2S,3S)-3-hydroxy-2-methylbutanoyl-CoA + NAD+ ⇌ 2-methylacetoacetyl-CoA + NADH + H+
Thus, the two substrates of this enzyme are (2S,3S)-3-hydroxy-2-methylbutanoyl-CoA and NAD+, whereas its 3 products are 2-methylacetoacetyl-CoA, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (2S,3S)-3-hydroxy-2-methylbutanoyl-CoA:NAD+ oxidoreductase. Other names in common use include 2-methyl-3-hydroxybutyryl coenzyme A dehydrogenase and 2-methyl-3-hydroxy-butyryl CoA dehydrogenase. This enzyme participates in valine, leucine and isoleucine degradation.
Structural studies.
As of 20 January 2010, six structures have been solved for this class of enzymes, with PDB accession codes 1E3S, 1E3W, 1E6W, 1SO8, 1U7T, and 2O23.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902549 |
13902627 | 3-hydroxyacyl-CoA dehydrogenase | Enzyme
In enzymology, a 3-hydroxyacyl-CoA dehydrogenase (EC 1.1.1.35) is an enzyme that catalyzes the chemical reaction
(S)-3-hydroxyacyl-CoA + NAD+ ⇌ 3-oxoacyl-CoA + NADH + H+
Thus, the two substrates of this enzyme are (S)-3-hydroxyacyl-CoA and NAD+, whereas its 3 products are 3-oxoacyl-CoA, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor.
Isozymes.
In humans, the following genes encode proteins with 3-hydroxyacyl-CoA dehydrogenase activity:
Function.
3-Hydroxyacyl CoA dehydrogenase is classified as an oxidoreductase. It is involved in fatty acid metabolic processes. Specifically, it catalyzes the third step of beta oxidation: the oxidation of L-3-hydroxyacyl CoA by NAD+. The reaction converts the hydroxyl group into a keto group.
The end product is 3-ketoacyl CoA.
Metabolic pathways.
This enzyme participates in 8 metabolic pathways:
<templatestyles src="Div col/styles.css"/>
Nomenclature.
The systematic name of this enzyme class is (S)-3-hydroxyacyl-CoA:NAD+ oxidoreductase. Other names in common use include:
<templatestyles src="Div col/styles.css"/>
Structural studies.
As of 20 January 2010, 22 structures have been solved for this class of enzymes, with PDB accession codes 1F0Y, 1F12, 1F14, 1F17, 1F67, 1GZ6, 1IKT, 1IL0, 1LSJ, 1LSO, 1M75, 1M76, 1S9C, 1WDK, 1WDL, 1WDM, 1ZBQ, 1ZCJ, 2D3T, 2HDH, 3HAD, and 3HDH.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902627 |
13902638 | 3-hydroxybenzyl-alcohol dehydrogenase | Class of enzymes
In enzymology, a 3-hydroxybenzyl-alcohol dehydrogenase (EC 1.1.1.97) is an enzyme that catalyzes the chemical reaction
3-hydroxybenzyl alcohol + NADP+ formula_0 3-hydroxybenzaldehyde + NADPH + H+
Thus, the two substrates of this enzyme are 3-hydroxybenzyl alcohol and NADP+, whereas its 3 products are 3-hydroxybenzaldehyde, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-hydroxybenzyl-alcohol:NADP+ oxidoreductase. Other names in common use include m-hydroxybenzyl alcohol dehydrogenase, m-hydroxybenzyl alcohol (NADP+) dehydrogenase, and m-hydroxybenzylalcohol dehydrogenase. This enzyme participates in toluene and xylene degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902638 |
13902649 | 3-Hydroxybutyrate dehydrogenase | Enzyme
In enzymology, 3-hydroxybutyrate dehydrogenase (EC 1.1.1.30) is an enzyme that catalyzes the chemical reaction:
("R")-3-hydroxybutanoate + NAD+ formula_0 acetoacetate + NADH + H+
Thus, the two substrates of this enzyme are ("R")-3-hydroxybutanoate and NAD+, whereas its three products are acetoacetate, NADH, and H+. This enzyme belongs to the family of oxidoreductases, to be specific, those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor.
This enzyme participates in the synthesis and degradation of ketone bodies and the metabolism of butyric acid.
Classification.
This enzyme has a classification number of EC 1.1.1.30. The first digit means that this enzyme is an oxidoreductase, whose purpose is to catalyze oxidation and reduction reactions. The following two 1s indicate the subclass and sub-subclass of the enzyme. In this case, 1.1.1 means this enzyme is an oxidoreductase that acts on the CH-OH group of the donor molecule using NAD(+) or NADP(+) as the acceptor. The 4th number, or 30 in this case, is the serial number of the enzyme to define it within its sub-subclass. 3-Hydroxybutyrate dehydrogenase is also known as beta-hydroxybutyric dehydrogenase and is abbreviated BHBDH. Other common synonyms are shown below.
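The level-by-level breakdown just described can be sketched as a small parser. This is purely an illustration of the EC numbering scheme; the function name and dictionary keys are assumptions, not part of any standard nomenclature library:

```python
def parse_ec(ec_number):
    """Split an EC number into the four levels described above.

    For EC 1.1.1.30:
      level 1 -- main class (1 = oxidoreductase)
      level 2 -- subclass (1.1 = acting on the CH-OH group of donors)
      level 3 -- sub-subclass (1.1.1 = with NAD+ or NADP+ as acceptor)
      level 4 -- serial number of the enzyme within its sub-subclass
    """
    main_class, subclass, sub_subclass, serial = ec_number.split(".")
    return {
        "class": int(main_class),
        "subclass": int(subclass),
        "sub_subclass": int(sub_subclass),
        "serial": int(serial),
    }

# 3-hydroxybutyrate dehydrogenase
info = parse_ec("1.1.1.30")
print(info)  # {'class': 1, 'subclass': 1, 'sub_subclass': 1, 'serial': 30}
```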
The systematic name of this enzyme class is ("R")-3-hydroxybutanoate:NAD+ oxidoreductase. Other names in common use include:
Reaction mechanism.
BHBDH is found in the mitochondria and catalyzes the oxidation of 3-hydroxybutyrate to acetoacetate, using NAD+ as a coenzyme. The reaction, shown below, is reversible: the enzyme converts ("R")-3-hydroxybutanoate and NAD+ into acetoacetate, NADH, and a free H+.
("R")-3-hydroxybutanoate + NAD+ formula_0 acetoacetate + NADH + H+
The first step in the reaction is substrate binding, which occurs through the carboxylate group of the substrate binding to the carboxylate group of the acetate part of the enzyme. The C3 atom of the substrate then forms a hydrogen bond with the C4 atom of NAD+. When the reaction occurs at the optimum pH, a proton is removed from the hydroxyl group of the substrate, which allows a carbonyl bond to form. Simultaneously, the hydride ion on the C3 atom of the substrate is transferred to the C4 atom of NAD+, thus forming acetoacetate and NADH.
Species distribution.
BHBDH is found in the rectal glands of dogfish sharks ("Squalus acanthias") and shows a large increase in activity after feeding. The largest and most significant peak of BHBDH activity in the rectal glands of the sharks occurred 4–8 hours after feeding. Besides dogfish, this enzyme is found in a wide range of organisms, from unicellular organisms to higher-order primates such as humans. In humans, this enzyme is used medically in diabetes patients to detect ketone bodies, which are associated with diabetic ketoacidosis. This is by no means an exhaustive list of organisms where BHBDH is found; these are merely some common examples of this enzyme in action.
Function.
In the dogfish shark, the main function of BHBDH is to help with the breakdown of ketone bodies in the cells. This function is supported by experimental evidence from starved dogfish sharks after they are fed. When starved, ketone levels in the sharks' bodies increase, especially after long-term starvation. Once they are fed, the presence of ketone bodies in the body declines rapidly. The rapid decline is correlated with significant elevations of BHBDH activity, which points towards this enzyme being very important for processing ketone bodies.
Structure.
There are currently two published crystal structures of BHBDH.
Both structures consist of 1 sheet, 5 beta-alpha-beta units, 7 strands, 9 beta turns, and 1 gamma turn. The two structures differ in the number of helices and helix-helix interactions: the first has 13 helices and 8 helix-helix interactions, while the second has 12 helices and 6 helix-helix interactions. Both structures have C2H6AsO2 ligands and bound magnesium ions, but they differ again in the interactions involving the metal: the first structure has an MG301(A) group, while the second has a 1301(A) group.
Active site.
The active site of the second structure has 2 tunnels: one with a radius of 1.21 Å and a length of 26.7 Å, and one with a radius of 1.19 Å and a length of 27.5 Å. The active site of the first structure has one tunnel with a radius of 1.14 Å and a length of 26.0 Å.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902649 |
13902658 | 3-Hydroxybutyryl-CoA dehydrogenase | Class of enzymes
In enzymology, a 3-hydroxybutyryl-CoA dehydrogenase (EC 1.1.1.157) is an enzyme that catalyzes the chemical reaction
("S")-3-hydroxybutanoyl-CoA + NADP+ formula_0 3-acetoacetyl-CoA + NADPH + H+
Thus, the two substrates of this enzyme are ("S")-3-hydroxybutanoyl-CoA and NADP+; its 3 products are acetoacetyl-CoA, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, to be specific, those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is ("S")-3-hydroxybutanoyl-CoA:NADP+ oxidoreductase. Other names in common use include beta-hydroxybutyryl coenzyme A dehydrogenase, (+)-3-hydroxybutyryl-CoA dehydrogenase, BHBD, 3-hydroxybutyryl coenzyme A dehydrogenase (nicotinamide adenine dinucleotide phosphate), and beta-hydroxybutyryl-CoA dehydrogenase. This enzyme participates in benzoate degradation via CoA ligation and butanoate metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902658 |
13902669 | 3-hydroxyisobutyrate dehydrogenase | Protein-coding gene in the species Homo sapiens
In enzymology, a 3-hydroxyisobutyrate dehydrogenase (EC 1.1.1.31) also known as β-hydroxyisobutyrate dehydrogenase or 3-hydroxyisobutyrate dehydrogenase, mitochondrial (HIBADH) is an enzyme that in humans is encoded by the "HIBADH" gene.
3-Hydroxyisobutyrate dehydrogenase catalyzes the chemical reaction:
3-hydroxy-2-methylpropanoate + NAD+ formula_0 2-methyl-3-oxopropanoate + NADH + H+
Thus, the two substrates of this enzyme are 3-hydroxy-2-methylpropanoate and NAD+, whereas its 3 products are 2-methyl-3-oxopropanoate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-hydroxy-2-methylpropanoate:NAD+ oxidoreductase. This enzyme participates in valine, leucine and isoleucine degradation.
Function.
3-hydroxyisobutyrate dehydrogenase is a tetrameric mitochondrial enzyme that catalyzes the NAD+-dependent, reversible oxidation of 3-hydroxyisobutyrate, an intermediate of valine catabolism, to methylmalonate semialdehyde.
Structural studies.
As of late 2007, five structures have been solved for this class of enzymes, with PDB accession codes 1WP4, 2CVZ, 2GF2, 2H78, and 2I9P.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902669 |
13902687 | 3-hydroxypimeloyl-CoA dehydrogenase | Class of enzymes
In enzymology, a 3-hydroxypimeloyl-CoA dehydrogenase (EC 1.1.1.259) is an enzyme that catalyzes the chemical reaction
3-hydroxypimeloyl-CoA + NAD+ formula_0 3-oxopimeloyl-CoA + NADH + H+
Thus, the two substrates of this enzyme are 3-hydroxypimeloyl-CoA and NAD+, whereas its 3 products are 3-oxopimeloyl-CoA, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-hydroxypimeloyl-CoA:NAD+ oxidoreductase. This enzyme participates in benzoate degradation via CoA ligation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902687 |
13902699 | 3-hydroxypropionate dehydrogenase | Class of enzymes
In enzymology, a 3-hydroxypropionate dehydrogenase (EC 1.1.1.59) is an enzyme that catalyzes the chemical reaction
3-hydroxypropanoate + NAD+ formula_0 3-oxopropanoate + NADH + H+
Thus, the two substrates of this enzyme are 3-hydroxypropanoate and NAD+, whereas its 3 products are 3-oxopropanoate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-hydroxypropanoate:NAD+ oxidoreductase. This enzyme participates in beta-alanine metabolism and propanoate metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902699 |
13902711 | 3-Ketosteroid reductase | Enzyme
In enzymology, a 3-keto-steroid reductase (EC 1.1.1.270) is an enzyme that catalyzes the chemical reaction
4alpha-methyl-5alpha-cholest-7-en-3beta-ol + NADP+ formula_0 4alpha-methyl-5alpha-cholest-7-en-3-one + NADPH + H+
Thus, the two substrates of this enzyme are 4alpha-methyl-5alpha-cholest-7-en-3beta-ol and NADP+, whereas its 3 products are 4alpha-methyl-5alpha-cholest-7-en-3-one, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3beta-hydroxy-steroid:NADP+ 3-oxidoreductase. This enzyme is also called 3-KSR. This enzyme participates in biosynthesis of steroids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902711 |
13902729 | 3-methylbutanal reductase | Class of enzymes
In enzymology, a 3-methylbutanal reductase (EC 1.1.1.265) is an enzyme that catalyzes the chemical reaction
3-methylbutan-1-ol + NAD(P)+ formula_0 3-methylbutanal + NAD(P)H + H+
The three substrates of this enzyme are 3-methylbutan-1-ol (isoamyl alcohol), NAD+, and NADP+, whereas its four products are 3-methylbutanal, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-methylbutanol:NAD(P)+ oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902729 |
13902833 | 5-amino-6-(5-phosphoribosylamino)uracil reductase | Class of enzymes
In enzymology, a 5-amino-6-(5-phosphoribosylamino)uracil reductase (EC 1.1.1.193) is an enzyme that catalyzes the chemical reaction
5-amino-6-(5-phosphoribitylamino)uracil + NADP+ formula_0 5-amino-6-(5-phosphoribosylamino)uracil + NADPH + H+
Thus, the two substrates of this enzyme are 5-amino-6-(5-phosphoribitylamino)uracil and NADP+, whereas its 3 products are 5-amino-6-(5-phosphoribosylamino)uracil, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 5-amino-6-(5-phosphoribitylamino)uracil:NADP+ 1'-oxidoreductase. This enzyme is also called aminodioxyphosphoribosylaminopyrimidine reductase. This enzyme participates in riboflavin metabolism.
Structural studies.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 2AZN, 2B3Z, 2D5N, 2G6V, 2HXV, 2O7P, and 2OBC.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902833 |
13902842 | 6-endo-hydroxycineole dehydrogenase | Class of enzymes
In enzymology, a 6-endo-hydroxycineole dehydrogenase (EC 1.1.1.241) is an enzyme that catalyzes the chemical reaction
6-endo-hydroxycineole + NAD+ formula_0 6-oxocineole + NADH + H+
Thus, the two substrates of this enzyme are 6-endo-hydroxycineole and NAD+, whereas its 3 products are 6-oxocineole, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 6-endo-hydroxycineole:NAD+ 6-oxidoreductase. This enzyme participates in terpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902842 |
13902853 | 6-hydroxyhexanoate dehydrogenase | Class of enzymes
In enzymology, a 6-hydroxyhexanoate dehydrogenase (EC 1.1.1.258) is an enzyme that catalyzes the chemical reaction
6-hydroxyhexanoate + NAD+ formula_0 6-oxohexanoate + NADH + H+
Thus, the two substrates of this enzyme are 6-hydroxyhexanoate and NAD+, whereas its 3 products are 6-oxohexanoate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 6-hydroxyhexanoate:NAD+ oxidoreductase. This enzyme participates in caprolactam degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902853 |
13902864 | 6-pyruvoyltetrahydropterin 2'-reductase | Class of enzymes
In enzymology, a 6-pyruvoyltetrahydropterin 2'-reductase (EC 1.1.1.220) is an enzyme that catalyzes the chemical reaction
6-lactoyl-5,6,7,8-tetrahydropterin + NADP+ formula_0 6-pyruvoyltetrahydropterin + NADPH + H+
Thus, the two substrates of this enzyme are 6-lactoyl-5,6,7,8-tetrahydropterin and NADP+, whereas its 3 products are 6-pyruvoyltetrahydropterin, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 6-lactoyl-5,6,7,8-tetrahydropterin:NADP+ 2'-oxidoreductase. Other names in common use include 6-pyruvoyltetrahydropterin reductase, 6PPH4(2'-oxo) reductase, 6-pyruvoyl tetrahydropterin (2'-oxo)reductase, 6-pyruvoyl-tetrahydropterin 2'-reductase, and pyruvoyl-tetrahydropterin reductase. This enzyme participates in folate biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902864 |
13902875 | 7alpha-hydroxysteroid dehydrogenase | Class of enzymes
In enzymology, a 7alpha-hydroxysteroid dehydrogenase (EC 1.1.1.159) is an enzyme that catalyzes the chemical reaction
3alpha,7alpha,12alpha-trihydroxy-5beta-cholanate + NAD+ formula_0 3alpha,12alpha-dihydroxy-7-oxo-5beta-cholanate + NADH + H+
Thus, the two substrates of this enzyme are 3alpha,7alpha,12alpha-trihydroxy-5beta-cholanate and NAD+, whereas its 3 products are 3alpha,12alpha-dihydroxy-7-oxo-5beta-cholanate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 7alpha-hydroxysteroid:NAD+ 7-oxidoreductase. Other names in common use include 7alpha-hydroxy steroid dehydrogenase, and 7alpha-HSDH.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1AHH, 1AHI, and 1FMC.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902875 |
1390288 | Loop group | In mathematics, a loop group (not to be confused with a loop) is a group of loops in a topological group "G" with multiplication defined pointwise.
Definition.
In its most general form a loop group is a group of continuous mappings from a manifold "M" to a topological group "G".
More specifically, let "M" = "S"1, the circle in the complex plane, and let "LG" denote the space of continuous maps "S"1 → "G", i.e.
formula_0
equipped with the compact-open topology. An element of "LG" is called a "loop" in "G".
Pointwise multiplication of such loops gives "LG" the structure of a topological group. Parametrize "S"1 with θ,
formula_1
and define multiplication in "LG" by
formula_2
Associativity follows from associativity in "G". The inverse is given by
formula_3
and the identity by
formula_4
The space "LG" is called the free loop group on "G". A loop group is any subgroup of the free loop group "LG".
Examples.
An important example of a loop group is the group
formula_5
of based loops on "G". It is defined to be the kernel of the evaluation map
formula_6,
and hence is a closed normal subgroup of "LG". (Here, "e"1 is the map that sends a loop to its value at formula_7.) Note that we may embed "G" into "LG" as the subgroup of constant loops. Consequently, we arrive at a split exact sequence
formula_8.
The space "LG" splits as a semi-direct product,
formula_9.
We may also think of Ω"G" as the loop space on "G". From this point of view, Ω"G" is an H-space with respect to concatenation of loops. On the face of it, this seems to provide Ω"G" with two very different product maps. However, it can be shown that concatenation and pointwise multiplication are homotopic. Thus, in terms of the homotopy theory of Ω"G", these maps are interchangeable.
Loop groups were used to explain the phenomenon of Bäcklund transforms in soliton equations by Chuu-Lian Terng and Karen Uhlenbeck.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "LG = \\{\\gamma:S^1 \\to G|\\gamma \\in C(S^1, G)\\},"
},
{
"math_id": 1,
"text": "\\gamma:\\theta \\in S^1 \\mapsto \\gamma(\\theta) \\in G,"
},
{
"math_id": 2,
"text": "(\\gamma_1 \\gamma_2)(\\theta) \\equiv \\gamma_1(\\theta)\\gamma_2(\\theta)."
},
{
"math_id": 3,
"text": "\\gamma^{-1}:\\gamma^{-1}(\\theta) \\equiv \\gamma(\\theta)^{-1},"
},
{
"math_id": 4,
"text": "e:\\theta \\mapsto e \\in G."
},
{
"math_id": 5,
"text": "\\Omega G \\,"
},
{
"math_id": 6,
"text": "e_1: LG \\to G,\\gamma\\mapsto \\gamma(1)"
},
{
"math_id": 7,
"text": "1 \\in S^1"
},
{
"math_id": 8,
"text": "1\\to \\Omega G \\to LG \\to G\\to 1"
},
{
"math_id": 9,
"text": "LG = \\Omega G \\rtimes G"
}
]
| https://en.wikipedia.org/wiki?curid=1390288 |
13902887 | 7beta-hydroxysteroid dehydrogenase (NADP+) | Class of enzymes
In enzymology, a 7beta-hydroxysteroid dehydrogenase (NADP+) (EC 1.1.1.201) is an enzyme that catalyzes the chemical reaction
a 7beta-hydroxysteroid + NADP+ formula_0 a 7-oxosteroid + NADPH + H+
Thus, the two substrates of this enzyme are 7beta-hydroxysteroid and NADP+, whereas its 3 products are 7-oxosteroid, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 7beta-hydroxysteroid:NADP+ 7-oxidoreductase. Other names in common use include NADP+-dependent 7beta-hydroxysteroid dehydrogenase, and 7beta-hydroxysteroid dehydrogenase (NADP+).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902887 |
13902902 | 8-oxocoformycin reductase | Class of enzymes
In enzymology, a 8-oxocoformycin reductase (EC 1.1.1.235) is an enzyme that catalyzes the chemical reaction
coformycin + NADP+ formula_0 8-oxocoformycin + NADPH + H+
Thus, the two substrates of this enzyme are coformycin and NADP+, whereas its 3 products are 8-oxocoformycin, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is coformycin:NADP+ 8-oxidoreductase. This enzyme is also called 8-ketodeoxycoformycin reductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902902 |
13902942 | (−)-borneol dehydrogenase | Class of enzymes
In enzymology, a (−)-borneol dehydrogenase (EC 1.1.1.227) is an enzyme that catalyzes the chemical reaction
(−)-borneol + NAD+ formula_0 (−)-camphor + NADH + H+
Thus, the two substrates of this enzyme are (−)-borneol and NAD+, whereas its 3 products are (−)-camphor, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (−)-borneol:NAD+ oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902942 |
13902956 | (+)-borneol dehydrogenase | Class of enzymes
In enzymology, a (+)-borneol dehydrogenase (EC 1.1.1.198) is an enzyme that catalyzes the chemical reaction
(+)-borneol + NAD+ formula_0 (+)-camphor + NADH + H+
Thus, the two substrates of this enzyme are (+)-borneol and NAD+, whereas its 3 products are (+)-camphor, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (+)-borneol:NAD+ oxidoreductase. This enzyme is also called bicyclic monoterpenol dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902956 |
13902968 | (−)-menthol dehydrogenase | Class of enzymes
A (−)-menthol dehydrogenase (EC 1.1.1.207) is an enzyme that catalyzes the chemical reaction
(−)-menthol + NADP+ formula_0 (−)-menthone + NADPH + H+,
i.e., catalyzes the breakdown of menthol. Thus, the two substrates of this enzyme are (−)-menthol and NADP+, whereas its 3 products are (−)-menthone, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (−)-menthol:NADP+ oxidoreductase. This enzyme is also called monoterpenoid dehydrogenase. This enzyme participates in monoterpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902968 |
13902980 | (+)-neomenthol dehydrogenase | Class of enzymes
In enzymology, a (+)-neomenthol dehydrogenase (EC 1.1.1.208) is an enzyme that catalyzes the chemical reaction
(+)-neomenthol + NADP+ formula_0 (−)-menthone + NADPH + H+
Thus, the two substrates of this enzyme are (+)-neomenthol and NADP+, whereas its 3 products are (−)-menthone, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (+)-neomenthol:NADP+ oxidoreductase. This enzyme is also called monoterpenoid dehydrogenase. This enzyme participates in monoterpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902980 |
13902994 | (R)-2-hydroxyacid dehydrogenase | Class of enzymes
In enzymology, a (R)-2-hydroxyacid dehydrogenase (EC 1.1.1.272) is an enzyme that catalyzes the chemical reaction
(2R)-3-sulfolactate + NAD(P)+ formula_0 3-sulfopyruvate + NAD(P)H + H+
The 3 substrates of this enzyme are (2R)-3-sulfolactic acid, NAD+, and NADP+, whereas its 4 products are 3-sulfopyruvic acid, NADH, NADPH, and H+. This enzyme is important in the metabolism of archaea, particularly their biosynthesis of coenzymes such as coenzyme M, tetrahydromethanopterin and methanofuran.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-2-hydroxyacid:NAD(P)+ oxidoreductase. Other names in common use include (R)-sulfolactate:NAD(P)+ oxidoreductase, L-sulfolactate dehydrogenase, ComC, and (R)-sulfolactate dehydrogenase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1RFM.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13902994 |
13903000 | (R)-2-hydroxy-fatty-acid dehydrogenase | Class of enzymes
In enzymology, a (R)-2-hydroxy-fatty-acid dehydrogenase (EC 1.1.1.98) is an enzyme that catalyzes the chemical reaction
(R)-2-hydroxystearate + NAD+ formula_0 2-oxostearate + NADH + H+
Thus, the two substrates of this enzyme are (R)-2-hydroxystearate and NAD+, whereas its 3 products are 2-oxostearate, NADH, and H+. This reaction is important in fatty acid metabolism.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-2-hydroxystearate:NAD+ oxidoreductase. Other names in common use include D-2-hydroxy fatty acid dehydrogenase, and 2-hydroxy fatty acid oxidase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903000 |
13903022 | (R)-3-hydroxyacid-ester dehydrogenase | Class of enzymes
In enzymology, a (R)-3-hydroxyacid-ester dehydrogenase (EC 1.1.1.279) is an enzyme that catalyzes the chemical reaction
ethyl (R)-3-hydroxyhexanoate + NADP+ formula_0 ethyl 3-oxohexanoate + NADPH + H+
Thus, the two substrates of this enzyme are ethyl (R)-3-hydroxyhexanoate and NADP+, whereas its 3 products are ethyl 3-oxohexanoate, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is ethyl-(R)-3-hydroxyhexanoate:NADP+ 3-oxidoreductase. This enzyme is also called 3-oxo ester (R)-reductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903022 |
13903029 | (R)-4-hydroxyphenyllactate dehydrogenase | Class of enzymes
(R)-4-hydroxyphenyllactate dehydrogenase (EC 1.1.1.222) is an enzyme that catalyzes a chemical reaction
(R)-3-(4-hydroxyphenyl)lactate + NAD(P)+ formula_0 3-(4-hydroxyphenyl)pyruvate + NAD(P)H + H+
The 3 substrates of this enzyme are (R)-3-(4-hydroxyphenyl)lactate, NAD+, and NADP+, whereas its 4 products are 3-(4-hydroxyphenyl)pyruvate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-3-(4-hydroxyphenyl)lactate:NAD(P)+ 2-oxidoreductase. Other names in common use include (R)-aromatic lactate dehydrogenase and D-aryllactate dehydrogenase. This enzyme participates in tyrosine and phenylalanine catabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903029 |
13903044 | (R)-aminopropanol dehydrogenase | Class of enzymes
In enzymology, a (R)-aminopropanol dehydrogenase (EC 1.1.1.75) is an enzyme that catalyzes the chemical reaction
(R)-1-aminopropan-2-ol + NAD+ formula_0 aminoacetone + NADH + H+
Thus, the two substrates of this enzyme are (R)-1-aminopropan-2-ol and NAD+, whereas its 3 products are aminoacetone, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-1-aminopropan-2-ol:NAD+ oxidoreductase. Other names in common use include L-aminopropanol dehydrogenase, 1-aminopropan-2-ol-NAD+ dehydrogenase, L(+)-1-aminopropan-2-ol:NAD+ oxidoreductase, 1-aminopropan-2-ol-dehydrogenase, DL-1-aminopropan-2-ol: NAD+ dehydrogenase, and L(+)-1-aminopropan-2-ol-NAD+/NADP+ oxidoreductase. This enzyme participates in glycine, serine and threonine metabolism. It requires potassium as a cofactor.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903044 |
13903051 | 3-(imidazol-5-yl)lactate dehydrogenase | Class of enzymes
In enzymology, a 3-(imidazol-5-yl)lactate dehydrogenase (EC 1.1.1.111) is an enzyme that catalyzes the chemical reaction
(S)-3-(imidazol-5-yl)lactate + NAD(P)+ formula_0 3-(imidazol-5-yl)pyruvate + NAD(P)H + H+
The 3 substrates of this enzyme are (S)-3-(imidazol-5-yl)lactate, NAD+, and NADP+, whereas its 4 products are 3-(imidazol-5-yl)pyruvate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (S)-3-(imidazol-5-yl)lactate:NAD(P)+ oxidoreductase. This enzyme is also called imidazol-5-yl lactate dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903051 |
13903055 | (S)-2-hydroxy-fatty-acid dehydrogenase | Class of enzymes
In enzymology, a (S)-2-hydroxy-fatty-acid dehydrogenase (EC 1.1.1.99) is an enzyme that catalyzes the chemical reaction
(S)-2-hydroxystearate + NAD+ formula_0 2-oxostearate + NADH + H+
Thus, the two substrates of this enzyme are (S)-2-hydroxystearate and NAD+, whereas its 3 products are 2-oxostearate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (S)-2-hydroxystearate:NAD+ oxidoreductase. Other names in common use include dehydrogenase, L-2-hydroxy fatty acid, L-2-hydroxy fatty acid dehydrogenase, and 2-hydroxy fatty acid oxidase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903055 |
13903064 | (S)-3-hydroxyacid-ester dehydrogenase | Class of enzymes
In enzymology, a (S)-3-hydroxyacid-ester dehydrogenase (EC 1.1.1.280) is an enzyme that catalyzes the chemical reaction
ethyl (S)-3-hydroxyhexanoate + NADP+ formula_0 ethyl 3-oxohexanoate + NADPH + H+
Thus, the two substrates of this enzyme are ethyl (S)-3-hydroxyhexanoate and NADP+, whereas its 3 products are ethyl 3-oxohexanoate, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is ethyl-(S)-3-hydroxyhexanoate:NADP+ 3-oxidoreductase. This enzyme is also called 3-oxo ester (S)-reductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903064 |
13903078 | (+)-sabinol dehydrogenase | Class of enzymes
In enzymology, a (+)-sabinol dehydrogenase (EC 1.1.1.228) is an enzyme that catalyzes the chemical reaction
(+)-cis-sabinol + NAD+ formula_0 (+)-sabinone + NADH + H+
Thus, the two substrates of this enzyme are (+)-cis-sabinol and NAD+, whereas its 3 products are (+)-sabinone, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (+)-cis-sabinol:NAD+ oxidoreductase. This enzyme is also called (+)-cis-sabinol dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903078 |
13903087 | 3-oxoacyl-(acyl-carrier-protein) reductase | Enzyme
In enzymology, a 3-oxoacyl-[acyl-carrier-protein] reductase (EC 1.1.1.100) is an enzyme that catalyzes the chemical reaction
3-oxoacyl-[acyl-carrier-protein](ACP) + NADPH + H+ formula_0 (3"R")-3-hydroxyacyl-[acyl-carrier-protein](ACP) + NADP+
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group as hydride donor with NAD+ or NADP+ as hydride acceptor. The systematic name of this enzyme class is (3R)-3-hydroxyacyl-[acyl-carrier-protein]:NADP+ oxidoreductase. Other names in common use include beta-ketoacyl-[acyl-carrier protein](ACP) reductase, beta-ketoacyl acyl carrier protein (ACP) reductase, beta-ketoacyl reductase, beta-ketoacyl thioester reductase, beta-ketoacyl-ACP reductase, beta-ketoacyl-acyl carrier protein reductase, 3-ketoacyl acyl carrier protein reductase, 3-ketoacyl ACP reductase, NADPH-specific 3-oxoacyl-[acylcarrier protein]reductase, and 3-oxoacyl-[ACP]reductase. This enzyme participates in fatty acid biosynthesis and polyunsaturated fatty acid biosynthesis.
Structural studies.
As of late 2007, 21 structures have been solved for this class of enzymes, with PDB accession codes 1I01, 1O5I, 1Q7B, 1Q7C, 1ULS, 1UZL, 1UZM, 1UZN, 2A4K, 2B4Q, 2C07, 2FR0, 2FR1, 2NM0, 2NTN, 2P68, 2PFF, 2PH3, 2PNF, 2UVD, and 2Z5L.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903087 |
13903088 | (S)-carnitine 3-dehydrogenase | Class of enzymes
In enzymology, a (S)-carnitine 3-dehydrogenase (EC 1.1.1.254) is an enzyme that catalyzes the chemical reaction
(S)-carnitine + NAD+ formula_0 3-dehydrocarnitine + NADH + H+
Thus, the two substrates of this enzyme are (S)-carnitine and NAD+, whereas its 3 products are 3-dehydrocarnitine, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (S)-carnitine:NAD+ oxidoreductase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903088 |
13903099 | (S,S)-butanediol dehydrogenase | Class of enzymes
In enzymology, a (S,S)-butanediol dehydrogenase (EC 1.1.1.76) is an enzyme that catalyzes the chemical reaction
(S,S)-butane-2,3-diol + NAD+ formula_0 acetoin + NADH + H+
Thus, the two substrates of this enzyme are (S,S)-butane-2,3-diol and NAD+, whereas its 3 products are acetoin, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (S,S)-butane-2,3-diol:NAD+ oxidoreductase. Other names in common use include L-butanediol dehydrogenase, L-BDH, and L(+)-2,3-butanediol dehydrogenase (L-acetoin forming). This enzyme participates in butanoic acid metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903099 |
13903106 | (S)-usnate reductase | Class of enzymes
In enzymology, a ("S")-usnate reductase (EC 1.1.1.199) is an enzyme that catalyzes the chemical reaction
(6"R")-2-acetyl-6-(3-acetyl-2,4,6-trihydroxy-5-methylphenyl)-3-hydroxy-6- methyl-2,4-cyclohexadien-1-one + NAD+ formula_0 ("S")-usnic acid + NADH + H+
In the reverse direction, ("S")-usnate is reduced by NADH with cleavage of the ether bond to form a 7-hydroxy group.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is reduced-("S")-usnate:NAD+ oxidoreductase (ether-bond-forming). This enzyme is also called L-usnic acid dehydrogenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903106 |
13903113 | 3-oxoacyl-(acyl-carrier-protein) reductase (NADH) | Class of enzymes
In enzymology, a 3-oxoacyl-[acyl-carrier-protein] reductase (NADH) (EC 1.1.1.212) is an enzyme that catalyzes the chemical reaction
(3R)-3-hydroxyacyl-[acyl-carrier-protein] + NAD+ formula_0 3-oxoacyl-[acyl-carrier-protein] + NADH + H+
Thus, the two substrates of this enzyme are (3R)-3-hydroxyacyl-[acyl-carrier-protein] and NAD+, whereas its 3 products are 3-oxoacyl-[acyl-carrier-protein], NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (3R)-3-hydroxyacyl-[acyl-carrier-protein]:NAD+ oxidoreductase. Other names in common use include 3-oxoacyl-[acyl carrier protein] (reduced nicotinamide adenine dinucleotide) reductase, and 3-oxoacyl-[acyl-carrier-protein] reductase (NADH).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903113 |
13903120 | (+)-trans-carveol dehydrogenase | Class of enzymes
In enzymology, a (+)-trans-carveol dehydrogenase (EC 1.1.1.275) is an enzyme that catalyzes the chemical reaction
(+)-trans-carveol + NAD+ formula_0 (+)-(S)-carvone + NADH + H+
Thus, the two substrates of this enzyme are (+)-trans-carveol and NAD+, whereas its 3 products are (+)-(S)-carvone, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (+)-trans-carveol:NAD+ oxidoreductase. This enzyme is also called carveol dehydrogenase. This enzyme participates in monoterpenoid biosynthesis and the degradation of the terpenes limonene and pinene.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13903120 |
13903462 | Condenser (heat transfer) | System for condensing gas into liquid by cooling
In systems involving heat transfer, a condenser is a heat exchanger used to condense a gaseous substance into a liquid state through cooling. In doing so, the latent heat is released by the substance and transferred to the surrounding environment. Condensers are used for efficient heat rejection in many industrial systems. Condensers can be made according to numerous designs and come in many sizes ranging from rather small (hand-held) to very large (industrial-scale units used in plant processes). For example, a refrigerator uses a condenser to get rid of heat extracted from the interior of the unit to the outside air.
Condensers are used in air conditioning, industrial chemical processes such as distillation, steam power plants, and other heat-exchange systems. The use of cooling water or surrounding air as the coolant is common in many condensers.
History.
The earliest laboratory condenser, a "Gegenstromkühler" (counter-flow condenser), was invented in 1771 by the Swedish-German chemist Christian Weigel. By the mid-19th century, German chemist Justus von Liebig would provide his own improvements on the preceding designs of Weigel and Johann Friedrich August Göttling, with the device becoming known as the Liebig condenser.
Principle of operation.
A condenser is designed to transfer heat from a working fluid (e.g. water in a steam power plant) to a secondary fluid or the surrounding air. The condenser relies on the efficient heat transfer that occurs during phase changes, in this case during the condensation of a vapor into a liquid. The vapor typically enters the condenser at a temperature above that of the secondary fluid. As the vapor cools, it reaches the saturation temperature, condenses into liquid, and releases large quantities of latent heat. As this process occurs along the condenser, the quantity of vapor decreases and the quantity of liquid increases; at the outlet of the condenser, only liquid remains. Some condenser designs contain an additional length to subcool this condensed liquid below the saturation temperature.
Countless variations exist in condenser design, with design variables including the working fluid, the secondary fluid, the geometry, and the material. Common secondary fluids include water, air, refrigerants, or phase-change materials.
Condensers have two significant design advantages over other cooling technologies:
Examples of condensers.
Surface condenser.
A surface condenser is one in which the condensing medium and the vapors are physically separated; it is used when direct contact is not desired. It is a shell and tube heat exchanger installed at the outlet of every steam turbine in thermal power stations. Commonly, the cooling water flows through the tube side and the steam enters the shell side where the condensation occurs on the outside of the heat transfer tubes. The condensate drips down and collects at the bottom, often in a built-in pan called a "hotwell". The shell side often operates at a vacuum or partial vacuum, produced by the difference in specific volume between the steam and condensate. Conversely, the vapor can be fed through the tubes with the coolant water or air flowing around the outside.
Chemistry.
In chemistry, a condenser is the apparatus that cools hot vapors, causing them to condense into a liquid. Examples include the Liebig condenser, Graham condenser, and Allihn condenser. This is not to be confused with a condensation reaction which links two fragments into a single molecule by an addition reaction and an elimination reaction.
In laboratory distillation, reflux, and rotary evaporators, several types of condensers are commonly used. The Liebig condenser is simply a straight tube within a cooling water jacket and is the simplest (and relatively least expensive) form of condenser. The Graham condenser is a spiral tube within a water jacket, and the Allihn condenser has a series of large and small constrictions on the inside tube, each increasing the surface area upon which the vapor constituents may condense. Being more complex shapes to manufacture, these latter types are also more expensive to purchase. These three types of condensers are laboratory glassware items since they are typically made of glass. Commercially available condensers usually are fitted with ground glass joints and come in standard lengths of 100, 200, and 400 mm. Air-cooled condensers are unjacketed, while water-cooled condensers contain a jacket for the water.
Industrial distillation.
Larger condensers are also used in industrial-scale distillation processes to cool distilled vapor into liquid distillate. Commonly, the coolant flows through the tube side and distilled vapor through the shell side with distillate collecting at or flowing out the bottom.
Air conditioning.
A "condenser unit" used in central air conditioning systems typically has a heat exchanger section to cool down and condense incoming refrigerant vapor into liquid, a compressor to raise the pressure of the refrigerant and move it along, and a fan for blowing outside air through the heat exchanger section to cool the refrigerant inside. A typical configuration of such a condenser unit is as follows: The heat exchanger section wraps around the sides of the unit with the compressor inside. In this heat exchanger section, the refrigerant goes through multiple tube passes, which are surrounded by heat transfer fins through which cooling air can circulate from outside to inside the unit. This also increases the surface area. There is a motorized fan inside the condenser unit near the top, which is covered by some grating to keep any objects from accidentally falling inside on the fan. The fan is used to pull outside cooling air in through the heat exchanger section at the sides and blow it out the top through the grating. These condenser units are located on the outside of the building they are trying to cool, with tubing between the unit and building, one for vapor refrigerant entering and another for liquid refrigerant leaving the unit. Of course, an electric power supply is needed for the compressor and fan inside the unit.
Direct-contact.
In a "direct-contact condenser", hot vapor and cool liquid are introduced into a vessel and allowed to mix directly, rather than being separated by a barrier such as the wall of a heat exchanger tube. The vapor gives up its latent heat and condenses to a liquid, while the liquid absorbs this heat and undergoes a temperature rise. The entering vapor and liquid typically contain a single condensable substance, such as a water spray being used to cool air and adjust its humidity.
Equation.
For an ideal single-pass condenser whose coolant has constant density, constant heat capacity, linear enthalpy over the temperature range, perfect cross-sectional heat transfer, and zero longitudinal heat transfer, and whose tubing has constant perimeter, constant thickness, and constant heat conductivity, and whose condensible fluid is perfectly mixed and at a constant temperature, the coolant temperature varies along its tube according to:
formula_0
where:
formula_1 is the distance from the coolant inlet;
formula_2 is the coolant temperature at distance formula_1;
formula_3 is the temperature of the condensing fluid;
formula_4 is the number of transfer units;
formula_5 is the mass flow rate of the coolant;
formula_6 is the specific heat capacity of the coolant;
formula_7 is the heat transfer coefficient of the coolant tube;
formula_8 is the perimeter of the coolant tube;
formula_9 is the thermal conductance of the tube, often expressed as the product formula_10;
formula_11 is the length of the tube.
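As a numerical illustration, the exponential temperature profile can be evaluated directly. The parameter values below are illustrative assumptions, not data from the article:

```python
import math

def coolant_temperature(x, T_hot, T_in, h, P, m_dot, c):
    """Coolant temperature T(x) along an ideal single-pass condenser.

    Implements T(x) = T_hot - (T_hot - T_in) * exp(-h*P*x / (m_dot*c)),
    i.e. the coolant approaches the condensing temperature exponentially.
    """
    return T_hot - (T_hot - T_in) * math.exp(-h * P * x / (m_dot * c))

# Hypothetical values: steam condensing at 100 C, water entering at 20 C,
# h = 5 kW/(m^2 K), tube perimeter 0.1 m, flow 0.5 kg/s, c = 4186 J/(kg K)
T_out = coolant_temperature(x=2.0, T_hot=100.0, T_in=20.0,
                            h=5000.0, P=0.1, m_dot=0.5, c=4186.0)
```

Note that the coolant temperature can never exceed the condensing temperature; lengthening the tube only moves it asymptotically closer.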
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Theta(x) = \\frac{T_H-T(x)}{T_H-T(0)} = e^{-NTU} = e^{-\\frac{h P x}{\\dot{m} c}} = e^{-\\frac{G x}{\\dot{m} c L}} "
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "T(x)"
},
{
"math_id": 3,
"text": "T_H"
},
{
"math_id": 4,
"text": "NTU"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "c"
},
{
"math_id": 7,
"text": "h"
},
{
"math_id": 8,
"text": "P"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "UA"
},
{
"math_id": 11,
"text": "L"
}
]
| https://en.wikipedia.org/wiki?curid=13903462 |
13905493 | Czenakowski distance | The Czenakowski distance (sometimes shortened as CZD) is a per-pixel quality metric that estimates quality or similarity by measuring differences between pixels. Because it compares vectors with strictly non-negative elements, it is often used to compare colored images, as color values cannot be negative. This different approach has a better correlation with subjective quality assessment than PSNR.
Definition.
Androutsos et al. give the Czenakowski coefficient as follows:
formula_0
Where a pixel formula_1 is being compared to a pixel formula_2 on the "k"-th band of color – usually one for each of red, green and blue.
For a pixel matrix of size formula_3, the Czenakowski coefficient can be used in an arithmetic mean spanning all pixels to calculate the Czenakowski distance as follows:
formula_4
Where formula_5 is the "(i, j)"-th pixel of the "k"-th band of a color image and, similarly, formula_6 is the pixel that it is being compared to.
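The distance above can be sketched in NumPy as follows. The function name and the convention for pixel pairs that are all zero in both images (where the formula gives 0/0) are assumptions, not from the source:

```python
import numpy as np

def czenakowski_distance(a, b):
    """Mean Czenakowski distance between two M x N x 3 color images.

    Pixel values must be non-negative. Pixel pairs whose bands are all
    zero in both images are treated as identical (distance 0).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    num = 2.0 * np.minimum(a, b).sum(axis=-1)  # sum over the color bands
    den = (a + b).sum(axis=-1)
    # ratio stays 1 (per-pixel distance 0) where the denominator vanishes
    ratio = np.divide(num, den, out=np.ones_like(num), where=den > 0)
    return float(np.mean(1.0 - ratio))
```

Identical images give a distance of 0, while images with no overlapping color energy in any band give 1.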
Uses.
In the context of image forensics – for example, detecting whether an image has been manipulated – Rocha et al. report that the Czenakowski distance is a popular choice for Color Filter Array (CFA) identification.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d_z(i,j) = 1 - \\frac{ 2\\sum^{p}_{k=1} \\text{min}(x_{ik},\\ x_{jk})}{ \\sum^{p}_{k=1}( x_{ik} + x_{jk} ) }"
},
{
"math_id": 1,
"text": "x_i"
},
{
"math_id": 2,
"text": "x_j"
},
{
"math_id": 3,
"text": "M \\times N"
},
{
"math_id": 4,
"text": "\\frac{1}{MN}\\sum^{M-1}_{i=0}\\sum^{N-1}_{j=0}\\begin{pmatrix}1 - \\frac{ 2\\sum^{3}_{k=1} \\text{min}(A_k(i,j),\\ B_k(i,j))}{ \\sum^{3}_{k=1}( A_k(i,j) + B_k(i, j) ) }\\end{pmatrix}"
},
{
"math_id": 5,
"text": "A_k(i,j)"
},
{
"math_id": 6,
"text": "B_k(i,j)"
}
]
| https://en.wikipedia.org/wiki?curid=13905493 |
1391009 | Plasma stealth | Proposed aircraft stealth technology
Plasma stealth is a proposed process to use ionized gas (plasma) to reduce the radar cross-section (RCS) of an aircraft. Interactions between electromagnetic radiation and ionized gas have been extensively studied for many purposes, including concealing aircraft from radar as stealth technology. Various methods might plausibly be able to form a layer or cloud of plasma around a vehicle to deflect or absorb radar, from simpler electrostatic or radio frequency discharges to more complex laser discharges. It is theoretically possible to reduce RCS in this way, but it may be very difficult to do so in practice. Some Russian missiles e.g. the 3M22 Zircon (SS-N-33) and Kh-47M2 Kinzhal missiles have been reported to make use of plasma stealth.
First claims.
In 1956, Arnold Eldredge, of General Electric, filed a patent application for an "Object Camouflage Method and Apparatus," which proposed using a particle accelerator in an aircraft to create a cloud of ionization that would "...refract or absorb incident radar beams." It is unclear who funded this work or whether it was prototyped and tested. U.S. Patent 3,127,608 was granted in 1964.
During Project OXCART, the operation of the Lockheed A-12 reconnaissance aircraft, the CIA funded an attempt to reduce the RCS of the A-12's inlet cones. Known as Project KEMPSTER, this used an electron beam generator to create a cloud of ionization in front of each inlet. The system was flight tested but was never deployed on operational A-12s or SR-71s. The A-12 also had the capability to use a cesium-based fuel additive called "A-50" to ionize the exhaust gases, thus blocking radar waves from reflecting off the aft quadrant and engine exhaust pipes. Cesium was used because it was easily ionized by the hot exhaust gases. Radar physicist Ed Lovick Jr. claimed this additive saved the A-12 program.
In 1992, Hughes Research Laboratory conducted a research project to study electromagnetic wave propagation in unmagnetized plasma. A series of high voltage spark gaps were used to generate UV radiation, which creates plasma via photoionization in a waveguide. Plasma filled missile radomes were tested in an anechoic chamber for attenuation of reflection. At about the same time, R. J. Vidmar studied the use of atmospheric pressure plasma as electromagnetic reflectors and absorbers. Other investigators also studied the case of a non-uniform magnetized plasma slab.
Despite the apparent technical difficulty of designing a plasma stealth device for combat aircraft, there are claims that a system was offered for export by Russia in 1999. In January 1999, the Russian ITAR-TASS news agency published an interview with Doctor Anatoliy Koroteyev, the director of the Keldysh Research Center (FKA Scientific Research Institute for Thermal Processes), who talked about the plasma stealth device developed by his organization. The claim was particularly interesting in light of the solid scientific reputation of Dr. Koroteyev and the Institute for Thermal Processes, which is one of the top scientific research organizations in the world in the field of fundamental physics.
The "Journal of Electronic Defense" reported that "plasma-cloud-generation technology for stealth applications" developed in Russia reduces an aircraft's RCS by a factor of 100 (20 dB). According to this June 2002 article, the Russian plasma stealth device has been tested aboard a Sukhoi Su-27IB fighter-bomber. The Journal also reported that similar research into applications of plasma for RCS reduction is being carried out by Accurate Automation Corporation (Chattanooga, Tennessee) and Old Dominion University (Norfolk, Virginia) in the U.S.; and by Dassault Aviation (Saint-Cloud, France) and Thales (Paris, France).
Plasma and its properties.
A plasma is a "quasineutral" (total electrical charge is close to zero) mix of ions (atoms which have been ionized, and therefore possess a net positive charge), electrons, and neutral particles (un-ionized atoms or molecules). Most plasmas are only partially ionized, in fact, the ionization degree of common plasma devices like fluorescent lamp is fairly low ( less than 1%). Almost all the matter in the universe is very low density plasma: solids, liquids and gases are uncommon away from planetary bodies. Plasmas have many technological applications, from fluorescent lighting to plasma processing for semiconductor manufacture.
Plasmas can interact strongly with electromagnetic radiation: this is why plasmas might plausibly be used to modify an object's radar signature. Interaction between plasma and electromagnetic radiation is strongly dependent on the physical properties and parameters of the plasma, most notably the electron temperature and plasma density.
formula_0
Plasmas can have a wide range of values in both temperature and density; plasma temperatures range from close to absolute zero to well beyond 10^9 kelvins (for comparison, tungsten melts at 3700 kelvins), and plasma may contain less than one particle per cubic metre. Electron temperature is usually expressed in electronvolts (eV), and 1 eV is equivalent to 11,604 K. Typical plasma temperatures and densities in fluorescent light tubes and semiconductor manufacturing processes are around several eV and 10^9 to 10^12 particles per cm^3. For a wide range of parameters and frequencies, plasma is electrically conductive, and its response to low-frequency electromagnetic waves is similar to that of a metal: a plasma simply reflects incident low-frequency radiation. Here, low-frequency means lower than the characteristic electron plasma frequency. The use of plasmas to control the reflected electromagnetic radiation from an object (plasma stealth) is feasible at suitable frequencies where the conductivity of the plasma allows it to interact strongly with the incoming radio wave; the wave can then be absorbed and converted into thermal energy, reflected, or transmitted depending on the relationship between the radio wave frequency and the characteristic plasma frequency. If the frequency of the radio wave is lower than the plasma frequency, it is reflected; if it is higher, it is transmitted; if the two are equal, resonance occurs. There is also another mechanism by which reflection can be reduced: if the electromagnetic wave passes through the plasma and is reflected by the metal, and the reflected wave and incoming wave are roughly equal in power, then they form two phasors, and when these two phasors are of opposite phase they can cancel each other out. In order to obtain substantial attenuation of the radar signal, the plasma slab needs adequate thickness and density.
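To make the frequency relationship concrete, the article's approximation for the electron plasma frequency can be evaluated numerically. This is a sketch: the function names are mine, and densities are in particles per cm^3 as in the article's formula:

```python
import math

def plasma_frequency_hz(n_e_cm3):
    """Electron plasma frequency, f_pe ~ 9000 * sqrt(n_e) Hz, n_e in cm^-3."""
    return 9000.0 * math.sqrt(n_e_cm3)

def critical_density_cm3(f_hz):
    """Electron density above which a wave of frequency f_hz is reflected."""
    return (f_hz / 9000.0) ** 2

# A 10 GHz (X-band) radar wave is reflected only above roughly 1.2e12 cm^-3
n_crit = critical_density_cm3(10.0e9)
```

For comparison, a 9 MHz shortwave signal has a critical density of about 10^6 cm^-3, on the order of peak ionospheric electron densities, consistent with the reflection of low-frequency signals discussed below.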
Plasmas support a wide range of waves, but for unmagnetised plasmas, the most relevant are the Langmuir waves, corresponding to a dynamic compression of the electrons. For magnetised plasmas, many different wave modes can be excited which might interact with radiation at radar frequencies.
Absorption of EM radiation.
When electromagnetic waves, such as radar signals, propagate into a conductive plasma, ions and electrons are displaced as a result of the time varying electric and magnetic fields. The wave field gives energy to the particles. The particles generally return some fraction of the energy they have gained to the wave, but some energy may be permanently absorbed as heat by processes like scattering or resonant acceleration, or transferred into other wave types by mode conversion or nonlinear effects. A plasma can, at least in principle, absorb all the energy in an incoming wave, and this is the key to plasma stealth. However, plasma stealth implies a substantial reduction of an aircraft's RCS, making it more difficult (but not necessarily impossible) to detect. The mere fact of detection of an aircraft by a radar does not guarantee an accurate targeting solution needed to intercept the aircraft or to engage it with missiles. A reduction in RCS also results in a proportional reduction in detection range, allowing an aircraft to get closer to the radar before being detected.
The central issue here is the frequency of the incoming signal. A plasma will simply reflect radio waves below a certain frequency (the characteristic electron plasma frequency). This is the basic principle of short wave radios and long-range communications, because low-frequency radio signals bounce between the Earth and the ionosphere and may therefore travel long distances. Early-warning over-the-horizon radars utilize such low-frequency radio waves (typically lower than 50 MHz). Most military airborne and air defense radars, however, operate in the VHF, UHF, and microwave bands, which have frequencies higher than the characteristic plasma frequency of the ionosphere; such waves can therefore penetrate the ionosphere, as communication between the ground and communication satellites demonstrates.
Plasma surrounding an aircraft might be able to absorb incoming radiation and thereby reduce signal reflection from the metal parts of the aircraft: the aircraft would then be effectively invisible to radar at long range due to the weak signals received. A plasma might also be used to modify the reflected waves to confuse the opponent's radar system: for example, frequency-shifting the reflected radiation would frustrate Doppler filtering and might make the reflected radiation more difficult to distinguish from noise.
Control of plasma properties like density and temperature is important for a functioning plasma stealth device, and it may be necessary to dynamically adjust the plasma density, temperature, magnetic field, or combinations of these in order to effectively defeat different types of radar systems. The great advantage plasma stealth possesses over traditional radio frequency stealth techniques like low-observability geometry and the use of radar-absorbent materials is that plasma is tunable and wideband: when faced with frequency-hopping radar, it is possible, at least in principle, to change the plasma temperature and density to deal with the situation. The greatest challenge is to generate a large area or volume of plasma with good energy efficiency.
Plasma stealth technology also faces various technical problems. For example, the plasma itself emits EM radiation, although it is usually weak and noise-like in spectrum. Also, it takes some time for the plasma to be re-absorbed by the atmosphere, so a trail of ionized air is created behind the moving aircraft, but at present there is no method to detect this kind of plasma trail at long distance. Thirdly, plasmas (like glow discharges or fluorescent lights) tend to emit a visible glow, which is not compatible with the overall low-observability concept. Last but not least, it is extremely difficult to produce a radar-absorbent plasma around an entire aircraft traveling at high speed; the electrical power needed is tremendous. However, a substantial reduction of an aircraft's RCS may still be achieved by generating radar-absorbent plasma around its most reflective surfaces, such as the turbojet engine fan blades, engine air intakes, vertical stabilizers, and airborne radar antenna.
There have been several computational studies of plasma-based radar cross section reduction techniques using three-dimensional finite-difference time-domain simulations. Chung studied the change in radar cross section of a metal cone when it is covered with plasma, a phenomenon that occurs during reentry into the atmosphere. Chung also simulated the radar cross section of a generic satellite, both bare and when covered with artificially generated plasma cones.
Theoretical work with Sputnik.
Due to the obvious military applications of the subject, there are few readily available experimental studies of plasma's effect on the radar cross section (RCS) of aircraft, but plasma interaction with microwaves is a well explored area of general plasma physics. Standard plasma physics reference texts are a good starting point and usually spend some time discussing wave propagation in plasmas.
One of the most interesting articles related to the effect of plasma on the RCS of aircraft was published in 1963 by the IEEE. The article is entitled "Radar cross sections of dielectric or plasma coated conducting spheres and circular cylinders" (IEEE Transactions on Antennas and Propagation, September 1963, pp. 558–569). Six years earlier, in 1957, the Soviets had launched the first artificial satellite. While trying to track Sputnik it was noticed that its electromagnetic scattering properties were different from what was expected for a conductive sphere. This was due to the satellite's traveling inside of a plasma shell: the ionosphere.
The Sputnik's simple shape serves as an ideal illustration of plasma's effect on the RCS of an aircraft. Naturally, an aircraft would have a far more elaborate shape and be made of a greater variety of materials, but the basic effect should remain the same. In the case of the Sputnik flying through the ionosphere at high velocity and surrounded by a naturally occurring plasma shell, there are two separate radar reflections: the first from the conductive surface of the satellite, and the second from the dielectric plasma shell.
The authors of the paper found that a dielectric (plasma) shell may either decrease or increase the echo area of the object. If either one of the two reflections is considerably greater, then the weaker reflection will not contribute much to the overall effect. The authors also stated that the EM signal that penetrates the plasma shell and reflects off the object's surface will drop in intensity while traveling through plasma, as was explained in the prior section.
The most interesting effect is observed when the two reflections are of the same order of magnitude. In this situation the two components (the two reflections) will be added as phasors and the resulting field will determine the overall RCS. When these two components are out of phase relative to each other, cancellation occurs. This means that under such circumstances the RCS becomes null and the object is completely invisible to the radar.
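The cancellation condition can be illustrated with a toy phasor sum. The magnitudes and phases here are arbitrary assumptions for illustration, not values from the 1963 study:

```python
import cmath

# Two echoes of equal magnitude, modeled as complex phasors
surface_echo = 1.0 * cmath.exp(1j * 0.0)        # reflection from the conductor
shell_echo = 1.0 * cmath.exp(1j * cmath.pi)     # reflection from the plasma shell

total = surface_echo + shell_echo   # resultant field
# |total|^2 is proportional to the RCS; opposite phases give a null,
# while in-phase echoes would double the field (quadruple the RCS)
rcs_scale = abs(total) ** 2
```

With any other phase difference the null is only partial, which is why matching the two reflections in both magnitude and phase is the demanding part.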
It is immediately apparent that performing similar numeric approximations for the complex shape of an aircraft would be difficult. This would require a large body of experimental data for the specific airframe, properties of plasma, aerodynamic aspects, incident radiation, etc. In contrast, the original computations discussed in this paper were done by a handful of people on an IBM 704 computer made in 1956, at a time when this was a novel subject with very little research background. So much has changed in science and engineering since 1963 that the differences between a metal sphere and a modern combat jet pale in comparison.
A simple application of plasma stealth is the use of plasma as an antenna: metal antenna masts often have large radar cross sections, but a hollow glass tube filled with low pressure plasma can also be used as an antenna, and is entirely transparent to radar when not in use.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega_{pe} = (4\\pi n_ee^2/m_e)^{1/2} = 5.64 \\times 10^4 n_e^{1/2} \\mbox{rad/s} = 9000 \\times n_e^{1/2} \\mbox{Hz} "
}
]
| https://en.wikipedia.org/wiki?curid=1391009 |
13911197 | Katz's back-off model | Type of language model
Katz back-off is a generative "n"-gram language model that estimates the conditional probability of a word given its history in the "n"-gram. It accomplishes this estimation by "backing off" through progressively shorter history models under certain conditions. By doing so, the model with the most reliable information about a given history is used to provide better results.
The model was introduced in 1987 by Slava M. Katz. Prior to that, n-gram language models were constructed by training individual models for different n-gram orders using maximum likelihood estimation and then interpolating them together.
Method.
The equation for Katz's back-off model is:
formula_0
where
"C"("x") = number of times "x" appears in training
"w""i" = "i"th word in the given context
Essentially, this means that if the "n"-gram has been seen more than "k" times in training, the conditional probability of a word given its history is proportional to the maximum likelihood estimate of that "n"-gram. Otherwise, the conditional probability is equal to the back-off conditional probability of the ("n" − 1)-gram.
The more difficult part is determining the values for "k", "d" and "α".
formula_1 is the least important of the parameters. It is usually chosen to be 0. However, empirical testing may find better values for k.
formula_2 is typically the amount of discounting found by Good–Turing estimation. In other words, if Good–Turing estimates formula_3 as formula_4, then formula_5
To compute formula_6, it is useful to first define a quantity β, which is the left-over probability mass for the ("n" − 1)-gram:
formula_7
Then the back-off weight, α, is computed as follows:
formula_8
The above formula only applies if there is data for the ("n" − 1)-gram. If not, the algorithm skips that order entirely and uses the Katz estimate for the ("n" − 2)-gram, and so on, until an "n"-gram order with data is found.
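The recursion can be sketched for the bigram/unigram case. This is a simplified illustration, not Katz's full method: it assumes "k" = 0, uses a fixed discount "d" = 0.75 in place of the Good–Turing estimate, and backs off from bigrams straight to maximum-likelihood unigrams; the corpus is invented.

```python
from collections import Counter

def katz_bigram_model(tokens, d=0.75):
    """Katz back-off for bigrams with k = 0 and a fixed multiplicative
    discount d standing in for the Good-Turing estimate."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = sum(unigrams.values())
    vocab = set(unigrams)

    def p_unigram(w):
        return unigrams[w] / total

    def p_bigram(w, prev):
        if bigrams[(prev, w)] > 0:
            # Seen bigram: discounted maximum-likelihood estimate.
            return d * bigrams[(prev, w)] / unigrams[prev]
        # Unseen bigram: redistribute the left-over mass beta over the
        # unseen continuations, weighted by their unigram probabilities.
        beta = 1.0 - sum(d * c / unigrams[prev]
                         for (pr, _), c in bigrams.items() if pr == prev)
        unseen = [v for v in vocab if bigrams[(prev, v)] == 0]
        alpha = beta / sum(p_unigram(v) for v in unseen)
        return alpha * p_unigram(w)

    return p_bigram

corpus = "the cat sat on the mat the cat ran".split()
p = katz_bigram_model(corpus)
print(p("cat", "the"))                        # 0.5: seen bigram, discounted MLE
print(sum(p(w, "the") for w in set(corpus)))  # the distribution sums to 1
```

The key invariant is visible in the last line: the discounting removes probability mass from seen bigrams, and the back-off weight α redistributes exactly that mass, so the conditional distribution still sums to one.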
Discussion.
This model generally works well in practice, but fails in some circumstances. For example, suppose that the bigram "a b" and the unigram "c" are very common, but the trigram "a b c" is never seen. Since "a b" and "c" are very common, it may be significant (that is, not due to chance) that "a b c" is never seen. Perhaps it's not allowed by the rules of the grammar. Instead of assigning a more appropriate value of 0, the method will back off to the bigram and estimate "P"("c" | "b"), which may be too high. | [
{
"math_id": 0,
"text": "\n\\begin{align}\n& P_{bo} (w_i \\mid w_{i-n+1} \\cdots w_{i-1}) \\\\[4pt]\n= {} & \\begin{cases}\n d_{w_{i-n+1} \\cdots w_{i}} \\dfrac{C(w_{i-n+1} \\cdots w_{i-1}w_{i})}{C(w_{i-n+1} \\cdots w_{i-1})} & \\text{if } C(w_{i-n+1} \\cdots w_i) > k \\\\[10pt]\n \\alpha_{w_{i-n+1} \\cdots w_{i-1}} P_{bo}(w_i \\mid w_{i-n+2} \\cdots w_{i-1}) & \\text{otherwise}\n\\end{cases}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "C^*"
},
{
"math_id": 5,
"text": "d = \\frac{C^*}{C}"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": "\\beta_{w_{i-n+1} \\cdots w_{i -1}} = 1 - \\sum_{ \\{w_i : C(w_{i-n+1} \\cdots w_{i}) > k \\} } d_{w_{i-n+1} \\cdots w_{i}} \\frac{C(w_{i-n+1}\\cdots w_{i-1} w_{i})}{C(w_{i-n+1} \\cdots w_{i-1})} "
},
{
"math_id": 8,
"text": "\\alpha_{w_{i-n+1} \\cdots w_{i -1}} = \\frac{\\beta_{w_{i-n+1} \\cdots w_{i -1}}} {\\sum_{ \\{ w_i : C(w_{i-n+1} \\cdots w_{i}) \\leq k \\} } P_{bo}(w_i \\mid w_{i-n+2} \\cdots w_{i-1})}"
}
]
| https://en.wikipedia.org/wiki?curid=13911197 |
13911218 | Per cent mille | One-thousandth of a percent
A per cent mille or pcm is one one-thousandth of a percent. It can be thought of as a "milli-percent". It is commonly used in epidemiology, and in nuclear reactor engineering as a unit of reactivity.
Epidemiology.
Statistics of crime rates, mortality and disease prevalence in a population are often given in "per 100 000".
Nuclear reactivity.
In nuclear reactor engineering, a per cent mille is equal to one-thousandth of a percent of the reactivity, denoted by the Greek lowercase letter rho. Reactivity is a dimensionless quantity representing a departure from criticality, calculated by:
formula_0
where keff denotes the effective multiplication factor for the reaction. Therefore, reactivity expressed in pcm is:
formula_1
This unit is commonly used in the operation of light-water reactor sites because reactivity values tend to be small, so measuring in pcm allows reactivity to be expressed using whole numbers.
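The conversion can be sketched directly; the function names and the keff value below are illustrative choices, not standard notation.

```python
def reactivity(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff, dimensionless."""
    return (k_eff - 1.0) / k_eff

def reactivity_pcm(k_eff):
    """The same reactivity expressed in per cent mille (rho * 1e5)."""
    return reactivity(k_eff) * 1e5

# A slightly supercritical core (k_eff = 1.002, an illustrative value):
print(round(reactivity_pcm(1.002), 1))  # 199.6 pcm
# Exactly critical: zero reactivity.
print(reactivity_pcm(1.0))              # 0.0
```

This shows why the unit is convenient: a reactivity of roughly 0.002 becomes the whole-number-sized value 199.6 pcm.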
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho=(k_{\\text{eff}}-1)/k_{\\text{eff}}"
},
{
"math_id": 1,
"text": "1~\\text{pcm} = \\rho \\cdot 10^5"
}
]
| https://en.wikipedia.org/wiki?curid=13911218 |
13916072 | Symplectization | In mathematics, the symplectization of a contact manifold is a symplectic manifold which naturally corresponds to it.
Definition.
Let formula_0 be a contact manifold, and let formula_1. Consider the set
formula_2
of all nonzero 1-forms at formula_3, which have the contact plane formula_4 as their kernel. The union
formula_5
is a symplectic submanifold of the cotangent bundle of formula_6, and thus possesses a natural symplectic structure.
The projection formula_7 supplies the symplectization with the structure of a principal bundle over formula_6 with structure group formula_8.
The coorientable case.
When the contact structure formula_9 is cooriented by means of a contact form formula_10, there is another version of symplectization, in which only forms giving the same coorientation to formula_9 as formula_10 are considered:
formula_11
formula_12
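A choice of coorienting contact form gives a concrete model of the cooriented symplectization. This identification is standard in contact topology, though not spelled out above:

```latex
% The coorienting form \alpha trivializes S^{+}V:
S^{+}V \;\cong\; \mathbb{R} \times V, \qquad
(t,x) \longmapsto e^{t}\alpha_{x} \in S^{+}_{x}V .
% Under this identification the restriction of the canonical symplectic
% structure on T^{*}V becomes
\omega \;=\; d\bigl(e^{t}\,\pi^{*}\alpha\bigr)
       \;=\; e^{t}\bigl(dt \wedge \pi^{*}\alpha + \pi^{*}d\alpha\bigr),
% so the cooriented symplectization is often defined directly as
% (\mathbb{R} \times V,\; d(e^{t}\alpha)).
```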
Note that formula_9 is coorientable if and only if the bundle formula_7 is trivial. Any section of this bundle is a coorienting form for the contact structure. | [
{
"math_id": 0,
"text": "(V,\\xi)"
},
{
"math_id": 1,
"text": "x \\in V"
},
{
"math_id": 2,
"text": "S_xV = \\{\\beta \\in T^*_xV - \\{ 0 \\} \\mid \\ker \\beta = \\xi_x\\} \\subset T^*_xV"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "\\xi_x"
},
{
"math_id": 5,
"text": "SV = \\bigcup_{x \\in V}S_xV \\subset T^*V"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "\\pi : SV \\to V"
},
{
"math_id": 8,
"text": "\\R^* \\equiv \\R - \\{0\\}"
},
{
"math_id": 9,
"text": "\\xi"
},
{
"math_id": 10,
"text": "\\alpha"
},
{
"math_id": 11,
"text": "S^+_xV = \\{\\beta \\in T^*_xV - \\{0\\} \\,|\\, \\beta = \\lambda\\alpha,\\,\\lambda > 0\\} \\subset T^*_xV,"
},
{
"math_id": 12,
"text": "S^+V = \\bigcup_{x \\in V}S^+_xV \\subset T^*V."
}
]
| https://en.wikipedia.org/wiki?curid=13916072 |
1391942 | Orientation (vector space) | Choice of reference for distinguishing an object and its mirror image
The orientation of a real vector space or simply orientation of a vector space is the arbitrary choice of which ordered bases are "positively" oriented and which are "negatively" oriented. In the three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented, but the choice is arbitrary, as they may also be assigned a negative orientation. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected is called <templatestyles src="Template:Visible anchor/styles.css" />unoriented.
In mathematics, "orientability" is a broader notion that, in two dimensions, allows one to say when a cycle goes around clockwise or counterclockwise, and in three dimensions when a figure is left-handed or right-handed. In linear algebra over the real numbers, the notion of orientation makes sense in arbitrary finite dimension, and is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple displacement. Thus, in three dimensions, it is impossible to make the left hand of a human figure into the right hand of the figure by applying a displacement alone, but it is possible to do so by reflecting the figure in a mirror. As a result, in the three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed (or right-chiral and left-chiral).
Definition.
Let "V" be a finite-dimensional real vector space and let "b"1 and "b"2 be two ordered bases for "V". It is a standard result in linear algebra that there exists a unique linear transformation "A" : "V" → "V" that takes "b"1 to "b"2. The bases "b"1 and "b"2 are said to have the "same orientation" (or be consistently oriented) if "A" has positive determinant; otherwise they have "opposite orientations". The property of having the same orientation defines an equivalence relation on the set of all ordered bases for "V". If "V" is non-zero, there are precisely two equivalence classes determined by this relation. An orientation on "V" is an assignment of +1 to one equivalence class and −1 to the other.
Every ordered basis lives in one equivalence class or another. Thus any choice of a privileged ordered basis for "V" determines an orientation: the orientation class of the privileged basis is declared to be positive.
For example, the standard basis on R"n" provides a standard orientation on R"n" (in turn, the orientation of the standard basis depends on the orientation of the Cartesian coordinate system on which it is built). Any choice of a linear isomorphism between "V" and R"n" will then provide an orientation on "V".
The ordering of elements in a basis is crucial. Two bases with a different ordering will differ by some permutation. They will have the same/opposite orientations according to whether the signature of this permutation is ±1. This is because the determinant of a permutation matrix is equal to the signature of the associated permutation.
Similarly, let "A" be a nonsingular linear mapping of vector space R"n" to R"n". This mapping is orientation-preserving if its determinant is positive. For instance, in R3 a rotation around the "Z" Cartesian axis by an angle "α" is orientation-preserving:
formula_0
while a reflection by the "XY" Cartesian plane is not orientation-preserving:
formula_1
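Both claims can be checked numerically; the angle α = 30° is an arbitrary illustrative choice.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a = math.radians(30)  # arbitrary rotation angle
rotation = [[math.cos(a), -math.sin(a), 0.0],
            [math.sin(a),  math.cos(a), 0.0],
            [0.0,          0.0,         1.0]]
reflection = [[1.0, 0.0,  0.0],
              [0.0, 1.0,  0.0],
              [0.0, 0.0, -1.0]]

print(round(det3(rotation), 10))    # 1.0  -> orientation-preserving
print(round(det3(reflection), 10))  # -1.0 -> orientation-reversing
```

The rotation's determinant is cos²α + sin²α = 1 for every α, so all rotations about the "Z" axis preserve orientation, while the mirror always has determinant −1.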
Zero-dimensional case.
The concept of orientation degenerates in the zero-dimensional case. A zero-dimensional vector space has only a single point, the zero vector. Consequently, the only basis of a zero-dimensional vector space is the empty set formula_2. Therefore, there is a single equivalence class of ordered bases, namely, the class formula_3 whose sole member is the empty set. This means that an orientation of a zero-dimensional space is a function
formula_4
It is therefore possible to orient a point in two different ways, positive and negative.
Because there is only a single ordered basis formula_2, a zero-dimensional vector space is the same as a zero-dimensional vector space with ordered basis. Choosing formula_5 or formula_6 therefore chooses an orientation of every basis of every zero-dimensional vector space. If all zero-dimensional vector spaces are assigned this orientation, then, because all isomorphisms among zero-dimensional vector spaces preserve the ordered basis, they also preserve the orientation. This is unlike the case of higher-dimensional vector spaces where there is no way to choose an orientation so that it is preserved under all isomorphisms.
However, there are situations where it is desirable to give different orientations to different points. For example, consider the fundamental theorem of calculus as an instance of Stokes' theorem. A closed interval ["a", "b"] is a one-dimensional manifold with boundary, and its boundary is the set {"a", "b"}. In order to get the correct statement of the fundamental theorem of calculus, the point "b" should be oriented positively, while the point "a" should be oriented negatively.
On a line.
The one-dimensional case deals with a line which may be traversed in one of two directions. There are two orientations to a line just as there are two orientations to a circle. In the case of a line segment (a connected subset of a line), the two possible orientations result in "directed line segments". An orientable surface sometimes has the selected orientation indicated by the orientation of a line perpendicular to the surface.
Alternate viewpoints.
Multilinear algebra.
For any "n"-dimensional real vector space "V" we can form the "k"th-exterior power of "V", denoted Λ"k""V". This is a real vector space of dimension formula_7. The vector space Λ"n""V" (called the "top exterior power") therefore has dimension 1. That is, Λ"n""V" is just a real line. There is no "a priori" choice of which direction on this line is positive. An orientation is just such a choice. Any nonzero linear form "ω" on Λ"n""V" determines an orientation of "V" by declaring that "x" is in the positive direction when "ω"("x") > 0. To connect with the basis point of view we say that the positively-oriented bases are those on which "ω" evaluates to a positive number (since "ω" is an "n"-form we can evaluate it on an ordered set of "n" vectors, giving an element of R). The form "ω" is called an orientation form. If {"e""i"} is a privileged basis for "V" and {"e""i"∗} is the dual basis, then the orientation form giving the standard orientation is "e"1∗ ∧ "e"2∗ ∧ … ∧ "e""n"∗.
The connection of this with the determinant point of view is: the determinant of an endomorphism formula_8 can be interpreted as the induced action on the top exterior power.
Lie group theory.
Let "B" be the set of all ordered bases for "V". Then the general linear group GL("V") acts freely and transitively on "B". (In fancy language, "B" is a GL("V")-torsor). This means that as a manifold, "B" is (noncanonically) homeomorphic to GL("V"). Note that the group GL("V") is not connected, but rather has two connected components according to whether the determinant of the transformation is positive or negative (except for GL0, which is the trivial group and thus has a single connected component; this corresponds to the canonical orientation on a zero-dimensional vector space). The identity component of GL("V") is denoted GL+("V") and consists of those transformations with positive determinant. The action of GL+("V") on "B" is "not" transitive: there are two orbits which correspond to the connected components of "B". These orbits are precisely the equivalence classes referred to above. Since "B" does not have a distinguished element (i.e. a privileged basis) there is no natural choice of which component is positive. Contrast this with GL("V") which does have a privileged component: the component of the identity. A specific choice of homeomorphism between "B" and GL("V") is equivalent to a choice of a privileged basis and therefore determines an orientation.
More formally: formula_9,
and the Stiefel manifold of "n"-frames in formula_10 is a formula_11-torsor, so formula_12 is a torsor over formula_13, i.e., its 2 points, and a choice of one of them is an orientation.
Geometric algebra.
The various objects of geometric algebra are charged with three attributes or "features": attitude, orientation, and magnitude. For example, a vector has an attitude given by a straight line parallel to it, an orientation given by its sense (often indicated by an arrowhead) and a magnitude given by its length. Similarly, a bivector in three dimensions has an attitude given by the family of planes associated with it (possibly specified by the normal line common to these planes), an orientation (sometimes denoted by a curved arrow in the plane) indicating a choice of sense of traversal of its boundary (its "circulation"), and a magnitude given by the area of the parallelogram defined by its two vectors.
Orientation on manifolds.
Each point "p" on an "n"-dimensional differentiable manifold has a tangent space "T""p""M" which is an "n"-dimensional real vector space. Each of these vector spaces can be assigned an orientation. Some orientations "vary smoothly" from point to point. Due to certain topological restrictions, this is not always possible. A manifold that admits a smooth choice of orientations for its tangent spaces is said to be "orientable".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\mathbf {A}_1 = \\begin{pmatrix}\n \\cos \\alpha & -\\sin \\alpha & 0 \\\\\n \\sin \\alpha & \\cos \\alpha & 0 \\\\\n 0 & 0 & 1 \n\\end{pmatrix}\n"
},
{
"math_id": 1,
"text": "\n\\mathbf {A}_2 = \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & -1 \n\\end{pmatrix}\n"
},
{
"math_id": 2,
"text": "\\emptyset"
},
{
"math_id": 3,
"text": "\\{\\emptyset\\}"
},
{
"math_id": 4,
"text": "\\{\\{\\emptyset\\}\\} \\to \\{\\pm 1\\}. "
},
{
"math_id": 5,
"text": "\\{\\emptyset\\} \\mapsto +1"
},
{
"math_id": 6,
"text": "\\{\\emptyset\\} \\mapsto -1"
},
{
"math_id": 7,
"text": "\\tbinom{n}{k}"
},
{
"math_id": 8,
"text": "T : V \\to V"
},
{
"math_id": 9,
"text": "\\pi_0(\\operatorname{GL}(V)) = (\\operatorname{GL}(V)/\\operatorname{GL}^+(V) = \\{\\pm 1\\}"
},
{
"math_id": 10,
"text": "V"
},
{
"math_id": 11,
"text": "\\operatorname{GL}(V)"
},
{
"math_id": 12,
"text": "V_n(V)/\\operatorname{GL}^+(V)"
},
{
"math_id": 13,
"text": "\\{\\pm 1\\}"
}
]
| https://en.wikipedia.org/wiki?curid=1391942 |
13922792 | Inversion (music) | Musical term
<score sound="1">
\new PianoStaff «
\new Staff «
\relative c" {
\set Score.currentBarNumber = #21
\set Score.proportionalNotationDuration = #(ly:make-moment 1/8)
\bar ""
\clef treble \key d \minor \time 3/4
\once \override TextScript.script-priority = #-100 a4~^\mordent^\markup { \sharp } a16 g! f e g f e d
\override NoteHead.color = #red \stemUp e8 e' d cis b d
\override NoteHead.color = #black cis16
»
\new Staff «
\clef bass \key d \minor \time 3/4
\new Voice \relative c' {
\override NoteHead.color = #red a8 a, b cis d b
\override NoteHead.color = #black cis16 a gis a f'4-. d\trill a'8
\new Voice \relative c' {
\stemUp \override NoteHead.color = #red a4 r r
»
</score>An example of melodic inversion from the fugue in D minor from J. S. Bach's "The Well-Tempered Clavier", Book 1. Though they start on different pitches (A and E), the second highlighted melody is the upside-down version of the first highlighted melody. That is, when the first goes up, the second goes down the same number of diatonic steps (with some chromatic alteration); and when the first goes down, the second goes up the same number of steps.
In music theory, an inversion is a rearrangement of the top-to-bottom elements in an interval, a chord, a melody, or a group of contrapuntal lines of music. In each of these cases, "inversion" has a distinct but related meaning. The concept of inversion also plays an important role in musical set theory.
Intervals.
An interval is inverted by raising or lowering either of the notes by one or more octaves so that the higher note becomes the lower note and vice versa. For example, the inversion of an interval consisting of a C with an E above it (the third measure below) is an E with a C above it – to work this out, the C may be moved up, the E may be lowered, or both may be moved.
<score lang="lilypond">
\override Score.TimeSignature
\override Score.SpacingSpanner.strict-note-spacing = ##t
\set Score.proportionalNotationDuration = #(ly:make-moment 1/4)
\new Staff «
\clef treble \time 4/4
\new Voice \relative c' {
\stemUp c2 c' c, c' c, c' c, c'
\new Voice \relative c' {
\stemDown c2 c d d e e f f
</score>
The tables to the right show the changes in interval quality and interval number under inversion. Thus, perfect intervals remain perfect, major intervals become minor and vice versa, and augmented intervals become diminished and vice versa. (Doubly diminished intervals become doubly augmented intervals, and vice versa.)
Traditional interval numbers add up to nine: seconds become sevenths and vice versa, thirds become sixths and vice versa, and so on. Thus, a perfect fourth becomes a perfect fifth, an augmented fourth becomes a diminished fifth, and a simple interval (that is, one that is narrower than an octave) and its inversion, when added together, equal an octave. See also complement (music).
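These two rules (qualities swap as in the tables, numbers sum to nine) can be sketched for simple intervals; the function name is an illustrative choice.

```python
QUALITY_INVERSE = {
    "perfect": "perfect",
    "major": "minor",
    "minor": "major",
    "augmented": "diminished",
    "diminished": "augmented",
}

def invert_interval(quality, number):
    """Invert a simple interval: the quality swaps per the table above
    and the traditional interval numbers add up to nine."""
    return QUALITY_INVERSE[quality], 9 - number

print(invert_interval("major", 3))      # ('minor', 6)
print(invert_interval("perfect", 4))    # ('perfect', 5)
print(invert_interval("augmented", 4))  # ('diminished', 5)
```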
Chords.
<score sound="1" override_midi="Rustington3.mid">
\override Score.SpacingSpanner.strict-note-spacing = ##t
\set Score.proportionalNotationDuration = #(ly:make-moment 1/8)
\new PianoStaff «
\new Staff «
\new Voice \relative c' {
\clef treble \time 4/4
\once \override NoteHead.color = #red <c g'>4 <c f> \once \override NoteHead.color = #red <c e> c
\stemDown c4 b \once \override NoteHead.color = #red c2
\new Voice \relative c' {
s1 \stemUp d4. d8 \once \override NoteHead.color = #red c2
»
\new Staff «
\new Voice \relative c, {
\clef bass \time 4/4
\once \override NoteHead.color = #red <e g'>4 <f a'> \once \override NoteHead.color = #red <g g'>
<f a'> \stemDown g \once \override NoteHead.color = #red c2
\new Voice \relative c' {
s1 s4 \stemUp g8 f \once \override NoteHead.color = #red e2
\figures {
<6>2 <6 4>2 <6 5>4. <7>8
»
</score>The closing phrase of the hymn-setting "Rustington" by the English composer Hubert Parry (1897), showing all three positions of the C major chord. See figured bass below for a description of the numerical symbols.

A chord's inversion describes the relationship of its lowest note to the other notes in the chord. For instance, a C major triad contains the tones C, E and G; its inversion is determined by which of these tones is the lowest note (or bass note) in the chord.
The term "inversion" often categorically refers to the different possibilities, though it may also be restricted to only those chords where the lowest note is not also the root of the chord. Texts that follow this restriction may use the term "position" instead, to refer to all of the possibilities as a category.
Root position and inverted chords.
A chord is in root position if its root is the lowest note. This is sometimes known as the "parent chord" of its inversions. For example, the root of a C-major triad is C, so a C-major triad will be in root position if C is the lowest note and its third and fifth (E and G, respectively) are above it – or, on occasion, do not sound at all.
The following C-major triads are both in root position, since the lowest note is the root. The rearrangement of the notes above the bass into different octaves (here, the note E) and the doubling of notes (here, G), is known as "voicing" – the first voicing is close voicing, while the second is open.
<templatestyles src="Block indent/styles.css"/>
In an inverted chord, the root is not the lowest note. The inversions are numbered in the order their lowest notes appear in a close root-position chord (from bottom to top).
<templatestyles src="Block indent/styles.css"/>
As shown above, a C-major triad (or any chord with three notes) has two inversions:
Chords with four notes (such as seventh chords) work in a similar way, except that they have three inversions, instead of just two. The three inversions of a G dominant seventh chord are:
<templatestyles src="Block indent/styles.css"/>
Notating root position and inversions.
Figured bass.
Figured bass is a notation in which chord inversions are indicated by Arabic numerals (the "figures") either above or below the bass notes, indicating a harmonic progression. Each numeral expresses the interval that results from the voices above it (usually assuming octave equivalence). For example, in root-position triad C–E–G, the intervals above bass note C are a third and a fifth, giving the figures . If this triad were in first inversion (e.g., E–G–C), the figure would apply, due to the intervals of a third and a sixth appearing above the bass note E.
Certain conventional abbreviations exist in the use of figured bass. For instance, root-position triads appear without symbols (the is understood), and first-inversion triads are customarily abbreviated as just 6, rather than . The table to the right displays these conventions.
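The shorthand conventions can be captured in a small lookup table. The figures shown are the standard ones for triads and seventh chords; the layout of the original table is not reproduced here, and the function name is an illustrative choice.

```python
# Full figures and their customary abbreviations, keyed by
# (number of chord tones, inversion number); 0 = root position.
FIGURES = {
    (3, 0): ("5/3", ""),      # root-position triad: figures usually omitted
    (3, 1): ("6/3", "6"),     # first-inversion triad
    (3, 2): ("6/4", "6/4"),   # second-inversion triad
    (4, 0): ("7/5/3", "7"),   # root-position seventh chord
    (4, 1): ("6/5/3", "6/5"),
    (4, 2): ("6/4/3", "4/3"),
    (4, 3): ("6/4/2", "4/2"),
}

def figure(chord_size, inversion, abbreviated=True):
    """Look up the figured-bass signature for a triad or seventh chord."""
    full, short = FIGURES[(chord_size, inversion)]
    return short if abbreviated else full

print(figure(3, 1))          # 6
print(figure(4, 2))          # 4/3
print(figure(3, 0, False))   # 5/3
```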
Figured-bass numerals express distinct intervals in a chord only as they relate to the bass note. They make no reference to the key of the progression (unlike Roman-numeral harmonic analysis), they do not express intervals between pairs of upper voices themselves – for example, in a C–E–G triad, the figured bass does not signify the interval relationship between E–G, and they do not express notes in upper voices that double, or are unison with, the bass note.
However, the figures are often used on their own (without the bass) in music theory simply to specify a chord's inversion. This is the basis for the terms given above such as "chord" for a second inversion triad. Similarly, in harmonic analysis the term I6 refers to a tonic triad in first inversion.
Popular-music notation.
A notation for chord inversion often used in popular music is to write the name of a chord followed by a forward slash and then the name of the bass note. This is called a "slash chord". For example, a C-major chord in first inversion (i.e., with E in the bass) would be notated as "C/E". This notation works even when a note not present in a triad is the bass; for example, F/G is a way of notating a particular approach to voicing an Fadd9 chord (G–F–A–C). This is quite different from analytical notations of "function"; e.g., the notation "IV/V" represents the subdominant of the dominant.
Lower-case letters.
Lower-case letters may be placed after a chord symbol to indicate root position or inversion. Hence, in the key of C major, a C-major chord in first inversion may be notated as "Ib", indicating "chord I, first inversion". (Less commonly, the root of the chord is named, followed by a lower-case letter: "Cb"). If no letter is added, the chord is assumed to be in root position, as though "a" had been inserted.
History.
In Jean-Philippe Rameau's "Treatise on Harmony" (1722), chords in different inversions are considered functionally equivalent and he has been credited as being the first person to recognise their underlying similarity. Earlier theorists spoke of different intervals using alternative descriptions, such as the "regola delle terze e seste" ("rule of thirds and sixths"). This required the resolution of imperfect consonances to perfect ones and would not propose, for example, a resemblance between and chords.
Counterpoint.
<score sound="1">
\new PianoStaff «
\new Staff «
\relative c' {
\clef treble \key a \minor \time 4/4
\set Score.currentBarNumber = #18
\bar ""
r16 \override NoteHead.color = #red e a c b e, b' d
\override NoteHead.color = #blue c8 a gis e \override NoteHead.color = #black
a16
»
\new Staff «
\relative c' {
\clef bass \key a \minor \time 4/4
\override NoteHead.color = #blue c8 a gis e
\override NoteHead.color = #black a16 \override NoteHead.color = #red e a c b e, b' d
\override NoteHead.color = #black c
»
</score>An example of contrapuntal inversion in one measure of J.S. Bach's Invention No. 13 in A minor, BWV 784.
In contrapuntal inversion, two melodies, having previously accompanied each other once, accompany each other again but with the melody that had been in the high voice now in the low, and vice versa. The action of changing the voices is called "textural inversion". This is called "double counterpoint" when two voices are involved and "triple counterpoint" when three are involved. The inversion in two-part invertible counterpoint is also known as "rivolgimento".
Invertible counterpoint.
Themes that can be developed in this way without violating the rules of counterpoint are said to be in "invertible counterpoint". Invertible counterpoint can occur at various intervals, usually the octave, less often at the tenth or twelfth. To calculate the interval of inversion, add the intervals by which each voice has moved and subtract one. For example: If motif A in the high voice moves down a sixth, and motif B in the low voice moves up a fifth, in such a way as to result in A and B having exchanged registers, then the two are in double counterpoint at the tenth (6 + 5 – 1 = 10).
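The arithmetic rule just stated can be encoded directly (interval numbers here are the traditional diatonic ones, so subtracting one corrects for the doubly counted step):

```python
def counterpoint_inversion_interval(move_a, move_b):
    """Interval of inversion for invertible counterpoint: add the
    intervals by which the two voices move and subtract one."""
    return move_a + move_b - 1

# Upper voice falls a sixth, lower voice rises a fifth (the example above):
print(counterpoint_inversion_interval(6, 5))  # 10 -> double counterpoint at the tenth
# Each voice displaced by an octave in opposite directions:
print(counterpoint_inversion_interval(8, 8))  # 15 -> counterpoint at the fifteenth
```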
In J.S. Bach's "The Art of Fugue", the first canon is at the octave, the second canon at the tenth, the third canon at the twelfth, and the fourth canon in augmentation and contrary motion. Other exemplars can be found in the fugues in G minor and B♭ major [external Shockwave movies] from J.S. Bach's "The Well-Tempered Clavier," Book 2, both of which contain invertible counterpoint at the octave, tenth, and twelfth.
Examples.
For example, in the keyboard prelude in A♭ major from J.S. Bach's "The Well-Tempered Clavier", Book 1, the following passage, from bars 9–18, involves two lines, one in each hand:
When this passage returns in bars 25–35 these lines are exchanged:
J.S. Bach's Three-Part Invention in F minor, BWV 795 involves exploring the combination of three themes. Two of these are announced in the opening two bars. A third idea joins them in bars 3–4. When this passage is repeated a few bars later in bars 7–9, the three parts are interchanged:
The piece goes on to explore four of the six possible permutations of how these three lines can be combined in counterpoint.
One of the most spectacular examples of invertible counterpoint occurs in the finale of Mozart's "Jupiter Symphony". Here, no less than five themes are heard together:
The whole passage brings the symphony to a conclusion in a blaze of brilliant orchestral writing. According to Tom Service:
<templatestyles src="Template:Blockquote/styles.css" />Mozart's composition of the finale of the Jupiter Symphony is a palimpsest on music history as well as his own. As a musical achievement, its most obvious predecessor is really the fugal finale of his G major String Quartet K. 387, but this symphonic finale trumps even that piece in its scale and ambition. If the story of that operatic tune first movement is to turn instinctive emotion into contrapuntal experience, the finale does exactly the reverse, transmuting the most complex arts of compositional craft into pure, exhilarating feeling. Its models in Michael and Joseph Haydn are unquestionable, but Mozart simultaneously pays homage to them – and transcends them. Now that's what I call real originality.
Melodies.
<score sound="1"> {
\set Score.currentBarNumber = #1
\bar ""
\key g \major \time 6/8
\relative c" {
\clef treble
g8 a16 g fis g a8 b16 a g a
b8 a g d c'4
b8 a g fis e'4
</score>
<score sound="1"> {
\set Score.currentBarNumber = #28
\bar ""
\key g \major \time 6/8
\relative c {
\clef bass
d8 c16 d e d c8 b16 c d c
b8 c d g a,4
b8 c d e fis,4
</score>Two lines from the fugue in G major from J. S. Bach's "The Well-Tempered Clavier", Book 1. The lowest voice in mm. 28–30 is an inversion of the opening melody in mm. 1–3.
A melody is inverted by flipping it "upside-down", reversing the melody's contour. For instance, if the original melody has a rising major third, then the inverted melody has a falling major third (or, especially in tonal music, perhaps a falling minor third).
According to "The Harvard Dictionary of Music", "The intervals between successive pitches may remain exact or, more often in tonal music, they may be the equivalents in the diatonic scale. Hence c'–d–e' may become c'–b–a (where the first descent is by a semitone rather than by a whole tone) instead of c'–b♭–a♭." Moreover, the inversion may start on the same pitch as the original melody, but it does not have to, as illustrated by the example to the right.
Twelve-tone music.
In twelve-tone technique, the inversion of a tone row is one of its four traditional permutations (the others being the prime form, the retrograde, and the retrograde inversion). These four permutations (labeled "p"rime, "r"etrograde, "i"nversion, and "r"etrograde "i"nversion) for the tone row used in Arnold Schoenberg's Variations for Orchestra, Op. 31 are shown below.
<templatestyles src="Block indent/styles.css"/><score>
\override Score.TimeSignature.stencil = ##f
\override Score.SpacingSpanner.strict-note-spacing = ##t
\set Score.proportionalNotationDuration = #(ly:make-moment 3/1)
\new StaffGroup <<
\new Staff \relative c'' {
\time 12/1
bes1^\markup { P } e, fis dis f a d cis g aes b c
c^\markup { R } b aes g cis d a f dis fis e bes'
}
\new Staff \relative c'' {
bes1^\markup { I } e d f dis b fis g cis c a aes
aes^\markup { RI } a c cis g fis b dis f d e bis
}
>>
</score>
In set theory, the inverse operation is sometimes designated as formula_0, where formula_1 means "invert" and formula_2 means "transpose by some interval formula_3" measured in number of semitones. Thus, formula_0 is a combination of an inversion followed by a transposition. To apply the inversion operation formula_1, subtract the pitch class, in integer notation, from 12 (by convention, inversion is around pitch class 0); then apply the transposition operation formula_2 by adding formula_3. For example, to calculate formula_4, first subtract 3 from 12 (giving 9) and then add 5 (giving 14, which is equivalent to 2). Thus, formula_5. To invert a set of pitches, simply invert each pitch in the set in turn.
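These operations can be sketched in a few lines of Python (pitch classes in integer notation; the function names are ours, not standard terminology):

```python
def invert(pc):
    # Inversion around pitch class 0: subtract from 12, mod 12.
    return (12 - pc) % 12

def t_n_i(pc, n):
    # T_n I: invert first, then transpose up by n semitones.
    return (invert(pc) + n) % 12

print(t_n_i(3, 5))                          # 2, matching the worked example
print([t_n_i(pc, 5) for pc in [0, 4, 7]])   # inverting a whole set: [5, 1, 10]
```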
Inversional equivalency and symmetry.
Set theory.
In set theory, "inversional equivalency" is the concept that intervals, chords, and other sets of pitches are the same when inverted. It is similar to enharmonic equivalency, octave equivalency and even transpositional equivalency. Inversional equivalency is used little in tonal theory, though it is assumed that sets that can be inverted into each other are remotely in common. However, they are only assumed identical or nearly identical in musical set theory.
Sets are said to be inversionally symmetrical if they map onto themselves under inversion. The pitch that the sets must be inverted around is said to be the axis of symmetry (or center). An axis may either be at a specific pitch or halfway between two pitches (assuming that microtones are not used). For example, the set C–E♭–E–F♯–G–B♭ has an axis at F, and an axis, a tritone away, at B if the set is listed as F♯–G–B♭–C–E♭–E. As another example, the set C–E–F–F♯–G–B has an axis at the dyad F/F♯ and an axis at B/C if it is listed as F♯–G–B–C–E–F.
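Such symmetry claims are easy to check by inverting each set about the stated axis and comparing (a small Python sketch; `invert_about` and its `axis2` convention, twice the axis pitch so that dyad axes halfway between pitches work too, are our own):

```python
def invert_about(pcs, axis2):
    # Inversion about an axis: x -> (axis2 - x) mod 12, where axis2 is twice
    # the axis pitch (this also covers axes halfway between two pitches).
    return {(axis2 - x) % 12 for x in pcs}

hexachord = {0, 3, 4, 6, 7, 10}   # C, Eb, E, F#, G, Bb
print(invert_about(hexachord, 2 * 5) == hexachord)    # axis at F: True
print(invert_about(hexachord, 2 * 11) == hexachord)   # axis at B, a tritone away: True

second = {0, 4, 5, 6, 7, 11}      # C, E, F, F#, G, B
print(invert_about(second, 5 + 6) == second)          # dyad axis F/F#: True
```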
Jazz theory.
<score> { #(set-global-staff-size 15)
\set Score.tempoHideNote = ##t \tempo 4 = 120
\key c \major \time 4/4
\set Score.proportionalNotationDuration = #(ly:make-moment 1/2)
\relative c' {
\clef treble
\once \override NoteHead.color = #red c4^\markup { Melody } \once \override NoteHead.color = #red c g' g \once \override NoteHead.color = #red a \once \override NoteHead.color = #red a g2 f4 f e e d d c2 \bar "|."
} }
</score>Pitch axis inversions of "Twinkle, Twinkle, Little Star" about C and A
In jazz theory, a pitch axis is the center around which a melody is inverted.
The "pitch axis" works in the context of the compound operation transpositional inversion, where transposition is carried out after inversion. However, unlike in set theory, the transposition may be a chromatic or diatonic transposition. Thus, if D-A-G (P5 up, M2 down) is inverted to D-G-A (P5 down, M2 up) the "pitch axis" is D. However, if it is inverted to C-F-G the pitch axis is G while if the pitch axis is A, the melody inverts to E-A-B.
The notation of octave position may determine how many lines and spaces appear to share the axis. The pitch axis of D-A-G and its inversion A-D-E either appear to be between C/B♮ or the single pitch F.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " T_nI "
},
{
"math_id": 1,
"text": " I "
},
{
"math_id": 2,
"text": " T_n "
},
{
"math_id": 3,
"text": " n "
},
{
"math_id": 4,
"text": " T_5I(3) "
},
{
"math_id": 5,
"text": " T_5I(3)=2 "
}
]
| https://en.wikipedia.org/wiki?curid=13922792 |
139229 | Gauss–Bonnet theorem | Theorem in differential geometry
In the mathematical field of differential geometry, the Gauss–Bonnet theorem (or Gauss–Bonnet formula) is a fundamental formula which links the curvature of a surface to its underlying topology.
In the simplest application, the case of a triangle on a plane, the sum of its angles is 180 degrees. The Gauss–Bonnet theorem extends this to more complicated shapes and curved surfaces, connecting the local and global geometries.
The theorem is named after Carl Friedrich Gauss, who developed a version but never published it, and Pierre Ossian Bonnet, who published a special case in 1848.
Statement.
Suppose M is a compact two-dimensional Riemannian manifold with boundary ∂"M". Let K be the Gaussian curvature of M, and let "k""g" be the geodesic curvature of ∂"M". Then
formula_0
where dA is the element of area of the surface, and ds is the line element along the boundary of M. Here, "χ"("M") is the Euler characteristic of M.
If the boundary ∂"M" is piecewise smooth, then we interpret the integral ∫∂"M" "k""g" "ds" as the sum of the corresponding integrals along the smooth portions of the boundary, plus the sum of the angles by which the smooth portions turn at the corners of the boundary.
Many standard proofs use the theorem of turning tangents, which states roughly that the winding number of a Jordan curve is exactly ±1.
A simple example.
Suppose M is the northern hemisphere cut out from a sphere of radius R. Its Euler characteristic is 1. On the left hand side of the theorem, we have formula_1 and formula_2, because the boundary is the equator and the equator is a geodesic of the sphere. Then formula_3.
On the other hand, suppose we flatten the hemisphere to make it into a disk. This transformation is a homeomorphism, so the Euler characteristic is still 1. However, on the left hand side of the theorem we now have formula_4 and formula_5, because a circumference is not a geodesic of the plane. Then formula_6.
Finally, take a sphere octant, also homeomorphic to the previous cases. Then formula_7. Now formula_2 almost everywhere along the border, which is a geodesic triangle. But we have three right-angle corners, so formula_8.
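The three computations above can be verified numerically (a quick Python check, not part of the original derivation; each total should come out to 2π·χ = 2π regardless of the radius):

```python
from math import pi

R = 2.0  # any radius; each total is independent of R

# Hemisphere: K = 1/R^2 over area 2*pi*R^2, geodesic boundary (k_g = 0).
hemisphere = (1 / R**2) * (2 * pi * R**2) + 0.0

# Flattened disk: K = 0, boundary k_g = 1/R along a circle of length 2*pi*R.
disk = 0.0 + (1 / R) * (2 * pi * R)

# Sphere octant: the curvature integral gives (1/R^2)*(4*pi*R^2/8) = pi/2,
# and the three right-angle corners contribute 3*(pi/2) to the boundary term.
octant = (1 / R**2) * (4 * pi * R**2 / 8) + 3 * (pi / 2)

print(hemisphere, disk, octant)   # each ≈ 2*pi ≈ 6.2831853...
```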
Interpretation and significance.
The theorem applies in particular to compact surfaces without boundary, in which case the integral
formula_9
can be omitted. It states that the total Gaussian curvature of such a closed surface is equal to 2π times the Euler characteristic of the surface. Note that for orientable compact surfaces without boundary, the Euler characteristic equals 2 − 2"g", where g is the genus of the surface: Any orientable compact surface without boundary is topologically equivalent to a sphere with some handles attached, and g counts the number of handles.
If one bends and deforms the surface M, its Euler characteristic, being a topological invariant, will not change, while the curvatures at some points will. The theorem states, somewhat surprisingly, that the total integral of all curvatures will remain the same, no matter how the deforming is done. So for instance if you have a sphere with a "dent", then its total curvature is 4π (the Euler characteristic of a sphere being 2), no matter how big or deep the dent.
Compactness of the surface is of crucial importance. Consider for instance the open unit disc, a non-compact Riemann surface without boundary, with curvature 0 and with Euler characteristic 1: the Gauss–Bonnet formula does not work. It holds true however for the compact closed unit disc, which also has Euler characteristic 1, because of the added boundary integral with value 2π.
As an application, a torus has Euler characteristic 0, so its total curvature must also be zero. If the torus carries the ordinary Riemannian metric from its embedding in R3, then the inside has negative Gaussian curvature, the outside has positive Gaussian curvature, and the total curvature is indeed 0. It is also possible to construct a torus by identifying opposite sides of a square, in which case the Riemannian metric on the torus is flat and has constant curvature 0, again resulting in total curvature 0. It is not possible to specify a Riemannian metric on the torus with everywhere positive or everywhere negative Gaussian curvature.
For triangles.
Sometimes the Gauss–Bonnet formula is stated as
formula_10
where T is a geodesic triangle. Here we define a "triangle" on M to be a simply connected region whose boundary consists of three geodesics. We can then apply GB to the surface T formed by the inside of that triangle and the piecewise boundary of the triangle.
The geodesic curvature of the bordering geodesics is 0, and the Euler characteristic of T is 1.
Hence the sum of the turning angles of the geodesic triangle is equal to 2π minus the total curvature within the triangle. Since the turning angle at a corner is equal to π minus the interior angle, we can rephrase this as follows:
The sum of interior angles of a geodesic triangle is equal to π plus the total curvature enclosed by the triangle: formula_11
In the case of the plane (where the Gaussian curvature is 0 and geodesics are straight lines), we recover the familiar formula for the sum of angles in an ordinary triangle. On the standard sphere, where the curvature is everywhere 1, we see that the angle sum of geodesic triangles is always bigger than π.
Special cases.
A number of earlier results in spherical geometry and hyperbolic geometry, discovered over the preceding centuries, were subsumed as special cases of Gauss–Bonnet.
Triangles.
In spherical trigonometry and hyperbolic trigonometry, the area of a triangle is proportional to the amount by which its interior angles fail to add up to 180°, or equivalently by the (inverse) amount by which its exterior angles fail to add up to 360°.
The area of a spherical triangle is proportional to its excess, by Girard's theorem – the amount by which its interior angles add up to more than 180°, which is equal to the amount by which its exterior angles add up to less than 360°.
The area of a hyperbolic triangle, conversely is proportional to its "defect", as established by Johann Heinrich Lambert.
Polyhedra.
Descartes' theorem on total angular defect of a polyhedron is the polyhedral analog:
it states that the sum of the defect at all the vertices of a polyhedron which is homeomorphic to the sphere is 4π. More generally, if the polyhedron has Euler characteristic "χ" = 2 − 2"g" (where g is the genus, meaning "number of holes"), then the sum of the defect is 2"πχ". This is the special case of Gauss–Bonnet, where the curvature is concentrated at discrete points (the vertices).
Thinking of curvature as a measure, rather than as a function, Descartes' theorem is Gauss–Bonnet where the curvature is a discrete measure, and Gauss–Bonnet for measures generalizes both Gauss–Bonnet for smooth manifolds and Descartes' theorem.
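A quick numerical illustration of Descartes' theorem for sphere-like polyhedra (the helper below is ours; it takes, for each vertex, the list of face angles meeting there):

```python
from math import pi

def total_defect(corner_angles_per_vertex):
    # Defect at a vertex = 2*pi minus the sum of the face angles meeting there.
    return sum(2 * pi - sum(angles) for angles in corner_angles_per_vertex)

cube = [[pi / 2] * 3] * 8     # 8 vertices, three right angles at each
tetra = [[pi / 3] * 3] * 4    # 4 vertices, three 60-degree angles at each

print(total_defect(cube) / pi)    # ≈ 4.0, i.e. total defect 4*pi
print(total_defect(tetra) / pi)   # ≈ 4.0 as well
```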
Combinatorial analog.
There are several combinatorial analogs of the Gauss–Bonnet theorem. We state the following one. Let M be a finite 2-dimensional pseudo-manifold. Let "χ"("v") denote the number of triangles containing the vertex v. Then
formula_12
where the first sum ranges over the vertices in the interior of M, the second sum is over the boundary vertices, and "χ"("M") is the Euler characteristic of M.
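For a closed triangulated surface there are no boundary vertices, so only the first sum survives; the boundaries of the triangular-faced Platonic solids (all homeomorphic to the sphere, χ = 2) give a quick check:

```python
def combinatorial_gb_closed(triangles_per_vertex):
    # Closed surface: no boundary vertices, so only the interior sum remains.
    return sum(6 - chi_v for chi_v in triangles_per_vertex)

print(combinatorial_gb_closed([3] * 4))    # tetrahedron:  4 vertices in 3 triangles -> 12 = 6*2
print(combinatorial_gb_closed([4] * 6))    # octahedron:   6 vertices in 4 triangles -> 12
print(combinatorial_gb_closed([5] * 12))   # icosahedron: 12 vertices in 5 triangles -> 12
```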
Similar formulas can be obtained for a 2-dimensional pseudo-manifold when we replace triangles with higher polygons. For polygons of n vertices, we must replace 3 and 6 in the formula above with n/(n − 2) and 2n/(n − 2), respectively.
For example, for quadrilaterals we must replace 3 and 6 in the formula above with 2 and 4, respectively. More specifically, if M is a closed 2-dimensional digital manifold, the genus turns out to be
formula_13
where "M""i" indicates the number of surface-points each of which has i adjacent points on the surface. This is the simplest formula of Gauss–Bonnet theorem in three-dimensional digital space.
Generalizations.
The Chern theorem (after Shiing-Shen Chern 1945) is the 2"n"-dimensional generalization of GB (also see Chern–Weil homomorphism).
The Riemann–Roch theorem can also be seen as a generalization of GB to complex manifolds.
A far-reaching generalization that includes all the abovementioned theorems is the Atiyah–Singer index theorem.
A generalization to 2-manifolds that need not be compact is Cohn-Vossen's inequality.
In popular culture.
In Greg Egan's novel "Diaspora", two characters discuss the derivation of this theorem.
The theorem can be used directly as a system to control sculpture, for example in the work of Edmund Harriss in the collection of the University of Arkansas Honors College.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_M K\\,dA+\\int_{\\partial M}k_g\\,ds=2\\pi\\chi(M), \\, "
},
{
"math_id": 1,
"text": "K=1/R^2"
},
{
"math_id": 2,
"text": "k_g=0"
},
{
"math_id": 3,
"text": "\\int_MK dA=2\\pi"
},
{
"math_id": 4,
"text": "K=0"
},
{
"math_id": 5,
"text": "k_g=1/R"
},
{
"math_id": 6,
"text": "\\int_{\\partial M}k_gds=2\\pi"
},
{
"math_id": 7,
"text": "\\int_MK dA=\\frac{1}{R^2}\\frac{4\\pi R^2}{8}=\\frac{\\pi}{2}"
},
{
"math_id": 8,
"text": "\\int_{\\partial M}k_gds=\\frac{3\\pi}{2}"
},
{
"math_id": 9,
"text": "\\int_{\\partial M}k_g\\,ds"
},
{
"math_id": 10,
"text": "\\int_T K = 2\\pi - \\sum \\alpha - \\int_{\\partial T} \\kappa_g,"
},
{
"math_id": 11,
"text": "\\sum (\\pi - \\alpha) = \\pi + \\int_T K."
},
{
"math_id": 12,
"text": " \\sum_{v\\,\\in\\,\\operatorname{int}M}\\bigl(6 - \\chi(v)\\bigr) + \\sum_{v\\,\\in\\,\\partial M}\\bigl(3 - \\chi(v)\\bigr) = 6\\chi(M),\\ "
},
{
"math_id": 13,
"text": " g = 1 + \\frac{M_5 + 2 M_6 - M_3}{8}, "
}
]
| https://en.wikipedia.org/wiki?curid=139229 |
1392495 | Stiefel manifold | In mathematics, the Stiefel manifold formula_0 is the set of all orthonormal "k"-frames in formula_1 That is, it is the set of ordered orthonormal "k"-tuples of vectors in formula_1 It is named after Swiss mathematician Eduard Stiefel. Likewise one can define the complex Stiefel manifold formula_2 of orthonormal "k"-frames in formula_3 and the quaternionic Stiefel manifold formula_4 of orthonormal "k"-frames in formula_5. More generally, the construction applies to any real, complex, or quaternionic inner product space.
In some contexts, a non-compact Stiefel manifold is defined as the set of all linearly independent "k"-frames in formula_6 or formula_7 this is homotopy equivalent, as the compact Stiefel manifold is a deformation retract of the non-compact one, by Gram–Schmidt. Statements about the non-compact form correspond to those for the compact form, replacing the orthogonal group (or unitary or symplectic group) with the general linear group.
Topology.
Let formula_8 stand for formula_9 or formula_10 The Stiefel manifold formula_11 can be thought of as a set of "n" × "k" matrices by writing a "k"-frame as a matrix of "k" column vectors in formula_12 The orthonormality condition is expressed by "A"*"A" = formula_13 where "A"* denotes the conjugate transpose of "A" and formula_14 denotes the "k" × "k" identity matrix. We then have
formula_15
The topology on formula_11 is the subspace topology inherited from formula_16 With this topology formula_11 is a compact manifold whose dimension is given by
formula_17
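The dimension formulas above are easy to sanity-check against familiar special cases: k = 1 gives the spheres "S""n"−1, "S"2"n"−1, "S"4"n"−1, and k = n gives dim O("n") = "n"("n" − 1)/2, dim U("n") = "n"2, dim Sp("n") = "n"(2"n" + 1). A small Python sketch (the helper names are ours):

```python
def dim_real(n, k):       # dim V_k(R^n) = nk - k(k+1)/2
    return n * k - k * (k + 1) // 2

def dim_complex(n, k):    # dim V_k(C^n) = 2nk - k^2 (as a real manifold)
    return 2 * n * k - k * k

def dim_quat(n, k):       # dim V_k(H^n) = 4nk - k(2k-1)
    return 4 * n * k - k * (2 * k - 1)

n = 5
print(dim_real(n, 1), dim_complex(n, 1), dim_quat(n, 1))   # spheres: 4 9 19
print(dim_real(n, n), dim_complex(n, n), dim_quat(n, n))   # O(5), U(5), Sp(5): 10 25 55
```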
As a homogeneous space.
Each of the Stiefel manifolds formula_11 can be viewed as a homogeneous space for the action of a classical group in a natural manner.
Every orthogonal transformation of a "k"-frame in formula_18 results in another "k"-frame, and any two "k"-frames are related by some orthogonal transformation. In other words, the orthogonal group O("n") acts transitively on formula_19 The stabilizer subgroup of a given frame is the subgroup isomorphic to O("n"−"k") which acts nontrivially on the orthogonal complement of the space spanned by that frame.
Likewise the unitary group U("n") acts transitively on formula_2 with stabilizer subgroup U("n"−"k") and the symplectic group Sp("n") acts transitively on formula_4 with stabilizer subgroup Sp("n"−"k").
In each case formula_11 can be viewed as a homogeneous space:
formula_20
When "k" = "n", the corresponding action is free so that the Stiefel manifold formula_21 is a principal homogeneous space for the corresponding classical group.
When "k" is strictly less than "n" then the special orthogonal group SO("n") also acts transitively on formula_0 with stabilizer subgroup isomorphic to SO("n"−"k") so that
formula_22
The same holds for the action of the special unitary group on formula_2
formula_23
Thus for "k" = "n" − 1, the Stiefel manifold is a principal homogeneous space for the corresponding "special" classical group.
Uniform measure.
The Stiefel manifold can be equipped with a uniform measure, i.e. a Borel measure that is invariant under the action of the groups noted above. For example, formula_24, which is isomorphic to the unit circle in the Euclidean plane, has as its uniform measure the obvious uniform measure (arc length) on the circle. It is straightforward to sample this measure on formula_11 using Gaussian random matrices: if formula_25 is a random matrix with independent entries identically distributed according to the standard normal distribution on formula_8 and "A" = "QR" is the QR factorization of "A", then the matrices formula_26 are independent random variables and "Q" is distributed according to the uniform measure on formula_27 This result is a consequence of the Bartlett decomposition theorem.
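This sampling recipe is a few lines of NumPy in the real case. One implementation detail worth hedging: numerical QR routines do not by themselves fix the sign convention that makes the factorization unique, so the sketch below normalizes the diagonal of "R" to be positive:

```python
import numpy as np

def sample_stiefel(n, k, seed=None):
    """Draw Q (approximately) uniformly from V_k(R^n) via Gaussian QR."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, k))   # i.i.d. standard normal entries
    Q, R = np.linalg.qr(A)            # reduced QR: Q is n-by-k
    # Flip column signs so R has a positive diagonal; this makes the
    # factorization unique, which is implicit in the statement above.
    Q = Q * np.sign(np.diag(R))
    return Q

Q = sample_stiefel(5, 3, seed=0)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the columns are orthonormal
```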
Special cases.
A 1-frame in formula_28 is nothing but a unit vector, so the Stiefel manifold formula_29 is just the unit sphere in formula_30 Therefore:
formula_31
Given a 2-frame in formula_32 let the first vector define a point in "S""n"−1 and the second a unit tangent vector to the sphere at that point. In this way, the Stiefel manifold formula_33 may be identified with the unit tangent bundle to "S""n"−1.
When "k" = "n" or "n"−1 we saw in the previous section that formula_34 is a principal homogeneous space, and therefore diffeomorphic to the corresponding classical group:
formula_35
formula_36
Functoriality.
Given an orthogonal inclusion between vector spaces formula_37 the image of a set of "k" orthonormal vectors is orthonormal, so there is an induced closed inclusion of Stiefel manifolds, formula_38 and this is functorial. More subtly, given an "n"-dimensional vector space "X", the dual basis construction gives a bijection between bases for "X" and bases for the dual space formula_39 which is continuous, and thus yields a homeomorphism of top Stiefel manifolds formula_40 This is also functorial for isomorphisms of vector spaces.
As a principal bundle.
There is a natural projection
formula_41
from the Stiefel manifold formula_11 to the Grassmannian of "k"-planes in formula_28 which sends a "k"-frame to the subspace spanned by that frame. The fiber over a given point "P" in formula_42 is the set of all orthonormal "k"-frames contained in the space "P".
This projection has the structure of a principal "G"-bundle where "G" is the associated classical group of degree "k". Take the real case for concreteness. There is a natural right action of O("k") on formula_0 which rotates a "k"-frame in the space it spans. This action is free but not transitive. The orbits of this action are precisely the orthonormal "k"-frames spanning a given "k"-dimensional subspace; that is, they are the fibers of the map "p". Similar arguments hold in the complex and quaternionic cases.
We then have a sequence of principal bundles:
formula_43
The vector bundles associated to these principal bundles via the natural action of "G" on formula_44 are just the tautological bundles over the Grassmannians. In other words, the Stiefel manifold formula_11 is the orthogonal, unitary, or symplectic frame bundle associated to the tautological bundle on a Grassmannian.
When one passes to the formula_45 limit, these bundles become the universal bundles for the classical groups.
Homotopy.
The Stiefel manifolds fit into a family of fibrations:
formula_46
thus the first non-trivial homotopy group of the space formula_0 is in dimension "n" − "k". Moreover,
formula_47
This result is used in the obstruction-theoretic definition of Stiefel–Whitney classes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_k(\\R^n)"
},
{
"math_id": 1,
"text": "\\R^n."
},
{
"math_id": 2,
"text": "V_k(\\Complex^n)"
},
{
"math_id": 3,
"text": "\\Complex^n"
},
{
"math_id": 4,
"text": "V_k(\\mathbb{H}^n)"
},
{
"math_id": 5,
"text": "\\mathbb{H}^n"
},
{
"math_id": 6,
"text": "\\R^n, \\Complex^n,"
},
{
"math_id": 7,
"text": "\\mathbb{H}^n;"
},
{
"math_id": 8,
"text": "\\mathbb{F}"
},
{
"math_id": 9,
"text": "\\R,\\Complex,"
},
{
"math_id": 10,
"text": "\\mathbb{H}."
},
{
"math_id": 11,
"text": "V_k(\\mathbb F^n)"
},
{
"math_id": 12,
"text": "\\mathbb F^n."
},
{
"math_id": 13,
"text": "I_k"
},
{
"math_id": 14,
"text": " I_k"
},
{
"math_id": 15,
"text": "V_k(\\mathbb F^n) = \\left\\{A \\in \\mathbb F^{n\\times k} : A^* A = I_k \\right\\}."
},
{
"math_id": 16,
"text": "\\mathbb{F}^{n\\times k}."
},
{
"math_id": 17,
"text": "\\begin{align}\n\\dim V_k(\\R^n) &= nk - \\frac{1}{2}k(k+1) \\\\\n\\dim V_k(\\Complex^n) &= 2nk - k^2 \\\\\n\\dim V_k(\\mathbb{H}^n) &= 4nk - k(2k-1)\n\\end{align}"
},
{
"math_id": 18,
"text": "\\R^n"
},
{
"math_id": 19,
"text": "V_k(\\R^n)."
},
{
"math_id": 20,
"text": "\\begin{align}\nV_k(\\R^n) &\\cong \\mbox{O}(n)/\\mbox{O}(n-k)\\\\\nV_k(\\Complex^n) &\\cong \\mbox{U}(n)/\\mbox{U}(n-k)\\\\\nV_k(\\mathbb{H}^n) &\\cong \\mbox{Sp}(n)/\\mbox{Sp}(n-k)\n\\end{align}"
},
{
"math_id": 21,
"text": "V_n(\\mathbb F^n)"
},
{
"math_id": 22,
"text": "V_k(\\R^n) \\cong \\mbox{SO}(n)/\\mbox{SO}(n-k)\\qquad\\mbox{for } k < n."
},
{
"math_id": 23,
"text": "V_k(\\Complex^n) \\cong \\mbox{SU}(n)/\\mbox{SU}(n-k)\\qquad\\mbox{for } k < n."
},
{
"math_id": 24,
"text": "V_1(\\R^2)"
},
{
"math_id": 25,
"text": "A\\in\\mathbb{F}^{n\\times k}"
},
{
"math_id": 26,
"text": "Q\\in\\mathbb{F}^{n\\times k}, R\\in\\mathbb{F}^{k\\times k}"
},
{
"math_id": 27,
"text": "V_k(\\mathbb F^n)."
},
{
"math_id": 28,
"text": "\\mathbb{F}^n"
},
{
"math_id": 29,
"text": "V_1(\\mathbb F^n)"
},
{
"math_id": 30,
"text": "\\mathbb{F}^n."
},
{
"math_id": 31,
"text": "\\begin{align}\nV_1(\\R^n) &= S^{n-1}\\\\\nV_1(\\Complex^n) &= S^{2n-1}\\\\\nV_1(\\mathbb{H}^n) &= S^{4n-1}\n\\end{align}"
},
{
"math_id": 32,
"text": "\\R^n,"
},
{
"math_id": 33,
"text": "V_2(\\R^n)"
},
{
"math_id": 34,
"text": "V_k(\\mathbb{F}^n)"
},
{
"math_id": 35,
"text": "\\begin{align}\nV_{n-1}(\\R^n) &\\cong \\mathrm{SO}(n)\\\\\nV_{n-1}(\\Complex^n) &\\cong \\mathrm{SU}(n)\n\\end{align}"
},
{
"math_id": 36,
"text": "\\begin{align}\nV_{n}(\\R^n) &\\cong \\mathrm O(n)\\\\\nV_{n}(\\Complex^n) &\\cong \\mathrm U(n)\\\\\nV_{n}(\\mathbb{H}^n) &\\cong \\mathrm{Sp}(n)\n\\end{align}"
},
{
"math_id": 37,
"text": "X \\hookrightarrow Y,"
},
{
"math_id": 38,
"text": "V_k(X) \\hookrightarrow V_k(Y),"
},
{
"math_id": 39,
"text": "X^*,"
},
{
"math_id": 40,
"text": "V_n(X) \\stackrel{\\sim}{\\to} V_n(X^*)."
},
{
"math_id": 41,
"text": "p : V_k(\\mathbb F^n) \\to G_k(\\mathbb F^n)"
},
{
"math_id": 42,
"text": "G_k(\\mathbb F^n)"
},
{
"math_id": 43,
"text": "\\begin{align}\n\\mathrm O(k) &\\to V_k(\\R^n) \\to G_k(\\R^n)\\\\\n\\mathrm U(k) &\\to V_k(\\Complex^n) \\to G_k(\\Complex^n)\\\\\n\\mathrm{Sp}(k) &\\to V_k(\\mathbb{H}^n) \\to G_k(\\mathbb{H}^n)\n\\end{align}"
},
{
"math_id": 44,
"text": "\\mathbb{F}^k"
},
{
"math_id": 45,
"text": "n\\to \\infty"
},
{
"math_id": 46,
"text": "V_{k-1}(\\R^{n-1}) \\to V_k(\\R^n) \\to S^{n-1},"
},
{
"math_id": 47,
"text": "\\pi_{n-k} V_k(\\R^n) \\simeq \\begin{cases} \\Z & n-k \\text{ even or } k=1 \\\\ \\Z_2 & n-k \\text{ odd and } k>1 \\end{cases}"
}
]
| https://en.wikipedia.org/wiki?curid=1392495 |
13925681 | Bramble–Hilbert lemma | In mathematics, particularly numerical analysis, the Bramble–Hilbert lemma, named after James H. Bramble and Stephen Hilbert, bounds the error of an approximation of a function formula_0 by a polynomial of order at most formula_1 in terms of derivatives of formula_0 of order formula_2. Both the error of the approximation and the derivatives of formula_0 are measured by formula_3 norms on a bounded domain in formula_4. This is similar to classical numerical analysis, where, for example, the error of linear interpolation formula_0 can be bounded using the second derivative of formula_0. However, the Bramble–Hilbert lemma applies in any number of dimensions, not just one dimension, and the approximation error and the derivatives of formula_0 are measured by more general norms involving averages, not just the maximum norm.
Additional assumptions on the domain are needed for the Bramble–Hilbert lemma to hold. Essentially, the boundary of the domain must be "reasonable". For example, domains that have a spike or a slit with zero angle at the tip are excluded. Lipschitz domains are reasonable enough, which includes convex domains and domains with continuously differentiable boundary.
The main use of the Bramble–Hilbert lemma is to prove bounds on the error of interpolation of function formula_0 by an operator that preserves polynomials of order up to formula_1, in terms of the derivatives of formula_0 of order formula_2. This is an essential step in error estimates for the finite element method. The Bramble–Hilbert lemma is applied there on the domain consisting of one element (or, in some superconvergence results, a small number of elements).
The one-dimensional case.
Before stating the lemma in full generality, it is useful to look at some simple special cases. In one dimension and for a function formula_0 that has formula_2 derivatives on the interval formula_5, the lemma reduces to
formula_6
where formula_7 is the space of all polynomials of degree at most formula_1 and formula_8 indicates
the formula_9th derivative of a function formula_10.
In the case when formula_11, formula_12, formula_13, and formula_0 is twice differentiable, this means that there exists a polynomial formula_14 of degree one such that for all formula_15,
formula_16
This inequality also follows from the well-known error estimate for linear interpolation by choosing formula_14 as the linear interpolant of formula_0.
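As a numerical sanity check (assuming the classical constant C = 1/8 for linear interpolation, which is not stated above), the following Python sketch compares the worst-case interpolation error of a smooth function against the bound:

```python
import math

# Linear interpolant of u(x) = sin(x) on [a, b], compared against the
# classical bound |u - v| <= (b - a)^2 / 8 * max |u''|  (assumed C = 1/8).
a, b = 0.0, 0.5
u = math.sin                                             # u'' = -sin, so |u''| <= 1
v = lambda x: u(a) + (u(b) - u(a)) * (x - a) / (b - a)   # degree-one polynomial

xs = [a + (b - a) * i / 1000 for i in range(1001)]
max_error = max(abs(u(x) - v(x)) for x in xs)
bound = (b - a) ** 2 / 8 * 1.0

print(max_error <= bound)   # True
```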
Statement of the lemma.
Suppose formula_17 is a bounded domain in formula_18, formula_19, with boundary formula_20 and diameter formula_21. formula_22 is the Sobolev space of all functions formula_0 on formula_17 with weak derivatives formula_23 of order formula_24 up to formula_25 in formula_26. Here, formula_27 is a multiindex, formula_28 formula_29 and formula_30 denotes the derivative formula_31 times with respect to formula_32, formula_33 times with respect to formula_34, and so on. The Sobolev seminorm on formula_35 consists of the formula_36 norms of the highest order derivatives,
formula_37
and
formula_38
formula_39 is the space of all polynomials of order up to formula_25 on formula_18. Note that formula_40 for all formula_41 and formula_42, so formula_43 has the same value for any formula_41.
Lemma (Bramble and Hilbert) Under additional assumptions on the domain formula_17, specified below, there exists a constant formula_44 independent of formula_45 and formula_0 such that for any formula_46 there exists a polynomial formula_41 such that for all formula_47
formula_48
The original result.
The lemma was proved by Bramble and Hilbert under the assumption that formula_17 satisfies the strong cone property; that is, there exists a finite open covering formula_49 of formula_20 and corresponding cones formula_50 with vertices at the origin such that formula_51 is contained in formula_17 for any formula_52 formula_53.
The statement of the lemma here is a simple rewriting of the right-hand inequality stated in Theorem 1 in. The actual statement in is that the norm of the factorspace formula_54 is equivalent to the formula_55 seminorm. The formula_55 norm is not the usual one but the terms are scaled with formula_21 so that the right-hand inequality in the equivalence of the seminorms comes out exactly as in the statement here.
In the original result, the choice of the polynomial is not specified, and the value of constant and its dependence on the domain formula_17 cannot be determined from the proof.
A constructive form.
An alternative result was given by Dupont and Scott under the assumption that the domain formula_17 is star-shaped; that is, there exists a ball formula_56 such that for any formula_57, the closed convex hull of formula_58 is a subset of formula_17. Suppose that formula_59 is the supremum of the diameters of such balls. The ratio formula_60 is called the chunkiness of formula_17.
Then the lemma holds with the constant formula_61, that is, the constant depends on the domain formula_17 only through its chunkiness formula_62 and the dimension of the space formula_63. In addition, formula_64 can be chosen as formula_65, where formula_66 is the averaged Taylor polynomial, defined as
formula_67
where
formula_68
is the Taylor polynomial of degree at most formula_1 of formula_0 centered at formula_69 evaluated at formula_52, and formula_70 is a function that has derivatives of all orders, is equal to zero outside of formula_56, and such that
formula_71
Such function formula_72 always exists.
For more details and a tutorial treatment, see the monograph by Brenner and Scott. The result can be extended to the case when the domain formula_17 is the union of a finite number of star-shaped domains, which is slightly more general than the strong cone property, and other polynomial spaces than the space of all polynomials up to a given degree.
Bound on linear functionals.
This result follows immediately from the above lemma, and it is also sometimes called the Bramble–Hilbert lemma, for example by Ciarlet. It is essentially Theorem 2 from.
Lemma Suppose that formula_73 is a continuous linear functional on formula_55 and formula_74 its dual norm. Suppose that formula_75 for all formula_41. Then there exists a constant formula_76 such that
formula_77
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\textstyle u"
},
{
"math_id": 1,
"text": "\\textstyle m-1"
},
{
"math_id": 2,
"text": "\\textstyle m"
},
{
"math_id": 3,
"text": "\\textstyle L^{p}"
},
{
"math_id": 4,
"text": "\\textstyle \\mathbb{R}^{n}"
},
{
"math_id": 5,
"text": "\\textstyle \\left( a,b\\right) "
},
{
"math_id": 6,
"text": " \\inf_{v\\in P_{m-1}}\\bigl\\Vert u^{\\left( k\\right) }-v^{\\left( k\\right) }\\bigr\\Vert_{L^{p}\\left( a,b\\right) }\\leq C\\left( m,k\\right) \\left( b-a\\right) ^{m-k}\\bigl\\Vert u^{\\left( m\\right) }\\bigr\\Vert_{L^{p}\\left( a,b\\right) }\\text{ for each integer }k\\leq m\\text{ and extended real }p\\geq1,"
},
{
"math_id": 7,
"text": "\\textstyle P_{m-1}"
},
{
"math_id": 8,
"text": "f^{(k)}"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "\\textstyle p=\\infty"
},
{
"math_id": 12,
"text": "\\textstyle m=2"
},
{
"math_id": 13,
"text": "\\textstyle k=0"
},
{
"math_id": 14,
"text": "\\textstyle v"
},
{
"math_id": 15,
"text": "\\textstyle x\\in\\left( a,b\\right) "
},
{
"math_id": 16,
"text": " \\left\\vert u\\left( x\\right) -v\\left( x\\right) \\right\\vert \\leq C\\left( b-a\\right) ^{2}\\sup_{\\left( a,b\\right) }\\left\\vert u^{\\prime\\prime }\\right\\vert. "
},
{
"math_id": 17,
"text": "\\textstyle \\Omega"
},
{
"math_id": 18,
"text": "\\textstyle \\mathbb{R}^n"
},
{
"math_id": 19,
"text": "\\textstyle n\\geq1"
},
{
"math_id": 20,
"text": "\\textstyle \\partial\\Omega"
},
{
"math_id": 21,
"text": "\\textstyle d"
},
{
"math_id": 22,
"text": "\\textstyle W_p^k(\\Omega)"
},
{
"math_id": 23,
"text": "\\textstyle D^\\alpha u"
},
{
"math_id": 24,
"text": "\\textstyle \\left\\vert \\alpha\\right\\vert "
},
{
"math_id": 25,
"text": "\\textstyle k"
},
{
"math_id": 26,
"text": "\\textstyle L^p(\\Omega)"
},
{
"math_id": 27,
"text": "\\textstyle \\alpha=\\left( \\alpha_1,\\alpha_2,\\ldots,\\alpha_n\\right) "
},
{
"math_id": 28,
"text": "\\textstyle \\left\\vert \\alpha\\right\\vert ="
},
{
"math_id": 29,
"text": "\\textstyle \\alpha_1+\\alpha_2+\\cdots+\\alpha_n"
},
{
"math_id": 30,
"text": "\\textstyle D^\\alpha"
},
{
"math_id": 31,
"text": "\\textstyle \\alpha_1"
},
{
"math_id": 32,
"text": "\\textstyle x_1"
},
{
"math_id": 33,
"text": "\\textstyle \\alpha_2"
},
{
"math_id": 34,
"text": "\\textstyle x_2"
},
{
"math_id": 35,
"text": "\\textstyle W_p^m(\\Omega)"
},
{
"math_id": 36,
"text": "\\textstyle L^p"
},
{
"math_id": 37,
"text": " \\left\\vert u\\right\\vert _{W_p^m(\\Omega)}=\\left( \\sum_{\\left\\vert \\alpha\\right\\vert =m}\\left\\Vert D^\\alpha u\\right\\Vert_{L^p(\\Omega)}^p\\right) ^{1/p}\\text{ if }1\\leq p<\\infty "
},
{
"math_id": 38,
"text": " \\left\\vert u\\right\\vert _{W_\\infty^{m}(\\Omega)}=\\max_{\\left\\vert \\alpha\\right\\vert =m}\\left\\Vert D^{\\alpha}u\\right\\Vert _{L^\\infty(\\Omega)}"
},
{
"math_id": 39,
"text": "\\textstyle P_k"
},
{
"math_id": 40,
"text": "\\textstyle D^{\\alpha}v=0"
},
{
"math_id": 41,
"text": "\\textstyle v\\in P_{m-1}"
},
{
"math_id": 42,
"text": "\\textstyle \\left\\vert \\alpha\\right\\vert =m"
},
{
"math_id": 43,
"text": "\\textstyle \\left\\vert u+v\\right\\vert _{W_p^m(\\Omega)}"
},
{
"math_id": 44,
"text": "\\textstyle C=C\\left( m,\\Omega\\right) "
},
{
"math_id": 45,
"text": "\\textstyle p"
},
{
"math_id": 46,
"text": "\\textstyle u\\in W_p^m(\\Omega)"
},
{
"math_id": 47,
"text": "\\textstyle k=0,\\ldots,m,"
},
{
"math_id": 48,
"text": " \\left\\vert u-v\\right\\vert _{W_p^k(\\Omega)}\\leq Cd^{m-k}\\left\\vert u\\right\\vert _{W_p^m(\\Omega)}. "
},
{
"math_id": 49,
"text": "\\textstyle \\left\\{ O_{i}\\right\\} "
},
{
"math_id": 50,
"text": "\\textstyle \\{C_{i}\\}"
},
{
"math_id": 51,
"text": "\\textstyle x+C_{i}"
},
{
"math_id": 52,
"text": "\\textstyle x"
},
{
"math_id": 53,
"text": "\\textstyle \\in\\Omega\\cap O_{i}"
},
{
"math_id": 54,
"text": "\\textstyle W_{p}^{m}(\\Omega)/P_{m-1}"
},
{
"math_id": 55,
"text": "\\textstyle W_{p}^{m}(\\Omega)"
},
{
"math_id": 56,
"text": "\\textstyle B"
},
{
"math_id": 57,
"text": "\\textstyle x\\in\\Omega"
},
{
"math_id": 58,
"text": "\\textstyle \\left\\{ x\\right\\} \\cup B"
},
{
"math_id": 59,
"text": "\\textstyle \\rho _\\max"
},
{
"math_id": 60,
"text": "\\textstyle \\gamma=d/\\rho_\\max"
},
{
"math_id": 61,
"text": "\\textstyle C=C\\left( m,n,\\gamma\\right) "
},
{
"math_id": 62,
"text": "\\textstyle \\gamma"
},
{
"math_id": 63,
"text": "\\textstyle n"
},
{
"math_id": 64,
"text": "v"
},
{
"math_id": 65,
"text": "v=Q^m u"
},
{
"math_id": 66,
"text": "\\textstyle Q^m u"
},
{
"math_id": 67,
"text": " Q^{m}u=\\int_B T_y^mu\\left( x\\right) \\psi\\left( y\\right) \\, dx, "
},
{
"math_id": 68,
"text": " T_y^m u\\left( x\\right) =\\sum\\limits_{k=0}^{m-1}\\sum\\limits_{\\left\\vert \\alpha\\right\\vert =k}\\frac{1}{\\alpha!}D^\\alpha u\\left( y\\right) \\left( x-y\\right)^\\alpha"
},
{
"math_id": 69,
"text": "\\textstyle y"
},
{
"math_id": 70,
"text": "\\textstyle \\psi\\geq0"
},
{
"math_id": 71,
"text": " \\int_B\\psi \\, dx=1. "
},
{
"math_id": 72,
"text": "\\textstyle \\psi"
},
{
"math_id": 73,
"text": "\\textstyle \\ell"
},
{
"math_id": 74,
"text": "\\textstyle \\left\\Vert \\ell\\right\\Vert _{W_{p}^{m}(\\Omega )^{^{\\prime}}}"
},
{
"math_id": 75,
"text": "\\textstyle \\ell\\left( v\\right) =0"
},
{
"math_id": 76,
"text": "\\textstyle C=C\\left( \\Omega\\right) "
},
{
"math_id": 77,
"text": " \\left\\vert \\ell\\left( u\\right) \\right\\vert \\leq C\\left\\Vert \\ell\\right\\Vert _{W_{p}^{m}(\\Omega)^{^{\\prime}}}\\left\\vert u\\right\\vert _{W_{p}^{m}(\\Omega)}. "
}
]
| https://en.wikipedia.org/wiki?curid=13925681 |
1392891 | Mu Alpha Theta | International honor society for mathematics
Mu Alpha Theta (ΜΑΘ) is an international mathematics honor society for high school and two-year college students. As of June 2015, it served over 108,000 student members in over 2,200 chapters in the United States and 20 foreign countries. Its main goals are to inspire keen interest in mathematics, develop strong scholarship in the subject, and promote the enjoyment of mathematics in high school and two-year college students. Its name is a rough transliteration of "math" into Greek (ΜΑΘ). Buchholz High School in Gainesville, Florida, won first place at the annually held national convention in 2023, for the 15th time.
History.
The Mu Alpha Theta National High School and Two-Year College Mathematics Honor Society was founded by Dr. Richard V. Andree and his wife, Josephine Andree, at the University of Oklahoma. In Andree's words, Mu Alpha Theta is "an organization dedicated to promoting scholarship in mathematics and establishing math as an integral part of high school and junior college education".
Pi Mu Epsilon, the National Collegiate Honor Society of Mathematics, contributed funds for the organization's initial expenses; the University of Oklahoma provided space, clerical help, and technical assistance. The Mathematical Association of America, a longtime primary sponsor of the organization, and the National Council of Teachers of Mathematics nominated the first officers and Board of Governors. The Society for Industrial and Applied Mathematics later became an official sponsor, followed by The American Mathematical Association of Two-Year Colleges.
The official journal of Mu Alpha Theta, "The Mathematical Log", was first issued on mimeograph and later appeared in printed form. It was published four times during the school year and featured articles, reports, news, and problems for students.
As of June 2015, Mu Alpha Theta served over 108,000 student members in over 2,200 chapters in the United States and 20 foreign countries. Its headquarters are located in Norman, Oklahoma.
Symbols.
The name Mu Alpha Theta is a rough transliteration of "math" into Greek (ΜΑΘ). Its colors are turquoise blue and gold. Its symbol is the Pythagorean theorem.
Activities.
Awards and scholarships.
Mu Alpha Theta presents several awards, including the Kalin Award to outstanding students. The Andree award is awarded to students who plan to become mathematics teachers. Chapter sponsors are also recognized by the Regional Sponsor Award, the Sister Scholastica Award, and the Huneke Award for the most dedicated sponsors. The Rubin Award is presented to a chapter doing volunteer work to help others to enjoy mathematics. Mu Alpha Theta presents numerous scholarships and grants to its members.
National conventions.
The first Mu Alpha Theta national convention was held at Trinity University in San Antonio, Texas. Each year the convention brings together hundreds of teachers and students from across the country for five days of math-related events. The location of each national convention is announced at the convention held the previous year.
Competitions.
ΜΑΘ is primarily a venue for mathematical competition. Different competitions have varying ways to test the student's mathematical knowledge.
Competition is divided into six levels or divisions: calculus, pre-calculus, algebra II, geometry, algebra I, and statistics. At state and national competitions, only three levels are used: Theta (geometry and algebra II), Alpha (pre-calculus), and calculus. There is only a Mu division at the national level. Additionally, there are usually open tests, which can be taken by students from any division, including statistics, number theory, and the history of math. Most students start at the level of math that they are currently enrolled in or have last taken and progress to higher levels. A student can begin at another level, but it must be higher. The only exception is that students enrolled in either algebra II or geometry can take whichever of the two they want, because not all schools offer these courses in the same sequence. Students competing at a higher level, such as pre-calculus, cannot then go back and compete at the algebra II level. This encourages students to compete with other students who are taking classes of similar mathematical difficulty.
Structure of competitions.
Individual test.
Each student who chooses to participate in a competition takes an "individual" test that corresponds to his or her level of competition. All competitions include this feature. Most individual tests consist of 30 multiple-choice questions (not including tie-breakers), A-E, where answer choice "E" is "None of the Above", or "None of These Answers"; abbreviated NOTA. Students are typically allotted 1 hour for the entire test. In most states they are graded on the following scale: +4 points for a correct answer, −1 points for an incorrect answer that was chosen, and 0 points if the question was left blank. This scoring system makes guessing statistically neutral. 120 points is considered a perfect score. Some competitions (e.g., Nationals and—as of the 2012/13 season—Florida) use alternate but equivalent systems of scoring, such as +5 for a correct answer, 0 for an incorrect answer, and +1 for a blank. A perfect score under this system would be 150. Calculators are never allowed to be used in competitions; the statistics division is the exception to this rule. This rule is in place for multiple reasons, the first being that modern calculators may include the ability to solve entire problems without any analysis of the equation, which would mean that students not having the mathematical knowledge but the ability to use a calculator could unfairly get problems correct. The second reason is so problems can remain arithmetically simple, in other words, so that a problem can utilize simple numbers and focus on the concepts without worrying that a calculator would give an advantage of some sort. Statistics is an exception because the field of statistics utilizes calculators and computers tremendously and not allowing calculators would require the students to carry out unavoidable tedious calculations by hand, thus taking away focus from the concepts.
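The two grading scales described above are equivalent up to a constant offset: on a 30-question test with c correct, w wrong, and b blank answers, 5c + b = (4c − w) + 30 because b = 30 − c − w. The sketch below (hypothetical score tallies, not official contest software) checks this:

```python
def score_plus4(correct, wrong, blank):
    """Classic scheme: +4 per correct, -1 per wrong, 0 per blank (max 120)."""
    return 4 * correct - wrong

def score_plus5(correct, wrong, blank):
    """Alternate scheme: +5 per correct, 0 per wrong, +1 per blank (max 150)."""
    return 5 * correct + blank

# On a 30-question test the two schemes rank students identically,
# differing only by a constant offset of 30 points.
for c, w in [(30, 0), (25, 5), (20, 4), (0, 0)]:
    b = 30 - c - w
    assert score_plus5(c, w, b) == score_plus4(c, w, b) + 30

print(score_plus4(30, 0, 0), score_plus5(30, 0, 0))  # 120 150
```

Because the offset is the same for every student, neither scheme changes the ranking, only the nominal "perfect score".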
Tie-breakers are only done for students who tie but do not get a perfect score. They are sometimes used in the case when money or prizes are being distributed to the winners of the competition, and a tiebreaker will be used even if both students have a perfect score. Tie-breakers are conducted according to the "sudden death" method. For example, in a tie-breaker, if student A scored the same as student B, and each missed 1 question, the student who missed question #5 will win over the student who missed question #3; students who start missing questions last are ranked higher, given the same scores. If the sudden death method doesn't resolve the tie, in other words, both students have the same answers, then a tie-breaker question is made and the person to turn in the correct answer the fastest wins the tie. If both get it wrong or if both turn in the correct answer at the same time then the process is repeated until the tie is resolved. All students who get a perfect score are considered to place 1st. Due to the large number of students, as compared to a typical high school classroom, who participate in competitions, scantrons are used as answer sheets; their main advantage is that they can be graded by a computer. These are similar in type to the answer sheets used in standardized tests such as the SAT and the ACT.
Team round.
In most competitions, the sponsor or "coach" is allowed to select 4 students per division to participate in a "team" test (formally called "Team Bowl".) Each team member sits with the rest of their team and is allowed to communicate and collaborate during the team round. A few competitions do not allow the team members to sit together; rather every member of the division takes the team test alone and without conversing, then the 4 highest scores are averaged together; these 4 people are on the team. Some competitions allow each school to have a second team for each division, "Team II". When there is enough space, schools may take advantage of this multiple-team rule and have up to four teams in one division's team round, though only the first two teams are considered for the Sweepstakes.
The grading scale is different for the team round. Questions are given one by one, whereas in the individual round students are given the test in its entirety. There are usually 12 questions (not including tie-breakers), and each team has 4 minutes to answer the question. If they answer the question correctly before the first minute, they receive 16 points, if they answer before the second they receive 12 points, before 3 minutes, 8 points, 4 points before 4 minutes has expired, and 0 points for anything, even the correct answer, after 4 minutes. In some competitions, a sliding scale is used. For example, if no team turned in an answer to a particular question in the first minute but another team answered correctly in the second minute, the team will be awarded the full 16 points even though they answered it in the second minute; a third minute-answering team would get twelve points; and the fourth minute-answering team would get eight points. The answer is usually written in and the students are not penalized for guessing. The team score from the team round is then summed up with the score of the individuals on the team to acquire the total team score used in rankings. The same calculator rule in the individual round is in effect in the team round; with statistics still the exception to the rule.
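The minute-based team scoring can be sketched as a simple lookup (an illustrative function, not official contest software; it assumes the non-sliding scale, where points depend only on the elapsed time for the answering team):

```python
def team_points(minutes_elapsed, correct=True):
    """Points for a team-round question answered after `minutes_elapsed` minutes."""
    if not correct or minutes_elapsed >= 4:
        return 0
    return 16 - 4 * int(minutes_elapsed)  # minute 1 -> 16, 2 -> 12, 3 -> 8, 4 -> 4

assert [team_points(m) for m in (0.5, 1.5, 2.5, 3.5, 4.5)] == [16, 12, 8, 4, 0]
print(team_points(0.5), team_points(2.5, correct=False))  # 16 0
```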
Ciphering.
States and nationals include a ciphering round that is not present at other competitions. Students are given a stack of ten questions. They have three minutes to complete each question. For solving it in the first minute, they receive twelve points, during the second minute, eight points, and during the third minute, four points. At the competitions with this test, it is included along with the individual test scores and team score for the total team score.
Disputes.
For fifteen minutes after the individual round and fifteen minutes after the team round students can file what are known as "Disputes". If a student is confident that they arrived at the correct answer and that the given answer is incorrect they can fill out a dispute form showing their work and explaining why their answer is the correct one. A resolution committee then reviews all disputes submitted and either denies them or accepts them based on the correctness of the student's reasoning. In this case, the official answer is changed and each student/team's score is recalculated using the new answer. Most competitions have an errata sheet and verification forms to provide a central location for accepted and denied disputes.
Disputes can change the accepted answer, accept two answers if they are both valid according to competition rules, or more rarely throw out a flawed question. There is an extremely rare status given to a dispute that is called a "unique interpretation". This occurs when a student interprets a problem in a drastically different, yet completely legitimate, way than the problem intended and thus changes the problem entirely. In this case only that student is given credit for their answer and the original answer remains the same for the rest of the competitors.
Sweepstakes.
"Sweepstakes" awards are given to the top (normally ten) schools whose students average the best performance in each test or division. Sweepstakes points are awarded on a transformed t-score based system, which awards points not only for relative place but for relative scores. Students or teams who win by a large margin, relative to the standard deviation of the rest of the group, contribute higher t-scores to their teams. T-scores from each test and team round are weighted and added to comprise the total sweepstakes score of a school.
Note that the T-scores used in scoring are a transformation of the statistical T-score:
formula_0
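Assuming formula_0 uses the usual standard score z = (x − μ)/σ for T_stats, the transformation can be sketched as follows (the raw scores are illustrative only):

```python
from statistics import mean, pstdev

def sweepstakes_t_scores(raw_scores):
    """Convert raw scores to T-scores: T = 50 + 10*z, with z the standard score."""
    mu, sigma = mean(raw_scores), pstdev(raw_scores)
    return [50 + 10 * (x - mu) / sigma for x in raw_scores]

scores = [60, 80, 100, 120]
print([round(t, 1) for t in sweepstakes_t_scores(scores)])
# [36.6, 45.5, 54.5, 63.4]
```

Note how a score far above the mean, relative to the spread of the field, earns proportionally more sweepstakes points than a narrow win would.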
Some tests, such as trivia competitions, may be excluded from sweepstakes calculations. They include the Gemini, Mental Math, and Speed Math competitions available at some state conventions and at the national convention.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_{scoring} = 50 + 10(T_{stats})"
}
]
| https://en.wikipedia.org/wiki?curid=1392891 |
13933711 | Dynamic voltage scaling | Power management technique of varying the voltage used by a component
In computer architecture, dynamic voltage scaling is a power management technique in which the voltage used in a component is increased or decreased, depending upon circumstances. Dynamic voltage scaling to increase voltage is known as overvolting; dynamic voltage scaling to decrease voltage is known as undervolting. Undervolting is done in order to conserve power, particularly in laptops and other mobile devices, where energy comes from a battery and thus is limited, or in rare cases, to increase reliability. Overvolting is done in order to support higher frequencies for performance.
The term "overvolting" is also used to refer to increasing static operating voltage of computer components to allow operation at higher speed (overclocking).
Background.
MOSFET-based digital circuits operate using voltages at circuit nodes to represent logical state. The voltage at these nodes switches between a high voltage and a low voltage during normal operation—when the inputs to a logic gate transition, the transistors making up that gate may toggle the gate's output.
Toggling a MOSFET's state requires changing its gate voltage from below the transistor's threshold voltage to above it (or from above it to below it). However, changing the gate's voltage requires charging or discharging the capacitance at its node. This capacitance is the sum of capacitances from various sources: primarily transistor gate capacitance, diffusion capacitance, and wires (coupling capacitance).
Higher supply voltages result in faster slew rate (rate of change of voltage per unit of time) when charging and discharging, which allows for quicker transitioning through the MOSFET's threshold voltage. Additionally, the more the gate voltage exceeds the threshold voltage, the lower the resistance of the transistor's conducting channel. This results in a lower RC time constant for quicker charging and discharging of the capacitance of the subsequent logic stage. Quicker transitioning afforded by higher supply voltages allows for operating at higher frequencies.
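A simplified single-RC model makes the frequency effect concrete. If a node charges toward V_DD through a fixed channel resistance R (an assumption; in reality R itself also falls at higher gate overdrive, as noted above), the time to cross a threshold V_th is t = RC·ln(V_DD/(V_DD − V_th)). The component values below are illustrative, not from any particular process:

```python
import math

def time_to_threshold(r, c, vdd, vth):
    """Time for a node charging toward V_DD through R to cross V_th (RC model)."""
    return r * c * math.log(vdd / (vdd - vth))

# Illustrative values: R = 1 kOhm channel resistance, C = 10 fF node capacitance.
for vdd in (0.9, 1.2):
    t = time_to_threshold(1e3, 10e-15, vdd, vth=0.4)
    print(vdd, round(t * 1e12, 2), "ps")
# 0.9 V -> 5.88 ps, 1.2 V -> 4.05 ps: the higher supply crosses V_th sooner.
```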
Methods.
Many modern components allow voltage regulation to be controlled through software (for example, through the BIOS). It is usually possible to control the voltages supplied to the CPU, RAM, PCI, and PCI Express (or AGP) port through a PC's BIOS.
However, some components do not allow software control of supply voltages, and hardware modification is required by overclockers seeking to overvolt the component for extreme overclocks. Video cards and motherboard northbridges are components which frequently require hardware modifications to change supply voltages. These modifications are known as "voltage mods" or "Vmod" in the overclocking community.
Undervolting.
Undervolting is reducing the voltage of a component, usually the processor, reducing temperature and cooling requirements, and possibly allowing a fan to be omitted. Just like overclocking, undervolting is highly subject to the so-called silicon lottery: one CPU may tolerate undervolting slightly better than another, even of the same model.
Power.
The "switching power" dissipated by a chip using static CMOS gates is formula_0, where formula_1 is the capacitance being switched per clock cycle, formula_2 is the supply voltage, formula_3 is the switching frequency, and formula_4 is the activity factor. Since formula_2 is squared, this part of the power consumption decreases quadratically with voltage. The formula is not exact however, as many modern chips are not implemented using 100% CMOS, but also use special memory circuits, dynamic logic such as domino logic, etc. Moreover, there is also a static leakage current, which has become more and more accentuated as feature sizes have become smaller (below 90 nanometres) and threshold levels lower.
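The quadratic dependence on supply voltage is significant in practice: dropping from 1.2 V to 0.9 V, all else equal, cuts switching power to (0.9/1.2)² = 56.25% of its original value. A quick numerical check of the formula (the component values are illustrative, not from any particular chip):

```python
def switching_power(alpha, C, V, f):
    """Dynamic (switching) CMOS power: P = alpha * C * V^2 * f."""
    return alpha * C * V**2 * f

# Example: 1 nF of switched capacitance at 1 GHz, activity factor 0.1.
p_high = switching_power(0.1, 1e-9, 1.2, 1e9)  # at 1.2 V
p_low  = switching_power(0.1, 1e-9, 0.9, 1e9)  # at 0.9 V
print(round(p_high, 3), round(p_low, 3))  # 0.144 0.081 (watts)
print(round(p_low / p_high, 4))           # 0.5625
```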
Accordingly, dynamic voltage scaling is widely used as part of strategies to manage switching power consumption in battery powered devices such as cell phones and laptop computers. Low voltage modes are used in conjunction with lowered clock frequencies to minimize power consumption associated with components such as CPUs and DSPs; only when significant computational power is needed will the voltage and frequency be raised.
Some peripherals also support low voltage operational modes. For example, low power MMC and SD cards can run at 1.8 V as well as at 3.3 V, and driver stacks may conserve power by switching to the lower voltage after detecting a card which supports it.
When leakage current is a significant factor in terms of power consumption, chips are often designed so that portions of them can be powered completely off. This is not usually viewed as being dynamic voltage scaling, because it is not transparent to software. When sections of chips can be turned off, as for example on TI OMAP3 processors, drivers and other support software need to support that.
Program execution speed.
The speed at which a digital circuit can switch states - that is, to go from "low" (VSS) to "high" (VDD) or vice versa - is proportional to the voltage differential in that circuit. Reducing the voltage means that circuits switch slower, reducing the maximum frequency at which that circuit can run. This, in turn, reduces the rate at which program instructions can be issued, which may increase run time for program segments that are sufficiently CPU-bound.
This again highlights why dynamic voltage scaling is generally done in conjunction with dynamic frequency scaling, at least for CPUs. There are complex tradeoffs to consider, which depend on the particular system, the load presented to it, and power management goals. When quick responses are needed (e.g. Mobile Sensors and Context-Aware Computing), clocks and voltages might be raised together. Otherwise, they may both be kept low to maximize battery life.
Implementations.
The 167-processor AsAP 2 chip enables individual processors to make extremely fast (on the order of 1–2 ns) and locally controlled changes to their own supply voltages. Processors connect their local power grid to either a higher (VddHi) or lower (VddLow) supply voltage, or can be cut off entirely from either grid to dramatically cut leakage power.
Another approach uses per-core on-chip switching regulators for dynamic voltage and frequency scaling (DVFS).
Operating system API.
Unix systems provide a userspace governor, allowing software to modify CPU frequencies (though limited to hardware capabilities).
System stability.
Dynamic frequency scaling is another power conservation technique that works on the same principles as dynamic voltage scaling. Both dynamic voltage scaling and dynamic frequency scaling can be used to prevent computer system overheating, which can result in program or operating system crashes, and possibly hardware damage. Reducing the voltage supplied to the CPU below the manufacturer's recommended minimum setting can result in system instability.
Temperature.
The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature causing thermal runaway. Increases in voltage or frequency may increase system power demands even faster than the CMOS formula indicates, and vice versa.
Caveats.
The primary caveat of overvolting is increased heat: the power dissipated by a circuit increases with the square of the voltage applied, so even small voltage increases significantly affect power. At higher temperatures, transistor performance is adversely affected, and at some threshold, the performance reduction due to the heat exceeds the potential gains from the higher voltages. Overheating and damage to circuits can occur very quickly when using high voltages.
There are also longer-term concerns: various adverse device-level effects such as hot carrier injection and electromigration occur more rapidly at higher voltages, decreasing the lifespan of overvolted components.
To mitigate the increased heat from overvolting, liquid cooling is often recommended, as it achieves higher thermal ceilings and thresholds than a typical aftermarket air cooler. Liquid coolers, commonly sold as "all-in-one" (AIO) units, offer a more effective method of cooling by relocating heat outside the computer case via the fans on the radiator, whereas air cooling only disperses heat from the affected component, increasing overall ambient temperatures inside the case.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha \\cdot C \\cdot V^2 \\cdot f"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\\alpha "
}
]
| https://en.wikipedia.org/wiki?curid=13933711 |
13934390 | Load line (electronics) | Graphical analysis tool for electronic circuit engineering
In graphical analysis of nonlinear electronic circuits, a load line is a line drawn on the current–voltage characteristic graph for a nonlinear device like a diode or transistor. It represents the constraint put on the voltage and current in the nonlinear device by the external circuit. The load line, usually a straight line, represents the response of the linear part of the circuit, connected to the nonlinear device in question. The points where the characteristic curve and the load line intersect are the possible operating point(s) (Q points) of the circuit; at these points the current and voltage parameters of both parts of the circuit match.
The example at right shows how a load line is used to determine the current and voltage in a simple diode circuit. The diode, a nonlinear device, is in series with a linear circuit consisting of a resistor, R and a voltage source, VDD. The characteristic curve "(curved line)", representing the current "I" through the diode for any given voltage across the diode "V"D, is an exponential curve. The load line "(diagonal line)", representing the relationship between current and voltage due to Kirchhoff's voltage law applied to the resistor and voltage source, is
formula_0
Since the same current flows through each of the three elements in series, and the voltage produced by the voltage source and resistor is the voltage across the terminals of the diode, the operating point of the circuit will be at the intersection of the curve with the load line.
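The intersection can also be found numerically. The sketch below models the diode with the Shockley equation (illustrative, assumed parameters: saturation current 1 pA, thermal voltage about 25.85 mV) and bisects on the difference between the diode current and the load-line current, for the example values V_DD = 5 V and R = 1 kΩ:

```python
import math

def diode_current(vd, i_s=1e-12, vt=0.02585):
    """Shockley diode equation with assumed I_S and V_T."""
    return i_s * (math.exp(vd / vt) - 1.0)

def operating_point(vdd, r):
    """Find V_D where the diode curve crosses the load line I = (V_DD - V_D)/R."""
    lo, hi = 0.0, vdd
    for _ in range(100):  # bisection: diode current rises, load-line current falls
        mid = (lo + hi) / 2
        if diode_current(mid) > (vdd - mid) / r:
            hi = mid
        else:
            lo = mid
    return lo

vd = operating_point(vdd=5.0, r=1000.0)
i_ma = (5.0 - vd) / 1000.0 * 1000
print(round(vd, 2), "V,", round(i_ma, 1), "mA")  # 0.57 V, 4.4 mA
```

As expected for a silicon-like diode, the operating point lands near 0.6 V across the diode with nearly all of V_DD dropped across the resistor.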
In a circuit with a three terminal device, such as a transistor, the current–voltage curve of the collector-emitter current depends on the base current. This is depicted on graphs by a series of (IC–VCE) curves at different base currents. A load line drawn on this graph shows how the base current will affect the operating point of the circuit.
Load lines for common configurations.
Transistor load line.
The load line diagram at right is for a resistive load in a common emitter circuit. The load line shows how the collector load resistor (RL) constrains the circuit voltage and current. The diagram also plots the transistor's collector current "I"C versus collector voltage "V"CE for different values of base current "I"base. The intersections of the load line with the transistor characteristic curves represent the circuit-constrained values of "I"C and "V"CE at different base currents.
If the transistor could pass all the current available, with no voltage dropped across it, the collector current would be the supply voltage VCC over RL. This is the point where the load line crosses the vertical axis. Even at saturation, however, there will always be some voltage from collector to emitter.
Where the load line crosses the horizontal axis, the transistor current is minimum (approximately zero). The transistor is said to be cut off, passing only a very small leakage current, and so very nearly the entire supply voltage appears as VCE.
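The two axis intercepts described above follow directly from the load-line equation I_C = (V_CC − V_CE)/R_L. A minimal sketch with illustrative supply and resistor values:

```python
def load_line_endpoints(vcc, rl):
    """Intercepts of the DC load line I_C = (V_CC - V_CE)/R_L.

    Returns (I_C at V_CE = 0, V_CE at I_C = 0), ignoring the small
    residual V_CE at saturation noted in the text.
    """
    return vcc / rl, vcc

i_sat, v_cutoff = load_line_endpoints(vcc=12.0, rl=2.2e3)
print(round(i_sat * 1000, 2), "mA,", v_cutoff, "V")  # 5.45 mA, 12.0 V
```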
The "operating point" of the circuit in this configuration (labelled Q) is generally designed to be in the "active region", approximately in the middle of the load line's active region for amplifier applications. Adjusting the base current so that the circuit is at this operating point with no signal applied is called biasing the transistor. Several techniques are used to stabilize the operating point against minor changes in temperature or transistor operating characteristics. When a signal is applied, the base current varies, and the collector-emitter voltage in turn varies, following the load line - the result is an amplifier stage with gain.
A load line is normally drawn on Ic-Vce characteristics curves for the transistor used in an amplifier circuit. The same technique is applied to other types of non-linear elements such as vacuum tubes or field effect transistors.
DC and AC load lines.
Semiconductor circuits typically have both DC and AC currents in them, with a source of DC current to bias the nonlinear semiconductor to the correct operating point, and the AC signal superimposed on the DC. Load lines can be used separately for both DC and AC analysis. The DC load line is the load line of the DC equivalent circuit, defined by reducing the reactive components to zero (replacing capacitors by open circuits and inductors by short circuits). It is used to determine the correct DC operating point, often called the Q point.
Once a DC operating point is defined by the DC load line, an AC load line can be drawn through the Q point. The AC load line is a straight line with a slope equal to the AC impedance facing the nonlinear device, which is in general different from the DC resistance. The ratio of AC voltage to current in the device is defined by this line. Because the impedance of the reactive components will vary with frequency, the slope of the AC load line depends on the frequency of the applied signal. So there are many AC load lines, that vary from the DC load line (at low frequency) to a limiting AC load line, all having a common intersection at the DC operating point. This limiting load line, generally referred to as the "AC load line", is the load line of the circuit at "infinite frequency", and can be found by replacing capacitors with short circuits, and inductors with open circuits. | [
{
"math_id": 0,
"text": "V_D = V_{DD} - I R \\,"
}
]
| https://en.wikipedia.org/wiki?curid=13934390 |
1393678 | PGL2 | PGL2 may refer to
<templatestyles src="Dmbox/styles.css" />
Topics referred to by the same term. This page lists articles associated with the same title formed as a letter–number combination.
{
"math_id": 0,
"text": " \\mathrm{PGL}_2 "
}
]
| https://en.wikipedia.org/wiki?curid=1393678 |
13938136 | Kepler triangle | Right triangle related to the golden ratio
A Kepler triangle is a special right triangle with edge lengths in geometric progression. The ratio of the progression is formula_0 where formula_1 is the golden ratio, and the progression can be written: formula_2, or approximately formula_3. Squares on the edges of this triangle have areas in another geometric progression, formula_4. Alternative definitions of the same triangle characterize it in terms of the three Pythagorean means of two numbers, or via the inradius of isosceles triangles.
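The right-triangle property follows from the defining identity of the golden ratio, φ² = φ + 1: sides in the ratio 1 : √φ : φ have squares 1 : φ : φ², so the two smaller squares sum to the largest, satisfying the Pythagorean theorem. A quick numerical check:

```python
phi = (1 + 5 ** 0.5) / 2  # the golden ratio, about 1.618

# Sides in geometric progression 1 : sqrt(phi) : phi form a right triangle,
# because their squares 1, phi, phi^2 satisfy 1 + phi = phi^2.
a, b, c = 1.0, phi ** 0.5, phi
assert abs(a**2 + b**2 - c**2) < 1e-12
print(round(phi ** 0.5, 4))  # common ratio sqrt(phi), about 1.272
```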
This triangle is named after Johannes Kepler, but can be found in earlier sources. Although some sources claim that ancient Egyptian pyramids had proportions based on a Kepler triangle, most scholars believe that the golden ratio was not known to Egyptian mathematics and architecture.
History.
The Kepler triangle is named after the German mathematician and astronomer Johannes Kepler (1571–1630), who wrote about this shape in a 1597 letter. Two concepts that can be used to analyze this triangle, the Pythagorean theorem and the golden ratio, were both of interest to Kepler, as he wrote elsewhere:
<templatestyles src="Template:Blockquote/styles.css" />Geometry has two great treasures: one is the theorem of Pythagoras, the other the division of a line into extreme and mean ratio. The first we may compare to a mass of gold, the second we may call a precious jewel.
However, Kepler was not the first to describe this triangle. Kepler himself credited it to "a music professor named Magirus". The same triangle appears earlier in a book of Arabic mathematics, the "Liber mensurationum" of Abû Bekr, known from a 12th-century translation by Gerard of Cremona into Latin, and in the "Practica geometriae" of Fibonacci (published in 1220–1221), who defined it in a similar way to Kepler. A little earlier than Kepler, Pedro Nunes wrote about it in 1567, and it is "likely to have been widespread in late medieval and Renaissance manuscript traditions". It has also been independently rediscovered several times, later than Kepler.
According to some authors, a "golden pyramid" with a doubled Kepler triangle as its cross-section accurately describes the design of Egyptian pyramids such as the Great Pyramid of Giza; one source of this theory is a 19th-century misreading of Herodotus by pyramidologist John Taylor. Many other theories of proportion have been proposed for the same pyramid, unrelated to the Kepler triangle. Because these different theories are very similar in the numeric values they obtain, and because of inaccuracies in measurement, in part caused by the destruction of the outer surface of the pyramid, such theories are difficult to resolve based purely on physical evidence. The match in proportions to the Kepler triangle may well be a numerical coincidence: according to scholars who have investigated this relationship, the ancient Egyptians most likely did not know about or use the golden ratio in their mathematics or architecture. Instead, the proportions of the pyramid can be adequately explained using integer ratios, based on a right triangle with sides 11 and 14.
The name "Kepler triangle" for this shape was used by Roger Herz-Fischler, based on Kepler's 1597 letter, as early as 1979. Another name for the same triangle, used by Matila Ghyka in his 1946 book on the golden ratio, "The Geometry of Art and Life", is the "triangle of Price", after pyramidologist W. A. Price.
Definitions.
The Kepler triangle is uniquely defined by the properties of being a right triangle and of having its side lengths in geometric progression,
or equivalently having the squares on its sides in geometric progression. The ratio of the progression of side lengths is formula_0, where formula_1 is the golden ratio, and the progression can be written: formula_2, or approximately 1 : 1.272 : 1.618. Squares on the edges of this triangle have areas in another geometric progression, formula_4.
The fact that the triangle with these proportions is a right triangle follows from the fact that, for squared edge lengths with these proportions,
the defining polynomial of the golden ratio is the same as the formula given by the Pythagorean theorem for the squared edge lengths of a right triangle:
formula_5
Because this equation is true for the golden ratio, these three lengths obey the Pythagorean theorem, and form a right triangle. Conversely, in any right triangle whose squared edge lengths are in geometric progression with any ratio formula_6, the Pythagorean theorem implies that this ratio obeys the identity formula_7. Therefore, the ratio must be the unique positive solution to this equation, the golden ratio, and the triangle must be a Kepler triangle.
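As a quick numeric check of this derivation (a short Python sketch, not part of the original article): the squared sides 1, φ, φ² satisfy the Pythagorean identity precisely because φ² = φ + 1.

```python
import math

phi = (1 + math.sqrt(5)) / 2            # golden ratio
a, b, c = 1.0, math.sqrt(phi), phi      # sides in geometric progression

assert abs(b / a - c / b) < 1e-12       # common ratio sqrt(phi)
assert abs(phi**2 - (phi + 1)) < 1e-12  # defining polynomial of phi
assert abs(a**2 + b**2 - c**2) < 1e-12  # Pythagorean theorem holds
```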
The three edge lengths formula_8, formula_0 and formula_9 are the harmonic mean, geometric mean, and arithmetic mean, respectively, of the two numbers formula_10. These three ways of combining two numbers were all studied in ancient Greek mathematics, and are called the Pythagorean means. Conversely, this can be taken as an alternative definition of the Kepler triangle: it is a right triangle whose edge lengths are the three Pythagorean means of some two numbers. The only triangles for which this is true are the Kepler triangles.
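This characterization can likewise be verified numerically (an illustrative sketch): the three Pythagorean means of φ + 1 and φ − 1 reproduce the side lengths 1, √φ, and φ.

```python
import math

phi = (1 + math.sqrt(5)) / 2
x, y = phi + 1, phi - 1     # the two numbers being averaged

harmonic   = 2 * x * y / (x + y)
geometric  = math.sqrt(x * y)      # x * y = phi**2 - 1 = phi
arithmetic = (x + y) / 2

assert abs(harmonic - 1) < 1e-12                 # shortest side
assert abs(geometric - math.sqrt(phi)) < 1e-12   # middle side
assert abs(arithmetic - phi) < 1e-12             # hypotenuse
```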
A third, equivalent way of defining this triangle comes from a problem of maximizing the inradius of isosceles triangles.
Among all isosceles triangles with a fixed choice of the length of the two equal sides but with a variable base length, the one with the largest inradius is formed from two copies of the Kepler triangle, reflected across their longer sides from each other. Therefore, the Kepler triangle can be defined as the right triangle that, among all right triangles with the same hypotenuse, forms with its reflection the isosceles triangle of maximum inradius. The same reflection also forms an isosceles triangle that, for a given perimeter, contains the largest possible semicircle.
Properties.
If the short side of a Kepler triangle has length formula_11, the other sides will have lengths formula_12 and formula_13. The area can be calculated by the standard formula for the area of right triangles (half the product of the two short sides) as formula_14. The cosine of the larger of the two non-right angles is the ratio of the adjacent side (the shorter of the two sides) to the hypotenuse, which equals the reciprocal of formula_9, from which it follows that the two non-right angles are
formula_15
and
formula_16
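These angle values can be recovered numerically from the ratio 1/φ (a short illustrative sketch):

```python
import math

phi = (1 + math.sqrt(5)) / 2
smaller = math.degrees(math.asin(1 / phi))   # the angle opposite the shortest side
larger  = math.degrees(math.acos(1 / phi))   # the angle adjacent to it

assert abs(smaller + larger - 90) < 1e-9     # complementary, as in any right triangle
assert abs(smaller - 38.1727) < 1e-3
assert abs(larger - 51.8273) < 1e-3
```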
Jerzy Kocik has observed that the larger of these two angles is also the angle formed by the centers of triples of consecutive circles in Coxeter's loxodromic sequence of tangent circles.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt\\varphi"
},
{
"math_id": 1,
"text": "\\varphi=(1+\\sqrt{5})/2"
},
{
"math_id": 2,
"text": " 1 : \\sqrt\\varphi : \\varphi"
},
{
"math_id": 3,
"text": "1 : 1.272 : 1.618"
},
{
"math_id": 4,
"text": "1:\\varphi:\\varphi^2"
},
{
"math_id": 5,
"text": "\\varphi^2 = \\varphi + 1."
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "\\rho^2=\\rho+1"
},
{
"math_id": 8,
"text": "1"
},
{
"math_id": 9,
"text": "\\varphi"
},
{
"math_id": 10,
"text": "\\varphi\\pm1"
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "s\\sqrt\\varphi"
},
{
"math_id": 13,
"text": "s\\varphi"
},
{
"math_id": 14,
"text": "\\tfrac{s^2}{2}\\sqrt\\varphi"
},
{
"math_id": 15,
"text": "\\theta=\\sin^{-1}\\frac{1}{\\varphi}\\approx 38.1727^\\circ"
},
{
"math_id": 16,
"text": "\\theta=\\cos^{-1}\\frac{1}{\\varphi}\\approx 51.8273^\\circ."
},
{
"math_id": 17,
"text": "1:\\sqrt2:\\sqrt3"
}
]
| https://en.wikipedia.org/wiki?curid=13938136 |
13939428 | 74181 | First arithmetic logic unit (ALU) on a single chip
The 74181 is a 4-bit slice arithmetic logic unit (ALU), implemented as a 7400 series TTL integrated circuit. Introduced by Texas Instruments in February 1970, it was the first complete ALU on a single chip. It was used as the arithmetic/logic core in the CPUs of many historically significant minicomputers and other devices.
The 74181 represents an evolutionary step between the CPUs of the 1960s, which were constructed using discrete logic gates, and today's single-chip microprocessor CPUs. Although no longer used in commercial products, the 74181 is still referenced in computer organization textbooks and technical papers. It is also sometimes used in "hands-on" college courses to train future computer architects.
Specifications.
The 74181 is a 7400 series medium-scale integration (MSI) TTL integrated circuit, containing the equivalent of 75 logic gates
and most commonly packaged as a 24-pin DIP. The 4-bit wide ALU can perform all the traditional add / subtract / decrement operations with or without carry, as well as AND / NAND, OR / NOR, XOR, and shift. Many variations of these basic functions are available, for a total of 16 arithmetic and 16 logical operations on two four-bit words. Multiply and divide functions are not provided but can be performed in multiple steps using the shift and add or subtract functions.
Shift is not an explicit function but can be derived from several available functions; e.g., selecting function "A plus A" with carry (M=0) will give an arithmetic left shift of the A input.
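The derived left shift can be sketched with a toy 4-bit model (illustrative Python; a simplified behavioural model, not a full description of the chip):

```python
def alu_add(a, b, carry_in=0):
    """4-bit addition as in the 74181's arithmetic mode (M = 0),
    returning the 4-bit result F and the carry-out (simplified model)."""
    total = (a & 0xF) + (b & 0xF) + carry_in
    return total & 0xF, total >> 4

# selecting "A plus A" doubles the operand: an arithmetic left shift by one
a = 0b0101
f, cout = alu_add(a, a)
assert f == (a << 1) & 0xF and cout == 0   # 0b0101 -> 0b1010
```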
The 74181 performs these operations on two four-bit operands generating a four-bit result with carry in 22 nanoseconds (45 MHz). The 74S181 performs the same operations in 11 nanoseconds (90 MHz), while the 74F181 performs the operations in 7 nanoseconds (143 MHz) (typical).
Multiple 'slices' can be combined for arbitrarily large word sizes. For example, sixteen 74S181s and five 74S182 look-ahead carry generators can be combined to perform the same operations on 64-bit operands in 28 nanoseconds (36 MHz). Although overshadowed by the performance of today's multi-gigahertz 64-bit microprocessors, this was quite impressive when compared to the sub-megahertz clock speeds of the early four- and eight-bit microprocessors.
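Cascading slices can be sketched as follows — a simplified ripple-carry model in Python (real designs chain the carry through 74182 look-ahead generators for speed; the function names here are illustrative):

```python
def slice4(a, b, cin):
    """One 4-bit slice performing addition (behavioural sketch)."""
    t = (a & 0xF) + (b & 0xF) + cin
    return t & 0xF, t >> 4

def add16(a, b, cin=0):
    """16-bit addition built from four cascaded 4-bit slices."""
    result, carry = 0, cin
    for i in range(4):                    # least-significant slice first
        nibble, carry = slice4((a >> 4 * i) & 0xF, (b >> 4 * i) & 0xF, carry)
        result |= nibble << 4 * i
    return result, carry

total, carry_out = add16(0xABCD, 0x1234)  # 0xBE01, no carry-out
```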
Implemented functions.
The 74181 implements all 16 possible logical functions with two variables. Its arithmetic functions include addition and subtraction with and without carry. It can be used with data in active-high (high corresponds to 1) and active-low (low corresponds to 1) logic levels.
Inputs and outputs.
There are four selection inputs, codice_0 to codice_1, to select the function. codice_2 is used to select between logical and arithmetic operation, and codice_3 is the carry-in.
codice_4 and codice_5 are the data to be processed (four bits each). codice_6 is the result output. There are also codice_7 and codice_8 signals for a carry-lookahead adder, which can be implemented via one or several 74182 chips.
Function table for output F.
In the following table, AND is denoted as a product, OR with a formula_0 sign, XOR with formula_1, logical NOT with an overbar and arithmetic plus and minus using the words plus and minus.
Significance.
The 74181 greatly simplified the development and manufacture of computers and other devices that required high speed computation during the 1970s through the early 1980s, and is still referenced as a "classic" ALU design.
Prior to the introduction of the 74181, computer CPUs occupied multiple circuit boards and even very simple computers could fill multiple cabinets. The 74181 allowed an entire CPU and in some cases, an entire computer to be constructed on a single large printed circuit board. The 74181 occupies a historically significant stage between older CPUs based on discrete logic functions spread over multiple circuit boards and modern microprocessors that incorporate all CPU functions in a single chip. The 74181 was used in various minicomputers and other devices beginning in the 1970s, but as microprocessors became more powerful the practice of building a CPU from discrete components fell out of favour and the 74181 was not used in any new designs.
Today.
By 1994, CPU designs based on the 74181 were not commercially viable due to the comparatively low price and high performance of microprocessors. However, the 74181 is still of interest in the teaching of computer organization and CPU design because it provides opportunities for hands-on design and experimentation that are rarely available to students.
Computers.
Many computer CPUs and subsystems were based on the 74181, including several historically significant models.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Manufacturer's data sheets:
Explanation of how the chip works | [
{
"math_id": 0,
"text": "+"
},
{
"math_id": 1,
"text": "\\oplus"
}
]
| https://en.wikipedia.org/wiki?curid=13939428 |
139410 | Homological algebra | Branch of mathematics
Homological algebra is the branch of mathematics that studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert.
Homological algebra is the study of homological functors and the intricate algebraic structures that they entail; its development was closely intertwined with the emergence of category theory. A central concept is that of chain complexes, which can be studied through their homology and cohomology.
Homological algebra affords the means to extract information contained in these complexes and present it in the form of homological invariants of rings, modules, topological spaces, and other "tangible" mathematical objects. A spectral sequence is a powerful tool for this.
It has played an enormous role in algebraic topology. Its influence has gradually expanded and presently includes commutative algebra, algebraic geometry, algebraic number theory, representation theory, mathematical physics, operator algebras, complex analysis, and the theory of partial differential equations. "K"-theory is an independent discipline which draws upon methods of homological algebra, as does the noncommutative geometry of Alain Connes.
History.
Homological algebra began to be studied in its most basic form in the 1800s as a branch of topology and in the 1940s became an independent subject with the study of objects such as the ext functor and the tor functor, among others.
Chain complexes and homology.
The notion of chain complex is central in homological algebra. An abstract chain complex is a sequence formula_0 of abelian groups and group homomorphisms,
with the property that the composition of any two consecutive maps is zero:
formula_1
The elements of "C""n" are called "n"-chains and the homomorphisms "d""n" are called the boundary maps or differentials. The chain groups "C""n" may be endowed with extra structure; for example, they may be vector spaces or modules over a fixed ring "R". The differentials must preserve the extra structure if it exists; for example, they must be linear maps or homomorphisms of "R"-modules. For notational convenience, restrict attention to abelian groups (more correctly, to the category Ab of abelian groups); a celebrated theorem by Barry Mitchell implies the results will generalize to any abelian category. Every chain complex defines two further sequences of abelian groups, the cycles "Z""n" = Ker "d""n" and the boundaries "B""n" = Im "d""n"+1, where Ker "d" and Im "d" denote the kernel and the image of "d". Since the composition of two consecutive boundary maps is zero, these groups are embedded into each other as
formula_2
Subgroups of abelian groups are automatically normal; therefore we can define the "n"th homology group "H""n"("C") as the factor group of the "n"-cycles by the "n"-boundaries,
formula_3
A chain complex is called acyclic or an exact sequence if all its homology groups are zero.
Chain complexes arise in abundance in algebra and algebraic topology. For example, if "X" is a topological space then the singular chains "C""n"("X") are formal linear combinations of continuous maps from the standard "n"-simplex into "X"; if "K" is a simplicial complex then the simplicial chains "C""n"("K") are formal linear combinations of the "n"-simplices of "K"; if "A" = "F"/"R" is a presentation of an abelian group "A" by generators and relations, where "F" is a free abelian group spanned by the generators and "R" is the subgroup of relations, then letting "C"1("A") = "R", "C"0("A") = "F", and "C""n"("A") = 0 for all other "n" defines a sequence of abelian groups. In all these cases, there are natural differentials "d""n" making "C""n" into a chain complex, whose homology reflects the structure of the topological space "X", the simplicial complex "K", or the abelian group "A". In the case of topological spaces, we arrive at the notion of singular homology, which plays a fundamental role in investigating the properties of such spaces, for example, manifolds.
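Over a field such as the rationals, these homology groups reduce to dimension counts computable from matrix ranks. A minimal sketch (using NumPy; illustrative, not from the article) for the simplicial chain complex of a hollow triangle — topologically a circle — recovers the expected Betti numbers "b"0 = "b"1 = 1:

```python
import numpy as np

# hollow triangle: three vertices v0, v1, v2 and three edges 01, 02, 12;
# there are no 2-simplices, so d2 = 0, and d1 sends each edge to
# its boundary (head minus tail)
d1 = np.array([[-1, -1,  0],   # v0
               [ 1,  0, -1],   # v1
               [ 0,  1,  1]])  # v2   (columns: edges 01, 02, 12)

rank_d1 = np.linalg.matrix_rank(d1)
betti0 = 3 - rank_d1          # dim C0 - rank d1 (every 0-chain is a cycle)
betti1 = (3 - rank_d1) - 0    # dim ker d1 - rank d2, and d2 = 0
```

Over the integers the same computation needs the Smith normal form to detect torsion; the rank argument above only captures the free part.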
On a philosophical level, homological algebra teaches us that certain chain complexes associated with algebraic or geometric objects (topological spaces, simplicial complexes, "R"-modules) contain a lot of valuable algebraic information about them, with the homology being only the most readily available part. On a technical level, homological algebra provides the tools for manipulating complexes and extracting this information. Here are two general illustrations.
Standard tools.
Exact sequences.
In the context of group theory, a sequence
formula_6
of groups and group homomorphisms is called exact if the image of each homomorphism is equal to the kernel of the next:
formula_7
Note that the sequence of groups and homomorphisms may be either finite or infinite.
A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. More generally, the notion of an exact sequence makes sense in any category with kernels and cokernels.
Short.
The most common type of exact sequence is the short exact sequence. This is an exact sequence of the form
formula_8
where ƒ is a monomorphism and "g" is an epimorphism. In this case, "A" is a subobject of "B", and the corresponding quotient is isomorphic to "C":
formula_9
(where "f(A)" = im("f")).
A short exact sequence of abelian groups may also be written as an exact sequence with five terms:
formula_10
where 0 represents the zero object, such as the trivial group or a zero-dimensional vector space. The placement of the 0's forces ƒ to be a monomorphism and "g" to be an epimorphism (see below).
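As a concrete check (a small illustrative model, not from the article), the short exact sequence 0 → Z/2 → Z/4 → Z/2 → 0 with ƒ("x") = 2"x" and "g"("y") = "y" mod 2 satisfies all three defining conditions:

```python
Z2, Z4 = range(2), range(4)
f = {x: (2 * x) % 4 for x in Z2}   # f: Z/2 -> Z/4, multiplication by 2
g = {y: y % 2 for y in Z4}         # g: Z/4 -> Z/2, reduction mod 2

assert len(set(f.values())) == len(Z2)                  # f is a monomorphism
assert set(g.values()) == set(Z2)                       # g is an epimorphism
assert set(f.values()) == {y for y in Z4 if g[y] == 0}  # im f = ker g
```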
Long.
A long exact sequence is an exact sequence indexed by the natural numbers.
Five lemma.
Consider the following commutative diagram in any abelian category (such as the category of abelian groups or the category of vector spaces over a given field) or in the category of groups.
The five lemma states that, if the rows are exact, "m" and "p" are isomorphisms, "l" is an epimorphism, and "q" is a monomorphism, then "n" is also an isomorphism.
Snake lemma.
In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram:
where the rows are exact sequences and 0 is the zero object.
Then there is an exact sequence relating the kernels and cokernels of "a", "b", and "c":
formula_11
Furthermore, if the morphism "f" is a monomorphism, then so is the morphism ker "a" → ker "b", and if "g"' is an epimorphism, then so is coker "b" → coker "c".
Abelian categories.
In mathematics, an abelian category is a category in which morphisms and objects can be added and in which kernels and cokernels exist and have desirable properties. The motivating prototype example of an abelian category is the category of abelian groups, Ab. The theory originated in a tentative attempt to unify several cohomology theories by Alexander Grothendieck. Abelian categories are very "stable" categories, for example they are regular and they satisfy the snake lemma. The class of Abelian categories is closed under several categorical constructions, for example, the category of chain complexes of an Abelian category, or the category of functors from a small category to an Abelian category are Abelian as well. These stability properties make them inevitable in homological algebra and beyond; the theory has major applications in algebraic geometry, cohomology and pure category theory. Abelian categories are named after Niels Henrik Abel.
More concretely, a category is abelian if
Derived functor.
Suppose we are given a covariant left exact functor "F" : A → B between two abelian categories A and B. If 0 → "A" → "B" → "C" → 0 is a short exact sequence in A, then applying "F" yields the exact sequence 0 → "F"("A") → "F"("B") → "F"("C") and one could ask how to continue this sequence to the right to form a long exact sequence. Strictly speaking, this question is ill-posed, since there are always numerous different ways to continue a given exact sequence to the right. But it turns out that (if A is "nice" enough) there is one canonical way of doing so, given by the right derived functors of "F". For every "i"≥1, there is a functor "RiF": A → B, and the above sequence continues like so: 0 → "F"("A") → "F"("B") → "F"("C") → "R"1"F"("A") → "R"1"F"("B") → "R"1"F"("C") → "R"2"F"("A") → "R"2"F"("B") → ... . From this we see that "F" is an exact functor if and only if "R"1"F" = 0; so in a sense the right derived functors of "F" measure "how far" "F" is from being exact.
Ext functor.
Let "R" be a ring and let Mod"R" be the category of modules over "R". Let "B" be in Mod"R" and set "T"("B") = Hom"R"("A,B"), for fixed "A" in Mod"R". This is a left exact functor and thus has right derived functors "RnT". The Ext functor is defined by
formula_12
This can be calculated by taking any injective resolution
formula_13
and computing
formula_14
Then ("RnT")("B") is the cohomology of this complex. Note that Hom"R"("A,B") is excluded from the complex.
An alternative definition is given using the functor "G"("A")=Hom"R"("A,B"). For a fixed module "B", this is a contravariant left exact functor, and thus we also have right derived functors "RnG", and can define
formula_15
This can be calculated by choosing any projective resolution
formula_16
and proceeding dually by computing
formula_17
Then ("RnG")("A") is the cohomology of this complex. Again note that Hom"R"("A,B") is excluded.
These two constructions turn out to yield isomorphic results, and so both may be used to calculate the Ext functor.
Tor functor.
Suppose "R" is a ring; denote by "R"-Mod the category of left "R"-modules and by Mod-"R" the category of right "R"-modules (if "R" is commutative, the two categories coincide). Fix a module "B" in "R"-Mod. For "A" in Mod-"R", set "T"("A") = "A"⊗"R""B". Then "T" is a right exact functor from Mod-"R" to the category of abelian groups Ab (in the case when "R" is commutative, it is a right exact functor from Mod-"R" to Mod-"R") and its left derived functors "LnT" are defined. We set
formula_18
i.e., we take a projective resolution
formula_19
then remove the "A" term and tensor the projective resolution with "B" to get the complex
formula_20
(note that "A"⊗"R""B" does not appear and the last arrow is just the zero map) and take the homology of this complex.
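A small worked example (illustrative Python; the function name is ours): for "A" = Z/"m" and "B" = Z/"n" over "R" = Z, the free resolution 0 → Z → Z → Z/"m" → 0, with the first map multiplication by "m", tensored with Z/"n" gives the complex 0 → Z/"n" → Z/"n" → 0, so Tor1 is the kernel of multiplication by "m" on Z/"n" — a cyclic group of order gcd("m", "n").

```python
from math import gcd

def tor1_order(m, n):
    """Order of Tor_1(Z/m, Z/n): tensoring the free resolution
    0 -> Z --(*m)--> Z -> Z/m -> 0 with Z/n yields the complex
    0 -> Z/n --(*m)--> Z/n -> 0, and Tor_1 is the kernel of *m."""
    return len([x for x in range(n) if (m * x) % n == 0])

assert tor1_order(4, 6) == gcd(4, 6)   # Tor_1(Z/4, Z/6) is cyclic of order 2
assert tor1_order(3, 5) == 1           # coprime orders: Tor_1 vanishes
```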
Spectral sequence.
Fix an abelian category, such as a category of modules over a ring. A spectral sequence is a choice of a nonnegative integer "r"0 and a collection of three sequences:
A doubly graded spectral sequence has a tremendous amount of data to keep track of, but there is a common visualization technique which makes the structure of the spectral sequence clearer. We have three indices, "r", "p", and "q". For each "r", imagine that we have a sheet of graph paper. On this sheet, we will take "p" to be the horizontal direction and "q" to be the vertical direction. At each lattice point we have the object formula_21.
It is very common for "n" = "p" + "q" to be another natural index in the spectral sequence. "n" runs diagonally, northwest to southeast, across each sheet. In the homological case, the differentials have bidegree (−"r", "r" − 1), so they decrease "n" by one. In the cohomological case, "n" is increased by one. When "r" is zero, the differential moves objects one space down or up. This is similar to the differential on a chain complex. When "r" is one, the differential moves objects one space to the left or right. When "r" is two, the differential moves objects just like a knight's move in chess. For higher "r", the differential acts like a generalized knight's move.
Functoriality.
A continuous map of topological spaces gives rise to a homomorphism between their "n"th homology groups for all "n". This basic fact of algebraic topology finds a natural explanation through certain properties of chain complexes. Since it is very common to study
several topological spaces simultaneously, in homological algebra one is led to simultaneous consideration of multiple chain complexes.
A morphism between two chain complexes, formula_22 is a family of homomorphisms of abelian groups formula_23 that commute with the differentials, in the sense that formula_24 for all "n". A morphism of chain complexes induces a morphism formula_25 of their homology groups, consisting of the homomorphisms formula_26 for all "n". A morphism "F" is called a quasi-isomorphism if it induces an isomorphism on the "n"th homology for all "n".
Many constructions of chain complexes arising in algebra and geometry, including singular homology, have the following functoriality property: if two objects "X" and "Y" are connected by a map "f", then the associated chain complexes are connected by a morphism formula_27 and moreover, the composition formula_28 of maps "f": "X" → "Y" and "g": "Y" → "Z" induces the morphism formula_29 that coincides with the composition formula_30 It follows that the homology groups formula_5 are functorial as well, so that morphisms between algebraic or topological objects give rise to compatible maps between their homology.
The following definition arises from a typical situation in algebra and topology. A triple consisting of three chain complexes formula_31 and two morphisms between them, formula_32 is called an exact triple, or a short exact sequence of complexes, and written as
formula_33
if for any "n", the sequence
formula_34
is a short exact sequence of abelian groups. By definition, this means that "f""n" is an injection, "g""n" is a surjection, and Im "f""n" = Ker "g""n". One of the most basic theorems of homological algebra, sometimes known as the zig-zag lemma, states that, in this case, there is a long exact sequence in homology
formula_35
where the homology groups of "L", "M", and "N" cyclically follow each other, and "δ""n" are certain homomorphisms determined by "f" and "g", called the connecting homomorphisms. Topological manifestations of this theorem include the Mayer–Vietoris sequence and the long exact sequence for relative homology.
Foundational aspects.
Cohomology theories have been defined for many different objects such as topological spaces, sheaves, groups, rings, Lie algebras, and C*-algebras. The study of modern algebraic geometry would be almost unthinkable without sheaf cohomology.
Central to homological algebra is the notion of exact sequence; these can be used to perform actual calculations. A classical tool of homological algebra is that of derived functor; the most basic examples are functors Ext and Tor.
With a diverse set of applications in mind, it was natural to try to put the whole subject on a uniform basis. There were several attempts before the subject settled down. An approximate history can be stated as follows:
These move from computability to generality.
The computational sledgehammer "par excellence" is the spectral sequence; these are essential in the Cartan-Eilenberg and Tohoku approaches where they are needed, for instance, to compute the derived functors of a composition of two functors. Spectral sequences are less essential in the derived category approach, but still play a role whenever concrete computations are necessary.
There have been attempts at 'non-commutative' theories which extend first cohomology as "torsors" (important in Galois cohomology).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (C_\\bullet, d_\\bullet)"
},
{
"math_id": 1,
"text": " C_\\bullet: \\cdots \\longrightarrow \nC_{n+1} \\stackrel{d_{n+1}}{\\longrightarrow}\nC_n \\stackrel{d_n}{\\longrightarrow}\nC_{n-1} \\stackrel{d_{n-1}}{\\longrightarrow}\n\\cdots, \\quad d_n \\circ d_{n+1}=0."
},
{
"math_id": 2,
"text": " B_n \\subseteq Z_n \\subseteq C_n. "
},
{
"math_id": 3,
"text": " H_n(C) = Z_n/B_n = \\operatorname{Ker}\\, d_n/ \\operatorname{Im}\\, d_{n+1}. "
},
{
"math_id": 4,
"text": "C_\\bullet(X)"
},
{
"math_id": 5,
"text": "H_\\bullet(C)"
},
{
"math_id": 6,
"text": "G_0 \\;\\xrightarrow{f_1}\\; G_1 \\;\\xrightarrow{f_2}\\; G_2 \\;\\xrightarrow{f_3}\\; \\cdots \\;\\xrightarrow{f_n}\\; G_n"
},
{
"math_id": 7,
"text": "\\mathrm{im}(f_k) = \\mathrm{ker}(f_{k+1}).\\!"
},
{
"math_id": 8,
"text": "A \\;\\overset{f}{\\hookrightarrow}\\; B \\;\\overset{g}{\\twoheadrightarrow}\\; C"
},
{
"math_id": 9,
"text": "C \\cong B/f(A)."
},
{
"math_id": 10,
"text": "0 \\;\\xrightarrow{}\\; A \\;\\xrightarrow{f}\\; B \\;\\xrightarrow{g}\\; C \\;\\xrightarrow{}\\; 0"
},
{
"math_id": 11,
"text": "\\ker a \\to \\ker b \\to \\ker c \\overset{d}{\\to} \\operatorname{coker}a \\to \\operatorname{coker}b \\to \\operatorname{coker}c"
},
{
"math_id": 12,
"text": "\\operatorname{Ext}_R^n(A,B)=(R^nT)(B)."
},
{
"math_id": 13,
"text": "0 \\rightarrow B \\rightarrow I^0 \\rightarrow I^1 \\rightarrow \\cdots, "
},
{
"math_id": 14,
"text": "0 \\rightarrow \\operatorname{Hom}_R(A,I^0) \\rightarrow \\operatorname{Hom}_R(A,I^1) \\rightarrow \\cdots."
},
{
"math_id": 15,
"text": "\\operatorname{Ext}_R^n(A,B)=(R^nG)(A)."
},
{
"math_id": 16,
"text": "\\dots \\rightarrow P^1 \\rightarrow P^0 \\rightarrow A \\rightarrow 0, "
},
{
"math_id": 17,
"text": "0\\rightarrow\\operatorname{Hom}_R(P^0,B)\\rightarrow \\operatorname{Hom}_R(P^1,B) \\rightarrow \\cdots."
},
{
"math_id": 18,
"text": "\\mathrm{Tor}_n^R(A,B)=(L_nT)(A)"
},
{
"math_id": 19,
"text": "\\cdots\\rightarrow P_2 \\rightarrow P_1 \\rightarrow P_0 \\rightarrow A\\rightarrow 0"
},
{
"math_id": 20,
"text": "\\cdots \\rightarrow P_2\\otimes_R B \\rightarrow P_1\\otimes_R B \\rightarrow P_0\\otimes_R B \\rightarrow 0"
},
{
"math_id": 21,
"text": "E_r^{p,q}"
},
{
"math_id": 22,
"text": " F: C_\\bullet\\to D_\\bullet,"
},
{
"math_id": 23,
"text": "F_n: C_n \\to D_n"
},
{
"math_id": 24,
"text": "F_{n-1} \\circ d_n^C = d_n^D \\circ F_n"
},
{
"math_id": 25,
"text": " H_\\bullet(F)"
},
{
"math_id": 26,
"text": "H_n(F) : H_n(C) \\to H_n(D)"
},
{
"math_id": 27,
"text": "F=C(f) : C_\\bullet(X) \\to C_\\bullet(Y),"
},
{
"math_id": 28,
"text": "g\\circ f"
},
{
"math_id": 29,
"text": "C(g\\circ f): C_\\bullet(X) \\to C_\\bullet(Z)"
},
{
"math_id": 30,
"text": "C(g) \\circ C(f)."
},
{
"math_id": 31,
"text": "L_\\bullet, M_\\bullet, N_\\bullet"
},
{
"math_id": 32,
"text": "f:L_\\bullet\\to M_\\bullet, g: M_\\bullet\\to N_\\bullet,"
},
{
"math_id": 33,
"text": " 0 \\longrightarrow L_\\bullet \\overset{f}{\\longrightarrow} M_\\bullet \\overset{g}{\\longrightarrow} N_\\bullet \\longrightarrow 0,"
},
{
"math_id": 34,
"text": " 0 \\longrightarrow L_n \\overset{f_n}{\\longrightarrow} M_n \\overset{g_n}{\\longrightarrow}\nN_n \\longrightarrow 0 "
},
{
"math_id": 35,
"text": " \\cdots \\longrightarrow H_n(L) \\overset{H_n(f)}{\\longrightarrow} H_n(M) \\overset{H_n(g)}{\\longrightarrow} H_n(N) \\overset{\\delta_n}{\\longrightarrow} H_{n-1}(L) \\overset{H_{n-1}(f)}{\\longrightarrow} H_{n-1}(M) \\longrightarrow \\cdots, "
}
]
| https://en.wikipedia.org/wiki?curid=139410 |
1394160 | Thermal shock | Load caused by rapid temperature change
Thermal shock is a phenomenon characterized by a rapid change in temperature that results in a transient mechanical load on an object. The load is caused by the differential expansion of different parts of the object due to the temperature change. This differential expansion can be understood in terms of strain, rather than stress. When the strain exceeds the tensile strength of the material, it can cause cracks to form, and eventually lead to structural failure.
Methods to prevent thermal shock include:
Effect on materials.
Borosilicate glass is made to withstand thermal shock better than most other glass through a combination of reduced expansion coefficient, and greater strength, though fused quartz outperforms it in both these respects. Some glass-ceramic materials (mostly in the lithium aluminosilicate (LAS) system) include a controlled proportion of material with a negative expansion coefficient, so that the overall coefficient can be reduced to almost exactly zero over a reasonably wide range of temperatures.
Among the best thermomechanical materials are alumina, zirconia, tungsten alloys, silicon nitride, silicon carbide, boron carbide, and some stainless steels.
Reinforced carbon-carbon is extremely resistant to thermal shock, due to graphite's extremely high thermal conductivity and low expansion coefficient, the high strength of carbon fiber, and a reasonable ability to deflect cracks within the structure.
To measure thermal shock, the impulse excitation technique has proved to be a useful tool. It can be used to measure Young's modulus, shear modulus, Poisson's ratio, and damping coefficient in a non-destructive way. The same test-piece can be measured after different thermal shock cycles, and in this way the deterioration in physical properties can be mapped out.
Thermal shock resistance.
Thermal shock resistance measures can be used for material selection in applications subject to rapid temperature changes. A common measure of thermal shock resistance is the maximum temperature differential, formula_0, which can be sustained by the material for a given thickness.
Strength-controlled thermal shock resistance.
The maximum temperature jump, formula_0, sustainable by a material can be defined for strength-controlled models by:
formula_1
where formula_2 is the failure stress (which can be yield or fracture stress), formula_3 is the coefficient of thermal expansion, formula_4 is the Young's modulus, and formula_5 is a constant depending upon the part constraint, material properties, and thickness.
formula_6
where formula_7 is a system constraint constant dependent upon the Poisson's ratio, formula_8, and formula_9 is a non-dimensional parameter dependent upon the Biot number, formula_10.
formula_11
formula_9 may be approximated by:
formula_12
where formula_13 is the thickness, formula_14 is the heat transfer coefficient, and formula_15 is the thermal conductivity.
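As a minimal numerical sketch of the strength-controlled model above, the snippet below evaluates the maximum sustainable temperature jump from the relations B ΔT = σ_f/(αE), B = C/A and A = Bi/(1 + Bi). The property values are illustrative round numbers for an alumina-like ceramic, not authoritative data.

```python
# Strength-controlled thermal shock resistance sketch (assumed property values).
sigma_f = 300e6   # failure stress, Pa (assumed)
alpha   = 8e-6    # coefficient of thermal expansion, 1/K (assumed)
E       = 370e9   # Young's modulus, Pa (assumed)
nu      = 0.22    # Poisson's ratio (assumed)
Bi      = 10.0    # Biot number, h*H/k (assumed)

C = 1 - nu              # biaxial constraint case
A = Bi / (1 + Bi)       # non-dimensional heat-transfer parameter
B = C / A               # part-constraint constant

delta_T = sigma_f / (alpha * E * B)   # maximum sustainable temperature jump, K
print(delta_T)
```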
Perfect heat transfer.
If perfect heat transfer (formula_16) is assumed, the maximum temperature jump supported by the material is:
formula_17
where formula_18 for cold shock and formula_19 for hot shock.
A material index for material selection according to thermal shock resistance in the fracture stress derived perfect heat transfer case is therefore:
formula_20
Poor heat transfer.
For cases with poor heat transfer (formula_21), the maximum temperature differential supported by the material is:
formula_22
where formula_23 for cold shock and formula_24 for hot shock.
In the poor heat transfer case, a higher thermal conductivity is beneficial for thermal shock resistance. The material index for the poor heat transfer case is often taken as:
formula_25
According to both the perfect and poor heat transfer models, larger temperature differentials can be tolerated for hot shock than for cold shock.
Fracture toughness controlled thermal shock resistance.
In addition to thermal shock resistance defined by material fracture strength, models have also been defined within the fracture mechanics framework. Lu and Fleck produced criteria for thermal shock cracking based on fracture toughness controlled cracking. The models were based on thermal shock in ceramics (generally brittle materials). Assuming an infinite plate, and mode I cracking, the crack was predicted to start from the edge for cold shock, but the center of the plate for hot shock. Cases were divided into perfect, and poor heat transfer to further simplify the models.
Perfect heat transfer.
The sustainable temperature jump decreases with increasing convective heat transfer (and therefore larger Biot number). This is represented in the model shown below for perfect heat transfer (formula_16).
formula_26
where formula_27 is the mode I fracture toughness, formula_4 is the Young's modulus, formula_3 is the thermal expansion coefficient, and formula_13 is half the thickness of the plate. For cold shock, formula_28, while for hot shock, formula_29.
A material index for material selection in the fracture mechanics derived perfect heat transfer case is therefore:
formula_30
Poor heat transfer.
For cases with poor heat transfer, the Biot number is an important factor in the sustainable temperature jump.
formula_31
Critically, for poor heat transfer cases, materials with higher thermal conductivity, "k", have higher thermal shock resistance. As a result, a commonly chosen material index for thermal shock resistance in the poor heat transfer case is:
formula_32
Kingery thermal shock methods.
The temperature difference to initiate fracture has been described by William David Kingery to be:
formula_33
where formula_34 is a shape factor, formula_35 is the fracture stress, formula_15 is the thermal conductivity, formula_4 is the Young's modulus, formula_3 is the coefficient of thermal expansion, formula_14 is the heat transfer coefficient, and formula_36 is a fracture resistance parameter. The fracture resistance parameter is a common metric used to define the thermal shock tolerance of materials.
formula_37
The formulas were derived for ceramic materials, and make the assumptions of a homogeneous body with material properties independent of temperature, but can be well applied to other brittle materials.
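The Kingery criterion above can be sketched numerically. The snippet follows the stated formulas ΔT_c = S·R′/h and R′ = kσ*(1 − ν)/(Eα); all property values are illustrative assumptions for an alumina-like ceramic rather than measured data.

```python
# Kingery fracture-initiation sketch (assumed property values).
k     = 30.0     # thermal conductivity, W/(m*K) (assumed)
sigma = 300e6    # fracture stress, Pa (assumed)
nu    = 0.22     # Poisson's ratio (assumed)
E     = 370e9    # Young's modulus, Pa (assumed)
alpha = 8e-6     # coefficient of thermal expansion, 1/K (assumed)
S     = 1.0      # shape factor (assumed)
h     = 1000.0   # heat transfer coefficient, W/(m^2*K) (assumed)

R_prime = k * sigma * (1 - nu) / (E * alpha)  # fracture resistance parameter
delta_T_c = S * R_prime / h                   # temperature difference to initiate fracture
print(R_prime, delta_T_c)
```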
Testing.
Thermal shock testing exposes products to alternating low and high temperatures to accelerate failures caused by temperature cycles or thermal shocks during normal use. The transition between temperature extremes occurs very rapidly, greater than 15 °C per minute.
Equipment with single or multiple chambers is typically used to perform thermal shock testing. When using single chamber thermal shock equipment, the products remain in one chamber and the chamber air temperature is rapidly cooled and heated. Some equipment uses separate hot and cold chambers with an elevator mechanism that transports the products between two or more chambers.
Glass containers can be sensitive to sudden changes in temperature. One method of testing involves rapid movement from cold to hot water baths, and back.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta T"
},
{
"math_id": 1,
"text": "B\\Delta T = \\frac{\\sigma_f}{\\alpha E}"
},
{
"math_id": 2,
"text": "\\sigma_f"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "E"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "B = \\frac{C}{A}"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "\\nu"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "\\mathrm{Bi}"
},
{
"math_id": 11,
"text": "C = \\begin{cases}\n1 & \\text{axial stress} \\\\\n(1-\\nu) & \\text{biaxial constraint} \\\\\n(1-2\\nu) & \\text{triaxial constraint}\n\\end{cases}"
},
{
"math_id": 12,
"text": "A = \\frac{Hh/k}{1 + Hh/k} = \\frac{\\mathrm{Bi}}{1 + \\mathrm{Bi}}"
},
{
"math_id": 13,
"text": "H"
},
{
"math_id": 14,
"text": "h"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "\\mathrm{Bi} = \\infty"
},
{
"math_id": 17,
"text": "\\Delta T = A_1\\frac{\\sigma_f}{E\\alpha}"
},
{
"math_id": 18,
"text": "A_1 \\approx 1"
},
{
"math_id": 19,
"text": "A_1 \\approx 3.2"
},
{
"math_id": 20,
"text": "\\frac{\\sigma_f}{E\\alpha}"
},
{
"math_id": 21,
"text": "\\mathrm{Bi} < 1"
},
{
"math_id": 22,
"text": "\\Delta T = A_2\\frac{\\sigma_f}{E\\alpha}\\frac{1}{\\mathrm{Bi}} = A_2\\frac{\\sigma_f}{E\\alpha}\\frac{k}{hH}"
},
{
"math_id": 23,
"text": "A_2 \\approx 3.2"
},
{
"math_id": 24,
"text": "A_2 \\approx 6.5"
},
{
"math_id": 25,
"text": "\\frac{k\\sigma_f}{E\\alpha}"
},
{
"math_id": 26,
"text": "\\Delta T = A_3 \\frac{K_{Ic}}{E \\alpha \\sqrt {\\pi H}}"
},
{
"math_id": 27,
"text": "K_{Ic}"
},
{
"math_id": 28,
"text": "A_3 \\approx 4.5"
},
{
"math_id": 29,
"text": "A_4 \\approx 5.6"
},
{
"math_id": 30,
"text": "\\frac{K_{Ic}}{E\\alpha}"
},
{
"math_id": 31,
"text": "\\Delta T = A_4 \\frac{K_{Ic}}{E \\alpha \\sqrt{\\pi H}}\\frac{k}{hH}"
},
{
"math_id": 32,
"text": "\\frac{kK_{Ic}}{E\\alpha}"
},
{
"math_id": 33,
"text": "\\Delta T_c = S \\frac{k\\sigma^*(1-\\nu)}{E\\alpha} \\frac{1}{h} = \\frac{S}{hR'}"
},
{
"math_id": 34,
"text": "S"
},
{
"math_id": 35,
"text": "\\sigma^*"
},
{
"math_id": 36,
"text": "R'"
},
{
"math_id": 37,
"text": "R' = \\frac{k\\sigma^*(1-\\nu)}{E\\alpha}"
}
]
| https://en.wikipedia.org/wiki?curid=1394160 |
1394358 | Discrete Laplace operator | Analog of the continuous Laplace operator
In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices), the discrete Laplace operator is more commonly called the Laplacian matrix.
The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing, where it is known as the Laplace filter, and in machine learning for clustering and semi-supervised learning on neighborhood graphs.
Definitions.
Graph Laplacians.
There are various definitions of the "discrete Laplacian" for graphs, differing by sign and scale factor (sometimes one averages over the neighboring vertices, other times one just sums; this makes no difference for a regular graph). The traditional definition of the graph Laplacian, given below, corresponds to the negative continuous Laplacian on a domain with a free boundary.
Let formula_0 be a graph with vertices formula_1 and edges formula_2. Let formula_3 be a function of the vertices taking values in a ring. Then, the discrete Laplacian formula_4 acting on formula_5 is defined by
formula_6
where formula_7 is the graph distance between vertices w and v. Thus, this sum is over the nearest neighbors of the vertex "v". For a graph with a finite number of edges and vertices, this definition is identical to that of the Laplacian matrix. That is, formula_8 can be written as a column vector; and so formula_9 is the product of the column vector and the Laplacian matrix, while formula_10 is just the "v"'th entry of the product vector.
If the graph has weighted edges, that is, a weighting function formula_11 is given, then the definition can be generalized to
formula_12
where formula_13 is the weight value on the edge formula_14.
Closely related to the discrete Laplacian is the averaging operator:
formula_15
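The definitions above can be sketched directly in code. On a small path graph (chosen here purely for illustration), the discrete Laplacian at each vertex is the sum of differences to its neighbours, and the averaging operator replaces each value by the mean over neighbours:

```python
# Graph Laplacian and averaging operator on a tiny path graph 0 - 1 - 2.
adj = {
    0: [1],
    1: [0, 2],
    2: [1],
}
phi = {0: 1.0, 1: 4.0, 2: 9.0}

def laplacian(phi, adj):
    # (Delta phi)(v) = sum over neighbours w of [phi(v) - phi(w)],
    # i.e. the v-th entry of (D - A) phi for the Laplacian matrix.
    return {v: sum(phi[v] - phi[w] for w in adj[v]) for v in adj}

def averaging(phi, adj):
    # (M phi)(v) = average of phi over the neighbours of v.
    return {v: sum(phi[w] for w in adj[v]) / len(adj[v]) for v in adj}

print(laplacian(phi, adj))   # {0: -3.0, 1: -2.0, 2: 5.0}
print(averaging(phi, adj))   # {0: 4.0, 1: 5.0, 2: 4.0}
```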
Mesh Laplacians.
In addition to considering the connectivity of nodes and edges in a graph, mesh Laplace operators take into account the geometry of a surface (e.g. the angles at the nodes). For a two-dimensional manifold triangle mesh, the Laplace-Beltrami operator of a scalar function formula_16 at a vertex formula_17 can be approximated as
formula_18
where the sum is over all adjacent vertices formula_19 of formula_17, formula_20 and formula_21 are the two angles opposite of the edge formula_22, and formula_23 is the "vertex area" of formula_17; that is, e.g. one third of the summed areas of triangles incident to formula_17.
It is important to note that the sign of the discrete Laplace-Beltrami operator is conventionally opposite the sign of the ordinary Laplace operator.
The above cotangent formula can be derived using many different methods, among which are piecewise linear finite elements, finite volumes, and discrete exterior calculus.
To facilitate computation, the Laplacian is encoded in a matrix formula_24 such that formula_25. Let formula_26 be the (sparse) "cotangent matrix" with entries
formula_27
where formula_28 denotes the neighborhood of formula_29.
And let formula_30 be the diagonal "mass matrix" whose formula_17-th entry along the diagonal is the vertex area formula_31. Then formula_32 is the sought discretization of the Laplacian.
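The assembly of the cotangent matrix can be sketched on a tiny mesh. In the snippet below the two-triangle mesh and its vertex coordinates are illustrative choices; each triangle contributes half the cotangent of the angle at one corner to the entry for the opposite edge, and vertex areas are taken as one third of the incident triangle areas, as described above.

```python
import math

# Cotangent matrix C and lumped vertex areas for a two-triangle mesh (assumed geometry).
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (0.5, -0.8)]
tris  = [(0, 1, 2), (0, 3, 1)]

n = len(verts)
C = [[0.0] * n for _ in range(n)]
area = [0.0] * n

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def dot(a, b): return a[0] * b[0] + a[1] * b[1]
def cross(a, b): return a[0] * b[1] - a[1] * b[0]

for (i, j, k) in tris:
    pi, pj, pk = verts[i], verts[j], verts[k]
    tri_area = 0.5 * abs(cross(sub(pj, pi), sub(pk, pi)))
    for v in (i, j, k):
        area[v] += tri_area / 3.0     # one third of each incident triangle's area
    # each corner's angle opposes the edge formed by the other two vertices
    for (a, b, c) in ((i, j, k), (j, k, i), (k, i, j)):
        u, w = sub(verts[a], verts[c]), sub(verts[b], verts[c])
        cot = dot(u, w) / abs(cross(u, w))   # cotangent of the angle at c
        C[a][b] += 0.5 * cot
        C[b][a] += 0.5 * cot

for i in range(n):
    C[i][i] = -sum(C[i][j] for j in range(n) if j != i)

# Each row of C sums to zero, so constant functions lie in its kernel.
print([round(sum(row), 10) for row in C])
```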
A more general overview of mesh operators is given in the literature.
Finite differences.
Approximations of the Laplacian, obtained by the finite-difference method or by the finite-element method, can also be called discrete Laplacians. For example, the Laplacian in two dimensions can be approximated using the five-point stencil finite-difference method, resulting in
formula_33
where the grid size is "h" in both dimensions, so that the five-point stencil of a point ("x", "y") in the grid is
formula_34
If the grid size "h" = 1, the result is the negative discrete Laplacian on the graph, which is the square lattice grid. There are no constraints here on the values of the function "f"("x", "y") on the boundary of the lattice grid, thus this is the case of no source at the boundary, that is, a no-flux boundary condition (aka, insulation, or homogeneous Neumann boundary condition). The control of the state variable at the boundary, as
"f"("x", "y") given on the boundary of the grid (aka, Dirichlet boundary condition), is rarely used for graph Laplacians, but is common in other applications.
Multidimensional discrete Laplacians on rectangular cuboid regular grids have very special properties, e.g., they are Kronecker sums of one-dimensional discrete Laplacians, see Kronecker sum of discrete Laplacians, in which case all its eigenvalues and eigenvectors can be explicitly calculated.
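The five-point stencil above is exact for quadratic functions, which gives a quick sanity check: for f(x, y) = x² + y² the continuous Laplacian is exactly 4, and the stencil reproduces it up to rounding error.

```python
# Five-point stencil check on a quadratic test function.
def f(x, y):
    return x * x + y * y

def five_point_laplacian(f, x, y, h):
    # Discrete approximation of the continuous Laplacian at (x, y).
    return (f(x - h, y) + f(x + h, y) + f(x, y - h) + f(x, y + h) - 4 * f(x, y)) / h**2

val = five_point_laplacian(f, 1.3, -0.7, 0.1)
print(val)   # 4.0 up to rounding error
```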
Finite-element method.
In this approach, the domain is discretized into smaller elements, often triangles or tetrahedra, but other elements such as rectangles or cuboids are possible. The solution space is then approximated using so-called shape functions of a pre-defined degree. The differential equation containing the Laplace operator is then transformed into a variational formulation, and a system of equations is constructed (linear or eigenvalue problems). The resulting matrices are usually very sparse and can be solved with iterative methods.
Image processing.
The discrete Laplace operator is often used in image processing, e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives (see the coordinate expressions of the Laplace operator) and is calculated as the sum of differences over the nearest neighbours of the central pixel. Since derivative filters are often sensitive to noise in an image, the Laplace operator is often preceded by a smoothing filter (such as a Gaussian filter) in order to remove the noise before calculating the derivative. The smoothing filter and Laplace filter are often combined into a single filter.
Implementation via operator discretization.
For one-, two- and three-dimensional signals, the discrete Laplacian can be given as convolution with the following kernels:
1D filter: formula_35,
2D filter: formula_36.
formula_37 corresponds to the five-point stencil finite-difference formula seen previously. It is stable for very smoothly varying fields, but for equations with rapidly varying solutions a more stable and isotropic form of the Laplacian operator is required, such as the nine-point stencil, which includes the diagonals:
2D filter: formula_38,
3D filter: formula_39 using the seven-point stencil is given by:
first plane = formula_40; second plane = formula_41; third plane = formula_40.
and using 27-point stencil by:
first plane = formula_42; second plane = formula_43; third plane = formula_42.
"nD filter": For the element formula_44 of the kernel formula_45
formula_46
where "x_i" is the position (either −1, 0 or 1) of the element in the kernel in the "i"-th direction, and "s" is the number of directions for which "x_i" = 0.
Note that the "n"D version, which is based on the graph generalization of the Laplacian, assumes all neighbors to be at an equal distance, and hence leads to the following 2D filter with diagonals included, rather than the version above:
2D filter: formula_47
These kernels are deduced by using discrete differential quotients.
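The "n"-dimensional kernel rule above is easy to implement: the entry is −2"n" at the centre (all coordinates zero), 1 when exactly one coordinate is nonzero, and 0 otherwise. For "n" = 2 this reproduces the five-point 2D filter.

```python
from itertools import product

def laplace_kernel(n):
    # Build the nD discrete Laplacian kernel indexed by offsets in {-1, 0, 1}^n.
    kernel = {}
    for idx in product((-1, 0, 1), repeat=n):
        s = sum(1 for x in idx if x == 0)   # number of zero directions
        if s == n:
            kernel[idx] = -2 * n            # centre element
        elif s == n - 1:
            kernel[idx] = 1                 # face neighbours
        else:
            kernel[idx] = 0                 # diagonals excluded
    return kernel

k2 = laplace_kernel(2)
grid = [[k2[(x, y)] for y in (-1, 0, 1)] for x in (-1, 0, 1)]
print(grid)   # [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
```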
It can be shown that the following discrete approximation of the two-dimensional Laplacian operator as a convex combination of difference operators
formula_48
for γ ∈ [0, 1] is compatible with discrete scale-space properties, where specifically the value γ = 1/3 gives the best approximation of rotational symmetry. Regarding three-dimensional signals, it is shown that the Laplacian operator can be approximated by the two-parameter family of difference operators
formula_49
where
formula_50
formula_51
formula_52
Implementation via continuous reconstruction.
A discrete signal, such as an image, can be viewed as a discrete representation of a continuous function formula_53, where the coordinate vector formula_54 and the value domain is real formula_55.
The derivative operation is therefore directly applicable to the continuous function formula_56.
In particular any discrete image, with reasonable presumptions on the discretization process, e.g. assuming band-limited functions, or wavelet-expandable functions, etc., can be reconstructed by means of well-behaved interpolation functions underlying the reconstruction formulation,
formula_57
where formula_58 are discrete representations of formula_56 on grid formula_59 and formula_60 are interpolation functions specific to the grid formula_59. On a uniform grid, such as images, and for bandlimited functions, interpolation functions are shift invariant amounting to formula_61 with formula_62 being an appropriately dilated sinc function defined in formula_63-dimensions i.e. formula_64. Other approximations of formula_65 on uniform grids, are appropriately dilated Gaussian functions in formula_63-dimensions. Accordingly, the discrete Laplacian becomes a discrete version of the Laplacian of the continuous formula_53
formula_66
which in turn is a convolution with the Laplacian of the interpolation function on the uniform (image) grid formula_59.
An advantage of using Gaussians as interpolation functions is that they yield linear operators, including Laplacians, that are free from rotational artifacts of the coordinate frame in which formula_56 is represented via formula_67, in formula_63-dimensions, and are frequency aware by definition. A linear operator has not only a limited range in the formula_68 domain but also an effective range in the frequency domain (alternatively Gaussian scale space) which can be controlled explicitly via the variance of the Gaussian in a principled manner. The resulting filtering can be implemented by separable filters and decimation (signal processing)/pyramid (image processing) representations for further computational efficiency in formula_63-dimensions. In other words, the discrete Laplacian filter of any size can be generated conveniently as the sampled Laplacian of Gaussian with spatial size befitting the needs of a particular application as controlled by its variance. Monomials which are non-linear operators can also be implemented using a similar reconstruction and approximation approach provided that the signal is sufficiently over-sampled. Thereby, such non-linear operators e.g. Structure Tensor, and Generalized Structure Tensor which are used in pattern recognition for their total least-square optimality in orientation estimation, can be realized.
Spectrum.
The spectrum of the discrete Laplacian on an infinite grid is of key interest; since it is a self-adjoint operator, it has a real spectrum. For the convention formula_69 on formula_70, the spectrum lies within formula_71 (as the averaging operator has spectral values in formula_72). This may also be seen by applying the Fourier transform. Note that the discrete Laplacian on an infinite grid has purely absolutely continuous spectrum, and therefore, no eigenvalues or eigenfunctions.
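The continuous band of the infinite-grid spectrum can be glimpsed through finite cycles (an illustrative choice here, approximating the 1D lattice): the Fourier modes v_j = cos(2πmj/n) are eigenvectors of the graph Laplacian on the n-cycle with eigenvalues 2 − 2cos(2πm/n), which fill out [0, 4] as n grows.

```python
import math

# Eigenvector check for the graph Laplacian on an n-cycle.
n, m = 12, 3
theta = 2 * math.pi * m / n
v = [math.cos(theta * j) for j in range(n)]
lam = 2 - 2 * math.cos(theta)        # the corresponding eigenvalue

# (L v)_j = 2 v_j - v_{j-1} - v_{j+1} with periodic indexing.
Lv = [2 * v[j] - v[(j - 1) % n] - v[(j + 1) % n] for j in range(n)]
err = max(abs(Lv[j] - lam * v[j]) for j in range(n))
print(err)   # ~ 0
```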
Theorems.
If the graph is an infinite square lattice grid, then this definition of the Laplacian can be shown to correspond to the continuous Laplacian in the limit of an infinitely fine grid. Thus, for example, on a one-dimensional grid we have
formula_73
This definition of the Laplacian is commonly used in numerical analysis and in image processing. In image processing, it is considered to be a type of digital filter, more specifically an edge filter, called the Laplace filter.
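The one-dimensional limit above can be checked numerically: the centred second difference [f(x+h) − 2f(x) + f(x−h)]/h² approaches f''(x) as h shrinks. The test function sin(x) is an arbitrary illustrative choice.

```python
import math

def second_difference(f, x, h):
    # Discrete 1D Laplacian (centred second difference) at x with spacing h.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

approx = second_difference(math.sin, 0.5, 1e-4)
exact = -math.sin(0.5)   # d^2/dx^2 sin(x) = -sin(x)
print(approx, exact)
```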
Discrete heat equation.
Suppose formula_74 describes a temperature distribution across a graph, where formula_75 is the temperature at vertex formula_76. According to Newton's law of cooling, the heat transferred from node formula_76 to node formula_77 is proportional to formula_78 if nodes formula_76 and formula_77 are connected (if they are not connected, no heat is transferred). Then, for thermal conductivity formula_79,
formula_80
In matrix-vector notation,
formula_81
which gives
formula_82
Notice that this equation takes the same form as the heat equation, where the matrix −"L" is replacing the Laplacian operator formula_83; hence, the "graph Laplacian".
To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write formula_74 as a linear combination of eigenvectors formula_84 of "L" (so that formula_85) with time-dependent coefficients, formula_86
Plugging into the original expression (because "L" is a symmetric matrix, its unit-norm eigenvectors formula_84 are orthogonal):
formula_87
whose solution is
formula_88
As shown before, the eigenvalues formula_89 of "L" are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given formula_89 and the initial condition formula_90, the solution at any time "t" can be found.
To find formula_90 for each formula_76 in terms of the overall initial condition formula_91, simply project formula_91 onto the unit-norm eigenvectors formula_84;
formula_92.
This approach has been applied to quantitative heat transfer modelling on unstructured grids.
In the case of undirected graphs, this works because formula_93 is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. So the projection onto the eigenvectors of formula_93 is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other.
Equilibrium behavior.
To understand formula_94, the only terms formula_95 that remain are those where formula_96, since
formula_97
In other words, the equilibrium state of the system is determined completely by the kernel of formula_93.
Since by definition, formula_98, the vector formula_99 of all ones is in the kernel. If there are formula_79 disjoint connected components in the graph, then this vector of all ones can be split into the sum of formula_79 independent formula_100 eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere.
The consequence of this is that for a given initial condition formula_101 for a graph with formula_102 vertices
formula_103
where
formula_104
For each element formula_105 of formula_74, i.e. for each vertex formula_77 in the graph, it can be rewritten as
formula_106.
In other words, at steady state, the value of formula_74 converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other.
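The convergence to the average can be sketched without an eigendecomposition by integrating dφ/dt = −Lφ with small explicit Euler steps (the path graph, step size and step count below are illustrative choices): every vertex value approaches the mean of the initial condition.

```python
# Explicit-Euler integration of the discrete heat equation on a path graph 0 - 1 - 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
phi = [9.0, 0.0, 0.0]          # initial condition; mean is 3.0
dt, steps = 0.1, 2000          # dt below 2/lambda_max keeps the scheme stable

for _ in range(steps):
    lap = [sum(phi[v] - phi[w] for w in adj[v]) for v in adj]   # (L phi)(v)
    phi = [phi[v] - dt * lap[v] for v in adj]

print([round(x, 6) for x in phi])   # every entry near the mean, 3.0
```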
Example of the operator on a grid.
This section shows an example of a function formula_74 diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three initial points are specified to have a positive value, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid.
The complete Matlab source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvectors of the Laplacian matrix, and simulating the exponential decay of these projected initial conditions.
N = 20; % The number of pixels along a dimension of the image
A = zeros(N, N); % The image
Adj = zeros(N * N, N * N); % The adjacency matrix
% Use 8 neighbors, and fill in the adjacency matrix
dx = [- 1, 0, 1, - 1, 1, - 1, 0, 1];
dy = [- 1, - 1, - 1, 0, 0, 1, 1, 1];
for x = 1:N
for y = 1:N
index = (x - 1) * N + y;
for ne = 1:length(dx)
newx = x + dx(ne);
newy = y + dy(ne);
if newx > 0 && newx <= N && newy > 0 && newy <= N
index2 = (newx - 1) * N + newy;
Adj(index, index2) = 1;
end
end
end
end
% BELOW IS THE KEY CODE THAT COMPUTES THE SOLUTION TO THE DIFFERENTIAL EQUATION
Deg = diag(sum(Adj, 2)); % Compute the degree matrix
L = Deg - Adj; % Compute the laplacian matrix in terms of the degree and adjacency matrices
[V, D] = eig(L); % Compute the eigenvalues/vectors of the laplacian matrix
D = diag(D);
% Initial condition (place a few large positive values around and
% make everything else zero)
C0 = zeros(N, N);
C0(2:5, 2:5) = 5;
C0(10:15, 10:15) = 10;
C0(2:5, 8:13) = 7;
C0 = C0(:);
C0V = V'*C0; % Transform the initial condition into the coordinate system
% of the eigenvectors
for t = 0:0.05:5
% Loop through times and decay each initial component
Phi = C0V .* exp(- D * t); % Exponential decay for each component
Phi = V * Phi; % Transform from eigenvector coordinate system to original coordinate system
Phi = reshape(Phi, N, N);
% Display the results and write to GIF file
imagesc(Phi);
caxis([0, 10]);
title(sprintf('Diffusion t = %.3f', t));
frame = getframe(1);
im = frame2im(frame);
[imind, cm] = rgb2ind(im, 256);
if t == 0
imwrite(imind, cm, 'out.gif', 'gif', 'Loopcount', inf, 'DelayTime', 0.1);
else
imwrite(imind, cm, 'out.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.1);
end
end
Discrete Schrödinger operator.
Let formula_107 be a potential function defined on the graph. Note that "P" can be considered to be a multiplicative operator acting diagonally on formula_5
formula_108
Then formula_109 is the discrete Schrödinger operator, an analog of the continuous Schrödinger operator.
If the number of edges meeting at a vertex is uniformly bounded, and the potential is bounded, then "H" is bounded and self-adjoint.
The spectral properties of this Hamiltonian can be studied with Stone's theorem; this is a consequence of the duality between posets and Boolean algebras.
On regular lattices, the operator typically has both traveling-wave as well as Anderson localization solutions, depending on whether the potential is periodic or random.
The Green's function of the discrete Schrödinger operator is given in the resolvent formalism by
formula_110
where formula_111 is understood to be the Kronecker delta function on the graph: formula_112; that is, it equals "1" if "v"="w" and "0" otherwise.
For fixed formula_113 and formula_114 a complex number, the Green's function considered to be a function of "v" is the unique solution to
formula_115
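The resolvent definition above can be sketched on a small graph: build H = Δ + P, solve (H − λ)G = δ_w, and verify the defining equation entrywise. The path graph, the potential P, and λ below are all illustrative assumptions (λ is chosen off the spectrum so the resolvent exists).

```python
# Green's function of a discrete Schroedinger operator on a 3-vertex path graph.
adj = {0: [1], 1: [0, 2], 2: [1]}
P = [0.5, -0.2, 1.0]        # an assumed, arbitrary potential
lam = -2.0                  # a spectral parameter below the spectrum (assumed)

n = len(adj)
H = [[0.0] * n for _ in range(n)]
for v in adj:
    H[v][v] = len(adj[v]) + P[v]   # Delta contributes deg(v) on the diagonal
    for w in adj[v]:
        H[v][w] = -1.0             # and -1 for each edge

def solve(A, b):
    # Plain Gauss-Jordan elimination with partial pivoting.
    A = [row[:] + [bv] for row, bv in zip(A, b)]
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * p for a, p in zip(A[r], A[col])]
    return [A[r][m] / A[r][r] for r in range(m)]

w = 1                                # source vertex
delta = [1.0 if v == w else 0.0 for v in range(n)]
M = [[H[i][j] - (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
G = solve(M, delta)                  # G(v, w; lambda) as a function of v

residual = [sum(M[i][j] * G[j] for j in range(n)) - delta[i] for i in range(n)]
print(max(abs(r) for r in residual))  # ~ 0
```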
ADE classification.
Certain equations involving the discrete Laplacian only have solutions on the simply-laced Dynkin diagrams (all edges multiplicity 1), and are an example of the ADE classification. Specifically, the only positive solutions to the homogeneous equation:
formula_116
in words,
"Twice any label is the sum of the labels on adjacent vertices,"
are on the extended (affine) ADE Dynkin diagrams, of which there are 2 infinite families (A and D) and 3 exceptions (E). The resulting numbering is unique up to scale, and if the smallest value is set at 1, the other numbers are integers, ranging up to 6.
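The homogeneous labelling condition can be checked on the simplest extended diagram, a cycle (the affine A family): the all-ones labelling satisfies "twice any label is the sum of the labels on adjacent vertices" at every vertex.

```python
# Affine ADE labelling check on a 6-cycle (extended Dynkin diagram of type A).
n = 6
neighbours = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
phi = {v: 1 for v in range(n)}   # the all-ones labelling

ok = all(2 * phi[v] == sum(phi[w] for w in neighbours[v]) for v in range(n))
print(ok)   # True
```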
The ordinary ADE graphs are the only graphs that admit a positive labeling with the following property:
Twice any label minus two is the sum of the labels on adjacent vertices.
In terms of the Laplacian, these labelings are the positive solutions to the inhomogeneous equation:
formula_117
The resulting numbering is unique (scale is specified by the "2"), and consists of integers; for E8 they range from 58 to 270, and have been observed as early as 1968.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "G = (V,E)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "\\phi\\colon V\\to R"
},
{
"math_id": 4,
"text": "\\Delta"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "(\\Delta \\phi)(v)=\\sum_{w:\\,d(w,v)=1}\\left[\\phi(v)-\\phi(w)\\right]"
},
{
"math_id": 7,
"text": "d(w,v)"
},
{
"math_id": 8,
"text": " \\phi"
},
{
"math_id": 9,
"text": "\\Delta\\phi"
},
{
"math_id": 10,
"text": "(\\Delta \\phi)(v)"
},
{
"math_id": 11,
"text": "\\gamma\\colon E\\to R"
},
{
"math_id": 12,
"text": "(\\Delta_\\gamma\\phi)(v)=\\sum_{w:\\,d(w,v)=1}\\gamma_{wv}\\left[\\phi(v)-\\phi(w)\\right]"
},
{
"math_id": 13,
"text": "\\gamma_{wv}"
},
{
"math_id": 14,
"text": "wv\\in E"
},
{
"math_id": 15,
"text": "(M\\phi)(v)=\\frac{1}{\\deg v}\\sum_{w:\\,d(w,v)=1}\\phi(w)."
},
{
"math_id": 16,
"text": "u"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "\n(\\Delta u)_{i} \\equiv \\frac{1}{2A_i} \\sum_{j} (\\cot \\alpha_{ij} + \\cot \\beta_{ij}) (u_j - u_i),\n"
},
{
"math_id": 19,
"text": "j"
},
{
"math_id": 20,
"text": "\\alpha_{ij}"
},
{
"math_id": 21,
"text": "\\beta_{ij}"
},
{
"math_id": 22,
"text": "ij"
},
{
"math_id": 23,
"text": "A_i"
},
{
"math_id": 24,
"text": "L\\in\\mathbb{R}^{|V|\\times|V|}"
},
{
"math_id": 25,
"text": " Lu = (\\Delta u)_i "
},
{
"math_id": 26,
"text": "C"
},
{
"math_id": 27,
"text": "\nC_{ij} = \n\\begin{cases} \n \\frac{1}{2}(\\cot \\alpha_{ij} + \\cot \\beta_{ij}) & ij \\text{ is an edge, that is } j \\in N(i), \\\\\n -\\sum\\limits_{k \\in N(i)}C_{ik} & i = j, \\\\\n 0 & \\text{otherwise}\n \n\\end{cases}\n"
},
{
"math_id": 28,
"text": "N(i) "
},
{
"math_id": 29,
"text": " i"
},
{
"math_id": 30,
"text": " M "
},
{
"math_id": 31,
"text": " A_i "
},
{
"math_id": 32,
"text": " L=M^{-1}C "
},
{
"math_id": 33,
"text": " \\Delta f(x,y) \\approx \\frac{f(x-h,y) + f(x+h,y) + f(x,y-h) + f(x,y+h) - 4f(x,y)}{h^2}, "
},
{
"math_id": 34,
"text": "\\{(x-h, y), (x, y), (x+h, y), (x, y-h), (x, y+h)\\}."
},
{
"math_id": 35,
"text": "\\vec{D}^2_x=\\begin{bmatrix}1 & -2 & 1\\end{bmatrix}"
},
{
"math_id": 36,
"text": "\\mathbf{D}^2_{xy}=\\begin{bmatrix}0 & 1 & 0\\\\1 & -4 & 1\\\\0 & 1 & 0\\end{bmatrix}"
},
{
"math_id": 37,
"text": "\\mathbf{D}^2_{xy}"
},
{
"math_id": 38,
"text": "\\mathbf{D}^2_{xy}=\\begin{bmatrix}0.25 & 0.5 & 0.25\\\\0.5 & -3 & 0.5\\\\0.25 & 0.5 & 0.25\\end{bmatrix}"
},
{
"math_id": 39,
"text": "\\mathbf{D}^2_{xyz}"
},
{
"math_id": 40,
"text": "\\begin{bmatrix}0 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 0\\end{bmatrix}"
},
{
"math_id": 41,
"text": "\\begin{bmatrix}0 & 1 & 0\\\\1 & -6 & 1\\\\0 & 1 & 0\\end{bmatrix}"
},
{
"math_id": 42,
"text": "\\frac{1}{26}\\begin{bmatrix}2 & 3 & 2\\\\3 & 6 & 3\\\\2 & 3 & 2\\end{bmatrix}"
},
{
"math_id": 43,
"text": "\\frac{1}{26}\\begin{bmatrix}3 & 6 & 3\\\\6 & -88 & 6\\\\3 & 6 & 3\\end{bmatrix}"
},
{
"math_id": 44,
"text": "a_{x_1, x_2, \\dots , x_n}"
},
{
"math_id": 45,
"text": "\\mathbf{D}^2_{x_1, x_2, \\dots , x_n},"
},
{
"math_id": 46,
"text": "a_{x_1, x_2, \\dots , x_n} = \\left\\{\\begin{array}{ll}\n-2n & \\text{if } s = n, \\\\\n1 & \\text{if } s = n - 1, \\\\\n0 & \\text{otherwise,}\n\\end{array}\\right."
},
{
"math_id": 47,
"text": "\\mathbf{D}^2_{xy}=\\begin{bmatrix}1 & 1 & 1\\\\1 & -8 & 1\\\\1 & 1 & 1\\end{bmatrix}."
},
{
"math_id": 48,
"text": "\\nabla^2_{\\gamma}= (1 - \\gamma) \\nabla^2_{5} + \\gamma \\nabla ^2_{\\times} \n = (1 - \\gamma) \\begin{bmatrix}0 & 1 & 0\\\\1 & -4 & 1\\\\0 & 1 & 0\\end{bmatrix}\n + \\gamma \\begin{bmatrix}1/2 & 0 & 1/2\\\\0 & -2 & 0\\\\1/2 & 0 & 1/2\\end{bmatrix}\n"
},
{
"math_id": 49,
"text": "\n\\nabla^2_{\\gamma_1,\\gamma_2} \n = (1 - \\gamma_1 - \\gamma_2) \\, \\nabla_7^2 + \\gamma_1 \\, \\nabla_{+^3}^2 + \\gamma_2 \\, \\nabla_{\\times^3}^2 ),\n"
},
{
"math_id": 50,
"text": "\n (\\nabla_7^2 f)_{0, 0, 0}\n =\n f_{-1, 0, 0} + f_{+1, 0, 0} + f_{0, -1, 0} + f_{0, +1, 0} + f_{0, 0, -1} + f_{0, 0, +1} - 6 f_{0, 0, 0},\n"
},
{
"math_id": 51,
"text": "\n (\\nabla_{+^3}^2 f)_{0, 0, 0}\n = \\frac{1}{4}\n (f_{-1, -1, 0} + f_{-1, +1, 0} + f_{+1, -1, 0} + f_{+1, +1, 0} \n + f_{-1, 0, -1} + f_{-1, 0, +1} + f_{+1, 0, -1} + f_{+1, 0, +1} \n + f_{0, -1, -1} + f_{0, -1, +1} + f_{0, +1, -1} + f_{0, +1, +1}\n - 12 f_{0, 0, 0}),\n"
},
{
"math_id": 52,
"text": "\n (\\nabla_{\\times^3}^2 f)_{0, 0, 0}\n = \\frac{1}{4}\n (f_{-1, -1, -1} + f_{-1, -1, +1} + f_{-1, +1, -1} + f_{-1, +1, +1}\n + f_{+1, -1, -1} + f_{+1, -1, +1} + f_{+1, +1, -1} + f_{+1, +1, +1}\n - 8 f_{0, 0, 0}).\n"
},
{
"math_id": 53,
"text": "f(\\bar r)"
},
{
"math_id": 54,
"text": "\\bar r \\in R^n "
},
{
"math_id": 55,
"text": "f\\in R"
},
{
"math_id": 56,
"text": "f"
},
{
"math_id": 57,
"text": "\nf(\\bar r)=\\sum_{k\\in K}f_k \\mu_k(\\bar r) \n"
},
{
"math_id": 58,
"text": "f_k\\in R"
},
{
"math_id": 59,
"text": "K"
},
{
"math_id": 60,
"text": "\\mu_k "
},
{
"math_id": 61,
"text": "\\mu_k(\\bar r)= \\mu(\\bar r-\\bar r_k) "
},
{
"math_id": 62,
"text": "\\mu "
},
{
"math_id": 63,
"text": "n"
},
{
"math_id": 64,
"text": "\\bar r=(x_1,x_2...x_n)^T"
},
{
"math_id": 65,
"text": "\\mu"
},
{
"math_id": 66,
"text": "\n\\nabla^2 f(\\bar r_k)= \\sum_{k'\\in K}f_{k'} (\\nabla^2 \\mu(\\bar r-\\bar r_{k'}))|_{\\bar r= \\bar r_k}\n"
},
{
"math_id": 67,
"text": "f_k"
},
{
"math_id": 68,
"text": "\\bar r"
},
{
"math_id": 69,
"text": "\\Delta = I - M"
},
{
"math_id": 70,
"text": "Z"
},
{
"math_id": 71,
"text": "[0,2]"
},
{
"math_id": 72,
"text": "[-1,1]"
},
{
"math_id": 73,
"text": "\\frac{\\partial^2F}{\\partial x^2} = \n\\lim_{\\epsilon \\rightarrow 0} \n \\frac{[F(x+\\epsilon)-F(x)]-[F(x)-F(x-\\epsilon)]}{\\epsilon^2}.\n"
},
{
"math_id": 74,
"text": "\\phi"
},
{
"math_id": 75,
"text": "\\phi_i"
},
{
"math_id": 76,
"text": "i"
},
{
"math_id": 77,
"text": "j"
},
{
"math_id": 78,
"text": "\\phi_i - \\phi_j"
},
{
"math_id": 79,
"text": "k"
},
{
"math_id": 80,
"text": "\\begin{align}\n \\frac{d \\phi_i}{d t}\n &= -k \\sum_j A_{ij} \\left(\\phi_i - \\phi_j \\right) \\\\\n &= -k \\left(\\phi_i \\sum_j A_{ij} - \\sum_j A_{ij} \\phi_j \\right) \\\\\n &= -k \\left(\\phi_i \\ \\deg(v_i) - \\sum_j A_{ij} \\phi_j \\right) \\\\\n &= -k \\sum_j \\left(\\delta_{ij} \\ \\deg(v_i) - A_{ij} \\right) \\phi_j \\\\\n &= -k \\sum_j \\left(L_{ij} \\right) \\phi_j.\n\\end{align}"
},
{
"math_id": 81,
"text": "\\begin{align}\n \\frac{d\\phi}{dt} &= -k(D - A)\\phi \\\\\n &= -kL \\phi,\n\\end{align}"
},
{
"math_id": 82,
"text": "\\frac{d \\phi}{d t} + kL\\phi = 0."
},
{
"math_id": 83,
"text": "\\nabla^2"
},
{
"math_id": 84,
"text": "\\mathbf{v}_i"
},
{
"math_id": 85,
"text": "L\\mathbf{v}_i = \\lambda_i \\mathbf{v}_i"
},
{
"math_id": 86,
"text": "\\phi(t) = \\sum_i c_i(t) \\mathbf{v}_i."
},
{
"math_id": 87,
"text": "\\begin{align}\n 0 ={} &\\frac{d\\left(\\sum_i c_i(t) \\mathbf{v}_i\\right)}{dt} + kL\\left(\\sum_i c_i(t) \\mathbf{v}_i\\right) \\\\\n {}={} &\\sum_i \\left[\\frac{dc_i(t)}{dt} \\mathbf{v}_i + k c_i(t) L \\mathbf{v}_i\\right] \\\\\n {}={} &\\sum_i \\left[\\frac{dc_i(t)}{dt} \\mathbf{v}_i + k c_i(t) \\lambda_i \\mathbf{v}_i\\right] \\\\\n \\Rightarrow 0 ={} &\\frac{dc_i(t)}{dt} + k \\lambda_i c_i(t), \\\\\n\\end{align}"
},
{
"math_id": 88,
"text": "c_i(t) = c_i(0) e^{-k \\lambda_i t}."
},
{
"math_id": 89,
"text": "\\lambda_i"
},
{
"math_id": 90,
"text": "c_i(0)"
},
{
"math_id": 91,
"text": "\\phi(0)"
},
{
"math_id": 92,
"text": "c_i(0) = \\left\\langle \\phi(0), \\mathbf{v}_i \\right\\rangle "
},
{
"math_id": 93,
"text": "L"
},
{
"math_id": 94,
"text": "\\lim_{t \\to \\infty}\\phi(t)"
},
{
"math_id": 95,
"text": " c_i(t) = c_i(0) e^{-k \\lambda_i t}"
},
{
"math_id": 96,
"text": "\\lambda_i = 0"
},
{
"math_id": 97,
"text": "\\lim_{t\\to\\infty} e^{-k \\lambda_i t} = \\begin{cases}\n 0, & \\text{if} & \\lambda_i > 0 \\\\\n 1, & \\text{if} & \\lambda_i = 0\n\\end{cases}"
},
{
"math_id": 98,
"text": "\\sum_{j}L_{ij} = 0"
},
{
"math_id": 99,
"text": "\\mathbf{v}^1"
},
{
"math_id": 100,
"text": "\\lambda = 0"
},
{
"math_id": 101,
"text": "c(0)"
},
{
"math_id": 102,
"text": "N"
},
{
"math_id": 103,
"text": "\\lim_{t\\to\\infty}\\phi(t) = \\left\\langle c(0), \\mathbf{v^1} \\right\\rangle \\mathbf{v^1}"
},
{
"math_id": 104,
"text": "\\mathbf{v^1} = \\frac{1}{\\sqrt{N}} [1, 1, \\ldots, 1] "
},
{
"math_id": 105,
"text": "\\phi_j"
},
{
"math_id": 106,
"text": "\\lim_{t\\to\\infty}\\phi_j(t) = \\frac{1}{N} \\sum_{i = 1}^N c_i(0) "
},
{
"math_id": 107,
"text": "P\\colon V\\rightarrow R"
},
{
"math_id": 108,
"text": "(P\\phi)(v)=P(v)\\phi(v)."
},
{
"math_id": 109,
"text": "H=\\Delta+P"
},
{
"math_id": 110,
"text": "G(v,w;\\lambda)=\\left\\langle\\delta_v\\left| \\frac{1}{H-\\lambda}\\right| \\delta_w\\right\\rangle "
},
{
"math_id": 111,
"text": "\\delta_w"
},
{
"math_id": 112,
"text": "\\delta_w(v)=\\delta_{wv}"
},
{
"math_id": 113,
"text": "w\\in V"
},
{
"math_id": 114,
"text": "\\lambda"
},
{
"math_id": 115,
"text": "(H-\\lambda)G(v,w;\\lambda)=\\delta_w(v)."
},
{
"math_id": 116,
"text": "\\Delta \\phi = \\phi,"
},
{
"math_id": 117,
"text": "\\Delta \\phi = \\phi - 2."
}
]
| https://en.wikipedia.org/wiki?curid=1394358 |
1394385 | Feynman slash notation | Notation for contractions with gamma matrices
In the study of Dirac fields in quantum field theory, Richard Feynman invented the convenient Feynman slash notation (less commonly known as the Dirac slash notation). If "A" is a covariant vector (i.e., a 1-form),
formula_0
where "γ" are the gamma matrices. Using the Einstein summation notation, the expression is simply
formula_1.
Identities.
Using the anticommutators of the gamma matrices, one can show that for any formula_2 and formula_3,
formula_4
where formula_5 is the identity matrix in four dimensions.
In particular,
formula_6
Further identities can be read off directly from the gamma matrix identities by replacing the metric tensor with inner products. For example,
formula_7
where:
With four-momentum.
This section uses the (+ − − −) metric signature. Often, when using the Dirac equation and solving for cross sections, one finds the slash notation used on four-momentum: using the Dirac basis for the gamma matrices,
formula_11
as well as the definition of contravariant four-momentum in natural units,
formula_12
we see explicitly that
formula_13
Similar results hold in other bases, such as the Weyl basis.
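The identities above are easy to check numerically. The following sketch (our own illustration, using NumPy) builds the Dirac-basis gamma matrices quoted above, with the (+ − − −) metric, and verifies that the square of a slashed vector equals its Lorentz square times the 4×4 identity, along with the anticommutator identity; the test four-vectors are arbitrary values:

```python
import numpy as np

# Dirac-basis gamma matrices (4x4, complex), metric signature (+ - - -)
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric

def slash(a):
    """Feynman slash of a contravariant four-vector a: gamma^mu a_mu."""
    a_lower = eta @ a                        # lower the index: a_mu = eta_{mu nu} a^nu
    return sum(g * c for g, c in zip(gammas, a_lower))

a = np.array([1.0, 0.3, -0.7, 0.5])
b = np.array([0.2, 1.1, 0.4, -0.9])
a2 = a @ eta @ a                             # Lorentz square a^mu a_mu
ab = a @ eta @ b                             # Lorentz product a . b

# a-slash a-slash = a^2 * I_4
assert np.allclose(slash(a) @ slash(a), a2 * np.eye(4))
# {a-slash, b-slash} = 2 (a . b) * I_4
assert np.allclose(slash(a) @ slash(b) + slash(b) @ slash(a), 2 * ab * np.eye(4))
print("slash identities verified")
```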
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "{A\\!\\!\\!/} \\ \\stackrel{\\mathrm{def}}{=}\\ \\gamma^0 A_0 + \\gamma^1 A_1 + \\gamma^2 A_2 + \\gamma^3 A_3 "
},
{
"math_id": 1,
"text": "{A\\!\\!\\!/} \\ \\stackrel{\\mathrm{def}}{=}\\ \\gamma^\\mu A_\\mu"
},
{
"math_id": 2,
"text": "a_\\mu"
},
{
"math_id": 3,
"text": "b_\\mu"
},
{
"math_id": 4,
"text": "\\begin{align}\n {a\\!\\!\\!/}{a\\!\\!\\!/} = a^\\mu a_\\mu \\cdot I_4 = a^2 \\cdot I_4 \\\\\n {a\\!\\!\\!/}{b\\!\\!\\!/} + {b\\!\\!\\!/}{a\\!\\!\\!/} = 2 a \\cdot b \\cdot I_4.\n\\end{align}"
},
{
"math_id": 5,
"text": "I_4"
},
{
"math_id": 6,
"text": "{\\partial\\!\\!\\!/}^2 = \\partial^2 \\cdot I_4."
},
{
"math_id": 7,
"text": "\\begin{align}\n\n\n\\gamma_\\mu {a\\!\\!\\!/} \\gamma^\\mu &= -2 {a\\!\\!\\!/} \\\\\n\n\\gamma_\\mu {a\\!\\!\\!/} {b\\!\\!\\!/} \\gamma^\\mu &= 4 a \\cdot b \\cdot I_4 \\\\\n\n\\gamma_\\mu {a\\!\\!\\!/} {b\\!\\!\\!/} {c\\!\\!\\!/} \\gamma^\\mu &= -2 {c\\!\\!\\!/}{b\\!\\!\\!/} {a\\!\\!\\!/} \\\\\n\n\\gamma_\\mu {a\\!\\!\\!/} {b\\!\\!\\!/} {c\\!\\!\\!/}{d\\!\\!\\!/} \\gamma^\\mu &= 2( {d\\!\\!\\!/} {a\\!\\!\\!/} {b\\!\\!\\!/}{c\\!\\!\\!/}+{c\\!\\!\\!/} {b\\!\\!\\!/} {a\\!\\!\\!/}{d\\!\\!\\!/}) \\\\\n\n\\operatorname{tr}({a\\!\\!\\!/}{b\\!\\!\\!/}) &= 4 a \\cdot b \\\\\n\n\\operatorname{tr}({a\\!\\!\\!/}{b\\!\\!\\!/}{c\\!\\!\\!/}{d\\!\\!\\!/}) &= 4 \\left[(a \\cdot b)(c \\cdot d) - (a \\cdot c)(b \\cdot d) + (a \\cdot d)(b \\cdot c) \\right] \\\\\n\n\\operatorname{tr}({a\\!\\!\\!/}{\\gamma^\\mu}{b\\!\\!\\!/}{\\gamma^\\nu }) &= 4 \\left[a^\\mu b^\\nu + a^\\nu b^\\mu - \\eta^{\\mu \\nu}(a \\cdot b) \\right] \\\\\n\n\\operatorname{tr}(\\gamma_5 {a\\!\\!\\!/}{b\\!\\!\\!/}{c\\!\\!\\!/}{d\\!\\!\\!/}) &= 4 i \\varepsilon_{\\mu \\nu \\lambda \\sigma} a^\\mu b^\\nu c^\\lambda d^\\sigma \\\\\n\n\\operatorname{tr}({\\gamma^\\mu}{a\\!\\!\\!/}{\\gamma^\\nu}) &= 0 \\\\\n\n\\operatorname{tr}({\\gamma^5}{a\\!\\!\\!/}{b\\!\\!\\!/}) &= 0 \\\\\n\n\\operatorname{tr}({\\gamma^0}({a\\!\\!\\!/}+m){\\gamma^0}({b\\!\\!\\!/}+m)) &= 8a^0b^0-4(a.b)+4m^2 \\\\\n\n\\operatorname{tr}(({a\\!\\!\\!/}+m){\\gamma^\\mu}({b\\!\\!\\!/}+m){\\gamma^\\nu}) &=\n4 \\left[a^\\mu b^\\nu+a^\\nu b^\\mu - \\eta^{\\mu \\nu}((a \\cdot b)-m^2) \\right] \\\\\n\n\\operatorname{tr}({a\\!\\!\\!/}_1...{a\\!\\!\\!/}_{2n}) &= \\operatorname{tr}({a\\!\\!\\!/}_{2n}...{a\\!\\!\\!/}_1) \\\\\n\n\\operatorname{tr}({a\\!\\!\\!/}_1...{a\\!\\!\\!/}_{2n+1}) &= 0\n\n\\end{align}"
},
{
"math_id": 8,
"text": "\\varepsilon_{\\mu \\nu \\lambda \\sigma}"
},
{
"math_id": 9,
"text": "\\eta^{\\mu \\nu}"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "\\gamma^0 = \\begin{pmatrix} I & 0 \\\\ 0 & -I \\end{pmatrix},\\quad \\gamma^i = \\begin{pmatrix} 0 & \\sigma^i \\\\ -\\sigma^i & 0 \\end{pmatrix} \\,"
},
{
"math_id": 12,
"text": " p^\\mu = \\left(E, p_x, p_y, p_z \\right) \\,"
},
{
"math_id": 13,
"text": "\\begin{align}\n {p\\!\\!/} &= \\gamma^\\mu p_\\mu = \\gamma^0 p^0 - \\gamma^i p^i \\\\\n &= \\begin{bmatrix} p^0 & 0 \\\\ 0 & -p^0 \\end{bmatrix} - \\begin{bmatrix} 0 & \\sigma^i p^i \\\\ -\\sigma^i p^i & 0 \\end{bmatrix} \\\\\n &= \\begin{bmatrix} E & -\\vec{\\sigma} \\cdot \\vec{p} \\\\ \\vec{\\sigma} \\cdot \\vec{p} & -E \\end{bmatrix}.\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=1394385 |
1394454 | Vertex function | Effective particle coupling beyond tree level
In quantum electrodynamics, the vertex function describes the coupling between a photon and an electron beyond the leading order of perturbation theory. In particular, it is the one particle irreducible correlation function involving the fermion formula_0, the antifermion formula_1, and the vector potential A.
Definition.
The vertex function formula_2 can be defined in terms of a functional derivative of the effective action Seff as
formula_3
The dominant (and classical) contribution to formula_2 is the gamma matrix formula_4, which explains the choice of the letter. The vertex function is constrained by the symmetries of quantum electrodynamics — Lorentz invariance; gauge invariance or the transversality of the photon, as expressed by the Ward identity; and invariance under parity — to take the following form:
formula_5
where formula_6, formula_7 is the incoming four-momentum of the external photon (on the right-hand side of the figure), and F1(q2) and F2(q2) are "form factors" that depend only on the momentum transfer q2. At tree level (or leading order), F1(q2) = 1 and F2(q2) = 0. Beyond leading order, the corrections to F1(0) are exactly canceled by the field strength renormalization. The form factor F2(0) corresponds to the anomalous magnetic moment "a" of the fermion, defined in terms of the Landé g-factor as:
formula_8 | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\bar{\\psi}"
},
{
"math_id": 2,
"text": "\\Gamma^\\mu"
},
{
"math_id": 3,
"text": "\\Gamma^\\mu = -{1\\over e}{\\delta^3 S_{\\mathrm{eff}}\\over \\delta \\bar{\\psi} \\delta \\psi \\delta A_\\mu}"
},
{
"math_id": 4,
"text": "\\gamma^\\mu"
},
{
"math_id": 5,
"text": " \\Gamma^\\mu = \\gamma^\\mu F_1(q^2) + \\frac{i \\sigma^{\\mu\\nu} q_{\\nu}}{2 m} F_2(q^2) "
},
{
"math_id": 6,
"text": " \\sigma^{\\mu\\nu} = (i/2) [\\gamma^{\\mu}, \\gamma^{\\nu}] "
},
{
"math_id": 7,
"text": " q_{\\nu} "
},
{
"math_id": 8,
"text": " a = \\frac{g-2}{2} = F_2(0) "
}
]
| https://en.wikipedia.org/wiki?curid=1394454 |
13948331 | Studentized range | In statistics, the studentized range, denoted "q", is the difference between the largest and smallest values in a sample, normalized by the sample standard deviation.
It is named after William Sealy Gosset (who wrote under the pseudonym "Student"), and was introduced by him in 1927.
The concept was later discussed by Newman (1939), Keuls (1952), and John Tukey in some unpublished notes.
Its statistical distribution is the "studentized range distribution", which is used for multiple comparison procedures, such as the single-step procedure Tukey's range test, the Newman–Keuls method, and Duncan's step-down procedure, and for establishing confidence intervals that are still valid after data snooping has occurred.
Description.
The value of the studentized range, most often represented by the variable "q", can be defined based on a random sample "x"1, ..., "x""n" from the "N"(0, 1) distribution, together with another random variable "s" that is independent of all the "xi" and such that "νs"2 has a "χ"2 distribution with "ν" degrees of freedom. Then
formula_0
has the Studentized range distribution for "n" groups and "ν" degrees of freedom. In applications, the "xi" are typically the means of samples each of size "m", "s"2 is the pooled variance, and the degrees of freedom are "ν" = "n"("m" − 1).
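As a concrete numerical illustration (our own example; the group data and seed are invented), the following sketch computes "q" for the means of "n" groups of size "m", estimating the scale "s" by the standard error of a group mean derived from the pooled variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, m = 4, 10                        # n groups, m observations each
data = rng.normal(loc=0.0, scale=1.0, size=(n_groups, m))

means = data.mean(axis=1)                  # the x_i: one mean per group
# pooled variance of a single observation; nu = n*(m-1) degrees of freedom
pooled_var = data.var(axis=1, ddof=1).mean()
# a group mean has standard error sqrt(pooled_var / m)
s = np.sqrt(pooled_var / m)

q = (means.max() - means.min()) / s
print(f"studentized range q = {q:.3f} with nu = {n_groups * (m - 1)} df")
```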
The critical value of "q" depends on three factors: the significance level "α", the number of groups "n", and the degrees of freedom "ν".
Distribution.
If "X"1, ..., "X""n" are independent identically distributed random variables that are normally distributed, the probability distribution of their studentized range is what is usually called the "studentized range distribution". Note that the definition of "q" does not depend on the expected value or the standard deviation of the distribution from which the sample is drawn, and therefore its probability distribution is the same regardless of those parameters.
"Studentization".
Generally, the term "studentized" means that the variable's scale was adjusted by dividing by an estimate of a population standard deviation (see also studentized residual). The fact that the standard deviation is a "sample" standard deviation rather than the "population" standard deviation, and thus something that differs from one random sample to the next, is essential to the definition and the distribution of the "Studentized" data. The variability in the value of the "sample" standard deviation contributes additional uncertainty into the values calculated. This complicates the problem of finding the probability distribution of any statistic that is "studentized".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nq _{n,\\nu}= \\frac{\\max\\{\\,x_1,\\ \\dots, \\ x_n\\,\\} - \\min\\{\\,x_1,\\ \\dots,\\ x_n\\}}{s} = \\max_{i,j=1, \\dots, n} \\left\\{\\frac{x_i - x_j}{s}\\right\\}"
}
]
| https://en.wikipedia.org/wiki?curid=13948331 |
13949634 | Lazard's universal ring | In mathematics, Lazard's universal ring is a ring introduced by Michel Lazard over which the universal commutative one-dimensional formal group law is defined.
There is a universal commutative one-dimensional formal group law over a universal commutative ring defined as follows. We let
formula_0
be
formula_1
for indeterminates formula_2, and we define the universal ring "R" to be the commutative ring generated by the elements formula_2, with the relations that are forced by the associativity and commutativity laws for formal group laws. More or less by definition, the ring "R" has the following universal property:
For every commutative ring "S", one-dimensional formal group laws over "S" correspond to ring homomorphisms from "R" to "S".
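The relations among the coefficients forced by associativity can be computed explicitly in low degrees. The following SymPy sketch (our own illustration) truncates the universal formal group law at total degree 3 and finds that associativity forces c_{2,1} = c_{1,2}, the same relation already imposed by commutativity:

```python
from sympy import symbols, expand, Poly

x, y, z, c11, c12, c21 = symbols("x y z c11 c12 c21")

def F(a, b):
    # the universal formal group law, truncated at total degree 3
    return a + b + c11*a*b + c21*a**2*b + c12*a*b**2

def truncate(expr, deg=3):
    # drop all monomials of total degree in x, y, z above `deg`
    p = Poly(expand(expr), x, y, z)
    return sum(coeff * x**i * y**j * z**k
               for (i, j, k), coeff in p.terms() if i + j + k <= deg)

# the associativity defect F(F(x, y), z) - F(x, F(y, z)), up to degree 3
defect = truncate(F(F(x, y), z) - F(x, F(y, z)))
print(defect)
# vanishing of the defect forces c21 = c12
assert expand(defect - 2 * (c21 - c12) * x * y * z) == 0
```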
The commutative ring "R" constructed above is known as Lazard's universal ring. At first sight it seems to be incredibly complicated: the relations between its generators are very messy. However Lazard proved that it has a very simple structure: it is just a polynomial ring (over the integers) on generators of degree 1, 2, 3, ..., where formula_2 has degree formula_3. Daniel Quillen (1969) proved that the coefficient ring of complex cobordism is naturally isomorphic as a graded ring to Lazard's universal ring. Hence, topologists commonly regrade the Lazard ring so that formula_2 has degree formula_4, because the coefficient ring of complex cobordism is evenly graded. | [
{
"math_id": 0,
"text": "F(x,y)"
},
{
"math_id": 1,
"text": "x+y+\\sum_{i,j} c_{i,j} x^i y^j"
},
{
"math_id": 2,
"text": "c_{i,j}"
},
{
"math_id": 3,
"text": "(i+j-1)"
},
{
"math_id": 4,
"text": "2(i+j-1)"
}
]
| https://en.wikipedia.org/wiki?curid=13949634 |
13949996 | Pyrazolidine | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Pyrazolidine is a heterocyclic compound. It is a liquid that is stable in air, but it is hygroscopic.
Preparation.
Pyrazolidine can be produced by cyclization of 1,3-dichloropropane or 1,3-dibromopropane with hydrazine:
formula_0
formula_1
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Cl{-}(CH_2)_3{-}Cl\\ +\\ N_2H_4\\ \\xrightarrow {\\Delta T}\\ C_3H_8N_2\\ +\\ 2 HCl}"
},
{
"math_id": 1,
"text": "\\mathrm{Br{-}(CH_2)_3{-}Br\\ +\\ N_2H_4\\ \\xrightarrow {\\Delta T}\\ C_3H_8N_2\\ +\\ 2 HBr}"
}
]
| https://en.wikipedia.org/wiki?curid=13949996 |
1395171 | QBD (electronics) | QBD is the term applied to the charge-to-breakdown measurement of a semiconductor device. It is a standard destructive test method used to determine the quality of gate oxides in MOS devices. It is equal to the total charge passing through the dielectric layer (i.e. electron or hole fluence multiplied by the elementary charge) just before failure. Thus QBD is a measure of time-dependent gate oxide breakdown. As a measure of oxide quality, QBD can also be a useful predictor of product reliability under specified electrical stress conditions.
Test method.
Voltage is applied to the MOS structure to force a controlled current through the oxide, i.e. to inject a controlled amount of charge into the dielectric layer. By measuring the time after which the measured voltage drops towards zero (when electrical breakdown occurs) and integrating the injected current over time, the charge needed to break the gate oxide is determined.
This gate charge integral is defined as:
formula_0
where formula_1 is the measurement time at the step just prior to destructive avalanche breakdown.
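Since i(t) is recorded at discrete time steps, the integral is evaluated numerically in practice. A minimal sketch with an invented constant-current record (the 1 mA stress level and 10 s breakdown time are hypothetical values, not real measurement data):

```python
import numpy as np

# hypothetical constant-current stress record: 1 mA forced through the oxide,
# destructive breakdown observed at t_bd = 10 s
t_bd = 10.0
t = np.linspace(0.0, t_bd, 1001)            # sample times [s], 10 ms apart
i = np.full_like(t, 1.0e-3)                 # measured gate current [A]

# trapezoidal rule for Q_bd = integral of i(t) dt from 0 to t_bd
q_bd = float(np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t)))
print(f"Q_bd = {q_bd:.4f} C")               # 1 mA for 10 s gives 0.01 C
```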
Variants.
There are five common variants of the QBD test method:
For the V-ramp test procedure, the measured current is integrated to obtain QBD. The measured current is also used as a detection criterion for terminating the voltage ramp. One of the defined criteria is the change of logarithmic current slope between successive voltage steps.
Analysis.
The cumulative distribution of measured QBD is commonly analysed using a Weibull chart.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q_\\text{bd} = \\int_{0}^{t_\\text{bd}} i(t)\\, dt"
},
{
"math_id": 1,
"text": "t_\\text{bd}"
}
]
| https://en.wikipedia.org/wiki?curid=1395171 |
1395200 | Dual lattice | Construction analogous to that of a dual vector space
In the theory of lattices, the dual lattice is a construction analogous to that of a dual vector space. In certain respects, the geometry of the dual lattice of a lattice formula_0 is the reciprocal of the geometry of formula_0, a perspective which underlies many of its uses.
Dual lattices have many applications inside of lattice theory, theoretical computer science, cryptography and mathematics more broadly. For instance, it is used in the statement of the Poisson summation formula, transference theorems provide connections between the geometry of a lattice and that of its dual, and many lattice algorithms exploit the dual lattice.
For an article with emphasis on the physics / chemistry applications, see Reciprocal lattice. This article focuses on the mathematical notion of a dual lattice.
Definition.
Let formula_1 be a lattice. That is, formula_2 for some matrix formula_3.
The dual lattice is the set of linear functionals on formula_0 which take integer values on each point of formula_0:
formula_4
If formula_5 is identified with formula_6 using the dot-product, we can write formula_7 It is important to restrict to vectors in the span of formula_8, otherwise the resulting object is not a lattice.
Despite this identification of ambient Euclidean spaces, it should be emphasized that a lattice and its dual are fundamentally different kinds of objects; one consists of vectors in Euclidean space, and the other consists of a set of linear functionals on that space. Along these lines, one can also give a more abstract definition as follows:
formula_9
However, we note that the dual is not considered just as an abstract Abelian group of functionals, but comes with a natural inner product: formula_10, where formula_11 is an orthonormal basis of formula_12. (Equivalently, one can declare that, for an orthonormal basis formula_11 of formula_13, the dual vectors formula_14, defined by formula_15 are an orthonormal basis.) One of the key uses of duality in lattice theory is the relationship of the geometry of the primal lattice with the geometry of its dual, for which we need this inner product. In the concrete description given above, the inner product on the dual is generally implicit.
Properties.
We list some elementary properties of the dual lattice:
Examples.
Using the properties listed above, the dual of a lattice can be efficiently calculated, by hand or computer.
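For a full-rank lattice, in particular, the dual basis is the inverse transpose of the primal basis, and the elementary properties can be checked numerically. A minimal NumPy sketch (the basis matrix is an arbitrary example of ours):

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])              # columns form a basis of L
B_dual = np.linalg.inv(B).T             # dual basis B^{-T} (full-rank case)

# dual basis vectors pair with the primal basis as <b_i, b*_j> = delta_ij
assert np.allclose(B.T @ B_dual, np.eye(2))
# det(L*) = 1 / det(L)
assert np.isclose(abs(np.linalg.det(B_dual)), 1.0 / abs(np.linalg.det(B)))

# a random dual vector has an integer inner product with a random lattice vector
rng = np.random.default_rng(1)
v = B_dual @ rng.integers(-5, 5, size=2)    # v in L*
x = B @ rng.integers(-5, 5, size=2)         # x in L
assert np.isclose(v @ x, round(v @ x))
print("dual lattice checks passed")
```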
Transference theorems.
Each formula_47 partitions formula_8 according to the level sets corresponding to each of the integer values. Smaller choices of formula_48 produce level sets with more distance between them; in particular, the distance between the layers is formula_49. Reasoning this way, one can show that finding small vectors in formula_50 provides a lower bound on the covering radius of formula_8, that is, on the largest distance from a point of space to the lattice. In general, theorems relating the properties of a lattice with properties of its dual are known as transference theorems. In this section we explain some of them, along with some consequences for complexity theory.
We recall some terminology: For a lattice formula_51, let formula_52 denote the radius of the smallest ball containing formula_53 linearly independent vectors of formula_51. For instance, formula_54 is the length of the shortest nonzero vector of formula_51. Let formula_55 denote the covering radius of formula_0.
In this notation, the lower bound mentioned in the introduction to this section states that formula_56.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Banaszczyk) — For a lattice formula_51:
There is always an efficiently checkable certificate for the claim that a lattice has a short nonzero vector, namely the vector itself. An important corollary of Banaszczyk's transference theorem is that formula_59, which implies that to prove that a lattice has no short vectors, one can exhibit a basis for the dual lattice consisting of short vectors. Using these ideas one can show that approximating the shortest vector of a lattice to within a factor of n (the formula_60 problem) is in formula_61.
Other transference theorems:
Poisson summation formula.
The dual lattice is used in the statement of a general Poisson summation formula.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Poisson summation) —
Let formula_66 be a well-behaved function, such as a Schwartz function, and let formula_67 denote its Fourier transform. Let formula_1 be a full rank lattice. Then:
formula_68.
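For intuition, the formula can be checked numerically in one dimension for the lattice L = sZ, whose dual is (1/s)Z and whose determinant is s, taking f to be the Gaussian e^(−πx²), which equals its own Fourier transform. The scale s and the truncation N below are arbitrary choices of ours; the Gaussian tails make the truncation error negligible:

```python
import math

def f(x):                          # Gaussian, equal to its own Fourier transform
    return math.exp(-math.pi * x * x)

s = 0.7                            # lattice L = sZ, dual L* = (1/s)Z, det(L) = s
N = 50                             # truncation of the infinite sums

lhs = sum(f(s * n) for n in range(-N, N + 1))               # sum of f over L
rhs = (1.0 / s) * sum(f(n / s) for n in range(-N, N + 1))   # (1/det L) * sum of f-hat over L*
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12      # the two sides of the Poisson summation formula agree
```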
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " L "
},
{
"math_id": 1,
"text": " L \\subseteq \\mathbb{R}^n "
},
{
"math_id": 2,
"text": " L = B \\mathbb{Z}^n "
},
{
"math_id": 3,
"text": " B "
},
{
"math_id": 4,
"text": " L^* = \\{ f \\in (\\text{span}(L))^* : \\forall x \\in L, f(x) \\in \\mathbb{Z} \\}. "
},
{
"math_id": 5,
"text": " (\\mathbb{R}^n)^* "
},
{
"math_id": 6,
"text": " \\mathbb{R}^n "
},
{
"math_id": 7,
"text": " L^* = \\{ v \\in \\text{span}(L) : \\forall x \\in L, v \\cdot x \\in \\mathbb{Z} \\}. "
},
{
"math_id": 8,
"text": " L "
},
{
"math_id": 9,
"text": " L^* = \\{ f : L \\to \\mathbb{Z} : \\text{f is a linear function} \\} = \\text{Hom}_{\\text{Ab}}(L, \\mathbb{Z}). "
},
{
"math_id": 10,
"text": " f \\cdot g = \\sum_i f(e_i) g(e_i) "
},
{
"math_id": 11,
"text": " e_i "
},
{
"math_id": 12,
"text": " \\text{span}(L)"
},
{
"math_id": 13,
"text": " \\text{span}(L) "
},
{
"math_id": 14,
"text": " e^*_i "
},
{
"math_id": 15,
"text": " e_i^*(e_j) = \\delta_{ij} "
},
{
"math_id": 16,
"text": " B = [b_1, \\ldots, b_n] "
},
{
"math_id": 17,
"text": " L "
},
{
"math_id": 18,
"text": " z \\in \\text{span}(L) "
},
{
"math_id": 19,
"text": " z \\in L^* \\iff b^T_i z \\in \\mathbb{Z}, i = 1, \\ldots, n \\iff B^T z \\in \\mathbb{Z}^n"
},
{
"math_id": 20,
"text": " B (B^T B)^{-1} "
},
{
"math_id": 21,
"text": " B^{-T} "
},
{
"math_id": 22,
"text": " z \\in L^* \\iff B^T z \\in \\mathbb{Z}^n \\iff z \\in B^{-T} \\mathbb{Z}^n "
},
{
"math_id": 23,
"text": " (L^*)^* = L "
},
{
"math_id": 24,
"text": " L,M "
},
{
"math_id": 25,
"text": " L \\subseteq M "
},
{
"math_id": 26,
"text": " L^* \\supseteq M^* "
},
{
"math_id": 27,
"text": " \\text{det}(L^*) = \\frac{1}{\\text{det}(L)} "
},
{
"math_id": 28,
"text": " q "
},
{
"math_id": 29,
"text": " (qL)^* = \\frac{1}{q} L^* "
},
{
"math_id": 30,
"text": " R "
},
{
"math_id": 31,
"text": " (RL)^* = R L^* "
},
{
"math_id": 32,
"text": " x \\cdot y \\in \\mathbb{Z} "
},
{
"math_id": 33,
"text": " x,y \\in L "
},
{
"math_id": 34,
"text": " L \\subseteq L^* "
},
{
"math_id": 35,
"text": " L' \\subseteq L "
},
{
"math_id": 36,
"text": " |L/L'| <\\infty "
},
{
"math_id": 37,
"text": " \\text{det}(L') = \\text{det}(L) | L/L'| "
},
{
"math_id": 38,
"text": " \\text{det}(L)^2 = | L^* / L| "
},
{
"math_id": 39,
"text": " L = L^*"
},
{
"math_id": 40,
"text": " \\text{det}(L) = 1. "
},
{
"math_id": 41,
"text": " \\mathbb{Z}^n "
},
{
"math_id": 42,
"text": " 2\\mathbb{Z} \\oplus \\mathbb{Z} "
},
{
"math_id": 43,
"text": " \\frac{1}{2} \\mathbb{Z} \\oplus \\mathbb{Z} "
},
{
"math_id": 44,
"text": " L = \\{ x \\in \\mathbb{Z}^n : \\sum x_i = 0 \\mod 2 \\}"
},
{
"math_id": 45,
"text": " L^* = \\mathbb{Z}^n + (\\frac{1}{2}, \\ldots, \\frac{1}{2}) "
},
{
"math_id": 46,
"text": " 1/2 "
},
{
"math_id": 47,
"text": " f \\in L^* \\setminus \\{0\\} "
},
{
"math_id": 48,
"text": " f "
},
{
"math_id": 49,
"text": " 1 / ||f|| "
},
{
"math_id": 50,
"text": " L^* "
},
{
"math_id": 51,
"text": " L"
},
{
"math_id": 52,
"text": " \\lambda_i(L) "
},
{
"math_id": 53,
"text": " i "
},
{
"math_id": 54,
"text": " \\lambda_1(L) "
},
{
"math_id": 55,
"text": " \\mu(L) = \\text{max}_{x \\in \\mathbb{R}^n } d(x, L) "
},
{
"math_id": 56,
"text": " \\mu(L) \\geq \\frac{1}{2 \\lambda_1(L^*)} "
},
{
"math_id": 57,
"text": " 1 \\leq 2 \\lambda_1(L) \\mu(L^*) \\leq n "
},
{
"math_id": 58,
"text": " 1 \\leq \\lambda_i(L) \\lambda_{n - i + 1}(L^*) \\leq n "
},
{
"math_id": 59,
"text": " \\lambda_1(L) \\geq \\frac{1}{\\lambda_n(L^*)} "
},
{
"math_id": 60,
"text": " \\text{GAPSVP}_n "
},
{
"math_id": 61,
"text": " \\text{NP} \\cap \\text{coNP} "
},
{
"math_id": 62,
"text": " \\lambda_1(L) \\lambda_1(L^*) \\leq n "
},
{
"math_id": 63,
"text": " \\lambda_1(L) \\leq \\sqrt{n} (\\text{det}(L)^{1/n}) "
},
{
"math_id": 64,
"text": " \\lambda_1(L^*) \\leq \\sqrt{n} (\\text{det}(L^*)^{1/n}) "
},
{
"math_id": 65,
"text": " \\text{det}(L) = \\frac{1}{\\text{det}(L^*)} "
},
{
"math_id": 66,
"text": " f : \\mathbb{R}^n \\to \\mathbb{R} "
},
{
"math_id": 67,
"text": " \\hat{f} "
},
{
"math_id": 68,
"text": " \\sum_{x \\in L} f(x) = \\frac{1}{\\det(L)} \\sum_{y \\in L^*} \\hat{f}(y) "
}
]
| https://en.wikipedia.org/wiki?curid=1395200 |
13953265 | Bicone | 3D shape obtained by revolving a rhombus about one of its axes of symmetry
In geometry, a bicone or dicone (from Latin "bi-" and Greek "di-", both meaning "two") is the three-dimensional surface of revolution of a rhombus around one of its axes of symmetry. Equivalently, a bicone is the surface created by joining two congruent right circular cones at their bases.
A bicone has circular symmetry and orthogonal bilateral symmetry.
Geometry.
For a circular bicone with radius "R" and center-to-apex height "H", the volume is
formula_0
The total surface area of the bicone, made up of two lateral right circular cone surfaces, is
formula_1 where formula_2 is the slant height of each cone.
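These formulas are straightforward to evaluate; a small sketch (the example dimensions are our own, chosen so the slant height is exact):

```python
import math

def bicone_volume(R, H):
    """Volume of a bicone with equatorial radius R and center-to-apex height H."""
    return (2.0 / 3.0) * math.pi * R**2 * H

def bicone_surface_area(R, H):
    """Total lateral surface area: two cones of slant height sqrt(R^2 + H^2)."""
    S = math.hypot(R, H)           # slant height
    return 2.0 * math.pi * R * S

print(bicone_volume(3.0, 4.0))         # (2/3) * pi * 9 * 4 = 24 * pi
print(bicone_surface_area(3.0, 4.0))   # 2 * pi * 3 * 5 = 30 * pi
```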
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = \\frac{2}{3} \\pi R^2 H. "
},
{
"math_id": 1,
"text": "SA =2\\pi R S\\,"
},
{
"math_id": 2,
"text": "S = \\sqrt{R^2 + H^2}"
}
]
| https://en.wikipedia.org/wiki?curid=13953265 |
13953779 | Hudson's equation | Hudson's equation, also known as Hudson formula, is an equation used by coastal engineers to calculate the minimum size of riprap (armourstone) required to provide "satisfactory" stability characteristics for rubble structures such as breakwaters under attack from storm wave conditions.
The equation was developed by the United States Army Corps of Engineers, Waterways Experiment Station (WES), following extensive investigations by Hudson (1953, 1959, 1961a, 1961b).
Initial equation.
The equation itself is:
formula_0
where:
* "W" is the design weight of an individual armour unit
* formula_1 is the specific weight of the armour material
* "H" is the design wave height at the structure
* Δ is the relative buoyant density of the armour material
* "θ" is the angle of the structure slope, measured from the horizontal
* "K""D" is a dimensionless stability coefficient:
* "K""D" = around 3 for natural quarry rock
* "K""D" = around 10 for artificial interlocking concrete blocks
Updated equation.
This equation was rewritten as follows in the nineties:
formula_2
where:
* "H""s" is the significant wave height at the structure
* "D""n"50 is the nominal median diameter of the armour stone
* Δ is the relative buoyant density of the armour material
* "θ" is the angle of the structure slope
* "K""D" is a dimensionless stability coefficient:
* "K""D" = around 3 for natural quarry rock
* "K""D" = around 10 for artificial interlocking concrete blocks
The armourstone may be considered stable if the "stability number" "Ns = Hs / Δ Dn50" < 1.5 to 2, with damage increasing rapidly for Ns > 3. This formula has been for many years the US standard for the design of rock structures under wave attack. These equations may be used for preliminary design, but scale model testing (2D in a wave flume, and 3D in a wave basin) is absolutely needed before construction is undertaken.
A drawback of the Hudson formula is that it is valid only for relatively steep waves (that is, storm waves rather than swell), and it does not apply to breakwaters and shore protections with an impermeable core. The formula also gives no estimate of the degree of damage a breakwater suffers during a storm. For armourstone, therefore, the Van der Meer formula or a variant of it is nowadays used; for concrete breakwater elements, a variant of the Hudson formula is often used.
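As a numerical illustration of the updated formula (all input values below are illustrative choices of ours, not design guidance; as stressed above, real designs require scale model testing), it can be solved for the required median stone diameter:

```python
import math

def hudson_dn50(Hs, K_D, slope_cot, rho_rock=2650.0, rho_water=1025.0):
    """Required nominal median stone diameter D_n50 [m] from the updated
    Hudson formula: Hs / (Delta * Dn50) = (K_D * cot(theta))**(1/3) / 1.27."""
    delta = rho_rock / rho_water - 1.0              # relative buoyant density
    ns = (K_D * slope_cot) ** (1.0 / 3.0) / 1.27    # allowable stability number
    return Hs / (delta * ns)

# example: 3 m significant wave height, rock armour (K_D ~ 3), slope 1:2
dn50 = hudson_dn50(Hs=3.0, K_D=3.0, slope_cot=2.0)
mass = 2650.0 * dn50**3                             # median stone mass M50 [kg]
print(f"D_n50 = {dn50:.2f} m, M50 = {mass:.0f} kg")
```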
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W =\\frac{\\gamma_r H^3}{K_D \\Delta^3\\cot\\theta}"
},
{
"math_id": 1,
"text": "\\gamma_r"
},
{
"math_id": 2,
"text": "\\frac{H_s}{\\Delta D_{n50}}= \\frac{(K_D \\cot \\theta)^{1/3}}{1.27}"
}
]
| https://en.wikipedia.org/wiki?curid=13953779 |
1395554 | Distance modulus | Logarithmic distance scale
The distance modulus is a way of expressing distances that is often used in astronomy. It describes distances on a logarithmic scale based on the astronomical magnitude system.
Definition.
The distance modulus formula_0 is the difference between the apparent magnitude formula_1 (ideally, corrected for the effects of interstellar absorption) and the absolute magnitude formula_2 of an astronomical object. It is related to the luminosity distance formula_3 in parsecs by:
formula_4
This definition is convenient because the observed brightness of a light source is related to its distance by the inverse square law (a source twice as far away appears one quarter as bright) and because brightnesses are usually expressed not directly, but in magnitudes.
Absolute magnitude formula_2 is defined as the apparent magnitude of an object when seen at a distance of 10 parsecs. If a light source has flux "F"("d") when observed from a distance of formula_3 parsecs, and flux "F"(10) when observed from a distance of 10 parsecs, the inverse-square law is then written as:
formula_5
The magnitudes and flux are related by:
formula_6
Substituting and rearranging, we get:
formula_7
which means that the apparent magnitude is the absolute magnitude plus the distance modulus.
Isolating formula_3 from the equation formula_8 shows that the distance (or luminosity distance) in parsecs is given by
formula_9
The uncertainty in the distance in parsecs ("δd") can be computed from the uncertainty in the distance modulus ("δμ") using
formula_10
which is derived using standard error analysis.
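Both relations are easy to evaluate numerically. The following sketch uses an illustrative distance modulus of 18.5 mag (roughly that of the LMC) and an assumed modulus uncertainty of 0.1 mag:

```python
import math

def distance_pc(mu):
    """Distance in parsecs from the distance modulus mu = m - M."""
    return 10 ** (mu / 5 + 1)

def distance_err_pc(mu, dmu):
    """Propagated distance uncertainty from the modulus uncertainty."""
    return 0.2 * math.log(10) * 10 ** (0.2 * mu + 1) * dmu

mu = 18.5             # illustrative distance modulus (about that of the LMC)
d = distance_pc(mu)   # about 50,100 pc, i.e. roughly 50 kpc
dd = distance_err_pc(mu, 0.1)
print(round(d), round(dd))
```

Note that the error formula is just the derivative of the distance formula with respect to the modulus, which is why the two functions share the same exponential term.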
Different kinds of distance moduli.
Distance is not the only quantity relevant in determining the difference between absolute and apparent magnitude. Absorption is another important factor, and it may even be a dominant one in particular cases ("e.g.", in the direction of the Galactic Center). Thus a distinction is made between distance moduli uncorrected for interstellar absorption, the values of which would overestimate distances if used naively, and absorption-corrected moduli.
The first ones are termed "visual distance moduli" and are denoted by formula_11, while the second ones are called "true distance moduli" and denoted by formula_12.
Visual distance moduli are computed by calculating the difference between the observed apparent magnitude and some theoretical estimate of the absolute magnitude. True distance moduli require a further theoretical step; that is, the estimation of the interstellar absorption coefficient.
Usage.
Distance moduli are most commonly used when expressing the distance to other galaxies in the relatively nearby universe. For example, the Large Magellanic Cloud (LMC) is at a distance modulus of 18.5, the Andromeda Galaxy's distance modulus is 24.4, and the galaxy NGC 4548 in the Virgo Cluster has a DM of 31.0. In the case of the LMC, this means that Supernova 1987A, with a peak apparent magnitude of 2.8, had an absolute magnitude of −15.7, which is low by supernova standards.
Using distance moduli makes computing magnitudes easy. For instance, a solar-type star (M = 5) in the Andromeda Galaxy (DM = 24.4) would have an apparent magnitude (m) of 5 + 24.4 = 29.4, so it would be barely visible to the Hubble Space Telescope, which has a limiting magnitude of about 30. Since it is apparent magnitudes which are actually measured at a telescope, many discussions about distances in astronomy are really discussions about the putative or derived absolute magnitudes of the distant objects being observed.
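The bookkeeping in these examples is simple addition and subtraction of magnitudes; a minimal sketch:

```python
def absolute_magnitude(m, dm):
    """Absolute magnitude from apparent magnitude and distance modulus."""
    return m - dm

def apparent_magnitude(M, dm):
    """Apparent magnitude from absolute magnitude and distance modulus."""
    return M + dm

# Supernova 1987A in the LMC (distance modulus 18.5):
print(round(absolute_magnitude(2.8, 18.5), 1))  # -15.7
# A solar-type star (M = 5) in the Andromeda Galaxy (DM = 24.4):
print(round(apparent_magnitude(5, 24.4), 1))    # 29.4
```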
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu=m-M"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "\\begin{align}\n\\log_{10}(d) &= 1 + \\frac{\\mu}{5} \\\\\n\\mu &= 5\\log_{10}(d) - 5\n\\end{align}"
},
{
"math_id": 5,
"text": "F(d) = \\frac{F(10)}{\\left(\\frac{d}{10}\\right)^2} "
},
{
"math_id": 6,
"text": "\\begin{align}\nm &= -2.5 \\log_{10} F(d) \\\\[1ex]\nM &= -2.5 \\log_{10} F(d=10)\n\\end{align}"
},
{
"math_id": 7,
"text": "\\mu = m - M = 5 \\log_{10}(d) - 5 = 5 \\log_{10}\\left(\\frac{d}{10\\,\\mathrm{pc}}\\right)"
},
{
"math_id": 8,
"text": "5 \\log_{10}(d) - 5 = \\mu "
},
{
"math_id": 9,
"text": "d = 10^{\\frac{\\mu}{5}+1} "
},
{
"math_id": 10,
"text": " \\delta d = 0.2 \\ln(10) 10^{0.2\\mu+1} \\delta\\mu \\approx 0.461 d \\ \\delta\\mu"
},
{
"math_id": 11,
"text": "{(m - M)}_{v}"
},
{
"math_id": 12,
"text": "{(m - M)}_{0}"
}
]
| https://en.wikipedia.org/wiki?curid=1395554 |
13955557 | Mechanostat | The Mechanostat is a term describing the way in which mechanical loading influences bone structure by changing the mass (amount of bone) and architecture (its arrangement) to provide a structure that resists habitual loads with an economical amount of material. As changes in the skeleton are accomplished by the processes of formation (bone growth) and resorption (bone loss), the mechanostat models the effect of influences on the skeleton by those processes, through their effector cells: osteocytes, osteoblasts, and osteoclasts. The term was invented in the 1960s by Harold Frost, an orthopaedic surgeon and researcher, and is described extensively in articles referring to Frost and Webster Jee's "Utah Paradigm of Skeletal Physiology".

The Mechanostat is often defined as a practical description of Wolff's law, described by Julius Wolff (1836–1902), but this is not completely accurate. Wolff wrote his treatises on bone after images of bone sections were described by Culmann and von Meyer, who suggested that the arrangement of the struts (trabeculae) at the ends of the bones was aligned with the stresses experienced by the bone. It has since been established that the static methods used for those calculations of lines of stress were inappropriate for work on what were, in effect, curved beams, a finding described by Lance Lanyon, a leading researcher in the area, as "a triumph of a good idea over mathematics." While Wolff pulled together the work of Culmann and von Meyer, it was the French scientist Roux who first used the term "functional adaptation" to describe the way that the skeleton optimized itself for its function, though Wolff is credited by many for that.
According to the Mechanostat, bone growth and bone loss are stimulated by the local, mechanical, elastic deformation of bone. This elastic deformation is caused by the peak forces produced by muscles (measurable, e.g., using mechanography). The adaptation (a feedback control loop) of bone to the maximum forces is considered to be a lifelong process. Hence, bone adapts its mechanical properties according to the needed mechanical function: bone mass, bone geometry, and bone strength (see also Stress-strain index, SSI) adapt to everyday usage/needs. "Maximal force" in this context is a simplification of the real input to bone that initiates adaptive changes. While the magnitude of a force (the weight of a load, for example) is an important determinant of its effect on the skeleton, it is not the only one. The rate of application of force is also critical: slow application of force over several seconds is not experienced by bone cells as a stimulus, but they are sensitive to very rapid application of forces (such as impacts), even of lower magnitude. High-frequency vibration of bone at very low magnitudes is thought to stimulate changes, but the research in the area is not completely unequivocal. It is clear that bones respond better to loading/exercise with gaps between individual events, so that two loads separated by ten seconds of rest are a more potent stimulus than ten loads within the same ten seconds.
Due to this control loop, there is a linear relationship in the healthy body between muscle cross sectional area (as a surrogate for typical maximum forces the muscle is able to produce under physiological conditions) and the bone cross sectional area (as a surrogate for bone strength).
These relations are of immense importance, especially for conditions of bone loss like osteoporosis, since an adapted training utilizing the needed maximum forces on the bone can be used to stimulate bone growth and thereby prevent or help to minimize bone loss. An example for such an efficient training is vibration training or whole body vibration.
Modeling and remodeling.
Frost defined four regions of elastic bone deformation which result in different consequences for the control loop.
According to this, a typical bone (e.g., the tibia) has a safety margin of about 5 to 7 between typical load (2000 to 3000 μStrain) and fracture load (about 15,000 μStrain).
The comments above are all one part of how the skeleton responds to loading, because the different bones of the skeleton have a range of habitual strain environments (encompassing magnitude, rate, frequency, rest periods, etc.), and they are not uniform. The numbers in the table are only theoretical and may reflect the response of the center of a long bone under specific circumstances. Other parts of the same bone and other bones in the same individual experience different loading and adapt to them despite different thresholds between disuse, maintenance and adaptive formation. Furthermore, bone structure is controlled by a complex series of different influences, such as calcium status, the effects of hormones, age, diet, sex, disease, and pharmaceuticals. A bone experiencing what would in some circumstances be seen as a stimulus to form more material could either be maintained at a constant level where circulating calcium was low, or the same loading could merely temper the amount of resorption experienced in an old person with a bone-wasting disease.
Unit: Strain "E".
The elastic deformation of bone is measured in "μStrain", where the strain formula_0 is the relative change in length; 1000 μStrain corresponds to a 0.1% change in the length of the bone.
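As a minimal sketch, converting a relative length change to μStrain:

```python
def microstrain(delta_l, l):
    """Strain E = delta_l / l, expressed in microstrain (units of 1e-6)."""
    return (delta_l / l) * 1e6

# A 1 mm elongation of a 1 m bone segment:
print(microstrain(0.001, 1.0))  # 1000.0 microstrain, i.e. a 0.1% change
```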
It has to be considered that bone strength is highly dependent on geometry and direction of the acting forces in relation to this geometry. The fracture load for axial forces of the tibia for example is about 50 to 60 times the body weight. The fracture load for forces perpendicular to the axial direction is about 10 times lower.
Different types of bones can have different modeling and remodeling thresholds. The modeling threshold of the tibia is about 1500 μStrain (0.15% change of length), while the modeling threshold for parts of the bones of the skull is quite different. Some parts of the skull such as the lower jaw (mandible) experience significant forces and strains during chewing, but the dome of the cranium must remain strong to protect the brain, even if it does not experience what would be seen as stimulating strains. In one study where the strains were measured in the skull of a live human, it was shown that strains in the skull never exceeded 1/10 of the peak strain in the tibia of the same individual, with similar differences in strain rates. This suggests that either bones of the skull are very sensitive to extremely low strains, or that the "genetic baseline" amount of bone in the skull in what is effectively disuse is not modified by the effects of loading. Whether the skulls of boxers are thicker than normal individuals is an intriguing question that has not been answered.
Since the physical, material properties of bone are not altered in the different bone types of the body, this difference in modeling threshold results in an increased bone mass and bone strength, thus in an increased safety factor (relation between fracture load and typical loads) for the skull compared to the tibia. A lower modeling threshold means that the same typical daily forces result in a ‘thicker’ and hence stronger bone at the skull.
Examples.
Typical examples of the influence of maximum forces and the resulting elastic deformations on bone growth or bone loss are extended flights of astronauts and cosmonauts, as well as patients with paraplegia due to an accident. Extended periods in free fall do not lead to loss of bone from the skull, providing support to the idea that its bone is maintained by a genetic not a mechanical influence (skull bone often increases in long term space flights, something thought to be related to fluid shifts within the body).
A paraplegic patient in a wheelchair who is using his arms but not his legs will suffer massive muscle and bone loss in only his legs, due to the lack of usage of the legs. However, the muscles and bones of the arms which are used every day will stay the same, or might even increase, depending on the usage.
The same effect can be observed for long flight astronauts or cosmonauts. While they still use their arms in an almost normal manner, due to the lack of gravity in space there are no maximum forces induced on the bones of the legs. On earth, long term players of racquet sports experience similar effects, where the dominant arm can have 30% more bone than the other due to the asymmetric applications of force.
Harold Frost applied the Mechanostat model not only to skeletal tissues, but also to fibrous, collagenous connective tissues, such as ligaments, tendons, and fascia. He described their adaptational responsiveness to strain in his "stretch-hypertrophy rule":
"Intermittent stretch causes collagenous tissues to hypertrophy until the resulting increase in strength reduces elongation in tension to some minimum level".
Similar to the responsiveness of bony tissues, this adaptational response occurs only if the mechanical strain exceeds a certain threshold value. Harold Frost proposed that for dense, collagenous connective tissues, the related threshold value is around 4% strain elongation.
Literature.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = \\frac{\\Delta l}{l}"
}
]
| https://en.wikipedia.org/wiki?curid=13955557 |
1395967 | Signaling game | Game class in game theory
In game theory, a signaling game is a simple type of a dynamic Bayesian game.
The essence of a signalling game is that one player takes an action, the signal, to convey information to another player, where sending the signal is more costly if they are conveying false information. A manufacturer, for example, might provide a warranty for its product in order to signal to consumers that its product is unlikely to break down. The classic example is of a worker who acquires a college degree not because it increases their skill, but because it conveys their ability to employers.
A simple signalling game would have two players, the sender and the receiver. The sender has one of two types that might be called "desirable" and "undesirable" with different payoff functions, where the receiver knows the probability of each type but not which one this particular sender has. The receiver has just one possible type.
The sender moves first, choosing an action called the "signal" or "message" (though the term "message" is more often used in non-signalling "cheap talk" games where sending messages is costless). The receiver moves second, after observing the signal.
The two players receive payoffs dependent on the sender's type, the message chosen by the sender and the action chosen by the receiver.
The tension in the game is that the sender wants to persuade the receiver that they have the desirable type, and they will try to choose a signal to do that. Whether this succeeds depends on whether the undesirable type would send the same signal, and how the receiver interprets the signal.
Perfect Bayesian equilibrium.
The equilibrium concept that is relevant for signaling games is the "perfect Bayesian equilibrium", a refinement of Bayesian Nash equilibrium.
Nature chooses the sender to have type formula_0 with probability formula_1. The sender then chooses the probability with which to take signalling action formula_2, which can be written as formula_3 for each possible formula_4 The receiver observes the signal formula_5 but not formula_6, and chooses the probability with which to take response action formula_7, which can be written as formula_8 for each possible formula_9 The sender's payoff is formula_10 and the receiver's is formula_11
A perfect Bayesian equilibrium is a combination of beliefs and strategies for each player. Both players believe that the other will follow the strategies specified in the equilibrium, as in simple Nash equilibrium, unless they observe something that has probability zero in the equilibrium. The receiver's beliefs also include a probability distribution formula_12 representing the probability put on the sender having type formula_6 if the receiver observes signal formula_13. The receiver's strategy is a choice of formula_14 The sender's strategy is a choice of formula_15. These beliefs and strategies must satisfy certain conditions: each type of sender's strategy must maximize that type's expected payoff given the receiver's strategy; the receiver's response to each signal must maximize the receiver's expected payoff given the receiver's beliefs about the sender's type; and those beliefs must be derived from the sender's strategy using Bayes' rule at every signal that is sent with positive probability in equilibrium.
The kinds of perfect Bayesian equilibria that may arise can be divided in three different categories: pooling equilibria, separating equilibria and semi-separating. A given game may or may not have more than one equilibrium.
If there are more types of senders than there are messages, the equilibrium can never be a separating equilibrium (but may be semi-separating).
There are also "hybrid equilibria", in which the sender randomizes between pooling and separating.
Examples.
Reputation game.
In this game, the sender and the receiver are firms. The sender is an incumbent firm and the receiver is an entrant firm.
The payoffs are given by the table at the right. It is assumed that:
We now look for perfect Bayesian equilibria. It is convenient to differentiate between separating equilibria and pooling equilibria.
Summary:
Education game.
Michael Spence's 1973 paper on education as a signal of ability is the start of the economic analysis of signalling. In this game, the senders are workers and the receivers are employers. The example below has two types of workers and a continuous signal level.
The players are a worker and two firms. The worker chooses an education level formula_16 the signal, after which the firms simultaneously offer him a wage formula_17 and formula_18 and he accepts one or the other. The worker's type, known only to himself, is either high ability with formula_19 or low ability with formula_20 each type having probability 1/2. The high-ability worker's payoff is formula_21 and the low-ability's is formula_22 A firm that hires the worker at wage formula_23 has payoff formula_24 and the other firm has payoff 0.
In this game, the firms compete the wage down to where it equals the expected ability, so if there is no signal possible, the result would be formula_25 This will also be the wage in a pooling equilibrium, one where both types of worker choose the same signal, so the firms are left using their prior belief of .5 for the probability he has High ability. In a separating equilibrium, the wage will be 0 for the signal level the Low type chooses and 10 for the high type's signal. There are many equilibria, both pooling and separating, depending on expectations.
In a separating equilibrium, the low type chooses formula_26 The wages will be formula_27 and formula_28 for some critical level formula_29 that signals high ability. For the low type to choose formula_30 requires that formula_31 so formula_32 and we can conclude that formula_33 For the high type to choose formula_34 requires that formula_35 so formula_36 and we can conclude that formula_37 Thus, any value of formula_29 between 5 and 10 can support an equilibrium. Perfect Bayesian equilibrium requires an out-of-equilibrium belief to be specified too, for all the other possible levels of formula_38 besides 0 and formula_39 levels which are "impossible" in equilibrium since neither type plays them. These beliefs must be such that neither player would want to deviate from his equilibrium strategy 0 or formula_29 to a different formula_40 A convenient belief is that formula_41 if formula_42 another, more realistic, belief that would support an equilibrium is formula_43 if formula_44 and formula_45 if formula_46. There is a continuum of equilibria, for each possible level of formula_47 One equilibrium, for example, is
formula_48
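The incentive constraints of the separating equilibrium are easy to verify numerically. The sketch below uses the payoffs U_H = w - s and U_L = w - 2s and the wages 0 and 10 from the text, and confirms that exactly the threshold levels s* between 5 and 10 separate the types:

```python
def u_high(w, s):
    """High-ability worker's payoff: wage minus education cost."""
    return w - s

def u_low(w, s):
    """Low-ability worker's payoff: education is twice as costly."""
    return w - 2 * s

def is_separating(s_star, w_low=0, w_high=10):
    """Check both incentive constraints for a candidate threshold s*."""
    low_ok = u_low(w_low, 0) >= u_low(w_high, s_star)     # low type prefers s = 0
    high_ok = u_high(w_high, s_star) >= u_high(w_low, 0)  # high type prefers s = s*
    return low_ok and high_ok

print([s for s in range(0, 13) if is_separating(s)])  # [5, 6, 7, 8, 9, 10]
```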
In a pooling equilibrium, both types choose the same formula_40 One pooling equilibrium is for both types to choose formula_49 no education, with the out-of-equilibrium belief formula_50 In that case, the wage will be the expected ability of 5, and neither type of worker will deviate to a higher education level because the firms would not think that told them anything about the worker's type.
The most surprising result is that there are also pooling equilibria with formula_51 Suppose we specify the out-of-equilibrium belief to be formula_52 Then the wage will be 5 for a worker with formula_53 but 0 for a worker with signal formula_54 The low type compares the payoffs formula_55 to formula_56 and if formula_57 he is willing to follow his equilibrium strategy of formula_58 The high type will choose formula_59 a fortiori. Thus, there is another continuum of equilibria, with values of formula_60 in [0, 2.5].
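This pooling condition can also be checked numerically. In the sketch below, the pooling wage of 5 and the deviation wage of 0 follow from the beliefs specified above; the low type stays at the pooling signal s' exactly when 5 - 2s' is at least 0:

```python
def low_type_stays(s_prime):
    """Low type compares the pooling payoff (wage 5, cost 2*s') with
    deviating to s = 0, which earns wage 0 under the specified belief."""
    return (5 - 2 * s_prime) >= 0

# The pooling level s' is supportable exactly for s' in [0, 2.5]:
print([s / 2 for s in range(0, 7) if low_type_stays(s / 2)])
# [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
```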
In the signalling model of education, expectations are crucial. If, as in the separating equilibrium, employers expect that high-ability people will acquire a certain level of education and low-ability ones will not, we get the main insight: that if people cannot communicate their ability directly, they will acquire educations even if it does not increase productivity, just to demonstrate ability. Or, in the pooling equilibrium with formula_49 if employers do not think education signals anything, we can get the outcome that nobody becomes educated. Or, in the pooling equilibrium with formula_61 everyone acquires education that is completely useless, not even showing who has high ability, out of fear that if they deviate and do not acquire education, employers will think they have low ability.
Beer-Quiche game.
The Beer-Quiche game of Cho and Kreps draws on the stereotype of quiche eaters being less masculine. In this game, an individual B is considering whether to duel with another individual A. B knows that A is either a "wimp" or is "surly" but not which. B would prefer a duel if A is a "wimp" but not if A is "surly". Player A, regardless of type, wants to avoid a duel. Before making the decision, B has the opportunity to see whether A chooses to have beer or quiche for breakfast. Both players know that "wimps" prefer quiche while "surlies" prefer beer. The point of the game is to analyze the choice of breakfast by each kind of A. This has become a standard example of a signaling game; see Cho and Kreps (1987) for more details.
Applications of signaling games.
Signaling games describe situations where one player has information the other player does not have. These situations of asymmetric information are very common in economics and behavioral biology.
Philosophy.
The first signaling game was the Lewis signaling game, which occurred in David K. Lewis' Ph.D. dissertation (and later book) "Convention". Replying to W.V.O. Quine, Lewis attempts to develop a theory of convention and meaning using signaling games. In his most extreme comments, he suggests that understanding the equilibrium properties of the appropriate signaling game captures all there is to know about meaning:
I have now described the character of a case of signaling without mentioning the meaning of the signals: that two lanterns meant that the redcoats were coming by sea, or whatever. But nothing important seems to have been left unsaid, so what has been said must somehow imply that the signals have their meanings.
The use of signaling games has been continued in the philosophical literature. Others have used evolutionary models of signaling games to describe the emergence of language. Work on the emergence of language in simple signaling games includes models by Huttegger, Grim, "et al.", Skyrms, and Zollman. Harms, and Huttegger, have attempted to extend the study to include the distinction between normative and descriptive language.
Economics.
The first application of signaling games to economic problems was Michael Spence's Education game. A second application was the Reputation game.
Biology.
Valuable advances have been made by applying signaling games to a number of biological questions. Most notable is Alan Grafen's (1990) handicap model of mate attraction displays. The antlers of stags, the elaborate plumage of peacocks and birds-of-paradise, and the song of the nightingale are all such signals. Grafen's analysis of biological signaling is formally similar to the classic monograph on economic market signaling by Michael Spence. More recently, a series of papers by Getty shows that Grafen's analysis, like that of Spence, is based on the critical simplifying assumption that signalers trade off costs for benefits in an additive fashion, the way humans invest money to increase income in the same currency. This assumption that costs and benefits trade off in an additive fashion might be valid for some biological signaling systems, but is not valid for multiplicative tradeoffs, such as the survival cost – reproduction benefit tradeoff that is assumed to mediate the evolution of sexually selected signals.
Charles Godfray (1991) modeled the begging behavior of nestling birds as a signaling game. The nestlings' begging not only informs the parents that the nestlings are hungry, but also attracts predators to the nest. The parents and nestlings are in conflict: the nestlings benefit if the parents work harder to feed them than the parents' optimal level of investment, while the parents are trading off investment in the current nestlings against investment in future offspring.
Pursuit deterrent signals have been modeled as signaling games. Thomson's gazelles are known sometimes to perform a 'stott', a jump into the air of several feet with the white tail showing, when they detect a predator. Alcock and others have suggested that this action is a signal of the gazelle's speed to the predator. This action successfully distinguishes types because it would be impossible or too costly for a sick creature to perform, and hence the predator is deterred from chasing a stotting gazelle because it is obviously very agile and would prove hard to catch.
The concept of information asymmetry in molecular biology has long been apparent. Although molecules are not rational agents, simulations have shown that through replication, selection, and genetic drift, molecules can behave according to signaling game dynamics. Such models have been proposed to explain, for example, the emergence of the genetic code from an RNA and amino acid world.
Costly versus cost-free signaling.
One of the major uses of signaling games both in economics and biology has been to determine under what conditions honest signaling can be an equilibrium of the game. That is, under what conditions can we expect rational people or animals subject to natural selection to reveal information about their types?
If both parties have coinciding interests, that is, they both prefer the same outcomes in all situations, then honesty is an equilibrium. (Although in most of these cases non-communicative equilibria exist as well.) However, if the parties' interests do not perfectly overlap, then the maintenance of informative signaling systems raises an important problem.
Consider a circumstance described by John Maynard Smith regarding transfer between related individuals. Suppose a signaler can be either starving or just hungry, and they can signal that fact to another individual who has food. Suppose that they would like more food regardless of their state, but that the individual with food only wants to give them the food if they are starving. While both players have identical interests when the signaler is starving, they have opposing interests when the signaler is only hungry. When they are only hungry, they have an incentive to lie about their need in order to obtain the food. And if the signaler regularly lies, then the receiver should ignore the signal and do whatever they think is best.
Determining how signaling is stable in these situations has concerned both economists and biologists, and both have independently suggested that signal cost might play a role. If sending one signal is costly, it might only be worth the cost for the starving person to signal. The analysis of when costs are necessary to sustain honesty has been a significant area of research in both these fields.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " t "
},
{
"math_id": 1,
"text": "p "
},
{
"math_id": 2,
"text": "m "
},
{
"math_id": 3,
"text": "Prob(m|t)"
},
{
"math_id": 4,
"text": "t. "
},
{
"math_id": 5,
"text": "m "
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": " a "
},
{
"math_id": 8,
"text": "Prob(a|m)"
},
{
"math_id": 9,
"text": "m. "
},
{
"math_id": 10,
"text": "u(a, m, t)"
},
{
"math_id": 11,
"text": "v(a,t)."
},
{
"math_id": 12,
"text": " b(t|m) "
},
{
"math_id": 13,
"text": "m"
},
{
"math_id": 14,
"text": " Prob(a|m)."
},
{
"math_id": 15,
"text": " Prob(m|t)"
},
{
"math_id": 16,
"text": "s,"
},
{
"math_id": 17,
"text": "w_1"
},
{
"math_id": 18,
"text": "w_2"
},
{
"math_id": 19,
"text": "a=10"
},
{
"math_id": 20,
"text": "a = 0,"
},
{
"math_id": 21,
"text": "U_H= w - s"
},
{
"math_id": 22,
"text": "U_{L}= w - 2s."
},
{
"math_id": 23,
"text": "w"
},
{
"math_id": 24,
"text": "a-w"
},
{
"math_id": 25,
"text": "w_1=w_2 = .5(10) + .5 (0) =5."
},
{
"math_id": 26,
"text": "s=0."
},
{
"math_id": 27,
"text": "w(s=0)=0"
},
{
"math_id": 28,
"text": "w(s=s^*) =10"
},
{
"math_id": 29,
"text": "s^*"
},
{
"math_id": 30,
"text": "s = 0"
},
{
"math_id": 31,
"text": "U_L (s = 0) \\geq U_L(s=s^*),"
},
{
"math_id": 32,
"text": " 0 \\geq 10-2s^*"
},
{
"math_id": 33,
"text": "s^* \\geq 5."
},
{
"math_id": 34,
"text": "s = s^*"
},
{
"math_id": 35,
"text": "U_H (s = s^*) \\geq U_H(s=0),"
},
{
"math_id": 36,
"text": "10-s \\geq 0"
},
{
"math_id": 37,
"text": "s^* \\leq 10."
},
{
"math_id": 38,
"text": "s"
},
{
"math_id": 39,
"text": "s^*,"
},
{
"math_id": 40,
"text": "s."
},
{
"math_id": 41,
"text": "Prob(a = High) =0"
},
{
"math_id": 42,
"text": "s \\neq s^*;"
},
{
"math_id": 43,
"text": "Prob(a = High) = 0"
},
{
"math_id": 44,
"text": "s < s^*"
},
{
"math_id": 45,
"text": "Prob(a = High) = 1"
},
{
"math_id": 46,
"text": "s \\geq s^*"
},
{
"math_id": 47,
"text": "s^*."
},
{
"math_id": 48,
"text": "s|Low = 0, s|High= 7, w|(s=7) = 10, w|(s \\neq 7) = 0, Prob(a=High|s=7) = 1, Prob(a=High|s \\neq 7) =0. "
},
{
"math_id": 49,
"text": "s=0,"
},
{
"math_id": 50,
"text": "Prob(a=High|s>0) = .5."
},
{
"math_id": 51,
"text": "s = s'>0."
},
{
"math_id": 52,
"text": "Prob(a=High|s< s') = 0."
},
{
"math_id": 53,
"text": "s= s',"
},
{
"math_id": 54,
"text": "s = 0."
},
{
"math_id": 55,
"text": "U_L(s=s') = 5 - 2s'"
},
{
"math_id": 56,
"text": "U_L(s=0) =0,"
},
{
"math_id": 57,
"text": "s'\\leq 2.5,"
},
{
"math_id": 58,
"text": "s=s'."
},
{
"math_id": 59,
"text": "s=s'"
},
{
"math_id": 60,
"text": "s'"
},
{
"math_id": 61,
"text": "s>0,"
}
]
| https://en.wikipedia.org/wiki?curid=1395967 |
13961210 | Stack resource policy | Resource allocation policy used in real-time computing
The Stack Resource Policy (SRP) is a resource allocation policy used in real-time computing, used for accessing shared resources when using earliest deadline first scheduling. It was defined by T. P. Baker. SRP is not the same as the Priority ceiling protocol which is for fixed priority tasks (FP).
Function.
Each task is assigned a preemption level based upon the following formula, where formula_0 denotes the deadline of task formula_1 and formula_2 denotes the preemption level of task formula_1:
formula_3
Each resource R has a current ceiling formula_4 that represents the maximum of the preemption levels of the tasks that may be blocked, when there are formula_5 units of formula_6 available and formula_7 is the maximum units of formula_6 that formula_8 may require at any one time. formula_4 is assigned as follows:
formula_9
There is also a system ceiling formula_10 which is the maximum of all current ceilings of the resources.
formula_11
Any task formula_8 that wishes to preempt the system must first satisfy the following constraint:
formula_12
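For the single-unit case, this preemption test is straightforward to sketch in code. The following is a simplified illustration, not an actual scheduler implementation; resource ceilings are assumed to be precomputed as the maximum preemption level over the tasks that use each resource:

```python
def system_ceiling(locked_resource_ceilings):
    """System ceiling: the maximum ceiling over all currently locked
    resources (0 if nothing is locked)."""
    return max(locked_resource_ceilings, default=0)

def may_preempt(task_preemption_level, locked_resource_ceilings):
    """A task may preempt only if its preemption level is strictly
    higher than the current system ceiling."""
    return task_preemption_level > system_ceiling(locked_resource_ceilings)

# Two resources are currently locked, with ceilings 3 and 5
# (a shorter deadline corresponds to a higher preemption level):
print(may_preempt(6, [3, 5]))  # True: level 6 exceeds the ceiling of 5
print(may_preempt(4, [3, 5]))  # False: the task could block on the
                               # ceiling-5 resource
```

The strict inequality mirrors the constraint above: a task is admitted only when it cannot be blocked by any resource already in use.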
This can be refined for Operating System implementation (as in MarteOS) by removing the multi-unit resources and defining the stack resource policy as follows
Relevancy.
The 2011 book "Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications" by Giorgio C. Buttazzo featured a dedicated section to reviewing SRP from Baker 1991 work.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " D(T_i) "
},
{
"math_id": 1,
"text": " i "
},
{
"math_id": 2,
"text": " \\pi_i(T_i) "
},
{
"math_id": 3,
"text": " D(T_i) < D(T_j) \\iff \\pi_i(T_i) > \\pi_i(T_j) "
},
{
"math_id": 4,
"text": " C_R(V_R) "
},
{
"math_id": 5,
"text": " V "
},
{
"math_id": 6,
"text": " R "
},
{
"math_id": 7,
"text": " \\mu_R(J) "
},
{
"math_id": 8,
"text": " T_i "
},
{
"math_id": 9,
"text": " C_R(V_R) = max(\\{0\\} \\cup \\{\\pi(J) | V_R < \\mu_R(J)\\}) "
},
{
"math_id": 10,
"text": " \\pi' "
},
{
"math_id": 11,
"text": " \\pi' = max(\\{C_R(i) | i = 1,...,m\\} \\cup \\{\\pi(J_c)\\}) "
},
{
"math_id": 12,
"text": " \\pi' < \\pi_i(T_i) "
}
]
| https://en.wikipedia.org/wiki?curid=13961210 |
1396397 | Bicubic interpolation | Extension of cubic spline interpolation
In mathematics, bicubic interpolation is an extension of cubic spline interpolation (a method of applying cubic interpolation to a data set) for interpolating data points on a two-dimensional regular grid. The interpolated surface (meaning the kernel shape, not the image) is smoother than the corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the cubic convolution algorithm.
In image processing, bicubic interpolation is often chosen over bilinear or nearest-neighbor interpolation in image resampling, when speed is not an issue. In contrast to bilinear interpolation, which only takes 4 pixels (2×2) into account, bicubic interpolation considers 16 pixels (4×4). Images resampled with bicubic interpolation can have different interpolation artifacts, depending on the b and c values chosen.
Computation.
Suppose the function values formula_0 and the derivatives formula_1, formula_2 and formula_3 are known at the four corners formula_4, formula_5, formula_6, and formula_7 of the unit square. The interpolated surface can then be written as
formula_8
The interpolation problem consists of determining the 16 coefficients formula_9.
Matching formula_10 with the function values yields four equations:
formula_11
formula_12
formula_13
formula_14
Likewise, eight equations for the derivatives in the formula_15 and the formula_16 directions:
formula_17
formula_18
formula_19
formula_20
formula_21
formula_22
formula_23
formula_24
And four equations for the formula_25 mixed partial derivative:
formula_26
formula_27
formula_28
formula_29
The expressions above have used the following identities:
formula_30
formula_31
formula_32
This procedure yields a surface formula_10 on the unit square formula_33 that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries.
Grouping the unknown parameters formula_9 in a vector
formula_34
and letting
formula_35
the above system of equations can be rewritten as the matrix equation formula_36.
Inverting the matrix gives the more useful linear equation formula_37, where
formula_38
which allows formula_39 to be calculated quickly and easily.
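Rather than transcribing the 16×16 matrix by hand, it can be generated from the corner constraints and inverted numerically. The following NumPy sketch (the variable names are mine, not from the source) orders the unknowns and the data vector as in the text:

```python
import numpy as np

def basis(i, t, order):
    """d^order/dt^order of t**i evaluated at t, for order in {0, 1}."""
    if order == 0:
        return float(t) ** i
    return float(i) * t ** (i - 1) if i >= 1 else 0.0

corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
rows = []
# Row blocks follow the data vector: f, f_x, f_y, f_xy at the four corners.
for dx, dy in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    for x, y in corners:
        # Unknowns ordered a00, a10, a20, a30, a01, a11, ... (index 4j + i).
        rows.append([basis(i, x, dx) * basis(j, y, dy)
                     for j in range(4) for i in range(4)])
A = np.array(rows)
A_inv = np.linalg.inv(A)
```

The rows of `A_inv` match the matrix displayed above; for example, its third row gives the familiar one-dimensional Hermite coefficients −3, 3, −2, −1.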
There can be another concise matrix form for 16 coefficients:
formula_40
or
formula_41
where
formula_42
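The concise form lends itself to a short implementation: compute the coefficient matrix with two constant 4×4 factors, then evaluate the polynomial. A sketch using NumPy (the helper name `bicubic_patch` is my own):

```python
import numpy as np

# Left and right factors from the concise coefficient formula above.
L = np.array([[ 1,  0,  0,  0],
              [ 0,  0,  1,  0],
              [-3,  3, -2, -1],
              [ 2, -2,  1,  1]], dtype=float)
R = np.array([[1, 0, -3,  2],
              [0, 0,  3, -2],
              [0, 1, -2,  1],
              [0, 0, -1,  1]], dtype=float)

def bicubic_patch(F):
    """F holds the corner data laid out as in the text:
    [[f(0,0),  f(0,1),  fy(0,0),  fy(0,1)],
     [f(1,0),  f(1,1),  fy(1,0),  fy(1,1)],
     [fx(0,0), fx(0,1), fxy(0,0), fxy(0,1)],
     [fx(1,0), fx(1,1), fxy(1,0), fxy(1,1)]].
    Returns a callable p(x, y) on the unit square."""
    A = L @ F @ R
    def p(x, y):
        xv = np.array([1.0, x, x**2, x**3])
        yv = np.array([1.0, y, y**2, y**3])
        return float(xv @ A @ yv)
    return p
```

Because any polynomial that is cubic in each variable satisfies its own corner data, a patch built from samples of f(x, y) = x³y³ reproduces that function exactly on the unit square.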
Extension to rectilinear grids.
Often, applications call for bicubic interpolation using data on a rectilinear grid, rather than the unit square. In this case, the identities for formula_43 and formula_44 become
formula_45
formula_46
formula_47
where formula_48 is the formula_15 spacing of the cell containing the point formula_49, and similarly for formula_50.
In this case, the most practical approach to computing the coefficients formula_39 is to let
formula_51
then to solve formula_52 with formula_53 as before. Next, the normalized interpolating variables are computed as
formula_54
where formula_55 and formula_56 are the formula_15 and formula_16 coordinates of the grid points surrounding the point formula_49. Then, the interpolating surface becomes
formula_57
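A sketch of this evaluation step in Python (the function name is an assumption; the flat coefficient ordering follows the coefficient vector defined earlier, index 4j + i for a_ij):

```python
def eval_rect_cell(alpha, x, y, x0, x1, y0, y1):
    """Evaluate the bicubic surface on the cell [x0, x1] x [y0, y1] by
    normalizing (x, y) to the unit square first, as in the equations
    above. alpha is ordered a00, a10, a20, a30, a01, ... (index 4j + i)."""
    xb = (x - x0) / (x1 - x0)
    yb = (y - y0) / (y1 - y0)
    return sum(alpha[4 * j + i] * xb**i * yb**j
               for j in range(4) for i in range(4))
```

For instance, a coefficient vector with only a00 = 5 evaluates to the constant 5 anywhere in the cell, and one with only a11 = 1 evaluates to the product of the two normalized coordinates.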
Finding derivatives from function values.
If the derivatives are unknown, they are typically approximated from the function values at points neighbouring the corners of the unit square, e.g. using finite differences.
To find either of the single derivatives, formula_1 or formula_2, using that method, compute the slope between the two "surrounding" points along the appropriate axis. For example, to calculate formula_1 for one of the points, find formula_58 for the points to the left and right of the target point and calculate their slope, and similarly for formula_2.
To find the cross derivative formula_3, take the derivative in both axes, one at a time. For example, one can first use the formula_1 procedure to find the formula_15 derivatives of the points above and below the target point, then use the formula_2 procedure on those values (rather than, as usual, the values of formula_0 for those points) to obtain the value of formula_59 for the target point. (Or one can do it in the opposite direction, first calculating formula_2 and then formula_1 from those. The two give equivalent results.)
At the edges of the dataset, when one is missing some of the surrounding points, the missing points can be approximated by a number of methods. A simple and common method is to assume that the slope from the existing point to the target point continues without further change, and using this to calculate a hypothetical value for the missing point.
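A sketch of this finite-difference step using NumPy (`np.gradient` uses centered differences in the interior and one-sided differences at the edges, which loosely matches the "continue the slope" convention described above):

```python
import numpy as np

def grid_derivatives(f):
    """Estimate f_x, f_y, f_xy on a unit-spaced grid of function values
    f[i, j], where axis 0 plays the role of x and axis 1 of y."""
    fx, fy = np.gradient(f)        # centered differences along each axis
    fxy = np.gradient(fx, axis=1)  # differentiate f_x along the other axis
    return fx, fy, fxy
```

On data that is linear in each index (for example f[i, j] = i·j) these estimates are exact, including at the edges.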
Bicubic convolution algorithm.
Bicubic spline interpolation requires the solution of the linear system described above for each grid cell. An interpolator with similar properties can be obtained by applying a convolution with the following kernel in both dimensions:
formula_60
where formula_61 is usually set to −0.5 or −0.75. Note that formula_62 and formula_63 for all nonzero integers formula_64.
This approach was proposed by Keys, who showed that formula_65 produces third-order convergence with respect to the sampling interval of the original function.
If we use the matrix notation for the common case formula_66, we can express the equation in a more friendly manner:
formula_67
for formula_68 between 0 and 1 in one dimension. Note that one-dimensional cubic convolution interpolation requires 4 sample points: for each query point, two samples lie to its left and two to its right. These points are indexed from −1 to 2 in this text, and the distance from the point indexed 0 to the query point is denoted by formula_68 here.
For two dimensions, the interpolation is applied first in formula_15 and then again in formula_16:
formula_69
formula_70
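A sketch of the separable evaluation in Python (the function names are mine; the one-dimensional step is the a = −1/2 matrix form written out as a polynomial in t):

```python
def cubic_conv(t, fm1, f0, f1, f2):
    """Keys' cubic convolution for a = -1/2, expanded from the matrix
    form above; valid for 0 <= t <= 1, with samples at indices -1..2."""
    return 0.5 * (2.0 * f0
                  + t * (f1 - fm1)
                  + t * t * (2.0 * fm1 - 5.0 * f0 + 4.0 * f1 - f2)
                  + t ** 3 * (3.0 * (f0 - f1) + f2 - fm1))

def bicubic_conv(patch, tx, ty):
    """Separable 2-D pass over a 4x4 patch[row][col], rows indexing y
    from -1 to 2 and columns indexing x from -1 to 2: interpolate each
    row in x to obtain b_-1..b_2, then interpolate those values in y."""
    b = [cubic_conv(tx, *row) for row in patch]
    return cubic_conv(ty, *b)
```

As a sanity check, the kernel interpolates (p(0) = f0 and p(1) = f1) and reproduces linear data exactly, so a plane sampled on the 4×4 patch is recovered without error.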
Use in computer graphics.
The bicubic algorithm is frequently used for scaling images and video for display (see bitmap resampling). It preserves fine detail better than the common bilinear algorithm.
However, due to the negative lobes on the kernel, it causes overshoot (haloing). This can cause clipping and is an artifact (see also ringing artifacts), but it also increases acutance (apparent sharpness) and can be desirable.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "f_x"
},
{
"math_id": 2,
"text": "f_y"
},
{
"math_id": 3,
"text": "f_{xy}"
},
{
"math_id": 4,
"text": "(0,0)"
},
{
"math_id": 5,
"text": "(1,0)"
},
{
"math_id": 6,
"text": "(0,1)"
},
{
"math_id": 7,
"text": "(1,1)"
},
{
"math_id": 8,
"text": "p(x,y) = \\sum\\limits_{i=0}^3 \\sum_{j=0}^3 a_{ij} x^i y^j."
},
{
"math_id": 9,
"text": "a_{ij}"
},
{
"math_id": 10,
"text": "p(x,y)"
},
{
"math_id": 11,
"text": "f(0,0) = p(0,0) = a_{00},"
},
{
"math_id": 12,
"text": "f(1,0) = p(1,0) = a_{00} + a_{10} + a_{20} + a_{30},"
},
{
"math_id": 13,
"text": "f(0,1) = p(0,1) = a_{00} + a_{01} + a_{02} + a_{03},"
},
{
"math_id": 14,
"text": "f(1,1) = p(1,1) = \\textstyle \\sum\\limits_{i=0}^3 \\sum\\limits_{j=0}^3 a_{ij}."
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "f_x(0,0) = p_x(0,0) = a_{10},"
},
{
"math_id": 18,
"text": "f_x(1,0) = p_x(1,0) = a_{10} + 2a_{20} + 3a_{30},"
},
{
"math_id": 19,
"text": "f_x(0,1) = p_x(0,1) = a_{10} + a_{11} + a_{12} + a_{13},"
},
{
"math_id": 20,
"text": "f_x(1,1) = p_x(1,1) = \\textstyle \\sum\\limits_{i=1}^3 \\sum\\limits_{j=0}^3 a_{ij} i,"
},
{
"math_id": 21,
"text": "f_y(0,0) = p_y(0,0) = a_{01},"
},
{
"math_id": 22,
"text": "f_y(1,0) = p_y(1,0) = a_{01} + a_{11} + a_{21} + a_{31},"
},
{
"math_id": 23,
"text": "f_y(0,1) = p_y(0,1) = a_{01} + 2a_{02} + 3a_{03},"
},
{
"math_id": 24,
"text": "f_y(1,1) = p_y(1,1) = \\textstyle \\sum\\limits_{i=0}^3 \\sum\\limits_{j=1}^3 a_{ij} j."
},
{
"math_id": 25,
"text": "xy"
},
{
"math_id": 26,
"text": "f_{xy}(0,0) = p_{xy}(0,0) = a_{11},"
},
{
"math_id": 27,
"text": "f_{xy}(1,0) = p_{xy}(1,0) = a_{11} + 2a_{21} + 3a_{31},"
},
{
"math_id": 28,
"text": "f_{xy}(0,1) = p_{xy}(0,1) = a_{11} + 2a_{12} + 3a_{13},"
},
{
"math_id": 29,
"text": "f_{xy}(1,1) = p_{xy}(1,1) = \\textstyle \\sum\\limits_{i=1}^3 \\sum\\limits_{j=1}^3 a_{ij} i j."
},
{
"math_id": 30,
"text": "p_x(x,y) = \\textstyle \\sum\\limits_{i=1}^3 \\sum\\limits_{j=0}^3 a_{ij} i x^{i-1} y^j,"
},
{
"math_id": 31,
"text": "p_y(x,y) = \\textstyle \\sum\\limits_{i=0}^3 \\sum\\limits_{j=1}^3 a_{ij} x^i j y^{j-1},"
},
{
"math_id": 32,
"text": "p_{xy}(x,y) = \\textstyle \\sum\\limits_{i=1}^3 \\sum\\limits_{j=1}^3 a_{ij} i x^{i-1} j y^{j-1}."
},
{
"math_id": 33,
"text": "[0,1] \\times [0,1]"
},
{
"math_id": 34,
"text": "\\alpha=\\left[\\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\\end{smallmatrix}\\right]^T"
},
{
"math_id": 35,
"text": "x=\\left[\\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_x(0,0)&f_x(1,0)&f_x(0,1)&f_x(1,1)&f_y(0,0)&f_y(1,0)&f_y(0,1)&f_y(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\\end{smallmatrix}\\right]^T,"
},
{
"math_id": 36,
"text": "A\\alpha=x"
},
{
"math_id": 37,
"text": "A^{-1}x=\\alpha"
},
{
"math_id": 38,
"text": "A^{-1}=\\left[\\begin{smallmatrix}\\begin{array}{rrrrrrrrrrrrrrrr}\n 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n -3 & 3 & 0 & 0 & -2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 2 & -2 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 3 & 0 & 0 & -2 & -1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & 1 & 1 & 0 & 0 \\\\\n -3 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & -3 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & -1 & 0 \\\\\n 9 & -9 & -9 & 9 & 6 & 3 & -6 & -3 & 6 & -6 & 3 & -3 & 4 & 2 & 2 & 1 \\\\\n -6 & 6 & 6 & -6 & -3 & -3 & 3 & 3 & -4 & 4 & -2 & 2 & -2 & -2 & -1 & -1 \\\\\n 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 2 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\\\\n -6 & 6 & 6 & -6 & -4 & -2 & 4 & 2 & -3 & 3 & -3 & 3 & -2 & -1 & -2 & -1 \\\\\n 4 & -4 & -4 & 4 & 2 & 2 & -2 & -2 & 2 & -2 & 2 & -2 & 1 & 1 & 1 & 1\n\\end{array}\\end{smallmatrix}\\right],"
},
{
"math_id": 39,
"text": "\\alpha"
},
{
"math_id": 40,
"text": "\\begin{bmatrix}\nf(0,0)&f(0,1)&f_y (0,0)&f_y (0,1)\\\\f(1,0)&f(1,1)&f_y (1,0)&f_y (1,1)\\\\f_x (0,0)&f_x (0,1)&f_{xy} (0,0)&f_{xy} (0,1)\\\\f_x (1,0)&f_x (1,1)&f_{xy} (1,0)&f_{xy} (1,1)\n\\end{bmatrix} =\n\\begin{bmatrix}\n1&0&0&0\\\\1&1&1&1\\\\0&1&0&0\\\\0&1&2&3\n\\end{bmatrix}\n\\begin{bmatrix}\na_{00}&a_{01}&a_{02}&a_{03}\\\\a_{10}&a_{11}&a_{12}&a_{13}\\\\a_{20}&a_{21}&a_{22}&a_{23}\\\\a_{30}&a_{31}&a_{32}&a_{33}\n\\end{bmatrix}\n\\begin{bmatrix}\n1&1&0&0\\\\0&1&1&1\\\\0&1&0&2\\\\0&1&0&3\n\\end{bmatrix},"
},
{
"math_id": 41,
"text": "\n\\begin{bmatrix}\na_{00}&a_{01}&a_{02}&a_{03}\\\\a_{10}&a_{11}&a_{12}&a_{13}\\\\a_{20}&a_{21}&a_{22}&a_{23}\\\\a_{30}&a_{31}&a_{32}&a_{33}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1&0&0&0\\\\0&0&1&0\\\\-3&3&-2&-1\\\\2&-2&1&1\n\\end{bmatrix}\n\\begin{bmatrix}\nf(0,0)&f(0,1)&f_y (0,0)&f_y (0,1)\\\\f(1,0)&f(1,1)&f_y (1,0)&f_y (1,1)\\\\f_x (0,0)&f_x (0,1)&f_{xy} (0,0)&f_{xy} (0,1)\\\\f_x (1,0)&f_x (1,1)&f_{xy} (1,0)&f_{xy} (1,1)\n\\end{bmatrix}\n\\begin{bmatrix}\n1&0&-3&2\\\\0&0&3&-2\\\\0&1&-2&1\\\\0&0&-1&1\n\\end{bmatrix},\n"
},
{
"math_id": 42,
"text": "p(x,y)=\\begin{bmatrix}1\n&x&x^2&x^3\\end{bmatrix}\n\\begin{bmatrix}\na_{00}&a_{01}&a_{02}&a_{03}\\\\a_{10}&a_{11}&a_{12}&a_{13}\\\\a_{20}&a_{21}&a_{22}&a_{23}\\\\a_{30}&a_{31}&a_{32}&a_{33}\n\\end{bmatrix}\n\\begin{bmatrix}1\\\\y\\\\y^2\\\\y^3\\end{bmatrix}."
},
{
"math_id": 43,
"text": "p_x, p_y,"
},
{
"math_id": 44,
"text": "p_{xy}"
},
{
"math_id": 45,
"text": "p_x(x,y) = \\textstyle \\sum\\limits_{i=1}^3 \\sum\\limits_{j=0}^3 \\frac{a_{ij} i x^{i-1} y^j}{\\Delta x},"
},
{
"math_id": 46,
"text": "p_y(x,y) = \\textstyle \\sum\\limits_{i=0}^3 \\sum\\limits_{j=1}^3 \\frac{a_{ij} x^i j y^{j-1}}{\\Delta y},"
},
{
"math_id": 47,
"text": "p_{xy}(x,y) = \\textstyle \\sum\\limits_{i=1}^3 \\sum\\limits_{j=1}^3 \\frac{a_{ij} i x^{i-1} j y^{j-1}}{\\Delta x \\Delta y},"
},
{
"math_id": 48,
"text": "\\Delta x"
},
{
"math_id": 49,
"text": "(x,y)"
},
{
"math_id": 50,
"text": "\\Delta y"
},
{
"math_id": 51,
"text": "x=\\left[\\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&\\Delta x f_x(0,0)&\\Delta xf_x(1,0)&\\Delta x f_x(0,1)&\\Delta x f_x(1,1)&\\Delta y f_y(0,0)&\\Delta y f_y(1,0)&\\Delta y f_y(0,1)&\\Delta y f_y(1,1)&\\Delta x \\Delta y f_{xy}(0,0)&\\Delta x \\Delta y f_{xy}(1,0)&\\Delta x \\Delta y f_{xy}(0,1)&\\Delta x \\Delta y f_{xy}(1,1)\\end{smallmatrix}\\right]^T,"
},
{
"math_id": 52,
"text": "\\alpha=A^{-1}x"
},
{
"math_id": 53,
"text": "A"
},
{
"math_id": 54,
"text": "\\begin{align}\n\\overline{x} &= \\frac{x-x_0}{x_1-x_0}, \\\\\n\\overline{y} &= \\frac{y-y_0}{y_1-y_0}\n\\end{align}"
},
{
"math_id": 55,
"text": "x_0, x_1, y_0,"
},
{
"math_id": 56,
"text": "y_1"
},
{
"math_id": 57,
"text": "p(x,y) = \\sum\\limits_{i=0}^3 \\sum_{j=0}^3 a_{ij} {\\overline{x}}^i {\\overline{y}}^j."
},
{
"math_id": 58,
"text": "f(x,y)"
},
{
"math_id": 59,
"text": "f_{xy}(x,y)"
},
{
"math_id": 60,
"text": "W(x) = \n\\begin{cases}\n (a+2)|x|^3-(a+3)|x|^2+1 & \\text{for } |x| \\leq 1, \\\\\n a|x|^3-5a|x|^2+8a|x|-4a & \\text{for } 1 < |x| < 2, \\\\\n 0 & \\text{otherwise},\n\\end{cases}\n"
},
{
"math_id": 61,
"text": "a"
},
{
"math_id": 62,
"text": "W(0)=1"
},
{
"math_id": 63,
"text": "W(n)=0"
},
{
"math_id": 64,
"text": "n"
},
{
"math_id": 65,
"text": "a=-0.5"
},
{
"math_id": 66,
"text": "a = -0.5"
},
{
"math_id": 67,
"text": "p(t) =\n\\tfrac{1}{2}\n\\begin{bmatrix}\n1 & t & t^2 & t^3\n\\end{bmatrix}\n\n\\begin{bmatrix}\n0 & 2 & 0 & 0 \\\\\n-1 & 0 & 1 & 0 \\\\\n2 & -5 & 4 & -1 \\\\\n-1 & 3 & -3 & 1\n\\end{bmatrix}\n\n\\begin{bmatrix}\nf_{-1} \\\\\nf_0 \\\\\nf_1 \\\\\nf_2\n\\end{bmatrix}\n"
},
{
"math_id": 68,
"text": "t"
},
{
"math_id": 69,
"text": "\\begin{align}\nb_{-1}&= p(t_x, f_{(-1,-1)}, f_{(0,-1)}, f_{(1,-1)}, f_{(2,-1)}), \\\\[1ex]\nb_{0} &= p(t_x, f_{(-1,0)}, f_{(0,0)}, f_{(1,0)}, f_{(2,0)}), \\\\[1ex]\nb_{1} &= p(t_x, f_{(-1,1)}, f_{(0,1)}, f_{(1,1)}, f_{(2,1)}), \\\\[1ex]\nb_{2} &= p(t_x, f_{(-1,2)}, f_{(0,2)}, f_{(1,2)}, f_{(2,2)}),\n\\end{align}"
},
{
"math_id": 70,
"text": "p(x,y) = p(t_y, b_{-1}, b_{0}, b_{1}, b_{2})."
}
]
| https://en.wikipedia.org/wiki?curid=1396397 |