14661319
Mannosyl-3-phosphoglycerate synthase
Class of enzymes In enzymology, a mannosyl-3-phosphoglycerate synthase (EC 2.4.1.217) is an enzyme that catalyzes the chemical reaction GDP-mannose + 3-phospho-D-glycerate formula_0 GDP + 2-(alpha-D-mannosyl)-3-phosphoglycerate Thus, the two substrates of this enzyme are GDP-mannose and 3-phospho-D-glycerate, whereas its two products are GDP and 2-(alpha-D-mannosyl)-3-phosphoglycerate. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-mannose:3-phosphoglycerate 3-alpha-D-mannosyltransferase. This enzyme is also called MPG synthase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661319
14661336
Mannotetraose 2-alpha-N-acetylglucosaminyltransferase
Class of enzymes In enzymology, a mannotetraose 2-alpha-N-acetylglucosaminyltransferase (EC 2.4.1.138) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + 1,3-alpha-D-mannosyl-1,2-alpha-D-mannosyl-1,2-alpha-D-mannosyl-D-mannose formula_0 UDP + 1,3-alpha-D-mannosyl-1,2-(N-acetyl-alpha-D-glucosaminyl-alpha-D-mannosyl)-1,2-alpha-D-mannosyl-D-mannose The two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and 1,3-alpha-D-mannosyl-1,2-alpha-D-mannosyl-1,2-alpha-D-mannosyl-D-mannose, whereas its two products are UDP and 1,3-alpha-D-mannosyl-1,2-(N-acetyl-alpha-D-glucosaminyl-alpha-D-mannosyl)-1,2-alpha-D-mannosyl-D-mannose. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:mannotetraose alpha-N-acetyl-D-glucosaminyltransferase. Other names in common use include alpha-N-acetylglucosaminyltransferase and uridine diphosphoacetylglucosamine mannoside alpha1->2-acetylglucosaminyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661336
14661351
Methyl-ONN-azoxymethanol beta-D-glucosyltransferase
Class of enzymes In enzymology, a methyl-ONN-azoxymethanol beta-D-glucosyltransferase (EC 2.4.1.171) is an enzyme that catalyzes the chemical reaction UDP-glucose + methyl-ONN-azoxymethanol formula_0 UDP + cycasin Thus, the two substrates of this enzyme are UDP-glucose and methyl-ONN-azoxymethanol, whereas its two products are UDP and cycasin. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:methyl-ONN-azoxymethanol beta-D-glucosyltransferase. Other names in common use include cycasin synthase, uridine diphosphoglucose-methylazoxymethanol glucosyltransferase, and UDP-glucose-methylazoxymethanol glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661351
14661379
Monogalactosyldiacylglycerol synthase
Class of enzymes In enzymology, a monogalactosyldiacylglycerol synthase (EC 2.4.1.46) is an enzyme that catalyzes the chemical reaction UDP-galactose + 1,2-diacyl-sn-glycerol formula_0 UDP + 3-beta-D-galactosyl-1,2-diacyl-sn-glycerol Thus, the two substrates of this enzyme are UDP-galactose and 1,2-diacyl-sn-glycerol, whereas its two products are UDP and 3-beta-D-galactosyl-1,2-diacyl-sn-glycerol. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:1,2-diacyl-sn-glycerol 3-beta-D-galactosyltransferase. Other names in common use include uridine diphosphogalactose-1,2-diacylglycerol galactosyltransferase, UDP-galactose:diacylglycerol galactosyltransferase, MGDG synthase, UDP galactose-1,2-diacylglycerol galactosyltransferase, UDP-galactose-diacylglyceride galactosyltransferase, UDP-galactose:1,2-diacylglycerol 3-beta-D-galactosyltransferase, 1beta-MGDG, and 1,2-diacylglycerol 3-beta-galactosyltransferase. This enzyme participates in glycerolipid metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661379
14661396
Monosialoganglioside sialyltransferase
Class of enzymes In enzymology, a monosialoganglioside sialyltransferase (EC 2.4.99.2) is an enzyme that catalyzes the chemical reaction CMP-N-acetylneuraminate + D-galactosyl-N-acetyl-D-galactosaminyl-(N-acetylneuraminyl)-D-galactosyl-D-glucosylceramide formula_0 CMP + N-acetylneuraminyl-D-galactosyl-N-acetyl-D-galactosaminyl-(N-acetylneuraminyl)-D-galactosyl-D-glucosylceramide The two substrates of this enzyme are CMP-N-acetylneuraminate and D-galactosyl-N-acetyl-D-galactosaminyl-(N-acetylneuraminyl)-D-galactosyl-D-glucosylceramide, whereas its two products are CMP and N-acetylneuraminyl-D-galactosyl-N-acetyl-D-galactosaminyl-(N-acetylneuraminyl)-D-galactosyl-D-glucosylceramide. This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:D-galactosyl-N-acetyl-D-galactosaminyl-(N-acetylneuraminyl)-D-galactosyl-D-glucosylceramide N-acetylneuraminyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661396
14661416
Monoterpenol beta-glucosyltransferase
Class of enzymes In enzymology, a monoterpenol beta-glucosyltransferase (EC 2.4.1.127) is an enzyme that catalyzes the chemical reaction UDP-glucose + (-)-menthol formula_0 UDP + (-)-menthyl O-beta-D-glucoside Thus, the two substrates of this enzyme are UDP-glucose and (-)-menthol, whereas its two products are UDP and (-)-menthyl O-beta-D-glucoside. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:(-)-menthol O-beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-monoterpenol glucosyltransferase, and UDPglucose:monoterpenol glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661416
14661435
N-acetylgalactosaminyl-proteoglycan 3-beta-glucuronosyltransferase
Class of enzymes In enzymology, a N-acetylgalactosaminyl-proteoglycan 3-beta-glucuronosyltransferase (EC 2.4.1.226) is an enzyme that catalyzes the chemical reaction UDP-alpha-D-glucuronate + N-acetyl-beta-D-galactosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan formula_0 UDP + beta-D-glucuronosyl-(1->3)-N-acetyl-beta-D-galactosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan The two substrates of this enzyme are UDP-alpha-D-glucuronate and N-acetyl-beta-D-galactosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan, whereas its two products are UDP and beta-D-glucuronosyl-(1->3)-N-acetyl-beta-D-galactosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is alpha-D-glucuronate:N-acetyl-beta-D-galactosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan 3-beta-glucuronosyltransferase. This enzyme is also called chondroitin glucuronyltransferase II. This enzyme participates in chondroitin sulfate biosynthesis and glycan structures - biosynthesis 1.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661435
14661461
N-acetylglucosaminyldiphosphoundecaprenol glucosyltransferase
Class of enzymes In enzymology, a N-acetylglucosaminyldiphosphoundecaprenol glucosyltransferase (EC 2.4.1.188) is an enzyme that catalyzes the chemical reaction UDP-glucose + N-acetyl-D-glucosaminyldiphosphoundecaprenol formula_0 UDP + beta-D-glucosyl-1,4-N-acetyl-D-glucosaminyldiphosphoundecaprenol Thus, the two substrates of this enzyme are UDP-glucose and N-acetyl-D-glucosaminyldiphosphoundecaprenol, whereas its two products are UDP and beta-D-glucosyl-1,4-N-acetyl-D-glucosaminyldiphosphoundecaprenol. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:N-acetyl-D-glucosaminyldiphosphoundecaprenol 4-beta-D-glucosyltransferase. Other names in common use include UDP-D-glucose:N-acetylglucosaminyl pyrophosphorylundecaprenol glucosyltransferase and uridine diphosphoglucose-acetylglucosaminylpyrophosphorylundecaprenol glucosyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661461
14661500
N-acetylglucosaminyl-proteoglycan 4-beta-glucuronosyltransferase
Class of enzymes In enzymology, a N-acetylglucosaminyl-proteoglycan 4-beta-glucuronosyltransferase (EC 2.4.1.225) is an enzyme that catalyzes the chemical reaction UDP-alpha-D-glucuronate + N-acetyl-alpha-D-glucosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan formula_0 UDP + beta-D-glucuronosyl-(1->4)-N-acetyl-alpha-D-glucosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan The two substrates of this enzyme are UDP-alpha-D-glucuronate and N-acetyl-alpha-D-glucosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan, whereas its two products are UDP and beta-D-glucuronosyl-(1->4)-N-acetyl-alpha-D-glucosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-alpha-D-glucuronate:N-acetyl-alpha-D-glucosaminyl-(1->4)-beta-D-glucuronosyl-proteoglycan 4-beta-glucuronosyltransferase. Other names in common use include N-acetylglucosaminylproteoglycan beta-1,4-glucuronyltransferase and heparan glucuronyltransferase II. This enzyme participates in heparan sulfate biosynthesis and glycan structures - biosynthesis 1.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661500
14661519
N-acetyllactosaminide 3-alpha-galactosyltransferase
Class of enzymes In enzymology, a N-acetyllactosaminide 3-alpha-galactosyltransferase (EC 2.4.1.87) is an enzyme that catalyzes the chemical reaction UDP-galactose + beta-D-galactosyl-(1->4)-beta-N-acetyl-D-glucosaminyl-R formula_0 UDP + alpha-D-galactosyl-(1->3)-beta-D-galactosyl-(1->4)-beta-N-acetylglucosaminyl-R Thus, the two substrates of this enzyme are UDP-galactose and beta-D-galactosyl-(1->4)-beta-N-acetyl-D-glucosaminyl-R, whereas its two products are UDP and alpha-D-galactosyl-(1->3)-beta-D-galactosyl-(1->4)-beta-N-acetylglucosaminyl-R. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:N-acetyllactosaminide 3-alpha-D-galactosyltransferase. Other names in common use include alpha-galactosyltransferase, UDP-Gal:beta-D-Gal(1,4)-D-GlcNAc alpha(1,3)-galactosyltransferase, UDP-Gal:N-acetyllactosaminide alpha(1,3)-galactosyltransferase, UDP-Gal:N-acetyllactosaminide alpha-1,3-D-galactosyltransferase, UDP-Gal:Galbeta1->4GlcNAc-R alpha1->3-galactosyltransferase, UDP-galactose-acetyllactosamine alpha-D-galactosyltransferase, UDPgalactose:beta-D-galactosyl-beta-1,4-N-acetyl-D-glucosaminyl-glycopeptide alpha-1,3-D-galactosyltransferase, glucosaminylglycopeptide alpha-1,3-galactosyltransferase, uridine diphosphogalactose-acetyllactosamine alpha1->3-galactosyltransferase, uridine diphosphogalactose-acetyllactosamine galactosyltransferase, uridine diphosphogalactose-galactosylacetylglucosaminylgalactosylglucosylceramide galactosyltransferase, and beta-D-galactosyl-N-acetylglucosaminylglycopeptide alpha-1,3-galactosyltransferase. This enzyme participates in 3 metabolic pathways: glycosphingolipid biosynthesis - lactoseries, glycosphingolipid biosynthesis - neo-lactoseries, and glycan structures - biosynthesis 2. Structural studies. As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 2JCF, 2JCK, and 2JCL.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661519
14661541
N-acetyllactosaminide alpha-2,3-sialyltransferase
Class of enzymes In enzymology, a N-acetyllactosaminide alpha-2,3-sialyltransferase (EC 2.4.99.6) is an enzyme that catalyzes the chemical reaction CMP-N-acetylneuraminate + beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-glycoprotein formula_0 CMP + alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-glycoprotein Thus, the two substrates of this enzyme are CMP-N-acetylneuraminate and beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-glycoprotein, whereas its two products are CMP and alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-glycoprotein. This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-glycoprotein alpha-2,3-N-acetylneuraminyltransferase. Other names in common use include sialyltransferase, cytidine monophosphoacetylneuraminate-beta-galactosyl(1->4)acetylglucosaminide alpha2->3-sialyltransferase, alpha2->3 sialyltransferase, and SiaT. This enzyme participates in 4 metabolic pathways: keratan sulfate biosynthesis, glycosphingolipid biosynthesis - lactoseries, glycan structures - biosynthesis 1, and glycan structures - biosynthesis 2. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2EX0 and 2EX1.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661541
14661560
N-acetyllactosaminide beta-1,3-N-acetylglucosaminyltransferase
Class of enzymes In enzymology, a N-acetyllactosaminide beta-1,3-N-acetylglucosaminyltransferase (EC 2.4.1.149) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R formula_0 UDP + N-acetyl-beta-D-glucosaminyl-1,3-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R, whereas its two products are UDP and N-acetyl-beta-D-glucosaminyl-1,3-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:beta-D-galactosyl-1,4-N-acetyl-D-glucosamine beta-1,3-acetyl-D-glucosaminyltransferase. Other names in common use include uridine diphosphoacetylglucosamine-acetyllactosaminide beta1->3-acetylglucosaminyltransferase, poly-N-acetyllactosamine extension enzyme, Galbeta1->4GlcNAc-R beta1->3 N-acetylglucosaminyltransferase, UDP-GlcNAc:GalR beta-D-3-N-acetylglucosaminyltransferase, N-acetyllactosamine beta(1–3)N-acetylglucosaminyltransferase, UDP-GlcNAc:Galbeta1->4GlcNAcbeta-Rbeta1->3-N-acetylglucosaminyltransferase, and GnTE. This enzyme participates in 4 metabolic pathways: keratan sulfate biosynthesis, glycosphingolipid biosynthesis - neo-lactoseries, glycan structures - biosynthesis 1, and glycan structures - biosynthesis 2.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661560
14661582
N-acetyllactosaminide beta-1,6-N-acetylglucosaminyl-transferase
Class of enzymes In enzymology, a N-acetyllactosaminide beta-1,6-N-acetylglucosaminyl-transferase (EC 2.4.1.150) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R formula_0 UDP + N-acetyl-beta-D-glucosaminyl-1,6-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R, whereas its two products are UDP and N-acetyl-beta-D-glucosaminyl-1,6-beta-D-galactosyl-1,4-N-acetyl-D-glucosaminyl-R. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:beta-D-galactosyl-1,4-N-acetyl-D-glucosaminide beta-1,6-N-acetyl-D-glucosaminyltransferase. Other names in common use include N-acetylglucosaminyltransferase, uridine diphosphoacetylglucosamine-acetyllactosaminide beta1->6-acetylglucosaminyltransferase, Galbeta1->4GlcNAc-R beta1->6 N-acetylglucosaminyltransferase, and UDP-GlcNAc:Gal-R beta-D-6-N-acetylglucosaminyltransferase. This enzyme participates in glycosphingolipid biosynthesis - neo-lactoseries and glycan structures - biosynthesis 2.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661582
14661611
N-acylsphingosine galactosyltransferase
Class of enzymes In enzymology, a N-acylsphingosine galactosyltransferase (EC 2.4.1.47) is an enzyme that catalyzes the chemical reaction UDP-galactose + N-acylsphingosine formula_0 UDP + D-galactosylceramide Thus, the two substrates of this enzyme are UDP-galactose and N-acylsphingosine, whereas its two products are UDP and D-galactosylceramide. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:N-acylsphingosine D-galactosyltransferase. Other names in common use include UDP galactose-N-acylsphingosine galactosyltransferase, and uridine diphosphogalactose-acylsphingosine galactosyltransferase. This enzyme participates in sphingolipid metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661611
14661631
NAD(+)—dinitrogen-reductase ADP-D-ribosyltransferase
InterPro Family In enzymology, a NAD+-dinitrogen-reductase ADP-D-ribosyltransferase (EC 2.4.2.37) is an enzyme that catalyzes the chemical reaction NAD+ + [dinitrogen reductase] formula_0 nicotinamide + ADP-D-ribosyl-[dinitrogen reductase] Thus, the two substrates of this enzyme are NAD+ and dinitrogen reductase, whereas its two products are nicotinamide and ADP-D-ribosyl-[dinitrogen reductase]. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is NAD+:[dinitrogen reductase] (ADP-D-ribosyl)transferase. Other names in common use include NAD-azoferredoxin (ADPribose)transferase, and NAD-dinitrogen-reductase ADP-D-ribosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661631
14661655
NAD(+)—diphthamide ADP-ribosyltransferase
Class of enzymes In enzymology, a NAD+-diphthamide ADP-ribosyltransferase (EC 2.4.2.36) is an enzyme that catalyzes the chemical reaction NAD+ + peptide diphthamide formula_0 nicotinamide + peptide N-(ADP-D-ribosyl)diphthamide Thus, the two substrates of this enzyme are NAD+ and peptide diphthamide, whereas its two products are nicotinamide and peptide N-(ADP-D-ribosyl)diphthamide. This enzyme belongs to the family of glycosyltransferases, to be specific, the pentosyltransferases. The systematic name of this enzyme class is NAD+:peptide-diphthamide N-(ADP-D-ribosyl)transferase. Other names in common use include ADP-ribosyltransferase, mono(ADPribosyl)transferase, and NAD-diphthamide ADP-ribosyltransferase. Structural studies. As of late 2007, 15 structures have been solved for this class of enzymes, with PDB accession codes 1S5B, 1S5C, 1S5D, 1S5E, 1S5F, 1SGK, 1TOX, 1XDT, 1XK9, 1ZM3, 1ZM4, 1ZM9, 2A5D, 2A5F, and 2A5G. Clinical significance. The extracellular ADP-ribosyltransferase ART2 is expressed only on T cells. Activation of P2X7 receptors on T cells can activate the cells, drive T cell differentiation, or affect T cell migration, and at high extracellular levels of NAD+ it can induce ART2-mediated cell death.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661655
14661685
NDP-glucose—starch glucosyltransferase
Class of enzymes In enzymology, a NDP-glucose—starch glucosyltransferase (EC 2.4.1.242) is an enzyme that catalyzes the chemical reaction NDP-glucose + (1,4-alpha-D-glucosyl)n formula_0 NDP + (1,4-alpha-D-glucosyl)n+1 Thus, the two substrates of this enzyme are NDP-glucose and (1,4-alpha-D-glucosyl)n, whereas its two products are NDP and (1,4-alpha-D-glucosyl)n+1. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is NDP-glucose:1,4-alpha-D-glucan 4-alpha-D-glucosyltransferase. Other names in common use include granule-bound starch synthase, starch synthase II (ambiguous), waxy protein, starch granule-bound nucleoside diphosphate glucose-starch glucosyltransferase, granule-bound starch synthase I, GBSSI, granule-bound starch synthase II, GBSSII, GBSS, and NDPglucose-starch glucosyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661685
14661715
Neolactotetraosylceramide alpha-2,3-sialyltransferase
Class of enzymes In enzymology, a neolactotetraosylceramide alpha-2,3-sialyltransferase (EC 2.4.99.10) is an enzyme that catalyzes the chemical reaction CMP-N-acetylneuraminate + beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosaminyl-1,3-beta-D-galactosyl-1,4-D-glucosylceramide formula_0 CMP + alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosaminyl-1,3-beta-D-galactosyl-1,4-D-glucosylceramide The two substrates of this enzyme are CMP-N-acetylneuraminate and beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosaminyl-1,3-beta-D-galactosyl-1,4-D-glucosylceramide, whereas its two products are CMP and alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-1,4-N-acetyl-beta-D-glucosaminyl-1,3-beta-D-galactosyl-1,4-D-glucosylceramide. This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:neolactotetraosylceramide alpha-2,3-sialyltransferase. Other names in common use include cytidine monophosphoacetylneuraminate-neolactotetraosylceramide sialyltransferase, sialyltransferase 3, and SAT-3. This enzyme participates in glycosphingolipid biosynthesis - neo-lactoseries and glycan structures - biosynthesis 2.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661715
14661731
N-hydroxythioamide S-beta-glucosyltransferase
Class of enzymes In enzymology, a N-hydroxythioamide S-beta-glucosyltransferase (EC 2.4.1.195) is an enzyme that catalyzes the chemical reaction UDP-glucose + N-hydroxy-2-phenylethanethioamide formula_0 UDP + desulfoglucotropeolin Thus, the two substrates of this enzyme are UDP-glucose and N-hydroxy-2-phenylethanethioamide, whereas its two products are UDP and desulfoglucotropeolin. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:N-hydroxy-2-phenylethanethioamide S-beta-D-glucosyltransferase. Other names in common use include desulfoglucosinolate-uridine diphosphate glucosyltransferase, uridine diphosphoglucose-thiohydroximate glucosyltransferase, thiohydroximate beta-D-glucosyltransferase, UDPG:thiohydroximate glucosyltransferase, thiohydroximate S-glucosyltransferase, thiohydroximate glucosyltransferase, and UDP-glucose:thiohydroximate S-beta-D-glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661731
14661752
Nicotinamide phosphoribosyltransferase
Human protein and coding gene Nicotinamide phosphoribosyltransferase (NAmPRTase or NAMPT), formerly known as pre-B-cell colony-enhancing factor 1 (PBEF1) or visfatin for its extracellular form (eNAMPT), is an enzyme that in humans is encoded by the "NAMPT" gene. The intracellular form of this protein (iNAMPT) is the rate-limiting enzyme in the nicotinamide adenine dinucleotide (NAD+) salvage pathway that converts nicotinamide to nicotinamide mononucleotide (NMN); this pathway is responsible for most of the NAD+ formation in mammals. iNAMPT can also catalyze the synthesis of NMN from phosphoribosyl pyrophosphate (PRPP) when ATP is present. eNAMPT has been reported to be a cytokine (PBEF) that activates TLR4, that promotes B cell maturation, and that inhibits neutrophil apoptosis. Reaction. iNAMPT catalyzes the following chemical reaction: nicotinamide + 5-phosphoribosyl-1-pyrophosphate (PRPP) formula_0 nicotinamide mononucleotide (NMN) + pyrophosphate (PPi) Thus, the two substrates of this enzyme are nicotinamide and 5-phosphoribosyl-1-pyrophosphate (PRPP), whereas its two products are nicotinamide mononucleotide and pyrophosphate. This enzyme belongs to the family of glycosyltransferases, to be specific, the pentosyltransferases. This enzyme participates in nicotinate and nicotinamide metabolism. Expression and regulation. The liver has the highest iNAMPT activity of any organ, about 10-20 times greater activity than kidney, spleen, heart, muscle, brain or lung. iNAMPT is downregulated by an increase of miR-34a in obesity via a 3'UTR functional binding site of iNAMPT mRNA, resulting in a reduction of NAD+ and decreased SIRT1 activity. Endurance-trained athletes have twice the expression of iNAMPT in skeletal muscle compared with sedentary type 2 diabetic persons. In a six-week study comparing legs trained by endurance exercise with untrained legs, iNAMPT was increased in the endurance-trained legs. A study of 21 young (under 36) and 22 old (over 54) adults subjected to 12 weeks of aerobic and resistance exercise showed aerobic exercise to increase skeletal muscle iNAMPT by 12% and 28% in young and old adults, respectively, and resistance exercise to increase skeletal muscle iNAMPT by 25% and 30% in young and old adults, respectively. Aging, obesity, and chronic inflammation all reduce iNAMPT (and consequently NAD+) in multiple tissues, and NAMPT activity was shown to promote a proinflammatory transcriptional reprogramming of immune cells (e.g. macrophages) and brain-resident astrocytes. Function. iNAMPT catalyzes the condensation of nicotinamide (NAM) with 5-phosphoribosyl-1-pyrophosphate to yield nicotinamide mononucleotide (NMN), the first step in the biosynthesis of nicotinamide adenine dinucleotide (NAD+). This salvage pathway, which reuses the NAM released as a waste product by NAD+-consuming enzymes (sirtuins, PARPs, CD38), is the major source of NAD+ production in the body. De novo synthesis of NAD+ from tryptophan occurs only in the liver and kidney, overwhelmingly in the liver. Nomenclature. The systematic name of this enzyme class is nicotinamide-nucleotide:diphosphate phospho-alpha-D-ribosyltransferase. A number of other names are in common use. Extracellular NAMPT. Extracellular NAMPT (eNAMPT) is functionally different from intracellular NAMPT (iNAMPT), and less well understood (which is why the enzyme has been given so many names: NAMPT, PBEF and visfatin). iNAMPT is secreted by many cell types (notably adipocytes) to become eNAMPT. The sirtuin 1 (SIRT1) enzyme is required for eNAMPT secretion from adipose tissue.
eNAMPT may act more as a cytokine, although its receptor (possibly TLR4) has not been definitively identified. It has been demonstrated that eNAMPT could bind to and activate TLR4. eNAMPT can exist as a dimer or as a monomer, but is normally a circulating dimer. As a monomer, eNAMPT has pro-inflammatory effects that are independent of NAD+, whereas the dimeric form of eNAMPT protects against these effects. eNAMPT/PBEF/visfatin was originally cloned as a putative cytokine shown to enhance the maturation of B cell precursors in the presence of Interleukin-7 (IL-7) and stem cell factor; it was therefore named "pre-B cell colony-enhancing factor" (PBEF). When the gene encoding the bacterial nicotinamide phosphoribosyltransferase ("nadV") was first isolated in "Haemophilus ducreyi", it was found to exhibit significant homology to the mammalian PBEF gene. Rongvaux et al. demonstrated genetically that the mouse PBEF gene conferred Nampt enzymatic activity and NAD-independent growth to bacteria lacking nadV. Revollo et al. determined biochemically that the mouse PBEF gene encodes an eNAMPT enzyme capable of modulating intracellular NAD levels. Others have since confirmed these findings. More recently, several groups have reported the crystal structure of Nampt/PBEF/visfatin, and all of them show that this protein is a dimeric type II phosphoribosyltransferase enzyme involved in NAD biosynthesis. eNAMPT has been shown to be more enzymatically active than iNAMPT, supporting the proposal that eNAMPT from adipose tissue enhances NAD+ in tissues with low levels of iNAMPT, notably pancreatic beta cells and brain neurons. Hormone claim retracted. Although the original cytokine function of PBEF has not been confirmed to date, others have since reported or suggested a cytokine-like function for this protein. In particular, Nampt/PBEF was recently re-identified as a "new visceral fat-derived hormone" named visfatin. It is reported that visfatin is enriched in the visceral fat of both humans and mice and that its plasma levels increase during the development of obesity. Noteworthy is that visfatin is reported to exert insulin-mimetic effects in cultured cells and to lower plasma glucose levels in mice by binding to and activating the insulin receptor. However, the physiological relevance of visfatin is still in question because its plasma concentration is 40 to 100-fold lower than that of insulin despite having similar receptor-binding affinity. In addition, the ability of visfatin to bind and activate the insulin receptor has yet to be confirmed by other groups. On 26 October 2007, A. Fukuhara (first author), I. Shimomura (senior author) and the other co-authors of the paper, who first described visfatin as a visceral fat-derived hormone that acts by binding and activating the insulin receptor, retracted the entire paper at the suggestion of the editor of the journal 'Science' and recommendation of the Faculty Council of Osaka University Medical School after a report of the Committee for Research Integrity. As a drug target. Because cancer cells utilize increased glycolysis, and because NAD enhances glycolysis, iNAMPT is often amplified in cancer cells. APO866 is an experimental drug that inhibits this enzyme. It is being tested for treatment of advanced melanoma, cutaneous T-cell lymphoma (CTL), and refractory or relapsed B-chronic lymphocytic leukemia. The NAMPT inhibitor FK866 has been shown to inhibit epithelial–mesenchymal transition (EMT), and may also inhibit tumor-associated angiogenesis.
Anti-aging biomedical company Calico has licensed the experimental P7C3 analogs involved in enhancing iNAMPT activity. P7C3 compounds have been shown in a number of publications to be beneficial in animal models of age-related neurodegeneration.
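For readability, the iNAMPT-catalyzed salvage reaction quoted above can be written out with the reversible-reaction arrow that the formula_0 placeholder denotes; this is only a rendering of the reaction already stated in the text, with no additional species assumed:

$$\text{nicotinamide} + \text{PRPP} \;\rightleftharpoons\; \text{NMN} + \text{PP}_i$$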
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661752
14661774
Nicotinate glucosyltransferase
Class of enzymes In enzymology, a nicotinate glucosyltransferase (EC 2.4.1.196) is an enzyme that catalyzes the chemical reaction UDP-glucose + nicotinate formula_0 UDP + N-glucosylnicotinate Thus, the two substrates of this enzyme are UDP-glucose and nicotinate, whereas its two products are UDP and N-glucosylnicotinate. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:nicotinate N-glucosyltransferase. Other names in common use include uridine diphosphoglucose-nicotinate N-glucosyltransferase, and UDP-glucose:nicotinic acid-N-glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661774
14661792
Nicotinate-nucleotide—dimethylbenzimidazole phosphoribosyltransferase
Class of enzymes In enzymology, a nicotinate-nucleotide-dimethylbenzimidazole phosphoribosyltransferase (EC 2.4.2.21) is an enzyme that catalyzes the chemical reaction beta-nicotinate D-ribonucleotide + 5,6-dimethylbenzimidazole formula_0 nicotinate + alpha-ribazole 5'-phosphate Thus, the two substrates of this enzyme are beta-nicotinate D-ribonucleotide and 5,6-dimethylbenzimidazole, whereas its two products are nicotinate and alpha-ribazole 5'-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is nicotinate-nucleotide:5,6-dimethylbenzimidazole phospho-D-ribosyltransferase. Other names in common use include CobT, nicotinate mononucleotide-dimethylbenzimidazole phosphoribosyltransferase, nicotinate ribonucleotide:benzimidazole (adenine) phosphoribosyltransferase, nicotinate-nucleotide:dimethylbenzimidazole phospho-D-ribosyltransferase, and nicotinate mononucleotide (NaMN):5,6-dimethylbenzimidazole phosphoribosyltransferase. This enzyme is part of the biosynthetic pathway to cobalamin (vitamin B12) in bacteria. Function. This enzyme plays a central role in the synthesis of alpha-ribazole-5'-phosphate, an intermediate for the lower ligand of cobalamin. It is one of the enzymes of the anaerobic pathway of cobalamin biosynthesis, and one of the four proteins (CobU, CobT, CobC, and CobS) involved in the synthesis of the lower ligand and the assembly of the nucleotide loop. Biosynthesis of cobalamin. Vitamin B12 (cobalamin) is used as a cofactor in a number of enzyme-catalysed reactions in bacteria, archaea and eukaryotes. The biosynthetic pathway to adenosylcobalamin from its five-carbon precursor, 5-aminolaevulinic acid, can be divided into three sections: (1) the biosynthesis of uroporphyrinogen III from 5-aminolaevulinic acid; (2) the conversion of uroporphyrinogen III into the ring-contracted, deacylated intermediate precorrin 6 or cobalt-precorrin 6; and (3) the transformation of this intermediate to form adenosylcobalamin. Cobalamin is synthesised by bacteria and archaea via two alternative routes that differ primarily in the steps of section 2 that lead to the contraction of the macrocycle and excision of the extruded carbon molecule (and its attached methyl group). One pathway (exemplified by "Pseudomonas denitrificans") incorporates molecular oxygen into the macrocycle as a prerequisite to ring contraction, and has consequently been termed the aerobic pathway. The alternative, anaerobic, route (exemplified by "Salmonella typhimurium") takes advantage of a chelated cobalt ion, in the absence of oxygen, to set the stage for ring contraction. Structural studies. As of late 2007, 28 structures have been solved for this class of enzymes, with PDB accession codes 1D0S, 1D0V, 1JH8, 1JHA, 1JHM, 1JHP, 1JHQ, 1JHR, 1JHU, 1JHV, 1JHX, 1JHY, 1L4B, 1L4E, 1L4F, 1L4G, 1L4H, 1L4K, 1L4L, 1L4M, 1L4N, 1L5F, 1L5K, 1L5L, 1L5M, 1L5N, 1L5O, and 1WX1. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661792
14661813
Nicotinate-nucleotide diphosphorylase (carboxylating)
Class of enzymes In enzymology, a nicotinate-nucleotide diphosphorylase (carboxylating) (EC 2.4.2.19) is an enzyme that catalyzes the chemical reaction nicotinate D-ribonucleotide + diphosphate + CO2 formula_0 pyridine-2,3-dicarboxylate + 5-phospho-alpha-D-ribose 1-diphosphate The 3 substrates of this enzyme are nicotinate D-ribonucleotide, diphosphate, and CO2, whereas its two products are pyridine-2,3-dicarboxylate and 5-phospho-alpha-D-ribose 1-diphosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is nicotinate-nucleotide:diphosphate phospho-alpha-D-ribosyltransferase (carboxylating). Other names in common use include quinolinate phosphoribosyltransferase (decarboxylating), quinolinic acid phosphoribosyltransferase, QAPRTase, NAD+ pyrophosphorylase, nicotinate mononucleotide pyrophosphorylase (carboxylating), and quinolinic phosphoribosyltransferase. This enzyme participates in nicotinate and nicotinamide metabolism. Structural studies. As of late 2007, 9 structures have been solved for this class of enzymes, with PDB accession codes 1QAP, 1QPN, 1QPO, 1QPQ, 1QPR, 1X1O, 2B7N, 2B7P, and 2B7Q. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661813
14661836
Nicotinate phosphoribosyltransferase
In enzymology, a nicotinate phosphoribosyltransferase (EC 6.3.4.21) is an enzyme that catalyzes the chemical reaction nicotinate + 5-phospho-alpha-D-ribose 1-diphosphate + ATP + H2O formula_0 nicotinate D-ribonucleotide + diphosphate + ADP + phosphate Thus, the four substrates of this enzyme are nicotinate, 5-phospho-alpha-D-ribose 1-diphosphate, ATP, and H2O, whereas its four products are nicotinate D-ribonucleotide, diphosphate, ADP, and phosphate. This enzyme belongs to the family of ligases, specifically those forming generic carbon-nitrogen bonds. The systematic name of this enzyme class is 5-phospho-alpha-D-ribose 1-diphosphate:nicotinate ligase (ADP, diphosphate-forming). Structural studies. As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1VLP, 1YBE, 1YIR, 1YTD, 1YTE, 1YTK, and 2F7F.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661836
14661853
Nuatigenin 3beta-glucosyltransferase
Class of enzymes In enzymology, a nuatigenin 3beta-glucosyltransferase (EC 2.4.1.192) is an enzyme that catalyzes the chemical reaction UDP-glucose + (20S,22S,25S)-22,25-epoxyfurost-5-ene-3beta,26-diol formula_0 UDP + (20S,22S,25S)-22,25-epoxyfurost-5-ene-3beta,26-diol 3-O-beta-D-glucoside Thus, the two substrates of this enzyme are UDP-glucose and (20S,22S,25S)-22,25-epoxyfurost-5-ene-3beta,26-diol, whereas its two products are UDP and (20S,22S,25S)-22,25-epoxyfurost-5-ene-3beta,26-diol 3-O-beta-D-glucoside. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:(20S,22S,25S)-22,25-epoxyfurost-5-ene-3beta,26-diol 3-O-beta-D-glucosyltransferase. This enzyme is also called uridine diphosphoglucose-nuatigenin glucosyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661853
14661873
Nucleoside deoxyribosyltransferase
Class of enzymes In enzymology, a nucleoside deoxyribosyltransferase (EC 2.4.2.6) is an enzyme that catalyzes the chemical reaction 2-deoxy-D-ribosyl-base1 + base2 formula_0 2-deoxy-D-ribosyl-base2 + base1 Thus, the two substrates of this enzyme are 2-deoxy-D-ribosyl-base1 and base2, whereas its two products are 2-deoxy-D-ribosyl-base2 and base1. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is nucleoside:purine(pyrimidine) deoxy-D-ribosyltransferase. Other names in common use include purine(pyrimidine) nucleoside:purine(pyrimidine) deoxyribosyl transferase, deoxyribose transferase, nucleoside trans-N-deoxyribosylase, trans-deoxyribosylase, trans-N-deoxyribosylase, trans-N-glycosidase, nucleoside deoxyribosyltransferase I (purine nucleoside:purine deoxyribosyltransferase; strictly specific for transfer between purine bases), and nucleoside deoxyribosyltransferase II [purine(pyrimidine) nucleoside:purine(pyrimidine) deoxyribosyltransferase]. This enzyme participates in pyrimidine metabolism. Structural studies. As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1F8X, 1F8Y, 1S2D, 1S2G, 1S2I, 1S2L, 1S3F, 2A0K, 2F2T, 2F62, 2F64, and 2F67.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661873
14661890
Nucleoside ribosyltransferase
Class of enzymes In enzymology, a nucleoside ribosyltransferase (EC 2.4.2.5) is an enzyme that catalyzes the chemical reaction D-ribosyl-base1 + base2 formula_0 D-ribosyl-base2 + base1 Thus, the two substrates of this enzyme are D-ribosyl-base1 and base2, whereas its two products are D-ribosyl-base2 and base1. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is nucleoside:purine(pyrimidine) D-ribosyltransferase. This enzyme is also called nucleoside N-ribosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661890
14661915
O-dihydroxycoumarin 7-O-glucosyltransferase
Enzyme In enzymology, an o-dihydroxycoumarin 7-O-glucosyltransferase (EC 2.4.1.104) is an enzyme that catalyzes the chemical reaction UDP-glucose + 7,8-dihydroxycoumarin formula_0 UDP + daphnin Thus, the two substrates of this enzyme are UDP-glucose and 7,8-dihydroxycoumarin, whereas its two products are UDP and daphnin. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:7,8-dihydroxycoumarin 7-O-beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-o-dihydroxycoumarin 7-O-glucosyltransferase and UDP-glucose:o-dihydroxycoumarin glucosyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661915
14661965
Phenol beta-glucosyltransferase
Class of enzymes In enzymology, a phenol beta-glucosyltransferase (EC 2.4.1.35) is an enzyme that catalyzes the chemical reaction UDP-glucose + a phenol formula_0 UDP + an aryl beta-D-glucoside Thus, the two substrates of this enzyme are UDP-glucose and phenol, whereas its two products are UDP and aryl beta-D-glucoside. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:phenol beta-D-glucosyltransferase. Other names in common use include UDPglucosyltransferase, phenol-beta-D-glucosyltransferase, UDP glucosyltransferase, UDP-glucose glucosyltransferase, and uridine diphosphoglucosyltransferase. This enzyme participates in starch and sucrose metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661965
14661983
Phosphatidylinositol N-acetylglucosaminyltransferase
Class of enzymes In enzymology, a phosphatidylinositol N-acetylglucosaminyltransferase (EC 2.4.1.198) is an enzyme that catalyzes the chemical reaction UDP-"N"-acetylglucosamine + phosphatidylinositol formula_0 UDP + "N"-acetyl-D-glucosaminylphosphatidylinositol Thus, the two substrates of this enzyme are UDP-"N"-acetylglucosamine and phosphatidylinositol, whereas its two products are UDP and "N"-acetyl-D-glucosaminylphosphatidylinositol. The mammalian enzyme is composed of at least six subunits (PIG-A, PIG-H, PIG-C, PIG-P, PIG-Y, and GPI1). PIG-A is the catalytic subunit. This enzyme belongs to the family of glycosyltransferases, to be specific the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:1-phosphatidyl-1D-myo-inositol 6-(N-acetyl-alpha-D-glucosaminyl)transferase. Other names in common use include UDP-N-acetyl-D-glucosamine:phosphatidylinositol N-acetyl-D-glucosaminyltransferase and uridine diphosphoacetylglucosamine alpha1,6-acetyl-D-glucosaminyltransferase. This enzyme participates in 3 metabolic pathways: glycosylphosphatidylinositol(gpi)-anchor, ???, and glycan structures - biosynthesis 2.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14661983
14662005
Phosphopolyprenol glucosyltransferase
Class of enzymes In enzymology, a phosphopolyprenol glucosyltransferase (EC 2.4.1.78) is an enzyme that catalyzes the chemical reaction UDP-glucose + polyprenyl phosphate formula_0 UDP + polyprenylphosphate-glucose Thus, the two substrates of this enzyme are UDP-glucose and polyprenyl phosphate, whereas its two products are UDP and polyprenylphosphate-glucose. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:phosphopolyprenol D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-polyprenol monophosphate glucosyltransferase and UDP-glucose:polyprenol monophosphate glucosyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662005
14662028
Polygalacturonate 4-alpha-galacturonosyltransferase
Class of enzymes In enzymology, a polygalacturonate 4-alpha-galacturonosyltransferase (EC 2.4.1.43) is an enzyme that catalyzes the chemical reaction UDP-D-galacturonate + (1,4-alpha-D-galacturonosyl)n formula_0 UDP + (1,4-alpha-D-galacturonosyl)n+1 Thus, the two substrates of this enzyme are UDP-D-galacturonate and (1,4-alpha-D-galacturonosyl)n, whereas its two products are UDP and (1,4-alpha-D-galacturonosyl)n+1. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-D-galacturonate:1,4-alpha-poly-D-galacturonate 4-alpha-D-galacturonosyltransferase. Other names in common use include UDP galacturonate-polygalacturonate alpha-galacturonosyltransferase and uridine diphosphogalacturonate-polygalacturonate alpha-galacturonosyltransferase. This enzyme participates in starch and sucrose metabolism and nucleotide sugars metabolism. Indications. Polygalacturonate salts can be used clinically to treat gastrointestinal reactions caused by quinidine.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662028
14662049
Poly(glycerol-phosphate) alpha-glucosyltransferase
Class of enzymes In enzymology, a poly(glycerol-phosphate) alpha-glucosyltransferase (EC 2.4.1.52) is an enzyme that catalyzes the chemical reaction UDP-glucose + poly(glycerol phosphate) formula_0 UDP + O-(alpha-D-glucosyl)poly(glycerol phosphate) Thus, the two substrates of this enzyme are UDP-glucose and poly(glycerol phosphate), whereas its two products are UDP and O-(alpha-D-glucosyl)poly(glycerol phosphate). This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:poly(glycerol-phosphate) alpha-D-glucosyltransferase. Other names in common use include UDP glucose-poly(glycerol-phosphate) alpha-glucosyltransferase and uridine diphosphoglucose-poly(glycerol-phosphate) alpha-glucosyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662049
14662068
Polypeptide N-acetylgalactosaminyltransferase
Class of enzymes In enzymology, a polypeptide N-acetylgalactosaminyltransferase (EC 2.4.1.41) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-galactosamine + polypeptide formula_0 UDP + N-acetyl-D-galactosaminyl-polypeptide Thus, the two substrates of this enzyme are UDP-N-acetyl-D-galactosamine and polypeptide, whereas its two products are UDP and N-acetyl-D-galactosaminyl-polypeptide. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. This enzyme participates in o-glycan biosynthesis and glycan structures - biosynthesis 1. It has two cofactors: manganese and calcium. Nomenclature. The systematic name of this enzyme class is UDP-N-acetyl-D-galactosamine:polypeptide N-acetylgalactosaminyl-transferase. Several other names are in common use.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662068
14662086
Poly(ribitol-phosphate) beta-glucosyltransferase
Class of enzymes In enzymology, a poly(ribitol-phosphate) beta-glucosyltransferase (EC 2.4.1.53) is an enzyme that catalyzes the chemical reaction UDP-glucose + poly(ribitol phosphate) formula_0 UDP + (beta-D-glucosyl)poly(ribitol phosphate) Thus, the two substrates of this enzyme are UDP-glucose and poly(ribitol phosphate), whereas its two products are UDP and (beta-D-glucosyl)poly(ribitol phosphate). This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:poly(ribitol-phosphate) beta-D-glucosyltransferase. Other names in common use include UDP glucose-poly(ribitol-phosphate) beta-glucosyltransferase, uridine diphosphoglucose-poly(ribitol-phosphate) beta-glucosyltransferase, UDP-D-glucose polyribitol phosphate glucosyl transferase, and UDP-D-glucose:polyribitol phosphate glucosyl transferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662086
14662103
Poly(ribitol-phosphate) N-acetylglucosaminyl-transferase
Class of enzymes In enzymology, a poly(ribitol-phosphate) N-acetylglucosaminyl-transferase (EC 2.4.1.70) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + poly(ribitol phosphate) formula_0 UDP + (N-acetyl-D-glucosaminyl)poly(ribitol phosphate) Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and poly(ribitol phosphate), whereas its two products are UDP and (N-acetyl-D-glucosaminyl)poly(ribitol phosphate). This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:poly(ribitol-phosphate) N-acetyl-D-glucosaminyltransferase. Other names in common use include UDP acetylglucosamine-poly(ribitol phosphate) acetylglucosaminyltransferase and uridine diphosphoacetylglucosamine-poly(ribitol phosphate) acetylglucosaminyltransferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662103
14662119
Procollagen galactosyltransferase
Class of enzymes In enzymology, a procollagen galactosyltransferase (EC 2.4.1.50) is an enzyme that catalyzes the chemical reaction UDP-galactose + procollagen 5-hydroxy-L-lysine formula_0 UDP + procollagen 5-(D-galactosyloxy)-L-lysine Thus, the two substrates of this enzyme are UDP-galactose and procollagen 5-hydroxy-L-lysine, whereas its two products are UDP and procollagen 5-(D-galactosyloxy)-L-lysine. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:procollagen-5-hydroxy-L-lysine D-galactosyltransferase. Other names in common use include hydroxylysine galactosyltransferase, collagen galactosyltransferase, collagen hydroxylysyl galactosyltransferase, UDP galactose-collagen galactosyltransferase, uridine diphosphogalactose-collagen galactosyltransferase, and UDPgalactose:5-hydroxylysine-collagen galactosyltransferase. This enzyme participates in lysine degradation. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662119
14662139
Procollagen glucosyltransferase
Class of enzymes In enzymology, a procollagen glucosyltransferase (EC 2.4.1.66) is an enzyme that catalyzes the chemical reaction UDP-glucose + 5-(D-galactosyloxy)-L-lysine-procollagen formula_0 UDP + 1,2-D-glucosyl-5-D-(galactosyloxy)-L-lysine-procollagen Thus, the two substrates of this enzyme are UDP-glucose and 5-(D-galactosyloxy)-L-lysine-procollagen, whereas its two products are UDP and 1,2-D-glucosyl-5-D-(galactosyloxy)-L-lysine-procollagen. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:5-(D-galactosyloxy)-L-lysine-procollagen D-glucosyltransferase. Other names in common use include galactosylhydroxylysine glucosyltransferase, collagen glucosyltransferase, collagen hydroxylysyl glucosyltransferase, galactosylhydroxylysyl glucosyltransferase, UDP-glucose-collagenglucosyltransferase, and uridine diphosphoglucose-collagen glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662139
14662157
Protein N-acetylglucosaminyltransferase
Class of enzymes In enzymology, a protein N-acetylglucosaminyltransferase (EC 2.4.1.94) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + protein formula_0 UDP + 4-N-(N-acetyl-D-glucosaminyl)-protein Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and protein, whereas its two products are UDP and 4-N-(N-acetyl-D-glucosaminyl)-protein. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:protein beta-N-acetyl-D-glucosaminyl-transferase. Other names in common use include uridine diphosphoacetylglucosamine-protein acetylglucosaminyltransferase, uridine diphospho-N-acetylglucosamine:polypeptide beta-N-acetylglucosaminyltransferase, and O-GlcNAc transferase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662157
14662203
Pyridoxine 5'-O-beta-D-glucosyltransferase
Class of enzymes In enzymology, a pyridoxine 5'-O-beta-D-glucosyltransferase (EC 2.4.1.160) is an enzyme that catalyzes the chemical reaction UDP-glucose + pyridoxine formula_0 UDP + 5'-O-beta-D-glucosylpyridoxine Thus, the two substrates of this enzyme are UDP-glucose and pyridoxine, whereas its two products are UDP and 5'-O-beta-D-glucosylpyridoxine. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:pyridoxine 5'-O-beta-D-glucosyltransferase. Other names in common use include UDP-glucose:pyridoxine 5'-O-beta-glucosyltransferase, uridine diphosphoglucose-pyridoxine 5'-beta-glucosyltransferase, and UDP-glucose-pyridoxine glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662203
14662220
Pyrimidine-nucleoside phosphorylase
Class of enzymes In enzymology, a pyrimidine-nucleoside phosphorylase (EC 2.4.2.2) is an enzyme that catalyzes the chemical reaction a pyrimidine nucleoside + phosphate formula_0 a pyrimidine base + alpha-D-ribose 1-phosphate Thus, the two substrates of this enzyme are pyrimidine nucleoside and phosphate, whereas its two products are pyrimidine base and alpha-D-ribose 1-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is pyrimidine-nucleoside:phosphate alpha-D-ribosyltransferase. This enzyme is also called Py-NPase. This enzyme participates in pyrimidine metabolism. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1BRW and 2DSJ. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662220
14662237
Queuine tRNA-ribosyltransferase
Class of enzymes In enzymology, a queuine tRNA-ribosyltransferase (EC 2.4.2.29) is an enzyme that catalyzes the chemical reaction [tRNA]-guanine + queuine formula_0 [tRNA]-queuine + guanine Thus, the two substrates of this enzyme are tRNA-guanine and queuine, whereas its two products are [tRNA]-queuine and guanine. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is [tRNA]-guanine:queuine tRNA-D-ribosyltransferase. Other names in common use include tRNA-guanine transglycosylase, guanine insertion enzyme, tRNA transglycosylase, Q-insertase, queuine transfer ribonucleate ribosyltransferase, transfer ribonucleate glycosyltransferase, tRNA guanine transglycosidase, guanine, queuine-tRNA transglycosylase, and tRNA-guanine:queuine tRNA-D-ribosyltransferase. Structural studies. As of late 2007, 36 structures have been solved for this class of enzymes, with PDB accession codes 1EFZ, 1ENU, 1F3E, 1IQ8, 1IT7, 1IT8, 1J2B, 1K4G, 1K4H, 1N2V, 1OZM, 1OZQ, 1P0B, 1P0D, 1P0E, 1PUD, 1PXG, 1Q2R, 1Q2S, 1Q4W, 1Q63, 1Q65, 1Q66, 1R5Y, 1S38, 1S39, 1WKD, 1WKE, 1WKF, 1Y5V, 1Y5W, 1Y5X, 2ASH, 2BBF, 2QII, and 2QZR. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662237
1466225
Allee effect
Population phenomenon in biology The Allee effect is a phenomenon in biology characterized by a correlation between population size or density and the mean individual fitness (often measured as "per capita" population growth rate) of a population or species. History and background. Although the concept of Allee effect had no title at the time, it was first described in the 1930s by its namesake, Warder Clyde Allee. Through experimental studies, Allee was able to demonstrate that goldfish have a greater survival rate when there are more individuals within the tank. This led him to conclude that aggregation can improve the survival rate of individuals, and that cooperation may be crucial in the overall evolution of social structure. The term "Allee principle" was introduced in the 1950s, a time when the field of ecology was heavily focused on the role of competition among and within species. The classical view of population dynamics stated that due to competition for resources, a population will experience a reduced overall growth rate at higher density and increased growth rate at lower density. In other words, individuals in a population would be better off when there are fewer individuals around due to a limited amount of resources (see ). However, the concept of the Allee effect introduced the idea that the reverse holds true when the population density is low. Individuals within a species often require the assistance of another individual for more than simple reproductive reasons in order to persist. The most obvious example of this is observed in animals that hunt for prey or defend against predators as a group. Definition. The generally accepted definition of Allee effect is positive density dependence, or the positive correlation between population density and individual fitness. It is sometimes referred to as "undercrowding" and it is analogous (or even considered synonymous by some) to "depensation" in the field of fishery sciences. Listed below are a few significant subcategories of the Allee effect used in the ecology literature. Component vs. demographic Allee effects. The "component Allee effect" is the positive relationship between any measurable component of individual fitness and population density. The "demographic Allee effect" is the positive relationship between the overall individual fitness and population density. The distinction between the two terms lies on the scale of the Allee effect: the presence of a demographic Allee effect suggests the presence of at least one component Allee effect, while the presence of a component Allee effect does not necessarily result in a demographic Allee effect. For example, cooperative hunting and the ability to more easily find mates, both influenced by population density, are component Allee effects, as they influence individual fitness of the population. At low population density, these component Allee effects would add up to produce an overall demographic Allee effect (increased fitness with higher population density). When population density reaches a high number, negative density dependence often offsets the component Allee effects through resource competition, thus erasing the demographic Allee effect. Allee effects might occur even at high population density for some species. Strong vs. weak Allee effects. The "strong Allee effect" is a demographic Allee effect with a critical population size or density. The "weak Allee effect" is a demographic Allee effect without a critical population size or density. 
The distinction between the two terms is based on whether or not the population in question exhibits a critical population size or density. A population exhibiting a weak Allee effect will possess a reduced per capita growth rate (directly related to individual fitness of the population) at lower population density or size. However, even at this low population size or density, the population will always exhibit a positive per capita growth rate. Meanwhile, a population exhibiting a strong Allee effect will have a critical population size or density under which the population growth rate becomes negative. Therefore, when the population density or size hits a number below this threshold, the population will be destined for extinction without any further aid. A strong Allee effect is often easier to demonstrate empirically using time series data, as one can pinpoint the population size or density at which per capita growth rate becomes negative. Mechanisms. Due to its definition as the positive correlation between population density and average fitness, the mechanisms for which an Allee effect arises are therefore inherently tied to survival and reproduction. In general, these Allee effect mechanisms arise from cooperation or facilitation among individuals in the species. Examples of such cooperative behaviors include better mate finding, environmental conditioning, and group defense against predators. As these mechanisms are more-easily observable in the field, they tend to be more commonly associated with the Allee effect concept. Nevertheless, mechanisms of Allee effect that are less conspicuous such as inbreeding depression and sex ratio bias should be considered as well. Ecological mechanism. Although numerous ecological mechanisms for Allee effects exist, the list of most commonly cited facilitative behaviors that contribute to Allee effects in the literature include: mate limitation, cooperative defense, cooperative feeding, and environmental conditioning. While these behaviors are classified in separate categories, they can overlap and tend to be context dependent (will operate only under certain conditions – for example, cooperative defense will only be useful when there are predators or competitors present). Human induced. Classic economic theory predicts that human exploitation of a population is unlikely to result in species extinction because the escalating costs to find the last few individuals will exceed the fixed price one achieves by selling the individuals on the market. However, when rare species are more desirable than common species, prices for rare species can exceed high harvest costs. This phenomenon can create an "anthropogenic" Allee effect where rare species go extinct but common species are sustainably harvested. The anthropogenic Allee effect has become a standard approach for conceptualizing the threat of economic markets on endangered species. However, the original theory was posited using a one dimensional analysis of a two dimensional model. It turns out that a two dimensional analysis yields an Allee curve in human exploiter and biological population space and that this curve separating species destined to extinction vs persistence can be complicated. Even very high population sizes can potentially pass through the originally proposed Allee thresholds on predestined paths to extinction. Genetic mechanisms. 
Declines in population size can result in a loss of genetic diversity, and owing to genetic variation's role in the evolutionary potential of a species, this could in turn result in an observable Allee effect. As a species' population becomes smaller, its gene pool will be reduced in size as well. One possible outcome from this genetic bottleneck is a reduction in fitness of the species through the process of genetic drift, as well as inbreeding depression. This overall fitness decrease of a species is caused by an accumulation of deleterious mutations throughout the population. Genetic variation within a species could range from beneficial to detrimental. Nevertheless, in a smaller sized gene pool, there is a higher chance of a stochastic event in which deleterious alleles become fixed (genetic drift). While evolutionary theory states that expressed deleterious alleles should be purged through natural selection, purging would be most efficient only at eliminating alleles that are highly detrimental or harmful. Mildly deleterious alleles such as those that act later in life would be less likely to be removed by natural selection, and conversely, newly acquired beneficial mutations are more likely to be lost by random chance in smaller genetic pools than larger ones. Although the long-term population persistence of several species with low genetic variation has recently prompted debate on the generality of inbreeding depression, there are various empirical evidences for genetic Allee effects. One such case was observed in the endangered Florida panther ("Puma concolor coryi"). The Florida panther experienced a genetic bottleneck in the early 1990s where the population was reduced to ≈25 adult individuals. This reduction in genetic diversity was correlated with defects that include lower sperm quality, abnormal testosterone levels, cowlicks, and kinked tails. In response, a genetic rescue plan was put in motion and several female pumas from Texas were introduced into the Florida population. This action quickly led to the reduction in the prevalence of the defects previously associated with inbreeding depression. Although the timescale for this inbreeding depression is larger than of those more immediate Allee effects, it has significant implications on the long-term persistence of a species. Demographic stochasticity. Demographic stochasticity refers to variability in population growth arising from sampling random births and deaths in a population of finite size. In small populations, demographic stochasticity will decrease the population growth rate, causing an effect similar to the Allee effect, which will increase the risk of population extinction. Whether or not demographic stochasticity can be considered a part of Allee effect is somewhat contentious however. The most current definition of Allee effect considers the correlation between population density and mean individual fitness. Therefore, random variation resulting from birth and death events would not be considered part of Allee effect as the increased risk of extinction is not a consequence of the changing fates of individuals within the population. Meanwhile, when demographic stochasticity results in fluctuations of sex ratios, it arguably reduces the mean individual fitness as population declines. For example, a fluctuation in small population that causes a scarcity in one sex would in turn limit the access of mates for the opposite sex, decreasing the fitness of the individuals within the population. 
This type of Allee effect will likely be more prevalent in monogamous species than polygynous species. Effects on range-expanding populations. Demographic and mathematical studies demonstrate that the existence of an Allee effect can reduce the speed of range expansion of a population and can even prevent biological invasions. Recent results based on spatio-temporal models show that the Allee effect can also promote genetic diversity in expanding populations. These results counteract commonly held notions that the Allee effect possesses net adverse consequences. Reducing the growth rate of the individuals ahead of the colonization front simultaneously reduces the speed of colonization and enables a diversity of genes coming from the core of the population to remain on the front. The Allee effect also affects the spatial distribution of diversity. Whereas spatio-temporal models which do not include an Allee effect lead to a vertical pattern of genetic diversity (i.e., a strongly structured spatial distribution of genetic fractions), those including an Allee effect lead to a "horizontal pattern" of genetic diversity (i.e., an absence of genetic differentiation in space). Mathematical models. A simple mathematical example of an Allee effect is given by the cubic growth model. formula_0 where the population has a negative growth rate for formula_1, and a positive growth rate for formula_2 (assuming formula_3). This is a departure from the logistic growth equation formula_4 where "N" = population size; "r" = intrinsic rate of increase; "K" = carrying capacity; "A" = critical point; and "dN"/"dt" = rate of increase of the population. After dividing both sides of the equation by the population size N, in the logistic growth the left hand side of the equation represents the per capita population growth rate, which is dependent on the population size N, and decreases with increasing "N" throughout the entire range of population sizes. In contrast, when there is an Allee effect the per-capita growth rate increases with increasing "N" over some range of population sizes [0, "N"]. Spatio-temporal models can take Allee effect into account as well. A simple example is given by the reaction-diffusion model formula_5 where "D" = diffusion coefficient; formula_6one-dimensional Laplace operator. When a population is made up of small sub-populations additional factors to the Allee effect arise. If the sub-populations are subject to different environmental variations (i.e. separated enough that a disaster could occur at one sub-population site without affecting the other sub-populations) but still allow individuals to travel between sub-populations, then the individual sub-populations are more likely to go extinct than the total population. In the case of a catastrophic event decreasing numbers at a sub-population, individuals from another sub-population site may be able to repopulate the area. If all sub-populations are subject to the same environmental variations (i.e. if a disaster affected one, it would affect them all) then fragmentation of the population is detrimental to the population and increases extinction risk for the total population. In this case, the species receives none of the benefits of a small sub-population (loss of the sub-population is not catastrophic to the species as a whole) and all of the disadvantages (inbreeding depression, loss of genetic diversity and increased vulnerability to environmental instability) and the population would survive better unfragmented. 
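The threshold behaviour of the strong Allee effect in the cubic growth model above can be illustrated numerically. The following Python sketch integrates the cubic model with a simple explicit Euler scheme; the parameter values (r = 1, A = 20, K = 100), the initial densities and the time step are illustrative assumptions, not values taken from any particular study.

def cubic_growth_rate(N, r=1.0, A=20.0, K=100.0):
    # dN/dt = -r*N*(1 - N/A)*(1 - N/K): negative below A, positive between A and K
    return -r * N * (1 - N / A) * (1 - N / K)

def simulate(N0, t_max=10.0, dt=0.001):
    # Integrate the cubic model with the explicit Euler method and return N(t_max)
    N = N0
    for _ in range(int(t_max / dt)):
        N += cubic_growth_rate(N) * dt
        N = max(N, 0.0)  # a density cannot become negative
    return N

for N0 in (5.0, 19.0, 21.0, 60.0):
    print(f"N0 = {N0:5.1f} -> N(t=10) = {simulate(N0):6.1f}")
# Densities starting below the critical value A = 20 collapse toward 0,
# while those starting above A approach the carrying capacity K = 100.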
Allee principles of aggregation. Clumping results due to individuals aggregating in response to: local habitat or landscape differences, daily and seasonal weather changes, reproductive processes, or as the result of social attractions. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " \\frac{dN}{dt} = -r N \\left( 1 - \\frac{N}{A} \\right) \\left( 1 - \\frac{N}{K} \\right)," }, { "math_id": 1, "text": " 0< N < A " }, { "math_id": 2, "text": " A < N < K " }, { "math_id": 3, "text": " 0 < A < K " }, { "math_id": 4, "text": " \\frac{dN}{dt} = r N \\left( 1- \\frac{N}{K} \\right)" }, { "math_id": 5, "text": " \\frac{\\partial N}{\\partial t} =D \\frac{\\partial^2 N}{\\partial x^2}+ r N \\left( \\frac{N}{A} - 1 \\right) \\left( 1 - \\frac{N}{K} \\right)," }, { "math_id": 6, "text": "\\frac{\\partial^2}{\\partial x^2} ={}" } ]
https://en.wikipedia.org/wiki?curid=1466225
14662266
Raffinose—raffinose alpha-galactosyltransferase
Class of enzymes In enzymology, a raffinose-raffinose alpha-galactosyltransferase (EC 2.4.1.166) is an enzyme that catalyzes the chemical reaction 2 raffinose formula_0 1F-alpha-D-galactosylraffinose + sucrose Hence, this enzyme has one substrate, raffinose, and two products, 1F-alpha-D-galactosylraffinose and sucrose. This enzyme belongs to the family of glycosyltransferases, to be specific the hexosyltransferases. The systematic name of this enzyme class is raffinose:raffinose alpha-D-galactosyltransferase. Other names in common use include raffinose (raffinose donor) galactosyltransferase, raffinose:raffinose alpha-galactosyltransferase, and raffinose-raffinose alpha-galactotransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662266
14662288
Salicyl-alcohol beta-D-glucosyltransferase
Class of enzymes In enzymology, a salicyl-alcohol beta-D-glucosyltransferase (EC 2.4.1.172) is an enzyme that catalyzes the chemical reaction UDP-glucose + salicyl alcohol formula_0 UDP + salicin Thus, the two substrates of this enzyme are UDP-glucose and salicyl alcohol, whereas its two products are UDP and salicin. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:salicyl-alcohol beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-salicyl alcohol 2-glucosyltransferase, and UDPglucose:salicyl alcohol phenyl-glucosyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662288
14662330
Scopoletin glucosyltransferase
Class of enzymes In enzymology, a scopoletin glucosyltransferase (EC 2.4.1.128) is an enzyme that catalyzes the chemical reaction UDP-glucose + scopoletin formula_0 UDP + scopolin Thus, the two substrates of this enzyme are UDP-glucose and scopoletin, whereas its two products are UDP and scopolin. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:scopoletin O-beta-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-scopoletin glucosyltransferase, UDP-glucose:scopoletin glucosyltransferase, and SGTase. This enzyme participates in phenylpropanoid biosynthesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14662330
146625
Wreath product
Topic in group theory In group theory, the wreath product is a special combination of two groups based on the semidirect product. It is formed by the action of one group on many copies of another group, somewhat analogous to exponentiation. Wreath products are used in the classification of permutation groups and also provide a way of constructing interesting examples of groups. Given two groups formula_0 and formula_1 (sometimes known as the "bottom" and "top"), there exist two variants of the wreath product: the unrestricted wreath product formula_2 and the restricted wreath product formula_3. The general form, denoted by formula_4 or formula_5 respectively, requires that formula_1 acts on some set formula_6; when unspecified, usually formula_7 (a regular wreath product), though a different formula_6 is sometimes implied. The two variants coincide when formula_0, formula_1, and formula_6 are all finite. Either variant is also denoted as formula_8 (with \wr for the LaTeX symbol) or "A" ≀ "H" (Unicode U+2240). The notion generalizes to semigroups and, as such, is a central construction in the Krohn–Rhodes structure theory of finite semigroups. Definition. Let formula_0 be a group and let formula_1 be a group acting on a set formula_6 (on the left). The direct product formula_9 of formula_0 with itself indexed by formula_6 is the set of sequences formula_10 in formula_0, indexed by formula_6, with a group operation given by pointwise multiplication. The action of formula_1 on formula_6 can be extended to an action on formula_9 by "reindexing", namely by defining formula_11 for all formula_12 and all formula_13. Then the unrestricted wreath product formula_4 of formula_0 by formula_1 is the semidirect product formula_14 with the action of formula_1 on formula_9 given above. The subgroup formula_9 of formula_14 is called the base of the wreath product. The restricted wreath product formula_5 is constructed in the same way as the unrestricted wreath product except that one uses the direct sum as the base of the wreath product. In this case, the base consists of all sequences in formula_9 with finitely many non-identity entries. The two definitions coincide when formula_6 is finite. In the most common case, formula_7, and formula_1 acts on itself by left multiplication. In this case, the unrestricted and restricted wreath product may be denoted by formula_2 and formula_3 respectively. This is called the regular wreath product. Notation and conventions. The structure of the wreath product of "A" by "H" depends on the "H"-set Ω and in case Ω is infinite it also depends on whether one uses the restricted or unrestricted wreath product. However, in literature the notation used may be deficient and one needs to pay attention to the circumstances. Properties. Agreement of unrestricted and restricted wreath product on finite Ω. Since the finite direct product is the same as the finite direct sum of groups, it follows that the unrestricted "A" WrΩ "H" and the restricted wreath product "A" wrΩ "H" agree if Ω is finite. In particular this is true when Ω = "H" and "H" is finite. Subgroup. "A" wrΩ "H" is always a subgroup of "A" WrΩ "H". Cardinality. If "A", "H" and Ω are finite, then |"A"≀Ω"H"| = |"A"|^|Ω| · |"H"|. Universal embedding theorem. "Universal embedding theorem": If "G" is an extension of "A" by "H", then there exists a subgroup of the unrestricted wreath product "A"≀"H" which is isomorphic to "G". This is also known as the "Krasner–Kaloujnine embedding theorem". 
The Krohn–Rhodes theorem involves what is basically the semigroup equivalent of this. Canonical actions of wreath products. If the group "A" acts on a set Λ then there are two canonical ways to construct sets from Ω and Λ on which "A" WrΩ "H" (and therefore also "A" wrΩ "H") can act. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
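As a concrete illustration of the definition, the following Python sketch builds the regular wreath product with A = H = Z/2 (so Ω = H, and the base is indexed by H) and multiplies elements using the semidirect-product rule given above; the function and variable names are ad-hoc choices for this example, not part of any standard library.

from itertools import product

nA, nH = 2, 2            # base group A = Z/2, top group H = Z/2
Omega = range(nH)        # H acts on Omega = H by addition

def mult(x, y):
    # (a, h) * (b, k) = (a + h.b, h + k), where (h.b)_w = b_{(w - h) mod nH}
    a, h = x
    b, k = y
    shifted = tuple(b[(w - h) % nH] for w in Omega)  # reindex b by the action of h
    return (tuple((a[w] + shifted[w]) % nA for w in Omega), (h + k) % nH)

elements = [(a, h) for a in product(range(nA), repeat=nH) for h in range(nH)]
print(len(elements))     # 8, matching |A|^|Omega| * |H| = 2^2 * 2

x = ((1, 0), 0)
y = ((0, 0), 1)
print(mult(x, y), mult(y, x))  # the two products differ, so the group is non-abelian

The printed order 8 agrees with the cardinality formula above, and the non-commuting pair reflects the standard fact that Z/2 wr Z/2 is isomorphic to the dihedral group of order 8.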
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "A \\text{ Wr } H" }, { "math_id": 3, "text": "A \\text{ wr } H" }, { "math_id": 4, "text": "A \\text{ Wr}_{\\Omega} H" }, { "math_id": 5, "text": "A \\text{ wr}_{\\Omega} H" }, { "math_id": 6, "text": "\\Omega" }, { "math_id": 7, "text": "\\Omega = H" }, { "math_id": 8, "text": "A \\wr H" }, { "math_id": 9, "text": "A^{\\Omega}" }, { "math_id": 10, "text": "\\overline{a} = (a_{\\omega})_{\\omega \\in \\Omega}" }, { "math_id": 11, "text": " h \\cdot (a_{\\omega})_{\\omega \\in \\Omega} := (a_{h^{-1} \\cdot \\omega})_{\\omega \\in \\Omega}" }, { "math_id": 12, "text": "h \\in H" }, { "math_id": 13, "text": "(a_{\\omega})_{\\omega \\in \\Omega} \\in A^{\\Omega}" }, { "math_id": 14, "text": "A^{\\Omega} \\rtimes H" }, { "math_id": 15, "text": "((a_\\omega), h) \\cdot (\\lambda,\\omega') := (a_{h(\\omega')}\\lambda, h\\omega'). " }, { "math_id": 16, "text": "((a_\\omega), h) \\cdot (\\lambda_\\omega) := (a_{h^{-1}\\omega}\\lambda_{h^{-1}\\omega})." }, { "math_id": 17, "text": "\\mathbb{Z}_2 \\wr \\mathbb{Z}" }, { "math_id": 18, "text": "\\mathbb{Z}_m \\wr S_n" }, { "math_id": 19, "text": "\\mathbb{Z}_m^n = \\mathbb{Z}_m ... \\mathbb{Z}_m" }, { "math_id": 20, "text": "\\mathbb{Z}_m" }, { "math_id": 21, "text": "\\phi:S_n \\to \\text{Aut}(\\mathbb{Z}_m^n)" }, { "math_id": 22, "text": "S_2 \\wr S_n" }, { "math_id": 23, "text": "\\mathbb{Z}_2" }, { "math_id": 24, "text": "\\mathbb{Z}_2 \\wr \\mathbb{Z}_2" }, { "math_id": 25, "text": "n \\geq 1" }, { "math_id": 26, "text": "W_n = \\mathbb{Z}_p \\wr ... \\wr \\mathbb{Z}_p" }, { "math_id": 27, "text": "\\mathbb{Z}_p" }, { "math_id": 28, "text": "W_1:=\\mathbb{Z}_p" }, { "math_id": 29, "text": "W_k:=W_{k - 1} \\wr \\mathbb{Z}_p" }, { "math_id": 30, "text": "k \\geq 2" }, { "math_id": 31, "text": "(\\mathbb{Z}_3 \\wr S_8) \\times (\\mathbb{Z}_2 \\wr S_{12})" } ]
https://en.wikipedia.org/wiki?curid=146625
146630
Monolayer
A monolayer is a single, closely packed layer of entities, commonly atoms or molecules. Monolayers can also be made out of cells. "Self-assembled monolayers" form spontaneously on surfaces. Monolayers of layered crystals like graphene and molybdenum disulfide are generally called "2D materials". Types. A Langmuir monolayer or "insoluble monolayer" is a one-molecule thick layer of an insoluble organic material spread onto an aqueous subphase in a Langmuir-Blodgett trough. Traditional compounds used to prepare Langmuir monolayers are amphiphilic materials that possess a hydrophilic headgroup and a hydrophobic tail. Since the 1980s a large number of other materials have been employed to produce Langmuir monolayers, some of which are semi-amphiphilic, including polymeric, ceramic or metallic nanoparticles and macromolecules such as polymers. Langmuir monolayers are extensively studied for the fabrication of Langmuir-Blodgett films (LB films), which are formed by transferred monolayers on a solid substrate. A Gibbs monolayer or "soluble monolayer" is a monolayer formed by a compound that is soluble in one of the phases separated by the interface on which the monolayer is formed. Properties. Formation time. The monolayer formation time or monolayer time is the length of time required, on average, for a surface to be covered by an adsorbate, such as oxygen sticking to fresh aluminum. If the adsorbate has a unity sticking coefficient, so that every molecule which reaches the surface sticks to it without re-evaporating, then the monolayer time is very roughly: formula_0 where "t" is the time and "P" is the pressure. It takes about 1 second for a surface to be covered at a pressure of 300 μPa (2×10^-6 Torr). Monolayer phases and equations of state. A Langmuir monolayer can be compressed or expanded by modifying its area with a moving barrier in a Langmuir film balance. If the surface tension of the interface is measured during the compression, a "compression isotherm" is obtained. This isotherm shows the variation of surface pressure (formula_1, where formula_2 is the surface tension of the interface before the monolayer is formed) with the area (the inverse of surface concentration formula_3). It is analogous to a 3D process in which pressure varies with volume. A variety of bidimensional phases can be detected, each separated by a phase transition. During the phase transition, the surface pressure doesn't change, but the area does, just like during normal phase transitions volume changes but pressure doesn't. The 2D phases, in increasing pressure order, are the two-dimensional gas, liquid-expanded, liquid-condensed, and solid phases. If the area is further reduced once the solid phase has been reached, collapse occurs: the monolayer breaks and soluble aggregates and multilayers are formed. Gibbs monolayers also follow equations of state, which can be deduced from the Gibbs isotherm. Applications. Monolayers have a multitude of applications both at the air-water and at air-solid interfaces. Nanoparticle monolayers can be used to create functional surfaces that have for instance anti-reflective or superhydrophobic properties. Monolayers are frequently encountered in biology. A micelle is a monolayer, and the phospholipid bilayer structure of biological membranes is technically two monolayers. Langmuir monolayers are commonly used to mimic cell membranes to study the effects of pharmaceuticals or toxins. Cell culture. 
In cell culture, a monolayer refers to a layer of cells in which no cell is growing on top of another, but all are growing side by side and often touching each other on the same growth surface. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
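As a rough numerical illustration of the monolayer formation time estimate given above (t ≈ 3×10^-4 Pa·s / P for a unity sticking coefficient), the short Python sketch below evaluates the formula at a few example pressures; the chosen pressures are arbitrary illustrative values.

def monolayer_time(pressure_pa):
    # t ~ (3e-4 Pa*s) / P, assuming every arriving molecule sticks
    return 3e-4 / pressure_pa  # seconds

for p in (1.0, 3e-4, 1e-6, 1e-8):  # pressures in Pa
    print(f"P = {p:.0e} Pa -> t = {monolayer_time(p):.1e} s")
# At 3e-4 Pa (300 uPa) a surface is covered in about one second, while in
# ultra-high vacuum (1e-8 Pa) a clean surface survives for roughly 3e4 s.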
[ { "math_id": 0, "text": "t = \\frac{3 \\times 10^{-4} \\, \\mathrm{Pa} \\cdot \\mathrm{s}}{P}" }, { "math_id": 1, "text": "\\Pi = \\gamma^o - \\gamma " }, { "math_id": 2, "text": "\\gamma^o" }, { "math_id": 3, "text": "\\Gamma^{-1}" }, { "math_id": 4, "text": "\\Pi A = RT" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "\\gamma = \\gamma_o - mC" }, { "math_id": 7, "text": "\\Pi = \\Gamma R T" }, { "math_id": 8, "text": "\\Gamma = \\Gamma_{\\max} \\frac{C}{a+C}" }, { "math_id": 9, "text": "\\Pi = \\Gamma_{\\max}RT \\left(1+\\frac{C}{a}\\right)" } ]
https://en.wikipedia.org/wiki?curid=146630
14663012
Brocard circle
Circle constructed from a triangle In geometry, the Brocard circle (or seven-point circle) is a circle derived from a given triangle. It passes through the circumcenter and symmedian point of the triangle, and is centered at the midpoint of the line segment joining them (so that this segment is a diameter). Equation. In terms of the side lengths formula_0, formula_1, and formula_2 of the given triangle, and the areal coordinates formula_3 for points inside the triangle (where the formula_4-coordinate of a point is the area of the triangle made by that point with the side of length formula_0, etc), the Brocard circle consists of the points satisfying the equation formula_5 Related points. The two Brocard points lie on this circle, as do the vertices of the Brocard triangle. These five points, together with the other two points on the circle (the circumcenter and symmedian), justify the name "seven-point circle". The Brocard circle is concentric with the first Lemoine circle. Special cases. If the triangle is equilateral, the circumcenter and symmedian coincide and therefore the Brocard circle reduces to a single point. History. The Brocard circle is named for Henri Brocard, who presented a paper on it to the French Association for the Advancement of Science in Algiers in 1881. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
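The equation above can be checked symbolically. The short Python (sympy) sketch below verifies that the symmedian point and the circumcenter satisfy it; their homogeneous barycentric coordinates, (a^2 : b^2 : c^2) and (a^2(b^2+c^2-a^2) : b^2(c^2+a^2-b^2) : c^2(a^2+b^2-c^2)), are standard facts not stated in the article, and because the equation is homogeneous of degree two, unnormalized coordinates can be used in place of normalized areal coordinates.

from sympy import symbols, expand

a, b, c = symbols('a b c', positive=True)

def brocard(x, y, z):
    # Left-hand side of the Brocard circle equation in the coordinates (x, y, z)
    return (b**2*c**2*x**2 + a**2*c**2*y**2 + a**2*b**2*z**2
            - a**4*y*z - b**4*x*z - c**4*x*y)

# Symmedian point K = (a^2 : b^2 : c^2)
print(expand(brocard(a**2, b**2, c**2)))  # prints 0

# Circumcenter O = (a^2(b^2+c^2-a^2) : b^2(c^2+a^2-b^2) : c^2(a^2+b^2-c^2))
O = (a**2*(b**2 + c**2 - a**2),
     b**2*(c**2 + a**2 - b**2),
     c**2*(a**2 + b**2 - c**2))
print(expand(brocard(*O)))  # prints 0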
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "b" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "(x,y,z)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "b^2 c^2 x^2 + a^2 c^2 y^2 + a^2 b^2 z^2 - a^4 y z - b^4 x z - c^4 x y=0." } ]
https://en.wikipedia.org/wiki?curid=14663012
14663031
Sinapate 1-glucosyltransferase
Class of enzymes In enzymology, a sinapate 1-glucosyltransferase (EC 2.4.1.120) is an enzyme that catalyzes the chemical reaction: UDP-glucose + sinapate formula_0 UDP + 1-sinapoyl-D-glucose Thus, the two substrates of this enzyme are UDP-glucose and sinapate, whereas its two products are UDP and 1-sinapoyl-D-glucose. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:sinapate D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-sinapate glucosyltransferase, UDP-glucose:sinapic acid glucosyltransferase, and uridine 5'-diphosphoglucose-hydroxycinnamic acid acylglucosyltransferase. This enzyme participates in phenylpropanoid biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663031
14663053
(Skp1-protein)-hydroxyproline N-acetylglucosaminyltransferase
Enzyme In enzymology, a [Skp1-protein]-hydroxyproline N-acetylglucosaminyltransferase (EC 2.4.1.229) is an enzyme that catalyzes the chemical reaction UDP-N-acetylglucosamine + [Skp1-protein]-hydroxyproline formula_0 UDP + [Skp1-protein]-O-(N-acetyl-D-glucosaminyl)hydroxyproline Thus, the two substrates of this enzyme are UDP-N-acetylglucosamine and Skp1-protein-hydroxyproline, whereas its two products are UDP and Skp1-protein-O-(N-acetyl-D-glucosaminyl)hydroxyproline. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:[Skp1-protein]-hydroxyproline N-acetyl-D-glucosaminyl-transferase. Other names in common use include Skp1-HyPro GlcNAc-transferase, UDP-N-acetylglucosamine (GlcNAc):hydroxyproline polypeptide GlcNAc-transferase, UDP-GlcNAc:Skp1-hydroxyproline GlcNAc-transferase, and UDP-GlcNAc:hydroxyproline polypeptide GlcNAc-transferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663053
14663073
S-methyl-5'-thioadenosine phosphorylase
Class of enzymes In enzymology, a S-methyl-5'-thioadenosine phosphorylase (EC 2.4.2.28) is an enzyme that catalyzes the chemical reaction S-methyl-5'-thioadenosine + phosphate formula_0 adenine + S-methyl-5-thio-alpha-D-ribose 1-phosphate Thus, the two substrates of this enzyme are S-methyl-5'-thioadenosine and phosphate, whereas its two products are adenine and S-methyl-5-thio-alpha-D-ribose 1-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is S-methyl-5-thioadenosine:phosphate S-methyl-5-thio-alpha-D-ribosyl-transferase. Other names in common use include 5'-methylthioadenosine nucleosidase, 5'-deoxy-5'-methylthioadenosine phosphorylase, MTA phosphorylase, MeSAdo phosphorylase, MeSAdo/Ado phosphorylase, methylthioadenosine phosphorylase, methylthioadenosine nucleoside phosphorylase, 5'-methylthioadenosine:phosphate methylthio-D-ribosyl-transferase, and S-methyl-5-thioadenosine phosphorylase. This enzyme participates in methionine metabolism. Structural studies. As of late 2007, 20 structures have been solved for this class of enzymes, with PDB accession codes 1CB0, 1CG6, 1JDS, 1JDT, 1JDU, 1JDV, 1JDZ, 1JE0, 1JE1, 1JP7, 1JPV, 1K27, 1ODI, 1ODJ, 1ODK, 1SD1, 1SD2, 1V4N, 1WTA, and 2A8Y. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663073
14663103
Sn-glycerol-3-phosphate 1-galactosyltransferase
Class of enzymes In enzymology, a sn-glycerol-3-phosphate 1-galactosyltransferase (EC 2.4.1.96) is an enzyme that catalyzes the chemical reaction UDP-galactose + sn-glycerol 3-phosphate formula_0 UDP + alpha-D-galactosyl-(1,1')-sn-glycerol 3-phosphate Thus, the two substrates of this enzyme are UDP-galactose and sn-glycerol 3-phosphate, whereas its two products are UDP and alpha-D-galactosyl-(1,1')-sn-glycerol 3-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:sn-glycerol-3-phosphate 1-alpha-D-galactosyltransferase. Other names in common use include isofloridoside-phosphate synthase, UDP-Gal:sn-glycero-3-phosphoric acid 1-alpha-galactosyl-transferase, UDPgalactose:sn-glycerol-3-phosphate alpha-D-galactosyltransferase, uridine diphosphogalactose-glycerol phosphate galactosyltransferase, and glycerol 3-phosphate 1alpha-galactosyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663103
14663130
Sn-glycerol-3-phosphate 2-alpha-galactosyltransferase
Class of enzymes In enzymology, a sn-glycerol-3-phosphate 2-alpha-galactosyltransferase (EC 2.4.1.137) is an enzyme that catalyzes the chemical reaction UDP-galactose + sn-glycerol 3-phosphate formula_0 UDP + 2-(alpha-D-galactosyl)-sn-glycerol 3-phosphate Thus, the two substrates of this enzyme are UDP-galactose and sn-glycerol 3-phosphate, whereas its two products are UDP and 2-(alpha-D-galactosyl)-sn-glycerol 3-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:sn-glycerol-3-phosphate 2-alpha-D-galactosyltransferase. Other names in common use include floridoside-phosphate synthase, UDP-galactose:sn-glycerol-3-phosphate-2-D-galactosyl transferase, FPS, UDP-galactose, sn-3-glycerol phosphate:1->2' galactosyltransferase, floridoside phosphate synthetase, and floridoside phosphate synthase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663130
14663154
Sphingosine beta-galactosyltransferase
Class of enzymes In enzymology, a sphingosine beta-galactosyltransferase (EC 2.4.1.23) is an enzyme that catalyzes the chemical reaction UDP-galactose + sphingosine formula_0 UDP + psychosine Thus, the two substrates of this enzyme are UDP-galactose and sphingosine, whereas its two products are UDP and psychosine. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is "UDP-galactose:sphingosine 1-beta-galactosyltransferase". Other names in common use include "psychosine-UDP galactosyltransferase", "galactosyl-sphingosine transferase", "psychosine-uridine diphosphate galactosyltransferase", "UDP-galactose:sphingosine O-galactosyl transferase", "uridine diphosphogalactose-sphingosine beta-galactosyltransferase", and "UDP-galactose:sphingosine 1-beta-galactotransferase". This enzyme participates in sphingolipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663154
14663192
Starch synthase
Enzyme family In enzymology, a starch synthase (EC 2.4.1.21) is an enzyme that catalyzes the chemical reaction ADP-glucose + (1,4-alpha-D-glucosyl)n formula_0 ADP + (1,4-alpha-D-glucosyl)n+1 Thus, the two substrates of this enzyme are ADP-glucose and a chain of D-glucose residues joined by 1,4-alpha-glycosidic bonds, whereas its two products are ADP and an elongated chain of glucose residues. Plants use these enzymes in the biosynthesis of starch. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is ADP-glucose:1,4-alpha-D-glucan 4-alpha-D-glucosyltransferase. Other names in common use include ADP-glucose-starch glucosyltransferase, adenosine diphosphate glucose-starch glucosyltransferase, adenosine diphosphoglucose-starch glucosyltransferase, ADP-glucose starch synthase, ADP-glucose synthase, ADP-glucose transglucosylase, ADP-glucose-starch glucosyltransferase, ADPG starch synthetase, and ADPG-starch glucosyltransferase. Five isoforms seem to be present. GBSS is linked to amylose synthesis. The others are SS1, SS2, SS3 and SS4, which have different roles in amylopectin synthesis. New work implies that SS4 is important for granule initiation (Szydlowski et al., 2011). Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1RZU, 1RZV, 2BFW, and 2BIS. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663192
14663216
Steroid N-acetylglucosaminyltransferase
Class of enzymes In enzymology, a steroid N-acetylglucosaminyltransferase (EC 2.4.1.39) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine + estradiol-17alpha 3-D-glucuronoside formula_0 UDP + 17alpha-(N-acetyl-D-glucosaminyl)-estradiol 3-D-glucuronoside Thus, the two substrates of this enzyme are UDP-N-acetyl-D-glucosamine and estradiol-17alpha 3-D-glucuronoside, whereas its two products are UDP and 17alpha-(N-acetyl-D-glucosaminyl)-estradiol 3-D-glucuronoside. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:estradiol-17alpha-3-D-glucuronoside 17alpha-N-acetylglucosaminyltransferase. Other names in common use include hydroxy steroid acetylglucosaminyltransferase, steroid acetylglucosaminyltransferase, and uridine diphosphoacetylglucosamine-steroid acetylglucosaminyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663216
14663237
Sterol 3beta-glucosyltransferase
Class of enzymes In enzymology, a sterol 3beta-glucosyltransferase (EC 2.4.1.173) is an enzyme that catalyzes the chemical reaction UDP-glucose + a sterol formula_0 UDP + a sterol 3-beta-D-glucoside Thus, the two substrates of this enzyme are UDP-glucose and sterol, whereas its two products are UDP and sterol 3-beta-D-glucoside. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:sterol 3-O-beta-D-glucosyltransferase. Other names in common use include UDPG:sterol glucosyltransferase, UDP-glucose-sterol beta-glucosyltransferase, sterol:UDPG glucosyltransferase, UDPG-SGTase, uridine diphosphoglucose-poriferasterol glucosyltransferase, uridine diphosphoglucose-sterol glucosyltransferase, sterol glucosyltransferase, sterol-beta-D-glucosyltransferase, and UDP-glucose-sterol glucosyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663237
14663421
Sucrose—1,6-alpha-glucan 3(6)-alpha-glucosyltransferase
Class of enzymes In enzymology, a sucrose-1,6-alpha-glucan 3(6)-alpha-glucosyltransferase (EC 2.4.1.125) is an enzyme that catalyzes the chemical reaction sucrose + (1,6-alpha-D-glucosyl)n formula_0 D-fructose + (1,6-alpha-D-glucosyl)n+1 Thus, the two substrates of this enzyme are sucrose and (1,6-alpha-D-glucosyl)n, whereas its two products are D-fructose and (1,6-alpha-D-glucosyl)n+1. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is sucrose:1,6-alpha-D-glucan 3(6)-alpha-D-glucosyltransferase. Other names in common use include water-soluble-glucan synthase, GTF-S, sucrose-1,6-alpha-glucan 3(6)-alpha-glucosyltransferase, sucrose:1,6-alpha-D-glucan 3-alpha- and 6-alpha-glucosyltransferase, and sucrose:1,6-, 1,3-alpha-D-glucan 3-alpha- and 6-alpha-D-glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663421
14663446
Sucrose 6F-alpha-galactosyltransferase
Class of enzymes In enzymology, a sucrose 6F-alpha-galactosyltransferase (EC 2.4.1.167) is an enzyme that catalyzes the chemical reaction UDP-galactose + sucrose formula_0 UDP + 6F-alpha-D-galactosylsucrose Thus, the two substrates of this enzyme are UDP-galactose and sucrose, whereas its two products are UDP and 6F-alpha-D-galactosylsucrose. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:sucrose 6F-alpha-D-galactosyltransferase. Other names in common use include uridine diphosphogalactose-sucrose 6F-alpha-galactosyltransferase, UDPgalactose:sucrose 6fru-alpha-galactosyltransferase, and sucrose 6F-alpha-galactotransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663446
14663534
Thymidine phosphorylase
Enzyme Thymidine phosphorylase (EC 2.4.2.4) is an enzyme that is encoded by the TYMP gene and catalyzes the reaction: thymidine + phosphate formula_0 thymine + 2-deoxy-alpha-D-ribose 1-phosphate Thymidine phosphorylase is involved in purine metabolism, pyrimidine metabolism, and other metabolic pathways. Variations in thymidine phosphorylase and the "TYMP" gene that encodes it are associated with mitochondrial neurogastrointestinal encephalopathy (MNGIE) syndrome and bladder cancer. Nomenclature. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is thymidine:phosphate deoxy-alpha-D-ribosyltransferase. Other names in common use include pyrimidine phosphorylase, thymidine-orthophosphate deoxyribosyltransferase, animal growth regulators, blood platelet-derived endothelial cell growth factors, blood platelet-derived endothelial cell growth factor, deoxythymidine phosphorylase, gliostatins, pyrimidine deoxynucleoside phosphorylase, and thymidine:phosphate deoxy-D-ribosyltransferase. Mechanism. Thymidine phosphorylase catalyzes the reversible phosphorylation of thymidine, deoxyuridine, and their analogs (except deoxycytidine) to their respective bases (thymine/uracil) and 2-deoxyribose 1-phosphate. The enzyme follows a sequential mechanism, where phosphate binds before thymidine (or deoxyuridine, etc.) and 2-deoxyribose 1-phosphate leaves after the nitrogenous base. The thymidine is bound in a high-energy conformation, in which the glycosidic bond weakens as the phosphate attacks the C1 position of the ribose ring. The enzyme can then transfer deoxyribose 1-phosphate to other nitrogenous bases. Further experiments have shown that thymine inhibits the enzyme via both substrate inhibition and nonlinear product inhibition. This suggests that thymine can inhibit the enzyme via multiple sites. The enzyme also displays cooperativity with respect to both thymidine and phosphate in the presence of thymine, which suggests that thymidine phosphorylase has several allosteric and/or catalytic sites as well. Structure. Thymidine phosphorylase is a protein dimer with identical subunits, with a reported molecular weight of 90,000 daltons in Escherichia coli. It has an S-shape with a length of 110 Å and a width of 60 Å. Each monomer is composed of 440 amino acids and consists of a small α-helical domain and a large α/β domain. The surface of the enzyme is smooth except for a 10 Å deep and 8 Å wide cavity between the two domains that contains the thymine, thymidine, and phosphate binding sites. Detailed analysis of the binding sites shows that Arg-171, Ser-186, and Lys-190 are the important residues in binding the pyrimidine base. The residues Arg-171 and Lys-190 are close to O4 and O2 of the thymine ring, respectively, and can help stabilize the intermediate state. The terminal amino group of Lys-190, which forms a hydrogen bond with the 3′-hydroxyl of the thymidine ribose moiety, is also in place to donate a proton to thymine N1 during the intermediate state. As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1AZY, 1OTP, 1TPT, 1UOU, 2J0F, and 2TPT. Function. Thymidine phosphorylase plays a key role in pyrimidine salvage to recover nucleosides after DNA/RNA degradation. Although the reaction it catalyzes between thymidine/deoxyuridine and their respective bases is reversible, the enzyme's function is primarily catabolic. 
Recent research has found that thymidine phosphorylase is also involved in angiogenesis. Experiments show inhibition of the angiogenic effect of thymidine phosphorylase in the presence of 6-amino-5-chlorouracil, an inhibitor of thymidine phosphorylase, suggesting that the enzymatic activity of thymidine phosphorylase is required for its angiogenic activity. Thymidine phosphorylase has been determined to be almost identical to the platelet-derived endothelial cell growth factor (PD-ECGF). Although the mechanism of angiogenesis by thymidine phosphorylase is not yet known, reports show that the enzyme itself is not a growth factor but indirectly causes angiogenesis by stimulating chemotaxis of endothelial and other cells. Some reports suggest that thymidine phosphorylase promotes endothelial cell growth by reducing levels of thymidine that would otherwise inhibit endothelial cell growth. An alternative explanation is that the enzyme's products induce angiogenesis. Experiments have found that 2-deoxyribose is an endothelial-cell chemoattractant and angiogenesis-inducing factor, which supports this explanation. Research has found that thymidine phosphorylase is involved in angiogenesis during the menstrual cycle. The enzyme's expression in the endometrium is raised by a combination of progesterone and transforming growth factor-β1 and varies over the course of the menstrual cycle. Disease relevance. Mitochondrial neurogastrointestinal encephalomyopathy (MNGIE) is an autosomal recessive disorder caused by mutations in the thymidine phosphorylase (TP) gene. Because mitochondrial DNA (mtDNA) depends strongly on thymidine salvage (more so than nuclear DNA), it suffers damage from thymidine phosphorylase deficiency. In MNGIE disease, multiple deletions and depletion of mtDNA accumulate over time, leading to mitochondrial dysfunction. Symptoms of MNGIE disease include diarrhea and abdominal pain as a result of dysmotility, caused by neuromuscular dysfunction, as well as ptosis, ophthalmoparesis, peripheral neuropathy, and hearing loss. Thymidine phosphorylase has also been found to play a dual role in both cancer development and therapy. The enzyme's angiogenic activity promotes tumor growth, as supported by research showing much higher expression and activity of thymidine phosphorylase in malignant tumors (including carcinomas in the esophagus, stomach, colorectum, pancreas, and lung) than in adjacent non-neoplastic tissues. Thymidine phosphorylase in these carcinomas is up-regulated by the cytokines interferon-γ and TNF-α, which are released by inflammatory cells during wound healing. The enzyme is also up-regulated by low oxygen levels and low pH environments in order to control vascularization of hypoxic regions. However, thymidine phosphorylase has also been found to play an essential role in the activation of the anti-cancer drug capecitabine. Specifically, it converts the intermediate metabolite 5'-deoxy-5-fluorocytidine in tumors to 5-fluorouracil, which acts as a thymidylate synthase inhibitor. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663534
14663562
Trans-zeatin O-beta-D-glucosyltransferase
Class of enzymes In enzymology, a trans-zeatin O-beta-D-glucosyltransferase (EC 2.4.1.203) is an enzyme that catalyzes the chemical reaction UDP-glucose + trans-zeatin formula_0 UDP + O-beta-D-glucosyl-trans-zeatin Thus, the two substrates of this enzyme are UDP-glucose and trans-zeatin, whereas its two products are UDP and O-beta-D-glucosyl-trans-zeatin. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:trans-zeatin O-beta-D-glucosyltransferase. Other names in common use include zeatin O-beta-D-glucosyltransferase, uridine diphosphoglucose-zeatin O-glucosyltransferase, and zeatin O-glucosyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663562
14663576
Trehalose 6-phosphate phosphorylase
Class of enzymes In enzymology, a trehalose 6-phosphate phosphorylase (EC 2.4.1.216) is an enzyme that catalyzes the chemical reaction alpha,alpha-trehalose 6-phosphate + phosphate formula_0 glucose 6-phosphate + beta-D-glucose 1-phosphate The two substrates of this enzyme are alpha,alpha'-trehalose 6-phosphate and phosphate. Its two products are glucose 6-phosphate and beta-D-glucose 1-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is alpha,alpha-trehalose 6-phosphate:phosphate beta-D-glucosyltransferase. This enzyme is also called trehalose 6-phosphate:phosphate beta-D-glucosyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663576
14663597
TRNA-queuosine beta-mannosyltransferase
Class of enzymes In enzymology, a tRNA-queuosine beta-mannosyltransferase (EC 2.4.1.110) is an enzyme that catalyzes the chemical reaction GDP-mannose + tRNAAsp-queuosine formula_0 GDP + tRNAAsp-O-5"-beta-D-mannosylqueuosine Thus, the two substrates of this enzyme are GDP-mannose and tRNAAsp-queuosine, whereas its two products are GDP and tRNAAsp-O-5"-beta-D-mannosylqueuosine. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-mannose:tRNAAsp-queuosine O-5"-beta-D-mannosyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663597
14663616
Undecaprenyldiphospho-muramoylpentapeptide beta-N-acetylglucosaminyltransferase
Class of enzymes In enzymology, an undecaprenyldiphospho-muramoylpentapeptide beta-N-acetylglucosaminyltransferase (EC 2.4.1.227) is an enzyme that catalyzes the chemical reaction UDP-N-acetylglucosamine + Mur2Ac(oyl-L-Ala-gamma-D-Glu-L-Lys-D-Ala-D-Ala)-diphosphoundecaprenol formula_0 UDP + GlcNAc-(1->4)-Mur2Ac(oyl-L-Ala-gamma-D-Glu-L-Lys-D-Ala-D-Ala)-diphosphoundecaprenol The 2 substrates of this enzyme are UDP-N-acetylglucosamine and Mur2Ac(oyl-L-Ala-gamma-D-Glu-L-Lys-D-Ala-D-Ala)-diphosphoundecaprenol, whereas its 2 products are UDP and Lipid II. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine:N-acetyl-alpha-D-muramyl(oyl-L-Ala-gamma-D-Glu-L-Lys-D-Ala-D-Ala)-diphosphoundecaprenol beta-1,4-N-acetylglucosaminyltransferase. Another name in common use is MurG transferase. This enzyme participates in peptidoglycan biosynthesis. Variant reactions producing modified cell walls also occur (these are not mutually exclusive). References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663616
14663654
Undecaprenyl-phosphate mannosyltransferase
Class of enzymes In enzymology, an undecaprenyl-phosphate mannosyltransferase (EC 2.4.1.54) is an enzyme that catalyzes the chemical reaction GDP-mannose + undecaprenyl phosphate formula_0 GDP + D-mannosyl-1-phosphoundecaprenol Thus, the two substrates of this enzyme are GDP-mannose and undecaprenyl phosphate, whereas its two products are GDP and D-mannosyl-1-phosphoundecaprenol. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-mannose:undecaprenyl-phosphate D-mannosyltransferase. Other names in common use include guanosine diphosphomannose-undecaprenyl phosphate mannosyltransferase, GDP mannose-undecaprenyl phosphate mannosyltransferase, and GDP-D-mannose:lipid phosphate transmannosylase. It employs one cofactor, phosphatidylglycerol. Sources of this enzyme include "Micrococcus luteus", "Phaseolus aureus", "Mycobacterium smegmatis" and cotton fibers. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663654
14663700
Urate-ribonucleotide phosphorylase
Enzyme in purine metabolism In enzymology, an urate-ribonucleotide phosphorylase (EC 2.4.2.16) is an enzyme that catalyzes the chemical reaction urate D-ribonucleotide + phosphate formula_0 urate + alpha-D-ribose 1-phosphate Thus, the two substrates of this enzyme are urate D-ribonucleotide and phosphate, whereas its two products are urate and alpha-D-ribose 1-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is urate-ribonucleotide:phosphate alpha-D-ribosyltransferase. Other names in common use include UAR phosphorylase, and urate-ribonucleotide:phosphate D-ribosyltransferase. This enzyme participates in purine metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663700
14663739
Uridine phosphorylase
Class of enzymes In enzymology, an uridine phosphorylase (EC 2.4.2.3) is an enzyme that catalyzes the chemical reaction uridine + phosphate formula_0 uracil + alpha-D-ribose 1-phosphate Thus, the two substrates of this enzyme are uridine and phosphate, whereas its two products are uracil and alpha-D-ribose 1-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is uridine:phosphate alpha-D-ribosyltransferase. Other names in common use include pyrimidine phosphorylase, UrdPase, UPH, and UPase. This enzyme participates in pyrimidine metabolism. Structural studies. As of late 2007, 27 structures have been solved for this class of enzymes, with PDB accession codes 1K3F, 1LX7, 1RXC, 1RXS, 1RXU, 1RXY, 1RYZ, 1SJ9, 1SQ6, 1T0U, 1TGV, 1TGY, 1U1C, 1U1D, 1U1E, 1U1F, 1U1G, 1Y1Q, 1Y1R, 1Y1S, 1Y1T, 1ZL2, 2HN9, 2HRD, 2HSW, 2HWU, and 2I8A. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663739
14663765
Vitexin beta-glucosyltransferase
Class of enzymes In enzymology, a vitexin beta-glucosyltransferase (EC 2.4.1.105) is an enzyme that catalyzes the chemical reaction UDP-glucose + vitexin formula_0 UDP + vitexin 2"-O-beta-D-glucoside Thus, the two substrates of this enzyme are UDP-glucose and vitexin, whereas its two products are UDP and vitexin 2"-O-beta-D-glucoside. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:vitexin 2"-O-beta-D-glucosyltransferase. This enzyme is also called uridine diphosphoglucose-vitexin 2"-glucosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663765
14663795
Vomilenine glucosyltransferase
Class of enzymes In enzymology, a vomilenine glucosyltransferase (EC 2.4.1.219) is an enzyme that catalyzes the chemical reaction UDP-glucose + vomilenine formula_0 UDP + raucaffricine Thus, the two substrates of this enzyme are UDP-glucose and vomilenine, whereas its two products are UDP and raucaffricine. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:vomilenine 21-O-beta-D-glucosyltransferase. This enzyme is also called UDPG:vomilenine 21-beta-D-glucosyltransferase. This enzyme participates in indole and ipecac alkaloid biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663795
14663821
Xanthine phosphoribosyltransferase
Class of enzymes In enzymology, a xanthine phosphoribosyltransferase (EC 2.4.2.22) is an enzyme that catalyzes the chemical reaction XMP + diphosphate formula_0 5-phospho-alpha-D-ribose 1-diphosphate + xanthine Thus, the two substrates of this enzyme are XMP and diphosphate, whereas its two products are 5-phospho-alpha-D-ribose 1-diphosphate and xanthine. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is XMP:diphosphate 5-phospho-alpha-D-ribosyltransferase. Other names in common use include Xan phosphoribosyltransferase, xanthosine 5'-phosphate pyrophosphorylase, xanthylate pyrophosphorylase, xanthylic pyrophosphorylase, XMP pyrophosphorylase, 5-phospho-alpha-D-ribose-1-diphosphate:xanthine phospho-D-ribosyltransferase, and 9-(5-phospho-beta-D-ribosyl)xanthine:diphosphate 5-phospho-alpha-D-ribosyltransferase. This enzyme participates in purine metabolism. Structural studies. As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1A95, 1A96, 1A97, 1A98, 1NUL, and 2FXV. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663821
14663848
Xylosylprotein 4-beta-galactosyltransferase
Class of enzymes In enzymology, a xylosylprotein 4-beta-galactosyltransferase (EC 2.4.1.133) is an enzyme that catalyzes the chemical reaction UDP-galactose + O-beta-D-xylosylprotein formula_0 UDP + 4-beta-D-galactosyl-O-beta-D-xylosylprotein Thus, the two substrates of this enzyme are UDP-galactose and O-beta-D-xylosylprotein, whereas its two products are UDP and 4-beta-D-galactosyl-O-beta-D-xylosylprotein. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-galactose:O-beta-D-xylosylprotein 4-beta-D-galactosyltransferase. Other names in common use include UDP-D-galactose:D-xylose galactosyltransferase, UDP-D-galactose:xylose galactosyltransferase, galactosyltransferase I, and uridine diphosphogalactose-xylose galactosyltransferase. This enzyme participates in chondroitin sulfate biosynthesis and glycan structures - biosynthesis 1. It employs one cofactor, manganese. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663848
14663884
Zeatin O-beta-D-xylosyltransferase
Class of enzymes In enzymology, a zeatin O-beta-D-xylosyltransferase (EC 2.4.2.40) is an enzyme that catalyzes the chemical reaction UDP-D-xylose + zeatin formula_0 UDP + O-beta-D-xylosylzeatin Thus, the two substrates of this enzyme are UDP-D-xylose and zeatin, whereas its two products are UDP and O-beta-D-xylosylzeatin. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is UDP-D-xylose:zeatin O-beta-D-xylosyltransferase. Other names in common use include uridine diphosphoxylose-zeatin xylosyltransferase, and zeatin O-xylosyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14663884
14664078
Harris affine region detector
In the fields of computer vision and image analysis, the Harris affine region detector belongs to the category of feature detection. Feature detection is a preprocessing step of several algorithms that rely on identifying characteristic points or interest points so as to make correspondences between images, recognize textures, categorize objects or build panoramas. Overview. The Harris affine detector can identify similar regions between images that are related through affine transformations and have different illuminations. These "affine-invariant" detectors should be capable of identifying similar regions in images taken from different viewpoints that are related by a simple geometric transformation: scaling, rotation and shearing. These detected regions have been called both "invariant" and "covariant". On one hand, the regions are detected "invariant" of the image transformation, but the regions "covariantly" change with image transformation. Do not dwell too much on these two naming conventions; the important thing to understand is that the design of these interest points will make them compatible across images taken from several viewpoints. Other detectors that are affine-invariant include the Hessian affine region detector, maximally stable extremal regions, the Kadir–Brady saliency detector, edge-based regions (EBR) and intensity-extrema-based regions (IBR). Mikolajczyk and Schmid (2002) first described the Harris affine detector as it is used today in "An Affine Invariant Interest Point Detector". Earlier works in this direction include the use of affine shape adaptation by Lindeberg and Garding for computing affine invariant image descriptors and in this way reducing the influence of perspective image deformations, the use of affine-adapted feature points for wide baseline matching by Baumberg, and the first use of scale invariant feature points by Lindeberg; see the references for an overview of the theoretical background. The Harris affine detector relies on the combination of corner points detected through Harris corner detection, multi-scale analysis through Gaussian scale space and affine normalization using an iterative affine shape adaptation algorithm. The algorithm follows an iterative approach to detecting these regions: Algorithm description. Harris–Laplace detector (initial region points). The Harris affine detector relies heavily on both the Harris measure and a Gaussian scale space representation. Therefore, a brief examination of both follows. For a more exhaustive derivation, see corner detection and Gaussian scale space or their associated papers. Harris corner measure. The Harris corner detector algorithm relies on a central principle: at a corner, the image intensity will change significantly in multiple directions. This can alternatively be formulated by examining the changes of intensity due to shifts in a local window. Around a corner point, the image intensity will change greatly when the window is shifted in an arbitrary direction. Following this intuition and through a clever decomposition, the Harris detector uses the second moment matrix as the basis of its corner decisions. (See corner detection for a more complete derivation.) The matrix formula_0 has also been called the autocorrelation matrix and has values closely related to the derivatives of image intensity.
formula_1 where formula_2 and formula_3 are the respective derivatives (of pixel intensity) in the formula_4 and formula_5 direction at point (formula_6,formula_7); formula_6 and formula_7 are the position parameters of the weighting function w. The off-diagonal entries are the product of formula_2 and formula_3, while the diagonal entries are squares of the respective derivatives. The weighting function formula_8 can be uniform, but is more typically an isotropic, circular Gaussian, formula_9 that acts to average in a local region while weighting those values near the center more heavily. As it turns out, this formula_0 matrix describes the shape of the autocorrelation measure as due to shifts in window location. Thus, if we let formula_10 and formula_11 be the eigenvalues of formula_0, then these values will provide a quantitative description of how the autocorrelation measure changes in space: its principal curvatures. As Harris and Stephens (1988) point out, the formula_0 matrix centered on corner points will have two large, positive eigenvalues. Rather than extracting these eigenvalues using methods like singular value decomposition, the Harris measure based on the trace and determinant is used: formula_12 where formula_13 is a constant. Corner points have large, positive eigenvalues and would thus have a large Harris measure. Thus, corner points are identified as local maxima of the Harris measure that are above a specified threshold. formula_14 where formula_15 are the set of all corner points, formula_16 is the Harris measure calculated at formula_4, formula_17 is an 8-neighbor set centered on formula_18 and formula_19 is a specified threshold. Gaussian scale-space. A Gaussian scale space representation of an image is the set of images that result from convolving a Gaussian kernel of various sizes with the original image. In general, the representation can be formulated as: formula_20 where formula_21 is an isotropic, circular Gaussian kernel as defined above. The convolution with a Gaussian kernel smooths the image using a window the size of the kernel. A larger scale, formula_22, corresponds to a smoother resultant image. Mikolajczyk and Schmid (2001) point out that derivatives and other measurements must be normalized across scales. A derivative of order formula_23, formula_24, must be normalized by a factor formula_25 in the following manner: formula_26 These derivatives, or any arbitrary measure, can be adapted to a scale space representation by calculating this measure using a set of scales recursively where the formula_27th scale is formula_28. See scale space for a more complete description. Combining Harris detector across Gaussian scale-space. The Harris–Laplace detector combines the traditional 2D Harris corner detector with the idea of a Gaussian scale space representation in order to create a scale-invariant detector. Harris-corner points are good starting points because they have been shown to have good rotational and illumination invariance in addition to identifying the interesting points of the image. However, the points are not scale invariant and thus the second-moment matrix must be modified to reflect a scale-invariant property. Let us denote, formula_29 as the scale adapted second-moment matrix used in the Harris–Laplace detector. formula_30 where formula_31 is the Gaussian kernel of scale formula_32 and formula_33. Similar to the Gaussian-scale space, formula_34 is the Gaussian-smoothed image. The formula_35 operator denotes convolution. 
formula_36 and formula_37 are the derivatives in their respective direction applied to the smoothed image and calculated using a Gaussian kernel with scale formula_38. In terms of our Gaussian scale-space framework, the formula_32 parameter determines the current scale at which the Harris corner points are detected. Building upon this scale-adapted second-moment matrix, the Harris–Laplace detector is a twofold process: applying the Harris corner detector at multiple scales and automatically choosing the "characteristic scale". Multi-scale Harris corner points. The algorithm searches over a fixed number of predefined scales. This set of scales is defined as: formula_39 Mikolajczyk and Schmid (2004) use formula_40. For each integration scale, formula_32, chosen from this set, the appropriate differentiation scale is chosen to be a constant factor of the integration scale: formula_41. Mikolajczyk and Schmid (2004) used formula_42. Using these scales, the interest points are detected using a Harris measure on the formula_43 matrix. The "cornerness," like the typical Harris measure, is defined as: formula_44 Like the traditional Harris detector, corner points are those local (8 point neighborhood) maxima of the "cornerness" that are above a specified threshold. Characteristic scale identification. An iterative algorithm based on Lindeberg (1998) both spatially localizes the corner points and selects the "characteristic scale". The iterative search has three key steps, that are carried for each point formula_45 that were initially detected at scale formula_32 by the multi-scale Harris detector (formula_46 indicates the formula_47 iteration): formula_52 where formula_53 and formula_54 are the second derivatives in their respective directions. The formula_55 factor (as discussed above in Gaussian scale-space) is used to normalize the LoG across scales and make these measures comparable, thus making a maximum relevant. Mikolajczyk and Schmid (2001) demonstrate that the LoG measure attains the highest percentage of correctly detected corner points in comparison to other scale-selection measures. The scale which maximizes this LoG measure in the "two scale-space" neighborhood is deemed the characteristic scale, formula_48, and used in subsequent iterations. If no extrema, or maxima of the LoG is found, this point is discarded from future searches. If the stopping criterion is not met, then the algorithm repeats from step 1 using the new formula_59 points and scale. When the stopping criterion is met, the found points represent those that maximize the LoG across scales (scale selection) and maximize the Harris corner measure in a local neighborhood (spatial selection). Affine-invariant points. Mathematical theory. The Harris–Laplace detected points are scale invariant and work well for isotropic regions that are viewed from the same viewing angle. In order to be invariant to arbitrary affine transformations (and viewpoints), the mathematical framework must be revisited. The second-moment matrix formula_60 is defined more generally for anisotropic regions: formula_61 where formula_62 and formula_63 are covariance matrices defining the differentiation and the integration Gaussian kernel scales. Although this may look significantly different from the second-moment matrix in the Harris–Laplace detector; it is in fact, identical. 
The earlier formula_64 matrix was the 2D-isotropic version in which the covariance matrices formula_62 and formula_63 were 2x2 identity matrices multiplied by factors formula_32 and formula_38, respectively. In the new formulation, one can think of Gaussian kernels as multivariate Gaussian distributions as opposed to a uniform Gaussian kernel. A uniform Gaussian kernel can be thought of as an isotropic, circular region. Similarly, a more general Gaussian kernel defines an ellipsoid. In fact, the eigenvectors and eigenvalues of the covariance matrix define the rotation and size of the ellipsoid. Thus we can easily see that this representation allows us to completely define an arbitrary elliptical affine region over which we want to integrate or differentiate. The goal of the affine invariant detector is to identify regions in images that are related through affine transformations. We thus consider a point formula_65 and the transformed point formula_66, where A is an affine transformation. In the case of images, both formula_67 and formula_65 live in formula_68 space. The second-moment matrices are related in the following manner: formula_69 where formula_70 and formula_71 are the covariance matrices for the formula_72 reference frame. If we continue with this formulation and enforce that formula_73 where formula_32 and formula_38 are scalar factors, one can show that the covariance matrices for the related point are similarly related: formula_74 By requiring the covariance matrices to satisfy these conditions, several nice properties arise. One of these properties is that the square root of the second-moment matrix, formula_75, will transform the original anisotropic region into isotropic regions that are related simply through a pure rotation matrix formula_76. These new isotropic regions can be thought of as a normalized reference frame. The following equations formulate the relation between the normalized points formula_77 and formula_78: formula_79 The rotation matrix can be recovered using gradient methods like those in the SIFT descriptor. As discussed with the Harris detector, the eigenvalues and eigenvectors of the second-moment matrix, formula_80, characterize the curvature and shape of the pixel intensities. That is, the eigenvector associated with the largest eigenvalue indicates the direction of largest change and the eigenvector associated with the smallest eigenvalue defines the direction of least change. In the 2D case, the eigenvectors and eigenvalues define an ellipse. For an isotropic region, the region should be circular in shape and not elliptical. This is the case when the eigenvalues have the same magnitude. Thus a measure of the isotropy around a local region is defined as the following: formula_81 where formula_82 denotes the eigenvalues. This measure has the range formula_83. A value of formula_84 corresponds to perfect isotropy. Iterative algorithm. Using this mathematical framework, the Harris affine detector algorithm iteratively discovers the second-moment matrix that transforms the anisotropic region into a normalized region in which the isotropic measure is sufficiently close to one. The algorithm uses this "shape adaptation matrix", formula_85, to transform the image into a normalized reference frame. In this normalized space, the interest points' parameters (spatial location, integration scale and differentiation scale) are refined using methods similar to the Harris–Laplace detector.
The second-moment matrix is computed in this normalized reference frame and should have an isotropic measure close to one at the final iteration. At every formula_46th iteration, each interest region is defined by several parameters that the algorithm must discover: the formula_86 matrix, position formula_87, integration scale formula_88 and differentiation scale formula_89. Because the detector computes the second-moment matrix in the transformed domain, it's convenient to denote this transformed position as formula_90 where formula_91. Computation and implementation. The computational complexity of the Harris-affine detector is broken into two parts: initial point detection and affine region normalization. The initial point detection algorithm, Harris–Laplace, has complexity formula_92 where formula_27 is the number of pixels in the image. The affine region normalization algorithm automatically detects the scale and estimates the "shape adaptation matrix", formula_85. This process has complexity formula_93, where formula_6 is the number of initial points, formula_23 is the size of the search space for the automatic scale selection and formula_46 is the number of iterations required to compute the formula_85 matrix. Some methods exist to reduce the complexity of the algorithm at the expense of accuracy. One method is to eliminate the search in the differentiation scale step. Rather than choose a factor formula_22 from a set of factors, the sped-up algorithm chooses the scale to be constant across iterations and points: formula_94. Although this reduction in search space might decrease the complexity, this change can severely affect the convergence of the formula_85 matrix. Analysis. Convergence. One can imagine that this algorithm might identify duplicate interest points at multiple scales. Because the Harris affine algorithm looks at each initial point given by the Harris–Laplace detector independently, there is no discrimination between identical points. In practice, it has been shown that these points will ultimately all converge to the same interest point. After identifying all interest points, the algorithm accounts for duplicates by comparing the spatial coordinates (formula_45), the integration scale formula_32, the isotropic measure formula_95 and skew. If these interest point parameters are similar within a specified threshold, then they are labeled duplicates. The algorithm discards all these duplicate points except for the interest point that's closest to the average of the duplicates. Typically 30% of the Harris affine points are distinct and dissimilar enough to not be discarded. Mikolajczyk and Schmid (2004) showed that often the initial points (40%) do not converge. The algorithm detects this divergence by stopping the iterative algorithm if the inverse of the isotropic measure is larger than a specified threshold: formula_96. Mikolajczyk and Schmid (2004) use formula_97. Of those that did converge, the typical number of required iterations was 10. Quantitative measure. Quantitative analysis of affine region detectors takes into account both the accuracy of point locations and the overlap of regions across two images. Mikolajczyk and Schmid (2004) extend the repeatability measure of Schmid et al. (1998) as the ratio of point correspondences to minimum detected points of the two images. formula_98 where formula_99 is the number of corresponding points in images formula_0 and formula_100.
formula_101 and formula_102 are the number of detected points in the respective images. Because each image represents 3D space, it might be the case that one image contains objects that are not in the second image and thus whose interest points have no chance of corresponding. In order to make the repeatability measure valid, one must remove these points and only consider points that lie in both images; formula_102 and formula_101 only count those points such that formula_103. For a pair of images related through a homography matrix formula_104, two points, formula_105 and formula_106, are said to correspond if: Robustness to affine and other transformations. Mikolajczyk et al. (2005) have done a thorough analysis of several state-of-the-art affine region detectors: Harris affine, Hessian affine, MSER, IBR & EBR and salient detectors. Mikolajczyk et al. analyzed both structured images and textured images in their evaluation. Linux binaries of the detectors and their test images are freely available at their webpage. A brief summary of the results of Mikolajczyk et al. (2005) follows; see "A comparison of affine region detectors" for a more quantitative analysis.
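As a concrete illustration of the scale-adapted Harris measure described above, the following sketch computes the "cornerness" over a small set of scales with NumPy and SciPy. It is an assumption-laden toy rather than Mikolajczyk and Schmid's reference implementation: the function names, the factor s = 0.7, alpha = 0.04, the scale set and the threshold are illustrative choices echoing the constants quoted in the article, and the characteristic-scale search and the iterative affine normalization are omitted.

```python
# Illustrative sketch of the scale-adapted Harris ("cornerness") measure.
# Assumes a grayscale image given as a 2-D float NumPy array with values
# roughly in [0, 1]; not a reference implementation of the full detector.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def scale_adapted_harris(img, sigma_i, s=0.7, alpha=0.04):
    """Harris measure at integration scale sigma_i, differentiation scale s*sigma_i."""
    sigma_d = s * sigma_i
    # First derivatives of the image smoothed at the differentiation scale.
    Lx = gaussian_filter(img, sigma_d, order=(0, 1))   # derivative along columns (x)
    Ly = gaussian_filter(img, sigma_d, order=(1, 0))   # derivative along rows (y)
    # Entries of the second-moment matrix, averaged at the integration scale
    # and weighted by sigma_d**2 as in the scale-adapted formulation above.
    w = sigma_d ** 2
    Ixx = w * gaussian_filter(Lx * Lx, sigma_i)
    Iyy = w * gaussian_filter(Ly * Ly, sigma_i)
    Ixy = w * gaussian_filter(Lx * Ly, sigma_i)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - alpha * trace ** 2

def multiscale_harris_points(img, sigma0=1.0, k=1.4, n_scales=5, threshold=1e-6):
    """Collect (row, col, sigma_i) triples that are local maxima of the measure."""
    points = []
    for n in range(n_scales):
        sigma_i = sigma0 * k ** n
        R = scale_adapted_harris(img, sigma_i)
        # 8-neighbour local maxima above an (image-dependent) threshold.
        local_max = (R == maximum_filter(R, size=3)) & (R > threshold)
        for r, c in zip(*np.nonzero(local_max)):
            points.append((r, c, sigma_i))
    return points
```

In a full detector, each point returned here would then be refined by the Laplacian-of-Gaussian characteristic-scale search and the shape adaptation loop described in the preceding sections.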
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "A(\\mathbf{x}) = \\sum_{p,q} w(p,q)\n\\begin{bmatrix}\nI_x^2(p,q) & I_x I_y(p,q) \\\\\nI_x I_y(p,q) & I_y^2(p,q)\\\\ \n\\end{bmatrix}\n" }, { "math_id": 2, "text": "I_{x}" }, { "math_id": 3, "text": "I_{y}" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "w(x,y)" }, { "math_id": 9, "text": "w(x,y) = g(x,y,\\sigma) = \\frac{1}{2\\pi \\sigma ^2} e^{ \\left (-\\frac{ x^2 + y^2}{2\\sigma ^2} \\right )}" }, { "math_id": 10, "text": " \\lambda_1 " }, { "math_id": 11, "text": " \\lambda_2 " }, { "math_id": 12, "text": "\nR = \\det(A) - \\alpha \\operatorname{trace}^2(A) = \\lambda_1 \\lambda_2 - \\alpha (\\lambda_1 + \\lambda_2)^2\n" }, { "math_id": 13, "text": "\\alpha" }, { "math_id": 14, "text": "\\begin{align}\n\\{x_c\\} = \\big\\{ x_c \\mid R(x_c) > R(x_i), \\forall x_i \\in W(x_c) \\big\\}, \\\\\nR(x_c) > t_\\text{threshold}\n\\end{align}\n" }, { "math_id": 15, "text": " \\{x_c\\}" }, { "math_id": 16, "text": "R(x)" }, { "math_id": 17, "text": "W(x_c)" }, { "math_id": 18, "text": "x_c" }, { "math_id": 19, "text": "t_\\text{threshold}" }, { "math_id": 20, "text": "\nL(\\mathbf{x},s) = G(s) \\otimes I(\\mathbf{x})\n" }, { "math_id": 21, "text": "G(s)" }, { "math_id": 22, "text": "s" }, { "math_id": 23, "text": "m" }, { "math_id": 24, "text": "D_{i_1, ... i_m}" }, { "math_id": 25, "text": "s^m" }, { "math_id": 26, "text": "\nD_{i_1, \\dots, i_m}(\\mathbf{x},s) = s^m L_{i_1, \\dots, i_m}(\\mathbf{x},s)\n" }, { "math_id": 27, "text": "n" }, { "math_id": 28, "text": "s_n = k^n s_0 " }, { "math_id": 29, "text": "M = \\mu(\\mathbf{x}, \\sigma_{\\mathit{I}}, \\sigma_{\\mathit{D}})" }, { "math_id": 30, "text": "\nM = \\mu(\\mathbf{x}, \\sigma_{\\mathit{I}}, \\sigma_{\\mathit{D}}) =\n\\sigma_D^2 g(\\sigma_I) \\otimes\n\\begin{bmatrix}\nL_{x}^2(\\mathbf{x}, \\sigma_{D}) & L_{x}L_{y}(\\mathbf{x}, \\sigma_{D}) \\\\\nL_{x}L_{y}(\\mathbf{x}, \\sigma_{D}) & L_{y}^2(\\mathbf{x}, \\sigma_{D})\n\\end{bmatrix}\n" }, { "math_id": 31, "text": "g(\\sigma_I)" }, { "math_id": 32, "text": "\\sigma_I" }, { "math_id": 33, "text": "\\mathbf{x} = (x,y)" }, { "math_id": 34, "text": "L(\\mathbf{x})" }, { "math_id": 35, "text": "\\mathbf{\\otimes}" }, { "math_id": 36, "text": "L_{x}(\\mathbf{x},\\sigma_{D})" }, { "math_id": 37, "text": "L_{y}(\\mathbf{x}, \\sigma_{D})" }, { "math_id": 38, "text": "\\sigma_D" }, { "math_id": 39, "text": "\n{\\sigma_1 \\dots \\sigma_n} = {k^{1}\\sigma_0 \\dots k^{n}\\sigma_0}\n" }, { "math_id": 40, "text": "k = 1.4" }, { "math_id": 41, "text": "\\sigma_D = s\\sigma_I" }, { "math_id": 42, "text": "s = 0.7" }, { "math_id": 43, "text": " \\mu(\\mathbf{x}, \\sigma_{\\mathit{I}}, \\sigma_{\\mathit{D}})" }, { "math_id": 44, "text": "\n\\mathit{cornerness} = \\det(\\mu(\\mathbf{x}, \\sigma_{\\mathit{I}}, \\sigma_{\\mathit{D}})) - \\alpha \\operatorname{trace}^2(\\mu(\\mathbf{x}, \\sigma_{\\mathit{I}}, \\sigma_{\\mathit{D}}))\n" }, { "math_id": 45, "text": "\\mathbf{x}" }, { "math_id": 46, "text": "k" }, { "math_id": 47, "text": "kth" }, { "math_id": 48, "text": "\\sigma_I^{(k+1)}" }, { "math_id": 49, "text": "1.4" }, { "math_id": 50, "text": " t \\in [0.7, \\dots, 1.4] " }, { "math_id": 51, "text": "\\sigma_I^{(k+1)} = t \\sigma_I^k" }, { "math_id": 52, "text": "\n|\\operatorname{LoG}(\\mathbf{x}, \\sigma_I)| = \\sigma_I^2 \\left|L_{xx}(\\mathbf{x}, \\sigma_I) + L_{yy}(\\mathbf{x},\\sigma_I)\\right|\n" }, { "math_id": 53, "text": "L_{xx}" }, 
{ "math_id": 54, "text": "L_{yy}" }, { "math_id": 55, "text": "\\sigma_I^2" }, { "math_id": 56, "text": "\\mathbf{x}^{(k+1)}" }, { "math_id": 57, "text": "\\sigma_I^{(k+1)} == \\sigma_I^{(k)}" }, { "math_id": 58, "text": "\\mathbf{x}^{(k+1)} == \\mathbf{x}^{(k)}" }, { "math_id": 59, "text": "k+1" }, { "math_id": 60, "text": "\\mathbf{\\mu}" }, { "math_id": 61, "text": "\n\\mu (\\mathbf{x}, \\Sigma_I, \\Sigma_D) = \\det(\\Sigma_D) g(\\Sigma_I) * ( \\nabla L(\\mathbf{x}, \\Sigma_D)\\nabla L(\\mathbf{x}, \\Sigma_D)^T)\n" }, { "math_id": 62, "text": "\\Sigma_I" }, { "math_id": 63, "text": "\\Sigma_D" }, { "math_id": 64, "text": "\\mu" }, { "math_id": 65, "text": "\\mathbf{x}_L" }, { "math_id": 66, "text": "\\mathbf{x}_R = A\\mathbf{x}_L" }, { "math_id": 67, "text": "\\mathbf{x}_R" }, { "math_id": 68, "text": "R^2" }, { "math_id": 69, "text": "\\begin{align}\n\\mu(\\mathbf{x}_L,\\Sigma_{I,L}, \\Sigma_{D,L}) & {} = A^T \\mu (\\mathbf{x}_R, \\Sigma_{I,R}, \\Sigma_{D,R}) A \\\\\nM_L & {} = \\mu(\\mathbf{x}_L,\\Sigma_{I,L}, \\Sigma_{D,L}) \\\\\nM_R & {} = \\mu (\\mathbf{x}_R, \\Sigma_{I,R}, \\Sigma_{D,R}) \\\\\nM_L & {} = A^T M_R A \\\\\n\\Sigma_{I,R} & {} = A \\Sigma_{I,L} A^T\\text{ and }\\Sigma_{D,R} = A \\Sigma_{D,L} A^T\n\\end{align}\n" }, { "math_id": 70, "text": "\\Sigma_{I,b}" }, { "math_id": 71, "text": "\\Sigma_{D,b}" }, { "math_id": 72, "text": "b" }, { "math_id": 73, "text": "\\begin{align}\n\\Sigma_{I,L} = \\sigma_I M_L^{-1} \\\\\n\\Sigma_{D,L} = \\sigma_D M_L^{-1}\n\\end{align}\n" }, { "math_id": 74, "text": "\\begin{align}\n\\Sigma_{I,R} = \\sigma_I M_R^{-1} \\\\\n\\Sigma_{D,R} = \\sigma_D M_R^{-1}\n\\end{align}\n" }, { "math_id": 75, "text": "M^{\\tfrac{1}{2}}" }, { "math_id": 76, "text": "R" }, { "math_id": 77, "text": "x_R^'" }, { "math_id": 78, "text": "x_L^'" }, { "math_id": 79, "text": "\\begin{align}\nA = M_R^{-\\tfrac{1}{2}} R M_L^{\\tfrac{1}{2}} \\\\\nx_R^' = M_R^{\\tfrac{1}{2}}x_R \\\\\nx_L^' = M_L^{\\tfrac{1}{2}}x_L \\\\\nx_L^' = R x_R^'\\\\\n\\end{align}\n" }, { "math_id": 80, "text": "M = \\mu(\\mathbf{x}, \\Sigma_I, \\Sigma_D)" }, { "math_id": 81, "text": "\n\\mathcal{Q} = \\frac{\\lambda_{\\min}(M)}{\\lambda_{\\max}(M)}\n" }, { "math_id": 82, "text": "\\lambda" }, { "math_id": 83, "text": "[0 \\dots 1]" }, { "math_id": 84, "text": "1" }, { "math_id": 85, "text": "U" }, { "math_id": 86, "text": "U^{(k)}" }, { "math_id": 87, "text": "\\mathbf{x}^{(k)}" }, { "math_id": 88, "text": "\\sigma_I^{(k)}" }, { "math_id": 89, "text": "\\sigma_D^{(k)}" }, { "math_id": 90, "text": "\\mathbf{x}_w^{(k)}" }, { "math_id": 91, "text": "U^{(k)}\\mathbf{x}_w^{(k)} = \\mathbf{x^{(k)}}" }, { "math_id": 92, "text": "\\mathcal{O}(n)" }, { "math_id": 93, "text": "\\mathcal{O}((m+k)p)" }, { "math_id": 94, "text": "\\sigma_D = s \\sigma_I,\\; s = constant" }, { "math_id": 95, "text": "\\tfrac{\\lambda_{\\min}(U)}{\\lambda_{\\max}(U)}" }, { "math_id": 96, "text": " \\tfrac{\\lambda_{\\max}(U)}{\\lambda_{\\min}(U)} > t_\\text{diverge} " }, { "math_id": 97, "text": "t_{diverge} = 6" }, { "math_id": 98, "text": "\nR_\\text{score} = \\frac{C(A,B)}{\\min(n_A, n_B)}\n" }, { "math_id": 99, "text": "C(A,B)" }, { "math_id": 100, "text": "B" }, { "math_id": 101, "text": "n_B" }, { "math_id": 102, "text": "n_A" }, { "math_id": 103, "text": "x_A = H \\cdot x_B " }, { "math_id": 104, "text": "H" }, { "math_id": 105, "text": "\\mathbf{x_a}" }, { "math_id": 106, "text": "\\mathbf{x_b}" } ]
https://en.wikipedia.org/wiki?curid=14664078
14664110
Hessian affine region detector
The Hessian affine region detector is a feature detector used in the fields of computer vision and image analysis. Like other feature detectors, the Hessian affine detector is typically used as a preprocessing step to algorithms that rely on identifiable, characteristic interest points. The Hessian affine detector is part of the subclass of feature detectors known as "affine-invariant" detectors: Harris affine region detector, Hessian affine regions, maximally stable extremal regions, Kadir–Brady saliency detector, edge-based regions (EBR) and intensity-extrema-based (IBR) regions. Algorithm description. The Hessian affine detector algorithm is almost identical to the Harris affine region detector. In fact, both algorithms were derived by Krystian Mikolajczyk and Cordelia Schmid in 2002, based on earlier work; see the references for a more general overview. How does the Hessian affine differ? The Harris affine detector relies on interest points detected at multiple scales using the Harris corner measure on the second-moment matrix. The Hessian affine also uses a multiple scale iterative algorithm to spatially localize and select scale and affine invariant points. However, at each individual scale, the Hessian affine detector chooses interest points based on the Hessian matrix at that point: formula_0 where formula_1 is the second partial derivative in the formula_2 direction and formula_3 is the mixed partial second derivative in the formula_2 and formula_4 directions. It's important to note that the derivatives are computed in the current iteration scale and thus are derivatives of an image smoothed by a Gaussian kernel: formula_5. As discussed in the Harris affine region detector article, the derivatives must be scaled appropriately by a factor related to the Gaussian kernel: formula_6. At each scale, interest points are those points that simultaneously are local extrema of both the determinant and trace of the Hessian matrix. The trace of the Hessian matrix is identical to the Laplacian of Gaussians (LoG): formula_7 As discussed in Mikolajczyk et al. (2005), by choosing points that maximize the determinant of the Hessian, this measure penalizes longer structures that have small second derivatives (signal changes) in a single direction. This type of measure is very similar to the measures used in the blob detection schemes proposed by Lindeberg (1998), where either the Laplacian or the determinant of the Hessian was used in blob detection methods with automatic scale selection. Like the Harris affine algorithm, these interest points based on the Hessian matrix are also spatially localized using an iterative search based on the Laplacian of Gaussians. Predictably, these interest points are called Hessian–Laplace interest points. Furthermore, using these initially detected points, the Hessian affine detector uses an iterative shape adaptation algorithm to compute the local affine transformation for each interest point. The implementation of this algorithm is almost identical to that of the Harris affine detector; however, the above-mentioned Hessian measure replaces all instances of the Harris corner measure. Robustness to affine and other transformations. Mikolajczyk et al. (2005) have done a thorough analysis of several state-of-the-art affine region detectors: Harris affine, Hessian affine, MSER, IBR & EBR and salient detectors. Mikolajczyk et al. analyzed both structured images and textured images in their evaluation.
Linux binaries of the detectors and their test images are freely available at their webpage. A brief summary of the results of Mikolajczyk et al. (2005) follows; see "A comparison of affine region detectors" for a more quantitative analysis. Overall, the Hessian affine detector performs second best to MSER. Like the Harris affine detector, Hessian affine interest regions tend to be more numerous and smaller than other detectors. For a single image, the Hessian affine detector typically identifies more reliable regions than the Harris affine detector. The performance changes depending on the type of scene being analyzed. The Hessian affine detector responds well to textured scenes in which there are a lot of corner-like parts. However, for some structured scenes, like buildings, the Hessian affine detector performs very well. This is complementary to MSER, which tends to do better with well-structured (segmentable) scenes.
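For illustration, a minimal per-scale computation of the Hessian measures can be sketched as below. This is not the detectors' published code: it assumes a grayscale floating-point image, NumPy/SciPy, and arbitrary illustrative thresholds; it follows the scale normalization as written in the article, and it leaves out the Laplacian-based scale selection and the affine adaptation step shared with the Harris affine detector.

```python
# Illustrative computation of the scale-normalized Hessian determinant and
# trace (Laplacian of Gaussian) used to pick Hessian-Laplace candidates.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def hessian_measures(img, sigma_i):
    # Second derivatives of the image smoothed at scale sigma_i.
    Lxx = gaussian_filter(img, sigma_i, order=(0, 2))   # d2/dx2 (columns)
    Lyy = gaussian_filter(img, sigma_i, order=(2, 0))   # d2/dy2 (rows)
    Lxy = gaussian_filter(img, sigma_i, order=(1, 1))   # mixed derivative
    # Scale normalization following the DET and TR expressions in the article.
    det = sigma_i ** 2 * (Lxx * Lyy - Lxy ** 2)
    trace = sigma_i * (Lxx + Lyy)                        # trace = LoG
    return det, trace

def hessian_points(img, sigma_i, det_thresh=1e-4, trace_thresh=1e-3):
    """Candidate points: simultaneous local extrema of determinant and trace."""
    det, trace = hessian_measures(img, sigma_i)
    det_max = (det == maximum_filter(det, size=3)) & (det > det_thresh)
    tr_abs = np.abs(trace)   # extrema of either sign, simplified via |trace|
    tr_max = (tr_abs == maximum_filter(tr_abs, size=3)) & (tr_abs > trace_thresh)
    return np.argwhere(det_max & tr_max)
```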
[ { "math_id": 0, "text": "\nH(\\mathbf{x}) = \n\\begin{bmatrix}\nL_{xx}(\\mathbf{x}) & L_{xy}(\\mathbf{x})\\\\\nL_{yx}(\\mathbf{x}) & L_{yy}(\\mathbf{x})\\\\\n\\end{bmatrix}\n" }, { "math_id": 1, "text": "L_{aa}(\\mathbf{x})" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "L_{ab}(\\mathbf{x})" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "L(\\mathbf{x}) = g(\\sigma_I) \\otimes I(\\mathbf{x}) " }, { "math_id": 6, "text": "\\sigma_I^2" }, { "math_id": 7, "text": "\\begin{align}\nDET = \\sigma_I^2 ( L_{xx}L_{yy}(\\mathbf{x}) - L_{xy}^2(\\mathbf{x})) \\\\\nTR = \\sigma_I (L_{xx} + L_{yy}) \n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=14664110
14668195
Gel point
Abrupt change in the viscosity of a solution of polymerizable materials In polymer chemistry, the gel point is an abrupt change in the viscosity of a solution containing polymerizable components. At the gel point, a solution undergoes gelation, as reflected in a loss in fluidity. After the monomer/polymer solution has passed the gel point, internal stress builds up in the gel phase, which can lead to volume shrinkage. Gelation is characteristic of polymerizations that include crosslinkers that can form 2- or 3-dimensional networks. For example, the condensation of a dicarboxylic acid and a triol will give rise to a gel, whereas the same dicarboxylic acid and a diol will not. The gel is often a small percentage of the mixture, even though it greatly influences the properties of the bulk. Mathematical definition. An infinite polymer network appears at the gel point. Assuming that it is possible to measure the extent of reaction, formula_0, defined as the fraction of monomers that appear in cross-links, the gel point can be determined. The critical extent of reaction formula_1 for the gel point to be formed is given by: formula_2 For example, a polymer with N≈200 is able to reach the gel point with only 0.5% of monomers reacting. This shows the ease with which polymers are able to form infinite networks. The critical extent of reaction for gelation can be determined as a function of the properties of the monomer mixture, formula_3, formula_0, and formula_4: formula_5 References. <templatestyles src="Reflist/styles.css" />
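The two relations above are straightforward to evaluate numerically. The short sketch below (plain Python) reproduces the worked example with N ≈ 200 and, purely for illustration, evaluates the mixture expression for an assumed stoichiometric ratio r, branch-unit fraction p and functionality f; those input values are placeholders rather than data from the article.

```python
import math

def gel_point_from_chain_length(N):
    """Critical extent of reaction p_c = 1/(N - 1), approximately 1/N."""
    return 1.0 / (N - 1)

def gel_point_from_mixture(r, p, f):
    """Critical extent of reaction p_c = 1 / sqrt(r + r*p*(f - 2))."""
    return 1.0 / math.sqrt(r + r * p * (f - 2))

# Worked example from the text: N ~ 200 gives p_c ~ 0.5 %.
print(f"p_c for N = 200: {gel_point_from_chain_length(200):.4f}")            # ~0.0050

# Illustrative (assumed) mixture values: r = 1, p = 1, f = 3 (e.g. a triol).
print(f"p_c for r=1, p=1, f=3: {gel_point_from_mixture(1.0, 1.0, 3):.3f}")   # ~0.707
```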
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "p_c" }, { "math_id": 2, "text": "p_c = \\frac{1}{N-1} \\approx \\frac{1}{N} " }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "p_c = \\frac{1}{\\sqrt{r+rp(f-2)}} " } ]
https://en.wikipedia.org/wiki?curid=14668195
14668760
Alternating-direction implicit method
In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control, and can be formulated to construct solutions in a memory-efficient, factored form. It is also used to numerically solve parabolic and elliptic partial differential equations, and is a classic method used for modeling heat conduction and solving the diffusion equation in two or more dimensions. It is an example of an operator splitting method. ADI for matrix equations. The method. The ADI method is a two step iteration process that alternately updates the column and row spaces of an approximate solution to formula_0. One ADI iteration consists of the following steps:1. Solve for formula_1, where formula_2 2. Solve for formula_3, where formula_4. The numbers formula_5 are called shift parameters, and convergence depends strongly on the choice of these parameters. To perform formula_6 iterations of ADI, an initial guess formula_7 is required, as well as formula_6 shift parameters, formula_8. When to use ADI. If formula_9 and formula_10, then formula_11 can be solved directly in formula_12 using the Bartels-Stewart method. It is therefore only beneficial to use ADI when matrix-vector multiplication and linear solves involving formula_13 and formula_14 can be applied cheaply. The equation formula_15 has a unique solution if and only if formula_16, where formula_17 is the spectrum of formula_18. However, the ADI method performs especially well when formula_19 and formula_20 are well-separated, and formula_13 and formula_14 are normal matrices. These assumptions are met, for example, by the Lyapunov equation formula_21 when formula_13 is positive definite. Under these assumptions, near-optimal shift parameters are known for several choices of formula_13 and formula_14. Additionally, a priori error bounds can be computed, thereby eliminating the need to monitor the residual error in implementation. The ADI method can still be applied when the above assumptions are not met. The use of suboptimal shift parameters may adversely affect convergence, and convergence is also affected by the non-normality of formula_13 or formula_14 (sometimes advantageously). Krylov subspace methods, such as the Rational Krylov Subspace Method, are observed to typically converge more rapidly than ADI in this setting, and this has led to the development of hybrid ADI-projection methods. Shift-parameter selection and the ADI error equation. The problem of finding good shift parameters is nontrivial. This problem can be understood by examining the ADI error equation. After formula_6 iterations, the error is given by formula_22 Choosing formula_23 results in the following bound on the relative error: formula_24 where formula_25 is the operator norm. The ideal set of shift parameters formula_26 defines a rational function formula_27 that minimizes the quantity formula_28. If formula_13 and formula_14 are normal matrices and have eigendecompositions formula_29 and formula_30, then formula_31. Near-optimal shift parameters. Near-optimal shift parameters are known in certain cases, such as when formula_32 and formula_33, where formula_34 and formula_35 are disjoint intervals on the real line. The Lyapunov equation formula_21, for example, satisfies these assumptions when formula_13 is positive definite. 
In this case, the shift parameters can be expressed in closed form using elliptic integrals, and can easily be computed numerically. More generally, if closed, disjoint sets formula_36 and formula_37, where formula_38 and formula_39, are known, the optimal shift parameter selection problem is approximately solved by finding an extremal rational function that attains the value formula_40 where the infimum is taken over all rational functions of degree formula_41. This approximation problem is related to several results in potential theory, and was solved by Zolotarev in 1877 for formula_36 = [a, b] and formula_42 The solution is also known when formula_43 and formula_44 are disjoint disks in the complex plane. Heuristic shift-parameter strategies. When less is known about formula_19 and formula_20, or when formula_13 or formula_14 are non-normal matrices, it may not be possible to find near-optimal shift parameters. In this setting, a variety of strategies for generating good shift parameters can be used. These include strategies based on asymptotic results in potential theory, using the Ritz values of the matrices formula_13, formula_45, formula_14, and formula_46 to formulate a greedy approach, and cyclic methods, where the same small collection of shift parameters are reused until a convergence tolerance is met. When the same shift parameter is used at every iteration, ADI is equivalent to an algorithm called Smith's method. Factored ADI. In many applications, formula_13 and formula_14 are very large, sparse matrices, and formula_47 can be factored as formula_48, where formula_49, with formula_50. In such a setting, it may not be feasible to store the potentially dense matrix formula_51 explicitly. A variant of ADI, called factored ADI, can be used to compute formula_52, where formula_53. The effectiveness of factored ADI depends on whether formula_51 is well-approximated by a low rank matrix. This is known to be true under various assumptions about formula_13 and formula_14. ADI for parabolic equations. Historically, the ADI method was developed to solve the 2D diffusion equation on a square domain using finite differences. Unlike ADI for matrix equations, ADI for parabolic equations does not require the selection of shift parameters, since the shift appearing in each iteration is determined by parameters such as the timestep, diffusion coefficient, and grid spacing. The connection to ADI on matrix equations can be observed when one considers the action of the ADI iteration on the system at steady state. Example: 2D diffusion equation. The traditional method for solving the heat conduction equation numerically is the Crank–Nicolson method. This method results in a very complicated set of equations in multiple dimensions, which are costly to solve. The advantage of the ADI method is that the equations that have to be solved in each step have a simpler structure and can be solved efficiently with the tridiagonal matrix algorithm. Consider the linear diffusion equation in two dimensions, formula_54 The implicit Crank–Nicolson method produces the following finite difference equation: formula_55 where: formula_56 and formula_57 is the central second difference operator for the "p"-th coordinate formula_58 with formula_59 or formula_60 for formula_61 or formula_62 respectively (and formula_63 a shorthand for lattice points formula_64). After performing a stability analysis, it can be shown that this method will be stable for any formula_65. 
A disadvantage of the Crank–Nicolson method is that the matrix in the above equation is banded with a bandwidth that is generally quite large. This makes direct solution of the system of linear equations quite costly (although efficient approximate solutions exist, for example use of the conjugate gradient method preconditioned with incomplete Cholesky factorization). The idea behind the ADI method is to split the finite difference equations into two, one with the "x"-derivative taken implicitly and the next with the "y"-derivative taken implicitly, formula_66 formula_67 The system of equations involved is symmetric and tridiagonal (banded with bandwidth 3), and is typically solved using the tridiagonal matrix algorithm. It can be shown that this method is unconditionally stable and second order in time and space. There are more refined ADI methods such as the methods of Douglas, or the f-factor method, which can be used for three or more dimensions. Generalizations. The usage of the ADI method as an operator splitting scheme can be generalized. That is, we may consider general evolution equations formula_68 where formula_69 and formula_70 are (possibly nonlinear) operators defined on a Banach space. In the diffusion example above we have formula_71 and formula_72. Fundamental ADI (FADI). Simplification of ADI to FADI. It is possible to simplify the conventional ADI method into the fundamental ADI (FADI) method, which has similar operators only on the left-hand sides while being operator-free on the right-hand sides. This may be regarded as the fundamental (basic) scheme of the ADI method, with no operators left (to be reduced) on the right-hand sides, unlike most traditional implicit methods, which usually have operators on both sides of their equations. The FADI method leads to simpler, more concise and efficient update equations without degrading the accuracy of the conventional ADI method. Relations to other implicit methods. Many classical implicit methods by Peaceman-Rachford, Douglas-Gunn, D'Yakonov, Beam-Warming, Crank-Nicolson, etc., may be simplified to fundamental implicit schemes with operator-free right-hand sides. In their fundamental forms, the FADI method of second-order temporal accuracy can be related closely to the fundamental locally one-dimensional (FLOD) method, which can be upgraded to second-order temporal accuracy, such as for three-dimensional Maxwell's equations in computational electromagnetics. For two- and three-dimensional heat conduction and diffusion equations, both FADI and FLOD methods may be implemented in a simpler, more efficient and stable manner than their conventional counterparts. References. <templatestyles src="Reflist/styles.css" />
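The two-step update quoted in "The method" section above maps directly onto dense linear algebra. The sketch below (NumPy) is a toy illustration under stated assumptions rather than a production solver: the shift parameters are taken as given (choosing them well is the hard part, as discussed above), the example problem and its log-spaced real shifts are arbitrary, and dense solves are used where a factored, low-rank implementation would be preferred for large sparse problems.

```python
# Toy dense ADI iteration for the Sylvester equation A X - X B = C.
# Shift parameters (alpha_j, beta_j) are assumed to be supplied by the caller.
import numpy as np

def adi_sylvester(A, B, C, shifts, X0=None):
    m, n = A.shape[0], B.shape[0]
    X = np.zeros((m, n)) if X0 is None else X0.copy()
    Im, In = np.eye(m), np.eye(n)
    for alpha, beta in shifts:
        # Step 1: (A - beta I) X_half = X (B - beta I) + C
        X_half = np.linalg.solve(A - beta * Im, X @ (B - beta * In) + C)
        # Step 2: X_new (B - alpha I) = (A - alpha I) X_half - C
        rhs = (A - alpha * Im) @ X_half - C
        X = np.linalg.solve((B - alpha * In).T, rhs.T).T
    return X

# Small Lyapunov-type example, A X - X B = C with B = -A^T and A nearly
# diagonal with positive spectrum; crude log-spaced shifts (illustrative only):
# alpha_j near the spectrum of A, beta_j near the spectrum of B.
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 100.0, 20)) + 0.01 * rng.standard_normal((20, 20))
B = -A.T
C = rng.standard_normal((20, 1)) @ rng.standard_normal((1, 20))   # rank-1 right-hand side
shifts = [(s, -s) for s in np.geomspace(1.0, 100.0, 8)]
X = adi_sylvester(A, B, C, shifts)
print(np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))      # relative residual
```

For large sparse problems with low-rank right-hand sides, the factored ADI variant discussed above would store the iterate in low-rank form instead of as a dense matrix.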
[ { "math_id": 0, "text": "AX - XB = C" }, { "math_id": 1, "text": "X^{(j + 1/2)}" }, { "math_id": 2, "text": "\\left( A - \\beta_{j +1} I\\right) X^{(j+1/2)} = X^{(j)}\\left( B - \\beta_{j + 1} I \\right) + C." }, { "math_id": 3, "text": " X^{(j + 1)}" }, { "math_id": 4, "text": " X^{(j+1)}\\left( B - \\alpha_{j + 1} I \\right) = \\left( A - \\alpha_{j+1} I\\right) X^{(j+1/2)} - C" }, { "math_id": 5, "text": "(\\alpha_{j+1}, \\beta_{j+1})" }, { "math_id": 6, "text": "K" }, { "math_id": 7, "text": "X^{(0)}" }, { "math_id": 8, "text": "\\{ (\\alpha_{j}, \\beta_{j})\\}_{j = 1}^{K}" }, { "math_id": 9, "text": "A \\in \\mathbb{C}^{m \\times m}" }, { "math_id": 10, "text": "B \\in \\mathbb{C}^{n \\times n}" }, { "math_id": 11, "text": " AX - XB = C" }, { "math_id": 12, "text": " \\mathcal{O}(m^3 + n^3)" }, { "math_id": 13, "text": "A" }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": " AX-XB=C" }, { "math_id": 16, "text": " \\sigma(A) \\cap \\sigma(B) = \\emptyset" }, { "math_id": 17, "text": " \\sigma(M) " }, { "math_id": 18, "text": "M" }, { "math_id": 19, "text": "\\sigma(A)" }, { "math_id": 20, "text": "\\sigma(B)" }, { "math_id": 21, "text": "AX + XA^* = C" }, { "math_id": 22, "text": " X - X^{(K)} = \\prod_{j = 1}^K \\frac{(A - \\alpha_j I)}{(A - \\beta_j I)} \\left ( X - X^{(0)} \\right ) \\prod_{j = 1}^K \\frac{(B - \\beta_j I)}{(B - \\alpha_j I)}." }, { "math_id": 23, "text": "X^{(0)} = 0" }, { "math_id": 24, "text": " \\frac{\\left \\|X - X^{(K)} \\right \\|_2}{\\|X\\|_2} \\leq \\| r_K(A) \\|_2 \\| r_K(B)^{-1}\\|_2, \\quad r_K(M) = \\prod_{j = 1}^K \\frac{(M - \\alpha_j I)}{(M - \\beta_j I)}. " }, { "math_id": 25, "text": "\\| \\cdot \\|_2" }, { "math_id": 26, "text": " \\{ (\\alpha_j, \\beta_j)\\}_{j = 1}^K " }, { "math_id": 27, "text": " r_K " }, { "math_id": 28, "text": " \\| r_K(A) \\|_2 \\| r_K(B)^{-1}\\|_2 " }, { "math_id": 29, "text": "A = V_A\\Lambda_AV_A^*" }, { "math_id": 30, "text": "B = V_B\\Lambda_BV_B^*" }, { "math_id": 31, "text": " \\| r_K(A) \\|_2 \\| r_K(B)^{-1}\\|_2 = \\| r_K(\\Lambda_A) \\|_2 \\| r_K(\\Lambda_B)^{-1}\\|_2 " }, { "math_id": 32, "text": "\\Lambda_A \\subset [a, b]" }, { "math_id": 33, "text": "\\Lambda_B \\subset [c, d]" }, { "math_id": 34, "text": "[a, b]" }, { "math_id": 35, "text": "[c, d]" }, { "math_id": 36, "text": "E" }, { "math_id": 37, "text": "F" }, { "math_id": 38, "text": "\\Lambda_A \\subset E" }, { "math_id": 39, "text": "\\Lambda_B \\subset F" }, { "math_id": 40, "text": "\nZ_K(E, F) : = \\inf_{r} \\frac{ \\sup_{z \\in E} |r(z)| }{ \\inf_{z \\in F} |r(z)| },\n" }, { "math_id": 41, "text": "(K, K)" }, { "math_id": 42, "text": " F=-E." 
}, { "math_id": 43, "text": " E" }, { "math_id": 44, "text": " F" }, { "math_id": 45, "text": "A^{-1}" }, { "math_id": 46, "text": "B^{-1}" }, { "math_id": 47, "text": "C" }, { "math_id": 48, "text": "C = C_1C_2^*" }, { "math_id": 49, "text": "C_1 \\in \\mathbb{C}^{m \\times r}, C_2 \\in \\mathbb{C}^{n \\times r}" }, { "math_id": 50, "text": "r = 1, 2" }, { "math_id": 51, "text": "X" }, { "math_id": 52, "text": "ZY^*" }, { "math_id": 53, "text": "X \\approx ZY^*" }, { "math_id": 54, "text": "{\\partial u\\over \\partial t} =\n \\left({\\partial^2 u\\over \\partial x^2 } +\n{\\partial^2 u\\over \\partial y^2 }\n\\right)\n = ( u_{xx} + u_{yy} )\n" }, { "math_id": 55, "text": "{u_{ij}^{n+1}-u_{ij}^n\\over \\Delta t} =\n{1 \\over 2(\\Delta x)^2}\\left(\\delta_x^2+\\delta_y^2\\right)\n\\left(u_{ij}^{n+1}+u_{ij}^n\\right)" }, { "math_id": 56, "text": "\\Delta x = \\Delta y" }, { "math_id": 57, "text": "\\delta_p^2" }, { "math_id": 58, "text": "\\delta_p^2 u_{ij}=u_{ij+e_p}-2u_{ij}+u_{ij-e_p}" }, { "math_id": 59, "text": "e_p=(1,0)" }, { "math_id": 60, "text": "(0,1)" }, { "math_id": 61, "text": "p=x" }, { "math_id": 62, "text": "y" }, { "math_id": 63, "text": "ij" }, { "math_id": 64, "text": "(i,j)" }, { "math_id": 65, "text": "\\Delta t" }, { "math_id": 66, "text": "{u_{ij}^{n+1/2}-u_{ij}^n\\over \\Delta t/2} =\n{\\left(\\delta_x^2 u_{ij}^{n+1/2}+\\delta_y^2 u_{ij}^{n}\\right)\\over \\Delta x^2}" }, { "math_id": 67, "text": "{u_{ij}^{n+1}-u_{ij}^{n+1/2}\\over \\Delta t/2} =\n{\\left(\\delta_x^2 u_{ij}^{n+1/2}+\\delta_y^2 u_{ij}^{n+1}\\right)\\over \\Delta y^2}" }, { "math_id": 68, "text": " \\dot u = F_1 u + F_2 u, " }, { "math_id": 69, "text": " F_1 " }, { "math_id": 70, "text": " F_2 " }, { "math_id": 71, "text": " F_1 = {\\partial^2 \\over \\partial x^2} " }, { "math_id": 72, "text": " F_2 = {\\partial^2 \\over \\partial y^2} " } ]
https://en.wikipedia.org/wiki?curid=14668760
146689
Earth radius
Distance from the center of Earth to a point on or near its surface Earth radius (denoted as "R"🜨 or "R"E) is the distance from the center of Earth to a point on or near its surface. Approximating the figure of Earth by an Earth spheroid (an oblate ellipsoid), the radius ranges from a maximum (equatorial radius, denoted "a") of nearly to a minimum (polar radius, denoted "b") of nearly . A globally averaged value is usually considered to be with a 0.3% variability (±10 km) for the following reasons. The International Union of Geodesy and Geophysics (IUGG) provides three reference values: the "mean radius" ("R"1) of three radii measured at two equator points and a pole; the "authalic radius", which is the radius of a sphere with the same surface area ("R"2); and the "volumetric radius", which is the radius of a sphere having the same volume as the ellipsoid ("R"3). All three values are about . Other ways to define and measure the Earth's radius involve either the spheroid's radius of curvature or the actual topography. A few definitions yield values outside the range between the polar radius and equatorial radius because they account for localized effects. A "nominal Earth radius" (denoted formula_0) is sometimes used as a unit of measurement in astronomy and geophysics, a conversion factor used when expressing planetary properties as multiples or fractions of a constant terrestrial radius; if the choice between equatorial or polar radii is not explicit, the equatorial radius is to be assumed, as recommended by the International Astronomical Union (IAU). Introduction. Earth's rotation, internal density variations, and external tidal forces cause its shape to deviate systematically from a perfect sphere. Local topography increases the variance, resulting in a surface of profound complexity. Our descriptions of Earth's surface must be simpler than reality in order to be tractable. Hence, we create models to approximate characteristics of Earth's surface, generally relying on the simplest model that suits the need. Each of the models in common use involves some notion of the geometric radius. Strictly speaking, spheres are the only solids to have radii, but broader uses of the term "radius" are common in many fields, including those dealing with models of Earth. The following is a partial list of models of Earth's surface, ordered from exact to more approximate: In the case of the geoid and ellipsoids, the fixed distance from any point on the model to the specified center is called "a radius of the Earth" or "the radius of the Earth at that point". It is also common to refer to any "mean radius" of a spherical model as "the radius of the earth". When considering the Earth's real surface, on the other hand, it is uncommon to refer to a "radius", since there is generally no practical need. Rather, elevation above or below sea level is useful. Regardless of the model, any of these "geocentric" radii falls between the polar minimum of about 6,357 km and the equatorial maximum of about 6,378 km (3,950 to 3,963 mi). Hence, the Earth deviates from a perfect sphere by only a third of a percent, which supports the spherical model in most contexts and justifies the term "radius of the Earth". While specific values differ, the concepts in this article generalize to any major planet. Physics of Earth's deformation.
Rotation of a planet causes it to approximate an "oblate ellipsoid/spheroid" with a bulge at the equator and flattening at the North and South Poles, so that the "equatorial radius" a is larger than the "polar radius" b by approximately aq. The "oblateness constant" q is given by formula_3 where ω is the angular frequency, G is the gravitational constant, and M is the mass of the planet. For the Earth ≈ 289, which is close to the measured inverse flattening ≈ 298.257. Additionally, the bulge at the equator shows slow variations. The bulge had been decreasing, but since 1998 the bulge has increased, possibly due to redistribution of ocean mass via currents. The variation in density and crustal thickness causes gravity to vary across the surface and in time, so that the mean sea level differs from the ellipsoid. This difference is the "geoid height", positive above or outside the ellipsoid, negative below or inside. The geoid height variation is under on Earth. The geoid height can change abruptly due to earthquakes (such as the Sumatra-Andaman earthquake) or reduction in ice masses (such as Greenland). Not all deformations originate within the Earth. Gravitational attraction from the Moon or Sun can cause the Earth's surface at a given point to vary by tenths of a meter over a nearly 12-hour period (see Earth tide). Radius and local conditions. Given local and transient influences on surface height, the values defined below are based on a "general purpose" model, refined as globally precisely as possible within of reference ellipsoid height, and to within of mean sea level (neglecting geoid height). Additionally, the radius can be estimated from the curvature of the Earth at a point. Like a torus, the curvature at a point will be greatest (tightest) in one direction (north–south on Earth) and smallest (flattest) perpendicularly (east–west). The corresponding radius of curvature depends on the location and direction of measurement from that point. A consequence is that a distance to the true horizon at the equator is slightly shorter in the north–south direction than in the east–west direction. In summary, local variations in terrain prevent defining a single "precise" radius. One can only adopt an idealized model. Since the estimate by Eratosthenes, many models have been created. Historically, these models were based on regional topography, giving the best reference ellipsoid for the area under survey. As satellite remote sensing and especially the Global Positioning System gained importance, true global models were developed which, while not as accurate for regional work, best approximate the Earth as a whole. Extrema: equatorial and polar radii. The following radii are derived from the World Geodetic System 1984 (WGS-84) reference ellipsoid. It is an idealized surface, and the Earth measurements used to calculate it have an uncertainty of ±2 m in both the equatorial and polar dimensions. Additional discrepancies caused by topographical variation at specific locations can be significant. When identifying the position of an observable location, the use of more precise values for WGS-84 radii may not yield a corresponding improvement in accuracy. The value for the equatorial radius is defined to the nearest 0.1 m in WGS-84. The value for the polar radius in this section has been rounded to the nearest 0.1 m, which is expected to be adequate for most uses. Refer to the WGS-84 ellipsoid if a more precise value for its polar radius is needed. Location-dependent radii. Geocentric radius. 
The "geocentric radius" is the distance from the Earth's center to a point on the spheroid surface at geodetic latitude φ, given by the formula: formula_4 where a and b are, respectively, the equatorial radius and the polar radius. The extrema geocentric radii on the ellipsoid coincide with the equatorial and polar radii. They are vertices of the ellipse and also coincide with minimum and maximum radius of curvature. Radii of curvature. Principal radii of curvature. There are two principal radii of curvature: along the meridional and prime-vertical normal sections. Meridional. In particular, the "Earth's meridional radius of curvature" (in the north–south direction) at φ is: formula_5 where formula_6 is the eccentricity of the earth. This is the radius that Eratosthenes measured in his arc measurement. Prime vertical. If one point had appeared due east of the other, one finds the approximate curvature in the east–west direction. This "Earth's prime-vertical radius of curvature", also called the "Earth's transverse radius of curvature", is defined perpendicular (orthogonal) to M at geodetic latitude φ and is: formula_7 "N" can also be interpreted geometrically as the normal distance from the ellipsoid surface to the polar axis. The radius of a parallel of latitude is given by formula_8. Polar and equatorial radius of curvature. The "Earth's meridional radius of curvature at the equator" equals the meridian's semi-latus rectum: "M"e  = 6,335.439 km The "Earth's prime-vertical radius of curvature at the equator" equals the equatorial radius, "N"e "a". The "Earth's polar radius of curvature" (either meridional or prime-vertical) is: "M"p "N"p  = 6,399.594 km Combined radii of curvature. Azimuthal. The Earth's "azimuthal radius of curvature", along an Earth normal section at an azimuth (measured clockwise from north) α and at latitude φ, is derived from Euler's curvature formula as follows: formula_9 Non-directional. It is possible to combine the principal radii of curvature above in a non-directional manner. The "Earth's Gaussian radius of curvature" at latitude φ is: formula_10 Where "K" is the "Gaussian curvature", formula_11. The "Earth's mean radius of curvature" at latitude φ is: formula_12 Global radii. The Earth can be modeled as a sphere in many ways. This section describes the common ways. The various radii derived here use the notation and dimensions noted above for the Earth as derived from the WGS-84 ellipsoid; namely, "Equatorial radius": a = () "Polar radius": b = () A sphere being a gross approximation of the spheroid, which itself is an approximation of the geoid, units are given here in kilometers rather than the millimeter resolution appropriate for geodesy. Arithmetic mean radius. In geophysics, the International Union of Geodesy and Geophysics (IUGG) defines the "Earth's arithmetic mean radius" (denoted "R"1) to be formula_13 The factor of two accounts for the biaxial symmetry in Earth's spheroid, a specialization of triaxial ellipsoid. For Earth, the arithmetic mean radius is . Authalic radius. "Earth's authalic radius" (meaning "equal area") is the radius of a hypothetical perfect sphere that has the same surface area as the reference ellipsoid. The IUGG denotes the authalic radius as "R"2. A closed-form solution exists for a spheroid: formula_14 where &amp;NoBreak;&amp;NoBreak; is the eccentricity and &amp;NoBreak;&amp;NoBreak; is the surface area of the spheroid. For the Earth, the authalic radius is . 
The authalic radius formula_15 also corresponds to the "radius of (global) mean curvature", obtained by averaging the Gaussian curvature, formula_16, over the surface of the ellipsoid. Using the Gauss–Bonnet theorem, this gives formula_17 Volumetric radius. Another spherical model is defined by the "Earth's volumetric radius", which is the radius of a sphere of volume equal to the ellipsoid. The IUGG denotes the volumetric radius as "R"3. formula_18 For Earth, the volumetric radius equals 6,371.0008 km. Rectifying radius. Another global radius is the "Earth's rectifying radius", giving a sphere with circumference equal to the perimeter of the ellipse described by any polar cross section of the ellipsoid. This requires an elliptic integral to find, given the polar and equatorial radii: formula_19 The rectifying radius is equivalent to the meridional mean, which is defined as the average value of M: formula_20 For integration limits of [0, π/2], the integrals for rectifying radius and mean radius evaluate to the same result, which, for Earth, amounts to 6,367.449 km. The meridional mean is well approximated by the semicubic mean of the two axes, formula_21 which differs from the exact result by less than 1 μm; the mean of the two axes, formula_22 about 6,367.445 km, can also be used. Topographical radii. The mathematical expressions above apply over the surface of the ellipsoid. The cases below consider Earth's topography, above or below a reference ellipsoid. As such, they are "topographical geocentric distances", "Rt", which depend not only on latitude. Topographical global mean. The "topographical mean geocentric distance" averages elevations everywhere, resulting in a value larger than the IUGG mean radius, the authalic radius, or the volumetric radius. This topographical average is 6,371.230 km with an uncertainty of 10 m. Derived quantities: diameter, circumference, arc-length, area, volume. Earth's diameter is simply twice Earth's radius; for example, "equatorial diameter" (2"a") and "polar diameter" (2"b"). For the WGS84 ellipsoid, that's respectively about 12,756.27 km and 12,713.50 km. "Earth's circumference" equals the perimeter length. The "equatorial circumference" is simply the circle perimeter: "Ce"=2"πa", in terms of the equatorial radius, "a". The "polar circumference" equals "Cp"=4"mp", four times the quarter meridian "mp"="aE"("e"), where the polar radius "b" enters via the eccentricity, "e"=(1−"b"2/"a"2)0.5; see Ellipse#Circumference for details. Arc length of more general surface curves, such as meridian arcs and geodesics, can also be derived from Earth's equatorial and polar radii. Likewise for surface area, either based on a map projection or a geodesic polygon. Earth's volume, or that of the reference ellipsoid, is V = (4/3)πa2b. Using the parameters from WGS84 ellipsoid of revolution, a = 6,378.137 km and b = 6,356.752 km, V = 1.08321×10^12 km3. Nominal radii. In astronomy, the International Astronomical Union denotes the "nominal equatorial Earth radius" as formula_1, which is defined to be exactly 6,378.1 km. The "nominal polar Earth radius" is defined exactly as formula_2 = 6,356.8 km. These values correspond to the zero Earth tide convention. Equatorial radius is conventionally used as the nominal value unless the polar radius is explicitly required. The nominal radius serves as a unit of length for astronomy. Published values. This table summarizes the accepted values of the Earth's radius. History. The first published reference to the Earth's size appeared around 350 BC, when Aristotle reported in his book "On the Heavens" that mathematicians had guessed the circumference of the Earth to be 400,000 stadia. 
Scholars have interpreted Aristotle's figure to be anywhere from highly accurate to almost double the true value. The first known scientific measurement and calculation of the circumference of the Earth was performed by Eratosthenes in about 240 BC. Estimates of the error of Eratosthenes's measurement range from 0.5% to 17%. For both Aristotle and Eratosthenes, uncertainty in the accuracy of their estimates is due to modern uncertainty over which stadion length they meant. Around 100 BC, Posidonius of Apamea recomputed Earth's radius, and found it to be close to that by Eratosthenes, but later Strabo incorrectly attributed him a value about 3/4 of the actual size. Claudius Ptolemy around 150 AD gave empirical evidence supporting a spherical Earth, but he accepted the lesser value attributed to Posidonius. His highly influential work, the "Almagest", left no doubt among medieval scholars that Earth is spherical, but they were wrong about its size. By 1490, Christopher Columbus believed that traveling 3,000 miles west from the west coast of the Iberian peninsula would let him reach the eastern coasts of Asia. However, the 1492 enactment of that voyage brought his fleet to the Americas. The Magellan expedition (1519–1522), which was the first circumnavigation of the World, soundly demonstrated the sphericity of the Earth, and affirmed the original measurement of by Eratosthenes. Around 1690, Isaac Newton and Christiaan Huygens argued that Earth was closer to an oblate spheroid than to a sphere. However, around 1730, Jacques Cassini argued for a prolate spheroid instead, due to different interpretations of the Newtonian mechanics involved. To settle the matter, the French Geodesic Mission (1735–1739) measured one degree of latitude at two locations, one near the Arctic Circle and the other near the equator. The expedition found that Newton's conjecture was correct: the Earth is flattened at the poles due to rotation's centrifugal force. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
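As a closing numerical illustration of the global radii and derived quantities defined earlier, the short Python sketch below (illustrative only; it assumes the WGS-84 semi-axes quoted in the article and uses a simple midpoint quadrature rather than the elliptic integral or a geodesy library) checks the volumetric radius, the rectifying radius and the equatorial and polar circumferences:

import math

a, b = 6378.137, 6356.752  # WGS-84 semi-axes, km

# Volumetric radius: sphere with the same volume as the ellipsoid, V = (4/3)*pi*a^2*b
R3 = (a * a * b) ** (1.0 / 3.0)

# Rectifying radius: average of the meridional radius of curvature M(phi) over [0, pi/2],
# evaluated here with a midpoint rule instead of the elliptic integral.
e2 = 1 - (b / a) ** 2
def M(phi):
    return a * (1 - e2) / (1 - e2 * math.sin(phi) ** 2) ** 1.5
n = 100000
h = (math.pi / 2) / n
Mr = sum(M((i + 0.5) * h) for i in range(n)) * h / (math.pi / 2)

# Circumferences
Ce = 2 * math.pi * a    # equatorial circumference
Cp = 2 * math.pi * Mr   # polar circumference, since Cp = 4*m_p and m_p = (pi/2)*Mr

print(round(R3, 4))                 # ≈ 6371.0008 km
print(round(Mr, 4))                 # ≈ 6367.449 km
print(round(Ce, 1), round(Cp, 1))   # ≈ 40075.0 km and ≈ 40008.0 km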
[ { "math_id": 0, "text": "\\mathcal{R}^\\mathrm N_\\mathrm{E}" }, { "math_id": 1, "text": "\\mathcal{R}^\\mathrm N_{e\\mathrm{E}}" }, { "math_id": 2, "text": "\\mathcal{R}^\\mathrm N_{p\\mathrm{E}}" }, { "math_id": 3, "text": "q=\\frac{a^3 \\omega^2}{GM}\\,," }, { "math_id": 4, "text": "R(\\varphi)=\\sqrt{\\frac{(a^2\\cos\\varphi)^2+(b^2\\sin\\varphi)^2}{(a\\cos\\varphi)^2+(b\\sin\\varphi)^2}}," }, { "math_id": 5, "text": "M(\\varphi)=\\frac{(ab)^2}{\\big((a\\cos\\varphi)^2+(b\\sin\\varphi)^2\\big)^\\frac32}\n=\\frac{a(1-e^2)}{(1-e^2\\sin^2\\varphi)^\\frac32}\n=\\frac{1-e^2}{a^2} N(\\varphi)^3\\,." }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "N(\\varphi)=\\frac{a^2}{\\sqrt{(a\\cos\\varphi)^2+(b\\sin\\varphi)^2}}\n=\\frac{a}{\\sqrt{1-e^2\\sin^2\\varphi}}\\,." }, { "math_id": 8, "text": "p=N\\cos(\\varphi)" }, { "math_id": 9, "text": "R_\\mathrm{c}=\\frac{1}{\\dfrac{\\cos^2\\alpha}{M}+\\dfrac{\\sin^2\\alpha}{N}}\\,." }, { "math_id": 10, "text": "R_\\mathrm{a}(\\varphi)= \\frac{1}{\\sqrt{K}} = \\frac{1}{2\\pi}\\int_{0}^{2\\pi}R_\\mathrm{c}(\\alpha)\\,d\\alpha\\,=\\sqrt{MN}=\\frac{a^2b}{(a\\cos\\varphi)^2+(b\\sin\\varphi)^2}\n=\\frac{a\\sqrt{1-e^2}}{1-e^2\\sin^2\\varphi}\\,." }, { "math_id": 11, "text": "K = \\kappa_1\\,\\kappa_2 = \\frac{\\det\\, B}{\\det\\, A}" }, { "math_id": 12, "text": "R_\\mathrm{m}=\\frac{2}{\\dfrac{1}{M}+\\dfrac{1}{N}}\\,\\!" }, { "math_id": 13, "text": "R_1 = \\frac{2a+b}{3}\\,\\!" }, { "math_id": 14, "text": "R_2\n=\\sqrt{\\frac12\\left(a^2+\\frac{b^2}{e}\\ln{\\frac{1+e}{b/a}} \\right) }\n=\\sqrt{\\frac{a^2}2+\\frac{b^2}2\\frac{\\tanh^{-1}e}e} \n=\\sqrt{\\frac{A}{4\\pi}}\\,," }, { "math_id": 15, "text": "R_2" }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": " \\frac{\\int K\\, dA}A = \\frac{4\\pi}A = \\frac1{R_2^2}." }, { "math_id": 18, "text": "R_3=\\sqrt[3]{a^2b}\\,." }, { "math_id": 19, "text": "M_\\mathrm{r}=\\frac{2}{\\pi}\\int_{0}^{\\frac{\\pi}{2}}\\sqrt{{a^2}\\cos^2\\varphi + {b^2} \\sin^2\\varphi}\\,d\\varphi\\,." }, { "math_id": 20, "text": "M_\\mathrm{r}=\\frac{2}{\\pi}\\int_{0}^{\\frac{\\pi}{2}}\\!M(\\varphi)\\,d\\varphi\\,." }, { "math_id": 21, "text": "M_\\mathrm{r}\\approx\\left(\\frac{a^\\frac32+b^\\frac32}{2}\\right)^\\frac23\\,," }, { "math_id": 22, "text": "M_\\mathrm{r}\\approx\\frac{a+b}{2}\\,," }, { "math_id": 23, "text": "\\mathcal{R}^\\mathrm N_{p\\mathrm {J}}" } ]
https://en.wikipedia.org/wiki?curid=146689
14669901
Maximally stable extremal regions
In computer vision, maximally stable extremal regions (MSER) technique is used as a method of blob detection in images. This technique was proposed by et al. to find correspondences between image elements taken from two images with different viewpoints. This method of extracting a comprehensive number of corresponding image elements contributes to the wide-baseline matching, and it has led to better stereo matching and object recognition algorithms. Terms and definitions. Image formula_0 is a mapping formula_1. Extremal regions are well defined on images if: Region formula_6 is a contiguous (aka connected) subset of formula_7. (For each formula_8 there is a sequence formula_9 such as formula_10.) Note that under this definition the region can contain "holes" (for example, a ring-shaped region is connected, but its internal circle is not the part of formula_6). (Outer) region boundary formula_11, which means the boundary formula_12 of formula_13 is the set of pixels adjacent to at least one pixel of formula_6 but not belonging to formula_6. Again, in case of regions with "holes", the region boundary is not obliged to be connected subset of formula_7 (a ring has inner bound and outer bound which do not intersect). Extremal region formula_14 is a region such that either for all formula_15 (maximum intensity region) or for all formula_16 (minimum intensity region). As far as formula_2 is totally ordered, we can reformulate these conditions as formula_17 for maximum intensity region and formula_18 for minimum intensity region, respectively. In this form we can use a notion of a threshold intensity value which separates the region and its boundary. Maximally stable extremal region Let formula_19 an extremal region such as all points on it have an intensity smaller than formula_20. Note formula_21 for all positive formula_22. Extremal region formula_23 is maximally stable if and only if formula_24 has a local minimum at formula_25. (Here formula_26 denotes cardinality). formula_22 is here a parameter of the method. The equation checks for regions that remain stable over a certain number of thresholds. If a region formula_27 is not significantly larger than a region formula_28, region formula_19 is taken as a maximally stable region. The concept more simply can be explained by thresholding. All the pixels below a given threshold are 'black' and all those above or equal are 'white'. Given a source image, if a sequence of thresholded result images formula_29 is generated where each image formula_30 corresponds to an increasing threshold t, first a white image would be seen, then 'black' spots corresponding to local intensity minima will appear then grow larger. A maximally stable extremal region is found when size of one of these black areas is the same (or near the same) than in previous image. These 'black' spots will eventually merge, until the whole image is black. The set of all connected components in the sequence is the set of all extremal regions. In that sense, the concept of MSER is linked to the one of component tree of the image. The component tree indeed provide an easy way for implementing MSER. Extremal regions. "Extremal regions" in this context have two important properties, that the set is closed under... Advantages of MSER. Because the regions are defined exclusively by the intensity function in the region and the outer border, this leads to many key characteristics of the regions which make them useful. 
Over a large range of thresholds, the local binarization is stable in certain regions, and have the properties listed below. Comparison to other region detectors. In Mikolajczyk et al., six region detectors are studied (Harris-affine, Hessian-affine, MSER, edge-based regions, intensity extrema, and salient regions). A summary of MSER performance in comparison to the other five follows. MSER consistently resulted in the highest score through many tests, proving it to be a reliable region detector. Implementation. The original algorithm of Matas et al. is formula_34 in the number formula_35 of pixels. It proceeds by first sorting the pixels by intensity. This would take formula_36 time, using . After sorting, pixels are marked in the image, and the list of growing and merging connected components and their areas is maintained using the union-find algorithm. This would take formula_34 time. In practice these steps are very fast. During this process, the area of each connected component as a function of intensity is stored producing a data structure. A merge of two components is viewed as termination of existence of the smaller component and an insertion of all pixels of the smaller component into the larger one. In the extremal regions, the 'maximally stable' ones are those corresponding to thresholds where the relative area change as a function of relative change of threshold is at a local minimum, i.e. the MSER are the parts of the image where local binarization is stable over a large range of thresholds. The component tree is the set of all connected components of the thresholds of the image, ordered by inclusion. Efficient (quasi-linear whatever the range of the weights) algorithms for computing it do exist. Thus this structure offers an easy way for implementing MSER. More recently, Nister and Stewenius have proposed a truly (if the weight are small integers) worst-case formula_36 method in, which is also much faster in practice. This algorithm is similar to the one of Ph. Salembier et al. Robust wide-baseline algorithm. The purpose of this algorithm is to match MSERs to establish correspondence points between images. First MSER regions are computed on the intensity image (MSER+) and on the inverted image (MSER-). Measurement regions are selected at multiple scales: the size of the actual region, 1.5x, 2x, and 3x scaled convex hull of the region. Matching is accomplished in a robust manner, so it is better to increase the distinctiveness of large regions without being severely affected by clutter or non-planarity of the region's pre-image. A measurement taken from an almost planar patch of the scene with stable invariant description are called a 'good measurement'. Unstable ones or those on non-planar surfaces or discontinuities are called 'corrupted measurements'. The robust similarity is computed: For each formula_37 on region formula_38 regions formula_39 from the other image with the corresponding i-th measurement formula_40 nearest to formula_41 are found and a vote is cast suggesting correspondence of A and each of formula_42. Votes are summed over all measurements, and using probability analysis, 'good measurements' can be picked out as the 'corrupt measurements' will likely spread their votes randomly. By applying to the centers of gravity of the regions, a rough epipolar geometry can be computed. An affine transformation between pairs of potentially corresponding regions is computed, and correspondences define it up to a rotation, which is then determined by epipolar lines. 
The regions are then filtered, and the ones with correlation of their transformed images above a threshold are chosen. is applied again with a more narrow threshold, and the final epipolar geometry is estimated by the eight-point algorithm. This algorithm can be tested here (Epipolar or homography geometry constrained matches): WBS Image Matcher Use in text detection. The MSER algorithm has been used in text detection by Chen by combining MSER with Canny edges. Canny edges are used to help cope with the weakness of MSER to blur. MSER is first applied to the image in question to determine the character regions. To enhance the MSER regions any pixels outside the boundaries formed by Canny edges are removed. The separation of the later provided by the edges greatly increase the usability of MSER in the extraction of blurred text. An alternative use of MSER in text detection is the work by Shi using a graph model. This method again applies MSER to the image to generate preliminary regions. These are then used to construct a graph model based on the position distance and color distance between each MSER, which is treated as a node. Next the nodes are separated into foreground and background using cost functions. One cost function is to relate the distance from the node to the foreground and background. The other penalizes nodes for being significantly different from its neighbor. When these are minimized the graph is then cut to separate the text nodes from the non-text nodes. To enable text detection in a general scene, Neumann uses the MSER algorithm in a variety of projections. In addition to the greyscale intensity projection, he uses the red, blue, and green color channels to detect text regions that are color distinct but not necessarily distinct in greyscale intensity. This method allows for detection of more text than solely using the MSER+ and MSER- functions discussed above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
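To make the stability criterion defined earlier concrete, the following minimal Python sketch (an illustration of the definition only, not the sorted-pixel/union-find algorithm of Matas et al. nor an optimized detector; it assumes NumPy and SciPy are available and uses a toy Δ of 5 grey levels on a synthetic image) sweeps the threshold, labels connected components, and reports the relative area change that the MSER criterion minimizes:

import numpy as np
from scipy import ndimage

def threshold_components(img, t):
    """Label connected components of the pixels darker than t (minimum-intensity regions)."""
    labels, _ = ndimage.label(img < t)
    return labels

def region_stability(img, seed, t, delta=5):
    r"""Stability ratio |Q(t+delta) \ Q(t-delta)| / |Q(t)| for the region containing `seed`."""
    areas = []
    for th in (t - delta, t, t + delta):
        labels = threshold_components(img, th)
        lab = labels[seed]
        areas.append(int(np.count_nonzero(labels == lab)) if lab != 0 else 0)
    small, mid, big = areas
    return (big - small) / mid if mid else float("inf")

# Toy image: a dark blob on a bright background with mild noise.
rng = np.random.default_rng(0)
img = np.full((64, 64), 200.0) + rng.normal(0, 3, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 50  # the blob

# The blob's area barely changes for thresholds well between 50 and 200, so the
# stability ratio stays near its minimum there -> a maximally stable extremal region.
for t in (80, 120, 160):
    print(t, round(region_stability(img, (32, 32), t), 4))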
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "I : D \\subset \\mathbb{Z}^2 \\to S" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "\\le" }, { "math_id": 4, "text": " A \\subset D \\times D" }, { "math_id": 5, "text": "pAq" }, { "math_id": 6, "text": "Q" }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "p,q \\in Q" }, { "math_id": 9, "text": "p, a_1, a_2, .., a_n, q" }, { "math_id": 10, "text": "pAa_1, a_1Aa_2, \\dots, a_{n-1}Aa_{n}, a_nAq" }, { "math_id": 11, "text": "\\partial Q = \\{ q \\in D \\setminus Q: \\exists p \\in Q : qAp \\}" }, { "math_id": 12, "text": "\\partial Q" }, { "math_id": 13, "text": " Q" }, { "math_id": 14, "text": "Q \\subset D" }, { "math_id": 15, "text": " p \\in Q, q \\in \\partial Q : I(p) > I(q)" }, { "math_id": 16, "text": " p \\in Q, q \\in \\partial Q : I(p) < I(q)" }, { "math_id": 17, "text": "\\min(I(p)) > \\max(I(q))" }, { "math_id": 18, "text": "\\max(I(p)) < \\min(I(q))" }, { "math_id": 19, "text": "Q_i" }, { "math_id": 20, "text": "i \\in S" }, { "math_id": 21, "text": "Q_i \\subset Q_{i+\\Delta}" }, { "math_id": 22, "text": "\\Delta \\in S" }, { "math_id": 23, "text": "Q_{i*}" }, { "math_id": 24, "text": "| Q_{i+\\Delta} \\setminus Q_{i-\\Delta} | / |Q_i|" }, { "math_id": 25, "text": "i*" }, { "math_id": 26, "text": "| \\cdot |" }, { "math_id": 27, "text": "Q_{i+\\Delta}" }, { "math_id": 28, "text": "Q_{i-\\Delta}" }, { "math_id": 29, "text": "I_t" }, { "math_id": 30, "text": "t" }, { "math_id": 31, "text": "T : D \\to D" }, { "math_id": 32, "text": "O(n)" }, { "math_id": 33, "text": "n" }, { "math_id": 34, "text": "O(n\\,\\log(\\log(n)))" }, { "math_id": 35, "text": "n\\," }, { "math_id": 36, "text": "O(n)\\," }, { "math_id": 37, "text": "M_A^i" }, { "math_id": 38, "text": "A, k" }, { "math_id": 39, "text": "B_1,\\dots, B_k" }, { "math_id": 40, "text": "M_{B_1}^i ,\\dots, M_{B_k}^i" }, { "math_id": 41, "text": " M_A^i" }, { "math_id": 42, "text": "B_1, \\dots , B_k" } ]
https://en.wikipedia.org/wiki?curid=14669901
14669989
Viola–Jones object detection framework
Machine learning algorithm The Viola–Jones object detection framework is a machine learning object detection framework proposed in 2001 by Paul Viola and Michael Jones. It was motivated primarily by the problem of face detection, although it can be adapted to the detection of other object classes. The algorithm is efficient for its time, able to detect faces in 384 by 288 pixel images at 15 frames per second on a conventional 700 MHz Intel Pentium III. It is also robust, achieving high precision and recall. While it has lower accuracy than more modern methods such as convolutional neural network, its efficiency and compact size (only around 50k parameters, compared to millions of parameters for typical CNN like DeepFace) means it is still used in cases with limited computational power. For example, in the original paper, they reported that this face detector could run on the Compaq iPAQ at 2 fps (this device has a low power StrongARM without floating point hardware). Problem description. Face detection is a binary classification problem combined with a localization problem: given a picture, decide whether it contains faces, and construct bounding boxes for the faces. To make the task more manageable, the Viola–Jones algorithm only detects full view (no occlusion), frontal (no head-turning), upright (no rotation), well-lit, full-sized (occupying most of the frame) faces in fixed-resolution images. The restrictions are not as severe as they appear, as one can normalize the picture to bring it closer to the requirements for Viola-Jones. The "frontal" requirement is non-negotiable, as there is no simple transformation on the image that can turn a face from a side view to a frontal view. However, one can train multiple Viola-Jones classifiers, one for each angle: one for frontal view, one for 3/4 view, one for profile view, a few more for the angles in-between them. Then one can at run time execute all these classifiers in parallel to detect faces at different view angles. The "full-view" requirement is also non-negotiable, and cannot be simply dealt with by training more Viola-Jones classifiers, since there are too many possible ways to occlude a face. Components of the framework. A full presentation of the algorithm is in. Consider an image formula_0 of fixed resolution formula_1. Our task is to make a binary decision: whether it is a photo of a standardized face (frontal, well-lit, etc) or not. Viola–Jones is essentially a boosted feature learning algorithm, trained by running a modified AdaBoost algorithm on Haar feature classifiers to find a sequence of classifiers formula_2. Haar feature classifiers are crude, but allows very fast computation, and the modified AdaBoost constructs a strong classifier out of many weak ones. At run time, a given image formula_3 is tested on formula_4 sequentially. If at any point, formula_5, the algorithm immediately returns "no face detected". If all classifiers return 1, then the algorithm returns "face detected". For this reason, the Viola-Jones classifier is also called "Haar cascade classifier". Haar feature classifiers. Consider a perceptron formula_6 defined by two variables formula_7. It takes in an image formula_0 of fixed resolution, and returns formula_8 A Haar feature classifier is a perceptron formula_6 with a very special kind of formula_9 that makes it extremely cheap to calculate. 
Namely, if we write out the matrix formula_10, we find that it takes only three possible values formula_11, and if we color the matrix with white on formula_12, black on formula_13, and transparent on formula_14, the matrix is in one of the 5 possible patterns shown on the right. Each pattern must also be symmetric to x-reflection and y-reflection (ignoring the color change), so for example, for the horizontal white-black feature, the two rectangles must be of the same width. For the vertical white-black-white feature, the white rectangles must be of the same height, but there is no restriction on the black rectangle's height. Rationale for Haar features. The Haar features used in the Viola-Jones algorithm are a subset of the more general Haar basis functions, which have been used previously in the realm of image-based object detection. While crude compared to alternatives such as steerable filters, Haar features are sufficiently complex to match features of typical human faces. For example: Composition of properties forming matchable facial features: Further, the design of Haar features allows for efficient computation of formula_15 using only constant number of additions and subtractions, regardless of the size of the rectangular features, using the summed-area table. Learning and using a Viola–Jones classifier. Choose a resolution formula_1 for the images to be classified. In the original paper, they recommended formula_16. Learning. Collect a training set, with some containing faces, and others not containing faces. Perform a certain modified AdaBoost training on the set of all Haar feature classifiers of dimension formula_1, until a desired level of precision and recall is reached. The modified AdaBoost algorithm would output a sequence of Haar feature classifiers formula_2. The details of the modified AdaBoost algorithm is detailed below. Using. To use a Viola-Jones classifier with formula_2 on an image formula_3, compute formula_4 sequentially. If at any point, formula_5, the algorithm immediately returns "no face detected". If all classifiers return 1, then the algorithm returns "face detected". Learning algorithm. The speed with which features may be evaluated does not adequately compensate for their number, however. For example, in a standard 24x24 pixel sub-window, there are a total of "M" = 162336 possible features, and it would be prohibitively expensive to evaluate them all when testing an image. Thus, the object detection framework employs a variant of the learning algorithm AdaBoost to both select the best features and to train classifiers that use them. This algorithm constructs a "strong" classifier as a linear combination of weighted simple “weak” classifiers. formula_17 Each weak classifier is a threshold function based on the feature formula_18. formula_19 The threshold value formula_20 and the polarity formula_21 are determined in the training, as well as the coefficients formula_22. Here a simplified version of the learning algorithm is reported: Input: Set of N positive and negative training images with their labels formula_23. If image i is a face formula_24, if not formula_25. Cascade architecture. In cascading, each stage consists of a strong classifier. So all the features are grouped into several stages where each stage has certain number of features. The job of each stage is to determine whether a given sub-window is definitely not a face or may be a face. A given sub-window is immediately discarded as not a face if it fails in any of the stages. 
A simple framework for cascade training is given below: F(0) = 1.0; D(0) = 1.0; i = 0 while F(i) &gt; Ftarget increase i n(i) = 0; F(i)= F(i-1) while F(i) &gt; f × F(i-1) increase n(i) use P and N to train a classifier with n(i) features using AdaBoost Evaluate current cascaded classifier on validation set to determine F(i) and D(i) decrease threshold for the ith classifier (i.e. how many weak classifiers need to accept for strong classifier to accept) until the current cascaded classifier has a detection rate of at least d × D(i-1) (this also affects F(i)) N = ∅ if F(i) &gt; Ftarget then evaluate the current cascaded detector on the set of non-face images and put any false detections into the set N. The cascade architecture has interesting implications for the performance of the individual classifiers. Because the activation of each classifier depends entirely on the behavior of its predecessor, the false positive rate for an entire cascade is: formula_34 Similarly, the detection rate is: formula_35 Thus, to match the false positive rates typically achieved by other detectors, each classifier can get away with having surprisingly poor performance. For example, for a 32-stage cascade to achieve a false positive rate of 10-6, each classifier need only achieve a false positive rate of about 65%. At the same time, however, each classifier needs to be exceptionally capable if it is to achieve adequate detection rates. For example, to achieve a detection rate of about 90%, each classifier in the aforementioned cascade needs to achieve a detection rate of approximately 99.7%. Using Viola–Jones for object tracking. In videos of moving objects, one need not apply object detection to each frame. Instead, one can use tracking algorithms like the KLT algorithm to detect salient features within the detection bounding boxes and track their movement between frames. Not only does this improve tracking speed by removing the need to re-detect objects in each frame, but it improves the robustness as well, as the salient features are more resilient than the Viola-Jones detection framework to rotation and photometric changes.
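As a concrete illustration of the pieces described above — the summed-area table, a rectangle (Haar-like) feature, the thresholded weak classifier, the weighted strong classifier, and the stage-wise cascade with early rejection — here is a minimal NumPy sketch. It is a toy reconstruction for clarity, not the trained Viola–Jones detector or the OpenCV implementation; the feature geometry, thresholds, polarities and weights are made-up placeholders rather than learned values.

import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum over the h-by-w rectangle with top-left corner (r, c), in constant time."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def two_rect_feature(ii, r, c, h, w):
    """Horizontal two-rectangle feature: left half minus right half."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

def weak_classifier(f, theta, s):
    """h_j as in the text: -s if the feature value is below the threshold, +s otherwise."""
    return -s if f < theta else s

def strong_classifier(ii, weak_params):
    """Weighted vote; weak_params is a list of (alpha, feature_args, theta, polarity)."""
    score = sum(alpha * weak_classifier(two_rect_feature(ii, *args), theta, s)
                for alpha, args, theta, s in weak_params)
    return 1 if score >= 0 else 0

def cascade(window, stages):
    """Run each stage in turn; reject as soon as one stage says 'not a face'."""
    ii = integral_image(window.astype(float))
    for stage in stages:
        if strong_classifier(ii, stage) == 0:
            return 0          # early exit: definitely not a face
    return 1                  # every stage accepted

# Made-up toy cascade on a random 24x24 window (placeholder parameters, not trained values).
window = np.random.default_rng(1).integers(0, 256, (24, 24))
toy_stages = [
    [(1.0, (4, 4, 8, 12), 0.0, 1)],                                # 1-feature first stage
    [(0.6, (10, 2, 6, 16), 50.0, 1), (0.4, (2, 6, 10, 8), -30.0, -1)],
]
print(cascade(window, toy_stages))   # 0 or 1 for this random window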
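The false-positive and detection-rate products quoted above can also be checked with a couple of lines of arithmetic; the sketch below simply inverts F = f^K and D = d^K for the 32-stage example in the text.

K = 32                      # number of cascade stages
F_target = 1e-6             # desired overall false positive rate
D_target = 0.90             # desired overall detection rate

f_per_stage = F_target ** (1 / K)   # ≈ 0.65: each stage may be a fairly weak filter
d_per_stage = D_target ** (1 / K)   # ≈ 0.9967: but each stage must miss almost no faces

print(round(f_per_stage, 3), round(d_per_stage, 4))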
[ { "math_id": 0, "text": "I(x, y)" }, { "math_id": 1, "text": "(M, N)" }, { "math_id": 2, "text": "f_1, f_2, ..., f_k" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "f_1(I), f_2(I), ... f_k(I)" }, { "math_id": 5, "text": "f_i(I) = 0" }, { "math_id": 6, "text": "f_{w, b}" }, { "math_id": 7, "text": "w(x, y), b" }, { "math_id": 8, "text": "f_{w, b}(I) = \\begin{cases}\n1, \\quad \\text{if } \\sum_{x, y}w(x, y)I(x, y) + b > 0 \\\\\n0, \\quad \\text{else}\n\\end{cases}" }, { "math_id": 9, "text": "w" }, { "math_id": 10, "text": "w(x, y)" }, { "math_id": 11, "text": "\\{+1, -1, 0\\}" }, { "math_id": 12, "text": "+1" }, { "math_id": 13, "text": "-1" }, { "math_id": 14, "text": "0" }, { "math_id": 15, "text": "f_{w, b}(I)" }, { "math_id": 16, "text": "(M, N) = (24, 24)" }, { "math_id": 17, "text": "h(\\mathbf{x}) = \\sgn\\left(\\sum_{j=1}^M \\alpha_j h_j (\\mathbf{x})\\right)" }, { "math_id": 18, "text": "f_j" }, { "math_id": 19, "text": "h_j(\\mathbf{x}) = \n\\begin{cases}\n-s_j &\\text{if } f_j < \\theta_j\\\\\ns_j &\\text{otherwise}\n\\end{cases}" }, { "math_id": 20, "text": "\\theta_j" }, { "math_id": 21, "text": "s_j \\in \\pm 1" }, { "math_id": 22, "text": "\\alpha_j" }, { "math_id": 23, "text": "{(\\mathbf{x}^i,y^i)}" }, { "math_id": 24, "text": "y^i=1" }, { "math_id": 25, "text": "y^i=-1" }, { "math_id": 26, "text": "w^i_{1}=\\frac{1}{N}" }, { "math_id": 27, "text": "j = 1,...,M" }, { "math_id": 28, "text": "\\theta_j,s_j" }, { "math_id": 29, "text": "\\theta_j,s_j = \\arg\\min_{\\theta,s} \\;\\sum_{i=1}^N w^i_{j} \\varepsilon^i_{j}" }, { "math_id": 30, "text": "\\varepsilon^i_{j} = \n\\begin{cases}\n0 &\\text{if }y^i = h_j(\\mathbf{x}^i,\\theta_j,s_j)\\\\\n1 &\\text{otherwise}\n\\end{cases}\n" }, { "math_id": 31, "text": "h_j" }, { "math_id": 32, "text": "w_{j+1}^i" }, { "math_id": 33, "text": "h(\\mathbf{x}) = \\sgn\\left(\\sum_{j=1}^{M} \\alpha_j h_j(\\mathbf{x})\\right)" }, { "math_id": 34, "text": "F = \\prod_{i=1}^K f_i." }, { "math_id": 35, "text": "D = \\prod_{i=1}^K d_i." } ]
https://en.wikipedia.org/wiki?curid=14669989
14670825
Mark–Houwink equation
The Mark–Houwink equation, also known as the Mark–Houwink–Sakurada equation or the Kuhn–Mark–Houwink–Sakurada equation or the Landau–Kuhn–Mark–Houwink–Sakurada equation or the Mark-Chrystian equation gives a relation between intrinsic viscosity formula_0 and molecular weight formula_1: formula_2 From this equation the molecular weight of a polymer can be determined from data on the intrinsic viscosity and vice versa. The values of the Mark–Houwink parameters, formula_3 and formula_4, depend on the particular polymer-solvent system. For solvents, a value of formula_5 is indicative of a theta solvent. A value of formula_6 is typical for good solvents. For most flexible polymers, formula_7. For semi-flexible polymers, formula_8. For polymers with an absolute rigid rod, such as Tobacco mosaic virus, formula_9. It is named after Herman F. Mark and Roelof Houwink. Applications. In size-exclusion chromatography, such as gel permeation chromatography, the intrinsic viscosity of a polymer is directly related to the elution volume of the polymer. Therefore, by running several monodisperse samples of polymer in a gel permeation chromatograph (GPC), the values of formula_4 and formula_3 can be determined graphically using a line of best fit. Then the molecular weight and intrinsic viscosity relationship is defined. Also, the molecular weights of two different polymers in a particular solvent can be related using the Mark–Houwink equation when the polymer-solvent systems have the same intrinsic viscosity: formula_10 Knowing the Mark–Houwink parameters and the molecular weight of one of the polymers allows one to find the molecular weight of the other polymer using a GPC. The GPC sorts the polymer chains by volume and as intrinsic viscosity is related to the volume of the polymer chain, the GPC data is the same for the two different polymers. For example, if the GPC calibration curve is known for polystyrene in toluene, polyethylene in toluene can be run in a GPC and the molecular weight of polyethylene can be found according to the polystyrene calibration curve via the above equation.
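As a numerical illustration of the two relations above, the short Python sketch below first inverts [η] = K·M^a for M, and then equates the intrinsic viscosities of two polymers to convert a molecular weight between them. The constants used are hypothetical placeholders chosen only to exercise the formulas, not tabulated Mark–Houwink parameters for any real polymer–solvent pair.

def molecular_weight(intrinsic_viscosity, K, a):
    """Invert [eta] = K * M**a for the molecular weight M."""
    return (intrinsic_viscosity / K) ** (1.0 / a)

def convert_molecular_weight(M1, K1, a1, K2, a2):
    """Solve K1*M1**a1 = K2*M2**a2 for M2 (equal intrinsic viscosity, e.g. equal GPC elution volume)."""
    return (K1 * M1 ** a1 / K2) ** (1.0 / a2)

# Hypothetical parameters, for illustration only.
K1, a1 = 1.0e-4, 0.70   # "polymer 1" in some solvent
K2, a2 = 4.0e-4, 0.60   # "polymer 2" in the same solvent

eta = K1 * 200_000 ** a1                                         # [eta] of a 200 kg/mol sample of polymer 1
print(round(molecular_weight(eta, K1, a1)))                      # recovers 200000
print(round(convert_molecular_weight(200_000, K1, a1, K2, a2)))  # equivalent M for polymer 2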
[ { "math_id": 0, "text": "[\\eta]" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "[\\eta]=KM^a" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": "a=0.5" }, { "math_id": 6, "text": "a=0.8" }, { "math_id": 7, "text": "0.5\\leq a\\leq 0.8" }, { "math_id": 8, "text": "a\\ge 0.8" }, { "math_id": 9, "text": "a=2.0" }, { "math_id": 10, "text": "K_1M_1^{a_1}=K_2M_2^{a_2}" } ]
https://en.wikipedia.org/wiki?curid=14670825
14670996
Time–temperature superposition
The time–temperature superposition principle is a concept in polymer physics and in the physics of glass-forming liquids. This superposition principle is used to determine temperature-dependent mechanical properties of linear viscoelastic materials from known properties at a reference temperature. The elastic moduli of typical amorphous polymers increase with loading rate but decrease when the temperature is increased. Curves of the instantaneous modulus as a function of time do not change shape as the temperature is changed but appear only to shift left or right. This implies that a master curve at a given temperature can be used as the reference to predict curves at various temperatures by applying a shift operation. The time-temperature superposition principle of linear viscoelasticity is based on the above observation. The application of the principle typically involves the following steps: The translation factor is often computed using an empirical relation first established by Malcolm L. Williams, Robert F. Landel and John D. Ferry (also called the Williams-Landel-Ferry or WLF model). An alternative model suggested by Arrhenius is also used. The WLF model is related to macroscopic motion of the bulk material, while the Arrhenius model considers local motion of polymer chains. Some materials, polymers in particular, show a strong dependence of viscoelastic properties on the temperature at which they are measured. If you plot the elastic modulus of a noncrystallizing crosslinked polymer against the temperature at which you measured it, you will get a curve which can be divided up into distinct regions of physical behavior. At very low temperatures, the polymer will behave like a glass and exhibit a high modulus. As you increase the temperature, the polymer will undergo a transition from a hard “glassy” state to a soft “rubbery” state in which the modulus can be several orders of magnitude lower than it was in the glassy state. The transition from glassy to rubbery behavior is continuous and the transition zone is often referred to as the leathery zone. The onset temperature of the transition zone, moving from glassy to rubbery, is known as the glass transition temperature, or Tg. In the 1940s Andrews and Tobolsky showed that there was a simple relationship between temperature and time for the mechanical response of a polymer. Modulus measurements are made by stretching or compressing a sample at a prescribed rate of deformation. For polymers, changing the rate of deformation will cause the curve described above to be shifted along the temperature axis. Increasing the rate of deformation will shift the curve to higher temperatures so that the transition from a glassy to a rubbery state will happen at higher temperatures. It has been shown experimentally that the elastic modulus (E) of a polymer is influenced by the load and the response time. Time–temperature superposition implies that the response time function of the elastic modulus at a certain temperature resembles the shape of the same functions of adjacent temperatures. Curves of E vs. log(response time) at one temperature can be shifted to overlap with adjacent curves, as long as the data sets did not suffer from ageing effects during the test time (see Williams-Landel-Ferry equation). The Deborah number is closely related to the concept of time-temperature superposition. Physical principle. Consider a viscoelastic body that is subjected to dynamic loading. 
If the excitation frequency is low enough the viscous behavior is paramount and all polymer chains have the time to respond to the applied load within a time period. In contrast, at higher frequencies, the chains do not have the time to fully respond and the resulting artificial viscosity results in an increase in the macroscopic modulus. Moreover, at constant frequency, an increase in temperature results in a reduction of the modulus due to an increase in free volume and chain movement. Time–temperature superposition is a procedure that has become important in the field of polymers to observe the dependence of the viscosity of a polymeric fluid upon temperature. Rheology or viscosity can often be a strong indicator of the molecular structure and molecular mobility. Time–temperature superposition avoids the inefficiency of measuring a polymer's behavior over long periods of time at a specified temperature by utilizing the fact that at higher temperatures and shorter times the polymer will behave the same, provided there are no phase transitions. Time-temperature superposition. Consider the relaxation modulus "E" at two temperatures "T" and "T"0 such that "T" &gt; "T"0. At constant strain, the stress relaxes faster at the higher temperature. The principle of time-temperature superposition states that the change in temperature from "T" to "T"0 is equivalent to multiplying the time scale by a constant factor "a"T which is only a function of the two temperatures "T" and "T"0. In other words, formula_0 The quantity "a"T is called the horizontal translation factor or the shift factor and has the properties: formula_1 The superposition principle for complex dynamic moduli ("G"* = "G"′ + i"G"″) at a fixed frequency "ω" is obtained similarly: formula_2 A decrease in temperature increases the time characteristics while frequency characteristics decrease. Relationship between shift factor and intrinsic viscosities. For a polymer in solution or "molten" state the following relationship can be used to determine the shift factor: formula_3 where "η"T0 is the viscosity (non-Newtonian) during continuous flow at temperature "T"0 and "η"T is the viscosity at temperature "T". The time–temperature shift factor can also be described in terms of the activation energy ("E"a). By plotting the shift factor "a"T versus the reciprocal of temperature (in K), the slope of the curve can be interpreted as "E"a/"k", where "k" is the Boltzmann constant = 8.617x10−5 eV/K and the activation energy is expressed in terms of eV. Shift factor using the Williams-Landel-Ferry (WLF) model. The empirical relationship of Williams-Landel-Ferry, combined with the principle of time-temperature superposition, can account for variations in the intrinsic viscosity "η"0 of amorphous polymers as a function of temperature, for temperatures near the glass transition temperature "T"g. The WLF model also expresses the change with the temperature of the shift factor. Williams, Landel and Ferry proposed the following relationship for "a"T in terms of ("T"-"T"0): formula_4 where formula_5 is the decadic logarithm and "C"1 and "C"2 are positive constants that depend on the material and the reference temperature. This relationship holds only in the approximate temperature range [Tg, Tg + 100 °C]. To determine the constants, the factor "a"T is calculated for each component "M"′ and "M"″ of the complex measured modulus "M"*. 
A good correlation between the two shift factors gives the values of the coefficients "C"1 and "C"2 that characterize the material. If "T"0 = "T"g: formula_6 where "C"g1 and "C"g2 are the coefficients of the WLF model when the reference temperature is the glass transition temperature. The coefficients "C"1 and "C"2 depend on the reference temperature. If the reference temperature is changed from "T"0 to T′0, the new coefficients are given by formula_7 In particular, to transform the constants from those obtained at the glass transition temperature to a reference temperature "T"0, formula_8 These same authors have proposed the "universal constants" "C"g1 and "C"g2 for a given polymer system be collected in a table. These constants are approximately the same for a large number of polymers and can be written "C"g1 ≈ 15 and "C"g2 ≈ 50 K. Experimentally observed values deviate from the values in the table. These orders of magnitude are useful and are a good indicator of the quality of a relationship that has been computed from experimental data. Construction of master curves. The principle of time-temperature superposition requires the assumption of thermorheologically simple behavior (all curves have the same characteristic time variation law with temperature). From an initial spectral window ["ω"1, "ω"2] and a series of isotherms in this window, we can calculate the master curves of a material which extends over a broader frequency range. An arbitrary temperature "T"0 is taken as a reference for setting the frequency scale (the curve at that temperature undergoes no shift). In the frequency range ["ω"1, "ω"2], if the temperature increases from "T"0, the complex modulus "E′"("ω") decreases. This amounts to explore a part of the master curve corresponding to frequencies lower than "ω"1 while maintaining the temperature at "T"0. Conversely, lowering the temperature corresponds to the exploration of the part of the curve corresponding to high frequencies. For a reference temperature "T"0, shifts of the modulus curves have the amplitude log("a"T). In the area of glass transition, "a"T is described by an homographic function of the temperature. The viscoelastic behavior is well modeled and allows extrapolation beyond the field of experimental frequencies which typically ranges from 0.01 to 100 Hz . Shift factor using Arrhenius law. The shift factor (which depends on the nature of the transition) using an Arrhenius law: formula_9 where "E"a is the activation energy, "R" is the universal gas constant, and "T"0 is a reference temperature in kelvins. This Arrhenius law, under this glass transition temperature, applies to secondary transitions (relaxation) called "β"-transitions. Limitations. For the superposition principle to apply, the sample must be homogeneous, isotropic and amorphous. The material must be linear viscoelastic under the deformations of interest, i.e., the deformation must be expressed as a linear function of the stress by applying very small strains, e.g. 0.01%. To apply the WLF relationship, such a sample should be sought in the approximate temperature range ["T"g, "T"g + 100 °C], where "α"-transitions are observed (relaxation). The study to determine "a"T and the coefficients "C"1 and "C"2 requires extensive dynamic testing at a number of scanning frequencies and temperature, which represents at least a hundred measurement points. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
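To illustrate the WLF relation and the change of reference temperature described above, here is a short Python sketch. It assumes the approximate "universal" constants C1g ≈ 15 and C2g ≈ 50 K quoted in the text and a hypothetical glass transition temperature of 100 °C; real materials require constants fitted from dynamic measurements.

C1g, C2g = 15.0, 50.0      # approximate "universal" WLF constants at the glass transition
Tg = 100.0                 # hypothetical glass transition temperature, deg C

def log_aT(T, T0, C1, C2):
    """Decadic log of the WLF shift factor for reference temperature T0."""
    return -C1 * (T - T0) / (C2 + (T - T0))

def shift_constants(C1, C2, T0_old, T0_new):
    """Re-express the WLF constants for a new reference temperature (formulas in the text)."""
    d = T0_new - T0_old
    return C1 * C2 / (C2 + d), C2 + d

# Shift factor at Tg + 30 with Tg as the reference temperature:
print(log_aT(Tg + 30, Tg, C1g, C2g))        # -15*30/80 = -5.625

# Same material, but with T0 = Tg + 20 as the reference temperature:
C1, C2 = shift_constants(C1g, C2g, Tg, Tg + 20)
print(C1, C2)                               # ≈ 10.71 and 70.0
print(log_aT(Tg + 30, Tg + 20, C1, C2))     # ≈ -1.339: smaller shift relative to the new reference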
[ { "math_id": 0, "text": "\n E (t, T) = E (\\frac{t}{a_{\\rm T}}, T_0)\\,.\n" }, { "math_id": 1, "text": "\n \\begin{align}\n & T > T_0 \\quad \\implies \\quad a_{\\rm T} < 1 \\\\\n & T < T_0 \\quad \\implies \\quad a_{\\rm T} > 1 \\\\\n & T = T_0 \\quad \\implies \\quad a_{\\rm T} = 1 \\,.\n \\end{align}\n " }, { "math_id": 2, "text": "\n \\begin{align}\n G'(\\omega, T) &= G' \\left(a_{\\rm T}\\,\\omega, T_0\\right) \\\\\n G''(\\omega, T) &= G'' \\left(a_{\\rm T}\\,\\omega, T_0\\right) . \n \\end{align}\n " }, { "math_id": 3, "text": "\n a_{\\rm T} = \\frac{\\eta_{\\rm T}}{\\eta_{\\rm{T0}}}\n " }, { "math_id": 4, "text": "\n \\log a_{\\rm T} = -\\frac{C_1 (T-T_0)} {C_2 + (T-T_0)}\n " }, { "math_id": 5, "text": "\\log" }, { "math_id": 6, "text": "\n \\log a_{\\rm T} = -\\frac{C^g_1 (T-T_g)}{C^g_2 + (T-T_g)} = \\log\\left(\\frac{\\eta_{\\rm T}}{\\eta_{T_g}} \\right)\n " }, { "math_id": 7, "text": "\n C'_1 = \\frac{C_1 \\,C_2}{C_2 + (T'_0-T_0)} \\qquad {\\rm and} \\qquad C'_2 = C_2 + (T'_0-T_0) \\,.\n" }, { "math_id": 8, "text": "\n C^0_1 = \\frac{C^g_1 \\,C^g_2}{C^g_2 + (T_0-T_g)} \\qquad {\\rm and} \\qquad C^0_2 = C^g_2 + (T_0-T_g) \\,.\n" }, { "math_id": 9, "text": "\n \\log(a_{\\rm T}) = - \\frac{E_a}{2.303R}\\left(\\frac{1}{T} - \\frac{1}{T_0} \\right)\n " } ]
https://en.wikipedia.org/wiki?curid=14670996
14671319
Topological indistinguishability
Topological relational characteristic In topology, two points of a topological space "X" are topologically indistinguishable if they have exactly the same neighborhoods. That is, if "x" and "y" are points in "X", and "Nx" is the set of all neighborhoods that contain "x", and "Ny" is the set of all neighborhoods that contain "y", then "x" and "y" are "topologically indistinguishable" if and only if "Nx" = "Ny". Intuitively, two points are topologically indistinguishable if the topology of "X" is unable to discern between the points. Two points of "X" are topologically distinguishable if they are not topologically indistinguishable. This means there is an open set containing precisely one of the two points (equivalently, there is a closed set containing precisely one of the two points). This open set can then be used to distinguish between the two points. A T0 space is a topological space in which every pair of distinct points is topologically distinguishable. This is the weakest of the separation axioms. Topological indistinguishability defines an equivalence relation on any topological space "X". If "x" and "y" are points of "X" we write "x" ≡ "y" for ""x" and "y" are topologically indistinguishable". The equivalence class of "x" will be denoted by ["x"]. Examples. By definition, any two distinct points in a T0 space are topologically distinguishable. On the other hand, regularity and normality do not imply T0, so we can find nontrivial examples of topologically indistinguishable points in regular or normal topological spaces. In fact, almost all of the examples given below are completely regular. Specialization preorder. The topological indistinguishability relation on a space "X" can be recovered from a natural preorder on "X" called the specialization preorder. For points "x" and "y" in "X" this preorder is defined by "x" ≤ "y" if and only if "x" ∈ cl{"y"} where cl{"y"} denotes the closure of {"y"}. Equivalently, "x" ≤ "y" if the neighborhood system of "x", denoted "N""x", is contained in the neighborhood system of "y": "x" ≤ "y" if and only if "N""x" ⊂ "N""y". It is easy to see that this relation on "X" is reflexive and transitive and so defines a preorder. In general, however, this preorder will not be antisymmetric. Indeed, the equivalence relation determined by ≤ is precisely that of topological indistinguishability: "x" ≡ "y" if and only if "x" ≤ "y" and "y" ≤ "x". A topological space is said to be symmetric (or R0) if the specialization preorder is symmetric (i.e. "x" ≤ "y" implies "y" ≤ "x"). In this case, the relations ≤ and ≡ are identical. Topological indistinguishability is better behaved in these spaces and easier to understand. Note that this class of spaces includes all regular and completely regular spaces. Properties. Equivalent conditions. There are several equivalent ways of determining when two points are topologically indistinguishable. Let "X" be a topological space and let "x" and "y" be points of "X". Denote the respective closures of "x" and "y" by cl{"x"} and cl{"y"}, and the respective neighborhood systems by "N""x" and "N""y". Then the following statements are equivalent: These conditions can be simplified in the case where "X" is symmetric space. For these spaces (in particular, for regular spaces), the following statements are equivalent: Equivalence classes. To discuss the equivalence class of "x", it is convenient to first define the upper and lower sets of "x". These are both defined with respect to the specialization preorder discussed above. 
The lower set of "x" is just the closure of {"x"}: formula_4 while the upper set of "x" is the intersection of the neighborhood system at "x": formula_5 The equivalence class of "x" is then given by the intersection formula_6 Since ↓"x" is the intersection of all the closed sets containing "x" and ↑"x" is the intersection of all the open sets containing "x", the equivalence class ["x"] is the intersection of all the open sets and closed sets containing "x". Both cl{"x"} and ∩"N""x" will contain the equivalence class ["x"]. In general, both sets will contain additional points as well. In symmetric spaces (in particular, in regular spaces) however, the three sets coincide: formula_7 In general, the equivalence classes ["x"] will be closed if and only if the space is symmetric. Continuous functions. Let "f" : "X" → "Y" be a continuous function. Then for any "x" and "y" in "X" "x" ≡ "y" implies "f"("x") ≡ "f"("y"). The converse is generally false (There are quotients of T0 spaces which are trivial). The converse will hold if "X" has the initial topology induced by "f". More generally, if "X" has the initial topology induced by a family of maps formula_8 then "x" ≡ "y" if and only if "f"α("x") ≡ "f"α("y") for all α. It follows that two elements in a product space are topologically indistinguishable if and only if each of their components are topologically indistinguishable. Kolmogorov quotient. Since topological indistinguishability is an equivalence relation on any topological space "X", we can form the quotient space "KX" = "X"/≡. The space "KX" is called the Kolmogorov quotient or T0 identification of "X". The space "KX" is, in fact, T0 (i.e. all points are topologically distinguishable). Moreover, by the characteristic property of the quotient map any continuous map "f" : "X" → "Y" from "X" to a T0 space factors through the quotient map "q" : "X" → "KX". Although the quotient map "q" is generally not a homeomorphism (since it is not generally injective), it does induce a bijection between the topology on "X" and the topology on "KX". Intuitively, the Kolmogorov quotient does not alter the topology of a space. It just reduces the point set until points become topologically distinguishable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{f_\\alpha : X \\to Y_\\alpha\\}" }, { "math_id": 1, "text": "f_\\alpha" }, { "math_id": 2, "text": "f_\\alpha(x) = f_\\alpha(y)" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\mathop{\\darr}x = \\{y\\in X: y\\leq x\\} = \\textrm{cl}\\{x\\}" }, { "math_id": 5, "text": "\\mathop{\\uarr}x = \\{y\\in X: x\\leq y\\} = \\bigcap \\mathcal{N}_x." }, { "math_id": 6, "text": "[x] = {\\mathop{\\darr}x} \\cap {\\mathop{\\uarr}x}." }, { "math_id": 7, "text": "[x] = \\textrm{cl}\\{x\\} = \\bigcap\\mathcal{N}_x." }, { "math_id": 8, "text": "f_\\alpha : X \\to Y_\\alpha" } ]
https://en.wikipedia.org/wiki?curid=14671319
146738
Interest
Sum paid for the use of money In finance and economics, interest is payment from a debtor or deposit-taking financial institution to a lender or depositor of an amount above repayment of the principal sum (that is, the amount borrowed), at a particular rate. It is distinct from a fee which the borrower may pay to the lender or some third party. It is also distinct from dividend which is paid by a company to its shareholders (owners) from its profit or reserve, but not at a particular rate decided beforehand, rather on a pro rata basis as a share in the reward gained by risk taking entrepreneurs when the revenue earned exceeds the total costs. For example, a customer would usually pay interest to borrow from a bank, so they pay the bank an amount which is more than the amount they borrowed; or a customer may earn interest on their savings, and so they may withdraw more than they originally deposited. In the case of savings, the customer is the lender, and the bank plays the role of the borrower. Interest differs from profit, in that interest is received by a lender, whereas profit is received by the owner of an asset, investment or enterprise. (Interest may be part or the whole of the profit on an investment, but the two concepts are distinct from each other from an accounting perspective.) The rate of interest is equal to the interest amount paid or received over a particular period divided by the principal sum borrowed or lent (usually expressed as a percentage). Compound interest means that interest is earned on prior interest in addition to the principal. Due to compounding, the total amount of debt grows exponentially, and its mathematical study led to the discovery of the number "e". In practice, interest is most often calculated on a daily, monthly, or yearly basis, and its impact is influenced greatly by its compounding rate. History. Credit is thought to have preceded the existence of coinage by several thousands of years. The first recorded instance of credit is a collection of old Sumerian documents from 3000 BC that show systematic use of credit to loan both grain and metals. The rise of interest as a concept is unknown, though its use in Sumeria argue that it was well established as a concept by 3000BC if not earlier, with historians believing that the concept in its modern sense may have arisen from the lease of animal or seeds for productive purposes. The argument that acquired seeds and animals could reproduce themselves was used to justify interest, but ancient Jewish religious prohibitions against usury (נשך "NeSheKh") represented a "different view". The first written evidence of compound interest dates roughly 2400 BC. The annual interest rate was roughly 20%. Compound interest was necessary for the development of agriculture and important for urbanization. While the traditional Middle Eastern views on interest were the result of the urbanized, economically developed character of the societies that produced them, the new Jewish prohibition on interest showed a pastoral, tribal influence. In the early 2nd millennium BC, since silver used in exchange for livestock or grain could not multiply of its own, the Laws of Eshnunna instituted a legal interest rate, specifically on deposits of dowry. Early Muslims called this "riba", translated today as the charging of interest. The First Council of Nicaea, in 325, forbade clergy from engaging in usury which was defined as lending on interest above 1 percent per month (12.7% AER). 
Ninth-century ecumenical councils applied this regulation on usury to the laity. Catholic Church opposition to interest hardened in the era of the Scholastics, when even defending it was considered a heresy. St. Thomas Aquinas, the leading theologian of the Catholic Church, argued that the charging of interest is wrong because it amounts to "double charging", charging for both the thing and the use of the thing. In the medieval economy, loans were entirely a consequence of necessity (bad harvests, fire in a workplace) and, under those conditions, it was considered morally reproachable to charge interest. It was also considered morally dubious, since no goods were produced through the lending of money, and thus it should not be compensated, unlike other activities with direct physical output such as blacksmithing or farming. For the same reason, interest has often been looked down upon in Islamic civilization, with almost all scholars agreeing that the Qur'an explicitly forbids charging interest. Medieval jurists developed several financial instruments to encourage responsible lending and circumvent prohibitions on usury, such as the Contractum trinius. In the Renaissance era, greater mobility of people facilitated an increase in commerce and the appearance of appropriate conditions for entrepreneurs to start new, lucrative businesses. Given that borrowed money was no longer strictly for consumption but for production as well, interest was no longer viewed in the same manner. The first attempt to control interest rates through manipulation of the money supply was made by the Banque de France in 1847. Islamic finance. The latter half of the 20th century saw the rise of interest-free Islamic banking and finance, a movement that applies Islamic law to financial institutions and the economy. Some countries, including Iran, Sudan, and Pakistan, have taken steps to eradicate interest from their financial systems. Rather than charging interest, the interest-free lender shares the risk by investing as a partner in a profit-and-loss-sharing scheme, because predetermined loan repayment as interest is prohibited and making money out of money is unacceptable. All financial transactions must be asset-backed and must not charge any interest or fee for the service of lending. In the history of mathematics. It is thought that Jacob Bernoulli discovered the mathematical constant e by studying a question about compound interest. He realized that if an account starts with $1.00 and pays, say, 100% interest per year, then at the end of the year the value is $2.00; but if the interest is computed and added twice in the year, the $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5^2 = $2.25. Compounding quarterly yields $1.00 × 1.25^4 = $2.4414..., and so on. Bernoulli noticed that if the frequency of compounding is increased without limit, this sequence can be modeled as follows: formula_0 where "n" is the number of times the interest is to be compounded in a year (a short numerical illustration of this limit is given below). Economics. In economics, the rate of interest is the price of credit, and it plays the role of the cost of capital. In a free market economy, interest rates are subject to the law of supply and demand of the money supply, and one explanation of the tendency of interest rates to be generally greater than zero is the scarcity of loanable funds. Over centuries, various schools of thought have developed explanations of interest and interest rates. 
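The compounding limit attributed to Bernoulli above can be illustrated numerically. The following Python sketch is only an illustrative check (the list of compounding frequencies is chosen arbitrarily and is not from the source); it evaluates the compounded value of $1.00 for increasingly frequent compounding and shows it approaching "e":

```python
import math

# Value of $1.00 after one year at a 100% nominal rate,
# compounded n times per year: (1 + 1/n) ** n
for n in [1, 2, 4, 12, 365, 1_000_000]:
    print(n, (1 + 1 / n) ** n)

print("limit:", math.e)  # e = 2.71828..., the limit as n grows without bound
```

For n = 2 and n = 4 this reproduces the $2.25 and $2.4414... figures quoted above.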
The School of Salamanca justified paying interest in terms of the benefit to the borrower, and interest received by the lender in terms of a premium for the risk of default. In the sixteenth century, Martín de Azpilcueta applied a time preference argument: it is preferable to receive a given good now rather than in the future. Accordingly, interest is compensation for the time the lender forgoes the benefit of spending the money. On the question of why interest rates are normally greater than zero, in 1770, French economist Anne-Robert-Jacques Turgot, Baron de Laune proposed the theory of fructification. By applying an opportunity cost argument, comparing the loan rate with the rate of return on agricultural land, and a mathematical argument, applying the formula for the value of a perpetuity to a plantation, he argued that the land value would rise without limit, as the interest rate approached zero. For the land value to remain positive and finite keeps the interest rate above zero. Adam Smith, Carl Menger, and Frédéric Bastiat also propounded theories of interest rates. In the late 19th century, Swedish economist Knut Wicksell in his 1898 "Interest and Prices" elaborated a comprehensive theory of economic crises based upon a distinction between natural and nominal interest rates. In the 1930s, Wicksell's approach was refined by Bertil Ohlin and Dennis Robertson and became known as the loanable funds theory. Other notable interest rate theories of the period are those of Irving Fisher and John Maynard Keynes. Calculation. Simple interest. Simple interest is calculated only on the principal amount, or on that portion of the principal amount that remains. It excludes the effect of compounding. Simple interest can be applied over a time period other than a year, for example, every month. Simple interest is calculated according to the following formula: formula_1 where "r" is the simple annual interest rate "B" is the initial balance "m" is the number of time periods elapsed and "n" is the frequency of applying interest. For example, imagine that a credit card holder has an outstanding balance of $2500 and that the simple annual interest rate is 12.99% "per annum", applied monthly, so the frequency of applying interest is 12 per year. Over one month, formula_2 interest is due (rounded to the nearest cent). Simple interest applied over 3 months would be formula_3 If the card holder pays off only interest at the end of each of the 3 months, the total amount of interest paid would be formula_4 which is the simple interest applied over 3 months, as calculated above. (The one cent difference arises due to rounding to the nearest cent.) Compound interest. Compound interest includes interest earned on the interest that was previously accumulated. Compare, for example, a bond paying 6 percent semiannually (that is, coupons of 3 percent twice a year) with a certificate of deposit (GIC) that pays 6 percent interest once a year. The total interest payment is $6 per $100 par value in both cases, but the holder of the semiannual bond receives half the $6 per year after only 6 months (time preference), and so has the opportunity to reinvest the first $3 coupon payment after the first 6 months, and earn additional interest. For example, suppose an investor buys $10,000 par value of a US dollar bond, which pays coupons twice a year, and that the bond's simple annual coupon rate is 6 percent per year. 
This means that every 6 months, the issuer pays the holder of the bond a coupon of 3 dollars per 100 dollars par value. At the end of 6 months, the issuer pays the holder: formula_5 Assuming the market price of the bond is 100, so it is trading at par value, suppose further that the holder immediately reinvests the coupon by spending it on another $300 par value of the bond. In total, the investor therefore now holds: formula_6 and so earns a coupon at the end of the next 6 months of: formula_7 Assuming the bond remains priced at par, the investor accumulates at the end of a full 12 months a total value of: formula_8 and the investor earned in total: formula_9 The formula for the annual equivalent compound interest rate is: formula_10 where r is the simple annual rate of interest n is the frequency of applying interest For example, in the case of a 6% simple annual rate, the annual equivalent compound rate is: formula_11 Other formulations. The outstanding balance "Bn" of a loan after "n" regular payments increases each period by a growth factor according to the periodic interest, and then decreases by the amount paid "p" at the end of each period: formula_12 where "i" = simple annual loan rate in decimal form (for example, 10% = 0.10. The loan rate is the rate used to compute payments and balances.) "r" = period interest rate (for example, "i"/12 for monthly payments) "B"0 = initial balance, which equals the principal sum By repeated substitution, one obtains expressions for "B""n", which are linearly proportional to "B"0 and "p", and use of the formula for the partial sum of a geometric series results in formula_13 A solution of this expression for "p" in terms of "B"0 and "B""n" reduces to formula_14 To find the payment if the loan is to be finished in "n" payments, one sets "B""n" = 0. The PMT function found in spreadsheet programs can be used to calculate the monthly payment of a loan: formula_15 An interest-only payment on the current balance would be formula_16 The total interest, "I""T", paid on the loan is formula_17 The formulas for a regular savings program are similar, but the payments are added to the balances instead of being subtracted, and the formula for the payment is the negative of the one above. These formulas are only approximate since actual loan balances are affected by rounding. To avoid an underpayment at the end of the loan, the payment must be rounded up to the next cent. Consider a similar loan but with a new period equal to "k" periods of the problem above. If "r""k" and "p""k" are the new rate and payment, we now have formula_18 Comparing this with the expression for Bk above, we note that formula_19 and formula_20 The last equation allows us to define a constant that is the same for both problems: formula_21 and "B""k" can be written as formula_22 Solving for "r""k", we find a formula for "r""k" involving known quantities and "B""k", the balance after "k" periods: formula_23. Since "B"0 could be any balance in the loan, the formula works for any two balances separate by "k" periods and can be used to compute a value for the annual interest rate. "B"* is a scale invariant, since it does not change with changes in the length of the period. Rearranging the equation for "B"*, one obtains a transformation coefficient (scale factor): formula_24 (see binomial theorem) and we see that "r" and "p" transform in the same manner: formula_25 formula_26. 
The change in the balance transforms likewise: formula_27, which gives an insight into the meaning of some of the coefficients found in the formulas above. The annual rate, "r"12, assumes only one payment per year and is not an "effective" rate for monthly payments. With monthly payments, the monthly interest is paid out of each payment and so should not be compounded, and an annual rate of 12·"r" would make more sense. If one just made interest-only payments, the amount paid for the year would be 12·"r"·"B"0. Substituting "p""k" = "r""k" "B"* into the equation for the "B""k", we obtain formula_28. Since "B""n" = 0, we can solve for "B"*: formula_29 Substituting back into the formula for the "B""k" shows that they are a linear function of the "r""k" and therefore the "λ""k": formula_30. This is the easiest way of estimating the balances if the "λ""k" are known. Substituting into the first formula for "B""k" above and solving for "λ""k"+1, we obtain formula_31. "λ"0 and "λ""n" can be found using the formula for "λ""k" above or computing the "λ""k" recursively from "λ"0 = 0 to "λ""n". Since "p" = "rB"*, the formula for the payment reduces to formula_32 and the average interest rate over the period of the loan is formula_33 which is less than "r" if "n" &gt; 1. Rules of thumb. Rule of 78s. In the age before electronic computers were widely available, flat rate consumer loans in the United States of America would be priced using the Rule of 78s, or "sum of digits" method. (The sum of the integers from 1 to 12 is 78.) The technique required only a simple calculation. Payments remain constant over the life of the loan; however, payments are allocated to interest in progressively smaller amounts. In a one-year loan, in the first month, 12/78 of all interest owed over the life of the loan is due; in the second month, 11/78; progressing to the twelfth month where only 1/78 of all interest is due. The practical effect of the Rule of 78s is to make early pay-offs of term loans more expensive. For a one-year loan, approximately 3/4 of all interest due is collected by the sixth month, and pay-off of the principal then will cause the effective interest rate to be much higher than the APR used to calculate the payments. In 1992, the United States outlawed the use of "Rule of 78s" interest in connection with mortgage refinancing and other consumer loans over five years in term. Certain other jurisdictions have outlawed application of the Rule of 78s in certain types of loans, particularly consumer loans. Rule of 72. To approximate how long it takes for money to double at a given interest rate, that is, for accumulated compound interest to reach or exceed the initial deposit, divide 72 by the percentage interest rate. For example, compounding at an annual interest rate of 6 percent, it will take 72/6 = 12 years for the money to double. The rule provides a good indication for interest rates up to 10%. In the case of an interest rate of 18 percent, the rule of 72 predicts that money will double after 72/18 = 4 years. formula_34 In the case of an interest rate of 24 percent, the rule predicts that money will double after 72/24 = 3 years. formula_35 Market interest rates. There are markets for investments (which include the money market, bond market, as well as retail financial institutions like banks) that set interest rates. Each specific debt takes into account the following factors in determining its interest rate: Opportunity cost and deferred consumption. 
Opportunity cost encompasses any other use to which the money could be put, including lending to others, investing elsewhere, holding cash, or spending the funds. Charging interest equal to inflation preserves the lender's purchasing power, but does not compensate for the time value of money in real terms. The lender may prefer to invest in another product rather than consume. The return they might obtain from competing investments is a factor in determining the interest rate they demand. Inflation. Since the lender is deferring consumption, they will "wish", as a bare minimum, to recover enough to pay the increased cost of goods due to inflation. Because future inflation is unknown, there are three ways this might be achieved: However, interest rates are set by the market, and it happens frequently that they are insufficient to compensate for inflation: for example, at times of high inflation such as during the oil crisis, and during 2011, when real yields on many inflation-linked government stocks were negative. Default. There is always the risk the borrower will become bankrupt, abscond or otherwise default on the loan. The risk premium attempts to measure the integrity of the borrower, the risk of his enterprise succeeding and the security of any collateral pledged. For example, loans to developing countries have higher risk premiums than those to the US government due to the difference in creditworthiness. An operating line of credit to a business will have a higher rate than a mortgage loan. The creditworthiness of businesses is measured by bond rating services and individuals' credit scores by credit bureaus. The risks of an individual debt may have a large standard deviation of possibilities. The lender may want to cover his maximum risk, but lenders with portfolios of debt can lower the risk premium to cover just the most probable outcome. Composition of interest rates. In economics, interest is considered the price of credit; therefore, it is also subject to distortions due to inflation. The nominal interest rate, which refers to the price before adjustment to inflation, is the one visible to the consumer (that is, the interest tagged in a loan contract, credit card statement, etc.). Nominal interest is composed of the real interest rate plus inflation, among other factors. An approximate formula for the nominal interest is: formula_36 Where "i" is the nominal interest rate "r" is the real interest rate and π is inflation. However, not all borrowers and lenders have access to the same interest rate, even if they are subject to the same inflation. Furthermore, expectations of future inflation vary, so a forward-looking interest rate cannot depend on a single real interest rate plus a single expected rate of inflation. Interest rates also depend on credit quality or risk of default. Governments are normally highly reliable debtors, and the interest rate on government securities is normally lower than the interest rate available to other borrowers. The equation: formula_37 relates expectations of inflation and credit risk to nominal and expected real interest rates, over the life of a loan, where "i" is the nominal interest applied "r" is the real interest expected π is the inflation expected and "c" is the yield spread according to the perceived credit risk. Default interest. Default interest is the rate of interest that a borrower must pay after material breach of a loan covenant. 
The default interest is usually much higher than the original interest rate since it reflects the increased financial risk of the borrower. Default interest compensates the lender for the added risk. From the borrower's perspective, this means failure to make their regular payment for one or two payment periods or failure to pay taxes or insurance premiums for the loan collateral will lead to substantially higher interest for the entire remaining term of the loan. Banks tend to add default interest to the loan agreements in order to distinguish between different scenarios. In some jurisdictions, default interest clauses are unenforceable as against public policy. Term. Shorter terms often have less risk of default and exposure to inflation because the near future is easier to predict. In these circumstances, short-term interest rates are lower than longer-term interest rates (an upward sloping yield curve). Government intervention. Interest rates are generally determined by the market, but government intervention - usually by a central bank - may strongly influence short-term interest rates, and is one of the main tools of monetary policy. The central bank offers to borrow (or lend) large quantities of money at a rate which they determine (sometimes this is money that they have created "ex nihilo", that is, printed) which has a major influence on supply and demand and hence on market interest rates. Open market operations in the United States. The Federal Reserve (Fed) implements monetary policy largely by targeting the federal funds rate. This is the rate that banks charge each other for overnight loans of federal funds. Federal funds are the reserves held by banks at the Fed. Open market operations are one tool within monetary policy implemented by the Federal Reserve to steer short-term interest rates. Using the power to buy and sell treasury securities, the Open Market Desk at the Federal Reserve Bank of New York can supply the market with dollars by purchasing U.S. Treasury notes, hence increasing the nation's money supply. Increasing the money supply, or Aggregate Supply of Funding (ASF), causes interest rates to fall because of the excess dollars banks end up with in their reserves. Excess reserves may be lent in the Fed funds market to other banks, thus driving down rates. Interest rates and credit risk. It is increasingly recognized that during the business cycle, interest rates and credit risk are tightly interrelated. The Jarrow-Turnbull model was the first model of credit risk that explicitly had random interest rates at its core. Lando (2004), Darrell Duffie and Singleton (2003), and van Deventer and Imai (2003) discuss interest rates when the issuer of the interest-bearing instrument can default. Money and inflation. Loans and bonds have some of the characteristics of money and are included in the broad money supply. National governments (provided, of course, that the country has retained its own currency) can influence interest rates and thus the supply and demand for such loans, thus altering the total of loans and bonds issued. Generally speaking, a higher real interest rate reduces the broad money supply. Through the quantity theory of money, increases in the money supply lead to inflation. This means that interest rates can affect inflation in the future. Liquidity. Liquidity is the ability to quickly re-sell an asset for fair or near-fair value. 
All else equal, an investor will want a higher return on an illiquid asset than on a liquid one, to compensate for the loss of the option to sell it at any time. U.S. Treasury bonds are highly liquid with an active secondary market, while some other debts are less liquid. In the mortgage market, the lowest rates are often issued on loans that can be re-sold as securitized loans. Highly non-traditional loans such as seller financing often carry higher interest rates due to a lack of liquidity. Theories of interest. Aristotle's view of interest. Aristotle and the Scholastics held that it was unjust to claim payment except in compensation for one's own efforts and sacrifices, and that since money is by its nature sterile, there is no loss in being temporarily separated from it. Compensation for risk or for the trouble of setting up a loan was not necessarily impermissible on these grounds. Development of the theory of interest during the seventeenth and eighteenth centuries. Nicholas Barbon (c.1640–c.1698) described as a "mistake" the view that interest is a monetary value, arguing that because money is typically borrowed to buy assets (goods and stock), the interest that is charged on a loan is a type of rent – "a payment for the use of goods". According to Schumpeter, Barbon's theories were forgotten until similar views were put forward by Joseph Massie in 1750. In 1752 David Hume published his essay "Of money" which relates interest to the "demand for borrowing", the "riches available to supply that demand" and the "profits arising from commerce". Schumpeter considered Hume's theory superior to that of Ricardo and Mill, but the reference to profits concentrates to a surprising degree on 'commerce' rather than on industry. Turgot brought the theory of interest close to its classical form. Industrialists... ... share their profits with capitalists who supply the funds ("Réflexions", LXXI). The share that goes to the latter is determined like all other prices (LXXV) by the play of supply and demand amongst borrowers and lenders, so that the analysis is from the outset firmly planted in the general theory of prices. The classical theory of the interest rate. The classical theory was the work of a number of authors, including Turgot, Ricardo, Mountifort Longfield, J. S. Mill, and Irving Fisher. It was strongly criticised by Keynes, whose remarks nonetheless made a positive contribution to it. Mill's theory is set out in the chapter "Of the rate of interest" in his "Principles of political economy". He says that the interest rate adjusts to maintain equilibrium between the demands for lending and borrowing. Individuals lend in order to defer consumption or for the sake of the greater quantity they will be able to consume at a later date owing to interest earned. They borrow in order to anticipate consumption (whose relative desirability is reflected by the time value of money), but entrepreneurs also borrow to fund investment and governments borrow for their own reasons. The three sources of demand compete for loans. For entrepreneurial borrowing to be in equilibrium with lending: The interest for money... is... regulated... by the rate of profits which can be made by the employment of capital... Ricardo's and Mill's 'profit' is made more precise by the concept of the marginal efficiency of capital (the expression, though not the concept, is due to Keynes), which may be defined as the annual revenue which will be yielded by an extra increment of capital as a proportion of its cost. 
So the interest rate "r" in equilibrium will be equal to the marginal efficiency of capital "r'". Rather than work with "r" and "r'" as separate variables, we can assume that they are equal and let the single variable "r" denote their common value. The investment schedule "i" ("r") shows how much investment is possible with a return of at least "r". In a stationary economy it is likely to resemble the blue curve in the diagram, with a step shape arising from the assumption that opportunities to invest with yields greater than "r̂" have been largely exhausted while there is untapped scope to invest with a lower return. Saving is the excess of deferred over anticipated consumption, and its dependence on income is much as described by Keynes (see The General Theory), but in classical theory definitely an increasing function of "r". (The dependence of "s" on income "y" was not relevant to classical concerns prior to the development of theories of unemployment.) The rate of interest is given by the intersection of the solid red saving curve with the blue investment schedule. But so long as the investment schedule is almost vertical, a change in income (leading in extreme cases to the broken red saving curve) will make little difference to the interest rate. In some cases the analysis will be less simple. The introduction of a new technique, leading to demand for new forms of capital, will shift the step to the right and reduce its steepness. Or a sudden increase in the desire to anticipate consumption (perhaps through military spending in time of war) will absorb most available loans; the interest rate will increase and investment will be reduced to the amount whose return exceeds it. This is illustrated by the dotted red saving curve. Keynes's criticisms. In the case of extraordinary spending in time of war the government may wish to borrow more than the public would be willing to lend at a normal interest rate. If the dotted red curve started negative and showed no tendency to increase with "r", then the government would be trying to buy what the public was unwilling to sell at any price. Keynes mentions this possibility as a point "which might, perhaps, have warned the classical school that something was wrong" (p. 182). He also remarks (on the same page) that the classical theory does not explain the usual supposition that "an increase in the quantity of money has a tendency to reduce the rate of interest, at any rate in the first instance". Keynes's diagram of the investment schedule lacks the step shape which can be seen as part of the classical theory. He objects that the functions used by classical theory... do not furnish material for a theory of the rate of interest; but they could be used to tell us... what the rate of interest will have to be, if the level of employment [which determines income] is maintained at a given figure. Later (p. 184) Keynes claims that "it involves a circular argument" to construct a theory of interest from the investment schedule since the 'marginal efficiency of capital' partly depends on the scale of current investment, and we must already know the rate of interest before we can calculate what this scale will be. Theories of exploitation, productivity and abstinence. The classical theory of interest explains it as the capitalist's share of business profits, but the pre-marginalist authors were unable to reconcile these profits with the labor theory of value (excluding Longfield, who was essentially a marginalist). 
Their responses often had a moral tone: Ricardo and Marx viewed profits as exploitation, and McCulloch's productivity theory justified profits by portraying capital equipment as an embodiment of accumulated labor. The theory that interest is a payment for abstinence is attributed to Nassau Senior, and according to Schumpeter was intended neutrally, but it can easily be understood as making a moral claim and was sharply criticised by Marx and Lassalle. Wicksell's theory. Knut Wicksell published his "Interest and Prices" in 1898, elaborating a comprehensive theory of economic crises based upon a distinction between natural and nominal interest rates. Wicksell's contribution, in fact, was twofold. First he separated the monetary rate of interest from the hypothetical "natural" rate that would have resulted from equilibrium of capital supply and demand in a barter economy, and he assumed that as a result of the presence of money alone, the effective market rate could fail to correspond to this ideal rate in actuality. Next he supposed that through the mechanism of credit, the rate of interest had an influence on prices; that a rise of the monetary rate above the "natural" level produced a fall, and a decline below that level a rise, in prices. But Wicksell went on to conclude that if the natural rate coincided with the monetary rate, stability of prices would follow. In the 1930s Wicksell's approach was refined by Bertil Ohlin and Dennis Robertson and became known as the loanable funds theory. Austrian theories. Eugen Böhm von Bawerk and other members of the Austrian School also put forward notable theories of the interest rate. The doyen of the Austrian school, Murray N. Rothbard, sees the emphasis on the loan market which makes up the general analysis on interest as a mistaken view to take. As he explains in his primary economic work, "Man, Economy, and State", the market rate of interest is but a "manifestation" of the natural phenomenon of time preference, which is to prefer present goods to future goods. To Rothbard, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Too many writers consider the rate of interest as only the price of loans on the loan market. In reality...the rate of interest pervades all time markets, and the productive loan market is a strictly subsidiary time market of only derivative importance. Interest is explainable by the rate of time preference among the people. To point to the loan market is insufficient at best. Rather, the rate of interest is what would be observed between the "stages of production", indeed a time market itself, where capital goods which are used to make consumers' goods are ordered out further in time away from the final consumers' goods stage of the economy where consumption takes place. It is "this" spread (between these various stages which will tend toward uniformity), with consumers' goods representing present goods and producers' goods representing future goods, that the real rate of interest is observed. Rothbard has said that &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Interest rate is equal to the rate of price spread in the various stages. Rothbard has furthermore criticized the Keynesian conception of interest, saying &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;One grave and fundamental Keynesian error is to persist in regarding the interest rate as a contract rate on loans, instead of the price spreads between stages of production. Pareto's indifference. 
Pareto held that The interest rate, being one of the many elements of the general system of equilibrium, was, of course, simultaneously determined with all of them so that there was no point at all in looking for any particular element that 'caused' interest. Keynes's theory of the interest rate. Interest is one of the main components of the economic theories developed in Keynes's 1936 "General theory of employment, interest, and money". In his initial account of liquidity preference (the demand for money) in Chapter 13, this demand is solely a function of the interest rate; and since the supply is given and equilibrium is assumed, the interest rate is determined by the money supply. In his later account (Chapter 15), interest cannot be separated from other economic variables and needs to be analysed together with them. See The General Theory for details. In religious contexts. Judaism. Jews are forbidden from usury in dealing with fellow Jews, and this lending is to be considered tzedakah, or charity. However, there are permissions to charge interest on loans to non-Jews. This is outlined in the Jewish scriptures of the Torah, which Christians hold as part of the Old Testament, and other books of the Tanakh. From the Jewish Publication Society's 1917 Tanakh, with Christian verse numbers, where different, in parentheses: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;If thou lend money to any of My people, even to the poor with thee, thou shalt not be to him as a creditor; neither shall ye lay upon him interest. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Take thou no interest of him or increase; but fear thy God; that thy brother may live with thee. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Thou shalt not give him thy money upon interest, nor give him thy victuals for increase. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Thou shalt not lend upon interest to thy brother: interest of money, interest of victuals, interest of any thing that is lent upon interest. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Unto a foreigner thou mayest lend upon interest; but unto thy brother thou shalt not lend upon interest; that the LORD thy God may bless thee in all that thou puttest thy hand unto, in the land whither thou goest in to possess it. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... that hath withdrawn his hand from the poor, that hath not received interest nor increase, hath executed Mine ordinances, hath walked in My statutes; he shall not die for the iniquity of his father, he shall surely live. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;He that putteth not out his money on interest, nor taketh a bribe against the innocent. He that doeth these things shall never be moved. Several historical rulings in Jewish law have mitigated the allowances for usury toward non-Jews. For instance, the 15th-century commentator Rabbi Isaac Abrabanel specified that the rubric for allowing interest does not apply to Christians or Muslims, because their faith systems have a common ethical basis originating from Judaism. The medieval commentator Rabbi David Kimchi extended this principle to non-Jews who show consideration for Jews, saying they should be treated with the same consideration when they borrow. Islam. 
The following quotations are English translations from the Qur'an: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Those who charge usury are in the same position as those controlled by the devil's influence. This is because they claim that usury is the same as commerce. However, God permits commerce, and prohibits usury. Thus, whoever heeds this commandment from his Lord, and refrains from usury, he may keep his past earnings, and his judgment rests with God. As for those who persist in usury, they incur Hell, wherein they abide forever. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;God condemns usury, and blesses charities. God dislikes every sinning disbeliever. Those who believe and do good works and establish worship and pay the poor-due, their reward is with their Lord and there shall no fear come upon them neither shall they grieve. O you who believe, you shall observe God and refrain from all kinds of usury, if you are believers. If you do not, then expect a war from God and His messenger. But if you repent, you may keep your capitals, without inflicting injustice, or incurring injustice. If the debtor is unable to pay, wait for a better time. If you give up the loan as a charity, it would be better for you, if you only knew. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;O you who believe, you shall not take usury, compounded over and over. Observe God, that you may succeed. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;And for practicing usury, which was forbidden, and for consuming the people's money illicitly. We have prepared for the disbelievers among them painful retribution. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The usury that is practiced to increase some people's wealth, does not gain anything at God. But if people give to charity, seeking God's pleasure, these are the ones who receive their reward many fold. The attitude of Muhammad to usury is articulated in his Last Sermon: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;O People, just as you regard this month, this day, this city as Sacred, so regard the life and property of every Muslim as a sacred trust. Return the goods entrusted to you to their rightful owners. Hurt no one so that no one may hurt you. Remember that you will indeed meet your Lord, and that He will indeed reckon your deeds. Allah has forbidden you to take usury, therefore all usurious obligation shall henceforth be waived. Your capital, however, is yours to keep. You will neither inflict nor suffer any inequity. Allah has Judged that there shall be no usury and that all the usury due to Abbas ibn 'Abd'al Muttalib (Prophet's uncle) shall henceforth be waived ... Christianity. The Old Testament "condemns the practice of charging interest because a loan should be an act of compassion and taking care of one's neighbor"; it teaches that "making a profit off a loan is exploiting that person and dishonoring God's covenant (Exodus 22:25–27)". The first of the scholastic Christian theologians, Saint Anselm of Canterbury, led the shift in thought that labeled charging interest the same as theft. Previously usury had been seen as a lack of charity. St. Thomas Aquinas, the leading scholastic theologian of the Roman Catholic Church, argued charging of interest is wrong because it amounts to "double charging", charging for both the thing and the use of the thing. 
Aquinas said this would be morally wrong in the same way as if one sold a bottle of wine, charged for the bottle of wine, and then charged for the person using the wine to actually drink it. Similarly, one cannot charge for a piece of cake and for the eating of the piece of cake. Yet this, said Aquinas, is what usury does. Money is a medium of exchange, and is used up when it is spent. To charge for the money and for its use (by spending) is therefore to charge for the money twice. It is also to sell time since the usurer charges, in effect, for the time that the money is in the hands of the borrower. Time, however, is not a commodity that anyone can charge for. In condemning usury Aquinas was much influenced by the recently rediscovered philosophical writings of Aristotle and his desire to assimilate Greek philosophy with Christian theology. Aquinas argued that in the case of usury, as in other aspects of Christian revelation, Christian doctrine is reinforced by Aristotelian natural law rationalism. Aristotle's argument is that interest is unnatural, since money, as a sterile element, cannot naturally reproduce itself. Thus, usury conflicts with natural law just as it offends Christian revelation: see Thought of Thomas Aquinas. As such, Aquinas taught that interest is inherently unjust and one who charges interest sins. Outlawing usury did not prevent investment, but stipulated that in order for the investor to share in the profit he must share the risk. In short he must be a joint-venturer. Simply to invest the money and expect it to be returned regardless of the success of the venture was to make money simply by having money and not by taking any risk or by doing any work or by any effort or sacrifice at all, which is usury. St Thomas quotes Aristotle as saying that "to live by usury is exceedingly unnatural". Islam likewise condemns usury but allows commerce (Al-Baqarah 2:275) – an alternative that suggests investment and sharing of profit and loss instead of sharing only profit through interest. Judaism condemns usury towards Jews, but allows it towards non-Jews (Deut. 23:19–20). St Thomas allows, however, charges for actual services provided. Thus a banker or credit-lender could charge for such actual work or effort as he did carry out, for example, any fair administrative charges. The Catholic Church, in a decree of the Fifth Council of the Lateran, expressly allowed such charges in respect of credit-unions run for the benefit of the poor known as "montes pietatis". In the 13th century Cardinal Hostiensis enumerated thirteen situations in which charging interest was not immoral. The most important of these was "lucrum cessans" (profits given up) which allowed the lender to charge interest "to compensate him for profit foregone in investing the money himself". This idea is very similar to opportunity cost. Many scholastic thinkers who argued for a ban on interest charges also argued for the legitimacy of "lucrum cessans" profits (for example, Pierre Jean Olivi and St. Bernardino of Siena). However, Hostiensis' exceptions, including for "lucrum cessans", were never accepted as official by the Roman Catholic Church. The Westminster Confession of Faith, a confession of faith upheld by the Reformed Churches, teaches that usury — defined as charging interest at any rate — is a sin prohibited by the eighth commandment. 
The Roman Catholic Church has always condemned usury, but in modern times, with the rise of capitalism and the disestablishment of the Catholic Church in majority Catholic countries, this prohibition on usury has not been enforced. Pope Benedict XIV's encyclical "Vix Pervenit" gives the reasons why usury is sinful: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The nature of the sin called usury has its proper place and origin in a loan contract ... [which] demands, by its very nature, that one return to another only as much as he has received. The sin rests on the fact that sometimes the creditor desires more than he has given ..., but any gain which exceeds the amount he gave is illicit and usurious. One cannot condone the sin of usury by arguing that the gain is not great or excessive, but rather moderate or small; neither can it be condoned by arguing that the borrower is rich; nor even by arguing that the money borrowed is not left idle, but is spent usefully ... The Congregation of the Missionary Sons of the Immaculate Heart of Mary, a Catholic Christian religious order, thus teaches that: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It might initially seem like little is at stake when it comes to interest, but this is an issue of human dignity. A person is made in God's own image and therefore may never be treated as a thing. Interest can diminish the human person to a thing to be manipulated for money. In an article for The Catholic Worker, Dorothy Day articulated this well: "Can I talk about the people living off usury . . . not knowing the way that their infertile money has bred more money by wise investment in God knows what devilish nerve gas, drugs, napalm, missiles, or vanities, when housing and employment . . . for the poor were needed, and money could have been invested there?" Her thoughts were a precursor to what Pope Francis now calls an "economy that kills." To sin is to say "no" to God and God's presence by harming others, ourselves, or all of creation. Charging interest is indeed sinful when doing so takes advantage of a person in need as well as when it means investing in corporations involved in the harming of God's creatures. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lim_{n \\rightarrow \\infty} \\left( 1 + \\dfrac{1}{n} \\right)^n = e," }, { "math_id": 1, "text": "\\frac {r \\cdot B \\cdot m}{n} " }, { "math_id": 2, "text": "\\frac{0.1299 \\times \\$2500}{12} = \\$27.06" }, { "math_id": 3, "text": "\\frac{0.1299 \\times \\$2500 \\times 3}{12} = \\$81.19" }, { "math_id": 4, "text": "\\frac{0.1299 \\times \\$2500}{12} \\times 3 = \\$27.06\\text{ per month} \\times 3\\text{ months} =\\$81.18" }, { "math_id": 5, "text": "\\frac {r \\cdot B \\cdot m}{n} = \\frac {6\\% \\times \\$10\\,000 \\times 1}{2} = \\$300" }, { "math_id": 6, "text": "\\$10\\,000 + \\$300 = \\left(1 + \\frac{r}{n}\\right) \\cdot B = \\left(1 + \\frac{6\\%}{2}\\right) \\times \\$10\\,000" }, { "math_id": 7, "text": "\\begin{align}\\frac {r \\cdot B \\cdot m}{n} \n&= \\frac {6\\% \\times \\left(\\$10\\,000 + \\$300\\right)}{2}\\\\\n&= \\frac {6\\% \\times \\left(1 + \\frac{6\\%}{2}\\right) \\times \\$10\\,000}{2}\\\\\n&=\\$309\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\\$10,000 + \\$300 + \\$309 \n&= \\$10\\,000 + \\frac {6\\% \\times \\$10,000}{2} + \\frac {6\\% \\times \\left( 1 + \\frac {6\\%}{2}\\right) \\times \\$10\\,000}{2}\\\\\n&= \\$10\\,000 \\times \\left(1 + \\frac{6\\%}{2}\\right)^2\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\\$10\\,000 \\times \\left(1 + \\frac {6\\%}{2}\\right)^2 - \\$10\\,000\\\\\n= \\$10\\,000 \\times \\left( \\left( 1 + \\frac {6\\%}{2}\\right)^2 - 1\\right)\\end{align}" }, { "math_id": 10, "text": "\\left(1 + \\frac{r}{n}\\right)^n - 1" }, { "math_id": 11, "text": "\\left(1 + \\frac{6\\%}{2}\\right)^2 - 1 = 1.03^2 - 1 = 6.09\\%" }, { "math_id": 12, "text": "B_{n} = \\big( 1 + r \\big) B_{n - 1} - p," }, { "math_id": 13, "text": "B_n = (1 + r)^n B_0 - \\frac{(1+r)^n - 1}{r} p" }, { "math_id": 14, "text": "p = r \\left[ \\frac{(1+r)^n B_0 - B_n}{(1+r)^n - 1} \\right]" }, { "math_id": 15, "text": "p=\\mathrm{PMT}(\\text{rate},\\text{num},\\text{PV},\\text{FV},) = \\mathrm{PMT}(r,n,-B_0,B_n,)" }, { "math_id": 16, "text": "p_I= r B. " }, { "math_id": 17, "text": "I_{T} = np - B_0. " }, { "math_id": 18, "text": "B_k = B'_0 = (1 + r_k) B_0 - p_k. " }, { "math_id": 19, "text": "r_k = (1 + r)^k - 1" }, { "math_id": 20, "text": "p_k = \\frac{p}{r} r_k. " }, { "math_id": 21, "text": "B^{*} = \\frac{p}{r} = \\frac{p_k}{r_k}" }, { "math_id": 22, "text": "B_k = (1 + r_k) B_0 - r_k B^*." }, { "math_id": 23, "text": "r_k = \\frac{B_0 - B_k}{B^{*} - B_0}" }, { "math_id": 24, "text": "\\lambda_k = \\frac{p_k}{p} = \\frac{r_k}{r} = \\frac{(1 + r)^k - 1}{r} = k\\left[1 + \\frac{(k - 1)r}{2} + \\cdots\\right]" }, { "math_id": 25, "text": "r_k=\\lambda_k r" }, { "math_id": 26, "text": "p_k=\\lambda_k p" }, { "math_id": 27, "text": "\\Delta B_k=B'-B=(\\lambda_k rB-\\lambda_k p)=\\lambda_k \\, \\Delta B " }, { "math_id": 28, "text": "B_k=B_0-r_k(B^*-B_0)" }, { "math_id": 29, "text": "B^{*} = B_0 \\left(\\frac{1}{r_n} + 1 \\right)." }, { "math_id": 30, "text": "B_k=B_0\\left(1-\\frac{r_k}{r_n}\\right)=B_0\\left(1-\\frac{\\lambda_k}{\\lambda_n}\\right)" }, { "math_id": 31, "text": "\\lambda_{k+1}=1+(1+r)\\lambda_k" }, { "math_id": 32, "text": "p=\\left(r+\\frac{1}{\\lambda_n}\\right)B_0" }, { "math_id": 33, "text": "r_\\text{loan} = \\frac{I_T}{nB_0} = r + \\frac{1}{\\lambda_n} - \\frac{1}{n}, " }, { "math_id": 34, "text": "1.18^4 =1.9388 \\text { (4 d.p.)}" }, { "math_id": 35, "text": "1.24^3 = 1.9066 \\text { (4 d.p.)}" }, { "math_id": 36, "text": " i= r + \\pi " }, { "math_id": 37, "text": " i = r + \\pi + c " } ]
https://en.wikipedia.org/wiki?curid=146738
1467468
Honey flow
Term used in beekeeping Honey flow is a term used by beekeepers indicating that one or more major nectar sources are in bloom and the weather is favorable for bees to fly and collect the nectar in abundance. The higher northern and southern latitudes with their longer summer daytime hours can be of considerable benefit for honey production. Flowers bloom for longer hours and the time per day that bees can fly is extended, so the number of trips per day is higher. In addition, the higher latitudes do not have hot and dry periods in the summer where virtually all of the excess nectar flow dries up. Where there is a succession of nectar sources throughout the summer season, a honeyflow may last for many weeks. In other areas significant honeyflows may only last two or three weeks per year from one or a limited number of nectar sources. The rest of the year is spent in just maintenance – a situation where the incoming nectar and pollen nearly match the needed food for the hive, or where sufficient reserve stores must be present for the hive to survive a winter season. Speed of work. Honeybees visit up to about 40 flowers per minute depending on floral type, nectar availability and weather conditions. Floral visitation rate by honeybees of some important crops: The longer the time period, the greater the nectar availability. It takes twice as much time to collect a load of nectar compared with a load of pollen. A bee will visit 100–1000 flowers per trip from the hive. There is general agreement that a single bee will do an average of 10 trips per day (range 7–13). Large single loads of nectar may weigh 70 mg for Italian bees. Sometimes a hive may gain 4–10 kg in a single day. For such a gain, this means: formula_0 In two days a strong hive with more than 20,000 foragers may fill a honey super. This is for nectar; ripe honey has its water fraction reduced significantly.
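The daily-gain estimate above can be reproduced with a short calculation. The following Python sketch is only an illustrative check using the figures quoted in this article (7,000 foragers, 10 trips per day, 70 mg per load); it is not from the source itself:

```python
# Rough daily nectar gain for a hive during a honey flow,
# using the figures quoted above.
foragers = 7000        # forager bees
trips_per_day = 10     # trips per bee in good flying weather
load_mg = 70           # mg of nectar per trip (Italian bees)

gain_kg = foragers * trips_per_day * load_mg / 1_000_000  # convert mg to kg
print(f"Approximate gain: {gain_kg:.1f} kg/day")  # about 4.9 kg per day
```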
[ { "math_id": 0, "text": " 7000 \\text{ forager bees} \\times \\frac{10 \\text{ trips in good flying weather}}{\\text{ per day}} \\times \\frac{70 \\text{ mg of nectar during honey flow}}{\\text{per trip and bee}} \\times \\frac{1\\text{ kg}}{1,000,000\\text{ mg}} \\approx 5 \\text{ kg/day} " } ]
https://en.wikipedia.org/wiki?curid=1467468
14674709
Trace diagram
Graphical means of performing computations in linear algebra In mathematics, trace diagrams are a graphical means of performing computations in linear and multilinear algebra. They can be represented as (slightly modified) graphs in which some edges are labeled by matrices. The simplest trace diagrams represent the trace and determinant of a matrix. Several results in linear algebra, such as Cramer's Rule and the Cayley–Hamilton theorem, have simple diagrammatic proofs. They are closely related to Penrose's graphical notation. Formal definition. Let "V" be a vector space of dimension "n" over a field "F" (with "n"≥2), and let Hom("V","V") denote the linear transformations on "V". An "n"-trace diagram is a graph formula_0, where the sets "V""i" ("i" = 1, 2, "n") are composed of vertices of degree "i", together with the following additional structures: Note that "V"2 and "Vn" should be considered as distinct sets in the case "n" = 2. A framed trace diagram is a trace diagram together with a partition of the degree-1 vertices "V"1 into two disjoint ordered collections called the "inputs" and the "outputs". The "graph" underlying a trace diagram may have the following special features, which are not always included in the standard definition of a graph: Correspondence with multilinear functions. Every framed trace diagram corresponds to a multilinear function between tensor powers of the vector space "V". The degree-1 vertices correspond to the inputs and outputs of the function, while the degree-"n" vertices correspond to the generalized Levi-Civita symbol (which is an anti-symmetric tensor related to the determinant). If a diagram has no output strands, its function maps tensor products to a scalar. If there are no degree-1 vertices, the diagram is said to be closed and its corresponding function may be identified with a scalar. By definition, a trace diagram's function is computed using signed graph coloring. For each edge coloring of the graph's edges by "n" labels, so that no two edges adjacent to the same vertex have the same label, one assigns a "weight" based on the labels at the vertices and the labels adjacent to the matrix labels. These weights become the coefficients of the diagram's function. In practice, a trace diagram's function is typically computed by "decomposing" the diagram into smaller pieces whose functions are known. The overall function can then be computed by re-composing the individual functions. Examples. 3-Vector diagrams. Several vector identities have easy proofs using trace diagrams. This section covers 3-trace diagrams. In the translation of diagrams to functions, it can be shown that the positions of ciliations at the degree-3 vertices have no influence on the resulting function, so they may be omitted. It can be shown that the cross product and dot product of 3-dimensional vectors are represented by particular trace diagrams. In this picture, the inputs to the function are shown as vectors in yellow boxes at the bottom of the diagram. The cross product diagram has an output vector, represented by the free strand at the top of the diagram. The dot product diagram does not have an output vector; hence, its output is a scalar.
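As a purely numerical illustration of the functions such diagrams encode (this check uses NumPy and is not part of the original article), one can verify that composing the cross-product and dot-product functions of three vectors reproduces a determinant, consistent with the correspondence between degree-3 vertices and the Levi-Civita symbol:

```python
import numpy as np

# Three random 3-dimensional vectors
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))

# The scalar triple product (u x v) . w should equal det[u v w].
triple_product = np.dot(np.cross(u, v), w)
determinant = np.linalg.det(np.column_stack([u, v, w]))
print(np.isclose(triple_product, determinant))  # True
```

This is the same scalar triple product identity that is proved diagrammatically in the example below.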
As a first example, consider the scalar triple product identity formula_1 To prove this diagrammatically, note that all of the following figures are different depictions of the same 3-trace diagram (as specified by the above definition): Combining the above diagrams for the cross product and the dot product, one can read off the three leftmost diagrams as precisely the three leftmost scalar triple products in the above identity. It can also be shown that the rightmost diagram represents det[u v w]. The scalar triple product identity follows because each is a different representation of the same diagram's function. As a second example, one can show that (where the equality indicates that the identity holds for the underlying multilinear functions). One can show that this kind of identity does not change by "bending" the diagram or attaching more diagrams, provided the changes are consistent across all diagrams in the identity. Thus, one can bend the top of the diagram down to the bottom, and attach vectors to each of the free edges, to obtain which reads formula_2 a well-known identity relating four 3-dimensional vectors. Diagrams with matrices. The simplest closed diagrams with a single matrix label correspond to the coefficients of the characteristic polynomial, up to a scalar factor that depends only on the dimension of the matrix. One representation of these diagrams is shown below, where formula_3 is used to indicate equality up to a scalar factor that depends only on the dimension "n" of the underlying vector space. Properties. Let "G" be the group of n×n matrices. If a closed trace diagram is labeled by "k" different matrices, it may be interpreted as a function from formula_4 to an algebra of multilinear functions. This function is invariant under simultaneous conjugation, that is, the function corresponding to formula_5 is the same as the function corresponding to formula_6 for any invertible formula_7. Extensions and applications. Trace diagrams may be specialized for particular Lie groups by altering the definition slightly. In this context, they are sometimes called birdtracks, tensor diagrams, or Penrose graphical notation. Trace diagrams have primarily been used by physicists as a tool for studying Lie groups. The most common applications use representation theory to construct spin networks from trace diagrams. In mathematics, they have been used to study character varieties. References. Books:
[ { "math_id": 0, "text": "\\mathcal{D}=(V_1\\sqcup V_2\\sqcup V_n, E)" }, { "math_id": 1, "text": "(\\mathbf{u}\\times\\mathbf{v})\\cdot\\mathbf{w}=\\mathbf{u}\\cdot(\\mathbf{v}\\times\\mathbf{w})=(\\mathbf{w}\\times\\mathbf{u})\\cdot\\mathbf{v}=\\det(\\mathbf{u}\\mathbf{v}\\mathbf{w})." }, { "math_id": 2, "text": "(\\mathbf{x}\\times\\mathbf{u})\\cdot(\\mathbf{v}\\times\\mathbf{w})=(\\mathbf{x}\\cdot\\mathbf{v})(\\mathbf{u}\\cdot\\mathbf{w})-(\\mathbf{x}\\cdot\\mathbf{w})(\\mathbf{u}\\cdot\\mathbf{v})," }, { "math_id": 3, "text": "\\propto" }, { "math_id": 4, "text": "G^k" }, { "math_id": 5, "text": "(g_1,\\ldots,g_k)" }, { "math_id": 6, "text": "(a g_1 a^{-1}, \\ldots, a g_k a^{-1})" }, { "math_id": 7, "text": "a\\in G" } ]
https://en.wikipedia.org/wiki?curid=14674709
14675761
Birkhoff–Grothendieck theorem
Classifies holomorphic vector bundles over the complex projective line In mathematics, the Birkhoff–Grothendieck theorem classifies holomorphic vector bundles over the complex projective line. In particular, every holomorphic vector bundle over formula_0 is a direct sum of holomorphic line bundles. The theorem was proved by Alexander Grothendieck (1957, Theorem 2.1), and is more or less equivalent to Birkhoff factorization introduced by George David Birkhoff (1909). Statement. More precisely, the statement of the theorem is as follows. Every holomorphic vector bundle formula_1 on formula_0 is holomorphically isomorphic to a direct sum of line bundles: formula_2 The notation implies each summand is a Serre twist some number of times of the trivial bundle. The representation is unique up to permuting factors. Generalization. The same result holds in algebraic geometry for algebraic vector bundles over formula_3 for any field formula_4. It also holds for formula_5 with one or two orbifold points, and for chains of projective lines meeting along nodes. Applications. One application of this theorem is that it gives a classification of all coherent sheaves on formula_6. We have two cases: vector bundles and coherent sheaves supported along a subvariety, so formula_7 where n is the degree of the fat point at formula_8. Since the only subvarieties are points, we have a complete classification of coherent sheaves. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathbb{CP}^1 " }, { "math_id": 1, "text": " \\mathcal{E} " }, { "math_id": 2, "text": " \\mathcal{E}\\cong\\mathcal{O}(a_1)\\oplus \\cdots \\oplus \\mathcal{O}(a_n)." }, { "math_id": 3, "text": "\\mathbb{P}^1_k" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "\\mathbb{P}^1" }, { "math_id": 6, "text": "\\mathbb{CP}^1" }, { "math_id": 7, "text": "\\mathcal{O}(k), \\mathcal{O}_{nx}" }, { "math_id": 8, "text": "x \\in \\mathbb{CP}^1" } ]
https://en.wikipedia.org/wiki?curid=14675761
14676156
1-alkenyl-2-acylglycerol choline phosphotransferase
Class of enzymes In enzymology, a 1-alkenyl-2-acylglycerol choline phosphotransferase (EC 2.7.8.22) is an enzyme that catalyzes the chemical reaction CDP-choline + 1-alkenyl-2-acylglycerol formula_0 CMP + plasmenylcholine Thus, the two substrates of this enzyme are CDP-choline and 1-alkenyl-2-acylglycerol, whereas its two products are CMP and plasmenylcholine. This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. The systematic name of this enzyme class is CDP-choline:1-alkenyl-2-acylglycerol cholinephosphotransferase. This enzyme is also called CDP-choline-1-alkenyl-2-acyl-glycerol phosphocholinetransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676156
14676191
1-phosphatidylinositol 4-kinase
Class of enzymes In enzymology, a 1-phosphatidylinositol 4-kinase (EC 2.7.1.67) is an enzyme that catalyzes the chemical reaction ATP + 1-phosphatidyl-1D-myo-inositol formula_0 ADP + 1-phosphatidyl-1D-myo-inositol 4-phosphate Thus, the two substrates of this enzyme are ATP and 1-phosphatidyl-1D-myo-inositol, whereas its two products are ADP and 1-phosphatidyl-1D-myo-inositol 4-phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:1-phosphatidyl-1D-myo-inositol 4-phosphotransferase. Other names in common use include phosphatidylinositol kinase (phosphorylating), phosphatidylinositol 4-kinase, phosphatidylinositol kinase, type II phosphatidylinositol kinase, PI kinase, and PI 4-kinase. This enzyme participates in inositol phosphate metabolism and phosphatidylinositol signaling system. Structural studies. As of late 2007, the structure of this enzyme had been solved only in part: a portion of the enzyme was crystallized with its activating partner frequenin. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676191
14676222
1-phosphatidylinositol-4-phosphate 5-kinase
Class of enzymes In enzymology, 1-phosphatidylinositol-4-phosphate 5-kinase (EC 2.7.1.68) is an enzyme that catalyzes the chemical reaction ATP + 1-phosphatidyl-1D-myo-inositol 4-phosphate formula_0 ADP + 1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate Thus, the two substrates of this enzyme are ATP and 1-phosphatidyl-1D-myo-inositol 4-phosphate, whereas its two products are ADP and 1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:1-phosphatidyl-1D-myo-inositol-4-phosphate 5-phosphotransferase. Other names in common use include diphosphoinositide kinase, PIP kinase, phosphatidylinositol 4-phosphate kinase, phosphatidylinositol-4-phosphate 5-kinase, and type I PIP kinase. This enzyme participates in 3 metabolic pathways: inositol phosphate metabolism, phosphatidylinositol signaling system, and regulation of the actin cytoskeleton. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1BO1 and 2GK9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676222
14676240
1-phosphatidylinositol-5-phosphate 4-kinase
Class of enzymes In enzymology, a 1-phosphatidylinositol-5-phosphate 4-kinase (EC 2.7.1.149) is an enzyme that catalyzes the chemical reaction ATP + 1-phosphatidyl-1D-myo-inositol 5-phosphate formula_0 ADP + 1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate Thus, the two substrates of this enzyme are ATP and 1-phosphatidyl-1D-myo-inositol 5-phosphate, whereas its two products are ADP and 1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:1-phosphatidyl-1D-myo-inositol-5-phosphate 4-phosphotransferase. This enzyme is also called type II PIP kinase. This enzyme participates in 3 metabolic pathways: inositol phosphate metabolism, phosphatidylinositol signaling system, and regulation of actin cytoskeleton. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676240
14676275
(2,3-dihydroxybenzoyl)adenylate synthase
InterPro Family In enzymology, a (2,3-dihydroxybenzoyl)adenylate synthase (EC 2.7.7.58) is an enzyme that catalyzes the chemical reaction ATP + 2,3-dihydroxybenzoate formula_0 diphosphate + (2,3-dihydroxybenzoyl)adenylate Thus, the two substrates of this enzyme are ATP and 2,3-dihydroxybenzoate, whereas its two products are diphosphate and (2,3-dihydroxybenzoyl)adenylate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is ATP:2,3-dihydroxybenzoate adenylyltransferase. This enzyme is also called 2,3-dihydroxybenzoate-AMP ligase. This enzyme participates in the biosynthesis of siderophore group nonribosomal peptides. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676275
14676297
2-amino-4-hydroxy-6-hydroxymethyldihydropteridine diphosphokinase
Enzyme In enzymology, a 2-amino-4-hydroxy-6-hydroxymethyldihydropteridine diphosphokinase (EC 2.7.6.3) is an enzyme that catalyzes the chemical reaction ATP + 2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine formula_0 AMP + (2-amino-4-hydroxy-7,8-dihydropteridin-6-yl)methyl diphosphate Thus, the two substrates of this enzyme are ATP and 2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine, whereas its two products are AMP and (2-amino-4-hydroxy-7,8-dihydropteridin-6-yl)methyl diphosphate. This enzyme belongs to the family of transferases, specifically those transferring two phosphorus-containing groups (diphosphotransferases). The systematic name of this enzyme class is ATP:2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine 6'-diphosphotransferase. Other names in common use include 2-amino-4-hydroxy-6-hydroxymethyldihydropteridine pyrophosphokinase, H2-pteridine-CH2OH pyrophosphokinase, 7,8-dihydroxymethylpterin-pyrophosphokinase, HPPK, 7,8-dihydro-6-hydroxymethylpterin pyrophosphokinase, and hydroxymethyldihydropteridine pyrophosphokinase. This enzyme participates in folate biosynthesis. This enzyme catalyzes the first step in a three-step pathway leading to 7,8-dihydrofolate. Bacterial HPPK (gene folK or sulD) is a protein of 160 to 270 amino acids. In the lower eukaryote "Pneumocystis carinii", HPPK is the central domain of a multifunctional folate synthesis enzyme (gene fas). Structural studies. As of late 2007, 23 structures have been solved for this class of enzymes, with PDB accession codes 1DY3, 1EQ0, 1EQM, 1EX8, 1F9H, 1F9Y, 1G4C, 1HKA, 1HQ2, 1IM6, 1KBR, 1Q0N, 1RAO, 1RB0, 1RTZ, 1RU1, 1RU2, 1TMJ, 1TMM, 2BMB, 2CG8, 2F63, and 2F65. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676297
14676311
2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase
Class of enzymes In enzymology, a 2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase (EC 2.7.7.60) is an enzyme that catalyzes the chemical reaction: 2-C-methyl-D-erythritol 4-phosphate + CTP formula_0 diphosphate + 4-(cytidine 5'-diphospho)-2-C-methyl-D-erythritol Thus, the two substrates of this enzyme are CTP and 2-C-methyl-D-erythritol 4-phosphate, whereas its two products are diphosphate and 4-diphosphocytidyl-2-C-methylerythritol. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). This enzyme participates in isoprenoid biosynthesis. It catalyzes the third step of the MEP pathway: the formation of CDP-ME (4-diphosphocytidyl-2C-methyl-D-erythritol) from CTP and MEP (2C-methyl-D-erythritol 4-phosphate). The isoprenoid pathway is a well-known target for anti-infective drug development. Nomenclature. The systematic name of this enzyme class is CTP:2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase. The enzyme is also known by several other names; it is normally abbreviated IspD. It is also referenced by the open reading frame YgbP. Structural studies. The crystal structure of the "E. coli" 2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase (PDB accession codes 1I52, 1INI and 1INJ), reported by Richard et al. (2001), was the first for an enzyme involved in the MEP pathway. As of February 2010, 13 other structures have been solved for this class of enzymes, with PDB accession codes 1H3M, 1VGT, 1VGU, 1VGZ, 1VPA, 1VGW, 1W55, 1W57, 1W77, 2PX7, 2VSI, 3F1C and 2VSH. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676311
14676324
2-dehydro-3-deoxygalactonokinase
InterPro Family In enzymology, a 2-dehydro-3-deoxygalactonokinase (EC 2.7.1.58) is an enzyme that catalyzes the chemical reaction ATP + 2-dehydro-3-deoxy-D-galactonate formula_0 ADP + 2-dehydro-3-deoxy-D-galactonate 6-phosphate Thus, the two substrates of this enzyme are ATP and 2-dehydro-3-deoxy-D-galactonate, whereas its two products are ADP and 2-dehydro-3-deoxy-D-galactonate 6-phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:2-dehydro-3-deoxy-D-galactonate 6-phosphotransferase. Other names in common use include 2-keto-3-deoxygalactonokinase, 2-keto-3-deoxygalactonate kinase (phosphorylating), and 2-oxo-3-deoxygalactonate kinase. This enzyme participates in galactose metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676324
14676341
2-dehydro-3-deoxygluconokinase
Class of enzymes In enzymology, a 2-dehydro-3-deoxygluconokinase (EC 2.7.1.45) is an enzyme that catalyzes the chemical reaction ATP + 2-dehydro-3-deoxy-D-gluconate formula_0 ADP + 6-phospho-2-dehydro-3-deoxy-D-gluconate Thus, the two substrates of this enzyme are ATP and 2-dehydro-3-deoxy-D-gluconate, whereas its two products are ADP and 6-phospho-2-dehydro-3-deoxy-D-gluconate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:2-dehydro-3-deoxy-D-gluconate 6-phosphotransferase. Other names in common use include 2-keto-3-deoxygluconokinase, 2-keto-3-deoxy-D-gluconic acid kinase, 2-keto-3-deoxygluconokinase (phosphorylating), 2-keto-3-deoxygluconate kinase, and ketodeoxygluconokinase. This enzyme participates in pentose phosphate pathway and pentose and glucuronate interconversions. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1WYE. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676341
14676356
2'-phosphotransferase
Class of enzymes In enzymology, a 2'-phosphotransferase (EC 2.7.1.160) is an enzyme that catalyzes the chemical reaction 2'-phospho-[ligated tRNA] + NAD+ formula_0 mature tRNA + ADP-ribose 1",2"-phosphate + nicotinamide + H2O Thus, the two substrates of this enzyme are 2'-phospho-[ligated tRNA] and NAD+, whereas its 4 products are mature tRNA, ADP-ribose 1",2"-phosphate, nicotinamide, and H2O. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is 2'-phospho-[ligated tRNA]:NAD+ phosphotransferase. Other names in common use include yeast 2'-phosphotransferase, Tpt1, Tpt1p, and 2'-phospho-tRNA:NAD+ phosphotransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676356
14676374
3-deoxy-manno-octulosonate cytidylyltransferase
InterPro Family In enzymology, a 3-deoxy-manno-octulosonate cytidylyltransferase (EC 2.7.7.38) is an enzyme that catalyzes the chemical reaction CTP + 3-deoxy-D-manno-octulosonate formula_0 diphosphate + CMP-3-deoxy-D-manno-octulosonate Thus, the two substrates of this enzyme are CTP and 3-deoxy-D-manno-octulosonate, whereas its two products are diphosphate and CMP-3-deoxy-D-manno-octulosonate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is CTP:3-deoxy-D-manno-octulosonate cytidylyltransferase. Other names in common use include CMP-3-deoxy-D-manno-octulosonate pyrophosphorylase, 2-keto-3-deoxyoctonate cytidylyltransferase, 3-Deoxy-D-manno-octulosonate cytidylyltransferase, CMP-3-deoxy-D-manno-octulosonate synthetase, CMP-KDO synthetase, CTP:CMP-3-deoxy-D-manno-octulosonate cytidylyltransferase, and cytidine monophospho-3-deoxy-D-manno-octulosonate pyrophosphorylase. This enzyme participates in lipopolysaccharide biosynthesis. Structural studies. As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes 1GQ9, 1GQC, 1H6J, 1H7E, 1H7F, 1H7G, 1H7H, 1H7T, 1VH1, 1VH3, and 1VIC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676374
14676397
3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring) kinase
Class of enzymes In enzymology, a [3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring)] kinase (EC 2.7.11.4) is an enzyme that catalyzes the chemical reaction ATP + [3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring)] formula_0 ADP + [3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring)] phosphate Thus, the two substrates of this enzyme are ATP and 3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring), whereas its 3 products are ADP, 3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring), and phosphate. This enzyme belongs to the family of transferases, specifically those transferring a phosphate group to the sidechain oxygen atom of serine or threonine residues in proteins (protein-serine/threonine kinases). The systematic name of this enzyme class is ATP:[3-methyl-2-oxobutanoate dehydrogenase (acetyl-transferring)] phosphotransferase. Other names in common use include BCK, BCKD kinase, BCODH kinase, branched-chain alpha-ketoacid dehydrogenase kinase, branched-chain 2-oxo acid dehydrogenase kinase, branched-chain keto acid dehydrogenase kinase, branched-chain oxo acid dehydrogenase kinase (phosphorylating), and STK2. In 2012, it was suggested that mutations in the gene encoding this enzyme could be the cause of a rare form of autism. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Literature. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676397
14676412
3-phosphoglyceroyl-phosphate—polyphosphate phosphotransferase
Class of enzymes In enzymology, a 3-phosphoglyceroyl-phosphate—polyphosphate phosphotransferase (EC 2.7.4.17) is an enzyme that catalyzes the chemical reaction 3-phospho-D-glyceroyl phosphate + (phosphate)n formula_0 3-phosphoglycerate + (phosphate)n+1 Thus, the two substrates of this enzyme are 3-phospho-D-glyceroyl phosphate and (phosphate)n, whereas its two products are 3-phosphoglycerate and (phosphate)n+1. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with a phosphate group as acceptor. The systematic name of this enzyme class is 3-phospho-D-glyceroyl-phosphate:polyphosphate phosphotransferase. Other names in common use include diphosphoglycerate-polyphosphate phosphotransferase, and 1,3-diphosphoglycerate-polyphosphate phosphotransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14676412