14215994
L-ribulose-5-phosphate 3-epimerase
In enzymology, an L-ribulose-5-phosphate 3-epimerase (EC 5.1.3.22) is an enzyme that catalyzes the chemical reaction L-ribulose 5-phosphate formula_0 L-xylulose 5-phosphate Hence, this enzyme has one substrate, L-ribulose 5-phosphate, and one product, L-xylulose 5-phosphate. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is L-ribulose-5-phosphate 3-epimerase. Other names in common use include L-xylulose 5-phosphate 3-epimerase, UlaE, and SgaU. This enzyme participates in ascorbate and aldarate metabolism.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14215994
14216020
L-ribulose-5-phosphate 4-epimerase
In enzymology, an L-ribulose-5-phosphate 4-epimerase (EC 5.1.3.4) is an enzyme that catalyzes the interconversion of L-ribulose 5-phosphate and D-xylulose 5-phosphate, linking arabinose catabolism to the pentose phosphate pathway. L-ribulose 5-phosphate formula_0 D-xylulose 5-phosphate This enzyme has a molecular mass of 102 kDa and is believed to be composed of four identical 25.5 kDa subunits. It belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is L-ribulose-5-phosphate 4-epimerase. Other names in common use include phosphoribulose isomerase, ribulose phosphate 4-epimerase, L-ribulose-phosphate 4-epimerase, L-ribulose 5-phosphate 4-epimerase, AraD, and L-Ru5P. This enzyme participates in pentose and glucuronate interconversions and in ascorbate and aldarate metabolism. Enzyme Mechanism. L-Ribulose-5-phosphate 4-epimerase catalyzes the epimerization of L-ribulose 5-phosphate to D-xylulose 5-phosphate by retro-aldol cleavage and a subsequent aldol reaction. The proposed mechanism involves abstraction of the proton from the hydroxyl group on C-4, followed by cleavage of the bond between C-3 and C-4 to give a metal-stabilized acetone enediolate and a glycolaldehyde phosphate fragment. The C–C bond of the glycolaldehyde phosphate fragment is then rotated 180°, and the bond between C-3 and C-4 is re-formed to give inversion of stereochemistry at C-4. This mechanism is contested by a possible alternative dehydration reaction scheme. The literature favors the aldol mechanism for two reasons. First, the retro-aldol cleavage mechanism is analogous to the reaction catalyzed by L-fuculose-phosphate aldolase, which has high sequence similarity with L-ribulose-5-phosphate 4-epimerase. Second, analysis of 13C and deuterium kinetic isotope effects points toward the aldol mechanism. It has been reported that there is little to no deuterium isotope effect at C-3 and C-4, suggesting that these C–H bonds are not broken during epimerization. A change in the isotope effect at C-3 would be expected for the dehydration mechanism, because breaking of that C–H bond is the rate-limiting step, so substituting the C-3 hydrogen with deuterium would significantly alter the rate. At the same time, there are significantly large 13C isotope effects, indicating rate-limiting C–C bond breakage, as expected for the aldol mechanism. Structure. The structure is homotetrameric and displays C4 symmetry. Each protein subunit has a single domain consisting of a central β-sheet flanked on either side by layers of α-helices. The central β-sheet is formed from nine β-strands (b1-b9) and is predominantly antiparallel except between strands b7 and b8. The eight α-helices of the structure form two layers on either side of the central β-sheet. The active site is identified by the position of the catalytic zinc ion and is located at the interface between two adjacent subunits. Asp76, His95, His97, and His171 act as the metal-binding residues. A remarkable feature of the structure is its very close resemblance to that of L-fuculose-phosphate aldolase. This is consistent with the notion that both enzymes belong to a superfamily of epimerases/aldolases that catalyze carbon-carbon bond cleavage via a metal-stabilized enolate intermediate. Biological Function. The gene encoding L-ribulose-5-phosphate 4-epimerase lies in the well-studied L-arabinose operon.
This operon consists of eight genes, araA-araH, with the gene for L-ribulose 5-phosphate 4-epimerase called araD. The arabinose system enables the uptake of the pentose L-arabinose and the subsequent conversion of intracellular arabinose to D-xylulose 5-phosphate in three steps catalyzed by the products of the araA, araB, and araD genes. Evolution. L-Ribulose-5-phosphate 4-epimerase and L-fuculose-1-phosphate (L-Fuc1P) aldolase are evolutionarily related enzymes that display 26% sequence identity and a very high degree of structural similarity. They both employ a divalent cation to stabilize an enolate during catalysis, and both are able to deprotonate the C-4 hydroxyl group of a phosphoketose substrate. Despite these many similarities, subtle distinctions allow the enzymes to catalyze two seemingly different reactions and to accommodate substrates differing greatly in the position of the phosphate (C-5 vs C-1).
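The three-step conversion described above (AraA, AraB, AraD) can be summarized as a small lookup table. The following is an illustrative Python sketch only, not code from any referenced source; the list and function names are invented, and the step order follows the standard bacterial arabinose pathway named in the text:

# Illustrative sketch of the L-arabinose catabolic pathway.
# Each tuple: (substrate, product, catalyzing gene product).
ARA_PATHWAY = [
    ("L-arabinose", "L-ribulose", "AraA (L-arabinose isomerase)"),
    ("L-ribulose", "L-ribulose 5-phosphate", "AraB (ribulokinase)"),
    ("L-ribulose 5-phosphate", "D-xylulose 5-phosphate",
     "AraD (L-ribulose-5-phosphate 4-epimerase)"),
]

def trace_pathway(metabolite="L-arabinose"):
    """Print each conversion step and return the final product."""
    for substrate, product, enzyme in ARA_PATHWAY:
        if substrate == metabolite:
            print(f"{metabolite} -> {product}  [{enzyme}]")
            metabolite = product
    return metabolite

trace_pathway()  # ends at D-xylulose 5-phosphate, which enters the pentose phosphate pathway

The sketch makes the division of labor explicit: AraA isomerizes, AraB phosphorylates, and AraD (this article's enzyme) epimerizes C-4 to hand the product off to central metabolism.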
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216020
14216077
Lysolecithin acylmutase
In enzymology, a lysolecithin acylmutase (EC 5.4.1.1) is an enzyme that catalyzes the chemical reaction 2-lysolecithin formula_0 3-lysolecithin Hence, this enzyme has one substrate, 2-lysolecithin, and one product, 3-lysolecithin. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring acyl groups. The systematic name of this enzyme class is lysolecithin 2,3-acylmutase. This enzyme is also called lysolecithin migratase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216077
14216128
Maleylacetoacetate isomerase
In enzymology, maleylacetoacetate isomerase (EC 5.2.1.2) is an enzyme that catalyzes the chemical reaction 4-maleylacetoacetate formula_0 4-fumarylacetoacetate This enzyme belongs to the family of isomerases, specifically "cis"-"trans" isomerases. The systematic name of this enzyme class is 4-maleylacetoacetate "cis"-"trans"-isomerase. 4-Maleylacetoacetate isomerase is involved in the degradation of L-phenylalanine. It is encoded by the glutathione S-transferase zeta 1 gene, GSTZ1, and belongs to the zeta class of the glutathione S-transferase (GST) superfamily. Mechanism. In the phenylalanine degradation pathway, 4-maleylacetoacetate isomerase catalyzes the "cis"-"trans" isomerization of 4-maleylacetoacetate to 4-fumarylacetoacetate and requires the cofactor glutathione to function. Ser 15, Cys 16, Gln 111, and the helix dipole of alpha 1 stabilize the thiolate form of glutathione, which activates it to attack the alpha carbon of 4-maleylacetoacetate, breaking the double bond and allowing rotation around the resulting single bond. The product, 4-fumarylacetoacetate, can be broken down into fumarate and acetoacetate by the enzyme fumarylacetoacetate hydrolase. The conversion of 4-maleylacetoacetate to 4-fumarylacetoacetate is a step in the catabolism of phenylalanine and tyrosine, amino acids acquired through dietary protein consumption. When 4-maleylacetoacetate isomerase is unable to function properly, 4-maleylacetoacetate may instead be converted to succinylacetoacetate and further broken down into succinate and acetoacetate by fumarylacetoacetate hydrolase. Structure. 4-Maleylacetoacetate isomerase is a homodimer. Although it functions as an isomerase, it is classified within a transferase superfamily. It has a total residue count of 216, a total atom count of 1700, and a theoretical weight of 24.11 kDa. 4-Maleylacetoacetate isomerase has three isoforms. The most common isoform has two domains, an N-terminal domain (residues 4-87) and a C-terminal domain (residues 92-212), with a glutathione-binding site (residues 14-19, 71-72, and 115-117). The N-terminal domain has a four-stranded beta sheet sandwiched by alpha helices on both sides to form a three-layer sandwich tertiary structure. The C-terminal domain is composed mostly of alpha helices arranged as an up-down bundle of tightly packed helices. Glutathione binds at positions 14-19, 71-72, and 115-117; the enzyme also binds sulfate ions and dithiothreitol. Clinical significance. Maleylacetoacetate isomerase deficiency is an autosomal recessive inborn error of metabolism caused by a mutation in GSTZ1, the gene that codes for the synthesis of 4-maleylacetoacetate isomerase. Mutations in 4-maleylacetoacetate isomerase result in accumulation of fumarylacetoacetate and succinylacetone in the urine, but affected individuals are otherwise healthy. It is likely that an alternate nonenzymatic bypass allows the catabolism of 4-maleylacetoacetate in the absence of 4-maleylacetoacetate isomerase. Because of this bypass, a mutation in the gene encoding 4-maleylacetoacetate isomerase is not considered dangerous. GSTZ1 is highly expressed in the liver; however, mutations in this gene do not impair liver function or coagulation. Gene expression.
The gene from which this enzyme is synthesized is mostly expressed in the liver, with some expression in the kidneys, skeletal muscle, and brain. It is also expressed in melanocytes, synovium, placenta, breasts, fetal liver, and heart. Related enzymes. Other enzymes involved in the catabolism of phenylalanine include phenylalanine hydroxylase, tyrosine aminotransferase, p-hydroxyphenylpyruvate dioxygenase, homogentisate oxidase, and fumarylacetoacetate hydrolase. Mutations in some of these enzymes can lead to more severe diseases such as phenylketonuria, alkaptonuria, and tyrosinemia. The gene GSTZ1 is located on chromosome 14q24.3.
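The Related enzymes list above names the pathway members without giving their order. The following illustrative Python sketch (names invented for this example; the step order is the standard textbook phenylalanine/tyrosine catabolic route) lays them out in sequence:

# Canonical order of phenylalanine/tyrosine catabolism.
# Each tuple: (enzyme, substrate, product).
PHE_DEGRADATION = [
    ("phenylalanine hydroxylase", "phenylalanine", "tyrosine"),
    ("tyrosine aminotransferase", "tyrosine", "4-hydroxyphenylpyruvate"),
    ("4-hydroxyphenylpyruvate dioxygenase", "4-hydroxyphenylpyruvate", "homogentisate"),
    ("homogentisate oxidase", "homogentisate", "4-maleylacetoacetate"),
    ("maleylacetoacetate isomerase", "4-maleylacetoacetate", "4-fumarylacetoacetate"),
    ("fumarylacetoacetate hydrolase", "4-fumarylacetoacetate", "fumarate + acetoacetate"),
]

for enzyme, substrate, product in PHE_DEGRADATION:
    print(f"{substrate} --[{enzyme}]--> {product}")

Reading the table row by row shows why a block at any single enzyme produces the disease associated with the metabolite that accumulates just upstream of it.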
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216128
14216156
Maleylpyruvate isomerase
In enzymology, a maleylpyruvate isomerase (EC 5.2.1.4) is an enzyme that catalyzes the chemical reaction 3-maleylpyruvate formula_0 3-fumarylpyruvate Hence, this enzyme has one substrate, 3-maleylpyruvate, and one product, 3-fumarylpyruvate. This enzyme belongs to the family of isomerases, specifically cis-trans isomerases. The systematic name of this enzyme class is 3-maleylpyruvate cis-trans-isomerase. This enzyme participates in tyrosine metabolism. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2NSF and 2NSG.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216156
14216187
Maltose alpha-D-glucosyltransferase
In enzymology, a maltose α-D-glucosyltransferase (EC 5.4.99.16) is an enzyme that catalyzes the chemical reaction maltose formula_0 alpha,alpha-trehalose Hence, this enzyme has one substrate, maltose, and one product, alpha,alpha-trehalose. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring other groups. The systematic name of this enzyme class is maltose alpha-D-glucosylmutase. Other names in common use include trehalose synthase and maltose glucosylmutase. This enzyme participates in starch and sucrose metabolism.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216187
14216219
Maltose epimerase
In enzymology, a maltose epimerase (EC 5.1.3.21) is an enzyme that catalyzes the chemical reaction alpha-maltose formula_0 beta-maltose Hence, this enzyme has one substrate, alpha-maltose, and one product, beta-maltose. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is maltose 1-epimerase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216219
14216252
Mannose isomerase
In enzymology, a mannose isomerase (EC 5.3.1.7) is an enzyme that catalyzes the chemical reaction D-mannose formula_0 D-fructose Hence, this enzyme has one substrate, D-mannose, and one product, D-fructose. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases interconverting aldoses and ketoses. The systematic name of this enzyme class is D-mannose aldose-ketose-isomerase. Other names in common use include D-mannose isomerase and D-mannose ketol-isomerase. This enzyme participates in fructose and mannose metabolism.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216252
14216283
Methionine racemase
In enzymology, a methionine racemase (EC 5.1.1.2) is an enzyme that catalyzes the chemical reaction L-methionine formula_0 D-methionine Hence, this enzyme has one substrate, L-methionine, and one product, D-methionine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on amino acids and derivatives. The systematic name of this enzyme class is methionine racemase. It employs one cofactor, pyridoxal phosphate.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216283
14216319
Methylaspartate mutase
In enzymology, a methylaspartate mutase (EC 5.4.99.1) is an enzyme that catalyzes the chemical reaction L-threo-3-methylaspartate formula_0 L-glutamate Hence, this enzyme has one substrate, L-threo-3-methylaspartate, and one product, L-glutamate. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring other groups. The systematic name of this enzyme class is L-threo-3-methylaspartate carboxy-aminomethylmutase. Other names in common use include glutamate mutase, glutamic mutase, glutamic isomerase, glutamic acid mutase, glutamic acid isomerase, methylaspartic acid mutase, beta-methylaspartate-glutamate mutase, and glutamate isomerase. This enzyme participates in C5-branched dibasic acid metabolism. It employs one cofactor, cobamide. Structural studies. As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1B1A, 1BE1, 1CB7, 1CCW, 1FMF, 1I9C, 1ID8, and 2PWH.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216319
14216351
Methylitaconate Delta-isomerase
In enzymology, a methylitaconate Δ-isomerase (EC 5.3.3.6) is an enzyme that catalyzes the chemical reaction methylitaconate formula_0 2,3-dimethylmaleate Hence, this enzyme has one substrate, methylitaconate, and one product, 2,3-dimethylmaleate. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is methylitaconate Δ2-Δ3-isomerase. This enzyme is also called methylitaconate isomerase. This enzyme participates in C5-branched dibasic acid metabolism.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216351
14216378
Muconolactone Δ-isomerase
In enzymology, a muconolactone Δ-isomerase (EC 5.3.3.4) is an enzyme that catalyzes the chemical reaction (S)-5-oxo-2,5-dihydrofuran-2-acetate formula_0 5-oxo-4,5-dihydrofuran-2-acetate Hence, this enzyme has one substrate, (S)-5-oxo-2,5-dihydrofuran-2-acetate, and one product, 5-oxo-4,5-dihydrofuran-2-acetate. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is 5-oxo-4,5-dihydrofuran-2-acetate Δ3-Δ2-isomerase. This enzyme is also called muconolactone isomerase. This enzyme participates in benzoate degradation via hydroxylation. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1MLI.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216378
14216410
N-acylglucosamine 2-epimerase
In enzymology, a N-acylglucosamine 2-epimerase (EC 5.1.3.8) is an enzyme that catalyzes the chemical reaction N-acyl-D-glucosamine formula_0 N-acyl-D-mannosamine Hence, this enzyme has one substrate, N-acyl-D-glucosamine, and one product, N-acyl-D-mannosamine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is N-acyl-D-glucosamine 2-epimerase. Other names in common use include acylglucosamine 2-epimerase and N-acetylglucosamine 2-epimerase. This enzyme participates in aminosugars metabolism. It employs one cofactor, ATP. Structural studies. As of late 2019, three structures have been solved for this class of enzymes, with the PDB accession codes 1FP3, 2GZ6, and 6F04. They show that the N-acylglucosamine 2-epimerase monomer folds as a barrel composed of α-helices, in a manner known as an (α/α)6-barrel. The structures are presented as dimers, with the structures from "Sus scrofa" and "Anabaena" sp. CH1 having a different organization than the structure from "Nostoc" sp. KJV10.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216410
14216434
N-acylglucosamine-6-phosphate 2-epimerase
In enzymology, a N-acylglucosamine-6-phosphate 2-epimerase (EC 5.1.3.9) is an enzyme that catalyzes the chemical reaction N-acyl-D-glucosamine 6-phosphate formula_0 N-acyl-D-mannosamine 6-phosphate Hence, this enzyme has one substrate, N-acyl-D-glucosamine 6-phosphate, and one product, N-acyl-D-mannosamine 6-phosphate. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is N-acyl-D-glucosamine-6-phosphate 2-epimerase. Other names in common use include acylglucosamine-6-phosphate 2-epimerase and acylglucosamine phosphate 2-epimerase. This enzyme participates in aminosugars metabolism. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1Y0E and 1YXY.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216434
14216463
Neoxanthin synthase
In enzymology, a neoxanthin synthase (EC 5.3.99.9) is an enzyme that catalyzes the chemical reaction: violaxanthin formula_0 neoxanthin Hence, this enzyme has one substrate, violaxanthin, and one product, neoxanthin. This enzyme belongs to the family of isomerases, specifically a class of other intramolecular oxidoreductases. The systematic name of this enzyme class is violaxanthin---neoxanthin isomerase (epoxide-opening). This enzyme is also called NSY. This enzyme participates in carotenoid biosynthesis - general.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216463
14216482
Nocardicin-A epimerase
In enzymology, a nocardicin-A epimerase (EC 5.1.1.14) is an enzyme that catalyzes the chemical reaction isonocardicin A formula_0 nocardicin A Hence, this enzyme has one substrate, isonocardicin A, and one product, nocardicin A. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on amino acids and derivatives. The systematic name of this enzyme class is nocardicin-A epimerase. This enzyme is also called isonocardicin A epimerase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216482
14216502
Ornithine racemase
In enzymology, an ornithine racemase (EC 5.1.1.12) is an enzyme that catalyzes the chemical reaction: L-ornithine formula_0 D-ornithine Hence, this enzyme has one substrate, L-ornithine, and one product, D-ornithine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on amino acids and derivatives. The systematic name of this enzyme class is ornithine racemase. This enzyme participates in D-arginine and D-ornithine metabolism.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216502
14216524
Oxaloacetate tautomerase
In enzymology, an oxaloacetate tautomerase (EC 5.3.2.2) is an enzyme that catalyzes the chemical reaction keto-oxaloacetate formula_0 enol-oxaloacetate Hence, this enzyme has one substrate, keto-oxaloacetate, and one product, enol-oxaloacetate. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases interconverting keto- and enol-groups. The systematic name of this enzyme class is oxaloacetate keto---enol-isomerase. This enzyme is also called oxaloacetic keto-enol isomerase. While oxaloacetate tautomerase was characterized in several papers in the 1960s and 1970s, this activity has not been correlated with any gene identified in the genome of higher organisms.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216524
14216555
Phenylpyruvate tautomerase
In enzymology, phenylpyruvate tautomerase or macrophage migration inhibitory factor (EC 5.3.2.1) is an enzyme that catalyzes the chemical reaction keto-phenylpyruvate formula_0 enol-phenylpyruvate Phenylpyruvate tautomerase has also been found to exhibit the same keto-enol tautomerism for 4-hydroxyphenylpyruvic acid, which is structurally similar to phenylpyruvate but contains an additional hydroxyl moiety in the para position of the aromatic ring. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases interconverting keto- and enol-groups. The systematic name of this enzyme class is phenylpyruvate keto---enol-isomerase. This enzyme is also called phenylpyruvic keto-enol isomerase. This enzyme participates in tyrosine metabolism and phenylalanine metabolism. Structural studies. As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1GYJ, 1GYX, 1GYY, 2GDG, 2OOH, 2OOW, and 2OOZ.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216555
14216587
Phosphoacetylglucosamine mutase
In enzymology, a phosphoacetylglucosamine mutase (EC 5.4.2.3) is an enzyme that catalyzes the chemical reaction N-acetyl-alpha-D-glucosamine 1-phosphate formula_0 N-acetyl-D-glucosamine 6-phosphate Hence, this enzyme has one substrate, N-acetyl-alpha-D-glucosamine 1-phosphate, and one product, N-acetyl-D-glucosamine 6-phosphate. This enzyme belongs to the family of isomerases, specifically the phosphotransferases (phosphomutases), which transfer phosphate groups within a molecule. The systematic name of this enzyme class is N-acetyl-alpha-D-glucosamine 1,6-phosphomutase. Other names in common use include acetylglucosamine phosphomutase, acetylaminodeoxyglucose phosphomutase, phospho-N-acetylglucosamine mutase, and N-acetyl-D-glucosamine 1,6-phosphomutase. This enzyme participates in aminosugars metabolism. This enzyme has at least one effector, N-acetyl-D-glucosamine 1,6-bisphosphate. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1WJW, 2DKA, 2DKC, and 2DKD.
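The bisphosphate effector noted above fits the general ping-pong mechanism of phosphomutases, in which a phosphoenzyme transfers its phosphate to the substrate and is then rephosphorylated by the resulting bisphosphate intermediate. A hedged LaTeX sketch of that generic scheme (standard for this enzyme family, but not spelled out in this article; GlcNAc-P abbreviates N-acetylglucosamine phosphate):

E\text{-}P + \text{GlcNAc-1-P} \;\rightleftharpoons\; E + \text{GlcNAc-1,6-P}_2 \;\rightleftharpoons\; E\text{-}P + \text{GlcNAc-6-P}

On this reading, the N-acetyl-D-glucosamine 1,6-bisphosphate listed as an effector is exactly the species that keeps the enzyme phosphorylated and hence catalytically primed.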
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216587
14216616
Phosphoenolpyruvate mutase
Enzyme In enzymology, a phosphoenolpyruvate mutase (EC 5.4.2.9) is an enzyme that catalyzes the chemical reaction phosphoenolpyruvate formula_0 3-phosphonopyruvate Hence, this enzyme has one substrate, phosphoenolpyruvate (PEP), and one product, 3-phosphonopyruvate (PPR), which are structural isomers. This enzyme belongs to the family of isomerases, specifically the phosphotransferases (phosphomutases), which transfer phosphate groups within a molecule. The systematic name of this enzyme class is phosphoenolpyruvate 2,3-phosphonomutase. Other names in common use include phosphoenolpyruvate-phosphonopyruvate phosphomutase, PEP phosphomutase, phosphoenolpyruvate phosphomutase, PEPPM, and PEP phosphomutase. This enzyme participates in aminophosphonate metabolism. Phosphoenolpyruvate mutase was discovered in 1988. Structural studies. As of late 2007, 6 structures have been solved for this class of enzymes, all by the Herzberg group at the University of Maryland using PEPPM from the blue mussel, "Mytilus edulis". The first structure (PDB accession code 1PYM) was solved in 1999 and featured a magnesium oxalate inhibitor. This structure identified the enzyme as consisting of identical beta barrel subunits (exhibiting the TIM barrel fold, which consists of eight parallel beta strands). Dimerization was observed in which a helix from each subunit interacts with the other subunit's barrel; the authors called this feature "helix swapping." The dimers can dimerize as well to form a homotetrameric enzyme. A double phosphoryl transfer mechanism was proposed on the basis of this study: this would involve breakage of PEP's phosphorus-oxygen bond to form a phosphoenzyme intermediate, followed by transfer of the phosphoryl group from the enzyme to carbon-3, forming PPR. However, more recently, a structure with a sulfopyruvate inhibitor, which is a closer substrate analogue, was solved (1M1B); this study supported instead a dissociative mechanism. A notable feature of these structures was the shielding of the active site from solvent; it was proposed that a significant conformational change takes place on binding to allow this, moving the protein from an "open" to a "closed" state, and this was supported by several crystal structures in the open state. Three of these were of the wild type: the apoenzyme in 1S2T, the enzyme plus its magnesium ion cofactor in 1S2V, and the enzyme at high ionic strength in 1S2W. A mutant (D58A, in one of the active-site loops) was crystallized as an apoenzyme also (1S2U). From these structures, an active-site "gating" loop (residues 115-133) that shields the substrate from solvent in the closed conformation was identified. The two conformations, taken from the crystal structures 1M1B (closed) and 1S2T (open), are docked into each other in the images below; they differ negligibly except in the gating loop, which is colored purple for the closed conformation and blue for the open conformation. In the active-site closeup (left), several sidechains (cyan) that have been identified as important in catalysis are included as well; the overview (right) illustrates the distinctive helix-swapping fold. The images are still shots from ribbon kinemages. Both of these structures were crystallized as dimers. In chain A (used for the active-site closeup), helices are red while loops (other than the gating loop) are white and beta strands are green; in chain B, helices are yellow, beta strands are olive, and loops are gray; these colors are the same for the closed and open structures. 
Magnesium ions are gray and the sulfopyruvate ligands are pink; both are from the closed structure (though the enzyme has also been crystallized with only magnesium bound, and it adopted an open conformation). The structure of PEPPM is very similar to that of methylisocitrate lyase, an enzyme involved in propanoate metabolism whose substrate is also a low-molecular-weight carboxylic acid: the beta-barrel structure as well as the active site layout and multimerization geometry are the same. Isocitrate lyase is also quite similar, though each subunit has a second, smaller beta domain in addition to the main beta barrel. Mechanism. Phosphoenolpyruvate mutase is thought to exhibit a dissociative mechanism. A magnesium ion is involved as a cofactor. The phosphoryl/phosphate group also appears to interact ionically with Arg159 and His190, stabilizing the reactive intermediate. A phosphoenzyme intermediate is unlikely because the most feasible residues for the covalent adduct can be mutated with only partial loss of function. The reaction involves dissociation of phosphorus from oxygen 2 and then a nucleophilic attack by carbon 3 on phosphorus. Notably, the configuration is retained at phosphorus, i.e. carbon 3 of PPR adds to the same face of phosphorus from which oxygen 2 of PEP was removed; this would be unlikely for a non-enzyme-catalyzed dissociative mechanism, but since the reactive intermediate interacts strongly with the amino acids and magnesium ions of the active site, it is to be expected in the presence of enzyme catalysis. Residues in the active-site gating loop, particularly Lys120, Asn122, and Leu124, also appear to interact with the substrate and reactive intermediate; these interactions explain why the loop moves into the closed conformation on substrate binding. Biological function. Because phosphoenolpyruvate mutase has the unusual ability to form a new carbon-phosphorus bond, it is essential to the synthesis of phosphonates, such as phosphonolipids and the antibiotics fosfomycin and bialaphos. The formation of this bond is quite thermodynamically unfavorable; even though PEP is a very high-energy phosphate compound, the equilibrium in PEP-PPR interconversion still favors PEP. The enzyme phosphonopyruvate decarboxylase presents a solution to this problem: it catalyzes the very thermodynamically favorable decarboxylation of PPR, and the resulting 2-phosphonoacetaldehyde is then converted into biologically useful phosphonates. This allows the phosphoenolpyruvate mutase reaction to proceed in the forward direction, due to Le Chatelier's principle. The decarboxylation removes product quickly, and thus the reaction moves forward even though there would be much more reactant than product if the system were allowed to reach equilibrium by itself. The enzyme carboxyphosphoenolpyruvate phosphonomutase performs a similar reaction, converting P-carboxyphosphoenolpyruvate to phosphinopyruvate and carbon dioxide.
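The Le Chatelier argument above can be written compactly. A brief LaTeX sketch with generic symbols (the source reports no numerical free-energy values, so none are assumed):

\Delta G^{\circ\prime}_{\mathrm{overall}} = \underbrace{\Delta G^{\circ\prime}_{1}}_{\text{PEP}\,\rightleftharpoons\,\text{PPR},\;>0} + \underbrace{\Delta G^{\circ\prime}_{2}}_{\text{PPR}\,\to\,\text{2-phosphonoacetaldehyde}\,+\,\mathrm{CO_2},\;\ll 0} < 0, \qquad K_{\mathrm{overall}} = K_1 K_2

Because K_1 < 1 (the equilibrium favors PEP) while K_2 is large, the product K_overall exceeds 1, and flux through the coupled pair runs toward phosphonate synthesis.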
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216616
14216671
Phosphoglucosamine mutase
In enzymology, a phosphoglucosamine mutase (EC 5.4.2.10) is an enzyme that catalyzes the chemical reaction alpha-D-glucosamine 1-phosphate formula_0 D-glucosamine 6-phosphate Hence, this enzyme has one substrate, alpha-D-glucosamine 1-phosphate, and one product, D-glucosamine 6-phosphate. This enzyme belongs to the family of isomerases, specifically the phosphotransferases (α-D-phosphohexomutases), which transfer phosphate groups within a molecule. The systematic name of this enzyme class is alpha-D-glucosamine 1,6-phosphomutase. This enzyme participates in aminosugars metabolism. Crystal structures of two bacterial phosphoglucosamine mutases are known (PDB entries 3I3W and 3PDK), from "Francisella tularensis" and "Bacillus anthracis". Both share a similar dimeric quaternary structure, as well as conserved features of the active site, as found in their enzyme superfamily, the α-D-phosphohexomutases.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216671
14216698
Phosphomannomutase
In enzymology, a phosphomannomutase (EC 5.4.2.8) is an enzyme that catalyzes the chemical reaction alpha-D-mannose 1-phosphate formula_0 D-mannose 6-phosphate Hence, this enzyme has one substrate, alpha-D-mannose 1-phosphate, and one product, D-mannose 6-phosphate. This enzyme belongs to the family of isomerases, specifically the phosphotransferases (phosphomutases), which transfer phosphate groups within a molecule. The systematic name of this enzyme class is alpha-D-mannose 1,6-phosphomutase. Other names in common use include mannose phosphomutase, phosphomannose mutase, and D-mannose 1,6-phosphomutase. This enzyme participates in fructose and mannose metabolism. It has two cofactors: D-glucose 1,6-bisphosphate and D-mannose 1,6-bisphosphate. Structural studies. As of late 2007, 18 structures have been solved for this class of enzymes, with PDB accession codes including 1K2Y, 1WQA, 2AMY, 2F7L, 2FKF, 2FKM, 2FUC, 2FUE, 2H4L, 2H5A, 2I54, 2I55, and 2Q4R.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216698
14216728
Phosphopentomutase
In enzymology, a phosphopentomutase (EC 5.4.2.7) is an enzyme that catalyzes the chemical reaction alpha-D-ribose 1-phosphate formula_0 D-ribose 5-phosphate Hence, this enzyme has one substrate, alpha-D-ribose 1-phosphate, and one product, D-ribose 5-phosphate. This enzyme belongs to the family of isomerases, specifically the phosphotransferases (phosphomutases), which transfer phosphate groups within a molecule. The systematic name of this enzyme class is alpha-D-ribose 1,5-phosphomutase. Other names in common use include phosphodeoxyribomutase, deoxyribose phosphomutase, deoxyribomutase, phosphoribomutase, alpha-D-glucose-1,6-bisphosphate:deoxy-D-ribose-1-phosphate phosphotransferase, and D-ribose 1,5-phosphomutase. This enzyme participates in the pentose phosphate pathway and in purine metabolism. It has 3 cofactors: D-ribose 1,5-bisphosphate, alpha-D-glucose 1,6-bisphosphate, and 2-deoxy-D-ribose 1,5-bisphosphate. Structural studies. The first published description of a structure of a prokaryotic phosphopentomutase was in 2011. Structures of "Bacillus cereus" phosphopentomutase as purified, after activation, bound to ribose 5-phosphate, and bound to glucose 1,6-bisphosphate are deposited in the PDB with accession codes 3M8W, 3M8Y, 3M8Z, and 3OT9, respectively.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216728
14216784
Polyenoic fatty acid isomerase
In enzymology, a polyenoic fatty acid isomerase (EC 5.3.3.13) is an enzyme that catalyzes the chemical reaction (5Z,8Z,11Z,14Z,17Z)-icosapentaenoate formula_0 (5Z,7E,9E,14Z,17Z)-icosapentaenoate Hence, this enzyme has one substrate, (5Z,8Z,11Z,14Z,17Z)-icosapentaenoate, and one product, (5Z,7E,9E,14Z,17Z)-icosapentaenoate. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is (5Z,8Z,11Z,14Z,17Z)-icosapentaenoate Delta8,11-Delta7,9-isomerase (trans-double-bond-forming). Other names in common use include PFI; eicosapentaenoate cis-Delta5,8,11,14,17-eicosapentaenoate cis-Delta5-trans-Delta7,9-cis-Delta14,17 isomerase; (5Z,8Z,11Z,14Z,17Z)-eicosapentaenoate Delta8,11-Delta7,8-isomerase (a form considered incorrect); and (5Z,8Z,11Z,14Z,17Z)-eicosapentaenoate Delta8,11-Delta7,9-isomerase (trans-double-bond-forming).
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216784
14216810
Precorrin-8X methylmutase
In enzymology, a precorrin-8X methylmutase (EC 5.4.99.61) is an enzyme that catalyzes the chemical reaction precorrin-8X formula_0 hydrogenobyrinate Hence, this enzyme has one substrate, precorrin-8X, and one product, hydrogenobyrinate. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring other groups. The systematic name of this enzyme class is precorrin-8X 11,12-methylmutase. Other names in common use include precorrin isomerase, hydrogenobyrinic acid-binding protein, and CobH. This enzyme is part of the biosynthetic pathway to cobalamin (vitamin B12) in aerobic bacteria. Structural studies. As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1F2V, 1I1H, 1OU0, 1V9C, 2AFR, and 2AFV.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216810
14216843
Proline racemase
In enzymology, a proline racemase (EC 5.1.1.4) is an enzyme that catalyzes the chemical reaction L-proline formula_0 D-proline Hence, this enzyme interconverts L-proline and D-proline, each serving as substrate in one direction and product in the other. This enzyme belongs to the family of proline racemases acting on free amino acids. The systematic name of this enzyme class is proline racemase. This enzyme participates in arginine and proline metabolism. These enzymes catalyse the interconversion of L- and D-proline in bacteria. Species distribution. The first eukaryotic proline racemase was identified in "Trypanosoma cruzi" and fully characterized (Q9NCP4). The parasite enzyme, "Tc"PRAC, is a cofactor-independent proline racemase and displays B-cell mitogenic properties when released by "T. cruzi" upon infection, contributing to parasite escape. Novel proline racemases of medical and veterinary importance were described in "Clostridium difficile" (Q17ZY4) and "Trypanosoma vivax" (B8LFE4), respectively. These studies showed that a peptide motif used as a minimal pattern signature to identify putative proline racemases (motif III*) is insufficiently stringent "per se" to discriminate proline racemases from 4-hydroxyproline epimerases (HyPREs). Additional, non-dissociated elements that account for the discrimination of these enzymes were identified, based for instance on polarity constraints imposed by specific residues of the catalytic pockets. Based on those elements, enzymes incorrectly described as proline racemases were biochemically proved to be hydroxyproline epimerases (i.e. HyPREs from "Pseudomonas aeruginosa" (Q9I476), "Burkholderia pseudomallei" (Q63NG7), "Brucella abortus" (Q57B94), "Brucella suis" (Q8FYS0), and "Brucella melitensis" (Q8YJ29)). Structural studies. The biochemical mechanism of proline racemase was first put forward in the late sixties by Cardinale and Abeles using the "Clostridium sticklandii" enzyme, "Cs"PRAC. The catalytic mechanism of proline racemase was later revisited by Buschiazzo, Goytia and collaborators who, in 2006, resolved the structure of the parasite "Tc"PRAC co-crystallized with its known competitive inhibitor, pyrrole-2-carboxylic acid (PYC). Those studies showed that each active enzyme contains two catalytic pockets. Isothermal titration calorimetry then showed that two molecules of PYC associate with "Tc"PRAC in solution, and that this association is time-dependent and most probably based on a mechanism of negative cooperativity. Complementary biochemical findings are consistent with the presence of two active catalytic sites per homodimer, each pertaining to one enzyme subunit, challenging the mechanism of one catalytic site per homodimer proposed previously. Mechanism. The proline racemase active site contains two general bases, each of them a Cys, located on either side of the alpha-carbon of the substrate. In order to work properly, one Cys must be protonated (a thiol, RSH) and the other must be deprotonated (a thiolate, RS–). Inhibition. Proline racemase is inhibited by pyrrole-2-carboxylic acid, a transition-state analogue that is flat like the transition state.
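The two-cysteine arrangement described under Mechanism amounts to a two-step, two-base scheme. A hedged LaTeX sketch (Cys_1 and Cys_2 are generic labels for the two active-site cysteines, not residue numbers from the source):

\text{L-proline} + \mathrm{Cys_1\text{-}S^-} \;\rightleftharpoons\; [\text{planar intermediate}] + \mathrm{Cys_1\text{-}SH}, \qquad [\text{planar intermediate}] + \mathrm{Cys_2\text{-}SH} \;\rightleftharpoons\; \text{D-proline} + \mathrm{Cys_2\text{-}S^-}

One thiolate abstracts the alpha-proton from one face while the opposing thiol reprotonates the flat intermediate from the other face, inverting the stereocenter; pyrrole-2-carboxylic acid inhibits precisely because it mimics this planar species.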
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216843
14216879
Prostaglandin-A1 Delta-isomerase
In enzymology, a prostaglandin-A1 Δ-isomerase (EC 5.3.3.9) is an enzyme that catalyzes the chemical reaction (13E)-(15S)-15-hydroxy-9-oxoprosta-10,13-dienoate formula_0 (13E)-(15S)-15-hydroxy-9-oxoprosta-11,13-dienoate Hence, this enzyme has one substrate, (13E)-(15S)-15-hydroxy-9-oxoprosta-10,13-dienoate (prostaglandin A1 or PGA1), and one product, (13E)-(15S)-15-hydroxy-9-oxoprosta-11,13-dienoate (prostaglandin C1). This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is (13E)-(15S)-15-hydroxy-9-oxoprosta-10,13-dienoate Δ10-Δ11-isomerase. This enzyme is also called prostaglandin A isomerase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216879
14216915
Prostaglandin-D synthase
In enzymology, a prostaglandin-D synthase (EC 5.3.99.2) is an enzyme that catalyzes the chemical reaction (5Z,13E)-(15S)-9alpha,11alpha-epidioxy-15-hydroxyprosta-5,13-dienoate formula_0 (5Z,13E)-(15S)-9alpha,15-dihydroxy-11-oxoprosta-5,13-dienoate Thus, the substrate of this enzyme is (5Z,13E)-(15S)-9alpha,11alpha-epidioxy-15-hydroxyprosta-5,13-dienoate, whereas its product is (5Z,13E)-(15S)-9alpha,15-dihydroxy-11-oxoprosta-5,13-dienoate. This enzyme belongs to the family of isomerases, specifically a class of other intramolecular oxidoreductases. The systematic name of this enzyme class is (5Z,13E)-(15S)-9alpha,11alpha-epidioxy-15-hydroxyprosta-5,13-dienoate Delta-isomerase. Other names in common use include prostaglandin-H2 Delta-isomerase, prostaglandin-R-prostaglandin D isomerase, and PGH-PGD isomerase. This enzyme participates in arachidonic acid metabolism. In March 2012, American scientists reported a discovery suggesting that this enzyme triggers male baldness. According to the study, levels of this enzyme are elevated in the cells of hair follicles located in bald patches on the scalp, but not in hairy areas. The research could lead to a cream to treat baldness. Structural studies. As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1IYH, 1IYI, 1PD2, 1V40, 2CZT, 2CZU, and 2E4J. See also. Prostaglandin D2 synthase Hematopoietic prostaglandin D synthase
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14216915
14217006
Protein-serine epimerase
In enzymology, a protein-serine epimerase (EC 5.1.1.16) is an enzyme that catalyzes the chemical reaction [protein]-L-serine formula_0 [protein]-D-serine Hence, this enzyme has one substrate, [protein]-L-serine, and one product, [protein]-D-serine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on amino acids and derivatives. The systematic name of this enzyme class is [protein]-serine epimerase. This enzyme is also called protein-serine racemase. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1WTC.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217006
14217452
Retinal isomerase
Retinal isomerase (EC 5.2.1.3) is an enzyme that catalyzes the isomerisation of all-trans-retinal in the eye into 11-cis-retinal, which is the form that most opsins bind. all-trans-retinal formula_0 11-cis-retinal Hence, this enzyme has one substrate, all-trans-retinal, and one product, 11-cis-retinal. This enzyme belongs to the family of isomerases, specifically cis-trans isomerases. Its systematic name is all-trans-retinal 11-cis-trans-isomerase. Other names are retinene isomerase and retinoid isomerase. This enzyme participates in retinol metabolism.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217452
14217472
Retinol isomerase
In enzymology, a retinol isomerase (EC 5.2.1.7) is an enzyme that catalyzes the chemical reaction all-trans-retinol formula_0 11-cis-retinol Hence, this enzyme has one substrate, all-trans-retinol, and one product, 11-cis-retinol. These enzymes are alternatively referred to as retinoid isomerases. This enzyme belongs to the family of isomerases, specifically cis-trans isomerases. The systematic name of this enzyme class is all-trans-retinol 11-cis-trans-isomerase. This enzyme is also called all-trans-retinol isomerase. This enzyme participates in retinol metabolism. In vertebrates, RPE65 is the active retinol isomerase in the visual cycle. A lack of RPE65 function results in congenital blindness in children (specifically Leber congenital amaurosis). Emixustat, a partial inhibitor of RPE65, is currently in FDA clinical trials for the treatment of age-related macular degeneration.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217472
14217514
Ribose isomerase
In enzymology, a ribose isomerase (EC 5.3.1.20) is an enzyme that catalyzes the chemical reaction D-ribose formula_0 D-ribulose Hence, this enzyme has one substrate, D-ribose, and one product, D-ribulose. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases interconverting aldoses and ketoses. The systematic name of this enzyme class is D-ribose aldose-ketose-isomerase. Other names in common use include D-ribose isomerase and D-ribose ketol-isomerase.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217514
14217558
S-methyl-5-thioribose-1-phosphate isomerase
In enzymology, a S-methyl-5-thioribose-1-phosphate isomerase (EC 5.3.1.23) is an enzyme that catalyzes the chemical reaction S-methyl-5-thio-alpha-D-ribose 1-phosphate formula_0 S-methyl-5-thio-D-ribulose 1-phosphate Hence, this enzyme has one substrate, S-methyl-5-thio-alpha-D-ribose 1-phosphate, and one product, S-methyl-5-thio-D-ribulose 1-phosphate. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases interconverting aldoses and ketoses. The systematic name of this enzyme class is S-methyl-5-thio-alpha-D-ribose-1-phosphate aldose-ketose-isomerase. Other names in common use include methylthioribose 1-phosphate isomerase, 1-PMTR isomerase, 5-methylthio-5-deoxy-D-ribose-1-phosphate ketol-isomerase, S-methyl-5-thio-5-deoxy-D-ribose-1-phosphate ketol-isomerase, S-methyl-5-thio-5-deoxy-D-ribose-1-phosphate aldose-ketose-isomerase, 1-phospho-5'-S-methylthioribose isomerase, and S-methyl-5-thio-D-ribose-1-phosphate aldose-ketose-isomerase. This enzyme participates in methionine metabolism. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1T9K and 1W2W.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217558
14217578
Steroid Delta-isomerase
In enzymology, a steroid Δ5-isomerase (EC 5.3.3.1) is an enzyme that catalyzes the chemical reaction a 3-oxo-Δ5-steroid formula_0 a 3-oxo-Δ4-steroid Hence, this enzyme has one substrate, a 3-oxo-Δ5-steroid, and one product, a 3-oxo-Δ4-steroid. Introduction. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is 3-oxosteroid Δ5-Δ4-isomerase. Other names in common use include ketosteroid isomerase (KSI), hydroxysteroid isomerase, steroid isomerase, Δ5-ketosteroid isomerase, Δ5(or Δ4)-3-keto steroid isomerase, Δ5-steroid isomerase, 3-oxosteroid isomerase, Δ5-3-keto steroid isomerase, and Δ5-3-oxosteroid isomerase. KSI has been studied extensively from the bacteria "Comamonas testosteroni" (TI), formerly referred to as "Pseudomonas testosteroni", and "Pseudomonas putida" (PI). The enzymes from these two sources are 34% homologous, and structural studies have shown that the placement of the catalytic groups in the active sites is virtually identical. Mammalian KSI has been studied from bovine adrenal cortex and rat liver. This enzyme participates in C21-steroid hormone metabolism and in androgen and estrogen metabolism. An example substrate is Δ5-androstene-3,17-dione, which KSI converts to Δ4-androstene-3,17-dione. In the absence of enzyme, the above reaction takes 7 weeks to complete in aqueous solution. KSI performs this reaction on the order of 10^11 times faster, ranking it among the most proficient enzymes known. Bacterial KSI also serves as a model protein for studying enzyme catalysis and protein folding. Structural studies. KSI exists as a homodimer with two identical halves. The interface between the two monomers is narrow and well defined, consisting of neutral or apolar amino acids, suggesting that hydrophobic interaction is important for dimerization. Results show that dimerization is essential to function. The active site is highly apolar and folds around the substrate in a manner similar to other enzymes with hydrophobic substrates, suggesting this fold is characteristic for binding hydrophobic substrates. No complete atomic structure of KSI appeared until 1997, when an NMR structure of TI KSI was reported. This structure showed that the active site is a deep hydrophobic pit with Asp-38 and Tyr-14 located at the bottom of this pit. The structure is thus entirely consistent with the proposed mechanistic roles of Asp-38 and Tyr-14. As of late 2007, 25 structures have been solved for this class of enzymes, with PDB accession codes 1BUQ, 1C7H, 1CQS, 1DMM, 1DMN, 1DMQ, 1E97, 1GS3, 1ISK, 1K41, 1OCV, 1OGX, 1OGZ, 1OH0, 1OHO, 1OHP, 1OHS, 1OPY, 1VZZ, 1W00, 1W01, 1W02, 1W6Y, 2PZV, and 8CHO. Mechanism. KSI catalyzes the rearrangement of a carbon-carbon double bond in ketosteroids through an enolate intermediate at a diffusion-limited rate. There have been conflicting results on the ionization state of the intermediate, namely whether it exists as the enolate or the enol. Pollack uses a thermodynamic argument to suggest the intermediate exists as the enolate. The general base Asp-38 abstracts a proton from position 4 (alpha to the carbonyl, next to the double bond) of the steroid ring to form an enolate (the rate-limiting step) that is stabilized by the hydrogen-bond-donating Tyr-14 and Asp-99. Tyr-14 and Asp-99 are positioned deep within the hydrophobic active site and form a so-called oxyanion hole.
Protonated Asp-38 then transfers its proton to position 6 of the steroid ring to complete the reaction. Although the mechanistic steps of the reaction are not disputed, the contributions of various factors to catalysis, such as electrostatics, hydrogen bonding of the oxyanion hole, and distal binding effects, are discussed below and still debated. The Warshel group applied statistical mechanical computational methods and empirical valence bond theory to previous experimental data. It was determined that electrostatic preorganization, including ionic residues and fixed dipoles within the active site, contributes most to KSI catalysis. More specifically, the Tyr-14 and Asp-99 dipoles work to stabilize the charge that accumulates on the enolate oxygen (O-3) during catalysis. In a similar way, the charge on Asp-38 is stabilized by surrounding residues and a water molecule during the course of the reaction. The Boxer group used experimental Stark spectroscopy methods to identify the presence of hydrogen-bond-mediated electric fields within the KSI active site. These measurements quantified the electrostatic contribution to KSI catalysis (70%). The active site is lined with hydrophobic residues to accommodate the substrate, but Asp-99 and Tyr-14 are within hydrogen-bonding distance of O-3. The hydrogen bonds from Tyr-14 and Asp-99 are known to significantly affect the rate of catalysis in KSI. Mutagenesis of Asp-99 to alanine (D99A) or asparagine (D99N) results in a loss in activity at pH 7 of 3000-fold and 27-fold, respectively, implicating Asp-99 as important for enzymatic activity. Wu et al. proposed a mechanism that involves both Tyr-14 and Asp-99 forming hydrogen bonds directly to O-3 of the steroid. This mechanism was challenged by Zhao et al., who postulated a hydrogen-bonding network with Asp-99 hydrogen bonding to Tyr-14, which in turn forms a hydrogen bond to O-3. More recently, the Herschlag group utilized unnatural amino acid incorporation to assay the importance of Tyr-14 to KSI catalysis. The natural tyrosine residue was substituted with unnatural halogenated amino acids surveying a range of pKa values. There was very little difference in KSI catalytic turnover with decreasing pKa, suggesting, in contrast to the electrostatic studies outlined above, that oxyanion-hole stabilization is not primarily important for catalysis. Asp-38 general acidic/basic activity and effective molarity were probed by the Herschlag group through site-directed mutagenesis and exogenous base rescue. Asp-38 was mutated to Gly, nullifying catalytic activity, and exogenous rescue was attempted with carboxylates of varying size and concentration. By calculating the concentration of base needed for full rescue, the Herschlag group determined the effective molarity of Asp-38 in KSI (6400 M). Thus, Asp-38 is critical for KSI catalysis. Sigala et al. found that solvent exclusion and replacement by the remote hydrophobic steroid rings negligibly alter the electrostatic environment within the KSI oxyanion hole. In addition, ligand binding does not grossly alter the conformations of backbone and side-chain groups observed in X-ray structures of PI KSI. However, NMR and UV studies suggest that steroid binding restricts the motions of several active-site groups, including Tyr-16. Recently, the Herschlag group proposed that remote binding of hydrophobic regions of the substrate to distal portions of the active site contributes to KSI catalysis (>5 kcal/mol).
A 4-ring substrate reacted 27,000 times faster than a single-ring substrate, indicating the importance of distal active-site binding motifs. This activity ratio persists throughout mutagenesis of residues important to oxyanion-hole stabilization, implying that distal binding accounts for the large aforementioned reactivity difference. Numerous physical changes occur upon steroid binding within the KSI active site. In the free enzyme, an ordered water molecule is positioned within hydrogen-bonding distance of Tyr-16 (the PI equivalent of TI KSI Tyr-14) and Asp-103 (the PI equivalent of TI KSI Asp-99). This and additional disordered water molecules present within the unliganded active site are displaced upon steroid binding and are substantially excluded by the dense constellation of hydrophobic residues that pack around the bound, hydrophobic steroid skeleton. As stated above, the degree to which various factors contribute to KSI catalysis is still debated. Function. KSI occurs in animal tissues concerned with steroid hormone biosynthesis, such as the adrenal, testis, and ovary. KSI in "Comamonas testosteroni" is used in the degradation pathway of steroids, allowing this bacterium to utilize steroids containing a double bond at Δ5, such as testosterone, as its sole source of carbon. In mammals, transfer of the double bond from Δ5 to Δ4 is catalyzed by 3-β-hydroxy-Δ5-steroid dehydrogenase at the same time as the oxidation of the 3-β-hydroxyl group to a ketone, while in "C. testosteroni" and "P. putida", Δ5,3-ketosteroid isomerase just transfers the double bond of a 3-ketosteroid from Δ5 to Δ4. A Δ5-3-ketosteroid isomerase-disrupted mutant of strain TA441 can grow on dehydroepiandrosterone, which has a double bond at Δ5, but cannot grow on epiandrosterone, which lacks a double bond at Δ5, indicating that "C. testosteroni" KSI is responsible for transfer of the double bond from Δ5 to Δ4, and that transfer of the double bond by hydrogenation at Δ5 followed by dehydrogenation at Δ4 is not possible. Model enzyme. KSI has been used as a model system to test different theories to explain how enzymes achieve their catalytic efficiency. Low-barrier hydrogen bonds and unusual pKa values for the catalytic residues have been proposed as the basis for the fast action of KSI. Gerlt and Gassman proposed the formation of unusually short, strong hydrogen bonds between the KSI oxyanion hole and the reaction intermediate as a means of catalytic rate enhancement. In their model, high-energy states along the reaction coordinate are specifically stabilized by the formation of these bonds. Since then, the catalytic role of short, strong hydrogen bonds has been debated. Another proposal explaining enzyme catalysis tested through KSI is geometrical complementarity of the active site to the transition state, which proposes that the active-site electrostatics are complementary to the substrate transition state. KSI has also been a model system for studying protein folding. Kim et al. studied the effect of folding and tertiary structure on the function of KSI.
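The headline numbers in this entry can be sanity-checked with simple arithmetic. A short LaTeX sketch (the 7-week and 10^11 figures and the 6400 M effective molarity are from the text; the derived timescale is just their quotient, and the rate-constant symbols are generic):

t_{\mathrm{uncat}} \approx 7\ \text{weeks} \approx 4.2\times 10^{6}\ \mathrm{s}, \qquad t_{\mathrm{enz}} \approx \frac{t_{\mathrm{uncat}}}{10^{11}} \approx 4\times 10^{-5}\ \mathrm{s}, \qquad \mathrm{EM}(\text{Asp-38}) = \frac{k_{\mathrm{intra}}}{k_{2,\mathrm{exo}}} \approx 6400\ \mathrm{M}

In words: the enzyme compresses a seven-week reaction into tens of microseconds, and a free carboxylate base would have to be present at a physically impossible 6400 M to match the positioned Asp-38.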
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217578
14217606
Styrene-oxide isomerase
In enzymology, a styrene-oxide isomerase (EC 5.3.99.7) is an enzyme that catalyzes the chemical reaction styrene oxide formula_0 phenylacetaldehyde Hence, this enzyme has one substrate, styrene oxide, and one product, phenylacetaldehyde. This enzyme belongs to the family of isomerases, specifically a class of other intramolecular oxidoreductases. The systematic name of this enzyme class is styrene-oxide isomerase (epoxide-cleaving). This enzyme is also called SOI. This enzyme participates in styrene degradation, catalyzing the second step of the pathway, after the epoxidation of styrene by styrene monooxygenase. SOI is an integral membrane protein consisting of four transmembrane helices. Khanppnavar et al. determined the first cryo-EM structures of this protein, which show that SOI forms a novel homo-trimeric assembly, displaying a structural fold reminiscent of ion channels. The trimeric organization of SOI is essential for its function and is guided by the ferric heme b prosthetic group positioned at the interface of its subunits. This ferric heme b acts as a Lewis acid, interacting with the epoxide oxygen atom to facilitate epoxide ring-opening of substrates. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217606
14217632
Tartrate epimerase
Class of enzymes In enzymology, a tartrate epimerase (EC 5.1.2.5) is an enzyme that catalyzes the chemical reaction (R,R)-tartrate formula_0 meso-tartrate Hence, this enzyme has one substrate, (R,R)-tartrate, and one product, meso-tartrate. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on hydroxy acids and derivatives. The systematic name of this enzyme class is tartrate epimerase. This enzyme is also called tartaric racemase. This enzyme participates in glyoxylate and dicarboxylate metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217632
14217662
Tetrahydroxypteridine cycloisomerase
In enzymology, a tetrahydroxypteridine cycloisomerase (EC 5.5.1.3) is an enzyme that catalyzes the chemical reaction tetrahydroxypteridine formula_0 xanthine-8-carboxylate Hence, this enzyme has one substrate, tetrahydroxypteridine, and one product, xanthine-8-carboxylate. This enzyme belongs to the family of isomerases, specifically the class of intramolecular lyases. The systematic name of this enzyme class is tetrahydroxypteridine lyase (isomerizing). It employs one cofactor, NAD+. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217662
14217679
Thiocyanate isomerase
In enzymology, a thiocyanate isomerase (EC 5.99.1.1) is an enzyme that catalyzes the chemical reaction benzyl isothiocyanate formula_0 benzyl thiocyanate Hence, this enzyme has one substrate, benzyl isothiocyanate, and one product, benzyl thiocyanate. This enzyme belongs to the family of isomerases, specifically the sole sub-subclass of "other isomerases", reserved for isomerases that do not belong in the other subclasses. The systematic name of this enzyme class is benzyl-thiocyanate isomerase. This enzyme is also called isothiocyanate isomerase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217679
14217704
Threonine racemase
In enzymology, a threonine racemase (EC 5.1.1.6) is an enzyme that catalyzes the chemical reaction L-threonine formula_0 D-threonine Hence, this enzyme has one substrate, L-threonine, and one product, D-threonine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on amino acids and derivatives. The systematic name of this enzyme class is threonine racemase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217704
14217730
Trans-2-decenoyl-(acyl-carrier protein) isomerase
In enzymology, a trans-2-decenoyl-[acyl-carrier protein] isomerase (EC 5.3.3.14) is an enzyme that catalyzes the chemical reaction trans-dec-2-enoyl-[acyl-carrier-protein] formula_0 cis-dec-3-enoyl-[acyl-carrier-protein] Hence, this enzyme has one substrate, trans-dec-2-enoyl-[acyl-carrier-protein], and one product, cis-dec-3-enoyl-[acyl-carrier-protein]. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is decenoyl-[acyl-carrier-protein] Delta2-trans-Delta3-cis-isomerase. Other names in common use include beta-hydroxydecanoyl thioester dehydrase, trans-2-cis-3-decenoyl-ACP isomerase, trans-2,cis-3-decenoyl-ACP isomerase, trans-2-decenoyl-ACP isomerase, and FabM. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217730
14217757
TRNA-pseudouridine synthase I
In enzymology, a tRNA-pseudouridine synthase I (EC 5.4.99.12) is an enzyme that catalyzes the chemical reaction tRNA uridine formula_0 tRNA pseudouridine Hence, this enzyme has one substrate, tRNA uridine, and one product, tRNA pseudouridine. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring other groups. The systematic name of this enzyme class is tRNA-uridine uracilmutase. Other names in common use include tRNA-uridine isomerase, tRNA pseudouridylate synthase I, transfer ribonucleate pseudouridine synthetase, pseudouridine synthase, and transfer RNA pseudouridine synthetase. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1VS3, 2NQP, 2NR0, and 2NRE. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217757
14217777
Tyrosine 2,3-aminomutase
In enzymology, a tyrosine 2,3-aminomutase (EC 5.4.3.6) is an enzyme that catalyzes the chemical reaction L-tyrosine formula_0 3-amino-3-(4-hydroxyphenyl)propanoate Hence, this enzyme has one substrate, L-tyrosine, and one product, 3-amino-3-(4-hydroxyphenyl)propanoate. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring amino groups. The systematic name of this enzyme class is L-tyrosine 2,3-aminomutase. This enzyme is also called tyrosine alpha,beta-mutase. This enzyme participates in tyrosine metabolism. It employs one cofactor, 5-methylene-3,5-dihydroimidazol-4-one (MIO), which is formed by autocatalytic rearrangement of the internal tripeptide Ala-Ser-Gly. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2OHY. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217777
14217797
UDP-arabinose 4-epimerase
In enzymology, an UDP-arabinose 4-epimerase (EC 5.1.3.5) is an enzyme that catalyzes the chemical reaction UDP-L-arabinose formula_0 UDP-D-xylose Hence, this enzyme has one substrate, UDP-L-arabinose, and one product, UDP-D-xylose. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-L-arabinose 4-epimerase. Other names in common use include uridine diphosphoarabinose epimerase, UDP arabinose epimerase, uridine 5'-diphosphate-D-xylose 4-epimerase, and UDP-D-xylose 4-epimerase. This enzyme participates in nucleotide sugars metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217797
14217814
UDP-galactopyranose mutase
In enzymology, an UDP-galactopyranose mutase (EC 5.4.99.9) is an enzyme that catalyzes the chemical reaction UDP-D-galactopyranose formula_0 UDP-D-galacto-1,4-furanose Hence, this enzyme has one substrate, UDP-D-galactopyranose, and one product, UDP-D-galacto-1,4-furanose. This enzyme belongs to the family of isomerases, specifically those intramolecular transferases transferring other groups. The systematic name of this enzyme class is UDP-D-galactopyranose furanomutase. UDP-D-galactofuranose then serves as an activated sugar donor for the biosynthesis of galactofuranose glycoconjugates. The exocyclic 1,2-diol of galactofuranose is the epitope recognized by the putative chordate immune lectin intelectin. Structural studies. Because UDP-galactopyranose mutase (UGM) is not present in mammalian systems but is essential in several pathogenic microbes, the enzyme is an attractive antibiotic target. As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1I8T, 1V0J, 1WAM, 2BI7, and 2BI8. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217814
14217843
UDP-glucosamine 4-epimerase
Class of enzymes In enzymology, an UDP-glucosamine 4-epimerase (EC 5.1.3.16) is an enzyme that catalyzes the chemical reaction UDP-glucosamine formula_0 UDP-galactosamine Hence, this enzyme has one substrate, UDP-glucosamine, and one product, UDP-galactosamine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-glucosamine 4-epimerase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217843
14217885
UDP-glucuronate 4-epimerase
Class of enzymes In enzymology, an UDP-glucuronate 4-epimerase (EC 5.1.3.6) is an enzyme that catalyzes the chemical reaction UDP-glucuronate formula_0 UDP-D-galacturonate Hence, this enzyme has one substrate, UDP-glucuronate, and one product, UDP-D-galacturonate. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-glucuronate 4-epimerase. Other names in common use include uridine diphospho-D-galacturonic acid, UDP glucuronic epimerase, uridine diphosphoglucuronic epimerase, UDP-galacturonate 4-epimerase, uridine diphosphoglucuronate epimerase, and UDP-D-galacturonic acid 4-epimerase. This enzyme participates in starch and sucrose metabolism and nucleotide sugars metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217885
14217907
UDP-glucuronate 5'-epimerase
Class of enzymes In enzymology, an UDP-glucuronate 5'-epimerase (EC 5.1.3.12) is an enzyme that catalyzes the chemical reaction UDP-glucuronate formula_0 UDP-L-iduronate Hence, this enzyme has one substrate, UDP-glucuronate, and one product, UDP-L-iduronate. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-glucuronate 5'-epimerase. Other names in common use include uridine diphosphoglucuronate 5'-epimerase, UDP-glucuronic acid 5'-epimerase, and C-5-uronosyl epimerase. This enzyme participates in nucleotide sugars metabolism. It employs one cofactor, NAD+. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217907
14217924
UDP-N-acetylglucosamine 2-epimerase
Class of enzymes In enzymology, an UDP-N-acetylglucosamine 2-epimerase (EC 5.1.3.14) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine formula_0 UDP-N-acetyl-D-mannosamine Hence, this enzyme has one substrate, UDP-N-acetyl-D-glucosamine, and one product, UDP-N-acetyl-D-mannosamine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine 2-epimerase. Other names in common use include UDP-N-acetylglucosamine 2'-epimerase, uridine diphosphoacetylglucosamine 2'-epimerase, uridine diphospho-N-acetylglucosamine 2'-epimerase, and uridine diphosphate-N-acetylglucosamine-2'-epimerase. This enzyme participates in aminosugars metabolism. In microorganisms this epimerase is involved in the synthesis of the capsule precursor UDP-ManNAcA. An inhibitor of the bacterial 2-epimerase, epimerox, has been described. Some of these enzymes are bifunctional. The UDP-N-acetylglucosamine 2-epimerase from rat liver displays both epimerase and kinase activity. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1F6D, 1O6C, 1V4V, and 1VGV. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217924
14217938
UDP-N-acetylglucosamine 4-epimerase
Class of enzymes In enzymology, an UDP-N-acetylglucosamine 4-epimerase (EC 5.1.3.7) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-glucosamine formula_0 UDP-N-acetyl-D-galactosamine Hence, this enzyme has one substrate, UDP-N-acetyl-D-glucosamine, and one product, UDP-N-acetyl-D-galactosamine. This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-N-acetyl-D-glucosamine 4-epimerase. Other names in common use include UDP acetylglucosamine epimerase, uridine diphosphoacetylglucosamine epimerase, uridine diphosphate N-acetylglucosamine-4-epimerase, and uridine 5'-diphospho-N-acetylglucosamine-4-epimerase. This enzyme participates in aminosugars metabolism. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1SB8 and 1SB9. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217938
14217960
Vinylacetyl-CoA Delta-isomerase
In enzymology, a vinylacetyl-CoA Delta-isomerase (EC 5.3.3.3) is an enzyme that catalyzes the chemical reaction vinylacetyl-CoA formula_0 crotonyl-CoA Hence, this enzyme has one substrate, vinylacetyl-CoA, and one product, crotonyl-CoA. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases transposing C=C bonds. The systematic name of this enzyme class is vinylacetyl-CoA Delta3-Delta2-isomerase. Other names in common use include vinylacetyl coenzyme A Delta-isomerase, vinylacetyl coenzyme A isomerase, and Delta3-cis-Delta2-trans-enoyl-CoA isomerase. This enzyme participates in butanoate metabolism. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1U8V. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14217960
14220
History of mathematics
The history of mathematics deals with the origin of discoveries in mathematics and the mathematical methods and notation of the past. Before the modern age and the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. From 3000 BC the Mesopotamian states of Sumer, Akkad and Assyria, followed closely by Ancient Egypt and the Levantine state of Ebla began using arithmetic, algebra and geometry for purposes of taxation, commerce, trade and also in the field of astronomy to record time and formulate calendars. The earliest mathematical texts available are from Mesopotamia and Egypt – "Plimpton 322" (Babylonian c. 2000 – 1900 BC), the "Rhind Mathematical Papyrus" (Egyptian c. 1800 BC) and the "Moscow Mathematical Papyrus" (Egyptian c. 1890 BC). All of these texts mention the so-called Pythagorean triples, so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. The study of mathematics as a "demonstrative discipline" began in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek "μάθημα" ("mathema"), meaning "subject of instruction". Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics. Although they made virtually no contributions to theoretical mathematics, the ancient Romans used applied mathematics in surveying, structural engineering, mechanical engineering, bookkeeping, creation of lunar and solar calendars, and even arts and crafts. Chinese mathematics made early contributions, including a place value system and the first use of negative numbers. The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics through the work of Muḥammad ibn Mūsā al-Khwārizmī. Islamic mathematics, in turn, developed and expanded the mathematics known to these civilizations. Contemporaneous with but independent of these traditions were the mathematics developed by the Maya civilization of Mexico and Central America, where the concept of zero was given a standard symbol in Maya numerals. Many Greek and Arabic texts on mathematics were translated into Latin from the 12th century onward, leading to further development of mathematics in Medieval Europe. From ancient times through the Middle Ages, periods of mathematical discovery were often followed by centuries of stagnation. Beginning in Renaissance Italy in the 15th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues through the present day. This includes the groundbreaking work of both Isaac Newton and Gottfried Wilhelm Leibniz in the development of infinitesimal calculus during the course of the 17th century. Prehistoric. The origins of mathematical thought lie in the concepts of number, patterns in nature, magnitude, and form. Modern studies of animal cognition have shown that these concepts are not unique to humans. Such concepts would have been part of everyday life in hunter-gatherer societies. 
The idea of the "number" concept evolving gradually over time is supported by the existence of languages which preserve the distinction between "one", "two", and "many", but not of numbers larger than two. The Ishango bone, found near the headwaters of the Nile river (northeastern Congo), may be more than 20,000 years old and consists of a series of marks carved in three columns running the length of the bone. Common interpretations are that the Ishango bone shows either the earliest known tally of a sequence of prime numbers or a six-month lunar calendar. Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10." The Ishango bone, according to scholar Alexander Marshack, may have influenced the later development of mathematics in Egypt as, like some entries on the Ishango bone, Egyptian arithmetic also made use of multiplication by 2; this, however, is disputed. Predynastic Egyptians of the 5th millennium BC pictorially represented geometric designs. It has been claimed that megalithic monuments in England and Scotland, dating from the 3rd millennium BC, incorporate geometric ideas such as circles, ellipses, and Pythagorean triples in their design. All of the above are disputed, however, and the currently oldest undisputed mathematical documents are from Babylonian and dynastic Egyptian sources. Babylonian. Babylonian mathematics refers to any mathematics of the peoples of Mesopotamia (modern Iraq) from the days of the early Sumerians through the Hellenistic period almost to the dawn of Christianity. The majority of Babylonian mathematical work comes from two widely separated periods: the first few hundred years of the second millennium BC (Old Babylonian period), and the last few centuries of the first millennium BC (Seleucid period). It is named Babylonian mathematics due to the central role of Babylon as a place of study. Later under the Arab Empire, Mesopotamia, especially Baghdad, once again became an important center of study for Islamic mathematics. In contrast to the sparsity of sources in Egyptian mathematics, knowledge of Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s. Written in Cuneiform script, tablets were inscribed whilst the clay was moist, and baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians, who built the earliest civilization in Mesopotamia. They developed a complex system of metrology from 3000 BC that was chiefly concerned with administrative/financial counting, such as grain allotments, workers, weights of silver, or even liquids, among other things. From around 2500 BC onward, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period. Babylonian mathematics was written using a sexagesimal (base-60) numeral system. 
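To make the sexagesimal place-value arithmetic concrete, here is a minimal sketch (in Python; the helper name and digit layout are illustrative assumptions, and the digit reading 1;24,51,10 is the standard modern interpretation of the tablet YBC 7289 discussed below):

```python
# Evaluate a base-60 digit sequence; `point` gives the number of digits
# that precede the sexagesimal "point" (which the Babylonians left implicit).
def from_sexagesimal(digits, point=1):
    value = 0.0
    for i, d in enumerate(digits):
        value += d * 60.0 ** (point - 1 - i)
    return value

# The tablet YBC 7289 records 1;24,51,10 as an approximation of sqrt(2):
print(from_sexagesimal([1, 24, 51, 10]))  # 1.4142129629..., versus
                                          # sqrt(2) = 1.4142135623...,
                                          # i.e. accurate to five decimal places
```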
From this derives the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of seconds and minutes of arc to denote fractions of a degree. It is thought the sexagesimal system was initially used by Sumerian scribes because 60 can be evenly divided by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30, and for scribes (doling out the aforementioned grain allotments, recording weights of silver, etc.) being able to easily calculate by hand was essential, and so a sexagesimal system is pragmatically easier to calculate by hand with; however, there is the possibility that using a sexagesimal system was an ethno-linguistic phenomenon (that might not ever be known), and not a mathematical/practical decision. Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a place-value system, where digits written in the left column represented larger values, much as in the decimal system. The power of the Babylonian notational system lay in that it could be used to represent fractions as easily as whole numbers; thus multiplying two numbers that contained fractions was no different from multiplying integers, similar to modern notation. The notational system of the Babylonians was the best of any civilization until the Renaissance, and its power allowed it to achieve remarkable computational accuracy; for example, the Babylonian tablet YBC 7289 gives an approximation of √2 accurate to five decimal places. The Babylonians lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context. By the Seleucid period, the Babylonians had developed a zero symbol as a placeholder for empty positions; however, it was only used for intermediate positions. This zero sign does not appear in terminal positions, thus the Babylonians came close but did not develop a true place value system. Other topics covered by Babylonian mathematics include fractions, algebra, quadratic and cubic equations, and the calculation of regular numbers and their reciprocal pairs. The tablets also include multiplication tables and methods for solving linear, quadratic, and cubic equations, a remarkable achievement for the time. Tablets from the Old Babylonian period also contain the earliest known statement of the Pythagorean theorem. However, as with Egyptian mathematics, Babylonian mathematics shows no awareness of the difference between exact and approximate solutions, or the solvability of a problem, and most importantly, no explicit statement of the need for proofs or logical principles. Egyptian. Egyptian mathematics refers to mathematics written in the Egyptian language. From the Hellenistic period, Greek replaced Egyptian as the written language of Egyptian scholars. Mathematical study in Egypt later continued under the Arab Empire as part of Islamic mathematics, when Arabic became the written language of Egyptian scholars. Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa. Fractal geometry designs which are widespread among Sub-Saharan African cultures are also found in Egyptian architecture and cosmological signs. The most extensive Egyptian mathematical text is the Rhind papyrus (sometimes also called the Ahmes Papyrus after its author), dated to c. 1650 BC but likely a copy of an older document from the Middle Kingdom of about 2000–1800 BC. It is an instruction manual for students in arithmetic and geometry. 
In addition to giving area formulas and methods for multiplication, division and working with unit fractions, it also contains evidence of other mathematical knowledge, including composite and prime numbers; arithmetic, geometric and harmonic means; and simplistic understandings of both the Sieve of Eratosthenes and perfect number theory (namely, that of the number 6). It also shows how to solve first order linear equations as well as arithmetic and geometric series. Another significant Egyptian mathematical text is the Moscow papyrus, also from the Middle Kingdom period, dated to c. 1890 BC. It consists of what are today called "word problems" or "story problems", which were apparently intended as entertainment. One problem is considered to be of particular importance because it gives a method for finding the volume of a frustum (truncated pyramid). Finally, the Berlin Papyrus 6619 (c. 1800 BC) shows that ancient Egyptians could solve a second-order algebraic equation. Greek. Greek mathematics refers to the mathematics written in the Greek language from the time of Thales of Miletus (~600 BC) to the closure of the Academy of Athens in 529 AD. Greek mathematicians lived in cities spread over the entire Eastern Mediterranean, from Italy to North Africa, but were united by culture and language. Greek mathematics of the period following Alexander the Great is sometimes called Hellenistic mathematics. Greek mathematics was much more sophisticated than the mathematics that had been developed by earlier cultures. All surviving records of pre-Greek mathematics show the use of inductive reasoning, that is, repeated observations used to establish rules of thumb. Greek mathematicians, by contrast, used deductive reasoning. The Greeks used logic to derive conclusions from definitions and axioms, and used mathematical rigor to prove them. Greek mathematics is thought to have begun with Thales of Miletus (c. 624–c.546 BC) and Pythagoras of Samos (c. 582–c. 507 BC). Although the extent of the influence is disputed, they were probably inspired by Egyptian and Babylonian mathematics. According to legend, Pythagoras traveled to Egypt to learn mathematics, geometry, and astronomy from Egyptian priests. Thales used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales' Theorem. As a result, he has been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed. Pythagoras established the Pythagorean School, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number". It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins. The Pythagoreans are credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history, and with the proof of the existence of irrational numbers. Although he was preceded by the Babylonians, Indians and the Chinese, the Neopythagorean mathematician Nicomachus (60–120 AD) provided one of the earliest Greco-Roman multiplication tables, whereas the oldest extant Greek multiplication table is found on a wax tablet dated to the 1st century AD (now found in the British Museum). 
The association of the Neopythagoreans with the Western invention of the multiplication table is evident in its later Medieval name: the "mensa Pythagorica". Plato (428/427 BC – 348/347 BC) is important in the history of mathematics for inspiring and guiding others. His Platonic Academy, in Athens, became the mathematical center of the world in the 4th century BC, and it was from this school that the leading mathematicians of the day, such as Eudoxus of Cnidus (c. 390 – c. 340 BC), came. Plato also discussed the foundations of mathematics, clarified some of the definitions (e.g. that of a line as "breadthless length"), and reorganized the assumptions. The analytic method is ascribed to Plato, while a formula for obtaining Pythagorean triples bears his name. Eudoxus developed the method of exhaustion, a precursor of modern integration and a theory of ratios that avoided the problem of incommensurable magnitudes. The former allowed the calculations of areas and volumes of curvilinear figures, while the latter enabled subsequent geometers to make significant advances in geometry. Though he made no specific technical mathematical discoveries, Aristotle (384–c. 322 BC) contributed significantly to the development of mathematics by laying the foundations of logic. In the 3rd century BC, the premier center of mathematical education and research was the Musaeum of Alexandria. It was there that Euclid (c. 300 BC) taught, and wrote the "Elements", widely considered the most successful and influential textbook of all time. The "Elements" introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the "Elements" were already known, Euclid arranged them into a single, coherent logical framework. The "Elements" was known to all educated people in the West up through the middle of the 20th century and its contents are still taught in geometry classes today. In addition to the familiar theorems of Euclidean geometry, the "Elements" was meant as an introductory textbook to all mathematical subjects of the time, such as number theory, algebra and solid geometry, including proofs that the square root of two is irrational and that there are infinitely many prime numbers. Euclid also wrote extensively on other subjects, such as conic sections, optics, spherical geometry, and mechanics, but only half of his writings survive. Archimedes (c. 287–212 BC) of Syracuse, widely considered the greatest mathematician of antiquity, used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. He also showed one could use the method of exhaustion to calculate the value of π with as much precision as desired, and obtained the most accurate value of π then known, 3 + 10/71 < π < 3 + 1/7. He also studied the spiral bearing his name, obtained formulas for the volumes of surfaces of revolution (paraboloid, ellipsoid, hyperboloid), and an ingenious method of exponentiation for expressing very large numbers. While he is also known for his contributions to physics and several advanced mechanical devices, Archimedes himself placed far greater value on the products of his thought and general mathematical principles. 
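Archimedes' polygon procedure for bounding π can be paraphrased as a short iteration; the following is a minimal sketch (modern perimeter recurrences for a circle of unit diameter, not Archimedes' own notation):

```python
# Bracket pi between the perimeters of inscribed and circumscribed regular
# polygons of a circle of diameter 1, doubling the number of sides each step
# as Archimedes did (hexagon -> 12 -> 24 -> 48 -> 96 sides).
import math

a = 2 * math.sqrt(3)  # perimeter of the circumscribed hexagon
b = 3.0               # perimeter of the inscribed hexagon
sides = 6
while sides < 96:
    a = 2 * a * b / (a + b)  # circumscribed perimeter after doubling the sides
    b = math.sqrt(a * b)     # inscribed perimeter after doubling the sides
    sides *= 2

# Prints 3.14103... < pi < 3.14271...; Archimedes rounded these bounds
# outward to the fractions 3 + 10/71 and 3 + 1/7.
print(b, "< pi <", a)
```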
He regarded as his greatest achievement his finding of the surface area and volume of a sphere, which he obtained by proving these are 2/3 the surface area and volume of a cylinder circumscribing the sphere. Apollonius of Perga (c. 262–190 BC) made significant advances to the study of conic sections, showing that one can obtain all three varieties of conic section by varying the angle of the plane that cuts a double-napped cone. He also coined the terminology in use today for conic sections, namely parabola ("place beside" or "comparison"), "ellipse" ("deficiency"), and "hyperbola" ("a throw beyond"). His work "Conics" is one of the best known and preserved mathematical works from antiquity, and in it he derives many theorems concerning conic sections that would prove invaluable to later mathematicians and astronomers studying planetary motion, such as Isaac Newton. While neither Apollonius nor any other Greek mathematicians made the leap to coordinate geometry, Apollonius' treatment of curves is in some ways similar to the modern treatment, and some of his work seems to anticipate the development of analytical geometry by Descartes some 1800 years later. Around the same time, Eratosthenes of Cyrene (c. 276–194 BC) devised the Sieve of Eratosthenes for finding prime numbers. The 3rd century BC is generally regarded as the "Golden Age" of Greek mathematics, with advances in pure mathematics henceforth in relative decline. Nevertheless, in the centuries that followed significant advances were made in applied mathematics, most notably trigonometry, largely to address the needs of astronomers. Hipparchus of Nicaea (c. 190–120 BC) is considered the founder of trigonometry for compiling the first known trigonometric table, and to him is also due the systematic use of the 360 degree circle. Heron of Alexandria (c. 10–70 AD) is credited with Heron's formula for finding the area of a scalene triangle and with being the first to recognize the possibility of negative numbers possessing square roots. Menelaus of Alexandria (c. 100 AD) pioneered spherical trigonometry through Menelaus' theorem. The most complete and influential trigonometric work of antiquity is the "Almagest" of Ptolemy (c. AD 90–168), a landmark astronomical treatise whose trigonometric tables would be used by astronomers for the next thousand years. Ptolemy is also credited with Ptolemy's theorem for deriving trigonometric quantities, and the most accurate value of π outside of China until the medieval period, 3.1416. Following a period of stagnation after Ptolemy, the period between 250 and 350 AD is sometimes referred to as the "Silver Age" of Greek mathematics. During this period, Diophantus made significant advances in algebra, particularly indeterminate analysis, which is also known as "Diophantine analysis". The study of Diophantine equations and Diophantine approximations is a significant area of research to this day. His main work was the "Arithmetica", a collection of 150 algebraic problems dealing with exact solutions to determinate and indeterminate equations. The "Arithmetica" had a significant influence on later mathematicians, such as Pierre de Fermat, who arrived at his famous Last Theorem after trying to generalize a problem he had read in the "Arithmetica" (that of dividing a square into two squares). Diophantus also made significant advances in notation, the "Arithmetica" being the first instance of algebraic symbolism and syncopation. 
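Of the procedures named in this period, the Sieve of Eratosthenes survives essentially unchanged; a minimal sketch in modern form (illustrative, not a historical reconstruction):

```python
# Sieve of Eratosthenes: list the primes up to n by repeatedly striking out
# the multiples of each prime, exactly as in the classical procedure.
def primes_up_to(n):
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```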
Among the last great Greek mathematicians is Pappus of Alexandria (4th century AD). He is known for his hexagon theorem and centroid theorem, as well as the Pappus configuration and Pappus graph. His "Collection" is a major source of knowledge on Greek mathematics as most of it has survived. Pappus is considered the last major innovator in Greek mathematics, with subsequent work consisting mostly of commentaries on earlier work. The first woman mathematician recorded by history was Hypatia of Alexandria (AD 350–415). She succeeded her father (Theon of Alexandria) as Librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria had her stripped publicly and executed. Her death is sometimes taken as the end of the era of the Alexandrian Greek mathematics, although work did continue in Athens for another century with figures such as Proclus, Simplicius and Eutocius. Although Proclus and Simplicius were more philosophers than mathematicians, their commentaries on earlier works are valuable sources on Greek mathematics. The closure of the neo-Platonic Academy of Athens by the emperor Justinian in 529 AD is traditionally held as marking the end of the era of Greek mathematics, although the Greek tradition continued unbroken in the Byzantine empire with mathematicians such as Anthemius of Tralles and Isidore of Miletus, the architects of the Hagia Sophia. Nevertheless, Byzantine mathematics consisted mostly of commentaries, with little in the way of innovation, and the centers of mathematical innovation were to be found elsewhere by this time. Roman. Although ethnic Greek mathematicians continued under the rule of the late Roman Republic and subsequent Roman Empire, there were no noteworthy native Latin mathematicians in comparison. Ancient Romans such as Cicero (106–43 BC), an influential Roman statesman who studied mathematics in Greece, believed that Roman surveyors and calculators were far more interested in applied mathematics than the theoretical mathematics and geometry that were prized by the Greeks. It is unclear if the Romans first derived their numerical system directly from the Greek precedent or from Etruscan numerals used by the Etruscan civilization centered in what is now Tuscany, central Italy. Using calculation, Romans were adept at both instigating and detecting financial fraud, as well as managing taxes for the treasury. Siculus Flaccus, one of the Roman "gromatici" (i.e. land surveyor), wrote the "Categories of Fields", which aided Roman surveyors in measuring the surface areas of allotted lands and territories. Aside from managing trade and taxes, the Romans also regularly applied mathematics to solve problems in engineering, including the erection of architecture such as bridges, road-building, and preparation for military campaigns. Arts and crafts such as Roman mosaics, inspired by previous Greek designs, created illusionist geometric patterns and rich, detailed scenes that required precise measurements for each tessera tile, the opus tessellatum pieces on average measuring eight millimeters square and the finer opus vermiculatum pieces having an average surface of four millimeters square. The creation of the Roman calendar also necessitated basic mathematics. The first calendar allegedly dates back to 8th century BC during the Roman Kingdom and included 356 days plus a leap year every other year. 
In contrast, the lunar calendar of the Republican era contained 355 days, roughly ten-and-one-fourth days shorter than the solar year, a discrepancy that was solved by adding an extra month into the calendar after the 23rd of February. This calendar was supplanted by the Julian calendar, a solar calendar organized by Julius Caesar (100–44 BC) and devised by Sosigenes of Alexandria to include a leap day every four years in a 365-day cycle. This calendar, which contained an error of 11 minutes and 14 seconds, was later corrected by the Gregorian calendar organized by Pope Gregory XIII (r. 1572 – 1585), virtually the same solar calendar used in modern times as the international standard calendar. At roughly the same time, the Han Chinese and the Romans both invented the wheeled odometer device for measuring distances traveled, the Roman model first described by the Roman civil engineer and architect Vitruvius (c. 80 BC – c. 15 BC). The device was used at least until the reign of emperor Commodus (r. 177 – 192 AD), but its design seems to have been lost until experiments were made during the 15th century in Western Europe. Perhaps relying on similar gear-work and technology found in the Antikythera mechanism, the odometer of Vitruvius featured chariot wheels measuring 4 feet (1.2 m) in diameter turning four-hundred times in one Roman mile (roughly 4590 ft/1400 m). With each revolution, a pin-and-axle device engaged a 400-tooth cogwheel that turned a second gear responsible for dropping pebbles into a box, each pebble representing one mile traversed. Chinese. An analysis of early Chinese mathematics has demonstrated its unique development compared to other parts of the world, leading scholars to assume an entirely independent development. The oldest extant mathematical text from China is the "Zhoubi Suanjing" (周髀算經), variously dated to between 1200 BC and 100 BC, though a date of about 300 BC during the Warring States Period appears reasonable. However, the Tsinghua Bamboo Slips, containing the earliest known decimal multiplication table (although ancient Babylonians had ones with a base of 60), is dated around 305 BC and is perhaps the oldest surviving mathematical text of China. Of particular note is the use in Chinese mathematics of a decimal positional notation system, the so-called "rod numerals" in which distinct ciphers were used for numbers between 1 and 10, and additional ciphers for powers of ten. Thus, the number 123 would be written using the symbol for "1", followed by the symbol for "100", then the symbol for "2" followed by the symbol for "10", followed by the symbol for "3". This was the most advanced number system in the world at the time, apparently in use several centuries before the common era and well before the development of the Indian numeral system. Rod numerals allowed the representation of numbers as large as desired and allowed calculations to be carried out on the "suan pan", or Chinese abacus. The date of the invention of the "suan pan" is not certain, but the earliest written mention dates from AD 190, in Xu Yue's "Supplementary Notes on the Art of Figures". The oldest extant work on geometry in China comes from the philosophical Mohist canon c. 330 BC, compiled by the followers of Mozi (470–390 BC). The "Mo Jing" described various aspects of many fields associated with physical science, and provided a small number of geometrical theorems as well. It also defined the concepts of circumference, diameter, radius, and volume. 
In 212 BC, the Emperor Qin Shi Huang commanded all books in the Qin Empire other than officially sanctioned ones be burned. This decree was not universally obeyed, but as a consequence of this order little is known about ancient Chinese mathematics before this date. After the book burning of 212 BC, the Han dynasty (202 BC–220 AD) produced works of mathematics which presumably expanded on works that are now lost. The most important of these is "The Nine Chapters on the Mathematical Art", the full title of which appeared by AD 179, but existed in part under other titles beforehand. It consists of 246 word problems involving agriculture, business, employment of geometry to figure height spans and dimension ratios for Chinese pagoda towers, engineering, surveying, and includes material on right triangles. It included a mathematical proof of the Pythagorean theorem, and a procedure for Gaussian elimination. The treatise also provides values of π, which Chinese mathematicians originally approximated as 3 until Liu Xin (d. 23 AD) provided a figure of 3.1457 and subsequently Zhang Heng (78–139) approximated pi as 3.1724, as well as 3.162 by taking the square root of 10. Liu Hui commented on the "Nine Chapters" in the 3rd century AD and gave a value of π accurate to 5 decimal places (i.e. 3.14159). Though more of a matter of computational stamina than theoretical insight, in the 5th century AD Zu Chongzhi computed the value of π to seven decimal places (between 3.1415926 and 3.1415927), which remained the most accurate value of π for almost the next 1000 years. He also established a method which would later be called Cavalieri's principle to find the volume of a sphere. The high-water mark of Chinese mathematics occurred in the 13th century during the latter half of the Song dynasty (960–1279), with the development of Chinese algebra. The most important text from that period is the "Precious Mirror of the Four Elements" by Zhu Shijie (1249–1314), dealing with the solution of simultaneous higher-order algebraic equations using a method similar to Horner's method. The "Precious Mirror" also contains a diagram of Pascal's triangle with coefficients of binomial expansions through the eighth power, though both appear in Chinese works as early as 1100. The Chinese also made use of complex combinatorial diagrams known as the magic square and magic circles, described in ancient times and perfected by Yang Hui (AD 1238–1298). Even after European mathematics began to flourish during the Renaissance, European and Chinese mathematics were separate traditions, with significant Chinese mathematical output in decline from the 13th century onwards. Jesuit missionaries such as Matteo Ricci carried mathematical ideas back and forth between the two cultures from the 16th to 18th centuries, though at this point far more mathematical ideas were entering China than leaving. Japanese mathematics, Korean mathematics, and Vietnamese mathematics are traditionally viewed as stemming from Chinese mathematics and belonging to the Confucian-based East Asian cultural sphere. Korean and Japanese mathematics were heavily influenced by the algebraic works produced during China's Song dynasty, whereas Vietnamese mathematics was heavily indebted to popular works of China's Ming dynasty (1368–1644). 
For instance, although Vietnamese mathematical treatises were written in either Chinese or the native Vietnamese Chữ Nôm script, all of them followed the Chinese format of presenting a collection of problems with algorithms for solving them, followed by numerical answers. Mathematics in Vietnam and Korea was mostly associated with the professional court bureaucracy of mathematicians and astronomers, whereas in Japan it was more prevalent in the realm of private schools. Japan. The mathematics that developed in Japan during the Edo period (1603–1868) is independent of Western mathematics; to this period belongs the mathematician Seki Takakazu, of great influence, for example, in the development of wasan (traditional Japanese mathematics), whose discoveries (in areas such as integral calculus) are almost simultaneous with those of contemporary European mathematicians such as Gottfried Leibniz. Japanese mathematics of this period was inspired by Chinese mathematics and was oriented towards essentially geometric problems. On wooden tablets called sangaku, geometric puzzles were proposed and solved; Soddy's hexlet theorem, for example, originated there. Indian. The earliest civilization on the Indian subcontinent is the Indus Valley civilization (mature second phase: 2600 to 1900 BC) that flourished in the Indus river basin. Their cities were laid out with geometric regularity, but no known mathematical documents survive from this civilization. The oldest extant mathematical records from India are the Sulba Sutras (dated variously between the 8th century BC and the 2nd century AD), appendices to religious texts which give simple rules for constructing altars of various shapes, such as squares, rectangles, parallelograms, and others. As with Egypt, the preoccupation with temple functions points to an origin of mathematics in religious ritual. The Sulba Sutras give methods for constructing a circle with approximately the same area as a given square, which imply several different approximations of the value of π. In addition, they compute the square root of 2 to several decimal places, list Pythagorean triples, and give a statement of the Pythagorean theorem. All of these results are present in Babylonian mathematics, indicating Mesopotamian influence. It is not known to what extent the Sulba Sutras influenced later Indian mathematicians. As in China, there is a lack of continuity in Indian mathematics; significant advances are separated by long periods of inactivity. Pāṇini (c. 5th century BC) formulated the rules for Sanskrit grammar. His notation was similar to modern mathematical notation, and used metarules, transformations, and recursion. Pingala (roughly 3rd–1st centuries BC) in his treatise on prosody uses a device corresponding to a binary numeral system. His discussion of the combinatorics of meters corresponds to an elementary version of the binomial theorem. Pingala's work also contains the basic ideas of Fibonacci numbers (called "mātrāmeru"). The next significant mathematical documents from India after the "Sulba Sutras" are the "Siddhantas", astronomical treatises from the 4th and 5th centuries AD (Gupta period) showing strong Hellenistic influence. They are significant in that they contain the first instance of trigonometric relations based on the half-chord, as is the case in modern trigonometry, rather than the full chord, as was the case in Ptolemaic trigonometry. 
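The distinction is compact in modern notation (a modern restatement; the Siddhantas tabulate the half-chord, or jyā, for a conventional radius R):

```latex
% Full chord versus half-chord for a circle of radius R, in modern notation.
% Ptolemy tabulated the chord crd(theta) subtending a given angle; the
% Siddhantas tabulate the half-chord (jya), which corresponds to the modern sine.
\mathrm{crd}(\theta) = 2R \sin\left(\frac{\theta}{2}\right),
\qquad
\mathrm{jya}(\theta) = R \sin\theta = \tfrac{1}{2}\,\mathrm{crd}(2\theta)
```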
Through a series of translation errors, the words "sine" and "cosine" derive from the Sanskrit "jiya" and "kojiya". Around 500 AD, Aryabhata wrote the "Aryabhatiya", a slim volume, written in verse, intended to supplement the rules of calculation used in astronomy and mathematical mensuration, though with no feeling for logic or deductive methodology. It is in the "Aryabhatiya" that the decimal place-value system first appears. Several centuries later, the Muslim mathematician Abu Rayhan Biruni described the "Aryabhatiya" as a "mix of common pebbles and costly crystals". In the 7th century, Brahmagupta identified the Brahmagupta theorem, Brahmagupta's identity and Brahmagupta's formula, and for the first time, in "Brahma-sphuta-siddhanta", he lucidly explained the use of zero as both a placeholder and decimal digit, and explained the Hindu–Arabic numeral system. It was from a translation of this Indian text on mathematics (c. 770) that Islamic mathematicians were introduced to this numeral system, which they adapted as Arabic numerals. Islamic scholars carried knowledge of this number system to Europe by the 12th century, and it has now displaced all older number systems throughout the world. Various symbol sets are used to represent numbers in the Hindu–Arabic numeral system, all of which evolved from the Brahmi numerals. Each of the roughly dozen major scripts of India has its own numeral glyphs. In the 10th century, Halayudha's commentary on Pingala's work contains a study of the Fibonacci sequence and Pascal's triangle, and describes the formation of a matrix. In the 12th century, Bhāskara II, who lived in southern India, wrote extensively on all then known branches of mathematics. His work contains mathematical objects equivalent or approximately equivalent to infinitesimals, the mean value theorem, and the derivative of the sine function, although he did not develop the notion of a derivative. In the 14th century, Narayana Pandita completed his "Ganita Kaumudi". Also in the 14th century, Madhava of Sangamagrama, the founder of the Kerala School of Mathematics, found the Madhava–Leibniz series and obtained from it a transformed series, whose first 21 terms he used to compute the value of π as 3.14159265359. Madhava also found the Madhava–Gregory series to determine the arctangent, the Madhava–Newton power series to determine sine and cosine, and the Taylor approximation for sine and cosine functions. In the 16th century, Jyesthadeva consolidated many of the Kerala School's developments and theorems in the "Yukti-bhāṣā". It has been argued that certain ideas of calculus, such as infinite series and Taylor series of some trigonometric functions, were transmitted to Europe in the 16th century via Jesuit missionaries and traders who were active around the ancient port of Muziris at the time and, as a result, directly influenced later European developments in analysis and calculus. However, other scholars argue that the Kerala School did not formulate a systematic theory of differentiation and integration, and that there is not any direct evidence of their results being transmitted outside Kerala. Islamic empires. The Islamic Empire established across the Middle East, Central Asia, North Africa, Iberia, and in parts of India in the 8th century made significant contributions towards mathematics. 
Although most Islamic texts on mathematics were written in Arabic, they were not all written by Arabs, since much like the status of Greek in the Hellenistic world, Arabic was used as the written language of non-Arab scholars throughout the Islamic world at the time. In the 9th century, the Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī wrote an important book on the Hindu–Arabic numerals and one on methods for solving equations. His book "On the Calculation with Hindu Numerals", written about 825, along with the work of Al-Kindi, was instrumental in spreading Indian mathematics and Indian numerals to the West. The word "algorithm" is derived from the Latinization of his name, Algoritmi, and the word "algebra" from the title of one of his works, "Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa’l-muqābala" ("The Compendious Book on Calculation by Completion and Balancing"). He gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots, and he was the first to teach algebra in an elementary form and for its own sake. He also discussed the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which al-Khwārizmī originally described as "al-jabr". His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems." In Egypt, Abu Kamil extended algebra to the set of irrational numbers, accepting square roots and fourth roots as solutions and coefficients to quadratic equations. He also developed techniques used to solve three non-linear simultaneous equations with three unknown variables. One unique feature of his works was trying to find all the possible solutions to some of his problems, including one where he found 2676 solutions. His works formed an important foundation for the development of algebra and influenced later mathematicians, such as al-Karaji and Fibonacci. Further developments in algebra were made by Al-Karaji in his treatise "al-Fakhri", where he extends the methodology to incorporate integer powers and integer roots of unknown quantities. Something close to a proof by mathematical induction appears in a book written by Al-Karaji around 1000 AD; he used it to prove the binomial theorem, Pascal's triangle, and the sum of integral cubes. The historian of mathematics, F. Woepcke, praised Al-Karaji for being "the first who introduced the theory of algebraic calculus." Also in the 10th century, Abul Wafa translated the works of Diophantus into Arabic. Ibn al-Haytham was the first mathematician to derive the formula for the sum of the fourth powers, using a method that is readily generalizable for determining the general formula for the sum of any integral powers. He performed an integration in order to find the volume of a paraboloid, and was able to generalize his result for the integrals of polynomials up to the fourth degree. 
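In modern symbols, the fourth-power sum that Ibn al-Haytham derived reads as follows (a modern restatement; his own derivation was geometric and rhetorical rather than symbolic):

```latex
% The sum of fourth powers in modern notation; Ibn al-Haytham's derivation
% was geometric and rhetorical rather than symbolic.
\sum_{k=1}^{n} k^{4} = \frac{n(n+1)(2n+1)\left(3n^{2}+3n-1\right)}{30}
% Check for n = 2: 1 + 16 = 17 = (2 \cdot 3 \cdot 5 \cdot 17)/30.
```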
He thus came close to finding a general formula for the integrals of polynomials, but he was not concerned with any polynomials higher than the fourth degree. In the late 11th century, Omar Khayyam wrote "Discussions of the Difficulties in Euclid", a book about what he perceived as flaws in Euclid's "Elements", especially the parallel postulate. He was also the first to find the general geometric solution to cubic equations. He was also very influential in calendar reform. In the 13th century, Nasir al-Din Tusi (Nasireddin) made advances in spherical trigonometry. He also wrote influential work on Euclid's parallel postulate. In the 15th century, Ghiyath al-Kashi computed the value of π to the 16th decimal place. Kashi also had an algorithm for calculating "n"th roots, which was a special case of the methods given many centuries later by Ruffini and Horner. Other achievements of Muslim mathematicians during this period include the addition of the decimal point notation to the Arabic numerals, the discovery of all the modern trigonometric functions besides the sine, al-Kindi's introduction of cryptanalysis and frequency analysis, the development of analytic geometry by Ibn al-Haytham, the beginning of algebraic geometry by Omar Khayyam and the development of an algebraic notation by al-Qalasādī. During the time of the Ottoman Empire and Safavid Empire from the 15th century, the development of Islamic mathematics became stagnant. Maya. In the Pre-Columbian Americas, the Maya civilization that flourished in Mexico and Central America during the 1st millennium AD developed a unique tradition of mathematics that, due to its geographic isolation, was entirely independent of existing European, Egyptian, and Asian mathematics. Maya numerals used a base of twenty, the vigesimal system, instead of a base of ten that forms the basis of the decimal system used by most modern cultures. The Maya used mathematics to create the Maya calendar as well as to predict astronomical phenomena in their native Maya astronomy. While the concept of zero had to be inferred in the mathematics of many contemporary cultures, the Maya developed a standard symbol for it. Medieval European. Medieval European interest in mathematics was driven by concerns quite different from those of modern mathematicians. One driving element was the belief that mathematics provided the key to understanding the created order of nature, frequently justified by Plato's "Timaeus" and the biblical passage (in the "Book of Wisdom") that God had "ordered all things in measure, and number, and weight". Boethius provided a place for mathematics in the curriculum in the 6th century when he coined the term "quadrivium" to describe the study of arithmetic, geometry, astronomy, and music. He wrote "De institutione arithmetica", a free translation from the Greek of Nicomachus's "Introduction to Arithmetic"; "De institutione musica", also derived from Greek sources; and a series of excerpts from Euclid's "Elements". His works were theoretical, rather than practical, and were the basis of mathematical study until the recovery of Greek and Arabic mathematical works. In the 12th century, European scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's "The Compendious Book on Calculation by Completion and Balancing", translated into Latin by Robert of Chester, and the complete text of Euclid's "Elements", translated in various versions by Adelard of Bath, Herman of Carinthia, and Gerard of Cremona. 
These and other new sources sparked a renewal of mathematics. Leonardo of Pisa, now known as Fibonacci, serendipitously learned about the Hindu–Arabic numerals on a trip to what is now Béjaïa, Algeria with his merchant father. (Europe was still using Roman numerals.) There, he observed a system of arithmetic (specifically algorism) which due to the positional notation of Hindu–Arabic numerals was much more efficient and greatly facilitated commerce. Leonardo wrote "Liber Abaci" in 1202 (updated in 1254) introducing the technique to Europe and beginning a long period of popularizing it. The book also brought to Europe what is now known as the Fibonacci sequence (known to Indian mathematicians for hundreds of years before that) which Fibonacci used as an unremarkable example. The 14th century saw the development of new mathematical concepts to investigate a wide range of problems. One important contribution was the development of the mathematics of local motion. Thomas Bradwardine proposed that speed (V) increases in arithmetic proportion as the ratio of force (F) to resistance (R) increases in geometric proportion. Bradwardine expressed this by a series of specific examples, but although the logarithm had not yet been conceived, we can express his conclusion anachronistically by writing: V = log (F/R). Bradwardine's analysis is an example of transferring a mathematical technique used by al-Kindi and Arnald of Villanova to quantify the nature of compound medicines to a different physical problem. One of the 14th-century Oxford Calculators, William Heytesbury, lacking differential calculus and the concept of limits, proposed to measure instantaneous speed "by the path that would be described by [a body] if... it were moved uniformly at the same degree of speed with which it is moved in that given instant". Heytesbury and others mathematically determined the distance covered by a body undergoing uniformly accelerated motion (today solved by integration), stating that "a moving body uniformly acquiring or losing that increment [of speed] will traverse in some given time a [distance] completely equal to that which it would traverse if it were moving continuously through the same time with the mean degree [of speed]". Nicole Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of this relationship, asserting that the area under the line depicting the constant acceleration represented the total distance traveled. In a later mathematical commentary on Euclid's "Elements", Oresme made a more detailed general analysis in which he demonstrated that a body will acquire in each successive increment of time an increment of any quality that increases as the odd numbers. Since Euclid had demonstrated that the sums of the odd numbers are the square numbers, the total quality acquired by the body increases as the square of the time. Renaissance. During the Renaissance, the development of mathematics and of accounting were intertwined. While there is no direct relationship between algebra and accounting, the teaching of the subjects and the books published were often intended for the children of merchants who were sent to reckoning schools (in Flanders and Germany) or abacus schools (known as "abbaco" in Italy), where they learned the skills useful for trade and commerce. 
There is probably no need for algebra in performing bookkeeping operations, but for complex bartering operations or the calculation of compound interest, a basic knowledge of arithmetic was mandatory and knowledge of algebra was very useful. Piero della Francesca (c. 1415–1492) wrote books on solid geometry and linear perspective, including "De Prospectiva Pingendi (On Perspective for Painting)", "Trattato d’Abaco (Abacus Treatise)", and "De quinque corporibus regularibus (On the Five Regular Solids)". Luca Pacioli's "Summa de Arithmetica, Geometria, Proportioni et Proportionalità" (Italian: "Review of Arithmetic, Geometry, Ratio and Proportion") was first printed and published in Venice in 1494. It included a 27-page treatise on bookkeeping, "Particularis de Computis et Scripturis" (Italian: "Details of Calculation and Recording"). It was written primarily for, and sold mainly to, merchants who used the book as a reference text, as a source of pleasure from the mathematical puzzles it contained, and to aid the education of their sons. In "Summa Arithmetica", Pacioli introduced symbols for plus and minus for the first time in a printed book, symbols that became standard notation in Italian Renaissance mathematics. "Summa Arithmetica" was also the first known book printed in Italy to contain algebra. Pacioli obtained many of his ideas from Piero Della Francesca whom he plagiarized. In Italy, during the first half of the 16th century, Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book "Ars Magna", together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his "L'Algebra" in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. Simon Stevin's "De Thiende" ('the art of tenths'), first published in Dutch in 1585, contained the first systematic treatment of decimal notation in Europe, which influenced all later work on the real number system. Driven by the demands of navigation and the growing need for accurate maps of large areas, trigonometry grew to be a major branch of mathematics. Bartholomaeus Pitiscus was the first to use the word, publishing his "Trigonometria" in 1595. Regiomontanus's table of sines and cosines was published in 1533. During the Renaissance the desire of artists to represent the natural world realistically, together with the rediscovered philosophy of the Greeks, led artists to study mathematics. They were also the engineers and architects of that time, and so had need of mathematics in any case. The art of painting in perspective, and the developments in geometry that were involved, were studied intensely. Mathematics during the Scientific Revolution. 17th century. The 17th century saw an unprecedented increase of mathematical and scientific ideas across Europe. Galileo observed the moons of Jupiter in orbit about that planet, using a telescope based on Hans Lipperhey's. Tycho Brahe had gathered a large quantity of mathematical data describing the positions of the planets in the sky. Through his position as Brahe's assistant, Johannes Kepler was first exposed to and seriously interacted with the topic of planetary motion. Kepler's calculations were made simpler by the contemporaneous invention of logarithms by John Napier and Jost Bürgi. Kepler succeeded in formulating mathematical laws of planetary motion. 
The analytic geometry developed by René Descartes (1596–1650) allowed those orbits to be plotted on a graph, in Cartesian coordinates. Building on earlier work by many predecessors, Isaac Newton discovered the laws of physics that explain Kepler's Laws, and brought together the concepts now known as calculus. Independently, Gottfried Wilhelm Leibniz developed calculus and much of the calculus notation still in use today. He also refined the binary number system, which is the foundation of nearly all digital (electronic, solid-state, discrete logic) computers, including the Von Neumann architecture, which is the standard design paradigm, or "computer architecture", followed from the second half of the 20th century, and into the 21st. Leibniz has been called the "founder of computer science". Science and mathematics had become an international endeavor, which would soon spread over the entire world. In addition to the application of mathematics to the studies of the heavens, applied mathematics began to expand into new areas, with the correspondence of Pierre de Fermat and Blaise Pascal. Pascal and Fermat set the groundwork for the investigations of probability theory and the corresponding rules of combinatorics in their discussions over a game of gambling. Pascal, with his wager, attempted to use the newly developing probability theory to argue for a life devoted to religion, on the grounds that even if the probability of success was small, the rewards were infinite. In some sense, this foreshadowed the development of utility theory in the 18th and 19th centuries. 18th century. The most influential mathematician of the 18th century was arguably Leonhard Euler (1707–1783). His contributions range from founding the study of graph theory with the Seven Bridges of Königsberg problem to standardizing many modern mathematical terms and notations. For example, he named the square root of minus 1 with the symbol "i", and he popularized the use of the Greek letter formula_0 to stand for the ratio of a circle's circumference to its diameter. He made numerous contributions to the study of topology, graph theory, calculus, combinatorics, and complex analysis, as evidenced by the multitude of theorems and notations named for him. Other important European mathematicians of the 18th century included Joseph Louis Lagrange, who did pioneering work in number theory, algebra, differential calculus, and the calculus of variations, and Pierre-Simon Laplace, who, in the age of Napoleon, did important work on the foundations of celestial mechanics and on statistics. Modern. 19th century. Throughout the 19th century mathematics became increasingly abstract. Carl Friedrich Gauss (1777–1855) epitomizes this trend. He did revolutionary work on functions of complex variables, in geometry, and on the convergence of series, leaving aside his many contributions to science. He also gave the first satisfactory proofs of the fundamental theorem of algebra and of the quadratic reciprocity law. This century saw the development of the two forms of non-Euclidean geometry, where the parallel postulate of Euclidean geometry no longer holds. The Russian mathematician Nikolai Ivanovich Lobachevsky and his rival, the Hungarian mathematician János Bolyai, independently defined and studied hyperbolic geometry, where uniqueness of parallels no longer holds. In this geometry the angles in a triangle add up to less than 180°. 
Elliptic geometry was developed later in the 19th century by the German mathematician Bernhard Riemann; here no parallel can be found and the angles in a triangle add up to more than 180°. Riemann also developed Riemannian geometry, which unifies and vastly generalizes the three types of geometry, and he defined the concept of a manifold, which generalizes the ideas of curves and surfaces, and set the mathematical foundations for the theory of general relativity. The 19th century saw the beginning of a great deal of abstract algebra. Hermann Grassmann in Germany gave a first version of vector spaces, while William Rowan Hamilton in Ireland developed noncommutative algebra. The British mathematician George Boole devised an algebra that soon evolved into what is now called Boolean algebra, in which the only numbers were 0 and 1. Boolean algebra is the starting point of mathematical logic and has important applications in electrical engineering and computer science. Augustin-Louis Cauchy, Bernhard Riemann, and Karl Weierstrass reformulated the calculus in a more rigorous fashion. Also, for the first time, the limits of mathematics were explored. Niels Henrik Abel, a Norwegian, and Évariste Galois, a Frenchman, proved that there is no general algebraic method for solving polynomial equations of degree greater than four (Abel–Ruffini theorem). Other 19th-century mathematicians used this in their proofs that straight edge and compass alone are not sufficient to trisect an arbitrary angle, to construct the side of a cube twice the volume of a given cube, or to construct a square equal in area to a given circle. Mathematicians had vainly attempted to solve all of these problems since the time of the ancient Greeks. On the other hand, the limitation of three dimensions in geometry was surpassed in the 19th century through considerations of parameter space and hypercomplex numbers. Abel and Galois's investigations into the solutions of various polynomial equations laid the groundwork for further developments of group theory, and the associated fields of abstract algebra. In the 20th century, physicists and other scientists saw group theory as the ideal way to study symmetry. In the later 19th century, Georg Cantor established the first foundations of set theory, which enabled the rigorous treatment of the notion of infinity and has become the common language of nearly all mathematics. Cantor's set theory, and the rise of mathematical logic in the hands of Peano, L.E.J. Brouwer, David Hilbert, Bertrand Russell, and A.N. Whitehead, initiated a long-running debate on the foundations of mathematics. The 19th century saw the founding of a number of national mathematical societies: the London Mathematical Society in 1865, the Société Mathématique de France in 1872, the Circolo Matematico di Palermo in 1884, the Edinburgh Mathematical Society in 1883, and the American Mathematical Society in 1888. The first international, special-interest society, the Quaternion Society, was formed in 1899, in the context of a vector controversy. In 1897, Kurt Hensel introduced p-adic numbers. 20th century. The 20th century saw mathematics become a major profession. By the end of the century, thousands of new Ph.D.s in mathematics were being awarded every year, and jobs were available in both teaching and industry. An effort to catalogue the areas and applications of mathematics was undertaken in Klein's encyclopedia. 
In a 1900 speech to the International Congress of Mathematicians, David Hilbert set out a list of 23 unsolved problems in mathematics. These problems, spanning many areas of mathematics, formed a central focus for much of 20th-century mathematics. Today, 10 have been solved, 7 are partially solved, and 2 are still open. The remaining 4 are too loosely formulated to be stated as solved or not. Notable historical conjectures were finally proven. In 1976, Wolfgang Haken and Kenneth Appel proved the four color theorem, controversial at the time for the use of a computer to do so. Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995. Paul Cohen and Kurt Gödel proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory. In 1998, Thomas Callister Hales proved the Kepler conjecture, also using a computer. Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 2004 required 500-odd journal articles by about 100 authors, filling tens of thousands of pages. A group of French mathematicians, including Jean Dieudonné and André Weil, publishing under the pseudonym "Nicolas Bourbaki", attempted to exposit all of known mathematics as a coherent rigorous whole. The resulting several dozen volumes have had a controversial influence on mathematical education. Differential geometry came into its own when Albert Einstein used it in general relativity. Entirely new areas of mathematics such as mathematical logic, topology, and John von Neumann's game theory changed the kinds of questions that could be answered by mathematical methods. All kinds of structures were abstracted using axioms and given names like metric spaces, topological spaces, etc. As mathematicians do, the concept of an abstract structure was itself abstracted and led to category theory. Grothendieck and Serre recast algebraic geometry using sheaf theory. Large advances were made in the qualitative study of dynamical systems that Poincaré had begun in the 1890s. Measure theory was developed in the late 19th and early 20th centuries. Applications of measures include the Lebesgue integral, Kolmogorov's axiomatisation of probability theory, and ergodic theory. Knot theory greatly expanded. Quantum mechanics led to the development of functional analysis. Other new areas include Laurent Schwartz's distribution theory, fixed point theory, singularity theory and René Thom's catastrophe theory, model theory, and Mandelbrot's fractals. Lie theory with its Lie groups and Lie algebras became one of the major areas of study. Non-standard analysis, introduced by Abraham Robinson, rehabilitated the infinitesimal approach to calculus, which had fallen into disrepute in favour of the theory of limits, by extending the field of real numbers to the Hyperreal numbers which include infinitesimal and infinite quantities. An even larger number system, the surreal numbers, was discovered by John Horton Conway in connection with combinatorial games. 
The development and continual improvement of computers, at first mechanical analog machines and then digital electronic machines, allowed industry to deal with larger and larger amounts of data to facilitate mass production and distribution and communication, and new areas of mathematics were developed to deal with this: Alan Turing's computability theory; complexity theory; Derrick Henry Lehmer's use of ENIAC to further number theory and the Lucas–Lehmer primality test; Rózsa Péter's recursive function theory; Claude Shannon's information theory; signal processing; data analysis; optimization and other areas of operations research. In the preceding centuries much mathematical focus was on calculus and continuous functions, but the rise of computing and communication networks led to an increasing importance of discrete concepts and the expansion of combinatorics including graph theory. The speed and data processing abilities of computers also enabled the handling of mathematical problems that were too time-consuming to deal with by pencil and paper calculations, leading to areas such as numerical analysis and symbolic computation. Some of the most important methods and algorithms of the 20th century are: the simplex algorithm, the fast Fourier transform, error-correcting codes, the Kalman filter from control theory and the RSA algorithm of public-key cryptography. At the same time, deep insights were made about the limitations to mathematics. In 1929 and 1930, it was proved that the truth or falsity of all statements formulated about the natural numbers plus either addition or multiplication (but not both) was decidable, i.e. could be determined by some algorithm. In 1931, Kurt Gödel found that this was not the case for the natural numbers plus both addition and multiplication; this system, known as Peano arithmetic, was in fact incomplete. (Peano arithmetic is adequate for a good deal of number theory, including the notion of prime number.) A consequence of Gödel's two incompleteness theorems is that in any mathematical system that includes Peano arithmetic (including all of analysis and geometry), truth necessarily outruns proof, i.e. there are true statements that cannot be proved within the system. Hence mathematics cannot be reduced to mathematical logic, and David Hilbert's dream of making all of mathematics complete and consistent needed to be reformulated. One of the more colorful figures in 20th-century mathematics was Srinivasa Aiyangar Ramanujan (1887–1920), an Indian autodidact who conjectured or proved numerous results, including properties of highly composite numbers, the partition function and its asymptotics, and mock theta functions. He also made major investigations in the areas of gamma functions, modular forms, divergent series, hypergeometric series and prime number theory. Paul Erdős published more papers than any other mathematician in history, working with hundreds of collaborators. Mathematicians have a game equivalent to the Kevin Bacon Game, which leads to the Erdős number of a mathematician. This describes the "collaborative distance" between a person and Erdős, as measured by joint authorship of mathematical papers. Emmy Noether has been described by many as the most important woman in the history of mathematics. She studied the theories of rings, fields, and algebras. 
As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century, there were hundreds of specialized areas in mathematics, and the Mathematics Subject Classification was dozens of pages long. More and more mathematical journals were published and, by the end of the century, the development of the World Wide Web led to online publishing. 21st century. In 2000, the Clay Mathematics Institute announced the seven Millennium Prize Problems. In 2003 the Poincaré conjecture was solved by Grigori Perelman (who declined to accept an award, as he was critical of the mathematics establishment). Most mathematical journals now have online versions as well as print versions, and many online-only journals have been launched. There is an increasing drive toward open access publishing, first made popular by arXiv. Future. There are many observable trends in mathematics, the most notable being that the subject is growing ever larger as computers are ever more important and powerful; the volume of data being produced by science and industry, facilitated by computers, continues expanding exponentially. As a result, there is a corresponding growth in the demand for mathematics to help process and understand this big data. Math science careers are also expected to continue to grow, with the US Bureau of Labor Statistics estimating (in 2018) that "employment of mathematical science occupations is projected to grow 27.9 percent from 2016 to 2026." See also. <templatestyles src="Div col/styles.css"/> Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=14220
14220429
Heuristic (computer science)
Type of algorithm, produces approximately correct solutions In mathematical optimization and computer science, a heuristic (from Greek εὑρίσκω "I find, discover") is a technique designed to solve a problem more quickly when classic methods are too slow for finding an exact or approximate solution, or when classic methods fail to find any exact solution in a search space. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution. Definition and motivation. The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. This solution may not be the best of all the solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time. Heuristics may produce results by themselves, or they may be used in conjunction with optimization algorithms to improve their efficiency (e.g., they may be used to generate good seed values). Results about NP-hardness in theoretical computer science make heuristics the only viable option for a variety of complex optimization problems that need to be routinely solved in real-world applications. Heuristics underlie the whole field of Artificial Intelligence and the computer simulation of thinking, as they may be used in situations where there are no known algorithms. Trade-off. The trade-off criteria for deciding whether to use a heuristic for solving a given problem include the following: optimality (when several solutions exist, whether the heuristic finds the best one), completeness (whether it finds all the solutions), accuracy and precision (how good the solutions it finds are), and execution time (how much faster it is than a classic method). In some cases, it may be difficult to decide whether the solution found by the heuristic is good enough, because the theory underlying heuristics is not very elaborate. Examples. Simpler problem. One way of achieving the computational performance gain expected of a heuristic consists of solving a simpler problem whose solution is also a solution to the initial problem. Travelling salesman problem. An example of approximation is described by Jon Bentley for solving the travelling salesman problem (TSP), namely selecting the order in which to draw points with a pen plotter. TSP is known to be NP-hard, so an optimal solution for even a moderate-sized problem is prohibitively difficult to find. Instead, the greedy algorithm can be used to give a good but not optimal solution (it is an approximation to the optimal answer) in a reasonably short amount of time; a short code sketch of this greedy approach appears after this section. The greedy algorithm heuristic says to pick whatever is currently the best next step regardless of whether that prevents (or even makes impossible) good steps later. It is a heuristic in the sense that practice indicates it is a good enough solution, while theory indicates that there are better solutions (and even indicates how much better, in some cases). Search. Another example of a heuristic making an algorithm faster occurs in certain search problems. Initially, the heuristic tries every possibility at each step, like the full-space search algorithm. But it can stop the search at any time if the current possibility is already worse than the best solution already found. In such search problems, a heuristic can be used to try good choices first so that bad paths can be eliminated early (see alpha–beta pruning). 
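To make the greedy TSP heuristic above concrete, here is a minimal Python sketch. The point coordinates are invented for illustration, and the nearest-neighbour rule is one standard instance of the greedy approach described above, not necessarily the exact procedure Bentley used.

import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: always visit the closest unvisited point next.

    Runs quickly, but an early locally-best choice may force long edges
    later, so the tour is good rather than optimal.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]  # arbitrarily start at the first point
    while unvisited:
        last = points[tour[-1]]
        # pick whatever is currently the best next step
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Example: a plausible drawing order for a pen plotter.
pts = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2)]
print(nearest_neighbor_tour(pts))  # [0, 4, 1, 2, 3]
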
In the case of best-first search algorithms, such as A* search, the heuristic improves the algorithm's convergence while maintaining its correctness as long as the heuristic is admissible. Newell and Simon: heuristic search hypothesis. In their Turing Award acceptance speech, Allen Newell and Herbert A. Simon discuss the heuristic search hypothesis: a physical symbol system will repeatedly generate and modify known symbol structures until the created structure matches the solution structure. Each following step depends upon the step before it, thus the heuristic search learns what avenues to pursue and which ones to disregard by measuring how close the current step is to the solution. Therefore, some possibilities will never be generated as they are measured to be less likely to complete the solution. A heuristic method can accomplish its task by using search trees. However, instead of generating all possible solution branches, a heuristic selects branches more likely to produce outcomes than other branches. It is selective at each decision point, picking branches that are more likely to produce solutions. Antivirus software. Antivirus software often uses heuristic rules for detecting viruses and other forms of malware. Heuristic scanning looks for code and/or behavioral patterns common to a class or family of viruses, with different sets of rules for different viruses. If a file or executing process is found to contain matching code patterns and/or to be performing that set of activities, then the scanner infers that the file is infected. The most advanced part of behavior-based heuristic scanning is that it can work against highly randomized self-modifying/mutating (polymorphic) viruses that cannot be easily detected by simpler string scanning methods. Heuristic scanning has the potential to detect future viruses without requiring the virus to be first detected somewhere else, submitted to the virus scanner developer, analyzed, and a detection update for the scanner provided to the scanner's users. Pitfalls. Some heuristics have a strong underlying theory; they are either derived in a top-down manner from the theory or are arrived at based on either experimental or real world data. Others are just rules of thumb based on real-world observation or experience without even a glimpse of theory. The latter are exposed to a larger number of pitfalls. When a heuristic is reused in various contexts because it has been seen to "work" in one context, without having been mathematically proven to meet a given set of requirements, it is possible that the current data set does not necessarily represent future data sets (see: overfitting) and that purported "solutions" turn out to be akin to noise. Statistical analysis can be conducted when employing heuristics to estimate the probability of incorrect outcomes. To use a heuristic for solving a search problem or a knapsack problem, it is necessary to check that the heuristic is admissible. Given a heuristic function formula_0 meant to approximate the true optimal distance formula_1 to the goal node formula_2 in a directed graph formula_3 containing formula_4 total nodes or vertices labeled formula_5, "admissible" means roughly that the heuristic underestimates the cost to the goal or formally that formula_6 for "all" formula_7 where formula_8. If a heuristic is not admissible, it may never find the goal, either by ending up in a dead end of graph formula_3 or by skipping back and forth between two nodes formula_9 and formula_10 where formula_11. 
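As a minimal illustration of admissibility, consider pathfinding on a 4-connected grid with unit step costs: the Manhattan distance ignores obstacles, so it can never overestimate the true remaining cost. The Python sketch below is an invented example, not taken from the sources above.

def manhattan(v, g):
    """Admissible heuristic for a 4-connected grid: h(v, g) <= d*(v, g)."""
    return abs(v[0] - g[0]) + abs(v[1] - g[1])

# Spot-check admissibility against known optimal costs d*:
# on an open grid d* equals the Manhattan distance itself,
# and any obstacle can only make the real path longer.
assert manhattan((0, 0), (3, 4)) <= 7   # open grid: d* = 7
assert manhattan((0, 0), (3, 4)) <= 11  # grid with a detour: d* = 11
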
Etymology. The word "heuristic" came into usage in the early 19th century. It is formed irregularly from the Greek word "heuriskein", meaning "to find". References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "h(v_i, v_g)" }, { "math_id": 1, "text": "d^\\star(v_i,v_g)" }, { "math_id": 2, "text": "v_g" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "v_0,v_1,\\cdots,v_n" }, { "math_id": 6, "text": "h(v_i, v_g) \\leq d^\\star(v_i,v_g)" }, { "math_id": 7, "text": "(v_i, v_g)" }, { "math_id": 8, "text": "{i,g} \\in [0, 1, ... , n]" }, { "math_id": 9, "text": "v_i" }, { "math_id": 10, "text": "v_j" }, { "math_id": 11, "text": "{i, j}\\neq g" } ]
https://en.wikipedia.org/wiki?curid=14220429
14220559
Adenosylmethionine hydrolase
In enzymology, an adenosylmethionine hydrolase (EC 3.3.1.2) is an enzyme that catalyzes the chemical reaction S-adenosyl-L-methionine + H2O formula_0 L-homoserine + methylthioadenosine Thus, the two substrates of this enzyme are S-adenosyl-L-methionine and H2O, whereas its two products are L-homoserine and methylthioadenosine. This enzyme belongs to the family of hydrolases, specifically those acting on ether bonds involving sulfur (thioether and trialkylsulfonium hydrolases). The systematic name of this enzyme class is S-adenosyl-L-methionine hydrolase. Other names in common use include S-adenosylmethionine cleaving enzyme, methylmethionine-sulfonium-salt hydrolase, and adenosylmethionine lyase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220559
14220583
Alkenylglycerophosphocholine hydrolase
In enzymology, an alkenylglycerophosphocholine hydrolase (EC 3.3.2.2) is an enzyme that catalyzes the chemical reaction 1-(1-alkenyl)-sn-glycero-3-phosphocholine + H2O formula_0 an aldehyde + sn-glycero-3-phosphocholine Thus, the two substrates of this enzyme are 1-(1-alkenyl)-sn-glycero-3-phosphocholine and H2O, whereas its two products are aldehyde and sn-glycero-3-phosphocholine. This enzyme belongs to the family of hydrolases, specifically those acting on ether bonds (ether hydrolases). The systematic name of this enzyme class is 1-(1-alkenyl)-sn-glycero-3-phosphocholine aldehydohydrolase. This enzyme is also called lysoplasmalogenase. This enzyme participates in ether lipid metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220583
14220612
Alkenylglycerophosphoethanolamine hydrolase
Enzyme In enzymology, an alkenylglycerophosphoethanolamine hydrolase (EC 3.3.2.5) is an enzyme that catalyzes the chemical reaction 1-(1-alkenyl)-sn-glycero-3-phosphoethanolamine + H2O formula_0 an aldehyde + sn-glycero-3-phosphoethanolamine Thus, the two substrates of this enzyme are 1-(1-alkenyl)-sn-glycero-3-phosphoethanolamine and H2O, whereas its two products are aldehyde and sn-glycero-3-phosphoethanolamine. This enzyme belongs to the family of hydrolases, specifically those acting on ether bonds (ether hydrolases). The systematic name of this enzyme class is 1-(1-alkenyl)-sn-glycero-3-phosphoethanolamine aldehydohydrolase. This enzyme participates in ether lipid metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220612
14220632
Hepoxilin-epoxide hydrolase
Enzyme In enzymology, a hepoxilin-epoxide hydrolase (EC 3.3.2.7) is an enzyme that catalyzes the conversion of the epoxyalcohol metabolites of arachidonic acid, hepoxilin A3 and hepoxilin B3, to their tri-hydroxyl products, trioxilin A3 and trioxilin B3, respectively. These reactions in general inactivate the two biologically active hepoxilins. Enzyme activity. Hepoxilin-epoxide hydrolase converts the epoxide residue in hepoxilins A3 and B3 to vicinal diols, as exemplified in the following enzyme reaction for the metabolism of hepoxilin A3 to trioxilin A3: 8-hydroxy-11"S",12"S"epoxy-(5"Z",8"Z",14"Z")-eicosatrienoic acid + H2O formula_0 8,11,12-trihydroxy-(5"Z",9"E",14"Z")-eicosatrienoic acid The substrates of this enzyme are 8-hydroxy-11"S",12"S"epoxy-(5"Z",8"Z",14"Z")-eicosatrienoic acid, i.e. hepoxilin A3, and H2O, whereas its product is 8,11,12-trihydroxy-(5"Z",9"E",14"Z")-eicosatrienoic acid, i.e. the triol, trioxilin A3. Epoxide hydrolases. Epoxide hydrolases represent a group of enzymes that convert various types of epoxides to vicinal diols. Several members of this group have this metabolic activity on fatty acid epoxides, including microsomal epoxide hydrolase (i.e. epoxide hydrolase 1 or EH1), soluble epoxide hydrolase (i.e. epoxide hydrolase 2 or EH2), epoxide hydrolase 3 (EH3), epoxide hydrolase 4 (EH4), and leukotriene A4 hydrolase (see epoxide hydrolase). The systematic name of this enzyme class is (5Z,9E,14Z)-(8xi,11R,12S)-11,12-epoxy-8-hydroxyicosa-5,9,14-trienoate hydrolase. Other names in common use include hepoxilin epoxide hydrolase, hepoxylin hydrolase, and hepoxilin A3 hydrolase. Since the hepoxilins are metabolites of arachidonic acid, hepoxilin-epoxide hydrolase participates in arachidonic acid metabolism. Identity of hepoxilin-epoxide hydrolase. Studies have shown that soluble epoxide hydrolase (i.e. epoxide hydrolase 2 or EH2) readily metabolizes a) hepoxilin A3 (8-hydroxy-11"S",12"S"epoxy-(5"Z",8"Z",14"Z")-eicosatrienoic acid) to trioxilin A3 (8,11,12-trihydroxy-(5"Z",9"E",14"Z")-eicosatrienoic acid) and b) hepoxilin B3 (10-hydroxy-11"S",12"S"epoxy-(5"Z",9"E",14"Z")-eicosatrienoic acid) to trioxilin B3 (10,11,12-trihydroxy-(5"Z",9"E",14"Z")-eicosatrienoic acid). Soluble epoxide hydrolase (i.e. epoxide hydrolase 2 or EH2, often abbreviated sEH) thus appears to be the hepoxilin hydrolase responsible for inactivating the epoxyalcohol metabolites of arachidonic acid, hepoxilin A3 and hepoxilin B3. Soluble epoxide hydrolase is widely expressed in a diversity of human and other mammalian tissues and therefore appears to be the hepoxilin hydrolase responsible for inactivating hepoxilin A3 and B3 (see soluble epoxide hydrolase#Function and epoxide hydrolase#Hepoxilin-epoxide hydrolase). The ability of EH1, EH3, EH4, and leukotriene A4 hydrolase to metabolize hepoxilins to trioxilins has not yet been reported. Function. Hepoxilins possess several activities (see hepoxilin#Physiological effects) whereas their trioxilin products are generally considered to be inactive. Accordingly, the soluble epoxide hydrolase metabolic pathway is considered to function in vivo to inactivate or limit the activity of the hepoxilins. The other fatty acid epoxide hydrolases cited in the Epoxide hydrolases section (above) have not been reported to have hepoxilin-epoxide hydrolase activity, but they could possibly exhibit it and thereby contribute to inactivating the hepoxilins. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220632
14220655
Isochorismatase
In enzymology, an isochorismatase (EC 3.3.2.1) is an enzyme that catalyzes the chemical reaction isochorismate + H2O formula_0 2,3-dihydroxy-2,3-dihydrobenzoate + pyruvate Thus, the two substrates of this enzyme are isochorismate and H2O, whereas its two products are 2,3-dihydroxy-2,3-dihydrobenzoate and pyruvate. This enzyme belongs to the family of hydrolases, specifically those acting on ether bonds (ether hydrolases). The systematic name of this enzyme class is isochorismate pyruvate-hydrolase. Other names in common use include 2,3-dihydro-2,3-dihydroxybenzoate synthase, 2,3-dihydroxy-2,3-dihydrobenzoate synthase, and 2,3-dihydroxy-2,3-dihydrobenzoic synthase. This enzyme participates in the biosynthesis of siderophore group (nonribosomal). Structural studies. As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1NF8, 1NF9, and 2FQ1. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220655
14220689
Limonene-1,2-epoxide hydrolase
In enzymology, a limonene-1,2-epoxide hydrolase (EC 3.3.2.8) is an enzyme that catalyzes the chemical reaction limonene-1,2-epoxide + H2O formula_0 limonene-1,2-diol Thus, the two substrates of this enzyme are limonene-1,2-epoxide and H2O, whereas its product is limonene-1,2-diol. This enzyme is found in the bacterium "Rhodococcus erythropolis" DCL14, where it plays a role in the limonene degradation pathway that allows the bacteria to catabolize limonene as a carbon and energy source. The enzyme belongs to the family of hydrolases, specifically those acting on ether bonds (ether hydrolases). The systematic name of this enzyme class is limonene-1,2-epoxide hydrolase. This enzyme is also called limonene oxide hydrolase. This enzyme has maximal activity at pH 7 and 50°C, and participates in limonene and pinene degradation. Epoxide hydrolases catalyze the hydrolysis of epoxides to corresponding diols, which is important in detoxification, synthesis of signal molecules, or metabolism. Limonene-1,2- epoxide hydrolase (LEH) differs from many other epoxide hydrolases (EHs) in its structure and its novel one-step catalytic mechanism. EHs typically contain conserved α/β-hydrolase folds and catalytic residues which aid with epoxide stabilization and its subsequent hydrolysis reaction. However, LEH’s low molecular mass of 16 kDa suggests that it is too small to house these α/β-hydrolase folds and catalytic triad motifs found in other EHs. Moreover, compared to other EHs, LEH accepts a smaller diversity of substrates and is only able to catalyze reactions with limonene-1,2-epoxide, 1-methylcyclohexene oxide, cyclohexene oxide, and indene oxide. Thus, LEH is considered the founding member of a novel EH family, and its mechanistic, structural, and functional details are of special interest. Mechanism. The epoxide hydrolysis of limonene catalyzed by LEH occurs in a one-step mechanism. Nucleophilic water attacks at one of the two electrophilic positions on the epoxide, opening the three-membered ring to create vicinal diols. Quantum-mechanical and molecular-mechanical studies have observed that LEH-mediated hydrolysis preferentially attacks at the most substituted epoxide carbon. The activation energies of attack at the more and less substituted carbons are 16.9 kcal/mol and 25.1 kcal/mol, respectively. These data also suggest that the LEH mechanism is acid-catalyzed, because acidic conditions favor hydrolysis at the more substituted epoxide carbon which has a greater δ+ charge. The mechanism of LEH hydrolysis does not utilize a covalent enzyme-substrate intermediate, which is distinct from other EHs. However, it does still recruit active site amino acids for acid-base proton exchange and substrate stabilization. According to mutagenesis studies, LEH contains five crucial catalytic residues: Asp101, Arg99, Asp132, Tyr53, and Asn55. The first three catalytic residues form an Asp-Arg-Asp triad that actively donates and accepts protons from substrates in the reaction to drive it forward and help it proceed favorably. Evidence from computational modeling suggests that Asp132 acts to deprotonate water to increase its nucleophilicity in the reaction, while Asp101 protonates the epoxide oxygen to form one of the two alcohols in the diol product. Positively charged Arg99 contributes by stabilizing the negative charges on Asp101 and Asp132. 
The last two catalytic residues, Tyr53 and Asn55, aid in stabilizing and binding the water molecule via hydrogen bonds to help it achieve the optimal orientation for epoxide attack. Stereochemistry. The reaction catalyzed by LEH results in selective stereochemistry at its chiral carbons. LEH affords pure enantiomers of the limonene-1,2-diol when given a racemic mixture of the epoxide. When the substrate has an R chiral center at carbon 4 ("4R"), the product is ("1S,2S,4R")-limonene-1,2-diol, regardless of whether the substrate’s epoxide is "trans" or "cis" to the substitution on carbon 4. Similarly, a substrate with an S chiral center at carbon 4 ("4S") yields only the ("1R,2R,4S")-limonene-1,2-diol. Because of the enantioconvergent nature of LEH and its ability to produce a single enantiomeric product, it has significant applications to industrial synthesis. LEH also has a preference for specific stereoisomers of its substrate. It reacts with all ("1R,2S") limonene epoxides before it begins hydrolysis of the ("1S,2R") stereoisomers. The presence of ("1S,2R") substrates does not decrease the speed of reaction with the preferred stereoisomers, suggesting that the ("1S,2R") limonene epoxides are weak competitive inhibitors. Structure. The crystal structure of LEH contains a six-stranded mixed beta-sheet, with three N-terminal alpha helices packed to one side to create a pocket that extends into the protein core. A fourth helix lies in such a way that it acts as a rim to this pocket. Although mainly lined by hydrophobic residues, this pocket features a cluster of polar groups that lie at its deepest point and constitute the enzyme’s active site. LEH is also a dimer with two subunits at an angle of 179° to each other. The two subunits are largely symmetrical, excluding the amino acids at the N-terminus that are proximal to the main fold. While the LEH structure is distinct from the majority of EHs, it is not entirely dissimilar from all of them. For example, epoxide hydrolase Rv2740, native to "Mycobacterium tuberculosis," contains an active site and catalytic triad similar to LEH, with three helices packed onto a curved six-stranded beta sheet. Like LEH, it lacks the α/β-hydrolase fold found in most EHs. With this emerging class of enzymes, LEH and similarly unique EHs may be novel tools with large potential for industrial catalysis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220689
142207
Singular value decomposition
Matrix decomposition In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix into a rotation, followed by a rescaling followed by another rotation. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any ⁠⁠ matrix. It is related to the polar decomposition. Specifically, the singular value decomposition of an formula_0 complex matrix ⁠⁠ is a factorization of the form formula_1 where ⁠⁠ is an ⁠⁠ complex unitary matrix, formula_2 is an formula_0 rectangular diagonal matrix with non-negative real numbers on the diagonal, ⁠⁠ is an formula_3 complex unitary matrix, and formula_4 is the conjugate transpose of ⁠⁠. Such decomposition always exists for any complex matrix. If ⁠⁠ is real, then ⁠⁠ and ⁠⁠ can be guaranteed to be real orthogonal matrices; in such contexts, the SVD is often denoted formula_5 The diagonal entries formula_6 of formula_2 are uniquely determined by ⁠⁠ and are known as the singular values of ⁠⁠. The number of non-zero singular values is equal to the rank of ⁠⁠. The columns of ⁠⁠ and the columns of ⁠⁠ are called left-singular vectors and right-singular vectors of ⁠⁠, respectively. They form two sets of orthonormal bases ⁠⁠ and ⁠⁠ and if they are sorted so that the singular values formula_7 with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as formula_8 where formula_9 is the rank of ⁠⁠ The SVD is not unique, however it is always possible to choose the decomposition such that the singular values formula_10 are in descending order. In this case, formula_2 (but not ⁠⁠ and ⁠⁠) is uniquely determined by ⁠⁠ The term sometimes refers to the compact SVD, a similar decomposition ⁠⁠ in which ⁠⁠ is square diagonal of size ⁠⁠ where ⁠}⁠ is the rank of ⁠⁠ and has only the non-zero singular values. In this variant, ⁠⁠ is an ⁠⁠ semi-unitary matrix and formula_11 is an ⁠⁠ semi-unitary matrix, such that formula_12 Mathematical applications of the SVD include computing the pseudoinverse, matrix approximation, and determining the rank, range, and null space of a matrix. The SVD is also extremely useful in all areas of science, engineering, and statistics, such as signal processing, least squares fitting of data, and process control. Intuitive interpretations. Rotation, coordinate scaling, and reflection. In the special case when ⁠⁠ is an ⁠⁠ real square matrix, the matrices ⁠⁠ and ⁠⁠ can be chosen to be real ⁠⁠ matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as ⁠⁠ as a linear transformation ⁠}⁠ of the space ⁠⁠ the matrices ⁠⁠ and ⁠⁠ represent rotations or reflection of the space, while ⁠⁠ represents the scaling of each coordinate ⁠⁠ by the factor ⁠⁠ Thus the SVD decomposition breaks down any linear transformation of ⁠⁠ into a composition of three geometrical transformations: a rotation or reflection followed by a coordinate-by-coordinate scaling followed by another rotation or reflection In particular, if ⁠⁠ has a positive determinant, then ⁠⁠ and ⁠⁠ can be chosen to be both rotations with reflections, or both rotations without reflections. If the determinant is negative, exactly one of them will have a reflection. If the determinant is zero, each can be independently chosen to be of either type. 
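As a numerical illustration of this rotation–scaling–rotation reading, the following NumPy sketch decomposes a real 2 × 2 matrix; the matrix itself is an invented example, not one from this article.

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

U, s, Vh = np.linalg.svd(A)  # A = U @ diag(s) @ Vh

# U and Vh are orthogonal, i.e. rotations or reflections ...
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(Vh @ Vh.T, np.eye(2))

# ... and diag(s) rescales each coordinate by a singular value.
assert np.allclose(U @ np.diag(s) @ Vh, A)

# Here det(A) = 5 > 0, so both factors can be taken as pure rotations;
# checking the determinants shows whether NumPy returned a reflection.
print(np.linalg.det(U), np.linalg.det(Vh), s)
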
If the matrix ⁠⁠ is real but not square, namely ⁠⁠ with ⁠⁠ it can be interpreted as a linear transformation from ⁠⁠ to ⁠⁠ Then ⁠⁠ and ⁠⁠ can be chosen to be rotations/reflections of ⁠⁠ and ⁠⁠ respectively; and ⁠⁠ besides scaling the first ⁠}⁠ coordinates, also extends the vector with zeros, i.e. removes trailing coordinates, so as to turn ⁠⁠ into ⁠⁠ Singular values as semiaxes of an ellipse or ellipsoid. As shown in the figure, the singular values can be interpreted as the magnitude of the semiaxes of an ellipse in 2D. This concept can be generalized to ⁠⁠-dimensional Euclidean space, with the singular values of any ⁠⁠ square matrix being viewed as the magnitude of the semiaxis of an ⁠⁠-dimensional ellipsoid. Similarly, the singular values of any ⁠⁠ matrix can be viewed as the magnitude of the semiaxis of an ⁠⁠-dimensional ellipsoid in ⁠⁠-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode magnitude of the semiaxis, while singular vectors encode direction. See below for further details. The columns of U and V are orthonormal bases. Since ⁠⁠ and ⁠⁠ are unitary, the columns of each of them form a set of orthonormal vectors, which can be regarded as basis vectors. The matrix ⁠⁠ maps the basis vector ⁠⁠ to the stretched unit vector ⁠⁠ By the definition of a unitary matrix, the same is true for their conjugate transposes ⁠⁠ and ⁠⁠ except the geometric interpretation of the singular values as stretches is lost. In short, the columns of ⁠⁠ ⁠⁠ ⁠⁠ and ⁠⁠ are orthonormal bases. When ⁠⁠ is a positive-semidefinite Hermitian matrix, ⁠⁠ and ⁠⁠ are both equal to the unitary matrix used to diagonalize ⁠⁠ However, when ⁠⁠ is not positive-semidefinite and Hermitian but still diagonalizable, its eigendecomposition and singular value decomposition are distinct. Geometric meaning. Because ⁠⁠ and ⁠⁠ are unitary, we know that the columns ⁠⁠ of ⁠⁠ yield an orthonormal basis of ⁠⁠ and the columns ⁠⁠ of ⁠⁠ yield an orthonormal basis of ⁠⁠ (with respect to the standard scalar products on these spaces). The linear transformation formula_13 has a particularly simple description with respect to these orthonormal bases: we have formula_14 where ⁠⁠ is the ⁠⁠-th diagonal entry of ⁠⁠ and ⁠⁠ for ⁠⁠ The geometric content of the SVD theorem can thus be summarized as follows: for every linear map ⁠⁠ one can find orthonormal bases of ⁠⁠ and ⁠⁠ such that ⁠⁠ maps the ⁠⁠-th basis vector of ⁠⁠ to a non-negative multiple of the ⁠⁠-th basis vector of ⁠⁠ and sends the leftover basis vectors to zero. With respect to these bases, the map ⁠⁠ is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphere ⁠⁠ of radius one in ⁠⁠ The linear map ⁠⁠ maps this sphere onto an ellipsoid in ⁠⁠ Non-zero singular values are simply the lengths of the semi-axes of this ellipsoid. Especially when ⁠⁠ and all the singular values are distinct and non-zero, the SVD of the linear map ⁠⁠ can be easily analyzed as a succession of three consecutive moves: consider the ellipsoid ⁠⁠ and specifically its axes; then consider the directions in ⁠⁠ sent by ⁠⁠ onto these axes. These directions happen to be mutually orthogonal. 
Apply first an isometry sending these directions to the coordinate axes. On a second move, apply an endomorphism diagonalized along the coordinate axes, stretching or shrinking in each direction using the semi-axis lengths of the image ellipsoid as stretching coefficients. The composition of these two moves then sends the unit sphere onto an ellipsoid isometric to the image ellipsoid. To define the third and last move, apply an isometry to this ellipsoid to carry it onto the image ellipsoid. As can be easily checked, the composition of the three moves coincides with the original map. Example. Consider the 4 × 5 matrix formula_15 A singular value decomposition of this matrix is given by UΣV* formula_16 The scaling matrix Σ is zero outside of the diagonal and one diagonal element is zero. Furthermore, because the matrices U and V* are unitary, multiplying by their respective conjugate transposes yields identity matrices, as shown below. In this case, because U and V* are real valued, each is an orthogonal matrix. formula_17 This particular singular value decomposition is not unique. Choosing V such that formula_18 is also a valid singular value decomposition. SVD and spectral decomposition. Singular values, singular vectors, and their relation to the SVD. A non-negative real number σ is a singular value for M if and only if there exist unit-length vectors u in K^m and v in K^n, where K denotes the field of real or complex numbers, such that formula_19 The vectors u and v are called left-singular and right-singular vectors for σ, respectively. In any singular value decomposition formula_20 the diagonal entries of Σ are equal to the singular values of M. The first min(m, n) columns of U and V are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that: an m × n matrix M has at most min(m, n) distinct singular values; it is always possible to find a unitary basis for K^m with a subset of basis vectors spanning the left-singular vectors of each singular value of M; and it is always possible to find a unitary basis for K^n with a subset of basis vectors spanning the right-singular vectors of each singular value of M. A singular value for which we can find two left (or right) singular vectors that are linearly independent is called "degenerate". If u1 and u2 are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. A similar statement is true for right-singular vectors. The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns of U and V corresponding to diagonal elements of Σ all with the same value σ. As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in the cokernel and kernel, respectively, of M, which by the rank–nullity theorem cannot be the same dimension if m ≠ n. Even if all singular values are nonzero, if m > n then the cokernel is nontrivial, in which case U is padded with m − n orthogonal vectors from the cokernel. Conversely, if m < n, then V is padded by n − m orthogonal vectors from the kernel. However, if the singular value 0 exists, the extra columns of U or V already appear as left or right-singular vectors. Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factor e^(iφ) (for the real case up to a sign). Consequently, if all singular values of a square matrix M are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column of U by a unit-phase factor and simultaneous multiplication of the corresponding column of V by the same unit-phase factor. 
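The defining relations of singular pairs described above are easy to verify numerically; the following short NumPy check uses an arbitrary 2 × 3 example matrix, not the one from the example section.

import numpy as np

# Verify M v_i = s_i u_i and M* u_i = s_i v_i for each singular pair.
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

U, s, Vh = np.linalg.svd(M)  # left vectors: columns of U; right: rows of Vh

for i, sigma in enumerate(s):
    u, v = U[:, i], Vh[i, :]
    assert np.allclose(M @ v, sigma * u)           # right-singular vector
    assert np.allclose(M.conj().T @ u, sigma * v)  # left-singular vector

print(s)  # the singular values, in descending order
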
In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of both ⁠⁠ and ⁠⁠ spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors of ⁠⁠ and ⁠⁠ spanning the kernel and cokernel, respectively, of ⁠⁠ Relation to eigenvalue decomposition. The singular value decomposition is very general in the sense that it can be applied to any ⁠⁠ matrix, whereas eigenvalue decomposition can only be applied to square diagonalizable matrices. Nevertheless, the two decompositions are related. If ⁠⁠ has SVD ⁠⁠ the following two relations hold: formula_21 The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently: In the special case of ⁠⁠ being a normal matrix, and thus also square, the spectral theorem ensures that it can be unitarily diagonalized using a basis of eigenvectors, and thus decomposed as ⁠⁠ for some unitary matrix ⁠⁠ and diagonal matrix ⁠⁠ with complex elements ⁠⁠ along the diagonal. When ⁠⁠ is positive semi-definite, ⁠⁠ will be non-negative real numbers so that the decomposition ⁠⁠ is also a singular value decomposition. Otherwise, it can be recast as an SVD by moving the phase ⁠}⁠ of each ⁠⁠ to either its corresponding ⁠⁠ or ⁠⁠ The natural connection of the SVD to non-normal matrices is through the polar decomposition theorem: ⁠⁠ where ⁠⁠ is positive semidefinite and normal, and ⁠⁠ is unitary. Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD of ⁠⁠ while related, differ: the eigenvalue decomposition is ⁠⁠ where ⁠⁠ is not necessarily unitary and ⁠⁠ is not necessarily positive semi-definite, while the SVD is ⁠⁠ where ⁠⁠ is diagonal and positive semi-definite, and ⁠⁠ and ⁠⁠ are unitary matrices that are not necessarily related except through the matrix ⁠⁠ While only non-defective square matrices have an eigenvalue decomposition, any ⁠⁠ matrix has a SVD. Applications of the SVD. Pseudoinverse. The singular value decomposition can be used for computing the pseudoinverse of a matrix. The pseudoinverse of the matrix ⁠⁠ with singular value decomposition ⁠⁠ is, formula_22 where formula_23 is the pseudoinverse of formula_24, which is formed by replacing every non-zero diagonal entry by its reciprocal and transposing the resulting matrix. The pseudoinverse is one way to solve linear least squares problems. Solving homogeneous linear equations. A set of homogeneous linear equations can be written as ⁠⁠ for a matrix ⁠⁠ and vector ⁠⁠ A typical situation is that ⁠⁠ is known and a non-zero ⁠⁠ is to be determined which satisfies the equation. Such an ⁠⁠ belongs to ⁠⁠'s null space and is sometimes called a (right) null vector of ⁠⁠ The vector ⁠⁠ can be characterized as a right-singular vector corresponding to a singular value of ⁠⁠ that is zero. This observation means that if ⁠⁠ is a square matrix and has no vanishing singular value, the equation has no non-zero ⁠⁠ as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. Analogously to the definition of a (right) null vector, a non-zero ⁠⁠ satisfying ⁠⁠ with ⁠⁠ denoting the conjugate transpose of ⁠⁠ is called a left null vector of ⁠⁠ Total least squares minimization. 
A total least squares problem seeks the vector ⁠⁠ that minimizes the 2-norm of a vector ⁠⁠ under the constraint formula_25 The solution turns out to be the right-singular vector of ⁠⁠ corresponding to the smallest singular value. Range, null space and rank. Another application of the SVD is that it provides an explicit representation of the range and null space of a matrix ⁠⁠ The right-singular vectors corresponding to vanishing singular values of ⁠⁠ span the null space of ⁠⁠ and the left-singular vectors corresponding to the non-zero singular values of ⁠⁠ span the range of ⁠⁠ For example, in the above example the null space is spanned by the last row of ⁠⁠ and the range is spanned by the first three columns of ⁠⁠ As a consequence, the rank of ⁠⁠ equals the number of non-zero singular values which is the same as the number of non-zero diagonal elements in formula_2. In numerical linear algebra the singular values can be used to determine the "effective rank" of a matrix, as rounding error may lead to small but non-zero singular values in a rank deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero. Low-rank matrix approximation. Some practical applications need to solve the problem of approximating a matrix ⁠⁠ with another matrix formula_26, said to be truncated, which has a specific rank ⁠⁠. In the case that the approximation is based on minimizing the Frobenius norm of the difference between ⁠⁠ and ⁠}⁠ under the constraint that formula_27 it turns out that the solution is given by the SVD of ⁠⁠ namely formula_28 where formula_29 is the same matrix as formula_2 except that it contains only the ⁠⁠ largest singular values (the other singular values are replaced by zero). This is known as the Eckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; see ). Separable models. The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrix ⁠⁠ can be written as an outer product of two vectors ⁠⁠ or, in coordinates, ⁠⁠ Specifically, the matrix ⁠⁠ can be decomposed as, formula_30 Here ⁠⁠ and ⁠⁠ are the ⁠⁠-th columns of the corresponding SVD matrices, ⁠⁠ are the ordered singular values, and each ⁠⁠ is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero ⁠⁠ is exactly the rank of the matrix. Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described by a Gabor filter in the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example, reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of ⁠⁠ in the SVD factorization is then a Gabor while the first column of ⁠⁠ represents the time modulation (or vice versa). One may then define an index of separability formula_31 which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition. Nearest orthogonal matrix. 
Nearest orthogonal matrix. It is possible to use the SVD of a square matrix A to determine the orthogonal matrix O closest to A. The closeness of fit is measured by the Frobenius norm of O − A. The solution is the product UV*. This intuitively makes sense because an orthogonal matrix would have the decomposition UIV* where I is the identity matrix, so that if A = UΣV*, then the product O = UV* amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix R = UV* of the polar decomposition formula_32 in either order of stretch and rotation, as described above. A similar problem, with interesting applications in shape analysis, is the orthogonal Procrustes problem, which consists of finding an orthogonal matrix O which most closely maps A to B. Specifically, formula_33 where formula_34 denotes the Frobenius norm. This problem is equivalent to finding the nearest orthogonal matrix to a given matrix formula_35. The Kabsch algorithm. The Kabsch algorithm (called Wahba's problem in other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules.
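Both the nearest-orthogonal-matrix problem and the orthogonal Procrustes problem reduce to one SVD each. A NumPy sketch, illustrative only; the function names are invented for this example:

```python
import numpy as np

def nearest_orthogonal(A):
    # Replace the singular values of A by ones: O = U V*.
    U, _, Vh = np.linalg.svd(A)
    return U @ Vh

def procrustes_rotation(A, B):
    # Orthogonal Procrustes: the minimizer of ||A @ Omega - B||_F is the
    # orthogonal matrix nearest to A^T B.
    U, _, Vh = np.linalg.svd(A.T @ B)
    return U @ Vh

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
O = nearest_orthogonal(A)
assert np.allclose(O @ O.T, np.eye(3))
Omega = procrustes_rotation(A, B)
assert np.allclose(Omega @ Omega.T, np.eye(3))
```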
Signal processing. The SVD and pseudoinverse have been successfully applied to signal processing, image processing and big data (e.g., in genomic signal processing). Other examples. The SVD is also applied extensively to the study of linear inverse problems and is useful in the analysis of regularization methods such as that of Tikhonov. It is widely used in statistics, where it is related to principal component analysis and to correspondence analysis, and in signal processing and pattern recognition. It is also used in output-only modal analysis, where the non-scaled mode shapes can be determined from the singular vectors. Yet another usage is latent semantic indexing in natural-language text processing. In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number" formula_36. It often controls the error rate or convergence rate of a given computational scheme on such systems. The SVD also plays a crucial role in the field of quantum information, in a form often referred to as the Schmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to be entangled: if the rank of the formula_2 matrix is larger than one. One application of SVD to rather large matrices is in numerical weather prediction, where Lanczos methods are used to estimate the most linearly quickly growing few perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate an ensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction. SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled with radial basis functions to interpolate solutions to three-dimensional unsteady flow problems. SVD has been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO. SVD can help to increase the accuracy and speed of waveform generation to support gravitational-wave searches and update two different waveform models. Singular value decomposition is used in recommender systems to predict people's item ratings. Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines. Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to disease outbreak detection. A combination of SVD and higher-order SVD has also been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance. In astrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design and orbital station-keeping. Proof of existence. An eigenvalue λ of a matrix M is characterized by the algebraic relation Mu = λu. When M is Hermitian, a variational characterization is also available. Let M be a real n × n symmetric matrix. Define formula_37 By the extreme value theorem, this continuous function attains a maximum at some u when restricted to the unit sphere formula_38 By the Lagrange multipliers theorem, u necessarily satisfies formula_39 for some real number λ. The nabla symbol, ∇, is the del operator (differentiation with respect to the vector argument). Using the symmetry of M we obtain formula_40 Therefore Mu = λu, so u is a unit-length eigenvector of M. For every unit-length eigenvector v of M its eigenvalue is f(v), so λ is the largest eigenvalue of M. The same calculation performed on the orthogonal complement of u gives the next largest eigenvalue, and so on. The complex Hermitian case is similar; there f(x) = x*Mx is a real-valued function of 2n real variables. Singular values are similar in that they can be described algebraically or from variational principles, although, unlike the eigenvalue case, Hermiticity, or symmetry, of M is no longer required. This section gives these two arguments for existence of singular value decomposition. Based on the spectral theorem. Let formula_41 be an m × n complex matrix. Since formula_42 is positive semi-definite and Hermitian, by the spectral theorem, there exists an n × n unitary matrix formula_11 such that formula_43 where formula_44 is diagonal and positive definite, of dimension formula_45, with formula_46 the number of non-zero eigenvalues of formula_42 (which can be shown to verify formula_47). Note that formula_11 is here by definition a matrix whose formula_48-th column is the formula_48-th eigenvector of formula_42, corresponding to the eigenvalue formula_49. Moreover, the formula_50-th column of formula_11, for formula_51, is an eigenvector of formula_42 with eigenvalue formula_52. This can be expressed by writing formula_11 as formula_53, where the columns of formula_54 and formula_55 therefore contain the eigenvectors of formula_42 corresponding to non-zero and zero eigenvalues, respectively. Using this rewriting of formula_11, the equation becomes: formula_56 This implies that formula_57 Moreover, the second equation implies formula_58.
Finally, the unitarity of formula_11 translates, in terms of formula_54 and formula_55, into the following conditions: formula_59 where the subscripts on the identity matrices are used to remark that they are of different dimensions. Let us now define formula_60 Then, formula_61 since formula_62 This can also be seen as an immediate consequence of the fact that formula_63. This is equivalent to the observation that if formula_64 is the set of eigenvectors of formula_42 corresponding to non-vanishing eigenvalues formula_65, then formula_66 is a set of orthogonal vectors, and formula_67 is a (generally not complete) set of "orthonormal" vectors. This matches with the matrix formalism used above, denoting with formula_54 the matrix whose columns are formula_64, with formula_55 the matrix whose columns are the eigenvectors of formula_42 with vanishing eigenvalue, and formula_68 the matrix whose columns are the vectors formula_67. We see that this is almost the desired result, except that formula_68 and formula_54 are in general not unitary, since they might not be square. However, we do know that the number of rows of formula_68 is no smaller than the number of columns, since the dimension of formula_44 is no greater than formula_69 and formula_70. Also, since formula_71 the columns in formula_68 are orthonormal and can be extended to an orthonormal basis. This means that we can choose formula_72 such that formula_73 is unitary. For V we already have V_2 to make it unitary. Now, define formula_74 where extra zero rows are added or removed to make the number of zero rows equal the number of columns of U_2, and hence the overall dimensions of formula_2 equal to formula_75. Then formula_76 which is the desired result: formula_77 Notice the argument could begin with diagonalizing MM* rather than M*M. (This shows directly that MM* and M*M have the same non-zero eigenvalues.) Based on variational characterization. The singular values can also be characterized as the maxima of u^T M v, considered as a function of u and v, over particular subspaces. The singular vectors are the values of u and v where these maxima are attained. Let M denote an m × n matrix with real entries. Let S^(k−1) be the unit formula_78-sphere in formula_79, and define formula_80 formula_81 formula_82 Consider the function σ restricted to S^(m−1) × S^(n−1). Since both spheres are compact sets, their product is also compact. Furthermore, since σ is continuous, it attains a largest value for at least one pair of vectors u in S^(m−1) and v in S^(n−1). This largest value is denoted σ_1, and the corresponding vectors are denoted u_1 and v_1. Since σ_1 is the largest value of σ(u, v), it must be non-negative. If it were negative, changing the sign of either u_1 or v_1 would make it positive and therefore larger. Statement. u_1 and v_1 are left and right-singular vectors of M with corresponding singular value σ_1. Proof. Similar to the eigenvalues case, by assumption the two vectors satisfy the Lagrange multiplier equation: formula_83 After some algebra, this becomes formula_84 Multiplying the first equation from left by u_1^T and the second equation from left by v_1^T and taking formula_85 into account gives formula_86 Plugging this into the pair of equations above, we have formula_87 This proves the statement. More singular vectors and singular values can be found by maximizing σ(u, v) over normalized u and v which are orthogonal to u_1 and v_1, respectively. The passage from real to complex is similar to the eigenvalue case.
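The spectral-theorem construction above can be checked numerically. A NumPy sketch, illustrative only; the function name and the tolerance-based rank cut are choices made here, not part of the proof. It builds U_1, the singular values, and V_1 exactly as in the proof and verifies the reconstruction:

```python
import numpy as np

def svd_from_spectral(M, tol=1e-12):
    # Mirror the proof: eigendecompose M* M, then set U_1 = M V_1 D^{-1/2}.
    evals, V = np.linalg.eigh(M.conj().T @ M)   # ascending eigenvalues
    evals, V = evals[::-1], V[:, ::-1]          # reorder descending
    keep = evals > tol * evals.max()
    s = np.sqrt(evals[keep])                    # singular values
    U1 = (M @ V[:, keep]) / s                   # columns M v_i / sigma_i
    return U1, s, V[:, keep]

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))
U1, s, V1 = svd_from_spectral(M)
assert np.allclose((U1 * s) @ V1.conj().T, M)         # M = U_1 D^{1/2} V_1*
assert np.allclose(U1.conj().T @ U1, np.eye(len(s)))  # orthonormal columns
```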
Calculating the SVD. One-sided Jacobi algorithm. The one-sided Jacobi algorithm is an iterative algorithm in which a matrix is transformed step by step into a matrix with orthogonal columns. The elementary iteration is given as a Jacobi rotation, formula_88 where the angle formula_89 of the Jacobi rotation matrix formula_90 is chosen such that after the rotation the columns with numbers formula_91 and formula_92 become orthogonal. The indices formula_93 are swept cyclically, formula_94, where formula_69 is the number of columns. After the algorithm has converged, the singular value decomposition formula_95 is recovered as follows: the matrix formula_96 is the accumulation of Jacobi rotation matrices, the matrix formula_97 is given by normalising the columns of the transformed matrix formula_98, and the singular values are given as the norms of the columns of the transformed matrix formula_98. Two-sided Jacobi algorithm. The two-sided Jacobi SVD algorithm—a generalization of the Jacobi eigenvalue algorithm—is an iterative algorithm where a square matrix is iteratively transformed into a diagonal matrix. If the matrix is not square, the QR decomposition is performed first and then the algorithm is applied to the formula_99 matrix. The elementary iteration zeroes a pair of off-diagonal elements by first applying a Givens rotation to symmetrize the pair of elements and then applying a Jacobi transformation to zero them, formula_100 where formula_101 is the Givens rotation matrix with the angle chosen such that the given pair of off-diagonal elements become equal after the rotation, and where formula_102 is the Jacobi transformation matrix that zeroes these off-diagonal elements. The iteration proceeds exactly as in the Jacobi eigenvalue algorithm: by cyclic sweeps over all off-diagonal elements. After the algorithm has converged, the resulting diagonal matrix contains the singular values. The matrices formula_97 and formula_96 are accumulated as follows: formula_103, formula_104.
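A compact, unoptimized Python sketch of the one-sided Jacobi iteration just described, illustrative only; production codes add pivot ordering, convergence safeguards, and blocking:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = A[:, p] @ A[:, q]
                app, aqq = A[:, p] @ A[:, p], A[:, q] @ A[:, q]
                off = max(off, abs(apq))
                if apq == 0.0:
                    continue
                # Rotation angle that makes columns p and q orthogonal.
                tau = (aqq - app) / (2.0 * apq)
                t = np.sign(tau) / (abs(tau) + np.hypot(1.0, tau)) if tau else 1.0
                c = 1.0 / np.hypot(1.0, t)
                G = np.array([[c, c * t], [-c * t, c]])
                A[:, [p, q]] = A[:, [p, q]] @ G
                V[:, [p, q]] = V[:, [p, q]] @ G
        if off < tol:
            break
    sigma = np.linalg.norm(A, axis=0)        # column norms = singular values
    U = A / np.where(sigma > 0, sigma, 1.0)  # normalized columns (unsorted)
    return U, sigma, V

A = np.random.default_rng(3).standard_normal((5, 4))
U, s, V = one_sided_jacobi_svd(A)
assert np.allclose((U * s) @ V.T, A)
```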
Numerical approach. The singular value decomposition can be computed using the following observations: the left-singular vectors of M are a set of orthonormal eigenvectors of MM*; the right-singular vectors of M are a set of orthonormal eigenvectors of M*M; and the non-zero singular values of M are the square roots of the non-zero eigenvalues of both M*M and MM*. The SVD of a matrix M is typically computed by a two-step procedure. In the first step, the matrix is reduced to a bidiagonal matrix. This takes order O(mn²) floating-point operations (flop), assuming that m ≥ n. The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with an iterative method (as with eigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like the machine epsilon. If this precision is considered constant, then the second step takes O(n) iterations, each costing O(n) flops. Thus, the first step is more expensive, and the overall cost is O(mn²) flops. The first step can be done using Householder reflections, assuming that only the singular values are needed and not the singular vectors. If m is much larger than n, then it is advantageous to first reduce the matrix M to a triangular matrix with the QR decomposition and then use Householder reflections to further reduce the matrix to bidiagonal form, which reduces the combined cost. The second step can be done by a variant of the QR algorithm for the computation of eigenvalues, which was first described by Golub and Kahan in 1965. The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small. Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The same algorithm is implemented in the GNU Scientific Library (GSL). The GSL also offers an alternative method that uses a one-sided Jacobi orthogonalization in step 2. This method computes the SVD of the bidiagonal matrix by solving a sequence of 2 × 2 SVD problems, similar to how the Jacobi eigenvalue algorithm solves a sequence of 2 × 2 eigenvalue problems. Yet another method for step 2 uses the idea of divide-and-conquer eigenvalue algorithms. There is an alternative way that does not explicitly use the eigenvalue decomposition. Usually the singular value problem of a matrix M is converted into an equivalent symmetric eigenvalue problem such as MM*, M*M, or formula_105 The approaches that use eigenvalue decompositions are based on the QR algorithm, which is well-developed to be stable and fast. Note that the singular values are real, and the right- and left-singular vectors are not required to form similarity transformations. One can iteratively alternate between the QR decomposition and the LQ decomposition to find the real diagonal Hermitian matrices. The QR decomposition gives M ⇒ QR and the LQ decomposition of R gives R ⇒ LP*. Thus, at every iteration, we have M ⇒ QLP*, update M ⇐ L, and repeat the orthogonalizations. Eventually, this iteration between QR decomposition and LQ decomposition produces left- and right-unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD.
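A NumPy sketch of this QR/LQ alternation, illustrative only; the LQ step is realized through a QR factorization of the transpose, and no shifts or deflation are attempted, so convergence is slow:

```python
import numpy as np

def svd_by_qr_lq(M, sweeps=100):
    # M = U @ A @ V.T is an invariant of the iteration; A tends toward a
    # diagonal matrix whose entries are the singular values up to sign.
    U, V, A = np.eye(M.shape[0]), np.eye(M.shape[1]), M.copy()
    for _ in range(sweeps):
        Q, R = np.linalg.qr(A)        # QR step: A = Q R
        U = U @ Q
        Q2, R2 = np.linalg.qr(R.T)    # LQ step, via QR of the transpose
        V = V @ Q2
        A = R2.T
    return U, A, V

M = np.random.default_rng(4).standard_normal((4, 4))
U, A, V = svd_by_qr_lq(M)
assert np.allclose(U @ A @ V.T, M)          # the invariant holds exactly
print(np.sort(np.abs(np.diag(A)))[::-1])    # approximate singular values
print(np.linalg.svd(M, compute_uv=False))   # reference values
```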
Analytic result of 2 × 2 SVD. The singular values of a 2 × 2 matrix can be found analytically. Let the matrix be formula_106 where formula_107 are complex numbers that parameterize the matrix, I is the identity matrix, and formula_7 denote the Pauli matrices. Then its two singular values are given by formula_108 Reduced SVDs. In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. The following can be distinguished for an m × n matrix M of rank r: Thin SVD. The thin, or economy-sized, SVD of a matrix M is given by formula_109 where formula_110 the matrices U_k and V_k contain only the first k columns of U and V, and Σ_k contains only the first k singular values from Σ. The matrix U_k is thus m × k, Σ_k is k × k diagonal, and V_k* is k × n. The thin SVD uses significantly less space and computation time if k ≪ max(m, n). The first stage in its calculation will usually be a QR decomposition of M, which can make for a significantly quicker calculation in this case. Compact SVD. The compact SVD of a matrix M is given by formula_111 Only the r column vectors of U and r row vectors of V* corresponding to the non-zero singular values Σ_r are calculated. The remaining vectors of U and V* are not calculated. This is quicker and more economical than the thin SVD if r ≪ min(m, n). The matrix U_r is thus m × r, Σ_r is r × r diagonal, and V_r* is r × n. Truncated SVD. In many applications the number r of the non-zero singular values is large, making even the compact SVD impractical to compute. In such cases, the smallest singular values may need to be truncated to compute only t ≪ r non-zero singular values. The truncated SVD is no longer an exact decomposition of the original matrix M, but rather provides the optimal low-rank matrix approximation by any matrix of a fixed rank t: formula_112 where matrix U_t is m × t, Σ_t is t × t diagonal, and V_t* is t × n. Only the t column vectors of U and t row vectors of V* corresponding to the t largest singular values Σ_t are calculated. This can be much quicker and more economical than the compact SVD if t ≪ r, but requires a completely different toolset of numerical solvers. In applications that require an approximation to the Moore–Penrose inverse of the matrix M, the smallest singular values of M are of interest, which are more challenging to compute compared to the largest ones. Truncated SVD is employed in latent semantic indexing. Norms. Ky Fan norms. The sum of the k largest singular values of M is a matrix norm, the Ky Fan k-norm of M. The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of M as a linear operator with respect to the Euclidean norms of its domain and codomain. In other words, the Ky Fan 1-norm is the operator norm induced by the standard formula_113 Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operator M on (possibly infinite-dimensional) Hilbert spaces, that formula_114 But, in the matrix case, (M*M)^(1/2) is a normal matrix, so formula_115 is the largest eigenvalue of (M*M)^(1/2), i.e. the largest singular value of M. The last of the Ky Fan norms, the sum of all singular values, is the trace norm (also known as the "nuclear norm"), defined by formula_116 (the eigenvalues of M*M are the squares of the singular values). Hilbert–Schmidt norm. The singular values are related to another norm on the space of operators. Consider the Hilbert–Schmidt inner product on the n × n matrices, defined by formula_117 So the induced norm is formula_118 Since the trace is invariant under unitary equivalence, this shows formula_119 where σ_i are the singular values of M. This is called the Frobenius norm, Schatten 2-norm, or Hilbert–Schmidt norm of M. Direct calculation shows that the Frobenius norm of M coincides with: formula_120 In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of the Schatten norm. Variations and generalizations. Scale-invariant SVD. The singular values of a matrix A are uniquely defined and are invariant with respect to left and/or right unitary transformations of A. In other words, the singular values of UAV, for unitary matrices U and V, are equal to the singular values of A. This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations. The Scale-Invariant SVD, or SI-SVD, is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations of A. In other words, the singular values of DAE, for invertible diagonal matrices D and E, are equal to the singular values of A. This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed.
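Both the norm identities above and the unitary invariance of the singular values stated here are easy to confirm numerically. A small NumPy sketch, illustrative only; orthogonal factors from QR stand in for random unitary matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 3))
s = np.linalg.svd(M, compute_uv=False)

# Ky Fan 1-norm (operator 2-norm), trace/nuclear norm, Frobenius norm.
assert np.isclose(s[0], np.linalg.norm(M, 2))
assert np.isclose(s.sum(), np.linalg.norm(M, 'nuc'))
assert np.isclose(np.sqrt(np.sum(s ** 2)), np.linalg.norm(M, 'fro'))

# Singular values are unchanged by left/right orthogonal transformations.
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(np.linalg.svd(Q1 @ M @ Q2, compute_uv=False), s)
```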
Bounded operators on Hilbert spaces. The factorization M = UΣV* can be extended to a bounded operator M on a separable Hilbert space H. Namely, for any bounded operator M, there exist a partial isometry U, a unitary V, a measure space (X, μ), and a non-negative measurable f such that formula_121 where T_f is the multiplication by f on L²(X, μ). This can be shown by mimicking the linear algebraic argument for the matrix case above. V T_f V* is the unique positive square root of M*M, as given by the Borel functional calculus for self-adjoint operators. The reason why U need not be unitary is that, unlike the finite-dimensional case, given an isometry U_1 with nontrivial kernel, a suitable U_2 may not be found such that formula_122 is a unitary operator. As for matrices, the singular value factorization is equivalent to the polar decomposition for operators: we can simply write formula_123 and notice that UV* is still a partial isometry while V T_f V* is positive. Singular values and compact operators. The notion of singular values and left/right-singular vectors can be extended to compact operators on Hilbert space, as they have a discrete spectrum. If T is compact, every non-zero λ in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If M is compact, so is M*M. Applying the diagonalization result, the unitary image of its positive square root T_f has a set of orthonormal eigenvectors {e_i} corresponding to strictly positive eigenvalues {σ_i}. For any ψ in H, formula_124 where the series converges in the norm topology on H. Notice how this resembles the expression from the finite-dimensional case. The σ_i are called the singular values of M. {U e_i} (resp. {V e_i}) can be considered the left-singular (resp. right-singular) vectors of M. Compact operators on a Hilbert space are the closure of finite-rank operators in the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is: Theorem. M is compact if and only if M*M is compact. History. The singular value decomposition was originally developed by differential geometers, who wished to determine whether a real bilinear form could be made equal to another by independent orthogonal transformations of the two spaces it acts on. Eugenio Beltrami and Camille Jordan discovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form a complete set of invariants for bilinear forms under orthogonal substitutions. James Joseph Sylvester also arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan. Sylvester called the singular values the "canonical multipliers" of the matrix. The fourth mathematician to discover the singular value decomposition independently is Autonne in 1915, who arrived at it via the polar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be by Carl Eckart and Gale J. Young in 1936; they saw it as a generalization of the principal axis transformation for Hermitian matrices. In 1907, Erhard Schmidt defined an analog of singular values for integral operators (which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed by Émile Picard in 1910, who was the first to call the numbers formula_125 "singular values" (or in French, "valeurs singulières").
Practical methods for computing the SVD date back to Kogbetliantz in 1954–1955 and Hestenes in 1958, resembling closely the Jacobi eigenvalue algorithm, which uses plane rotations or Givens rotations. However, these were replaced by the method of Gene Golub and William Kahan published in 1965, which uses Householder transformations or reflections. In 1970, Golub and Christian Reinsch published a variant of the Golub/Kahan algorithm that is still the one most-used today. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "m \\times n" }, { "math_id": 1, "text": "\\mathbf{M} = \\mathbf{U\\Sigma V^*}," }, { "math_id": 2, "text": "\\mathbf \\Sigma" }, { "math_id": 3, "text": "n \\times n" }, { "math_id": 4, "text": "\\mathbf V^*" }, { "math_id": 5, "text": "\\mathbf U \\mathbf \\Sigma \\mathbf V^\\mathrm{T}." }, { "math_id": 6, "text": "\\sigma_i = \\Sigma_{i i}" }, { "math_id": 7, "text": "\\sigma_i" }, { "math_id": 8, "text": "\n\\mathbf{M} = \\sum_{i=1}^{r}\\sigma_i\\mathbf{u}_i\\mathbf{v}_i^{*},\n" }, { "math_id": 9, "text": "r \\leq \\min\\{m,n\\}" }, { "math_id": 10, "text": "\\Sigma_{i i}" }, { "math_id": 11, "text": "\\mathbf{V}" }, { "math_id": 12, "text": "\\mathbf U^* \\mathbf U = \\mathbf V^* \\mathbf V = \\mathbf I_r." }, { "math_id": 13, "text": "\nT : \\left\\{\\begin{aligned}\n K^n &\\to K^m \\\\\n x &\\mapsto \\mathbf{M}x \\end{aligned}\\right.\n" }, { "math_id": 14, "text": "\nT(\\mathbf{V}_i) = \\sigma_i \\mathbf{U}_i, \\qquad i\n= 1, \\ldots, \\min(m, n),\n" }, { "math_id": 15, "text": "\n\\mathbf{M} = \\begin{bmatrix}\n 1 & 0 & 0 & 0 & 2 \\\\\n 0 & 0 & 3 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 2 & 0 & 0 & 0\n\\end{bmatrix}\n" }, { "math_id": 16, "text": "\\begin{align}\n\\mathbf{U} &= \\begin{bmatrix}\n \\color{Green}0 & \\color{Blue}-1 & \\color{Cyan}0 & \\color{Emerald}0 \\\\\n \\color{Green}-1 & \\color{Blue}0 & \\color{Cyan}0 & \\color{Emerald}0 \\\\\n \\color{Green}0 & \\color{Blue}0 & \\color{Cyan}0 & \\color{Emerald}-1 \\\\\n \\color{Green}0 & \\color{Blue}0 & \\color{Cyan}-1 & \\color{Emerald}0\n\\end{bmatrix} \\\\[6pt]\n\n\\mathbf \\Sigma &= \\begin{bmatrix}\n 3 & 0 & 0 & 0 & \\color{Gray}\\mathit{0} \\\\\n 0 & \\sqrt{5} & 0 & 0 & \\color{Gray}\\mathit{0} \\\\\n 0 & 0 & 2 & 0 & \\color{Gray}\\mathit{0} \\\\\n 0 & 0 & 0 & \\color{Red}\\mathbf{0} & \\color{Gray}\\mathit{0}\n\\end{bmatrix} \\\\[6pt]\n\n \\mathbf{V}^* &= \\begin{bmatrix}\n \\color{Violet}0 & \\color{Violet}0 & \\color{Violet}-1 & \\color{Violet}0 &\\color{Violet}0 \\\\\n \\color{Plum}-\\sqrt{0.2}& \\color{Plum}0 & \\color{Plum}0 & \\color{Plum}0 &\\color{Plum}-\\sqrt{0.8} \\\\\n \\color{Magenta}0 & \\color{Magenta}-1 & \\color{Magenta}0 & \\color{Magenta}0 &\\color{Magenta}0 \\\\\n \\color{Orchid}0 & \\color{Orchid}0 & \\color{Orchid}0 & \\color{Orchid}1 &\\color{Orchid}0 \\\\\n \\color{Purple} - \\sqrt{0.8} & \\color{Purple}0 & \\color{Purple}0 & \\color{Purple}0 & \\color{Purple}\\sqrt{0.2}\n\\end{bmatrix}\n\\end{align}" }, { "math_id": 17, "text": "\\begin{align}\n \\mathbf{U} \\mathbf{U}^* &=\n \\begin{bmatrix}\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n \\end{bmatrix} = \\mathbf{I}_4 \\\\[6pt]\n \\mathbf{V} \\mathbf{V}^* &=\n \\begin{bmatrix}\n 1 & 0 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 0 & 1\n \\end{bmatrix} = \\mathbf{I}_5\n\\end{align}" }, { "math_id": 18, "text": "\\mathbf{V}^* = \\begin{bmatrix}\n \\color{Violet}0 & \\color{Violet}1 & \\color{Violet}0 & \\color{Violet}0 & \\color{Violet}0 \\\\\n \\color{Plum}0 & \\color{Plum}0 & \\color{Plum}1 & \\color{Plum}0 & \\color{Plum}0 \\\\\n \\color{Magenta}\\sqrt{0.2} & \\color{Magenta}0 & \\color{Magenta}0 & \\color{Magenta}0 & \\color{Magenta}\\sqrt{0.8} \\\\\n \\color{Orchid}\\sqrt{0.4} & \\color{Orchid}0 & \\color{Orchid}0 & \\color{Orchid}\\sqrt{0.5} & \\color{Orchid}-\\sqrt{0.1} \\\\\n \\color{Purple}-\\sqrt{0.4} & \\color{Purple}0 & \\color{Purple}0 & \\color{Purple}\\sqrt{0.5} & \\color{Purple}\\sqrt{0.1}\n\\end{bmatrix}" 
}, { "math_id": 19, "text": "\\begin{align}\n\\mathbf{M v} &= \\sigma \\mathbf{u}, \\\\[3mu]\n\\mathbf M^*\\mathbf u &= \\sigma \\mathbf{v}.\n\\end{align}" }, { "math_id": 20, "text": "\n\\mathbf M = \\mathbf U \\mathbf \\Sigma \\mathbf V^*\n" }, { "math_id": 21, "text": "\\begin{align}\n\\mathbf{M}^* \\mathbf{M} &= \\mathbf{V} \\mathbf \\Sigma^* \\mathbf{U}^*\\, \\mathbf{U} \\mathbf \\Sigma \\mathbf{V}^* = \\mathbf{V} (\\mathbf \\Sigma^* \\mathbf \\Sigma) \\mathbf{V}^*, \\\\[3mu]\n\\mathbf{M} \\mathbf{M}^* &= \\mathbf{U} \\mathbf \\Sigma \\mathbf{V}^*\\, \\mathbf{V} \\mathbf \\Sigma^* \\mathbf{U}^* = \\mathbf{U} (\\mathbf \\Sigma \\mathbf \\Sigma^*) \\mathbf{U}^*.\n\\end{align}" }, { "math_id": 22, "text": "\n\\mathbf M^+ = \\mathbf V \\boldsymbol \\Sigma^+ \\mathbf U^\\ast,\n" }, { "math_id": 23, "text": "\\boldsymbol \\Sigma^+" }, { "math_id": 24, "text": "\\boldsymbol \\Sigma" }, { "math_id": 25, "text": " \\| \\mathbf x \\| = 1." }, { "math_id": 26, "text": "\\tilde{\\mathbf{M}}" }, { "math_id": 27, "text": "\\operatorname{rank}\\bigl(\\tilde{\\mathbf{M}}\\bigr) = r," }, { "math_id": 28, "text": "\n\\tilde{\\mathbf{M}} = \\mathbf{U} \\tilde{\\mathbf \\Sigma} \\mathbf{V}^*,\n" }, { "math_id": 29, "text": "\\tilde{\\mathbf \\Sigma}" }, { "math_id": 30, "text": "\n\\mathbf{M} = \\sum_i \\mathbf{A}_i\n= \\sum_i \\sigma_i \\mathbf U_i \\otimes \\mathbf V_i.\n" }, { "math_id": 31, "text": "\n\\alpha = \\frac{\\sigma_1^2}{\\sum_i \\sigma_i^2},\n" }, { "math_id": 32, "text": "\\mathbf M = \\mathbf R \\mathbf P = \\mathbf P' \\mathbf R" }, { "math_id": 33, "text": "\n\\mathbf{O} = \\underset\\Omega\\operatorname{argmin} \\|\\mathbf{A}\\boldsymbol{\\Omega} - \\mathbf{B}\\|_F \\quad\\text{subject to}\\quad \\boldsymbol{\\Omega}^\\operatorname{T}\\boldsymbol{\\Omega} = \\mathbf{I},\n" }, { "math_id": 34, "text": "\\| \\cdot \\|_F" }, { "math_id": 35, "text": "\\mathbf M = \\mathbf A^\\operatorname{T} \\mathbf B" }, { "math_id": 36, "text": "\\kappa := \\sigma_\\text{max} / \\sigma_\\text{min}" }, { "math_id": 37, "text": " f : \\left\\{ \\begin{align}\n\\R^n &\\to \\R \\\\\n\\mathbf{x} &\\mapsto \\mathbf{x}^\\operatorname{T} \\mathbf{M} \\mathbf{x}\n\\end{align}\\right." }, { "math_id": 38, "text": "\\{\\|\\mathbf x\\| = 1\\}." }, { "math_id": 39, "text": "\\nabla \\mathbf{u}^\\operatorname{T} \\mathbf{M} \\mathbf{u} - \\lambda \\cdot \\nabla \\mathbf{u}^\\operatorname{T} \\mathbf{u} = 0" }, { "math_id": 40, "text": "\\nabla \\mathbf{x}^\\operatorname{T} \\mathbf{M} \\mathbf{x} - \\lambda \\cdot \\nabla \\mathbf{x}^\\operatorname{T} \\mathbf{x} = 2(\\mathbf{M}-\\lambda \\mathbf{I})\\mathbf{x}." 
}, { "math_id": 41, "text": "\\mathbf{M}" }, { "math_id": 42, "text": "\\mathbf{M}^* \\mathbf{M}" }, { "math_id": 43, "text": "\n\\mathbf V^* \\mathbf M^* \\mathbf M \\mathbf V\n= \\bar\\mathbf{D}\n= \\begin{bmatrix} \\mathbf{D} & 0 \\\\ 0 & 0\\end{bmatrix},\n" }, { "math_id": 44, "text": "\\mathbf{D}" }, { "math_id": 45, "text": "\\ell\\times \\ell" }, { "math_id": 46, "text": "\\ell" }, { "math_id": 47, "text": "\\ell\\le\\min(n,m)" }, { "math_id": 48, "text": "i" }, { "math_id": 49, "text": "\\bar{\\mathbf{D}}_{ii}" }, { "math_id": 50, "text": "j" }, { "math_id": 51, "text": "j>\\ell" }, { "math_id": 52, "text": "\\bar{\\mathbf{D}}_{jj}=0" }, { "math_id": 53, "text": "\\mathbf{V}=\\begin{bmatrix}\\mathbf{V}_1 &\\mathbf{V}_2\\end{bmatrix}" }, { "math_id": 54, "text": "\\mathbf{V}_1" }, { "math_id": 55, "text": "\\mathbf{V}_2" }, { "math_id": 56, "text": "\n\\begin{bmatrix} \\mathbf{V}_1^* \\\\ \\mathbf{V}_2^* \\end{bmatrix}\n\\mathbf{M}^* \\mathbf{M}\\, \\begin{bmatrix} \\mathbf{V}_1 & \\!\\! \\mathbf{V}_2 \\end{bmatrix}\n= \\begin{bmatrix}\n \\mathbf{V}_1^* \\mathbf{M}^* \\mathbf{M} \\mathbf{V}_1 & \\mathbf{V}_1^* \\mathbf{M}^* \\mathbf{M} \\mathbf{V}_2 \\\\\n \\mathbf{V}_2^* \\mathbf{M}^* \\mathbf{M} \\mathbf{V}_1 & \\mathbf{V}_2^* \\mathbf{M}^* \\mathbf{M} \\mathbf{V}_2\n\\end{bmatrix}\n= \\begin{bmatrix} \\mathbf{D} & 0 \\\\ 0 & 0 \\end{bmatrix}." }, { "math_id": 57, "text": "\n\\mathbf{V}_1^* \\mathbf{M}^* \\mathbf{M} \\mathbf{V}_1\n= \\mathbf{D}, \\quad \\mathbf{V}_2^* \\mathbf{M}^* \\mathbf{M} \\mathbf{V}_2\n= \\mathbf{0}.\n" }, { "math_id": 58, "text": "\\mathbf{M}\\mathbf{V}_2 = \\mathbf{0}" }, { "math_id": 59, "text": "\\begin{align} \n\\mathbf{V}_1^* \\mathbf{V}_1 &= \\mathbf{I}_1, \\\\\n\\mathbf{V}_2^* \\mathbf{V}_2 &= \\mathbf{I}_2, \\\\\n\\mathbf{V}_1 \\mathbf{V}_1^* + \\mathbf{V}_2 \\mathbf{V}_2^* &= \\mathbf{I}_{12},\n\\end{align}" }, { "math_id": 60, "text": "\n\\mathbf{U}_1 = \\mathbf{M} \\mathbf{V}_1 \\mathbf{D}^{-\\frac{1}{2}}.\n" }, { "math_id": 61, "text": "\n\\mathbf{U}_1 \\mathbf{D}^\\frac{1}{2} \\mathbf{V}_1^* = \\mathbf{M} \\mathbf{V}_1 \\mathbf{D}^{-\\frac{1}{2}} \\mathbf{D}^\\frac{1}{2} \\mathbf{V}_1^* = \\mathbf{M} (\\mathbf{I} - \\mathbf{V}_2\\mathbf{V}_2^*) = \\mathbf{M} - (\\mathbf{M}\\mathbf{V}_2)\\mathbf{V}_2^* = \\mathbf{M},\n" }, { "math_id": 62, "text": "\\mathbf{M}\\mathbf{V}_2 = \\mathbf{0}. 
" }, { "math_id": 63, "text": "\\mathbf{M}\\mathbf{V}_1\\mathbf{V}_1^* = \\mathbf{M}" }, { "math_id": 64, "text": "\\{\\boldsymbol v_i\\}_{i=1}^\\ell" }, { "math_id": 65, "text": "\\{\\lambda_i\\}_{i=1}^\\ell" }, { "math_id": 66, "text": "\\{\\mathbf M \\boldsymbol v_i\\}_{i=1}^\\ell" }, { "math_id": 67, "text": "\\bigl\\{\\lambda_i^{-1/2}\\mathbf M \\boldsymbol v_i\\bigr\\}\\vphantom|_{i=1}^\\ell" }, { "math_id": 68, "text": "\\mathbf{U}_1" }, { "math_id": 69, "text": "m" }, { "math_id": 70, "text": "n" }, { "math_id": 71, "text": "\n\\mathbf{U}_1^*\\mathbf{U}_1 = \\mathbf{D}^{-\\frac{1}{2}}\\mathbf{V}_1^*\\mathbf{M}^*\\mathbf{M} \\mathbf{V}_1 \\mathbf{D}^{-\\frac{1}{2}}=\\mathbf{D}^{-\\frac{1}{2}}\\mathbf{D}\\mathbf{D}^{-\\frac{1}{2}} = \\mathbf{I_1},\n" }, { "math_id": 72, "text": "\\mathbf{U}_2" }, { "math_id": 73, "text": "\\mathbf{U} = \\begin{bmatrix} \\mathbf{U}_1 & \\mathbf{U}_2 \\end{bmatrix}" }, { "math_id": 74, "text": "\n\\mathbf \\Sigma =\n\\begin{bmatrix}\n \\begin{bmatrix} \\mathbf{D}^\\frac{1}{2} & 0 \\\\ 0 & 0 \\end{bmatrix} \\\\\n 0\n\\end{bmatrix},\n" }, { "math_id": 75, "text": "m\\times n" }, { "math_id": 76, "text": "\n\\begin{bmatrix} \\mathbf{U}_1 & \\mathbf{U}_2 \\end{bmatrix}\n\\begin{bmatrix}\n \\begin{bmatrix} \\mathbf{}D^\\frac{1}{2} & 0 \\\\ 0 & 0 \\end{bmatrix} \\\\\n 0 \\end{bmatrix}\n\\begin{bmatrix} \\mathbf{V}_1 & \\mathbf{V}_2 \\end{bmatrix}^*\n= \\begin{bmatrix} \\mathbf{U}_1 & \\mathbf{U}_2 \\end{bmatrix}\n\\begin{bmatrix} \\mathbf{D}^\\frac{1}{2} \\mathbf{V}_1^* \\\\ 0 \\end{bmatrix}\n= \\mathbf{U}_1 \\mathbf{D}^\\frac{1}{2} \\mathbf{V}_1^* = \\mathbf{M},\n" }, { "math_id": 77, "text": "\n\\mathbf{M} = \\mathbf{U} \\mathbf \\Sigma \\mathbf{V}^*.\n" }, { "math_id": 78, "text": "(k-1)" }, { "math_id": 79, "text": " \\mathbb{R}^k " }, { "math_id": 80, "text": "\\sigma(\\mathbf{u}, \\mathbf{v}) = \\mathbf{u}^\\operatorname{T} \\mathbf{M} \\mathbf{v}," }, { "math_id": 81, "text": "\\mathbf{u} \\in S^{m-1}," }, { "math_id": 82, "text": "\\mathbf{v} \\in S^{n-1}." 
}, { "math_id": 83, "text": "\n\\nabla \\sigma\n= \\nabla \\mathbf{u}^\\operatorname{T} \\mathbf{M} \\mathbf{v}\n - \\lambda_1 \\cdot \\nabla \\mathbf{u}^\\operatorname{T} \\mathbf{u}\n - \\lambda_2 \\cdot \\nabla \\mathbf{v}^\\operatorname{T} \\mathbf{v}\n" }, { "math_id": 84, "text": " \\begin{align}\n\\mathbf{M} \\mathbf{v}_1 &= 2 \\lambda_1 \\mathbf{u}_1 + 0, \\\\\n\\mathbf{M}^\\operatorname{T} \\mathbf{u}_1 &= 0 + 2 \\lambda_2 \\mathbf{v}_1.\n\\end{align}" }, { "math_id": 85, "text": " \\| \\mathbf u \\| = \\| \\mathbf v \\| = 1" }, { "math_id": 86, "text": "\n\\sigma_1 = 2\\lambda_1 = 2\\lambda_2.\n" }, { "math_id": 87, "text": "\\begin{align}\n\\mathbf{M} \\mathbf{v}_1 &= \\sigma_1 \\mathbf{u}_1, \\\\\n\\mathbf{M}^\\operatorname{T} \\mathbf{u}_1 &= \\sigma_1 \\mathbf{v}_1.\n\\end{align}" }, { "math_id": 88, "text": "\nM\\leftarrow MJ(p, q, \\theta),\n" }, { "math_id": 89, "text": "\\theta" }, { "math_id": 90, "text": "J(p,q,\\theta)" }, { "math_id": 91, "text": "p" }, { "math_id": 92, "text": "q" }, { "math_id": 93, "text": "(p,q)" }, { "math_id": 94, "text": "(p=1\\dots m,q=p+1\\dots m)" }, { "math_id": 95, "text": "M=USV^T" }, { "math_id": 96, "text": "V" }, { "math_id": 97, "text": "U" }, { "math_id": 98, "text": "M" }, { "math_id": 99, "text": "R" }, { "math_id": 100, "text": "\nM \\leftarrow J^TGMJ\n" }, { "math_id": 101, "text": "G" }, { "math_id": 102, "text": "J" }, { "math_id": 103, "text": "U\\leftarrow UG^TJ" }, { "math_id": 104, "text": "V\\leftarrow VJ" }, { "math_id": 105, "text": "\n\\begin{bmatrix}\n \\mathbf{0} & \\mathbf{M} \\\\\n \\mathbf{M}^* & \\mathbf{0}\n\\end{bmatrix}.\n" }, { "math_id": 106, "text": "\\mathbf{M} = z_0\\mathbf{I} + z_1\\sigma_1 + z_2\\sigma_2 + z_3\\sigma_3" }, { "math_id": 107, "text": "z_i \\in \\mathbb{C}" }, { "math_id": 108, "text": "\\begin{align}\n\\sigma_\\pm\n&= \\sqrt{|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2 \\pm\n \\sqrt{\\bigl(|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2\\bigr)^2 - |z_0^2 - z_1^2 - z_2^2 - z_3^2|^2}} \\\\\n&= \\sqrt{|z_0|^2 + |z_1|^2 + |z_2|^2 + |z_3|^2 \\pm \n 2\\sqrt{(\\operatorname{Re}z_0z_1^*)^2 + (\\operatorname{Re}z_0z_2^*)^2 +\n (\\operatorname{Re}z_0z_3^*)^2 + (\\operatorname{Im}z_1z_2^*)^2 + \n (\\operatorname{Im}z_2z_3^*)^2 + (\\operatorname{Im}z_3z_1^*)^2}}\n\\end{align}" }, { "math_id": 109, "text": "\n\\mathbf{M} = \\mathbf{U}_k \\mathbf \\Sigma_k \\mathbf{V}^*_k,\n" }, { "math_id": 110, "text": "k = \\min(m, n)," }, { "math_id": 111, "text": "\n\\mathbf{M} = \\mathbf U_r \\mathbf \\Sigma_r \\mathbf V_r^*.\n" }, { "math_id": 112, "text": "\n\\tilde{\\mathbf{M}} = \\mathbf{U}_t \\mathbf \\Sigma_t \\mathbf{V}_t^*,\n" }, { "math_id": 113, "text": "\\ell^2" }, { "math_id": 114, "text": "\n\\| \\mathbf M \\| = \\| \\mathbf M^* \\mathbf M \\|^\\frac{1}{2}\n" }, { "math_id": 115, "text": " \\|\\mathbf M^* \\mathbf M\\|^{1/2} " }, { "math_id": 116, "text": "\\| \\mathbf M \\| = \\operatorname{Tr}(\\mathbf M^* \\mathbf M)^{1/2}" }, { "math_id": 117, "text": "\n\\langle \\mathbf{M}, \\mathbf{N} \\rangle\n= \\operatorname{tr} \\left( \\mathbf{N}^*\\mathbf{M} \\right).\n" }, { "math_id": 118, "text": "\n\\| \\mathbf{M} \\|\n= \\sqrt{\\langle \\mathbf{M}, \\mathbf{M} \\rangle}\n= \\sqrt{\\operatorname{tr} \\left( \\mathbf{M}^*\\mathbf{M} \\right)}.\n" }, { "math_id": 119, "text": "\n\\| \\mathbf{M} \\| = \\sqrt{\\vphantom\\bigg|\\sum_i \\sigma_i ^2}\n" }, { "math_id": 120, "text": "\n\\sqrt{\\vphantom\\bigg|\\sum_{ij} | m_{ij} |^2}.\n" }, { "math_id": 121, "text": "\n\\mathbf{M} = \\mathbf{U} T_f \\mathbf{V}^*\n" }, { 
"math_id": 122, "text": "\n\\begin{bmatrix} U_1 \\\\ U_2 \\end{bmatrix}\n" }, { "math_id": 123, "text": "\n\\mathbf M = \\mathbf U \\mathbf V^* \\cdot \\mathbf V T_f \\mathbf V^*\n" }, { "math_id": 124, "text": "\n\\mathbf{M} \\psi = \\mathbf{U} T_f \\mathbf{V}^* \\psi = \\sum_i \\left \\langle \\mathbf{U} T_f \\mathbf{V}^* \\psi, \\mathbf{U} e_i \\right \\rangle \\mathbf{U} e_i = \\sum_i \\sigma_i \\left \\langle \\psi, \\mathbf{V} e_i \\right \\rangle \\mathbf{U} e_i,\n" }, { "math_id": 125, "text": "\\sigma_k" } ]
https://en.wikipedia.org/wiki?curid=142207
14220752
Trans-epoxysuccinate hydrolase
In enzymology, a trans-epoxysuccinate hydrolase (EC 3.3.2.4) is an enzyme that catalyzes the chemical reaction trans-2,3-epoxysuccinate + H2O formula_0 meso-tartrate Thus, the two substrates of this enzyme are trans-2,3-epoxysuccinate and H2O, whereas its product is meso-tartrate. This enzyme belongs to the family of hydrolases, specifically those acting on ether bonds (ether hydrolases). The systematic name of this enzyme class is trans-2,3-epoxysuccinate hydrolase. Other names in common use include trans-epoxysuccinate hydratase, and tartrate epoxydase. This enzyme participates in glyoxylate and dicarboxylate metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14220752
14221211
2-C-methyl-D-erythritol 2,4-cyclodiphosphate synthase
Class of enzymes 2-"C"-Methyl-D-erythritol 2,4-cyclodiphosphate synthase (MEcPP synthase, IspF, EC 4.6.1.12) is a zinc-dependent enzyme and a member of the YgbB N terminal protein domain, which participates in the MEP pathway (non-mevalonate pathway) of isoprenoid precursor biosynthesis. It catalyzes the following reaction: 4-diphosphocytidyl-2-"C"-methyl-D-erythritol 2-phosphate formula_0 2-"C"-methyl-D-erythritol 2,4-cyclodiphosphate + CMP The enzyme is considered a phosphorus-oxygen lyase. The systematic name of this enzyme class is 2-phospho-4-(cytidine 5′-diphospho)-2-"C"-methyl-D-erythritol CMP-lyase (cyclizing; 2-"C"-methyl-D-erythritol 2,4-cyclodiphosphate-forming). Other names in common use include IspF, YgbB and MEcPP synthase. Structural studies. As of late 2007, 20 structures have been solved for this class of enzymes, with PDB accession codes 1GX1, 1H47, 1H48, 1IV1, 1IV2, 1IV3, 1IV4, 1T0A, 1U3L, 1U3P, 1U40, 1U43, 1VH8, 1VHA, 1W55, 1W57, 1YQN, 2AMT, 2GZL, and 2PMP.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221211
14221400
3-aminobutyryl-CoA ammonia-lyase
Class of enzymes The enzyme 3-aminobutyryl-CoA ammonia-lyase (EC 4.3.1.14) catalyzes the chemical reaction L-3-aminobutyryl-CoA formula_0 crotonoyl-CoA + NH3 This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is L-3-aminobutyryl-CoA ammonia-lyase (crotonoyl-CoA-forming). Other names in common use include L-3-aminobutyryl-CoA deaminase, and L-3-aminobutyryl-CoA ammonia-lyase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221400
14221446
3-ketovalidoxylamine C-N-lyase
Enzyme The enzyme 3-ketovalidoxylamine C-N-lyase (EC 4.3.3.1) catalyzes the chemical reaction 4-nitrophenyl-3-ketovalidamine formula_0 4-nitroaniline + 5--(5/6)-5-"C"-(hydroxymethyl)-2,6-dihydroxycyclohex-2-en-1-one This enzyme belongs to the family of lyases, specifically amine lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is 4-nitrophenyl-3-ketovalidamine 4-nitroaniline-lyase [5--(5/6)-5-"C"-(hydroxymethyl)-2,6-dihydroxycyclohex-2-en-1-one-forming]. Other names in common use include 3-ketovalidoxylamine A C-N-lyase, "p"-nitrophenyl-3-ketovalidamine "p"-nitroaniline lyase, and 4-nitrophenyl-3-ketovalidamine 4-nitroaniline-lyase. It employs one cofactor, Ca2+. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221446
14221481
Aspartate ammonia-lyase
The enzyme aspartate ammonia-lyase (EC 4.3.1.1) catalyzes the chemical reaction L-aspartate formula_0 fumarate + NH3 The reaction is the basis of the industrial synthesis of aspartate. This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is L-aspartate ammonia-lyase (fumarate-forming). Other names in common use include aspartase, fumaric aminase, L-aspartase, and L-aspartate ammonia-lyase. This enzyme participates in alanine and aspartate metabolism and nitrogen metabolism. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1J3U and 1JSW. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221481
14221496
Beta-alanyl-CoA ammonia-lyase
The enzyme β-Alanyl-CoA ammonia-lyase (EC 4.3.1.6) catalyzes the chemical reaction β-alanyl-CoA formula_0 acryloyl-CoA + NH3 This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is β-alanyl-CoA ammonia-lyase (acryloyl-CoA-forming). This enzyme is also called β-alanyl coenzyme A ammonia-lyase. This enzyme participates in β-alanine metabolism and propanoate metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221496
14221550
Cysteine lyase
The enzyme cysteine lyase (EC 4.4.1.10) catalyzes the chemical reaction L-cysteine + sulfite formula_0 L-cysteate + hydrogen sulfide This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is L-cysteine hydrogen-sulfide-lyase (adding sulfite; L-cysteate-forming). Other names in common use include cysteine (sulfite) lyase, and L-cysteine hydrogen-sulfide-lyase (adding sulfite). This enzyme participates in cysteine and taurine metabolism. It employs one cofactor, pyridoxal phosphate. Evolution. Genes encoding cysteine lyase (CL) originated around 300 million years ago by a tandem gene duplication and neofunctionalization of cystathionine β-synthase (CBS) shortly after the split of mammalian and reptilian lineages. CL genes are found only in "Sauropsida" where they are involved in a metabolic pathway for sulfur metabolism in the chicken egg. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221550
14221581
Hosoya index
Number of matchings in a graph The Hosoya index, also known as the Z index, of a graph is the total number of matchings in it. The Hosoya index is always at least one, because the empty set of edges is counted as a matching for this purpose. Equivalently, the Hosoya index is the number of non-empty matchings plus one. The index is named after Haruo Hosoya. It is used as a topological index in chemical graph theory. Complete graphs have the largest Hosoya index for any given number of vertices; their Hosoya indices are the telephone numbers. History. This graph invariant was introduced by Haruo Hosoya in 1971. It is often used in chemoinformatics for investigations of organic compounds. In his article, "The Topological Index Z Before and After 1971," on the history of the notion and the associated inside stories, Hosoya writes that he introduced the Z index to report a good correlation of the boiling points of alkane isomers and their Z indices, based on his unpublished 1957 work carried out while he was an undergraduate student at the University of Tokyo. Example. A linear alkane, for the purposes of the Hosoya index, may be represented as a path graph without any branching. A path with one vertex and no edges (corresponding to the methane molecule) has one (empty) matching, so its Hosoya index is one; a path with one edge (ethane) has two matchings (one with zero edges and one with one edge), so its Hosoya index is two. Propane (a length-two path) has three matchings: either of its edges, or the empty matching. "n"-butane (a length-three path) has five matchings, distinguishing it from isobutane, which has four. More generally, a matching in a path with formula_0 edges either forms a matching in the first formula_1 edges, or it forms a matching in the first formula_2 edges together with the final edge of the path. This case analysis shows that the Hosoya indices of linear alkanes obey the recurrence governing the Fibonacci numbers, and because they also have the same base case they must equal the Fibonacci numbers. The structure of the matchings in these graphs may be visualized using a Fibonacci cube. The largest possible value of the Hosoya index, on a graph with formula_3 vertices, is given by the complete graph formula_4. The Hosoya indices for the complete graphs are the telephone numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... These numbers can be expressed by a summation formula involving factorials, as formula_5 Every graph that is not complete has a smaller Hosoya index than this upper bound. Algorithms. The Hosoya index is #P-complete to compute, even for planar graphs. However, it may be calculated by evaluating the matching polynomial "mG" at the argument 1. Based on this evaluation, the calculation of the Hosoya index is fixed-parameter tractable for graphs of bounded treewidth and polynomial (with an exponent that depends linearly on the width) for graphs of bounded clique-width. The Hosoya index can be efficiently approximated to any desired constant approximation ratio using a fully-polynomial randomized approximation scheme. Notes. <templatestyles src="Reflist/styles.css" />
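A brute-force Python sketch of the definition (illustrative only; it enumerates matchings edge by edge, is exponential in the number of edges, and so is suitable only for small graphs; edges are given as vertex-index pairs, a representation chosen here for convenience):

```python
def hosoya_index(edges):
    # Count all matchings (the empty one included) by deciding, edge by
    # edge, whether it joins the matching; 'used' is a vertex bitmask.
    edges = list(edges)

    def count(i, used):
        if i == len(edges):
            return 1
        u, v = edges[i]
        total = count(i + 1, used)                          # skip edge i
        if not used & (1 << u) and not used & (1 << v):
            total += count(i + 1, used | 1 << u | 1 << v)   # take edge i
        return total

    return count(0, 0)

# Path graphs (linear alkanes) give Fibonacci numbers: 1, 2, 3, 5.
for n in range(1, 5):
    print(n, hosoya_index((i, i + 1) for i in range(n - 1)))

# Isobutane as the star K_{1,3}: Hosoya index 4.
print(hosoya_index([(0, 1), (0, 2), (0, 3)]))
```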
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "k-1" }, { "math_id": 2, "text": "k-2" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "K_n" }, { "math_id": 5, "text": " \\sum_{k=0}^{\\lfloor n/2 \\rfloor} \\frac{n!}{2^k \\cdot k! \\cdot \\left(n - 2k\\right)! }. " } ]
https://en.wikipedia.org/wiki?curid=14221581
14221601
D-cysteine desulfhydrase
The enzyme D-cysteine desulfhydrase (EC 4.4.1.15) catalyzes the chemical reaction D-cysteine + H2O formula_0 sulfide + NH3 + pyruvate This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is D-cysteine sulfide-lyase (deaminating; pyruvate-forming). Other names in common use include D-cysteine lyase, and D-cysteine sulfide-lyase (deaminating). This enzyme participates in cysteine metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221601
14221626
Deacetylipecoside synthase
The enzyme deacetylipecoside synthase (EC 4.3.3.4) catalyzes the chemical reaction deacetylipecoside + H2O formula_0 dopamine + secologanin This enzyme belongs to the family of lyases, specifically amine lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is deacetylipecoside dopamine-lyase (secologanin-forming). This enzyme is also called deacetylipecoside dopamine-lyase. It participates in indole and ipecac alkaloid biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221626
14221645
Deacetylisoipecoside synthase
The enzyme deacetylisoipecoside synthase (EC 4.3.3.3) catalyzes the chemical reaction deacetylisoipecoside + H2O formula_0 dopamine + secologanin This enzyme belongs to the family of lyases, specifically amine lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is deacetylisoipecoside dopamine-lyase (secologanin-forming). It is also called deacetylisoipecoside dopamine-lyase. It participates in indole and ipecac alkaloid biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221645
14221658
Diaminopropionate ammonia-lyase
The enzyme diaminopropionate ammonia-lyase (EC 4.3.1.15) catalyzes the chemical reaction 2,3-diaminopropanoate + H2O formula_0 pyruvate + 2 NH3 This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is 2,3-diaminopropanoate ammonia-lyase (adding water; pyruvate-forming). Other names in common use include diaminopropionatase, α,β-diaminopropionate ammonia-lyase, 2,3-diaminopropionate ammonia-lyase, and 2,3-diaminopropanoate ammonia-lyase. It employs one cofactor, pyridoxal phosphate. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221658
14221682
Dihydroxyphenylalanine ammonia-lyase
In enzymology, a dihydroxyphenylalanine ammonia-lyase (EC 4.3.1.11, entry deleted) is a non-existing enzyme that catalyzes the chemical reaction 3,4-dihydroxy-L-phenylalanine formula_0 trans-caffeate + NH3 Hence, this enzyme has one substrate, 3,4-dihydroxy-L-phenylalanine (L-DOPA), and two products, trans-caffeate and NH3. This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is 3,4-dihydroxy-L-phenylalanine ammonia-lyase (trans-caffeate-forming). Other names in common use include beta-(3,4-dihydroxyphenyl)-L-alanine (DOPA) ammonia-lyase, and 3,4-dihydroxy-L-phenylalanine ammonia-lyase. This enzyme participates in tyrosine metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221682
14221698
Dimethylpropiothetin dethiomethylase
The enzyme dimethylpropiothetin dethiomethylase (EC 4.4.1.3) catalyzes the chemical reaction "S,S"-dimethyl-β-propiothetin formula_0 dimethyl sulfide + acrylate The enzyme breaks "S,S"-dimethyl-β-propiothetin into dimethyl sulfide and acrylate. This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is "S,S"-dimethyl-β-propiothetin dimethyl-sulfide-lyase (acrylate-forming). Other names in common use include desulfhydrase, and "S,S"-dimethyl-beta-propiothetin dimethyl-sulfide-lyase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221698
14221724
Erythro-3-hydroxyaspartate ammonia-lyase
The enzyme "erythro"-3-hydroxyaspartate ammonia-lyase (EC 4.3.1.20) catalyzes the chemical reaction "erythro"-3-hydroxy-L-aspartate formula_0 oxaloacetate + NH3 This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is "erythro"-3-hydroxy-L-aspartate ammonia-lyase (oxaloacetate-forming). Other names in common use include "erythro"-β-hydroxyaspartate dehydratase, "erythro"-3-hydroxyaspartate dehydratase, "erythro"-3-hydroxy-L-aspartate hydro-lyase (deaminating), and "erythro"-3-hydroxy-L-aspartate ammonia-lyase. It employs one cofactor, pyridoxal phosphate. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221724
14221746
Ethanolamine ammonia-lyase
The enzyme ethanolamine ammonia-lyase (EC 4.3.1.7) catalyzes the chemical reaction ethanolamine formula_0 acetaldehyde + NH3 This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is ethanolamine ammonia-lyase (acetaldehyde-forming). It is also called ethanolamine deaminase. It participates in glycerophospholipid metabolism. It employs one cofactor, adenosylcobalamin. Structural studies. As of early 2011, several structures have been solved for this class of enzymes. The first structure solved was the active-site-containing EutB subunit of EAL from Listeria monocytogenes, with the PDB accession code 2QEZ. Later, more structures have become available from Escherichia coli that include both EAL subunits bound to various ligands. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221746
14221763
FAD-AMP lyase (cyclizing)
The enzyme FAD-AMP lyase (cyclizing) (EC 4.6.1.15) catalyzes the reaction FAD formula_0 AMP + riboflavin cyclic-4′,5′-phosphate This enzyme belongs to the family of lyases, specifically the class of phosphorus-oxygen lyases. The systematic name of this enzyme class is FAD AMP-lyase (riboflavin-cyclic-4′,5′-phosphate-forming). Other names in common use include FMN cyclase and FAD AMP-lyase (cyclic-FMN-forming). References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221763
14221777
Formimidoyltetrahydrofolate cyclodeaminase
In enzymology, a formimidoyltetrahydrofolate cyclodeaminase (EC 4.3.1.4) is an enzyme that catalyzes the chemical reaction 5-formimidoyltetrahydrofolate formula_0 5,10-methenyltetrahydrofolate + NH3 Hence, this enzyme has one substrate, 5-formimidoyltetrahydrofolate, and two products, 5,10-methenyltetrahydrofolate and NH3. This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is 5-formimidoyltetrahydrofolate ammonia-lyase (cyclizing; 5,10-methenyltetrahydrofolate-forming). Other names in common use include formiminotetrahydrofolate cyclodeaminase, and 5-formimidoyltetrahydrofolate ammonia-lyase (cyclizing). This enzyme participates in folate metabolism by catabolising histidine and adding to the C1-tetrahydrofolate pool. In mammals, this enzyme can be found as part of a bifunctional enzyme in a single polypeptide with glutamate formimidoyltransferase (EC 2.1.2.5), the enzyme activity that catalyses the previous step in the histidine catabolic pathway. This arrangement allows the 5-formimidoyltetrahydrofolate intermediate to move directly from one active site to another without being released into solution, in a process called substrate channeling. Structural studies. As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1O5H, 1TT9, and 2PFD. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14221777
1422246
Threshold voltage
Minimum source-to-gate voltage for a field effect transistor to be conducting from source to drain The threshold voltage, commonly abbreviated as Vth or VGS(th), of a field-effect transistor (FET) is the minimum gate-to-source voltage (VGS) that is needed to create a conducting path between the source and drain terminals. It is an important scaling factor to maintain power efficiency. When referring to a junction field-effect transistor (JFET), the threshold voltage is often called the pinch-off voltage instead. This is somewhat confusing, since "pinch off" applied to an insulated-gate field-effect transistor (IGFET) refers to the channel pinching that leads to current saturation behavior under high source–drain bias, even though the current is never off. Unlike "pinch off", the term "threshold voltage" is unambiguous and refers to the same concept in any field-effect transistor. Basic principles. In n-channel "enhancement-mode" devices, a conductive channel does not exist naturally within the transistor. With no VGS, dopant ions added to the body of the FET form a region with no mobile carriers called a depletion region. A positive VGS attracts free electrons within the body towards the gate, but enough electrons must be attracted near the gate to counter the dopant ions and form a conductive channel. This process is called "inversion". The conductive channel connects source to drain at the FET's "threshold voltage". At higher VGS, still more electrons are attracted towards the gate, which widens the channel. The reverse is true for the p-channel "enhancement-mode" MOS transistor. When VGS = 0 the device is “OFF” and the channel is open / non-conducting. The application of a negative gate voltage to the p-type "enhancement-mode" MOSFET enhances the channel's conductivity, turning it “ON”. In contrast, n-channel "depletion-mode" devices have a conductive channel naturally existing within the transistor. Accordingly, the term "threshold voltage" does not readily apply to "turning" such devices on, but is used instead to denote the voltage level at which the channel is wide enough to allow electrons to flow easily. This ease-of-flow threshold also applies to p-channel "depletion-mode" devices, in which a negative voltage from gate to body/source creates a depletion layer by forcing the positively charged holes away from the gate-insulator/semiconductor interface, leaving exposed a carrier-free region of immobile, negatively charged acceptor ions. For the n-channel depletion MOS transistor, a sufficiently negative VGS will deplete (hence its name) the conductive channel of its free electrons, switching the transistor “OFF”. Likewise, for a p-channel "depletion-mode" MOS transistor, a sufficiently positive gate-source voltage will deplete the channel of its free holes, turning it “OFF”. In wide planar transistors the threshold voltage is essentially independent of the drain–source voltage (VDS) and is therefore a well-defined characteristic; however, it is less clear in modern nanometer-sized MOSFETs due to drain-induced barrier lowering. In the figures, the source (left side) and drain (right side) are labeled "n+" to indicate heavily doped (blue) n-regions. The depletion layer dopant is labeled "NA−" to indicate that the ions in the (pink) depletion layer are negatively charged and there are very few holes. In the (red) bulk the number of holes "p = NA", making the bulk charge neutral. 
If the gate voltage is below the threshold voltage (left figure), the "enhancement-mode" transistor is turned off and ideally there is no current from the drain to the source of the transistor. In fact, there is a current even for gate biases below the threshold (the subthreshold leakage current), although it is small and varies exponentially with gate bias. Therefore, datasheets will specify the threshold voltage according to a specified measurable amount of current (commonly 250 μA or 1 mA). If the gate voltage is above the threshold voltage (right figure), the "enhancement-mode" transistor is turned on, because there are many electrons in the channel at the oxide-silicon interface, creating a low-resistance channel where charge can flow from drain to source. For voltages significantly above the threshold, this situation is called strong inversion. The channel is tapered when "VD" > 0 because the voltage drop due to the current in the resistive channel reduces the oxide field supporting the channel as the drain is approached. Body effect. The "body effect" is the change in the threshold voltage by an amount approximately equal to the change in the source-bulk voltage, formula_0, because the body influences the threshold voltage (when it is not tied to the source). It can be thought of as a second gate, and is sometimes referred to as the "back gate"; accordingly, the body effect is sometimes called the "back-gate effect". For an enhancement-mode nMOS MOSFET, the body effect upon threshold voltage is computed according to the Shichman–Hodges model, which is accurate for older process nodes, using the following equation: formula_1 where: formula_2 is the threshold voltage when substrate bias is present, formula_0 is the source-to-body substrate bias, formula_3 is the surface potential, formula_4 is the threshold voltage for zero substrate bias, formula_5 is the body effect parameter, formula_6 is the oxide thickness, formula_7 is the oxide permittivity, formula_8 is the permittivity of silicon, formula_9 is the doping concentration, and formula_10 is the elementary charge. Dependence on oxide thickness. In a given technology node, such as the 90-nm CMOS process, the threshold voltage depends on the choice of oxide and on oxide thickness. Using the body formulas above, formula_2 depends on the body effect parameter formula_11, which is directly proportional to formula_12, the oxide thickness. Thus, the thinner the oxide, the lower the threshold voltage. Although this may seem to be an improvement, it is not without cost, because the thinner the oxide, the higher the subthreshold leakage current through the device will be. Consequently, the design specification for 90-nm gate-oxide thickness was set at 1 nm to control the leakage current. This kind of tunneling is called Fowler–Nordheim tunneling: formula_13 where: formula_14 and formula_15 are constants, and formula_16 is the electric field across the gate oxide. Before scaling the design features down to 90 nm, a dual-oxide approach for creating the oxide thickness was a common solution to this issue. With a 90 nm process technology, a triple-oxide approach has been adopted in some cases. One standard thin oxide is used for most transistors, another for I/O driver cells, and a third for memory-and-pass transistor cells. These differences reflect purely the effect of oxide thickness on the threshold voltage of CMOS technologies.
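The Shichman–Hodges expression above is straightforward to evaluate numerically. Below is a minimal sketch in Python; all device values (zero-bias threshold, surface potential, oxide thickness, doping concentration) are illustrative assumptions rather than data for any particular process:

```python
import math

# Minimal sketch of the Shichman-Hodges body-effect formula above.
# All device values are assumed, illustrative numbers.

Q = 1.602e-19                 # elementary charge (C)
EPS_SI = 11.7 * 8.854e-12     # permittivity of silicon (F/m)
EPS_OX = 3.9 * 8.854e-12      # permittivity of SiO2 (F/m)

def threshold_voltage(v_t0, v_sb, two_phi_f, t_ox, n_a):
    """V_TN = V_T0 + gamma * (sqrt(|V_SB + 2phi_F|) - sqrt(|2phi_F|))."""
    gamma = (t_ox / EPS_OX) * math.sqrt(2.0 * Q * EPS_SI * n_a)
    return v_t0 + gamma * (math.sqrt(abs(v_sb + two_phi_f))
                           - math.sqrt(abs(two_phi_f)))

# Assumed values: 2 nm oxide, N_A = 1e17 cm^-3 = 1e23 m^-3, 2phi_F = 0.6 V.
for v_sb in (0.0, 0.5, 1.0):
    v_tn = threshold_voltage(v_t0=0.5, v_sb=v_sb, two_phi_f=0.6,
                             t_ox=2e-9, n_a=1e23)
    print(f"V_SB = {v_sb:.1f} V -> V_TN = {v_tn:.3f} V")
```

Because the body-effect parameter in this model is proportional to the oxide thickness, halving the assumed oxide thickness in the sketch halves the computed body effect, consistent with the oxide-thickness dependence described above.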
Temperature dependence. As with oxide thickness, temperature has an effect on the threshold voltage of a CMOS device. Expanding on part of the equation in the body effect section: formula_17 where: formula_18 is half the contact potential, formula_19 is the Boltzmann constant, formula_20 is temperature, formula_10 is the elementary charge, formula_9 is a doping parameter, and formula_21 is the intrinsic doping parameter for the substrate. The surface potential thus has a direct relationship with temperature. As seen above, the threshold voltage does not have a direct relationship with temperature, but it is not independent of these effects. This variation is typically between −4 mV/K and −2 mV/K depending on doping level. For a change of 30 °C, this results in significant variation from the 500 mV design parameter commonly used for the 90-nm technology node. Dependence on random dopant fluctuation. Random dopant fluctuation (RDF) is a form of process variation resulting from variation in the implanted impurity concentration. In MOSFET transistors, RDF in the channel region can alter the transistor's properties, especially threshold voltage. In newer process technologies RDF has a larger effect because the total number of dopants is fewer. Research is being carried out to suppress the dopant fluctuation that leads to threshold-voltage variation between devices undergoing the same manufacturing process. References. <templatestyles src="Reflist/styles.css" />
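As a small numerical companion to the temperature relation above, the following sketch evaluates the Fermi potential at a few temperatures. The doping values are assumed for illustration, and the strong temperature dependence of the intrinsic concentration is deliberately ignored, so only the explicit kT/q factor is visible:

```python
import math

# Sketch of phi_F = (kT/q) * ln(N_A / n_i) with assumed doping values.
# Caveat: in a real device n_i rises steeply with temperature, which
# this simplified sketch does not model.

K = 1.381e-23    # Boltzmann constant (J/K)
Q = 1.602e-19    # elementary charge (C)

def fermi_potential(temp_k, n_a=1e17, n_i=1e10):   # concentrations in cm^-3
    return (K * temp_k / Q) * math.log(n_a / n_i)

for temp_k in (250.0, 300.0, 350.0):
    phi_f_mv = fermi_potential(temp_k) * 1e3
    print(f"T = {temp_k:.0f} K -> phi_F = {phi_f_mv:.0f} mV")
```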
[ { "math_id": 0, "text": "V_{SB}" }, { "math_id": 1, "text": "V_{TN} = V_{TO} + \\gamma\\left( \\sqrt{\\left| V_{SB} + 2\\phi_F \\right|} - \\sqrt{\\left| 2\\phi_F \\right|} \\right)" }, { "math_id": 2, "text": "V_{TN}" }, { "math_id": 3, "text": "2\\phi_F" }, { "math_id": 4, "text": "V_{TO}" }, { "math_id": 5, "text": "\\gamma = \\left(t_{ox}/\\epsilon_{ox}\\right)\\sqrt{2q\\epsilon_\\text{Si} N_A}" }, { "math_id": 6, "text": "t_{ox}" }, { "math_id": 7, "text": "\\epsilon_{ox}" }, { "math_id": 8, "text": "\\epsilon_\\text{Si}" }, { "math_id": 9, "text": "N_A" }, { "math_id": 10, "text": "q" }, { "math_id": 11, "text": "\\gamma" }, { "math_id": 12, "text": "t_{OX}" }, { "math_id": 13, "text": "I_{fn} = C_1WL(E_{ox})^2e^{-\\frac{E_0}{E_{ox}}}" }, { "math_id": 14, "text": "C_1" }, { "math_id": 15, "text": "E_0" }, { "math_id": 16, "text": "E_{ox}" }, { "math_id": 17, "text": "\\phi_F = \\left(\\frac{kT}{q}\\right) \\ln{\\left(\\frac{N_A}{n_i}\\right)}" }, { "math_id": 18, "text": "\\phi_F" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": "T" }, { "math_id": 21, "text": "n_i" } ]
https://en.wikipedia.org/wiki?curid=1422246
14223173
Wiener index
Topological index of a molecule In chemical graph theory, the Wiener index (also Wiener number), introduced by Harry Wiener, is a topological index of a molecule, defined as the sum of the lengths of the shortest paths between all pairs of vertices in the chemical graph representing the non-hydrogen atoms in the molecule. The Wiener index can also be used for the representation of computer networks and for enhancing lattice hardware security. History. The Wiener index is named after Harry Wiener, who introduced it in 1947; at the time, Wiener called it the "path number". It is the oldest topological index related to molecular branching. Based on its success, many other topological indexes of chemical graphs, based on information in the distance matrix of the graph, have been developed subsequently to Wiener's work. The same quantity has also been studied in pure mathematics, under various names including the gross status, the distance of a graph, and the transmission. The Wiener index is also closely related to the closeness centrality of a vertex in a graph, a quantity inversely proportional to the sum of all distances between the given vertex and all other vertices, which has been frequently used in sociometry and the theory of social networks. Example. Butane (C4H10) has two different structural isomers: "n"-butane, with a linear structure of four carbon atoms, and isobutane, with a branched structure. The chemical graph for "n"-butane is a four-vertex path graph, and the chemical graph for isobutane is a tree with one central vertex connected to three leaves. The "n"-butane molecule has three pairs of vertices at distance one from each other, two pairs at distance two, and one pair at distance three, so its Wiener index is formula_0 The isobutane molecule has three pairs of vertices at distance one from each other (the three leaf-center pairs), and three pairs at distance two (the leaf-leaf pairs). Therefore, its Wiener index is formula_1 These numbers are instances of formulas for special cases of the Wiener index: it is formula_2 for any formula_3-vertex path graph such as the graph of "n"-butane, and formula_4 for any formula_3-vertex star such as the graph of isobutane. Thus, even though these two molecules have the same chemical formula, and the same numbers of carbon-carbon and carbon-hydrogen bonds, their different structures give rise to different Wiener indices. Relation to chemical properties. Wiener showed that the Wiener index is closely correlated with the boiling points of alkane molecules. Later work on quantitative structure–activity relationships showed that it is also correlated with other quantities including the parameters of its critical point, the density, surface tension, and viscosity of its liquid phase, and the van der Waals surface area of the molecule. Calculation in arbitrary graphs. The Wiener index may be calculated directly using an algorithm for computing all pairwise distances in the graph. When the graph is unweighted (so the length of a path is just its number of edges), these distances may be calculated by repeating a breadth-first search algorithm, once for each starting vertex. The total time for this approach is O("nm"), where "n" is the number of vertices in the graph and "m" is its number of edges. For weighted graphs, one may instead use the Floyd–Warshall algorithm or Johnson's algorithm, with running time O("n"3) or O("nm" + "n"2 log "n") respectively. 
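The repeated breadth-first-search approach just described is short to implement. A minimal sketch in Python, checked against the butane examples above (the vertex labels and adjacency-list representation are choices made here for illustration):

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs
    of an unweighted graph given as an adjacency list (dict)."""
    total = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:                      # one BFS per starting vertex
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2                     # each pair was counted twice

n_butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # path graph
isobutane = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}     # star graph
print(wiener_index(n_butane))    # 10, as in the example above
print(wiener_index(isobutane))   # 9
```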
Alternative but less efficient algorithms based on repeated matrix multiplication have also been developed within the chemical informatics literature. Calculation in special types of graph. When the underlying graph is a tree (as is true for instance for the alkanes originally studied by Wiener), the Wiener index may be calculated more efficiently. If the graph is partitioned into two subtrees by removing a single edge "e", then its Wiener index is the sum of the Wiener indices of the two subtrees, together with a third term representing the paths that pass through "e". This third term may be calculated in linear time by computing the sum of distances of all vertices from "e" within each subtree and multiplying the two sums. This divide-and-conquer algorithm can be generalized from trees to graphs of bounded treewidth, and leads to near-linear-time algorithms for such graphs. An alternative method for calculating the Wiener index of a tree, by Bojan Mohar and Tomaž Pisanski, works by generalizing the problem to graphs with weighted vertices, where the weight of a path is the product of its length with the weights of its two endpoints. If "v" is a leaf vertex of the tree then the Wiener index of the tree may be calculated by merging "v" with its parent (adding their weights together), computing the index of the resulting smaller tree, and adding a simple correction term for the paths that pass through the edge from "v" to its parent. By repeatedly removing leaves in this way, the Wiener index may be calculated in linear time. For graphs that are constructed as products of simpler graphs, the Wiener index of the product graph can often be computed by a simple formula that combines the indices of its factors. Benzenoids (graphs formed by gluing regular hexagons edge-to-edge) can be embedded isometrically into the Cartesian product of three trees, allowing their Wiener indices to be computed in linear time by using the product formula together with the linear-time tree algorithm. Inverse problem. Gutman and Yeh considered the problem of determining which numbers can be represented as the Wiener index of a graph. They showed that all but two positive integers have such a representation; the two exceptions are the numbers 2 and 5, which are not the Wiener index of any graph. For graphs that must be bipartite, they found that again almost all integers can be represented, with a larger set of exceptions: none of the numbers in the set {2, 3, 5, 6, 7, 11, 12, 13, 15, 17, 19, 33, 37, 39} can be represented as the Wiener index of a bipartite graph. Gutman and Yeh conjectured, but were unable to prove, a similar description of the numbers that can be represented as Wiener indices of trees, with a set of 49 exceptional values: 2, 3, 5, 6, 7, 8, 11, 12, 13, 14, 15, 17, 19, 21, 22, 23, 24, 26, 27, 30, 33, 34, 37, 38, 39, 41, 43, 45, 47, 51, 53, 55, 60, 61, 69, 73, 77, 78, 83, 85, 87, 89, 91, 99, 101, 106, 113, 147, 159 (sequence in the OEIS) The conjecture was later proven by Wagner, Wang, and Yu. References. <templatestyles src="Reflist/styles.css" />
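For trees, the subtree decomposition described above also yields a standard one-pass formula: every edge "e" that splits the tree into parts with "n"1 and "n"2 vertices lies on exactly "n"1"n"2 of the shortest paths, so the Wiener index of a tree is the sum of these products over all edges. A minimal linear-time sketch of this idea (the iterative traversal and vertex labeling are implementation choices):

```python
def tree_wiener_index(adj, n):
    """Wiener index of a tree on vertices 0..n-1, given as an adjacency
    list. Each edge (u, parent(u)) contributes size(u) * (n - size(u)),
    where size(u) is the number of vertices in u's subtree."""
    parent = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:                      # build a DFS ordering from the root
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                parent[v] = u
                stack.append(v)
    size = [1] * n
    total = 0
    for u in reversed(order):         # children are processed before parents
        if parent[u] != -1:
            size[parent[u]] += size[u]
            total += size[u] * (n - size[u])
    return total

print(tree_wiener_index({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, 4))   # 10
print(tree_wiener_index({0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}, 4))   # 9
```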
[ { "math_id": 0, "text": "3\\times 1 + 2\\times 2 + 1\\times 3 = 10." }, { "math_id": 1, "text": "3\\times 1 + 3\\times 2 = 9." }, { "math_id": 2, "text": "(n^3-n)/6" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "(n-1)^2" } ]
https://en.wikipedia.org/wiki?curid=14223173
14225
Hydrogen atom
Atom of the element hydrogen A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral hydrogen atom contains a nucleus of a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe. In everyday life on Earth, isolated hydrogen atoms (called "atomic hydrogen") are extremely rare. Instead, a hydrogen atom tends to combine with other atoms in compounds, or with another hydrogen atom to form ordinary (diatomic) hydrogen gas, H2. "Atomic hydrogen" and "hydrogen atom" in ordinary English use have overlapping, yet distinct, meanings. For example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen (which would refer to isolated hydrogen atoms). Atomic spectroscopy shows that there is a discrete infinite set of states in which a hydrogen (or any) atom can exist, contrary to the predictions of classical physics. Attempts to develop a theoretical understanding of the states of the hydrogen atom have been important to the history of quantum mechanics, since all other atoms can be roughly understood by knowing in detail about this simplest atomic structure. Isotopes. The most abundant isotope, protium (1H), or light hydrogen, contains no neutrons and is simply a proton and an electron. Protium is stable and makes up 99.985% of naturally occurring hydrogen atoms. Deuterium (2H) contains one neutron and one proton in its nucleus. Deuterium is stable, makes up 0.0156% of naturally occurring hydrogen, and is used in industrial processes like nuclear reactors and nuclear magnetic resonance. Tritium (3H) contains two neutrons and one proton in its nucleus and is not stable, decaying with a half-life of 12.32 years. Because of its short half-life, tritium does not exist in nature except in trace amounts. Heavier isotopes of hydrogen are only created artificially in particle accelerators and have half-lives on the order of 10−22 seconds. They are unbound resonances located beyond the neutron drip line; this results in prompt emission of a neutron. The formulas below are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant (correction formula given below) must be used for each hydrogen isotope. Hydrogen ion. Lone neutral hydrogen atoms are rare under normal conditions. However, neutral hydrogen is common when it is covalently bound to another atom, and hydrogen atoms can also exist in cationic and anionic forms. If a neutral hydrogen atom loses its electron, it becomes a cation. The resulting ion, which consists solely of a proton for the usual isotope, is written as "H+" and sometimes called "hydron". Free protons are common in the interstellar medium and the solar wind. In the context of aqueous solutions of classical Brønsted–Lowry acids, such as hydrochloric acid, it is actually hydronium, H3O+, that is meant. Instead of a literal ionized single hydrogen atom being formed, the acid transfers the hydrogen to H2O, forming H3O+. If instead a hydrogen atom gains a second electron, it becomes an anion. The hydrogen anion is written as "H−" and called "hydride". Theoretical analysis. The hydrogen atom has special significance in quantum mechanics and quantum field theory as a simple two-body problem physical system which has yielded many simple analytical solutions in closed form. Failed classical description. 
Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a tenuous negative charge cloud around it. This immediately raised questions about how such a system could be stable. Classical electromagnetism had shown that any accelerating charge radiates energy, as shown by the Larmor formula. If the electron is assumed to orbit in a perfect circle and radiates energy continuously, the electron would rapidly spiral into the nucleus with a fall time of: formula_0 where formula_1 is the Bohr radius and formula_2 is the classical electron radius. If this were true, all atoms would instantly collapse; however, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. Instead, atoms were observed to only emit discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics. Bohr–Sommerfeld Model. In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. The assumptions included: (1) electrons can only be in certain, discrete circular orbits or "stationary states", thereby having a discrete set of possible radii and energies; (2) electrons do not emit radiation while in one of these stationary states; and (3) an electron can gain or lose energy by jumping from one discrete orbit to another. Bohr supposed that the electron's angular momentum is quantized with possible values: formula_3 where formula_4 and formula_5 is the Planck constant divided by formula_6. He also supposed that the centripetal force which keeps the electron in its orbit is provided by the Coulomb force, and that energy is conserved. Bohr derived the energy of each orbit of the hydrogen atom to be: formula_7 where formula_8 is the electron mass, formula_9 is the electron charge, formula_10 is the vacuum permittivity, and formula_11 is the quantum number (now known as the principal quantum number). Bohr's predictions matched experiments measuring the hydrogen spectral series to the first order, giving more confidence to a theory that used quantized values. For formula_12, the value formula_13 is called the Rydberg unit of energy. It is related to the Rydberg constant formula_14 of atomic physics by formula_15 The exact value of the Rydberg constant assumes that the nucleus is infinitely massive with respect to the electron. For hydrogen-1, hydrogen-2 (deuterium), and hydrogen-3 (tritium) which have finite mass, the constant must be slightly modified to use the reduced mass of the system, rather than simply the mass of the electron. This includes the kinetic energy of the nucleus in the problem, because the total (electron plus nuclear) kinetic energy is equivalent to the kinetic energy of the reduced mass moving with a velocity equal to the electron velocity relative to the nucleus. However, since the nucleus is much heavier than the electron, the electron mass and reduced mass are nearly the same. The Rydberg constant "RM" for a hydrogen atom (one electron) is given by formula_16 where formula_17 is the mass of the atomic nucleus. For hydrogen-1, the quantity formula_18 is about 1/1836 (i.e. the electron-to-proton mass ratio). For deuterium and tritium, the ratios are about 1/3670 and 1/5497 respectively. These figures, when added to 1 in the denominator, represent very small corrections in the value of "R", and thus only small corrections to all energy levels in corresponding hydrogen isotopes. There were still problems with Bohr's model: it failed to predict other spectral details such as fine structure and hyperfine structure; it could only predict energy levels with any accuracy for single-electron atoms (hydrogen-like atoms); and the predicted values were only correct to formula_19, where formula_20 is the fine-structure constant. Most of these shortcomings were resolved by Arnold Sommerfeld's modification of the Bohr model. 
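Before turning to Sommerfeld's corrections, the Bohr energy formula and the reduced-mass ratios above can be checked numerically. A minimal sketch using rounded SI constant values:

```python
import math

# Numerical check of the Bohr energy levels and the reduced-mass
# (isotope) correction to the Rydberg constant described above.

M_E = 9.109e-31       # electron mass (kg)
E_CH = 1.602e-19      # elementary charge (C)
EPS0 = 8.854e-12      # vacuum permittivity (F/m)
HBAR = 1.055e-34      # reduced Planck constant (J s)

def bohr_energy_ev(n):
    e_joule = -M_E * E_CH**4 / (2 * (4 * math.pi * EPS0)**2 * HBAR**2 * n**2)
    return e_joule / E_CH

for n in (1, 2, 3):
    print(f"E_{n} = {bohr_energy_ev(n):.2f} eV")   # -13.6, -3.40, -1.51

# R_M = R_inf / (1 + m_e/M), using the mass ratios quoted above:
for name, ratio in (("protium", 1/1836), ("deuterium", 1/3670),
                    ("tritium", 1/5497)):
    print(f"{name}: R_M / R_inf = {1 / (1 + ratio):.6f}")
```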
Sommerfeld introduced two additional degrees of freedom, allowing an electron to move on an elliptical orbit characterized by its eccentricity and declination with respect to a chosen axis. This introduced two additional quantum numbers, which correspond to the orbital angular momentum and its projection on the chosen axis. Thus the correct multiplicity of states (except for the factor 2 accounting for the yet unknown electron spin) was found. Further, by applying special relativity to the elliptic orbits, Sommerfeld succeeded in deriving the correct expression for the fine structure of hydrogen spectra (which happens to be exactly the same as in the most elaborate Dirac theory). However, some observed phenomena, such as the anomalous Zeeman effect, remained unexplained. These issues were resolved with the full development of quantum mechanics and the Dirac equation. It is often alleged that the Schrödinger equation is superior to the Bohr–Sommerfeld theory in describing the hydrogen atom. This is not the case, as most of the results of both approaches coincide or are very close (a remarkable exception is the problem of a hydrogen atom in crossed electric and magnetic fields, which cannot be self-consistently solved in the framework of the Bohr–Sommerfeld theory), and in both theories the main shortcomings result from the absence of the electron spin. It was the complete failure of the Bohr–Sommerfeld theory to explain many-electron systems (such as the helium atom or the hydrogen molecule) which demonstrated its inadequacy in describing quantum phenomena. Schrödinger equation. The Schrödinger equation is the standard quantum-mechanics model; it allows one to calculate the stationary states and also the time evolution of quantum systems. Exact analytical answers are available for the nonrelativistic hydrogen atom. Before presenting a formal account, we give an elementary overview. Given that the hydrogen atom contains a nucleus and an electron, quantum mechanics allows one to predict the probability of finding the electron at any given radial distance formula_21. It is given by the square of a mathematical function known as the "wavefunction", which is a solution of the Schrödinger equation. The lowest energy equilibrium state of the hydrogen atom is known as the ground state. The ground state wave function is known as the formula_22 wavefunction. It is written as: formula_23 Here, formula_1 is the numerical value of the Bohr radius. The probability density of finding the electron at a distance formula_21 in any radial direction is the squared value of the wavefunction: formula_24 The formula_25 wavefunction is spherically symmetric, and the surface area of a shell at distance formula_21 is formula_26, so the total probability formula_27 of the electron being in a shell at a distance formula_21 and thickness formula_28 is formula_29 It turns out that this is a maximum at formula_30. That is, the Bohr picture of an electron orbiting the nucleus at radius formula_1 corresponds to the most probable radius. Actually, there is a finite probability that the electron may be found at any place formula_21, with the probability indicated by the square of the wavefunction. Since the probability of finding the electron "somewhere" in the whole volume is unity, the integral of formula_31 is unity. Then we say that the wavefunction is properly normalized. As discussed below, the ground state formula_25 is also indicated by the quantum numbers formula_32. 
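The claim that the radial probability density peaks at the Bohr radius is easy to verify numerically. A minimal sketch, working in units where formula_1 = 1 (the crude grid search is purely illustrative):

```python
import math

# Check that P(r) = 4*pi*r^2 |psi_1s(r)|^2 is maximal at the Bohr
# radius, in units where a_0 = 1.

def radial_density(r):
    # proportional to r^2 * exp(-2r); constant factors do not move the peak
    return r * r * math.exp(-2.0 * r)

r_grid = [i / 1000.0 for i in range(1, 5001)]
r_peak = max(r_grid, key=radial_density)
print(f"P(r) peaks at r = {r_peak:.3f} a_0")   # prints 1.000
```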
The second lowest energy states, just above the ground state, are given by the quantum numbers formula_33, formula_34, and formula_35. These formula_36 states all have the same energy and are known as the formula_37 and formula_38 states. There is one formula_37 state: formula_39 and there are three formula_38 states: formula_40 formula_41 An electron in the formula_37 or formula_38 state is most likely to be found in the second Bohr orbit with energy given by the Bohr formula. Wavefunction. The Hamiltonian of the hydrogen atom comprises the radial kinetic energy operator and the Coulomb attraction between the positive proton and the negative electron. Using the time-independent Schrödinger equation, ignoring all spin-coupling interactions and using the reduced mass formula_42, the equation is written as: formula_43 Expanding the Laplacian in spherical coordinates: formula_44 This is a separable, partial differential equation which can be solved in terms of special functions. When the wavefunction is separated as a product of functions formula_45, formula_46, and formula_47, three independent differential equations appear, with A and B being the separation constants: a radial equation, formula_48; a polar equation, formula_49; and an azimuthal equation, formula_50. The normalized position wavefunctions, given in spherical coordinates, are: formula_51 where: formula_52, in which formula_53 is the reduced Bohr radius, formula_54; formula_55 is a generalized Laguerre polynomial of degree formula_56; and formula_57 is a spherical harmonic function of degree formula_58 and order formula_59. Note that the generalized Laguerre polynomials are defined differently by different authors. The usage here is consistent with the definitions used by Messiah and Mathematica. In other places, the Laguerre polynomial includes a factor of formula_60, or the generalized Laguerre polynomial appearing in the hydrogen wave function is formula_61 instead. The quantum numbers can take the following values: formula_62, formula_63, and formula_64. Additionally, these wavefunctions are "normalized" (i.e., the integral of their modulus square equals 1) and orthogonal: formula_65 where formula_66 is the state represented by the wavefunction formula_67 in Dirac notation, and formula_68 is the Kronecker delta function. The wavefunctions in momentum space are related to the wavefunctions in position space through a Fourier transform formula_69 which, for the bound states, results in formula_70 where formula_71 denotes a Gegenbauer polynomial and formula_72 is in units of formula_73. The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines, fully reproducing the Bohr model and going beyond it. It also yields two other quantum numbers and the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds. The Schrödinger equation also applies to more complicated atoms and molecules. When there is more than one electron or nucleus the solution is not analytical and either computer calculations are necessary or simplifying assumptions must be made. Since the Schrödinger equation is only valid for non-relativistic quantum mechanics, the solutions it yields for the hydrogen atom are not entirely correct. The Dirac equation of relativistic quantum theory improves these solutions (see below). Results of Schrödinger equation. The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). 
Although the resulting energy eigenfunctions (the "orbitals") are not necessarily isotropic themselves, their dependence on the angular coordinates follows completely generally from this isotropy of the underlying potential: the eigenstates of the Hamiltonian (that is, the energy eigenstates) can be chosen as simultaneous eigenstates of the angular momentum operator. This corresponds to the fact that angular momentum is conserved in the orbital motion of the electron around the nucleus. Therefore, the energy eigenstates may be classified by two angular momentum quantum numbers, formula_58 and formula_59 (both are integers). The angular momentum quantum number formula_74 determines the magnitude of the angular momentum. The magnetic quantum number formula_75 determines the projection of the angular momentum on the (arbitrarily chosen) formula_76-axis. In addition to mathematical expressions for total angular momentum and angular momentum projection of wavefunctions, an expression for the radial dependence of the wave functions must be found. It is only here that the details of the formula_77 Coulomb potential enter (leading to Laguerre polynomials in formula_21). This leads to a third quantum number, the principal quantum number formula_62. The principal quantum number in hydrogen is related to the atom's total energy. Note that the maximum value of the angular momentum quantum number is limited by the principal quantum number: it can run only up to formula_78, i.e., formula_79. Due to angular momentum conservation, states of the same formula_58 but different formula_59 have the same energy (this holds for all problems with rotational symmetry). In addition, for the hydrogen atom, states of the same formula_80 but different formula_58 are also degenerate (i.e., they have the same energy). However, this is a specific property of hydrogen and is no longer true for more complicated atoms which have an (effective) potential differing from the form formula_77 (due to the presence of the inner electrons shielding the nucleus potential). Taking into account the spin of the electron adds a last quantum number, the projection of the electron's spin angular momentum along the formula_76-axis, which can take on two values. Therefore, any eigenstate of the electron in the hydrogen atom is described fully by four quantum numbers. According to the usual rules of quantum mechanics, the actual state of the electron may be any superposition of these states. This also explains why the choice of formula_76-axis for the directional quantization of the angular momentum vector is immaterial: an orbital of given formula_58 and formula_81 obtained for another preferred axis formula_82 can always be represented as a suitable superposition of the various states of different formula_59 (but same formula_58) that have been obtained for formula_76. Mathematical summary of eigenstates of hydrogen atom. In 1928, Paul Dirac found an equation that was fully compatible with special relativity, and (as a consequence) made the wave function a 4-component "Dirac spinor" including "up" and "down" spin components, with both positive and "negative" energy (or matter and antimatter). The solution to this equation gave the following results, more accurate than the Schrödinger solution. Energy levels. 
The energy levels of hydrogen, including fine structure (excluding Lamb shift and hyperfine structure), are given by the Sommerfeld fine-structure expression: formula_83 where formula_20 is the fine-structure constant and formula_84 is the total angular momentum quantum number, which is equal to formula_85, depending on the orientation of the electron spin relative to the orbital angular momentum. This formula represents a small correction to the energy obtained by Bohr and Schrödinger as given above. The factor in square brackets in the last expression is nearly one; the extra term arises from relativistic effects (for details, see the section on features going beyond the Schrödinger solution below). It is worth noting that this expression was first obtained by A. Sommerfeld in 1916 based on the relativistic version of the old Bohr theory. Sommerfeld, however, used different notation for the quantum numbers. Visualizing the hydrogen electron orbitals. The image to the right shows the first few hydrogen atom orbitals (energy eigenfunctions). These are cross-sections of the probability density that are color-coded (black represents zero density and white represents the highest density). The angular momentum (orbital) quantum number "ℓ" is denoted in each column, using the usual spectroscopic letter code ("s" means "ℓ" = 0, "p" means "ℓ" = 1, "d" means "ℓ" = 2). The main (principal) quantum number "n" (= 1, 2, 3, ...) is marked to the right of each row. For all pictures the magnetic quantum number "m" has been set to 0, and the cross-sectional plane is the "xz"-plane ("z" is the vertical axis). The probability density in three-dimensional space is obtained by rotating the one shown here around the "z"-axis. The "ground state", i.e. the state of lowest energy, in which the electron is usually found, is the first one, the 1"s" state (principal quantum level "n" = 1, "ℓ" = 0). Black lines occur in each but the first orbital: these are the nodes of the wavefunction, i.e. where the probability density is zero. (More precisely, the nodes are spherical harmonics that appear as a result of solving the Schrödinger equation in spherical coordinates.) The quantum numbers determine the layout of these nodes. There are: formula_86 total nodes, formula_58 of which are angular nodes: formula_59 angular nodes go around the formula_87 axis (in the "xy"-plane, and therefore not visible in the cross-sections shown), while the remaining formula_88 angular nodes occur on the formula_89 (vertical) axis; the remaining formula_56 non-angular nodes are radial nodes. Oscillation of orbitals. The frequency of a state in level n is formula_90, so a superposition of multiple orbitals oscillates due to the difference in frequencies. For example, for two states ψ1 and ψ2, the wavefunction is given by formula_91 and the probability function is formula_92 formula_93 The result is a rotating wavefunction. The movement of the electron between quantum states radiates light at the frequency of the cosine term. Features going beyond the Schrödinger solution. There are several important effects that are neglected by the Schrödinger equation and which are responsible for certain small but measurable deviations of the real spectral lines from the predicted ones: although the mean speed of the electron in hydrogen is only about 1/137 of the speed of light, relativistic corrections to the electron's kinetic energy are needed to account for the fine structure; and the electron carries an intrinsic spin angular momentum, whose coupling to the orbital motion (spin–orbit interaction) also shifts the levels. Both of these features (and more) are incorporated in the relativistic Dirac equation, with predictions that come still closer to experiment. Again the Dirac equation may be solved analytically in the special case of a two-body system, such as the hydrogen atom. The resulting solution quantum states now must be classified by the total angular momentum number "j" (arising through the coupling between electron spin and orbital angular momentum). States of the same "j" and the same "n" are still degenerate. 
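This degeneracy in "j" can be made concrete by evaluating the fine-structure expression above for the "n" = 2 levels. A minimal sketch (constant values rounded; μ"c"2 is approximated here by the electron rest energy):

```python
import math

# Evaluate the Sommerfeld/Dirac fine-structure formula for n = 2.
# The energy depends only on n and j, so 2S_1/2 and 2P_1/2 (both
# j = 1/2) come out exactly equal, while 2P_3/2 (j = 3/2) is shifted.

ALPHA = 7.2973525693e-3      # fine-structure constant
MU_C2_EV = 510998.95         # mu * c^2 in eV (electron rest energy)

def fine_structure_energy_ev(n, j):
    denom = n - j - 0.5 + math.sqrt((j + 0.5) ** 2 - ALPHA ** 2)
    return -MU_C2_EV * (1.0 - (1.0 + (ALPHA / denom) ** 2) ** -0.5)

e_j12 = fine_structure_energy_ev(2, 0.5)   # 2S_1/2 and 2P_1/2
e_j32 = fine_structure_energy_ev(2, 1.5)   # 2P_3/2
print(f"E(2S_1/2) = E(2P_1/2) = {e_j12:.6f} eV")
print(f"E(2P_3/2) - E(2P_1/2) = {(e_j32 - e_j12) * 1e6:.1f} micro-eV")
```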
Thus, the direct analytical solution of the Dirac equation predicts the 2S1/2 and 2P1/2 levels of hydrogen to have exactly the same energy, which is in contradiction with observations (the Lamb–Retherford experiment). For these developments, it was essential that the solution of the Dirac equation for the hydrogen atom could be worked out exactly, such that any experimentally observed deviation had to be taken seriously as a signal of failure of the theory. Alternatives to the Schrödinger theory. In the language of Heisenberg's matrix mechanics, the hydrogen atom was first solved by Wolfgang Pauli using a rotational symmetry in four dimensions [O(4)-symmetry] generated by the angular momentum and the Laplace–Runge–Lenz vector. By extending the symmetry group O(4) to the dynamical group O(4,2), the entire spectrum and all transitions were embedded in a single irreducible group representation. In 1979, the (non-relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation of quantum mechanics by Duru and Kleinert. This work greatly extended the range of applicability of Feynman's method. Further alternative models are Bohmian mechanics and the complex Hamilton–Jacobi formulation of quantum mechanics. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "t_\\text{fall} \\approx \\frac{ a_0^3}{4 r_0^2 c} \\approx 1.6 \\times 10^{-11} \\text{ s} ," }, { "math_id": 1, "text": "a_0" }, { "math_id": 2, "text": "r_0" }, { "math_id": 3, "text": "L = n \\hbar" }, { "math_id": 4, "text": "n = 1,2,3,\\ldots" }, { "math_id": 5, "text": "\\hbar" }, { "math_id": 6, "text": "2 \\pi" }, { "math_id": 7, "text": "E_n = - \\frac{ m_e e^4}{2 ( 4 \\pi \\varepsilon_0)^2 \\hbar^2 } \\frac{1}{n^2}, " }, { "math_id": 8, "text": "m_e " }, { "math_id": 9, "text": "e " }, { "math_id": 10, "text": "\\varepsilon_0 " }, { "math_id": 11, "text": "n " }, { "math_id": 12, "text": "n=1" }, { "math_id": 13, "text": "\\frac{ m_e e^4}{2 ( 4 \\pi \\varepsilon_0)^2 \\hbar^2 } =\\frac{m_{\\text{e}} e^4}{8 h^2 \\varepsilon_0^2} = 1 \\,\\text{Ry} = 13.605\\;693\\;122\\;994(26) \\,\\text{eV} " }, { "math_id": 14, "text": "R_\\infty" }, { "math_id": 15, "text": "1 \\,\\text{Ry} \\equiv h c R_\\infty." }, { "math_id": 16, "text": "R_M = \\frac{R_\\infty}{1+m_{\\text{e}}/M}," }, { "math_id": 17, "text": "M" }, { "math_id": 18, "text": "m_{\\text{e}}/M," }, { "math_id": 19, "text": "\\alpha^2 \\approx 10^{-5}" }, { "math_id": 20, "text": "\\alpha" }, { "math_id": 21, "text": "r" }, { "math_id": 22, "text": "1\\mathrm{s}" }, { "math_id": 23, "text": "\\psi_{1 \\mathrm{s}} (r) = \\frac{1}{\\sqrt{\\pi} a_0^{3 / 2}} \\mathrm{e}^{-r / a_0}." }, { "math_id": 24, "text": "| \\psi_{1 \\mathrm{s}} (r) |^2 = \\frac{1}{\\pi a_0^3} \\mathrm{e}^{-2 r / a_0}." }, { "math_id": 25, "text": "1 \\mathrm{s}" }, { "math_id": 26, "text": "4 \\pi r^2" }, { "math_id": 27, "text": "P(r) \\, dr" }, { "math_id": 28, "text": "dr" }, { "math_id": 29, "text": "P (r) \\, \\mathrm dr = 4 \\pi r^2 | \\psi_{1 \\mathrm{s}} (r) |^2 \\, \\mathrm dr." }, { "math_id": 30, "text": "r = a_0" }, { "math_id": 31, "text": "P(r) \\, \\mathrm dr" }, { "math_id": 32, "text": "(n = 1, \\ell = 0, m = 0)" }, { "math_id": 33, "text": "(2, 0, 0)" }, { "math_id": 34, "text": "(2, 1, 0)" }, { "math_id": 35, "text": "(2, 1, \\pm 1)" }, { "math_id": 36, "text": "n = 2" }, { "math_id": 37, "text": "2 \\mathrm{s}" }, { "math_id": 38, "text": "2 \\mathrm{p}" }, { "math_id": 39, "text": "\\psi_{2, 0, 0} = \\frac{1}{4 \\sqrt{2 \\pi} a_0^{3 / 2}} \\left( 2 - \\frac{r}{a_0} \\right) \\mathrm{e}^{-r / 2 a_0}," }, { "math_id": 40, "text": "\\psi_{2, 1, 0} = \\frac{1}{4 \\sqrt{2 \\pi} a_0^{3 / 2}} \\frac{r}{a_0} \\mathrm{e}^{-r / 2 a_0} \\cos \\theta," }, { "math_id": 41, "text": "\\psi_{2, 1, \\pm 1} = \\mp \\frac{1}{8 \\sqrt{\\pi} a_0^{3/2}} \\frac{r}{a_0} \\mathrm{e}^{-r / 2 a_0} \\sin \\theta ~ e^{\\pm i \\varphi}." 
}, { "math_id": 42, "text": "\\mu = m_e M/(m_e + M)" }, { "math_id": 43, "text": "\\left( -\\frac{\\hbar^2}{2 \\mu} \\nabla^2 - \\frac{e^2}{4 \\pi \\varepsilon_0 r} \\right) \\psi (r, \\theta, \\varphi) = E \\psi (r, \\theta, \\varphi)" }, { "math_id": 44, "text": "-\\frac{\\hbar^2}{2 \\mu} \\left[ \\frac{1}{r^2} \\frac{\\partial}{\\partial r} \\left( r^2 \\frac{\\partial \\psi}{\\partial r} \\right) + \\frac{1}{r^2 \\sin \\theta} \\frac{\\partial}{\\partial \\theta} \\left( \\sin \\theta \\frac{\\partial \\psi}{\\partial \\theta} \\right) + \\frac{1}{r^2 \\sin^2 \\theta} \\frac{\\partial^2 \\psi}{\\partial \\varphi^2} \\right] - \\frac{e^2}{4 \\pi \\varepsilon_0 r} \\psi = E \\psi" }, { "math_id": 45, "text": "R(r)" }, { "math_id": 46, "text": "\\Theta(\\theta)" }, { "math_id": 47, "text": "\\Phi(\\varphi)" }, { "math_id": 48, "text": "\\frac{d}{dr}\\left(r^2\\frac{dR}{dr}\\right) + \\frac{2\\mu r^2}{\\hbar^2} \\left(E+\\frac{e^2}{4\\pi\\varepsilon_0r}\\right)R - AR = 0" }, { "math_id": 49, "text": "\\frac{\\sin\\theta}{\\Theta}\\frac{d}{d\\theta}\\left(\\sin\\theta\\frac{d\\Theta}{d\\theta}\\right)+A\\sin^2\\theta- B = 0" }, { "math_id": 50, "text": "\\frac{1}{\\Phi} \\frac{d^2\\Phi}{d\\varphi^2}+B=0." }, { "math_id": 51, "text": " \\psi_{n \\ell m}(r, \\theta, \\varphi) = \\sqrt{{\\left( \\frac{2}{n a^*_0} \\right)}^3 \\frac{(n - \\ell - 1)!}{2 n (n + \\ell)!}} \\mathrm{e}^{-\\rho / 2} \\rho^{\\ell} L_{n - \\ell - 1}^{2 \\ell + 1}(\\rho) Y_\\ell^m (\\theta, \\varphi)" }, { "math_id": 52, "text": "\\rho = {2 r \\over {n a^*_0}}" }, { "math_id": 53, "text": "a^*_0" }, { "math_id": 54, "text": "a^*_0 = {{4 \\pi \\varepsilon_0 \\hbar^2} \\over {\\mu e^2}}" }, { "math_id": 55, "text": "L_{n-\\ell-1}^{2\\ell+1}(\\rho) " }, { "math_id": 56, "text": "n - \\ell - 1" }, { "math_id": 57, "text": "Y_\\ell^m (\\theta, \\varphi)" }, { "math_id": 58, "text": "\\ell" }, { "math_id": 59, "text": "m" }, { "math_id": 60, "text": "(n + \\ell) !" }, { "math_id": 61, "text": "L_{n + \\ell}^{2 \\ell + 1} (\\rho)" }, { "math_id": 62, "text": "n = 1, 2, 3, \\ldots" }, { "math_id": 63, "text": "\\ell = 0, 1, 2, \\ldots, n - 1" }, { "math_id": 64, "text": "m=-\\ell, \\ldots, \\ell" }, { "math_id": 65, "text": "\\int_0^{\\infty} r^2 \\, dr \\int_0^{\\pi} \\sin \\theta \\, d\\theta \\int_0^{2 \\pi} d\\varphi \\, \\psi^*_{n \\ell m} (r, \\theta, \\varphi) \\psi_{n' \\ell' m'} (r, \\theta, \\varphi) = \\langle n, \\ell, m | n', \\ell', m' \\rangle = \\delta_{n n'} \\delta_{\\ell \\ell'} \\delta_{m m'}," }, { "math_id": 66, "text": "| n, \\ell, m \\rangle" }, { "math_id": 67, "text": "\\psi_{n \\ell m}" }, { "math_id": 68, "text": "\\delta" }, { "math_id": 69, "text": "\\varphi (p, \\theta_p, \\varphi_p) = (2 \\pi \\hbar)^{-3 / 2} \\int \\mathrm{e}^{-i \\vec{p} \\cdot \\vec{r} / \\hbar} \\psi (r, \\theta,\\varphi) \\, dV," }, { "math_id": 70, "text": "\\varphi (p, \\theta_p, \\varphi_p) = \\sqrt{\\frac{2}{\\pi} \\frac{(n - \\ell - 1)!}{(n + \\ell)!}} n^2 2^{2 \\ell + 2} \\ell! 
\\frac{n^\\ell p^\\ell}{(n^2 p^2 + 1)^{\\ell + 2}} C_{n - \\ell - 1}^{\\ell + 1} \\left( \\frac{n^2 p^2 - 1}{n^2 p^2 + 1} \\right) Y_\\ell^m (\\theta_p, \\varphi_p)," }, { "math_id": 71, "text": "C_N^\\alpha (x)" }, { "math_id": 72, "text": "p" }, { "math_id": 73, "text": "\\hbar / a^*_0" }, { "math_id": 74, "text": "\\ell = 0, 1, 2, \\ldots" }, { "math_id": 75, "text": "m = -\\ell, \\ldots, +\\ell" }, { "math_id": 76, "text": "z" }, { "math_id": 77, "text": "1 / r" }, { "math_id": 78, "text": "n - 1" }, { "math_id": 79, "text": "\\ell = 0, 1, \\ldots, n - 1" }, { "math_id": 80, "text": "n" }, { "math_id": 81, "text": "m'" }, { "math_id": 82, "text": "z'" }, { "math_id": 83, "text": "\\begin{align}\nE_{j \\, n} = {} & -\\mu c^2 \\left[ 1 - \\left( 1 + \\left[ \\frac{\\alpha}{n - j - \\frac{1}{2} + \\sqrt{\\left( j + \\frac{1}{2} \\right)^2 - \\alpha^2}} \\right]^2 \\right)^{-1 / 2} \\right] \\\\\n\\approx {} & -\\frac{\\mu c^2 \\alpha^2}{2 n^2} \\left[ 1 + \\frac{\\alpha^2}{n^2} \\left( \\frac{n}{j + \\frac{1}{2}} - \\frac{3}{4} \\right) \\right],\n\\end{align}" }, { "math_id": 84, "text": "j" }, { "math_id": 85, "text": "\\left| \\ell \\pm \\tfrac{1}{2} \\right|" }, { "math_id": 86, "text": "n-1" }, { "math_id": 87, "text": "\\varphi" }, { "math_id": 88, "text": "\\ell-m" }, { "math_id": 89, "text": "\\theta" }, { "math_id": 90, "text": "\\omega_n=E_n/\\hbar" }, { "math_id": 91, "text": "\\psi=\\psi_1e^{i\\omega_1t}+\\psi_2e^{i\\omega_2t}" }, { "math_id": 92, "text": "P(t)=|\\psi|^2=(\\psi_1e^{i\\omega_1t}+\\psi_2e^{i\\omega_2t})(\\psi^*_1e^{-i\\omega_1t}+\\psi^*_2e^{-i\\omega_2t})\n" }, { "math_id": 93, "text": "\\propto|\\psi_1\\psi_2|\\cos{[(\\omega_1-\\omega_2)t]}" } ]
https://en.wikipedia.org/wiki?curid=14225
14225143
Rankine theory
Rankine's theory (maximum-normal stress theory), developed in 1857 by William John Macquorn Rankine, is a stress field solution that predicts active and passive earth pressure. It assumes that the soil is cohesionless, the wall is frictionless, the soil-wall interface is vertical, the failure surface on which the soil moves is planar, and the resultant force is angled parallel to the backfill surface. The equations for active and passive lateral earth pressure coefficients are given below. Note that φ' is the angle of shearing resistance of the soil and the backfill is inclined at angle β to the horizontal. formula_0 formula_1 For the case where β is 0, the above equations simplify to formula_2 formula_3 Rankine theory. Rankine's theory says that failure will occur when the maximum principal stress at any point reaches a value equal to the tensile stress in a simple tension specimen at failure. This theory does not take into account the effect of the other two principal stresses. Rankine's theory is satisfactory for brittle materials, and not applicable to ductile materials. This theory is also called the Maximum Stress Theory. Active and passive soil pressures. This theory, which considers the soil to be in a state of plastic equilibrium, assumes that the soil is homogeneous, isotropic and has internal friction. The pressure exerted by soil against the wall is referred to as "active pressure". The resistance offered by the soil to an object pushing against it is referred to as "passive pressure". Rankine's theory is applicable to incompressible soils. The equation for cohesionless "active earth pressure" is expressed as: formula_4 where: formula_5 and: Ka = coefficient of active pressure, w = weight density of soil, h = depth of the section (below the top of the soil) where the pressure is being evaluated, β = angle that the top surface of soil makes with the horizontal, and φ = angle of internal friction of soil. The expression for "passive pressure" is: formula_6 where: formula_7 In the case where β = 0, the two coefficients are mutual reciprocals, such that: formula_8 References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
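The coefficient formulas above are simple to evaluate. Below is a minimal sketch in Python; the friction angle, unit weight, and depth are assumed, illustrative values (for β = 0 and φ = 30°, the results reduce to the familiar Ka = 1/3 and Kp = 3):

```python
import math

# Rankine active and passive earth pressure coefficients, following
# the formulas above. Input values are illustrative assumptions.

def rankine_coefficients(phi_deg, beta_deg=0.0):
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    root = math.sqrt(math.cos(beta) ** 2 - math.cos(phi) ** 2)
    ka = (math.cos(beta) - root) / (math.cos(beta) + root) * math.cos(beta)
    kp = (math.cos(beta) + root) / (math.cos(beta) - root) * math.cos(beta)
    return ka, kp

ka, kp = rankine_coefficients(phi_deg=30.0)      # level backfill, beta = 0
print(f"Ka = {ka:.3f}, Kp = {kp:.3f}")           # 0.333 and 3.000

# Active pressure Pa = Ka * w * h, e.g. w = 18 kN/m^3 at depth h = 5 m:
print(f"Pa = {ka * 18.0 * 5.0:.1f} kPa")         # 30.0 kPa
```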
[ { "math_id": 0, "text": " K_a = \\frac{\\cos \\beta - \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}{\\cos \\beta + \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}*cos\\beta" }, { "math_id": 1, "text": " K_p = \\frac{\\cos \\beta + \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}{\\cos \\beta - \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}*cos\\beta" }, { "math_id": 2, "text": " K_a = \\tan ^2 \\left( 45 - \\frac{\\phi}{2} \\right) \\ " }, { "math_id": 3, "text": " K_p = \\tan ^2 \\left( 45 + \\frac{\\phi}{2} \\right) \\ " }, { "math_id": 4, "text": " P_a = K_a w h" }, { "math_id": 5, "text": " K_a = \\frac{\\cos \\beta - \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}{\\cos \\beta + \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}*\\cos\\beta" }, { "math_id": 6, "text": " P_p = K_p w h" }, { "math_id": 7, "text": " K_p = \\frac{\\cos \\beta + \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}{\\cos \\beta - \\left(\\cos ^2 \\beta - \\cos ^2 \\phi \\right)^{1/2}}*\\cos\\beta" }, { "math_id": 8, "text": " K_p = \\frac{1}{K_a}" } ]
https://en.wikipedia.org/wiki?curid=14225143
1422584
Unbounded operator
Linear operator defined on a dense linear subspace In mathematics, more specifically functional analysis and operator theory, the notion of unbounded operator provides an abstract framework for dealing with differential operators, unbounded observables in quantum mechanics, and other cases. The term "unbounded operator" can be misleading, since: "unbounded" should sometimes be understood as "not necessarily bounded"; "operator" should be understood as "linear operator" (as in the case of "bounded operator"); the domain of the operator is a linear subspace, not necessarily the whole space; and this linear subspace is not necessarily closed, and is often (but not always) assumed to be dense. In contrast to bounded operators, unbounded operators on a given space do not form an algebra, nor even a linear space, because each one is defined on its own domain. The term "operator" often means "bounded linear operator", but in the context of this article it means "unbounded operator", with the reservations made above. Short history. The theory of unbounded operators developed in the late 1920s and early 1930s as part of developing a rigorous mathematical framework for quantum mechanics. The theory's development is due to John von Neumann and Marshall Stone. Von Neumann introduced using graphs to analyze unbounded operators in 1932. Definitions and basic properties. Let "X", "Y" be Banach spaces. An unbounded operator (or simply "operator") "T" : "D"("T") → "Y" is a linear map T from a linear subspace "D"("T") ⊆ "X"—the domain of T—to the space "Y". Contrary to the usual convention, T may not be defined on the whole space X. An operator T is said to be closed if its graph Γ("T") is a closed set. (Here, the graph Γ("T") is a linear subspace of the direct sum "X" ⊕ "Y", defined as the set of all pairs ("x", "Tx"), where x runs over the domain of T.) Explicitly, this means that for every sequence {"xn"} of points from the domain of T such that "xn" → "x" and "Txn" → "y", it holds that x belongs to the domain of T and "Tx" = "y". The closedness can also be formulated in terms of the "graph norm": an operator T is closed if and only if its domain "D"("T") is a complete space with respect to the norm: formula_0 An operator T is said to be densely defined if its domain is dense in X. This also includes operators defined on the entire space X, since the whole space is dense in itself. The denseness of the domain is necessary and sufficient for the existence of the adjoint (if X and Y are Hilbert spaces) and the transpose; see the sections below. If "T" : "X" → "Y" is closed, densely defined and continuous on its domain, then its domain is all of X. A densely defined symmetric operator T on a Hilbert space H is called bounded from below if "T" + "a" is a positive operator for some real number a. That is, ⟨"Tx"|"x"⟩ ≥ −"a" ||"x"||2 for all x in the domain of T (or alternatively ⟨"Tx"|"x"⟩ ≥ "a" ||"x"||2 since "a" is arbitrary). If both T and −"T" are bounded from below then T is bounded. Example. Let "C"([0, 1]) denote the space of continuous functions on the unit interval, and let "C"1([0, 1]) denote the space of continuously differentiable functions. We equip formula_1 with the supremum norm, formula_2, making it a Banach space. Define the classical differentiation operator "d"/"dx" : "C"1([0, 1]) → "C"([0, 1]) by the usual formula: formula_3 Every differentiable function is continuous, so "C"1([0, 1]) ⊆ "C"([0, 1]). We claim that "d"/"dx" : "C"([0, 1]) → "C"([0, 1]) is a well-defined unbounded operator, with domain "C"1([0, 1]). For this, we need to show that formula_4 is linear and then, for example, exhibit some formula_5 such that formula_6 and formula_7. This is a linear operator, since a linear combination "a f " + "bg" of two continuously differentiable functions  "f" , "g" is also continuously differentiable, and formula_8 The operator is not bounded. 
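A numerical illustration of this unboundedness, assuming the standard family of test functions sin(2π"nx") used for this purpose: the sup norm of each function is 1, while the sup norm of its derivative grows like 2π"n", so no single constant "C" can satisfy ||"df"/"dx"|| ≤ "C" ||"f"||. A minimal sketch:

```python
import math

# For f_n(x) = sin(2*pi*n*x) on [0, 1] (an assumed, standard family),
# the sup norm of f_n stays 1 while the sup norm of its derivative,
# 2*pi*n, grows without bound: d/dx is unbounded on C([0, 1]).

def sup_norm(f, samples=10001):
    return max(abs(f(i / (samples - 1))) for i in range(samples))

for n in (1, 10, 100):
    f = lambda x, n=n: math.sin(2 * math.pi * n * x)
    df = lambda x, n=n: 2 * math.pi * n * math.cos(2 * math.pi * n * x)
    print(f"n = {n:3d}: ||f|| = {sup_norm(f):.2f}, ||df/dx|| = {sup_norm(df):.2f}")
```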
For example, formula_9 satisfy formula_10 but formula_11 as formula_12. The operator is densely defined, and closed. The same operator can be treated as an operator "Z" → "Z" for many choices of Banach space Z and not be bounded between any of them. At the same time, it can be bounded as an operator "X" → "Y" for other pairs of Banach spaces "X", "Y", and also as operator "Z" → "Z" for some topological vector spaces Z. As an example let "I" ⊂ R be an open interval and consider formula_13 where: formula_14 Adjoint. The adjoint of an unbounded operator can be defined in two equivalent ways. Let formula_15 be an unbounded operator between Hilbert spaces. First, it can be defined in a way analogous to how one defines the adjoint of a bounded operator. Namely, the adjoint formula_16 of T is defined as an operator with the property: formula_17 More precisely, formula_18 is defined in the following way. If formula_19 is such that formula_20 is a continuous linear functional on the domain of T, then formula_21 is declared to be an element of formula_22 and after extending the linear functional to the whole space via the Hahn–Banach theorem, it is possible to find some formula_23 in formula_24 such that formula_25 since Riesz representation theorem allows the continuous dual of the Hilbert space formula_24 to be identified with the set of linear functionals given by the inner product. This vector formula_23 is uniquely determined by formula_21 if and only if the linear functional formula_20 is densely defined; or equivalently, if T is densely defined. Finally, letting formula_26 completes the construction of formula_27 which is necessarily a linear map. The adjoint formula_18 exists if and only if T is densely defined. By definition, the domain of formula_28 consists of elements formula_21 in formula_29 such that formula_20 is continuous on the domain of T. Consequently, the domain of formula_28 could be anything; it could be trivial (that is, contains only zero). It may happen that the domain of formula_28 is a closed hyperplane and formula_28 vanishes everywhere on the domain. Thus, boundedness of formula_28 on its domain does not imply boundedness of T. On the other hand, if formula_28 is defined on the whole space then T is bounded on its domain and therefore can be extended by continuity to a bounded operator on the whole space. If the domain of formula_28 is dense, then it has its adjoint formula_30 A closed densely defined operator T is bounded if and only if formula_28 is bounded. The other equivalent definition of the adjoint can be obtained by noticing a general fact. Define a linear operator formula_32 as follows: formula_33 Since formula_32 is an isometric surjection, it is unitary. Hence: formula_34 is the graph of some operator formula_35 if and only if T is densely defined. A simple calculation shows that this "some" formula_35 satisfies: formula_36 for every x in the domain of T. Thus formula_35 is the adjoint of T. It follows immediately from the above definition that the adjoint formula_28 is closed. In particular, a self-adjoint operator (meaning formula_37) is closed. An operator T is closed and densely defined if and only if formula_31 Some well-known properties for bounded operators generalize to closed densely defined operators. The kernel of a closed operator is closed. Moreover, the kernel of a closed densely defined operator formula_38 coincides with the orthogonal complement of the range of the adjoint. 
That is, formula_39. Von Neumann's theorem states that formula_40 and formula_41 are self-adjoint, and that formula_42 and formula_43 both have bounded inverses. If formula_28 has trivial kernel, T has dense range (by the above identity). Moreover: T is surjective if and only if there is a formula_44 such that formula_45 for all formula_46 in formula_47. (This is essentially a variant of the so-called closed range theorem.) In particular, T has closed range if and only if formula_28 has closed range. In contrast to the bounded case, it is not necessary that formula_49, since, for example, it is even possible that formula_50 does not exist. This is, however, the case if, for example, T is bounded. A densely defined, closed operator T is called "normal" if it satisfies the following equivalent conditions: "T"∗"T" = "TT"∗; the domain of "T" equals the domain of "T"∗, and ||"Tx"|| = ||"T"∗"x"|| for every "x" in this domain; there exist self-adjoint operators "A", "B" such that "T" = "A" + "iB", "T"∗ = "A" − "iB", and ||"Tx"||2 = ||"Ax"||2 + ||"Bx"||2 for every "x" in the domain of "T". Every self-adjoint operator is normal. Transpose. Let formula_57 be an operator between Banach spaces. Then the "transpose" (or "dual") formula_58 of formula_48 is the linear operator satisfying: formula_59 for all formula_60 and formula_61 Here, we used the notation: formula_62 The necessary and sufficient condition for the transpose of formula_48 to exist is that formula_48 is densely defined (for essentially the same reason as for adjoints, as discussed above). For any Hilbert space formula_63 there is the anti-linear isomorphism: formula_64 given by formula_65 where formula_66 Through this isomorphism, the transpose formula_67 relates to the adjoint formula_28 in the following way: formula_68 where formula_69. (For the finite-dimensional case, this corresponds to the fact that the adjoint of a matrix is its conjugate transpose.) Note that this gives the definition of adjoint in terms of a transpose. Closed linear operators. Closed linear operators are a class of linear operators on Banach spaces. They are more general than bounded operators, and therefore not necessarily continuous, but they still retain nice enough properties that one can define the spectrum and (with certain assumptions) functional calculus for such operators. Many important linear operators which fail to be bounded turn out to be closed, such as the derivative and a large class of differential operators. Let "X", "Y" be two Banach spaces. A linear operator "A" : "D"("A") ⊆ "X" → "Y" is closed if for every sequence {"x""n"} in "D"("A") converging to x in X such that "Axn" → "y" ∈ "Y" as "n" → ∞ one has "x" ∈ "D"("A") and "Ax" = "y". Equivalently, A is closed if its graph is closed in the direct sum "X" ⊕ "Y". Given a linear operator A, not necessarily closed, if the closure of its graph in "X" ⊕ "Y" happens to be the graph of some operator, that operator is called the closure of A, and we say that A is closable. Denote the closure of "A" by "Ā". It follows that "A" is the restriction of "Ā" to "D"("A"). A core (or essential domain) of a closable operator is a subset C of "D"("A") such that the closure of the restriction of A to C is "Ā". Example. Consider the derivative operator "A" = "d"/"dx" where "X" = "Y" = "C"(["a", "b"]) is the Banach space of all continuous functions on an interval ["a", "b"]. If one takes its domain "D"("A") to be "C"1(["a", "b"]), then A is a closed operator which is not bounded. On the other hand if "D"("A") = "C"∞(["a", "b"]), then A will no longer be closed, but it will be closable, with the closure being its extension defined on "C"1(["a", "b"]). Symmetric operators and self-adjoint operators. 
An operator "T" on a Hilbert space is "symmetric" if and only if for each "x" and "y" in the domain of T we have formula_70. A densely defined operator T is symmetric if and only if it agrees with its adjoint "T"∗ restricted to the domain of "T", in other words when "T"∗ is an extension of T. In general, if "T" is densely defined and symmetric, the domain of the adjoint "T"∗ need not equal the domain of "T". If "T" is symmetric and the domain of "T" and the domain of the adjoint coincide, then we say that "T" is "self-adjoint". Note that, when "T" is self-adjoint, the existence of the adjoint implies that "T" is densely defined and since "T"∗ is necessarily closed, "T" is closed. A densely defined operator "T" is "symmetric", if the subspace Γ("T") (defined in a previous section) is orthogonal to its image "J"(Γ("T")) under "J" (where "J"("x","y"):=("y",-"x")). Equivalently, an operator "T" is "self-adjoint" if it is densely defined, closed, symmetric, and satisfies the fourth condition: both operators "T" – "i", "T" + "i" are surjective, that is, map the domain of "T" onto the whole space "H". In other words: for every "x" in "H" there exist "y" and "z" in the domain of "T" such that "Ty" – "iy" "x" and "Tz" + "iz" "x". An operator "T" is "self-adjoint", if the two subspaces Γ("T"), "J"(Γ("T")) are orthogonal and their sum is the whole space formula_71 This approach does not cover non-densely defined closed operators. Non-densely defined symmetric operators can be defined directly or via graphs, but not via adjoint operators. A symmetric operator is often studied via its Cayley transform. An operator "T" on a complex Hilbert space is symmetric if and only if the number formula_72 is real for all "x" in the domain of "T". A densely defined closed symmetric operator "T" is self-adjoint if and only if "T"∗ is symmetric. It may happen that it is not. A densely defined operator "T" is called "positive" (or "nonnegative") if its quadratic form is nonnegative, that is, formula_73 for all "x" in the domain of "T". Such operator is necessarily symmetric. The operator "T"∗"T" is self-adjoint and positive for every densely defined, closed "T". The spectral theorem applies to self-adjoint operators and moreover, to normal operators, but not to densely defined, closed operators in general, since in this case the spectrum can be empty. A symmetric operator defined everywhere is closed, therefore bounded, which is the Hellinger–Toeplitz theorem. Extension-related. By definition, an operator "T" is an "extension" of an operator "S" if Γ("S") ⊆ Γ("T"). An equivalent direct definition: for every "x" in the domain of "S", "x" belongs to the domain of "T" and "Sx" "Tx". Note that an everywhere defined extension exists for every operator, which is a purely algebraic fact explained at and based on the axiom of choice. If the given operator is not bounded then the extension is a discontinuous linear map. It is of little use since it cannot preserve important properties of the given operator (see below), and usually is highly non-unique. An operator "T" is called "closable" if it satisfies the following equivalent conditions: 0. Not all operators are closable. A closable operator "T" has the least closed extension formula_74 called the "closure" of "T". The closure of the graph of "T" is equal to the graph of formula_75 Other, non-minimal closed extensions may exist. A densely defined operator "T" is closable if and only if "T"∗ is densely defined. 
In this case formula_76 and formula_77 If "S" is densely defined and "T" is an extension of "S" then "S"∗ is an extension of "T"∗. Every symmetric operator is closable. A symmetric operator is called "maximal symmetric" if it has no symmetric extensions, except for itself. Every self-adjoint operator is maximal symmetric. The converse is false. An operator is called "essentially self-adjoint" if its closure is self-adjoint. An operator is essentially self-adjoint if and only if it has one and only one self-adjoint extension. A symmetric operator may have more than one self-adjoint extension, and even a continuum of them. A densely defined, symmetric operator "T" is essentially self-adjoint if and only if both operators "T" – "i", "T" + "i" have dense range. Let "T" be a densely defined operator. Denoting the relation ""T" is an extension of "S"" by "S" ⊂ "T" (a conventional abbreviation for Γ("S") ⊆ Γ("T")) one has the following: "T" is symmetric if and only if "T" ⊂ "T"∗; "T" is closed and symmetric if and only if "T" = "T"∗∗ ⊂ "T"∗; "T" is self-adjoint if and only if "T" = "T"∗∗ = "T"∗; and "T" is essentially self-adjoint if and only if "T" ⊂ "T"∗∗ = "T"∗. Importance of self-adjoint operators. The class of self-adjoint operators is especially important in mathematical physics. Every self-adjoint operator is densely defined, closed and symmetric. The converse holds for bounded operators but fails in general. Self-adjointness is substantially more restricting than these three properties. The famous spectral theorem holds for self-adjoint operators. In combination with Stone's theorem on one-parameter unitary groups it shows that self-adjoint operators are precisely the infinitesimal generators of strongly continuous one-parameter unitary groups. Such unitary groups are especially important for describing time evolution in classical and quantum mechanics. Notes. <templatestyles src="Reflist/styles.css" /> References. Citations. <templatestyles src="Reflist/styles.css" /> Bibliography. <templatestyles src="Refbegin/styles.css" /> "This article incorporates material from Closed operator on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License."
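To make the symmetric/self-adjoint distinction above concrete, here is a standard textbook example, written out as a sketch (it is an illustration added for this article, not taken from the sources above):

```latex
Let $T = i\,\tfrac{d}{dx}$ on $L^2(0,1)$ with domain
$D(T) = \{\, f \in H^1(0,1) : f(0) = f(1) = 0 \,\}$.
For $f \in D(T)$ and any $g \in H^1(0,1)$, integration by parts gives
\[
  \langle Tf, g\rangle
    = \int_0^1 i f'(x)\,\overline{g(x)}\,dx
    = \bigl[\, i f(x)\overline{g(x)} \,\bigr]_0^1 + \langle f,\, i g'\rangle
    = \langle f,\, i g'\rangle ,
\]
because $f$ vanishes at both endpoints. Hence $T$ is symmetric. But since the
boundary terms vanish for every $g \in H^1(0,1)$, the adjoint acts as
$T^* = i\,\tfrac{d}{dx}$ on the strictly larger domain $D(T^*) = H^1(0,1)$,
so $T$ is densely defined, closed and symmetric, yet not self-adjoint.
```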
[ { "math_id": 0, "text": "\\|x\\|_T = \\sqrt{ \\|x\\|^2 + \\|Tx\\|^2 }." }, { "math_id": 1, "text": "C([0,1])" }, { "math_id": 2, "text": "\\|\\cdot\\|_{\\infty}" }, { "math_id": 3, "text": " \\left (\\frac{d}{dx}f \\right )(x) = \\lim_{h \\to 0} \\frac{f(x+h) - f(x)}{h}, \\qquad \\forall x \\in [0, 1]." }, { "math_id": 4, "text": "\\frac{d}{dx}" }, { "math_id": 5, "text": "\\{f_n\\}_n \\subset C^1([0,1])" }, { "math_id": 6, "text": "\\|f_n\\|_\\infty=1" }, { "math_id": 7, "text": "\\sup_n \\|\\frac{d}{dx} f_n\\|_\\infty=+\\infty" }, { "math_id": 8, "text": "\\left (\\tfrac{d}{dx} \\right )(af+bg)= a \\left (\\tfrac{d}{dx} f \\right ) + b \\left (\\tfrac{d}{dx} g \\right )." }, { "math_id": 9, "text": "\\begin{cases} f_n : [0, 1] \\to [-1, 1] \\\\ f_n(x) = \\sin (2\\pi n x) \\end{cases}" }, { "math_id": 10, "text": " \\left \\|f_n \\right \\|_{\\infty} = 1," }, { "math_id": 11, "text": " \\left \\| \\left (\\tfrac{d}{dx} f_n \\right ) \\right \\|_{\\infty} = 2\\pi n \\to \\infty" }, { "math_id": 12, "text": "n\\to\\infty" }, { "math_id": 13, "text": "\\frac{d}{dx} : \\left (C^1 (I), \\|\\cdot \\|_{C^1} \\right ) \\to \\left ( C (I), \\| \\cdot \\|_{\\infty} \\right)," }, { "math_id": 14, "text": "\\| f \\|_{C^1} = \\| f \\|_{\\infty} + \\| f' \\|_{\\infty}." }, { "math_id": 15, "text": "T : D(T) \\subseteq H_1 \\to H_2" }, { "math_id": 16, "text": "T^* : D\\left(T^*\\right) \\subseteq H_2 \\to H_1" }, { "math_id": 17, "text": "\\langle Tx \\mid y \\rangle_2 = \\left \\langle x \\mid T^*y \\right \\rangle_1, \\qquad x \\in D(T)." }, { "math_id": 18, "text": "T^* y" }, { "math_id": 19, "text": "y \\in H_2" }, { "math_id": 20, "text": "x \\mapsto \\langle Tx \\mid y \\rangle" }, { "math_id": 21, "text": "y" }, { "math_id": 22, "text": "D\\left(T^*\\right)," }, { "math_id": 23, "text": "z" }, { "math_id": 24, "text": "H_1" }, { "math_id": 25, "text": "\\langle Tx \\mid y \\rangle_2 = \\langle x \\mid z \\rangle_1, \\qquad x \\in D(T)," }, { "math_id": 26, "text": "T^* y = z" }, { "math_id": 27, "text": "T^*," }, { "math_id": 28, "text": "T^*" }, { "math_id": 29, "text": "H_2" }, { "math_id": 30, "text": "T^{**}." }, { "math_id": 31, "text": "T^{**} = T." }, { "math_id": 32, "text": "J" }, { "math_id": 33, "text": "\\begin{cases} J: H_1 \\oplus H_2 \\to H_2 \\oplus H_1 \\\\ J(x \\oplus y) = -y \\oplus x \\end{cases}" }, { "math_id": 34, "text": "J(\\Gamma(T))^{\\bot}" }, { "math_id": 35, "text": "S" }, { "math_id": 36, "text": "\\langle Tx \\mid y \\rangle_2 = \\langle x \\mid Sy \\rangle_1," }, { "math_id": 37, "text": "T = T^*" }, { "math_id": 38, "text": "T : H_1 \\to H_2" }, { "math_id": 39, "text": "\\operatorname{ker}(T) = \\operatorname{ran}(T^*)^\\bot." }, { "math_id": 40, "text": "T^* T" }, { "math_id": 41, "text": "T T^*" }, { "math_id": 42, "text": "I + T^* T" }, { "math_id": 43, "text": "I + T T^*" }, { "math_id": 44, "text": "K > 0" }, { "math_id": 45, "text": "\\|f\\|_2 \\leq K \\left\\|T^* f\\right\\|_1" }, { "math_id": 46, "text": "f" }, { "math_id": 47, "text": "D\\left(T^*\\right)." 
}, { "math_id": 48, "text": "T" }, { "math_id": 49, "text": "(T S)^* = S^* T^*," }, { "math_id": 50, "text": "(T S)^*" }, { "math_id": 51, "text": "T^* T = T T^*" }, { "math_id": 52, "text": "\\|T x\\| = \\left\\|T^* x\\right\\|" }, { "math_id": 53, "text": "A, B" }, { "math_id": 54, "text": "T = A + i B," }, { "math_id": 55, "text": "T^* = A - i B," }, { "math_id": 56, "text": "\\|T x\\|^2 = \\|A x\\|^2 + \\|B x\\|^2" }, { "math_id": 57, "text": "T : B_1 \\to B_2" }, { "math_id": 58, "text": "{}^t T: {B_2}^* \\to {B_1}^*" }, { "math_id": 59, "text": "\\langle T x, y' \\rangle = \\langle x, \\left({}^t T\\right) y' \\rangle" }, { "math_id": 60, "text": "x \\in B_1" }, { "math_id": 61, "text": "y \\in B_2^*." }, { "math_id": 62, "text": "\\langle x, x' \\rangle = x'(x)." }, { "math_id": 63, "text": "H," }, { "math_id": 64, "text": "J: H^* \\to H" }, { "math_id": 65, "text": "J f = y" }, { "math_id": 66, "text": "f(x) = \\langle x \\mid y \\rangle_H, (x \\in H)." }, { "math_id": 67, "text": "{}^t T" }, { "math_id": 68, "text": "T^* = J_1 \\left({}^t T\\right) J_2^{-1}," }, { "math_id": 69, "text": "J_j: H_j^* \\to H_j" }, { "math_id": 70, "text": "\\langle Tx \\mid y \\rangle = \\lang x \\mid Ty \\rang" }, { "math_id": 71, "text": " H \\oplus H ." }, { "math_id": 72, "text": " \\langle Tx \\mid x \\rangle " }, { "math_id": 73, "text": "\\langle Tx \\mid x \\rangle \\ge 0 " }, { "math_id": 74, "text": " \\overline T " }, { "math_id": 75, "text": " \\overline T. " }, { "math_id": 76, "text": "\\overline T = T^{**} " }, { "math_id": 77, "text": " (\\overline T)^* = T^*. " } ]
https://en.wikipedia.org/wiki?curid=1422584
14225958
Q-function
Statistics function In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, formula_0 is the probability that a normal (Gaussian) random variable will obtain a value larger than formula_1 standard deviations. Equivalently, formula_0 is the probability that a standard normal random variable takes a value larger than formula_1. If formula_2 is a Gaussian random variable with mean formula_3 and variance formula_4, then formula_5 is standard normal and formula_6 where formula_7. Other definitions of the "Q"-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally. Because of its relation to the cumulative distribution function of the normal distribution, the "Q"-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics. Definition and basic properties. Formally, the "Q"-function is defined as formula_8 Thus, formula_9 where formula_10 is the cumulative distribution function of the standard normal Gaussian distribution. The "Q"-function can be expressed in terms of the error function, or the complementary error function, as formula_11 An alternative form of the "Q"-function known as Craig's formula, after its discoverer, is expressed as: formula_12 This expression is valid only for positive values of "x", but it can be used in conjunction with "Q"("x") = 1 − "Q"(−"x") to obtain "Q"("x") for negative values. This form is advantageous in that the range of integration is fixed and finite. Craig's formula was later extended by Behnad (2020) for the "Q"-function of the sum of two non-negative variables, as follows: formula_13 Bounds and approximations. For positive "x", the "Q"-function admits the elementary bounds formula_14 where formula_15 is the density function of the standard normal distribution, and the bounds become increasingly tight for large "x". Using the substitution "v" = "u"2/2, the upper bound is derived as follows: formula_16 Similarly, using formula_17 and the quotient rule, formula_18 Solving for "Q"("x") provides the lower bound. The geometric mean of the upper and lower bound gives a suitable approximation for formula_0: formula_19 A tighter family of bounds and approximations, with two tunable coefficients "a" and "b", is formula_20 For formula_21, the best upper bound is given by formula_22 and formula_23 with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by formula_24 and formula_25 with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by formula_26 and formula_27 with maximum absolute relative error of 1.17%. Simple exponential bounds and approximations are also available: the Chernoff-type bound formula_28 the tightened bound formula_29 and the approximation formula_30 These generalize to sums of exponentials of the form formula_31 In particular, a systematic methodology has been presented to solve the numerical coefficients formula_32 that yield a minimax approximation or bound: formula_33, formula_34, or formula_35 for formula_36. With the example coefficients tabulated in the corresponding paper for formula_37, the relative and absolute approximation errors are less than formula_38 and formula_39, respectively. The coefficients formula_32 for many variations of the exponential approximations and bounds up to formula_40 have been released to open access as a comprehensive dataset. 
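To make the error figures quoted above concrete, here is a minimal numerical sketch (an illustration assuming NumPy and SciPy are available; it is not part of the cited sources). It evaluates the two-parameter form formula_20 with the "best approximation" coefficients a = 0.339 and b = 5.510 against the exact Q computed from the complementary error function:

```python
import numpy as np
from scipy.special import erfc

def q_exact(x):
    # Q(x) via the complementary error function: Q(x) = 0.5 * erfc(x / sqrt(2)),
    # as in the identities above.
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_tilde(x, a=0.339, b=5.510):
    # Two-parameter form: phi(x) / ((1 - a) * x + a * sqrt(x^2 + b)), x >= 0;
    # the defaults are the "best approximation" pair quoted above.
    phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return phi / ((1.0 - a) * x + a * np.sqrt(x**2 + b))

x = np.linspace(0.0, 6.0, 601)
rel_err = np.abs(q_tilde(x) - q_exact(x)) / q_exact(x)
print(rel_err.max())  # stays below roughly 0.3%, consistent with the 0.27% figure
```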
The Karagiannidis–Lioumpas approximation, with fitting parameters formula_42 chosen over the range formula_41, takes the form formula_43 The absolute error between formula_44 and formula_45 over the range formula_46 is minimized by evaluating formula_47 Using formula_48 and numerically integrating, Karagiannidis and Lioumpas found that the minimum error occurred when formula_49 which gave a good approximation for formula_50 Substituting these values and using the relationship between formula_0 and formula_45 from above gives formula_51 Alternative coefficients are also available for the above Karagiannidis–Lioumpas approximation for tailoring accuracy for a specific application or transforming it into a tight bound. A simpler exponential-fit approximation is formula_52 The fitting coefficients formula_53 can be optimized over any desired range of arguments in order to minimize the sum of square errors (formula_54, formula_55, formula_56 for formula_57) or minimize the maximum absolute error (formula_58, formula_59, formula_60 for formula_57). This approximation offers some benefits such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of formula_0 is trivial and does not alter the algebraic form of the approximation). Inverse "Q". The inverse "Q"-function can be related to the inverse error functions: formula_61 The function formula_62 finds application in digital communications. It is usually expressed in dB and generally called the Q-factor: formula_63 where "y" is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal-to-noise ratio that yields a bit error rate equal to "y" (a short numerical sketch of this computation is given at the end of this article). Values. The "Q"-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB and Mathematica. Some values of the "Q"-function are given below for reference. <templatestyles src="Col-begin/styles.css"/> Generalization to high dimensions. The "Q"-function can be generalized to higher dimensions: formula_64 where formula_65 follows the multivariate normal distribution with covariance formula_66 and the threshold is of the form formula_67 for some positive vector formula_68 and positive constant formula_69. As in the one-dimensional case, there is no simple analytical formula for the "Q"-function. Nevertheless, the "Q"-function can be approximated arbitrarily well as formula_70 becomes larger and larger.
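As promised above, here is a minimal sketch of the Q-factor computation from formula_61 and formula_63 (an illustration assuming SciPy; the function names are our own):

```python
import numpy as np
from scipy.special import erfcinv

def q_inv(y):
    # Inverse Q via the inverse complementary error function:
    # Qinv(y) = sqrt(2) * erfcinv(2 * y), matching the identity above.
    return np.sqrt(2.0) * erfcinv(2.0 * y)

def q_factor_db(ber):
    # Q-factor in dB: 20 * log10(Qinv(BER)).
    return 20.0 * np.log10(q_inv(ber))

print(q_factor_db(1e-9))  # roughly 15.6 dB for a bit-error rate of 1e-9
```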
[ { "math_id": 0, "text": "Q(x)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "Y" }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": "\\sigma^2" }, { "math_id": 5, "text": "X = \\frac{Y-\\mu}{\\sigma}" }, { "math_id": 6, "text": "P(Y > y) = P(X > x) = Q(x)" }, { "math_id": 7, "text": "x = \\frac{y-\\mu}{\\sigma}" }, { "math_id": 8, "text": "Q(x) = \\frac{1}{\\sqrt{2\\pi}} \\int_x^\\infty \\exp\\left(-\\frac{u^2}{2}\\right) \\, du." }, { "math_id": 9, "text": "Q(x) = 1 - Q(-x) = 1 - \\Phi(x)\\,\\!," }, { "math_id": 10, "text": "\\Phi(x)" }, { "math_id": 11, "text": "\n\\begin{align}\nQ(x) &=\\frac{1}{2}\\left( \\frac{2}{\\sqrt{\\pi}} \\int_{x/\\sqrt{2}}^\\infty \\exp\\left(-t^2\\right) \\, dt \\right)\\\\\n&= \\frac{1}{2} - \\frac{1}{2} \\operatorname{erf} \\left( \\frac{x}{\\sqrt{2}} \\right) ~~\\text{ -or-}\\\\\n&= \\frac{1}{2}\\operatorname{erfc} \\left(\\frac{x}{\\sqrt{2}} \\right).\n\\end{align}\n" }, { "math_id": 12, "text": "Q(x) = \\frac{1}{\\pi} \\int_0^{\\frac{\\pi}{2}} \\exp \\left( - \\frac{x^2}{2 \\sin^2 \\theta} \\right) d\\theta." }, { "math_id": 13, "text": "Q(x+y) = \\frac{1}{\\pi} \\int_0^{\\frac{\\pi}{2}} \\exp \\left( - \\frac{x^2}{2 \\sin^2 \\theta} - \\frac{y^2}{2 \\cos^2 \\theta} \\right) d\\theta, \\quad x,y \\geqslant 0 ." }, { "math_id": 14, "text": "\\left (\\frac{x}{1+x^2} \\right ) \\phi(x) < Q(x) < \\frac{\\phi(x)}{x}, \\qquad x>0," }, { "math_id": 15, "text": "\\phi(x)" }, { "math_id": 16, "text": "Q(x) =\\int_x^\\infty\\phi(u)\\,du <\\int_x^\\infty\\frac ux\\phi(u)\\,du =\\int_{\\frac{x^2}{2}}^\\infty\\frac{e^{-v}}{x\\sqrt{2\\pi}}\\,dv=-\\biggl.\\frac{e^{-v}}{x\\sqrt{2\\pi}}\\biggr|_{\\frac{x^2}{2}}^\\infty=\\frac{\\phi(x)}{x}." }, { "math_id": 17, "text": "\\phi'(u) = - u \\phi(u)" }, { "math_id": 18, "text": "\\left(1+\\frac1{x^2}\\right)Q(x) =\\int_x^\\infty \\left(1+\\frac1{x^2}\\right)\\phi(u)\\,du >\\int_x^\\infty \\left(1+\\frac1{u^2}\\right)\\phi(u)\\,du =-\\biggl.\\frac{\\phi(u)}u\\biggr|_x^\\infty\n=\\frac{\\phi(x)}x. " }, { "math_id": 19, "text": "Q(x) \\approx \\frac{\\phi(x)}{\\sqrt{1 + x^2}}, \\qquad x \\geq 0. " }, { "math_id": 20, "text": " \\tilde{Q}(x) = \\frac{\\phi(x)}{(1-a)x + a\\sqrt{x^2 + b}}. " }, { "math_id": 21, "text": "x \\geq 0" }, { "math_id": 22, "text": "a = 0.344" }, { "math_id": 23, "text": "b = 5.334" }, { "math_id": 24, "text": "a = 0.339" }, { "math_id": 25, "text": "b = 5.510" }, { "math_id": 26, "text": "a = 1/\\pi" }, { "math_id": 27, "text": "b = 2 \\pi" }, { "math_id": 28, "text": "Q(x)\\leq e^{-\\frac{x^2}{2}}, \\qquad x>0" }, { "math_id": 29, "text": "Q(x)\\leq \\tfrac{1}{4}e^{-x^2}+\\tfrac{1}{4}e^{-\\frac{x^2}{2}} \\leq \\tfrac{1}{2}e^{-\\frac{x^2}{2}}, \\qquad x>0" }, { "math_id": 30, "text": "Q(x)\\approx \\frac{1}{12}e^{-\\frac{x^2}{2}}+\\frac{1}{4}e^{-\\frac{2}{3} x^2}, \\qquad x>0 " }, { "math_id": 31, "text": "\\tilde{Q}(x) = \\sum_{n=1}^N a_n e^{-b_n x^2}." 
}, { "math_id": 32, "text": "\\{(a_n,b_n)\\}_{n=1}^N" }, { "math_id": 33, "text": "Q(x) \\approx \\tilde{Q}(x)" }, { "math_id": 34, "text": "Q(x) \\leq \\tilde{Q}(x)" }, { "math_id": 35, "text": "Q(x) \\geq \\tilde{Q}(x)" }, { "math_id": 36, "text": "x\\geq0" }, { "math_id": 37, "text": "N = 20" }, { "math_id": 38, "text": "2.831 \\cdot 10^{-6}" }, { "math_id": 39, "text": "1.416 \\cdot 10^{-6}" }, { "math_id": 40, "text": "N = 25" }, { "math_id": 41, "text": "x \\in [0,\\infty)" }, { "math_id": 42, "text": "\\{A, B\\}" }, { "math_id": 43, "text": "f(x; A, B) = \\frac{\\left(1 - e^{-Ax}\\right)e^{-x^2}}{B\\sqrt{\\pi} x} \\approx \\operatorname{erfc} \\left(x\\right)." }, { "math_id": 44, "text": "f(x; A, B)" }, { "math_id": 45, "text": "\\operatorname{erfc}(x)" }, { "math_id": 46, "text": "[0, R]" }, { "math_id": 47, "text": "\\{A, B\\} = \\underset{\\{A,B\\}}{\\arg \\min} \\frac{1}{R} \\int_0^R | f(x; A, B) - \\operatorname{erfc}(x) |dx." }, { "math_id": 48, "text": "R = 20" }, { "math_id": 49, "text": "\\{A, B\\} = \\{1.98, 1.135\\}," }, { "math_id": 50, "text": "\\forall x \\ge 0." }, { "math_id": 51, "text": " Q(x)\\approx\\frac{\\left( 1-e^{\\frac{-1.98x} {\\sqrt{2}}}\\right) e^{-\\frac{x^{2}}{2}}}{1.135\\sqrt{2\\pi}x}, x \\ge 0. " }, { "math_id": 52, "text": " Q(x) \\approx e^{-ax^2-bx-c}, \\qquad x \\ge 0. " }, { "math_id": 53, "text": " (a,b,c) " }, { "math_id": 54, "text": "a = 0.3842" }, { "math_id": 55, "text": "b = 0.7640" }, { "math_id": 56, "text": "c = 0.6964" }, { "math_id": 57, "text": "x \\in [0,20]" }, { "math_id": 58, "text": "a = 0.4920" }, { "math_id": 59, "text": "b = 0.2887" }, { "math_id": 60, "text": "c = 1.1893" }, { "math_id": 61, "text": "Q^{-1}(y) = \\sqrt{2}\\ \\mathrm{erf}^{-1}(1-2y) = \\sqrt{2}\\ \\mathrm{erfc}^{-1}(2y)" }, { "math_id": 62, "text": "Q^{-1}(y)" }, { "math_id": 63, "text": "\\mathrm{Q\\text{-}factor} = 20 \\log_{10}\\!\\left(Q^{-1}(y)\\right)\\!~\\mathrm{dB}" }, { "math_id": 64, "text": "Q(\\mathbf{x})= \\mathbb{P}(\\mathbf{X}\\geq \\mathbf{x})," }, { "math_id": 65, "text": "\\mathbf{X}\\sim \\mathcal{N}(\\mathbf{0},\\, \\Sigma) " }, { "math_id": 66, "text": "\\Sigma " }, { "math_id": 67, "text": "\\mathbf{x}=\\gamma\\Sigma\\mathbf{l}^*" }, { "math_id": 68, "text": " \\mathbf{l}^*>\\mathbf{0}" }, { "math_id": 69, "text": "\\gamma>0" }, { "math_id": 70, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=14225958
1422748
Nonlinear Schrödinger equation
Nonlinear form of the Schrödinger equation In theoretical physics, the (one-dimensional) nonlinear Schrödinger equation (NLSE) is a nonlinear variation of the Schrödinger equation. It is a classical field equation whose principal applications are to the propagation of light in nonlinear optical fibers and planar waveguides and to Bose–Einstein condensates confined to highly anisotropic, cigar-shaped traps, in the mean-field regime. Additionally, the equation appears in the studies of small-amplitude gravity waves on the surface of deep inviscid (zero-viscosity) water; the Langmuir waves in hot plasmas; the propagation of plane-diffracted wave beams in the focusing regions of the ionosphere; the propagation of Davydov's alpha-helix solitons, which are responsible for energy transport along molecular chains; and many others. More generally, the NLSE appears as one of the universal equations that describe the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear media that have dispersion. Unlike the linear Schrödinger equation, the NLSE never describes the time evolution of a quantum state. The 1D NLSE is an example of an integrable model. In quantum mechanics, the 1D NLSE is a special case of the classical nonlinear Schrödinger field, which in turn is a classical limit of a quantum Schrödinger field. Conversely, when the classical Schrödinger field is canonically quantized, it becomes a quantum field theory (which is linear, despite the fact that it is called ″quantum "nonlinear" Schrödinger equation″) that describes bosonic point particles with delta-function interactions: the particles either repel or attract when they are at the same point. In fact, when the number of particles is finite, this quantum field theory is equivalent to the Lieb–Liniger model. Both the quantum and the classical 1D nonlinear Schrödinger equations are integrable. Of special interest is the limit of infinite strength repulsion, in which case the Lieb–Liniger model becomes the Tonks–Girardeau gas (also called the hard-core Bose gas, or impenetrable Bose gas). In this limit, the bosons may, by a change of variables that is a continuum generalization of the Jordan–Wigner transformation, be transformed to a system of one-dimensional noninteracting spinless fermions. The nonlinear Schrödinger equation is a simplified 1+1-dimensional form of the Ginzburg–Landau equation, introduced by Ginzburg and Landau in 1950 in their work on superconductivity, and was written down explicitly by R. Y. Chiao, E. Garmire, and C. H. Townes (1964, equation (5)) in their study of optical beams. The multi-dimensional version replaces the second spatial derivative by the Laplacian. In more than one dimension, the equation is not integrable; it allows for collapse and wave turbulence. Equation. The nonlinear Schrödinger equation is a nonlinear partial differential equation, applicable to classical and quantum mechanics. Classical equation. The classical field equation (in dimensionless form) is: Nonlinear Schrödinger equation "(Classical field theory)" formula_0 for the complex field "ψ"("x","t"). This equation arises from the Hamiltonian formula_1 with the Poisson brackets formula_2 formula_3 Unlike its linear counterpart, it never describes the time evolution of a quantum state. The case with negative κ is called focusing and allows for bright soliton solutions (localized in space, and having spatial attenuation towards infinity) as well as breather solutions. 
It can be solved exactly by use of the inverse scattering transform, as shown by Zakharov and Shabat (see below). The other case, with κ positive, is the defocusing NLS which has dark soliton solutions (having constant amplitude at infinity, and a local spatial dip in amplitude). Quantum mechanics. To get the quantized version, simply replace the Poisson brackets by commutators formula_4 and normal order the Hamiltonian formula_5 The quantum version was solved by Bethe ansatz by Lieb and Liniger. Thermodynamics was described by Chen-Ning Yang. Quantum correlation functions were also evaluated by Korepin in 1993. The model has higher conservation laws; Davies and Korepin expressed them in terms of local fields in 1989. Solving the equation. The nonlinear Schrödinger equation is integrable in 1d: Zakharov and Shabat (1972) solved it with the inverse scattering transform. The corresponding linear system of equations is known as the Zakharov–Shabat system: formula_6 where formula_7 The nonlinear Schrödinger equation arises as the compatibility condition of the Zakharov–Shabat system: formula_8 By setting "q" = "r"* or "q" = − "r"* the nonlinear Schrödinger equation with attractive or repulsive interaction is obtained. An alternative approach uses the Zakharov–Shabat system directly and employs the following Darboux transformation: formula_9 which leaves the system invariant. Here, "φ" is another invertible matrix solution (different from "ϕ") of the Zakharov–Shabat system with spectral parameter Ω: formula_10 Starting from the trivial solution "U" = 0 and iterating, one obtains the solutions with "n" solitons. The NLS equation is a partial differential equation like the Gross–Pitaevskii equation. Usually it does not have an analytic solution, and the same numerical methods used to solve the Gross–Pitaevskii equation, such as the split-step Crank–Nicolson and Fourier spectral methods, are used for its solution; various Fortran and C programs are also available for this purpose (a minimal split-step Fourier sketch is given at the end of this article). Galilean invariance. The nonlinear Schrödinger equation is Galilean invariant in the following sense: Given a solution "ψ"("x, t") a new solution can be obtained by replacing "x" with "x" + "vt" everywhere in ψ("x, t") and by appending a phase factor of formula_11: formula_12 The nonlinear Schrödinger equation in fiber optics. In optics, the nonlinear Schrödinger equation occurs in the Manakov system, a model of wave propagation in fiber optics. The function ψ represents a wave and the nonlinear Schrödinger equation describes the propagation of the wave through a nonlinear medium. The second-order derivative represents the dispersion, while the "κ" term represents the nonlinearity. The equation models many nonlinearity effects in a fiber, including but not limited to self-phase modulation, four-wave mixing, second-harmonic generation, stimulated Raman scattering, optical solitons, ultrashort pulses, etc. The nonlinear Schrödinger equation in water waves. For water waves, the nonlinear Schrödinger equation describes the evolution of the envelope of modulated wave groups. In a paper in 1968, Vladimir E. Zakharov described the Hamiltonian structure of water waves. In the same paper, Zakharov showed that, for slowly modulated wave groups, the wave amplitude approximately satisfies the nonlinear Schrödinger equation. The value of the nonlinearity parameter "к" depends on the relative water depth. For deep water, with the water depth large compared to the wavelength of the water waves, "к" is negative and envelope solitons may occur. 
Additionally, the group velocity of these envelope solitons could be increased by an acceleration induced by an external time-dependent water flow. For shallow water, with wavelengths longer than 4.6 times the water depth, the nonlinearity parameter "к" is positive and "wave groups" with "envelope" solitons do not exist. In shallow water, "surface-elevation" solitons or waves of translation do exist, but they are not governed by the nonlinear Schrödinger equation. The nonlinear Schrödinger equation is thought to be important for explaining the formation of rogue waves. The complex field "ψ", as appearing in the nonlinear Schrödinger equation, is related to the amplitude and phase of the water waves. Consider a slowly modulated carrier wave with water surface elevation "η" of the form: formula_13 where "a"("x"0, "t"0) and "θ"("x"0, "t"0) are the slowly modulated amplitude and phase. Further "ω"0 and "k"0 are the (constant) angular frequency and wavenumber of the carrier waves, which have to satisfy the dispersion relation "ω"0 = Ω("k"0). Then formula_14 So its modulus |"ψ"| is the wave amplitude "a", and its argument arg("ψ") is the phase "θ". The relation between the physical coordinates ("x"0, "t"0) and the ("x, t") coordinates, as used in the nonlinear Schrödinger equation given above, is given by: formula_15 Thus ("x, t") is a transformed coordinate system moving with the group velocity Ω'("k"0) of the carrier waves. The dispersion-relation curvature Ω"("k"0), representing group velocity dispersion, is always negative for water waves under the action of gravity, for any water depth. For waves on the water surface of deep water, the coefficients of importance for the nonlinear Schrödinger equation are: formula_16 so formula_17 where "g" is the acceleration due to gravity at the Earth's surface. In the original ("x"0, "t"0) coordinates the nonlinear Schrödinger equation for water waves reads: formula_18 with formula_19 (i.e. the complex conjugate of formula_20) and formula_21 So formula_22 for deep water waves. Gauge equivalent counterpart. NLSE (1) is gauge equivalent to the following isotropic Landau-Lifshitz equation (LLE) or Heisenberg ferromagnet equation formula_23 Note that this equation admits several integrable and non-integrable generalizations in 2 + 1 dimensions, such as the Ishimori equation. Zero-curvature formulation. The NLSE is equivalent to the curvature of a particular formula_24-connection on formula_25 being equal to zero. Explicitly, with coordinates formula_26 on formula_25, the connection components formula_27 are given by formula_28 formula_29 where the formula_30 are the Pauli matrices. Then the zero-curvature equation formula_31 is equivalent to the NLSE formula_32. The zero-curvature equation is so named as it corresponds to the curvature being equal to zero if it is defined as formula_33. The pair of matrices formula_34 and formula_35 is also known as a Lax pair for the NLSE, in the sense that the zero-curvature equation recovers the PDE rather than them satisfying Lax's equation. Relation to vortices. Hasimoto (1972) showed that the work of da Rios (1906) on vortex filaments is closely related to the nonlinear Schrödinger equation. Subsequently, this correspondence was used to show that breather solutions can also arise for a vortex filament. Notes. <templatestyles src="Reflist/styles.css" /> References. Notes. <templatestyles src="Reflist/styles.css" /> Other. <templatestyles src="Refbegin/styles.css" />
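The following is the minimal split-step Fourier sketch referenced from the "Solving the equation" section above (an illustration assuming NumPy; the grid and step sizes are arbitrary choices). With κ = −1 the initial profile ψ(x, 0) = sech(x) is an exact bright soliton, ψ(x, t) = sech(x)e^(it/2), so the modulus of the numerical solution should stay close to sech(x) up to the splitting error:

```python
import numpy as np

# Grid and parameters for i*psi_t = -0.5*psi_xx + kappa*|psi|^2*psi.
kappa = -1.0                                   # focusing case
n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
dt, steps = 1e-3, 5000                         # integrate up to t = 5

psi = 1.0 / np.cosh(x)                         # exact soliton profile at t = 0
half_kinetic = np.exp(-0.25j * k**2 * dt)      # half-step of the linear part
for _ in range(steps):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi = psi * np.exp(-1j * kappa * np.abs(psi)**2 * dt)  # nonlinear step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))

# |psi| should still match sech(x), up to the O(dt^2) Strang splitting error.
print(np.max(np.abs(np.abs(psi) - 1.0 / np.cosh(x))))
```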
[ { "math_id": 0, "text": "i\\partial_t\\psi=-{1\\over 2}\\partial^2_x\\psi+\\kappa|\\psi|^2 \\psi" }, { "math_id": 1, "text": "H=\\int \\mathrm{d}x \\left[{1\\over 2}|\\partial_x\\psi|^2+{\\kappa \\over 2}|\\psi|^4\\right]" }, { "math_id": 2, "text": "\\{\\psi(x),\\psi(y)\\}=\\{\\psi^*(x),\\psi^*(y)\\}=0 \\, " }, { "math_id": 3, "text": "\\{\\psi^*(x),\\psi(y)\\}=i\\delta(x-y). \\," }, { "math_id": 4, "text": "\\begin{align}\n {}[\\psi(x),\\psi(y)] &= [\\psi^*(x),\\psi^*(y)] = 0\\\\\n {}[\\psi^*(x),\\psi(y)] &= -\\delta(x-y)\n\\end{align}" }, { "math_id": 5, "text": "H=\\int dx \\left[{1\\over 2}\\partial_x\\psi^\\dagger\\partial_x\\psi+{\\kappa \\over 2}\\psi^\\dagger\\psi^\\dagger\\psi\\psi\\right]." }, { "math_id": 6, "text": " \\begin{align}\n \\phi_x &= J\\phi\\Lambda + U\\phi \\\\\n \\phi_t &= 2J\\phi\\Lambda^2 + 2U\\phi\\Lambda + \\left(JU^2 - JU_x\\right)\\phi,\n\\end{align} " }, { "math_id": 7, "text": "\n \\Lambda =\n \\begin{pmatrix}\n \\lambda_1&0\\\\\n 0&\\lambda_2\n \\end{pmatrix}, \\quad\n J = i\\sigma_z =\n \\begin{pmatrix}\n i & 0 \\\\\n 0 & -i\n \\end{pmatrix}, \\quad\n U = i\n \\begin{pmatrix}\n 0 & q \\\\\n r & 0\n \\end{pmatrix}.\n" }, { "math_id": 8, "text": " \\phi_{xt} = \\phi_{tx}\n \\quad \\Rightarrow \\quad\n U_t = -JU_{xx} + 2JU^2 U\n \\quad \\Leftrightarrow \\quad \n \\begin{cases}\n iq_t = q_{xx} + 2qrq \\\\\n ir_t = -r_{xx} - 2qrr. \n \\end{cases}\n" }, { "math_id": 9, "text": " \\begin{align}\n \\phi \\to \\phi[1] &= \\phi\\Lambda - \\sigma\\phi \\\\\n U \\to U[1] &= U + [J, \\sigma] \\\\\n \\sigma &= \\varphi\\Omega\\varphi^{-1} \n\\end{align} " }, { "math_id": 10, "text": " \\begin{align}\n \\varphi_x &= J\\varphi\\Omega + U\\varphi \\\\\n \\varphi_t &= 2J\\varphi\\Omega^2 + 2U\\varphi\\Omega + \\left(JU^2 - JU_x\\right)\\varphi.\n\\end{align} " }, { "math_id": 11, "text": "e^{-iv(x+vt/2)}\\," }, { "math_id": 12, "text": "\\psi(x,t) \\mapsto \\psi_{[v]}(x,t)=\\psi(x+vt,t)\\; e^{-iv(x+vt/2)}." }, { "math_id": 13, "text": "\n \\eta = a(x_0,t_0)\\; \\cos \\left[ k_0\\, x_0 - \\omega_0\\, t_0 - \\theta(x_0,t_0) \\right],\n" }, { "math_id": 14, "text": " \\psi = a\\; \\exp \\left( i \\theta \\right). " }, { "math_id": 15, "text": " x = k_0 \\left[ x_0 - \\Omega'(k_0)\\; t_0 \\right], \\quad t = k_0^2 \\left[ -\\Omega''(k_0) \\right]\\; t_0 " }, { "math_id": 16, "text": "\\kappa = - 2 k_0^2, \\quad \\Omega(k_0) = \\sqrt{g k_0} = \\omega_0 \\,\\!" }, { "math_id": 17, "text": "\\Omega'(k_0) = \\frac{1}{2} \\frac{\\omega_0}{k_0}, \\quad \\Omega''(k_0) = -\\frac{1}{4} \\frac{\\omega_0}{k_0^2}, \\,\\!" }, { "math_id": 18, "text": "i\\, \\partial_{t_0} A + i\\, \\Omega'(k_0)\\, \\partial_{x_0} A + \\tfrac12 \\Omega''(k_0)\\, \\partial_{x_0 x_0} A - \\nu\\, |A|^2\\, A = 0," }, { "math_id": 19, "text": "A=\\psi^*" }, { "math_id": 20, "text": "\\psi" }, { "math_id": 21, "text": "\\nu=\\kappa\\, k_0^2\\, \\Omega''(k_0)." }, { "math_id": 22, "text": "\\nu = \\tfrac12 \\omega_0 k_0^2" }, { "math_id": 23, "text": "\\vec{S}_t=\\vec{S}\\wedge \\vec{S}_{xx}. 
\\qquad " }, { "math_id": 24, "text": "\\mathfrak{su}(2)" }, { "math_id": 25, "text": "\\mathbb{R}^2" }, { "math_id": 26, "text": "(x,t)" }, { "math_id": 27, "text": "A_\\mu" }, { "math_id": 28, "text": "A_x = \\begin{pmatrix}i\\lambda & i\\varphi^* \\\\ i\\varphi & -i\\lambda\\end{pmatrix} " }, { "math_id": 29, "text": "A_t = \\begin{pmatrix} 2i\\lambda^2 - i|\\varphi|^2 & 2i\\lambda\\varphi^* + \\varphi_x^* \\\\ 2i\\lambda\\varphi - \\varphi_x & -2i\\lambda^2 + i|\\varphi|^2\\end{pmatrix} " }, { "math_id": 30, "text": "\\sigma_i" }, { "math_id": 31, "text": "\\partial_t A_x - \\partial_x A_t + [A_x, A_t] = 0" }, { "math_id": 32, "text": "i\\varphi_t + \\varphi_{xx} + 2|\\varphi|^2\\varphi = 0" }, { "math_id": 33, "text": "F_{\\mu\\nu} = [\\partial_\\mu - A_\\mu, \\partial_\\nu - A_\\nu]" }, { "math_id": 34, "text": "A_x" }, { "math_id": 35, "text": "A_t" } ]
https://en.wikipedia.org/wiki?curid=1422748
14227786
Transmissibility (structural dynamics)
Transmissibility, in the context of structural dynamics, can be defined as the ratio of the maximum force (formula_0) on the floor as a result of the vibration of a machine to the maximum machine force (formula_1): formula_2 where formula_3 is the damping ratio, formula_4 is the frequency ratio, and formula_5 is the ratio of the dynamic to the static amplitude. References. <templatestyles src="Reflist/styles.css" />
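The article defines formula_5 only as the dynamic-to-static amplitude ratio. The sketch below is a minimal illustration that assumes the standard single-degree-of-freedom amplification factor for harmonic excitation, formula_5 = 1/√((1 − β²)² + (2ζβ)²), and then evaluates the transmissibility formula_2:

```python
import math

def transmissibility(beta, zeta):
    # TR = Rd * sqrt(1 + (2*zeta*beta)^2), with the assumed standard
    # amplification factor Rd = 1 / sqrt((1 - beta^2)^2 + (2*zeta*beta)^2).
    rd = 1.0 / math.sqrt((1.0 - beta**2) ** 2 + (2.0 * zeta * beta) ** 2)
    return rd * math.sqrt(1.0 + (2.0 * zeta * beta) ** 2)

# For any damping ratio, TR = 1 exactly at beta = sqrt(2); isolation
# (TR < 1) requires operating at frequency ratios above that point.
print(transmissibility(beta=math.sqrt(2.0), zeta=0.1))  # 1.0
print(transmissibility(beta=3.0, zeta=0.05))            # ~0.13
```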
[ { "math_id": 0, "text": "f_{max}" }, { "math_id": 1, "text": "P_0" }, { "math_id": 2, "text": "TR = \\frac{f_{max}}{P_0} = R_d\\sqrt{1+(2\\zeta\\beta)^2}" }, { "math_id": 3, "text": "\\zeta" }, { "math_id": 4, "text": "\\beta" }, { "math_id": 5, "text": "R_d" } ]
https://en.wikipedia.org/wiki?curid=14227786
14230307
Kozeny–Carman equation
Relation used in the field of fluid dynamics The Kozeny–Carman equation (or Carman–Kozeny equation or Kozeny equation) is a relation used in the field of fluid dynamics to calculate the pressure drop of a fluid flowing through a packed bed of solids. It is named after Josef Kozeny and Philip C. Carman. The equation is only valid for creeping flow, i.e. in the slowest limit of laminar flow. The equation was derived by Kozeny (1927) and Carman (1937, 1956) from a starting point of (a) modelling fluid flow in a packed bed as laminar fluid flow in a collection of curving passages/tubes crossing the packed bed and (b) Poiseuille's law describing laminar fluid flow in straight, circular section pipes. Equation. The equation is given as: formula_0 where: formula_1 is the pressure drop, formula_2 is the total height of the bed, formula_3 is the superficial or "empty-tower" velocity, formula_4 is the porosity (void fraction) of the bed, formula_5 is the dynamic viscosity of the fluid, formula_6 is the sphericity of the particles in the packed bed, and formula_7 is the particle diameter. This equation holds for flow through packed beds with particle Reynolds numbers up to approximately 1.0, after which point frequent shifting of flow channels in the bed causes considerable kinetic energy losses. This equation is a special case of Darcy's law, which states that "flow is proportional to the pressure gradient and inversely proportional to the fluid viscosity" and is given as: q formula_8 Combining these equations gives the final Kozeny equation for absolute (single phase) permeability: formula_9 where: formula_10 is the absolute permeability of the bed. History. The equation was first proposed by Kozeny (1927) and later modified by Carman (1937, 1956). A similar equation was derived independently by Fair and Hatch in 1933. A comprehensive review of other equations has been published. References. <templatestyles src="Reflist/styles.css" />
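As a worked numerical illustration, the two relations above can be evaluated directly (a minimal sketch; the fluid and bed values below are assumptions chosen for the example, not from the article):

```python
def kozeny_carman_pressure_gradient(mu, phi_s, d_p, eps, v0):
    # Pressure drop per unit bed length (Pa/m) from the Kozeny-Carman equation:
    # dP/L = 150 * mu / (phi_s^2 * d_p^2) * (1 - eps)^2 / eps^3 * v0.
    return 150.0 * mu / (phi_s**2 * d_p**2) * (1.0 - eps) ** 2 / eps**3 * v0

def kozeny_permeability(phi_s, d_p, eps):
    # Absolute (single-phase) permeability (m^2), as given in the text:
    # kappa = phi_s^2 * eps^3 * d_p^2 / (180 * (1 - eps)^2).
    return phi_s**2 * eps**3 * d_p**2 / (180.0 * (1.0 - eps) ** 2)

# Water (mu ~ 1e-3 Pa*s) creeping through a bed of 1 mm spheres (phi_s = 1)
# with 40% porosity at a superficial velocity of 1 mm/s:
print(kozeny_carman_pressure_gradient(1e-3, 1.0, 1e-3, 0.4, 1e-3))  # ~844 Pa/m
print(kozeny_permeability(1.0, 1e-3, 0.4))                          # ~9.9e-10 m^2
```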
[ { "math_id": 0, "text": "\\frac{\\Delta P}{L} = \\frac{150 \\mu}{\\mathit{\\Phi}_\\mathrm{s}^2 d_\\mathrm{p}^2}\\frac{(1-\\varepsilon)^2}{\\varepsilon^3}V_\\mathrm{0}" }, { "math_id": 1, "text": "\\Delta P" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "V_\\mathrm{0}" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "\\mathit{\\Phi}_\\mathrm{s}" }, { "math_id": 7, "text": "d_\\mathrm{p}" }, { "math_id": 8, "text": " = \\frac{\\kappa}{\\mu} \\frac{\\Delta P}{L}" }, { "math_id": 9, "text": " \\kappa = \\mathit{\\Phi}_\\mathrm{s}^2 \\frac {\\varepsilon^3 d_\\mathrm{p}^2}{180(1-\\varepsilon)^2} " }, { "math_id": 10, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=14230307
14230769
Integraph
Mechanical graphing calculator for solving differential equations An Integraph is a mechanical analog computing device for plotting the integral of a graphically defined function. History. Gaspard-Gustave de Coriolis first described the fundamental principle of a mechanical integraph in 1836 in the "Journal de Mathématiques Pures et Appliquées". A full description of an integraph was published independently around 1880 by both British physicist Sir Charles Vernon Boys and Bruno Abdank-Abakanowicz, a Polish-Lithuanian mathematician/electrical engineer. Boys described a design for an integraph in 1881 in the "Philosophical Magazine". Abakanowicz developed a practical working prototype in 1878, with improved versions of the prototype being manufactured by firms such as Coradi in Zürich, Switzerland. Customized and further improved versions of Abakanowicz's design were manufactured until well after 1900, with these later modifications being made by Abakanowicz in collaboration with M. D. Napoli, the "principal inspector of the railroad Chemin de Fer de l'Est and head of its testing laboratory". Description. The input to the integraph is a tracing point that is guided along the differential curve. The output is defined by the path taken by a disk that rolls along the paper without slipping. The mechanism sets the angle of the output disk based on the position of the input curve: if the input is zero, the disk is angled to roll straight, parallel to the x axis on the Cartesian plane. If the input is above zero, the disk is angled slightly toward the positive y direction, such that the y value of its position increases as it rolls in that direction. If the input is below zero, the disk is angled the other way such that its y position decreases as it rolls. The hardware consists of a rectangular carriage which moves left to right on rollers. Two sides of the carriage run parallel to the x axis. The other two sides are parallel to the y axis. Along the trailing vertical (y axis) rail slides a smaller carriage holding a tracing point. Along the leading vertical rail slides a second smaller carriage to which is affixed a small, sharp disc, which rests and rolls (but does not slide) on the graphing paper. The trailing carriage is connected both with a point in the center of the carriage and the disc on the leading rail by a system of sliding crossheads and wires, such that the tracing point must follow the disc's tangential path. Mechanism. The integraph plots (traces) the "integral curve" formula_0 when we are given the "differential curve", formula_1 The mathematical basis of the mechanism depends on the following considerations: For any point ("x", "y") of the differential curve, construct the auxiliary triangle with vertices ("x", "y"), ("x", 0) and ("x" − 1, 0). The hypotenuse of this right triangle intersects the "X"-axis making an angle the value of whose tangent is "y". This hypotenuse is parallel to the tangent line of the integral curve at ("X", "Y") that corresponds to ("x", "y"). The integraph may be used to obtain a quadrature of the circle. If the differential curve is the unit circle, the integral curve intersects the lines "X" = ± 1 at points that are equally spaced at a distance of π/2. References. <templatestyles src="Reflist/styles.css" /> Gauthier-Villars, 1886 available at Google Books
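As a check on the quadrature-of-the-circle remark above, here is a minimal numerical sketch (an illustration assuming NumPy) in which cumulative trapezoidal integration plays the role of the rolling disc, so that the output trace satisfies dY/dX = y at each point:

```python
import numpy as np

def integraph_trace(f, x0, x1, n=100000):
    # Numerical analogue of the integraph: the output trace Y(X) is steered
    # so that its slope equals the input (differential) curve y = f(x).
    x = np.linspace(x0, x1, n)
    y = f(x)
    # cumulative trapezoidal integration stands in for the rolling disc
    Y = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2.0 * np.diff(x))))
    return x, Y

# Differential curve: the upper half of the unit circle.
x, Y = integraph_trace(lambda t: np.sqrt(np.clip(1.0 - t**2, 0.0, None)), -1.0, 1.0)
print(Y[-1], np.pi / 2)  # the trace rises by ~pi/2 between X = -1 and X = +1
```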
[ { "math_id": 0, "text": "Y = F(x) = \\int f(x) dx," }, { "math_id": 1, "text": " y = f(x)." } ]
https://en.wikipedia.org/wiki?curid=14230769
14232419
Anelloviridae
Family of viruses Anelloviridae is a family of viruses. They are classified as vertebrate viruses and have a non-enveloped capsid, which is round with isometric, icosahedral symmetry and has a triangulation number of 3. The name is derived from Italian "anello" 'ring', referring to the circular genome of anelloviruses. Genome. The genome is not segmented and contains a single molecule of circular, negative-sense, single-stranded DNA. The complete genome is 3000–4000 nucleotides long. Anelloviruses also contain a non-coding region with one to two 80–110 nt sequences of high GC content, forming a secondary structure of stems and loops. The genome contains several ORFs and shows a high degree of genetic diversity. Although the mechanism of replication has not been studied heavily, anelloviruses appear to use the rolling-circle mechanism, in which the ssDNA is first converted to dsDNA. Replication requires a host polymerase, as the genome itself does not encode a viral polymerase, and, as a result, anelloviruses must replicate inside the cell's nucleus. Anelloviruses also have two main open reading frames, ORF1 and ORF2, which initiate at two different AUG codons. Additional ORFs can be formed as well, and these ORFs may overlap partially. ORF1 is thought to encode the putative capsid protein and replication-associated protein of anelloviruses. The specific roles of these replication-associated proteins are still being studied. ORF2 is thought to encode either a protein with phosphatase activity (TTMVs) or a peptide that suppresses the NF-formula_0B pathways (TTVs). It has been seen to have a highly conserved motif in the N-terminal part. Clinical. Anellovirus species are highly prevalent and genetically diverse, and they form part of the virome of most humans. They are acquired early in life, within the first month, and replicate persistently. It remains debated whether the first infection is symptomatic. They are probably repressed by host immunity, as anelloviruses increase during host immunosuppression. The overall prevalence in the general population is over 90%, and anelloviruses have been found on all continents. They cause chronic human viral infections that have not yet been associated with disease. There is also no evidence of viral clearance following infection. At least 200 different species are present in humans and animals. It has been shown that there are multiple methods of transmission, such as saliva droplets and maternal or sexual routes. Taxonomy. Most genera have names of the pattern of a certain (Greek, Arabic or Hebrew) letter+torquevirus, with the exception of gyrovirus, as Alphatorquevirus (where "torque" means 'necklace') is one of the first genera to represent the family. The family contains the following genera: <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=14232419
1423256
Differential signalling
Method for electrically transmitting information Differential signalling is a method for electrically transmitting information using two complementary signals. The technique sends the same electrical signal as a differential pair of signals, each in its own conductor. The pair of conductors can be wires in a twisted-pair or ribbon cable or traces on a printed circuit board. Electrically, the two conductors carry voltage signals which are equal in magnitude, but of opposite polarity. The receiving circuit responds to the difference between the two signals, which results in a signal with a magnitude twice as large. The symmetrical signals of differential signalling may be referred to as "balanced", but this term is more appropriately applied to balanced circuits and balanced lines which reject common-mode interference when fed into a differential receiver. Differential signalling does not make a line balanced, nor does noise rejection in balanced circuits require differential signalling. Differential signalling contrasts with single-ended signalling, which drives only one conductor with the signal, while the other is connected to a fixed reference voltage. Advantages. Contrary to popular belief, differential signalling does not affect noise cancellation. Balanced lines with differential receivers will reject noise regardless of whether the signal is differential or single-ended, but since balanced line noise rejection requires a differential receiver anyway, differential signalling is often used on balanced lines. Some of the benefits of differential signalling include: Differential signalling works for both analog signalling, as in balanced audio, and digital signalling, as in RS-422, RS-485, Ethernet over twisted pair, PCI Express, DisplayPort, HDMI and USB. Suitability for use with low-voltage electronics. The electronics industry, particularly in portable and mobile devices, continually strives to lower supply voltage to save power. A low supply voltage, however, reduces noise immunity. Differential signalling helps to reduce these problems because, for a given supply voltage, it provides twice the noise immunity of a single-ended system. To see why, consider a single-ended digital system with supply voltage formula_0. The high logic level is formula_1 and the low logic level is 0 V. The difference between the two levels is therefore formula_2. Now consider a differential system with the same supply voltage. The voltage difference in the high state, where one wire is at formula_1 and the other at 0 V, is formula_2. The voltage difference in the low state, where the voltages on the wires are exchanged, is formula_3. The difference between high and low logic levels is therefore formula_4. This is twice the difference of the single-ended system. If the voltage noise on one wire is uncorrelated to the noise on the other one, it takes twice as much noise to cause an error with the differential system as with the single-ended system. In other words, differential signalling doubles the noise immunity. Comparison with single-ended signalling. In single-ended signalling, the transmitter generates a single voltage that the receiver compares with a fixed reference voltage, both relative to a common ground connection shared by both ends. In many instances, single-ended designs are not feasible. Another difficulty is the electromagnetic interference that can be generated by a single-ended signalling system that attempts to operate at high speed. Relation to balanced interfaces. 
When transmitting signals differentially between two pieces of equipment, it is common to do so through a balanced interface. An "interface" is a subsystem containing three parts: a driver, a line, and a receiver. These three components complete a full circuit for a signal to travel through, and the impedances of this circuit are what determine whether the interface as a whole is balanced or not: "A balanced circuit is a two-conductor circuit in which both conductors and all circuits connected to them have the same impedance to ground and to all other conductors." Balanced interfaces were developed as a protection scheme against noise. In theory, a balanced interface can reject any interference so long as it is common-mode (voltages that appear with equal magnitude and the same polarity in both conductors). There exists great confusion as to what constitutes a balanced interface and how it relates to differential signalling. In reality, they are two completely independent concepts: balanced interfacing concerns noise and interference rejection, while differential signalling only concerns headroom. The impedance balance of a circuit does not determine the signals it can carry and vice versa. Uses of differential pairs. The technique minimizes electronic crosstalk and electromagnetic interference, both noise emission and noise acceptance, and can achieve a constant or known characteristic impedance, allowing impedance matching techniques important in a high-speed signal transmission line or high-quality balanced line and balanced circuit audio signal path. Differential pairs include: Differential pairs generally carry differential or semi-differential signals, such as high-speed digital serial interfaces including LVDS, differential ECL, PECL, LVPECL, Hypertransport, Ethernet over twisted pair, serial digital interface, RS-422, RS-485, USB, Serial ATA, TMDS, FireWire, and HDMI, etc., or else high quality and/or high frequency analog signals (e.g. video signals, balanced audio signals, etc.). Differential pairs in high-speed serial links often use length-matched wires or conductors. Data rate examples. Data rates of some interfaces implemented with differential pairs include the following: Transmission lines. The type of transmission line that connects two devices (chips, modules) often dictates the type of signalling. Single-ended signalling is typically used with coaxial cables, in which one conductor totally screens the other from the environment. All screens (or shields) are combined into a single piece of material to form a common ground. Differential signalling, however, is typically used with a balanced pair of conductors. For short cables and low frequencies, the two methods are equivalent, so cheap single-ended circuits with a common ground can be used with cheap cables. As signalling speeds become faster, wires begin to behave as transmission lines. Use in computers. Differential signalling is often used in computers to reduce electromagnetic interference, because complete screening is not possible with microstrips and chips in computers, due to geometric constraints and the fact that screening does not work at DC. If a DC power supply line and a low-voltage signal line share the same ground, the power current returning through the ground can induce a significant voltage in it. A low-resistance ground reduces this problem to some extent. A balanced pair of microstrip lines is a convenient solution because it does not need an additional PCB layer, as a stripline does. 
Because each line causes a matching image current in the ground plane, which is required anyway for supplying power, the pair looks like four lines and therefore has a shorter crosstalk distance than a simple isolated pair. In fact, it behaves as well as a twisted pair. Low crosstalk is important when many lines are packed into a small space, as on a typical PCB. High-voltage differential signalling. High-voltage differential (HVD) signalling uses high-voltage signals. In computer electronics, "high voltage" normally means 5 volts or more. SCSI-1 variations included a high-voltage differential implementation whose maximum cable length was many times that of the single-ended version. SCSI equipment, for example, allows a maximum total cable length of 25 meters using HVD, while single-ended SCSI allows a maximum cable length of 1.5 to 6 meters, depending on bus speed. LVD versions of SCSI allow less than 25 m cable length not because of the lower voltage, but because these SCSI standards allow much higher speeds than the older HVD SCSI. The generic term "high-voltage differential signalling" describes a variety of systems. Low-voltage differential signalling (LVDS), on the other hand, is a specific system defined by a TIA/EIA standard. Polarity switching. Some integrated circuits dealing with differential signals provide a hardware option (via strapping options, under firmware control, or even automatic) to swap the polarity of the two differential signals, called "differential pair swapping", "polarity reversion", "differential pair inversion", "polarity inversion", or "lane inversion". This can be utilized to simplify or improve the routing of high-speed differential pairs of traces on printed circuit boards in hardware development, to help cope with common cabling errors caused by swapped wires, or to easily fix common design errors under firmware control. Many Ethernet PHY transceivers support this as "auto polarity detection and correction" (not to be confused with a similar "auto crossover" feature). PCIe and USB SuperSpeed also support lane polarity inversion. Another way to deal with polarity errors is to use polarity-insensitive line codes. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "V_S" }, { "math_id": 1, "text": "V_S\\," }, { "math_id": 2, "text": "V_S - 0\\,\\mathrm{V} = V_S" }, { "math_id": 3, "text": "0\\,\\mathrm{V} - V_S = -V_S" }, { "math_id": 4, "text": "V_S - (-V_S) = 2V_S\\," } ]
https://en.wikipedia.org/wiki?curid=1423256
14234559
EMI 2001
The EMI 2001 broadcast studio camera was an early, very successful British-made Plumbicon studio camera that included the lens within the body of the camera. Four 30 mm tubes allowed one tube to be dedicated solely to producing a relatively high resolution monochrome signal, with the other three tubes each providing red, green and blue signals. Even though semiconductors were used in most of the camera, the highly sensitive head amplifiers still used thermionic valves in the first generation of the design. Design. Integrating the lens within the body of the camera had both positive and negative effects. On the positive side, it meant the optical nodal point of the camera was close to the centre of gravity, which could make operation easier and more instinctive when used on movable camera mounts such as pedestals. The downside was that lens manufacturers were limited in which lenses they could adapt to fit the camera. This made the 2001 less attractive for outside broadcasts. The 2001 was both heavy and large. The pull-out handles at each corner needed four people to safely move the camera with the lens in place. It also required a separate remote camera control unit, and the cable connecting the two was over 2 inches thick. The standard servo-controlled studio zoom lens had a 5 to 50° horizontal angle of view, with a minimum focus distance of either 36 inches (J type) or 18 inches (K type). Four-tube prism optics. The EMI 2001 used a four-way prism assembly to split the light into its components, using the same novel principles that had been developed by Philips for its three-way splitter. These new assemblies used the property of total internal reflection, within the prisms, to direct the light to the pick-up tubes. The techniques were described in a patent first filed in 1961. The three-way prism was also described in a description of the LDE3 camera. The technique of using a prism assembly in this way was far superior to the earlier light-splitting arrangements, since the prism assembly was neat and compact and reproducibility in manufacture was much improved. The problems previously experienced with double imaging (common with plate glass dichroic mirrors) were also eliminated. Furthermore, because of the near-normal incidence of light onto the dichroic surfaces, sensitivity to polarised light was reduced. Consequently, EMI chose to use a four-tube version of the prism splitter for its new colour camera, in order to retain all the advantages of the method. However, devising a single prism arrangement for four tubes was less easy than for three, and several alternatives were initially considered. In an early configuration of the prism block, shown in the thumbnail, three of the pick-up tubes were envisaged to be in a common plane, but with the fourth (red) tube sticking up, nearly at right angles to the other three. (This configuration was to be used in the Russian four-tube camera type KT-116M.) For the final optical arrangement in the EMI 2001, the green prism was changed to have a fully silvered mirror at about 45 degrees, to deflect the green light sideways, resulting in the final four-spoke arrangement. (When viewed from the back of the camera, the four tubes were seen as a diagonal cross.) This optical arrangement defined the cross-section dimensions of the camera (which was not small: 380 × 380 mm), but it did allow the zoom lens to be located within the camera body. 
The removal of individual pick-up tubes was also possible without any need to remove the scanning coils, as the tube bases were easily accessible at the outer corners. Generating the image. The composite signals of the NTSC, PAL and SECAM systems are made up of a wideband luminance signal and two narrow-band colour difference signals containing B-Y and R-Y. If a band-limited version of the signal from the luminance tube were used to derive the colour difference signals, without modification, then colour errors would occur. This is because the luminance characteristic expected by the colour processing must be made up in a particular way, using specific proportions of red, green and blue, whereas the signal from the luminance tube has a more general monochromatic characteristic akin to that from a conventional black and white camera. In addition, the application of gamma correction to the signals further complicates the situation (display tubes have, approximately, a square law characteristic with γ ≈ 2.2). As shown below, it is beneficial, but not sufficient, to shape the luminance response to simulate that of the NTSC (PAL or SECAM) luminance characteristic (by, for example, placing an optical filter in front of the luminance tube to pass light with the required luminosity function, or by a special dichroic surface which reflects light to the luminance tube with the required luminosity function). For a basic 3-colour system the wideband luminance signal (Y'), for NTSC, PAL and SECAM, is given by: formula_0 In the case of a separate luminance tube, with appropriate spectral shaping, the output signal (Y) is given by: formula_1 which when gamma-corrected gives: formula_2 formula_3 does not equal formula_4 except when R = G = B, which corresponds to neutral (grey) tones. When deriving the two narrowband colour difference signals containing R_N and B_N, a band-limited version of the luminance signal (y') is required, namely: formula_5 but the band-limited signal from the luminance tube is: formula_6 As before, formula_7 does not equal formula_8. If the gamma-corrected luminance signal formula_7 is simply used instead of y', then colour errors result which can be appreciable for saturated colours. In the EMI 2001 a process known as Delta-L Correction is used to overcome this problem. A band-limited luminance difference correction signal, ΔL, is formed, where: formula_9 This narrowband signal is used to correct the wideband luminance channel at low frequencies, so the monochrome signal transmitted becomes: formula_10 With this corrected luminance signal, the correct colour rendition is obtained, whilst still retaining the sharp luminance detail of a 4-tube camera. The narrowband R, G and B signals are gamma-corrected and applied to a suitable matrixing circuit to derive the correction. With grey-scale scenes formula_7 = formula_8, and the signal reverts to that of the luminance tube only. Transistor circuits – the ring-of-three. The circuitry in the 2001 was all solid state apart from the pick-up tubes and, in the early cameras, the first stage of the head amplifiers. The circuitry made extensive use of the 'ring-of-three' amplifier configuration, shown simplified in the figure. This circuit was easily adapted for various uses. In the normal, non-inverting mode, the bottom of resistor R2 is grounded and the input is via Vin(1). In this mode, the amplifier behaves somewhat like a 'current feedback amplifier'. 
The circuit maintains its bandwidth as the gain is increased (by reducing R2), unlike a conventional voltage feedback op-amp. The circuit has a 'virtual earth' point at 'A' so that inverting or summing amplifiers are possible. In this mode, the base of TR1 is grounded and the input is via Vin(2) and the series resistor R2. Band-defining linear-phase filters. The EMI 2001 used band-defining filters in all four channels. For the colour channels and narrow-band luminance, the low-pass filters had a Gaussian-shaped pass-band and, although such filters were not 'sharp-cut', they were linear-phase and gave negligible overshoots on transients. The wide-band luminance channel had its bandwidth defined by a linear-phase low-pass filter with a 3 dB cut-off at 6.8 MHz. Its design followed the lattice filter methods of Bode. The amplitude responses of the two filters are shown below. Also shown are the phase deviations of the two filters from the linear phase/frequency characteristic, given by: formula_11 where f is the frequency in Hz and φ(f) is in degrees. At low frequencies, the propagation delays of the two filters were both the same (approximately 177 ns, as implied by the slope of the phase characteristic: 6.36 × 10⁻⁵ degrees per Hz divided by 360 degrees per cycle gives 1.77 × 10⁻⁷ s). Comet tails and blooming. "Blooming" refers to the situation where bright areas in a picture 'bleed' into adjacent dark areas, with a consequential loss of picture sharpness and detail. The condition leads to "comet-tails" which streak across a picture, following moving highlights. The pictures from early Plumbicon cameras were susceptible to comet tails and blooming and, although these effects had not been a major concern in the previous generation of cameras, which employed vidicon or image orthicon tubes, they were an annoying feature of Plumbicon tubes. The problems arose when the beam current of a pick-up tube was insufficient to fully discharge the target in very bright areas of an image. Reducing the target voltage and increasing the beam current of the pick-up tube helped to mitigate the problem but, with early Plumbicons, this resulted in loss of resolution and increased lag. This problem with Plumbicon tubes was already of concern in the early 1960s, and all the early Plumbicon cameras suffered from it, including the EMI 2001. With separate mesh tubes there was some improvement, because higher beam currents could be used without loss of resolution. Some cameras introduced anti-comet-tail (ACT) circuits to provide dynamic correction when an overload was sensed, but these were not used in the EMI 2001. The problem was not satisfactorily resolved until the late 1960s, when an extra 'anti-comet-tail' gun was introduced into Plumbicon tubes. New camera designs, produced in the 1970s, were able to include the new improved tubes, and usually did so. Some 2001 cameras were modified to take the new tubes, but it was a difficult retrofit procedure, because of the complexity of the additional circuitry. ACT and the EMI 2001/1. As supplied by EMI, the 2001 and the later 2001/1 did not have any form of ACT (anti-comet tail) or HOP (highlight overload protection). This is why its performance was poor in this respect when compared with the next generation of cameras supplied in the 1970s. None of the first generation of true broadcast cameras in the middle to late 1960s had ACT, so the EMI 2001 was not unusual. 
When viewing old recordings, it is very easy to tell whether a programme used EMI 2001s (or any other first-generation PAL colour camera) to capture the images: because the camera did not have ACT circuits, comet tails would often appear as coloured "blobs" or "splodges", usually caused by a light source or by light reflecting off a highly reflective or polished surface. Some broadcasters modified their cameras to have ACT, but retrofitting ACT/HOP was not an easy modification: four new HOP camera tubes would be needed, and the tube bases, wiring harness, four head amplifiers, four video amplifiers and the tube beam current boards would all have needed work done to them. ACT and HOP work by using an extra electrode in the tube to 'flood discharge' the target during the flyback period. Great care was needed in setting up the HOP voltages, as damage to the tube's emission could occur. Once fitted, the ACT circuits were adjusted so that the comet tail did not appear as a "blob". Even when ACT circuits had been retrofitted, comet tails would sometimes occur, consisting either of a mix of two separate colours, one colour inside the other (e.g. a red comet tail with a smaller green one inside it), or of a non-primary colour, such as pink. The problems occurred when the settings of the ACT circuits were not well matched. Development. In 1963, prior to the development of the 2001, an experimental four-tube camera was constructed by EMI engineers. This experimental camera had been inspired by RCA's new four-tube camera, the TK-42, and used the same tube arrangement, i.e. an image Orthicon tube in the luminance channel and three Vidicon tubes in the colour channels. In addition, the experimental camera had an integrally mounted Varotal III zoom lens. It was demonstrated to the BBC in 1964, where it received a mixed reception. Pictures from the camera had disappointing colorimetry, but sharp luminance detail. A production version of this camera, the EMI 2000, was planned but never built: the BBC had initially specified Vidicon tubes but, before production, changed policy to adopt the newly available Plumbicon tubes supplied by Philips in the new camera, the EMI 2001. This delayed production, as design revisions were necessary to accommodate the performance parameters of the new tubes in the circuits that they would be integrated with. After development delays, production quantities of this camera were not ready for the launch of the BBC's colour television service in 1967, and Marconi Mk VII four-tube cameras were urgently ordered. When the EMI 2001 was ready for production in early 1968, the Marconi Mk VII cameras the BBC had ordered were moved to the weather, news and presentation studios in Television Centre (where movement would be less of an issue, as camera operators had complained of discomfort from operating them with unusual postures). The BBC had also ordered Philips LDK3 three-tube cameras, mainly used for outside broadcasts and in some regional studios. EMI's experimental 4-tube camera. EMI engineers visited the United States in 1963, in order to view RCA's new four-tube colour camera, the TK-42. Immediately following this visit, EMI Research Labs embarked on a programme to build an experimental camera using the same format. The construction took only six weeks of intensive effort, aided by the cannibalisation of parts from existing EMI cameras. 
Items were taken from an EMI Type 203 image Orthicon monochrome studio camera, for the luminance channel, and a Type 204 industrial colour camera, for the colour channels. The latter contained three Vidicon tubes and a colour-splitting system using plate glass dichroic mirrors. In addition, a Varotal III zoom lens was integrated into the body of the experimental camera. The camera was housed in a simple box-shaped structure with ribs of extruded aluminium and plain side panels. The experimental camera was demonstrated to the BBC in 1963, where it received a mixed reception. At that time, the BBC was evaluating an early Philips three-tube camera which used some newly available Plumbicon pick-up tubes. It had been set up by BBC engineers to give highly saturated colour pictures, and they were unimpressed by the 'tinted' pictures of the EMI camera. In order to better judge the performance of the then existing cameras, the BBC organised comparison tests between the experimental EMI camera, a Philips camera and a Marconi three-I.O. camera. In these tests, the colorimetry of the pictures from the EMI camera compared unfavourably with the other two, but it did give the sharpest pictures. The development of the EMI 2001. In spite of the BBC's lukewarm reception of the experimental camera, EMI persisted with the four-tube concept, now using Plumbicon tubes as suggested by Wood, although there was some delay before the work started. There were several reasons for the delay. Firstly, EMI's board hesitated to provide the financial investment needed for the project. Secondly, there was indecision regarding where to place the work but, eventually, the Colour TV Department of the Research Labs was chosen in preference to the Broadcast Equipment Division (the existing supplier of EMI's monochrome cameras and studio equipment). Thirdly, there was concern regarding the reliability of supply of Plumbicon tubes, as Philips was the only supplier. Fourthly, there were concerns regarding the variable quality standards of the early Plumbicon pick-up tubes, as some tubes were found to produce unstable pictures. Although most issues with tube quality were quickly resolved by Philips, there remained concerns regarding 'comet tails' and 'blooming'. After the EMI board granted approval for the new camera, in late 1964, work on it progressed rapidly. The camera was to use four Plumbicon pick-up tubes and solid state circuitry, to include a zoom lens as standard and to use prism optics. After the late start, the first fully operational prototype was shown to the BBC and others in 1966, only just in time to meet BBC time-scales for the introduction of its new colour service. Early cameras used thermionic valves (vacuum tubes) in the first stages of the head amplifiers, but later FET amplifiers were introduced, such cameras being designated type 2001/1. All other circuitry in the cameras, apart from the pick-up tubes, was solid state. Sales of the Type 2001 were very successful in the UK. The BBC and many of the independent TV companies installed the cameras in their studios during the rapid expansion of the UK colour services after 1967. However, by the time EMI had fulfilled its UK orders (towards the end of the decade), the boom in the US market had been missed, and the European market had yet to fully develop or was already dominated by Philips cameras. In addition, rival companies were already bringing out new designs, and EMI now found only a limited market for a camera with a four-tube configuration. 
Operational history. The camera was first produced in 1966, and by the early 1970s almost all of BBC Television's studios and many outside broadcast (OB) units were equipped with the 2001. Several ITV companies purchased or leased the camera, including Thames Television, Yorkshire Television, Associated Television/Central Independent Television, Granada, HTV, Anglia, London Weekend Television and Independent Television News. Independent outfits such as the early cable television stations Rediffusion Cablevision, Sheffield Cablevision and the educational television arm of the Inner London Education Authority also purchased the camera. When sold abroad, the EMI 2001 was carried under the Thomson SA brand – hence "Thomson TH.T 2001". How this came about is unknown, as EMI and Thomson SA did not have business links. The Thomson 2001s, like the EMIs, also used Plumbicons; however, owing to a brochure which was printed in French, it was presumed that they used Vidicon tubes. Apart from the silver viewfinder squares (instead of white) and the brand name change on the front and sides, the cameras were the same. In the United States, the cameras were marketed by International Video Corporation as the IVC/EMI 2001-B (four tubes), with another version, the IVC/EMI 2001-C, consisting of three tubes. Only one U.S. station is known to have purchased the 2001: WSNS-TV in Chicago, in the early years of its operation. Although there was no predicted lifespan for the camera, the heavy, hot-running four-tube design was considered somewhat outdated even when it was new, which contributed to the camera's near-total failure to sell to broadcasters outside the UK. Furthermore, when EMI closed down the Broadcast Equipment Division in the late 1970s, studios were deprived of technical and spares support for their cameras. Consequently, several ITV companies began replacing them in the late 1970s, with the last commercial operators (Yorkshire &amp; Central) both phasing them out in 1986 (in the main, Central had disposed of them in 1984; however, they were used for continuity and presentation from its Birmingham operation until 1986). However, the BBC kept a number of such cameras in operation at BBC Television Centre, its various regional outposts and its BBC Elstree Centre for some years afterwards, the last being at Elstree until July 1991; they were kept running by "cannibalising" identical cameras left behind by Central when the BBC purchased Elstree from it in 1984, as well as BBC EMI 2001s disposed of in previous years. Zoom lenses for the 2001. Rank Taylor Hobson declined to offer a zoom lens for EMI's new camera, claiming it was fully committed elsewhere, but Angenieux (see Pierre Angenieux) expressed its interest in supplying zoom lenses for the project. The French company offered two zoom lenses for the camera; the first was a 10:1 zoom for studio use and the second a larger unit for outside broadcasts. Both could be accommodated within the body of the camera, although the O.B. lens did protrude a little. To accommodate a four-way prism splitter, extra distance was needed from the back of the lens to the image focal plane, when compared with a three-way splitter. This placed severe demands on the lens designer, but Angenieux was able to achieve EMI's requirements, provided that field-flattening lenses were fitted in front of each pick-up tube. Early cameras used this arrangement, but with later zoom designs these lenses became unnecessary. The servo motors and the servo amplifiers were supplied by Evershed Power Optics. 
The driver amplifiers for the servo motors were mounted in the camera body alongside the lens. The servo-driven zoom lens and the associated amplifier circuitry added considerably to the weight of the camera. In addition, incorporating the servo drivers within the camera body precluded the use of other makes of zoom lens. Benefits of an integral zoom lens. The integral zoom lens was a popular feature of the EMI 2001, well liked by camera operators, and the camera was sometimes referred to within the television industry as "the cameraman's camera". With no protruding zoom lens, the studio camera was only 537 mm long, enabling it to be used in small spaces and to be panned very easily (it had a low moment of inertia). In addition, pictures produced while panning had a more natural look. The operational flexibility of the camera was demonstrated in training videos. Although the integral zoom lens camera was popular within the UK, this concept had little influence on designs or sales of cameras elsewhere. Only Marconi, with its small, neat Mk VIII, and the cameras from Link a few years later, followed the concept. Most camera manufacturers claimed that a format where the lens protruded in front of the camera gave a greater choice of lens supplier and, of course, it was a format that made life easier for camera designers, so the enthusiasm of camera operators for the integral zoom concept was found to have little long-term influence on designers. Even EMI abandoned the notion of having an integral zoom lens with its new camera, the Type 2005, which had a format reminiscent of the very earliest Philips experimental camera, with its three horizontally configured tubes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; See also. Four-tube television camera References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Y' = 0.3R^{\\frac{1}{\\gamma}} + 0.59G^{\\frac{1}{\\gamma}} + 0.11B^{\\frac{1}{\\gamma}}" }, { "math_id": 1, "text": "Y = 0.3R + 0.59G + 0.11B " }, { "math_id": 2, "text": "Y^{\\frac{1}{\\gamma}} = (0.3R +0.59G + 0.11B)^{\\frac{1}{\\gamma}}" }, { "math_id": 3, "text": "Y^{\\frac{1}{\\gamma}}" }, { "math_id": 4, "text": " Y' " }, { "math_id": 5, "text": "y' = 0.3R_N^{\\frac{1}{\\gamma}} + 0.59G_N^{\\frac{1}{\\gamma}} + 0.11B_N^{\\frac{1}{\\gamma}}" }, { "math_id": 6, "text": "y^{\\frac{1}{\\gamma}} = (0.3R_N +0.59G_N + 0.11B_N)^{\\frac{1}{\\gamma}}" }, { "math_id": 7, "text": "y^{\\frac{1}{\\gamma}}" }, { "math_id": 8, "text": " y' " }, { "math_id": 9, "text": " \\Delta L = y^{\\frac{1}{\\gamma}}- y'" }, { "math_id": 10, "text": "Y^{\\frac{1}{\\gamma}} - \\Delta L = Y^{\\frac{1}{\\gamma}} - (y^{\\frac{1}{\\gamma}}- y')" }, { "math_id": 11, "text": "\\phi (f) = -6.36 \\times 10^{-5} \\times f " } ]
https://en.wikipedia.org/wiki?curid=14234559
14240296
One-way quantum computer
Method of quantum computing The one-way quantum computer, also known as the measurement-based quantum computer (MBQC), is a method of quantum computing that first prepares an entangled "resource state", usually a cluster state or graph state, then performs single-qubit measurements on it. It is "one-way" because the resource state is destroyed by the measurements. The outcomes of the individual measurements are random, but they are related in such a way that the computation always succeeds. In general, the choices of basis for later measurements need to depend on the results of earlier measurements, and hence the measurements cannot all be performed at the same time. The hardware implementation of MBQC mainly relies on photonic devices, due to the difficulty of entangling photons without measurements, and the relative simplicity of creating and measuring them. However, MBQC is also possible with matter-based qubits. The process of entanglement and measurement can be described with the help of graph tools and group theory, in particular by the elements from the stabilizer group. Definition. The purpose of quantum computing is to build an information theory with the features of quantum mechanics: instead of encoding a binary unit of information (bit), which can be switched to 1 or 0, a quantum binary unit of information (qubit) can be 0 and 1 at the same time, thanks to the phenomenon called superposition. Another key feature for quantum computing is the entanglement between the qubits. In the quantum logic gate model, a set of qubits, called a register, is prepared at the beginning of the computation, then a set of logic operations over the qubits, carried out by unitary operators, is implemented. A quantum circuit is formed by a register of qubits on which unitary transformations are applied. In measurement-based quantum computation, instead of implementing a logic operation via unitary transformations, the same operation is executed by entangling a number formula_1 of input qubits with a cluster of formula_2 ancillary qubits, forming an overall source state of formula_3 qubits, and then measuring a number formula_4 of them. The remaining formula_5 output qubits will be affected by the measurements because of the entanglement with the measured qubits. The one-way computer has been proved to be a universal quantum computer, which means it can reproduce any unitary operation over an arbitrary number of qubits. General procedure. The standard process of measurement-based quantum computing consists of three steps: entangle the qubits, measure the ancillae (auxiliary qubits) and correct the outputs. In the first step, the qubits are entangled in order to prepare the source state. In the second step, the ancillae are measured, affecting the state of the output qubits. However, the measurement outcomes are non-deterministic, due to the probabilistic nature of quantum mechanics: in order to carry on the computation in a deterministic way, some correction operators, called byproducts, are introduced. Preparing the source state. At the beginning of the computation, the qubits can be distinguished into two categories: the input and the ancillary qubits. The inputs represent the qubits set in a generic formula_6 state, on which some unitary transformations are to act. 
In order to prepare the source state, all the ancillary qubits must be prepared in the formula_7 state: formula_8 where formula_9 and formula_10 are the quantum encoding for the classical formula_11 and formula_12 bits: formula_13. A register with formula_14 qubits will therefore be set as formula_15. Thereafter, the entanglement between two qubits can be performed by applying a formula_16 gate operation. The matrix representation of such a two-qubit operator is given by formula_17 The action of a formula_16 gate over two qubits can be described by the following system: formula_18 When applying a formula_16 gate over two ancillae in the formula_19 state, the overall state formula_20 turns out to be an entangled pair of qubits. When entangling two ancillae, it does not matter which is the control qubit and which the target, as the outcome is the same. Similarly, as the formula_16 gates are represented in a diagonal form, they all commute with each other, and it does not matter which qubits are entangled first. Photons are the most common source used to prepare entangled physical qubits. Measuring the qubits. The process of measurement over a single-particle state can be described by projecting the state onto an eigenvector of an observable. Consider an observable formula_21 with two possible eigenvectors, say formula_22 and formula_23, and suppose we are dealing with a multi-particle quantum system formula_24. Measuring the formula_25-th qubit with the formula_21 observable means projecting the formula_24 state onto the eigenvectors of formula_21: formula_26. The actual state of the formula_25-th qubit is now formula_27, which can be formula_28 or formula_29, depending on the outcome of the measurement (which is probabilistic in quantum mechanics). The measurement projection can be performed over the eigenstates of the formula_30 observable: formula_31, where formula_32 and formula_33 are Pauli matrices. The eigenvectors of formula_34 are formula_35. Measuring a qubit on the formula_32-formula_33 plane, i.e. with the formula_34 observable, means projecting it onto formula_36 or formula_37. In one-way quantum computing, once a qubit has been measured, there is no way to recycle it in the flow of computation. Therefore, instead of using the formula_38 notation, it is common to find formula_39 to indicate a projective measurement over the formula_25-th qubit. Correcting the output. After all the measurements have been performed, the system has been reduced to a smaller number of qubits, which form the output state of the system. Due to the probabilistic outcome of measurements, the system is not set in a deterministic way: after a measurement on the formula_32-formula_33 plane, the output depends on whether the outcome was formula_40 or formula_41. In order to perform a deterministic computation, some corrections must be introduced. The correction operators, or byproduct operators, are applied to the output qubits after all the measurements have been performed. The byproduct operators which can be implemented are formula_32 and formula_42. 
Depending on the outcome of the measurement, a byproduct operator may or may not be applied to the output state: a formula_32 correction over the formula_43-th qubit, conditioned on the outcome of the measurement performed over the formula_25-th qubit via the formula_34 observable, can be described as formula_44, where formula_45 is set to formula_11 if the outcome of the measurement was formula_46, and to formula_12 if it was formula_47. In the first case no correction occurs; in the latter, a formula_32 operator is implemented on the formula_43-th qubit. Thus, even though the outcome of a measurement is not deterministic in quantum mechanics, the results from measurements can be used to perform corrections and carry out a deterministic computation. "CME" pattern. The operations of entanglement, measurement and correction can be performed in order to implement unitary gates. Such operations can be performed gate by gate for any logic gate in the circuit, or rather in a pattern which allocates all the entanglement operations at the beginning, the measurements in the middle and the corrections at the end of the circuit. Such a pattern of computation is referred to as the "CME" standard pattern. In the "CME" formalism, the operation of entanglement between the formula_25 and formula_43 qubits is referred to as formula_51. The measurement on the formula_25 qubit, in the formula_32-formula_33 plane, with respect to a formula_52 angle, is defined as formula_53. Finally, the formula_32 byproduct over a formula_25 qubit, with respect to the measurement over a formula_43 qubit, is described as formula_54, where formula_55 is set to formula_11 if the outcome is the formula_46 state, and to formula_12 when the outcome is formula_47. The same notation holds for the formula_42 byproducts. When performing a computation following the "CME" pattern, it may happen that two measurements formula_56 and formula_57 on the formula_32-formula_33 plane depend on one another's outcomes. For example, the sign in front of the angle of measurement on the formula_43-th qubit can be flipped with respect to the measurement over the formula_25-th qubit: in such a case, the notation is written as formula_58, and therefore the two measurement operations no longer commute with each other. If formula_45 is set to formula_11, no flip of the formula_59 sign will occur; otherwise (when formula_60) the formula_59 angle will be flipped to formula_61. The notation formula_62 can therefore be rewritten as formula_63. An example: Euler rotations. As an illustrative example, consider the Euler rotation in the formula_64 basis: such an operation, in the gate model of quantum computation, is described as formula_65, where formula_66 are the angles for the rotation, while formula_67 defines a global phase which is irrelevant for the computation. To perform such an operation in the one-way computing framework, it is possible to implement the following "CME" pattern: formula_68, where the input state formula_69 is the qubit formula_12, and all the other qubits are auxiliary ancillae which therefore have to be prepared in the formula_49 state. In the first step, the input state formula_70 must be entangled with the second qubit; in turn, the second qubit must be entangled with the third one, and so on. The entangling operations formula_51 between the qubits can be performed by the formula_16 gates. 
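As a rough numerical sketch of this entangling step (the measurement stage, discussed next, is not included), the source state of the pattern can be built with NumPy. The input amplitudes and the helper function cz are illustrative assumptions, not part of the original formalism; qubit 1 holds the input and qubits 2 to 5 are ancillae prepared in the formula_49 state.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = (ket0 + ket1) / np.sqrt(2)

def cz(state, i, j, n):
    """Apply a controlled-Z gate between qubits i and j (0-indexed)
    of an n-qubit state vector: amplitudes where both qubits read 1
    acquire a minus sign, matching the CZ action described above."""
    state = state.copy()
    for idx in range(len(state)):
        if (idx >> (n - 1 - i)) & 1 and (idx >> (n - 1 - j)) & 1:
            state[idx] *= -1
    return state

n = 5
alpha, beta = 0.6, 0.8              # arbitrary input amplitudes (illustrative)
psi_in = alpha * ket0 + beta * ket1

# Source state: the input qubit followed by four ancillae in |+>
state = psi_in
for _ in range(n - 1):
    state = np.kron(state, ket_plus)

# Entangling step of the pattern: E_{1,2} E_{2,3} E_{3,4} E_{4,5}.
# Since the CZ gates are diagonal and commute, their order is irrelevant.
for i in range(n - 1):
    state = cz(state, i, i + 1, n)
```

The resulting vector is the five-qubit source state on which the measurements with the angles described below are then performed.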
In the second step, the first and the second qubits must be measured by the formula_34 observable, which means they must be projected onto the eigenstates formula_71 of that observable. When formula_52 is zero, the formula_72 states reduce to formula_73 ones, i.e. the eigenvectors for the formula_32 Pauli operator. The first measurement formula_74 is performed on qubit formula_12 with a formula_75 angle, which means it has to be projected onto the formula_76 states. The second measurement formula_77 is performed with respect to the formula_78 angle, i.e. the second qubit has to be projected onto the formula_79 state. However, if the outcome of the previous measurement has been formula_80, the sign of the formula_81 angle has to be flipped, and the second qubit will be projected onto the formula_82 state; if the outcome of the first measurement has been formula_83, no flip needs to be performed. The same operations have to be repeated for the third formula_84 and the fourth formula_85 measurements, according to the respective angles and sign flips. The sign over the formula_86 angle is set to be formula_87. Eventually the fifth qubit (the only one not to be measured) turns out to carry the output state. Finally, the corrections formula_88 over the output state have to be performed via the byproduct operators. For instance, if the measurements over the second and the fourth qubits turned out to be formula_89 and formula_90, no correction will be conducted by the formula_91 operator, as formula_92. The same result holds for a formula_93 formula_94 outcome, as formula_95 and thus the squared Pauli operator formula_96 returns the identity. As seen in this example, in the measurement-based computation model, the physical input qubit (the first one) and the output qubit (the fifth one) may differ from each other. Equivalence between quantum circuit model and MBQC. The one-way quantum computer allows the implementation of a circuit of unitary transformations through the operations of entanglement and measurement. At the same time, any quantum circuit can in turn be converted into a "CME" pattern: a technique to translate quantum circuits into an "MBQC" pattern of measurements has been formulated by V. Danos et al. Such a conversion can be carried out by using a universal set of logic gates composed of the formula_16 and formula_97 operators: any circuit can therefore be decomposed into a set of formula_16 and formula_97 gates. The formula_97 single-qubit operator is defined as follows: formula_98. The formula_97 can be converted into a "CME" pattern as follows, with qubit 1 being the input and qubit 2 being the output: formula_99 which means that, to implement a formula_97 operator, the input qubit formula_48 must be entangled with an ancilla qubit formula_49, then the input must be measured on the formula_32-formula_33 plane, and thereafter the output qubit is corrected by the formula_100 byproduct. Once every formula_97 gate has been decomposed into the "CME" pattern, the operations in the overall computation will consist of formula_51 entanglements, formula_101 measurements and formula_102 corrections. In order to bring the whole flow of computation into a "CME" pattern, some rules are provided. Standardization. In order to move all the formula_51 entanglements to the beginning of the process, some rules of commutation must be pointed out: formula_103 formula_104 formula_105. 
The entanglement operator formula_51 commutes with the formula_42 Pauli operators and with any other operator formula_106 acting on a qubit formula_107, but not with the formula_32 Pauli operators acting on the formula_25-th or formula_43-th qubits. Pauli simplification. The measurement operations formula_53 commute with the corrections in the following manner: formula_108 formula_109, where formula_110. This means that, when shifting the formula_32 corrections to the end of the pattern, some dependencies between the measurements may occur. The formula_111 operator is called signal shifting; its action will be explained in the next paragraph. For particular formula_52 angles, some simplifications, called Pauli simplifications, can be introduced: formula_112 formula_113. Signal shifting. The action of the signal shifting operator formula_111 can be explained through its rules of commutation: formula_114 formula_115. The formula_116 operation works as follows: given a sequence of signals formula_117, consisting of formula_118, the operation formula_116 substitutes formula_45 with formula_119 in the sequence formula_117, which becomes formula_120. If no formula_45 appears in the formula_117 sequence, no substitution will occur. To perform a correct "CME" pattern, every signal shifting operator formula_111 must be moved to the end of the pattern. Stabilizer formalism. When preparing the source state of entangled qubits, a graph representation can be given by the stabilizer group. The stabilizer group formula_121 is an abelian subgroup of the Pauli group formula_122, which can be described by its generators formula_123. A stabilizer state is a formula_14-qubit state formula_124 which is the unique simultaneous eigenstate of the generators formula_125 of the formula_121 stabilizer group: formula_126 Of course, formula_127. It is therefore possible to define a formula_14-qubit graph state formula_128 as a quantum state associated with a graph, i.e. a set formula_129 whose vertices formula_130 correspond to the qubits, while the edges formula_131 represent the entanglements between the qubits themselves. The vertices can be labelled by an index formula_25, while the edges, linking the formula_25-th vertex to the formula_43-th one, by two-index labels, such as formula_132. In the stabilizer formalism, such a graph structure can be encoded by the formula_133 generators of formula_121, defined as formula_134, where formula_135 stands for all the formula_43 qubits neighbouring the formula_25-th one, i.e. the formula_43 vertices linked by a formula_132 edge to the formula_25 vertex. Each formula_133 generator commutes with all the others. A graph composed of formula_14 vertices can be described by formula_14 generators from the stabilizer group: formula_136. While the number of formula_137 is fixed for each formula_133 generator, the number of formula_138 may differ, depending on the connections implemented by the edges in the graph. The Clifford group. The Clifford group formula_139 is composed of elements which leave the elements of the Pauli group formula_122 invariant: formula_140. The Clifford group requires three generators, which can be chosen as the Hadamard gate formula_0 and the phase rotation formula_141 for the single-qubit gates, together with one two-qubit gate, either the formula_142 (controlled NOT gate) or the formula_16 (controlled phase gate): formula_143. Consider a state formula_128 which is stabilized by a set of stabilizers formula_125. 
Acting with an element formula_50 from the Clifford group on such a state, the following equalities hold: formula_144. Therefore, the formula_50 operations map the formula_145 state to formula_146 and its formula_125 stabilizers to formula_147. Such an operation may give rise to different representations for the formula_133 generators of the stabilizer group. The Gottesman–Knill theorem states that, given a set of logic gates from the Clifford group, followed by formula_42 measurements, such a computation can be efficiently simulated on a classical computer in the strong sense, i.e. as a computation which evaluates in polynomial time the probability formula_148 for a given output formula_149 from the circuit. Hardware and applications. Topological cluster state quantum computer. Measurement-based computation on a periodic 3D lattice cluster state can be used to implement topological quantum error correction. Topological cluster state computation is closely related to Kitaev's toric code, as the 3D topological cluster state can be constructed and measured over time by a repeated sequence of gates on a 2D array. Implementations. One-way quantum computation has been demonstrated by running the two-qubit Grover's algorithm on a 2×2 cluster state of photons. A linear optics quantum computer based on one-way computation has been proposed. Cluster states have also been created in optical lattices, but were not used for computation, as the atom qubits were too close together to measure individually. AKLT state as a resource. It has been shown that the (spin formula_150) AKLT state on a 2D honeycomb lattice can be used as a resource for MBQC. More recently it has been shown that a spin-mixture AKLT state can be used as a resource. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "a+k=n" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "k=n-a" }, { "math_id": 6, "text": "| \\psi \\rangle = \\alpha |0\\rangle + \\beta |1 \\rangle" }, { "math_id": 7, "text": " |+\\rangle " }, { "math_id": 8, "text": " |+\\rangle = \\tfrac{| 0 \\rangle + | 1 \\rangle}{\\sqrt{2}}, " }, { "math_id": 9, "text": " | 0 \\rangle " }, { "math_id": 10, "text": " | 1 \\rangle " }, { "math_id": 11, "text": "0" }, { "math_id": 12, "text": "1" }, { "math_id": 13, "text": " | 0 \\rangle = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix};\\quad | 1 \\rangle = \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} " }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": " | + \\rangle^{\\otimes n} " }, { "math_id": 16, "text": "CZ" }, { "math_id": 17, "text": " CZ =\\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & -1 \\end{bmatrix}. " }, { "math_id": 18, "text": "\n\\begin{cases}\nCZ | 0+ \\rangle = | 0+ \\rangle \\\\\nCZ | 0- \\rangle = | 0- \\rangle \\\\\nCZ | 1+ \\rangle = | 1- \\rangle \\\\\nCZ | 1- \\rangle = | 1+ \\rangle \n\\end{cases}\n" }, { "math_id": 19, "text": "|+ \\rangle" }, { "math_id": 20, "text": "CZ| ++ \\rangle = \\frac{| 0+ \\rangle + | 1- \\rangle}{\\sqrt{2}}" }, { "math_id": 21, "text": "O" }, { "math_id": 22, "text": "| o_1 \\rangle" }, { "math_id": 23, "text": "| o_2 \\rangle" }, { "math_id": 24, "text": "| \\Psi \\rangle" }, { "math_id": 25, "text": "i" }, { "math_id": 26, "text": " | \\Psi' \\rangle = |o_i \\rangle \\langle o_i | \\Psi \\rangle" }, { "math_id": 27, "text": "|o_i \\rangle" }, { "math_id": 28, "text": " | o_1 \\rangle" }, { "math_id": 29, "text": " | o_2 \\rangle" }, { "math_id": 30, "text": "M(\\theta) = \\cos(\\theta)X + \\sin(\\theta)Y" }, { "math_id": 31, "text": " M(\\theta) = \\cos(\\theta) \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} + \\sin(\\theta) \\begin{bmatrix} 0 & -i \\\\ i & 0 \\end{bmatrix} = \\begin{bmatrix} 0 & e^{-i \\theta} \\\\ e^{i \\theta} & 0 \\end{bmatrix} " }, { "math_id": 32, "text": "X" }, { "math_id": 33, "text": "Y" }, { "math_id": 34, "text": "M(\\theta)" }, { "math_id": 35, "text": "|\\theta_\\pm \\rangle = |0 \\rangle \\pm e^{i \\theta} |1 \\rangle" }, { "math_id": 36, "text": "|\\theta_+ \\rangle" }, { "math_id": 37, "text": "|\\theta_- \\rangle" }, { "math_id": 38, "text": "|o_i \\rangle \\langle o_i |" }, { "math_id": 39, "text": "\\langle o_i |" }, { "math_id": 40, "text": "| \\theta_+ \\rangle " }, { "math_id": 41, "text": "| \\theta_- \\rangle " }, { "math_id": 42, "text": "Z" }, { "math_id": 43, "text": "j" }, { "math_id": 44, "text": "X_j^{s_i}" }, { "math_id": 45, "text": "s_i" }, { "math_id": 46, "text": "| \\theta_+ \\rangle" }, { "math_id": 47, "text": "| \\theta_- \\rangle" }, { "math_id": 48, "text": "| \\psi \\rangle" }, { "math_id": 49, "text": "| + \\rangle" }, { "math_id": 50, "text": "U" }, { "math_id": 51, "text": "E_{ij}" }, { "math_id": 52, "text": "\\theta" }, { "math_id": 53, "text": "M_i^\\theta" }, { "math_id": 54, "text": "X_i^{s_j}" }, { "math_id": 55, "text": "s_j" }, { "math_id": 56, "text": "M_i^{\\theta_1}" }, { "math_id": 57, "text": "M_j^{\\theta_2}" }, { "math_id": 58, "text": "[M_j^{\\theta_2}]^{s_i} M_i^{\\theta_1}" }, { "math_id": 59, "text": "\\theta_2" }, { "math_id": 60, "text": "s_i=1" }, { "math_id": 61, "text": "-\\theta_2" }, { "math_id": 62, "text": "[M_j^{\\theta_2}]^{s_i}" }, { "math_id": 63, "text": "M_j^{(-)^{s_i}\\theta_2}" }, 
{ "math_id": 64, "text": "XZX" }, { "math_id": 65, "text": " e^{i \\gamma}R_X(\\phi) R_Z(\\theta)R_X(\\lambda) " }, { "math_id": 66, "text": "\\phi, \\theta, \\lambda" }, { "math_id": 67, "text": "\\gamma" }, { "math_id": 68, "text": "Z_5^{s_1+s_3}X_5^{s_2+s_4} [M_4^{-\\phi}]^{s_1+s_3} [M_3^{-\\theta}]^{s_2} [M_2^{-\\lambda}]^{s_1} M_1^{0} E_{4,5} E_{3,4} E_{2,3} E_{1,2}" }, { "math_id": 69, "text": "| \\psi \\rangle = \\alpha | 0 \\rangle + \\beta | 1 \\rangle" }, { "math_id": 70, "text": "|\\psi \\rangle" }, { "math_id": 71, "text": "| \\theta \\rangle " }, { "math_id": 72, "text": "| \\theta_\\pm \\rangle" }, { "math_id": 73, "text": "|\\pm \\rangle" }, { "math_id": 74, "text": "M_1^{0}" }, { "math_id": 75, "text": "\\theta=0" }, { "math_id": 76, "text": "\\langle \\pm |" }, { "math_id": 77, "text": "[M_2^{-\\lambda}]^{s_1}" }, { "math_id": 78, "text": "-\\lambda" }, { "math_id": 79, "text": "\\langle 0 | \\pm e^{i \\lambda} \\langle 1 |" }, { "math_id": 80, "text": "\\langle - |" }, { "math_id": 81, "text": "\\lambda" }, { "math_id": 82, "text": "\\langle 0 | + e^{-i \\lambda} \\langle 1 |" }, { "math_id": 83, "text": "\\langle + |" }, { "math_id": 84, "text": "[M_3^{\\theta}]^{s_2}" }, { "math_id": 85, "text": "[M_4^{\\phi}]^{s_1+s_3}" }, { "math_id": 86, "text": "\\phi" }, { "math_id": 87, "text": "(-)^{s_1+s_3}" }, { "math_id": 88, "text": "Z_5^{s_1+s_3}X_5^{s_2+s_4}" }, { "math_id": 89, "text": "\\langle \\phi_+ |" }, { "math_id": 90, "text": "\\langle \\lambda_+ |" }, { "math_id": 91, "text": "X_5" }, { "math_id": 92, "text": "s_2=s_4=0" }, { "math_id": 93, "text": "\\langle \\phi_- |" }, { "math_id": 94, "text": "\\langle \\lambda_- |" }, { "math_id": 95, "text": "s_2=s_4=1" }, { "math_id": 96, "text": "X^2" }, { "math_id": 97, "text": "J(\\theta)" }, { "math_id": 98, "text": "J(\\theta) = \\frac{1}{\\sqrt 2} \\begin{pmatrix} 1 & e^{i \\theta} \\\\ 1 & -e^{i\\theta} \\end{pmatrix}" }, { "math_id": 99, "text": "J(\\theta) = X_2^{s_1} M_1^{-\\theta} E_{1,2}" }, { "math_id": 100, "text": "X_2" }, { "math_id": 101, "text": "M_i^{-\\theta_i}" }, { "math_id": 102, "text": "X_j" }, { "math_id": 103, "text": "E_{ij} Z_i^s = Z_i^s E_{ij}" }, { "math_id": 104, "text": "E_{ij} X_i^s = X_i^s Z_j^s E_{ij}" }, { "math_id": 105, "text": "E_{ij} A_k = A_k E_{ij}" }, { "math_id": 106, "text": "A_k" }, { "math_id": 107, "text": "k\\neq i,j" }, { "math_id": 108, "text": "M_i^\\theta X_i^s = [M_i^\\theta]^s" }, { "math_id": 109, "text": "M_i^\\theta Z_i^t = S_i^t M_i^\\theta" }, { "math_id": 110, "text": "[M_i^\\theta]^s=M_i^{(-)^s\\theta}" }, { "math_id": 111, "text": "S_i^t" }, { "math_id": 112, "text": "M_i^0 X_i^s = M_i^0" }, { "math_id": 113, "text": "M_i^{\\pi/2} X_i^s = M_i^{\\pi/2} Z_i^s" }, { "math_id": 114, "text": "X_i^{s} S_i^t = S_i^t X_i^{s[(s_i+t)/s_i]}" }, { "math_id": 115, "text": "Z_i^{s} S_i^t = S_i^t Z_i^{s[(s_i+t)/s_i]}" }, { "math_id": 116, "text": "s[(t+s_i)/s_i]" }, { "math_id": 117, "text": "s" }, { "math_id": 118, "text": "s_1 + s_2 + ... + s_i + ..." }, { "math_id": 119, "text": "s_i+t" }, { "math_id": 120, "text": "s_1 + s_2 + ... + s_i + t + ..." }, { "math_id": 121, "text": "\\mathcal{S}_n" }, { "math_id": 122, "text": "\\mathcal{P}_n" }, { "math_id": 123, "text": "\\{\\pm 1, \\pm i\\} \\times \\{I,X,Y,Z\\}^{\\otimes n}" }, { "math_id": 124, "text": "| \\Psi \\rangle " }, { "math_id": 125, "text": "S_i" }, { "math_id": 126, "text": "S_i | \\Psi \\rangle = | \\Psi \\rangle." 
}, { "math_id": 127, "text": "S_i \\in \\mathcal{S}_n \\, \\forall i" }, { "math_id": 128, "text": "| G \\rangle" }, { "math_id": 129, "text": "G=(V,E)" }, { "math_id": 130, "text": "V" }, { "math_id": 131, "text": "E" }, { "math_id": 132, "text": "(i,j)" }, { "math_id": 133, "text": "K_i" }, { "math_id": 134, "text": " K_i = X_i \\prod_{j \\in (i,j)} Z_j " }, { "math_id": 135, "text": "{j \\in (i,j)}" }, { "math_id": 136, "text": "\\langle K_1, K_2, ..., K_n\\rangle" }, { "math_id": 137, "text": "X_i" }, { "math_id": 138, "text": "Z_j" }, { "math_id": 139, "text": "\\mathcal{C}_n" }, { "math_id": 140, "text": "\\mathcal{C}_n = \\{ U \\in SU(2^n) \\; | \\; U S U^\\dagger \\in \\mathcal{P}_n, S \\in \\mathcal{P}_n \\}" }, { "math_id": 141, "text": "S" }, { "math_id": 142, "text": "CNOT" }, { "math_id": 143, "text": " H = \\frac{1}{\\sqrt 2} \\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix}, \\quad S = \\begin{bmatrix} 1 & 0 \\\\ 0 & i \\end{bmatrix}, \\quad CNOT = \\begin{bmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0 \\end{bmatrix} " }, { "math_id": 144, "text": "U|G\\rangle = U S_i |G\\rangle = U S_i U^\\dagger U |G\\rangle = S'_i U |G\\rangle" }, { "math_id": 145, "text": "|G\\rangle" }, { "math_id": 146, "text": "U |G\\rangle" }, { "math_id": 147, "text": "U S_i U^\\dagger" }, { "math_id": 148, "text": "P(x)" }, { "math_id": 149, "text": "x" }, { "math_id": 150, "text": " \\tfrac{3}{2}" } ]
https://en.wikipedia.org/wiki?curid=14240296
14241105
Van der Waals surface
Molecule interaction model The van der Waals surface of a molecule is an abstract representation or model of that molecule, illustrating where, in very rough terms, a surface might reside for the molecule based on the hard cutoffs of van der Waals radii for individual atoms, and it represents a surface through which the molecule might be conceived as interacting with other molecules. Also referred to as a "van der Waals envelope," the van der Waals surface is named for Johannes Diderik van der Waals, a Dutch theoretical physicist and thermodynamicist who developed theory to provide a liquid-gas equation of state that accounted for the non-zero volume of atoms and molecules, and on their exhibiting an attractive force when they interacted (theoretical constructions that also bear his name). van der Waals surfaces are therefore a tool used in the abstract representations of molecules, whether accessed, as they were originally, via hand calculation, or via physical wood/plastic models, or now digitally, via computational chemistry software. Practically speaking, CPK models, developed by and named for Robert Corey, Linus Pauling, and Walter Koltun, were the first widely used physical molecular models based on van der Waals radii, and allowed broad pedagogical and research use of a model showing the van der Waals surfaces of molecules. van der Waals volume and van der Waals surface area. Related to the title concept are the ideas of a "van der Waals volume", Vw, and a "van der Waals surface area," abbreviated variously as Aw, vdWSA, VSA, and WSA. A van der Waals surface area is an abstract conception of the surface area of atoms or molecules from a mathematical estimation, either computing it from first principles or by integrating over a corresponding van der Waals volume. In the simplest case, for a spherical monatomic gas, it is simply the computed surface area of a sphere of radius equal to the van der Waals radius of the gaseous atom: formula_0. The "van der Waals volume", a type of "atomic" or "molecular volume," is a property directly related to the van der Waals radius, and is defined as the volume occupied by an individual atom, or in a combined sense, by all atoms of a molecule. It may be calculated for atoms if the van der Waals radius is known, and for molecules if the radii of its atoms and the inter-atomic distances and angles are known. As above, in the simplest case, for a spherical monatomic gas, Vw is simply the computed volume of a sphere of radius equal to the van der Waals radius of the gaseous atom: formula_1. For a molecule, Vw is the volume enclosed by the "van der Waals surface"; hence, computation of Vw presumes the ability to describe and compute a van der Waals surface. van der Waals volumes of molecules are always smaller than the sum of the van der Waals volumes of their constituent atoms, due to the fact that the interatomic distances resulting from chemical bonding are less than the sum of the atomic van der Waals radii. In this sense, a van der Waals surface of a homonuclear diatomic molecule can be viewed as a pictorial overlap of the two spherical van der Waals surfaces of the individual atoms, likewise for larger molecules like methane, ammonia, etc. (see images). 
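Since both quantities reduce, for a spherical monatomic gas, to elementary sphere formulas (formula_0 and formula_1 above), a minimal Python sketch follows. The argon radius used is one commonly tabulated value (about 1.88 Å); as the next paragraph notes, such values vary between tables, so the numbers are illustrative rather than definitive.

```python
import math

def vdw_surface_area(r_w):
    """Surface area A_w = 4*pi*r_w^2 of a spherical atom of van der Waals radius r_w."""
    return 4.0 * math.pi * r_w ** 2

def vdw_volume(r_w):
    """Volume V_w = (4/3)*pi*r_w^3 enclosed by the van der Waals surface."""
    return (4.0 / 3.0) * math.pi * r_w ** 3

r = 1.88                        # argon, in Å (one commonly tabulated value)
print(vdw_surface_area(r))      # ~44.4 Å^2
print(vdw_volume(r))            # ~27.8 Å^3
```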
van der Waals radii and volumes may be determined from the mechanical properties of gases (the original method, determining the van der Waals constant), from the critical point (e.g., of a fluid), from crystallographic measurements of the spacing between pairs of unbonded atoms in crystals, or from measurements of electrical or optical properties (i.e., polarizability or molar refractivity). In all cases, measurements are made on macroscopic samples and results are expressed as molar quantities. van der Waals volumes of a single atom or molecule are arrived at by dividing the macroscopically determined volumes by the Avogadro constant. The various methods give radius values which are similar, but not identical, generally agreeing within 1–2 Å (100–200 pm). Useful tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will be seen to present different values for the van der Waals radius of the same atom. As well, it has been argued that the van der Waals radius is not a fixed property of an atom in all circumstances; rather, it will vary with the chemical environment of the atom. References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A_{\\rm w} = 4\\pi r_{\\rm w}^2" }, { "math_id": 1, "text": "V_{\\rm w} = {4\\over 3}\\pi r_{\\rm w}^3" } ]
https://en.wikipedia.org/wiki?curid=14241105
14241236
Cluster state
In quantum information and quantum computing, a cluster state is a type of highly entangled state of multiple qubits. Cluster states are generated in lattices of qubits with Ising type interactions. A cluster "C" is a connected subset of a "d"-dimensional lattice, and a cluster state is a pure state of the qubits located on "C". They are different from other types of entangled states such as GHZ states or W states in that it is more difficult to eliminate quantum entanglement (via projective measurements) in the case of cluster states. Another way of thinking of cluster states is as a particular instance of graph states, where the underlying graph is a connected subset of a "d"-dimensional lattice. Cluster states are especially useful in the context of the one-way quantum computer. For a comprehensible introduction to the topic see. Formally, cluster states formula_0 are states which obey the set of eigenvalue equations: formula_1 where formula_2 are the correlation operators formula_3 with formula_4 and formula_5 being Pauli matrices, formula_6 denoting the neighbourhood of formula_7 and formula_8 being a set of binary parameters specifying the particular instance of a cluster state. Examples with qubits. Here are some examples of one-dimensional cluster states ("d"=1), for formula_9, where formula_10 is the number of qubits. We take formula_11 for all formula_7, which means the cluster state is the unique simultaneous eigenstate that has corresponding eigenvalue 1 under all correlation operators. In each example the set of correlation operators formula_12 and the corresponding cluster state are listed. formula_13: formula_14 formula_15 This is an EPR-pair (up to local transformations). formula_16: formula_17 formula_18 This is the GHZ-state (up to local transformations). formula_19: formula_20 formula_21. This is not a GHZ-state and cannot be converted to a GHZ-state with local operations. In all examples formula_22 is the identity operator, and tensor products are omitted. The states above can be obtained from the all-zero state formula_23 by first applying a Hadamard gate to every qubit, and then a controlled-Z gate between all qubits that are adjacent to each other. Experimental creation of cluster states. Cluster states can be realized experimentally. One way to create a cluster state is by encoding logical qubits into the polarization of photons; one common encoding is the following: formula_24 This is not the only possible encoding; however, it is one of the simplest: with this encoding, entangled pairs can be created experimentally through spontaneous parametric down-conversion. The entangled pairs that can be generated this way have the form formula_25 equivalent to the logical state formula_26. For the two choices of the phase formula_27, the two Bell states formula_28 are obtained: these are themselves two examples of two-qubit cluster states. Through the use of linear optical devices such as beam-splitters or wave-plates, these Bell states can interact and form more complex cluster states. Cluster states have also been created in optical lattices of cold atoms. Entanglement criteria and Bell inequalities for cluster states. After a cluster state has been created in an experiment, it is important to verify that an entangled quantum state has indeed been created. The fidelity with respect to the formula_29-qubit cluster state formula_30 is given by formula_31 It has been shown that if formula_32, then the state formula_33 has genuine multiparticle entanglement. 
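As a rough numerical illustration of this fidelity criterion (the entanglement witness built from it follows below), the sketch mixes the two-qubit cluster state from the examples above with white noise; the noise model and its weight are assumptions made for the example, and for two qubits genuine multiparticle entanglement reduces to ordinary entanglement.

```python
import numpy as np

# The ideal two-qubit cluster state from the examples above:
# |phi> = (|0+> + |1->)/sqrt(2)
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (zero + one) / np.sqrt(2), (zero - one) / np.sqrt(2)
phi = (np.kron(zero, plus) + np.kron(one, minus)) / np.sqrt(2)

# An assumed noisy preparation: the ideal state mixed with white noise
p = 0.2                                  # illustrative noise weight
rho = (1 - p) * np.outer(phi, phi) + p * np.eye(4) / 4

# Fidelity F = Tr(rho |phi><phi|); F > 1/2 certifies entanglement
F = float(np.real(phi @ rho @ phi))
print(F)                                 # 0.85, above the 1/2 threshold
```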
Thus, one can obtain an entanglement witness detecting entanglement close to cluster states as formula_34 where formula_35 signals genuine multiparticle entanglement. Such a witness cannot be measured directly. It has to be decomposed into a sum of correlation terms, which can then be measured. However, for large systems this approach can be difficult. There are also entanglement witnesses that work in very large systems, and they also detect genuine multipartite entanglement close to cluster states. They need only the minimal two local measurement settings. Similar conditions can also be used to put a lower bound on the fidelity with respect to an ideal cluster state. These criteria were first used in an experiment realizing four-qubit cluster states with photons. These approaches have also been used to propose methods for detecting entanglement in a smaller part of a large cluster state or graph state realized in optical lattices. Bell inequalities have also been developed for cluster states. All these entanglement conditions and Bell inequalities are based on the stabilizer formalism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|\\phi_{\\{\\kappa\\}}\\rangle_{C}" }, { "math_id": 1, "text": "K^{(a)} {\\left|\\phi_{\\{\\kappa\\}}\\right\\rangle_{C}} =(-1)^{\\kappa_{a}} {\\left|\\phi_{\\{\\kappa\\}}\\right\\rangle_{C}} " }, { "math_id": 2, "text": "K^{(a)}" }, { "math_id": 3, "text": "K^{(a)} = \\sigma_x^{(a)} \\bigotimes_{b\\in \\mathrm{N}(a)} \\sigma_z^{(b)} " }, { "math_id": 4, "text": "\\sigma_x" }, { "math_id": 5, "text": "\\sigma_z" }, { "math_id": 6, "text": "N(a)" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "\\{\\kappa_a\\in\\{0,1\\}|a\\in C\\}" }, { "math_id": 9, "text": "n=2,3,4" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\kappa_a=0" }, { "math_id": 12, "text": "\\{K^{(a)}\\}_a" }, { "math_id": 13, "text": "n=2" }, { "math_id": 14, "text": "\\{\\sigma_x\\sigma_z,\\ \\sigma_z\\sigma_x\\} " }, { "math_id": 15, "text": "|\\phi \\rangle = \\frac{1}{\\sqrt{2}}(|0+\\rangle + |1-\\rangle) " }, { "math_id": 16, "text": " n=3" }, { "math_id": 17, "text": "\\{ \\sigma_x\\sigma_z I,\\ \\sigma_z\\sigma_x \\sigma_z,\\ I\\sigma_z\\sigma_x\\} " }, { "math_id": 18, "text": " |\\phi\\rangle=\\frac{1}{\\sqrt{2}}(|+0+\\rangle + |-1-\\rangle )" }, { "math_id": 19, "text": " n=4" }, { "math_id": 20, "text": "\\{ \\sigma_x\\sigma_z I I,\\ \\sigma_z\\sigma_x \\sigma_z I,\\ I\\sigma_z\\sigma_x\\sigma_z,\\ II \\sigma_z\\sigma_x \\} " }, { "math_id": 21, "text": " |\\phi\\rangle=\\frac{1}{2}(|+0+0\\rangle + |+0-1\\rangle + |-1-0\\rangle + |-1+1\\rangle)" }, { "math_id": 22, "text": "I" }, { "math_id": 23, "text": "|0\\ldots 0 \\rangle " }, { "math_id": 24, "text": "\\begin{cases}\n|0\\rangle_{\\rm L} \\longleftrightarrow |\\rm H\\rangle\\\\\n|1\\rangle_{\\rm L} \\longleftrightarrow |\\rm V\\rangle\n\\end{cases}" }, { "math_id": 25, "text": "|\\psi\\rangle = \\frac{1}{\\sqrt{2}}\\big(|\\rm H\\rangle|\\rm \n H\\rangle+e^{i\\phi}|\\rm V\\rangle|\\rm V\\rangle\\big)" }, { "math_id": 26, "text": "|\\psi\\rangle = \\frac{1}{\\sqrt{2}}\\big(|0\\rangle|0\\rangle + e^{i\\phi}|1\\rangle|1\\rangle\\big)" }, { "math_id": 27, "text": "\\phi = 0, \\pi" }, { "math_id": 28, "text": "|\\Phi^+\\rangle, |\\Phi^-\\rangle" }, { "math_id": 29, "text": "N" }, { "math_id": 30, "text": "|C_N\\rangle" }, { "math_id": 31, "text": " F_{CN}={\\rm Tr}(\\rho |C_N\\rangle\\langle C_N|), " }, { "math_id": 32, "text": "F_{CN}>1/2" }, { "math_id": 33, "text": "\\rho" }, { "math_id": 34, "text": " W_{CN}=\\frac1 2 {\\rm Identity}- |C_N\\rangle\\langle C_N|. " }, { "math_id": 35, "text": " \\langle W_{CN} \\rangle <0 " } ]
https://en.wikipedia.org/wiki?curid=14241236
14241411
Spache readability formula
The Spache readability formula is a readability test for writing in English, designed by George Spache. It works best on texts that are for children up to fourth grade. For older children, the Dale–Chall readability formula is more appropriate. It was introduced in 1953 in Spache's "A new readability formula for primary-grade reading materials," ("The Elementary School Journal", 53, 410–413), and has subsequently been revised. Calculation. The method compares words in a text to a set list of everyday words. The number of words per sentence and the percentage of unfamiliar words determine the reading age. The original formula was: formula_0 The revised formula is: formula_1
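The calculation is straightforward once the two text statistics have been extracted. A minimal sketch in Python; the function name and the decision to pass precomputed statistics (rather than raw text and the everyday-word list itself) are illustrative assumptions:

```python
def spache_grade(avg_sentence_length, pct_unique_unfamiliar, revised=True):
    """Spache grade level from average sentence length (words per
    sentence) and the percentage (0-100) of unique unfamiliar words."""
    if revised:
        return 0.121 * avg_sentence_length + 0.082 * pct_unique_unfamiliar + 0.659
    return 0.141 * avg_sentence_length + 0.086 * pct_unique_unfamiliar + 0.839

# A text averaging 8 words per sentence with 5% unique unfamiliar words:
print(round(spache_grade(8.0, 5.0), 2))  # 2.04 under the revised formula
```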
[ { "math_id": 0, "text": "\n\\mbox{Grade Level} = \\left ( 0.141 \\times \\mbox{Average sentence length} \\right ) + \\left ( 0.086 \\times \\mbox{Percentage of unique unfamiliar words} \\right) + 0.839\n" }, { "math_id": 1, "text": "\n\\mbox{Grade Level} = \\left ( 0.121 \\times \\mbox{Average sentence length} \\right ) + \\left ( 0.082 \\times \\mbox{Percentage of unique unfamiliar words} \\right) + 0.659\n" } ]
https://en.wikipedia.org/wiki?curid=14241411
14242868
Langmuir circulation
Series of shallow, slow, counter-rotating vortices at the ocean's surface aligned with the wind In physical oceanography, Langmuir circulation consists of a series of shallow, slow, counter-rotating vortices at the ocean's surface aligned with the wind. These circulations are developed when wind blows steadily over the sea surface. Irving Langmuir discovered this phenomenon after observing windrows of seaweed in the Sargasso Sea in 1927. Langmuir circulations circulate within the mixed layer; however, it is not yet clear how strongly they can cause mixing at the base of the mixed layer. Theory. The driving force of these circulations is an interaction of the mean flow with the wave-averaged flows of the surface waves. The Stokes drift velocity of the waves stretches and tilts the vorticity of the flow near the surface. The production of vorticity in the upper ocean is balanced by downward (often turbulent) diffusion formula_0. For a flow driven by a wind formula_1 characterized by friction velocity formula_2, the ratio of vorticity diffusion and production defines the Langmuir number formula_3 where the first definition is for a monochromatic wave field of amplitude formula_4, frequency formula_5, and wavenumber formula_6, and the second uses a generic inverse length scale formula_7 and Stokes velocity scale formula_8. This is exemplified by the Craik–Leibovich equations, which are an approximation of the Lagrangian mean. In the Boussinesq approximation the governing equations can be written formula_9 formula_10 formula_11 where formula_12 is the velocity, formula_13 is the planetary rotation vector, formula_14 is the Stokes drift velocity, formula_15 is the pressure, formula_16 is the gravitational acceleration, formula_17 is the density, formula_18 is the reference density, formula_19 is the viscosity, and formula_20 is the diffusivity. In open-ocean conditions, where there may not be a dominant length scale controlling the scale of the Langmuir cells, the concept of Langmuir turbulence is advanced. Observations. The circulation has been observed to be between 0° and 20° to the right of the wind in the northern hemisphere, with the helical flow forming bands of divergence and convergence at the surface. At the convergence zones, there are commonly concentrations of floating seaweed, foam and debris along these bands. Along these divergent zones, the ocean surface is typically clear of debris since diverging currents force material out of this zone and into adjacent converging zones. At the surface the circulation will set a current from the divergence zone to the convergence zone, and the spacing between these zones is of the order of . Below convergence zones, narrow jets of downward flow form and the magnitude of the current will be comparable to the horizontal flow. The downward propagation will typically be in the order of meters or tenths of meters and will not penetrate the pycnocline. The upwelling is less intense and takes place over a wider band under the divergence zone. Across a range of wind speeds, the ratio of down-welling to wind velocities has been observed to range from −0.0025 to −0.0085. Biological effects. Langmuir circulations (LCs), which are counter-rotating cylindrical roll vortices in the upper ocean, have a significant role in vertical mixing. Though they are transient and their strength as well as direction depend on wind and wave properties, they facilitate the mixing of nutrients and affect the distribution of marine organisms like plankton in the upper mixed layer of the ocean. The wind-generated roll vortices create regions where organisms of different buoyancy, orientation and swimming behavior can aggregate, resulting in patchiness. Indeed, LC can produce significant aggregation of algae during events like red tide. 
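The Langmuir number defined in the theory section above can be evaluated directly for a monochromatic wave field. A minimal sketch in Python; the numerical values are placeholders chosen only to illustrate the computation, not observed ocean values:

```python
import math

def langmuir_number(nu_t, u_star, a, sigma, k):
    """La for a monochromatic wave field (first definition above).

    nu_t   : eddy viscosity [m^2/s]
    u_star : friction velocity [m/s]
    a, sigma, k : wave amplitude [m], frequency [rad/s], wavenumber [1/m]
    """
    return math.sqrt(nu_t**3 * k**6 / (sigma * a**2 * u_star**2 * k**4))

print(langmuir_number(nu_t=1e-2, u_star=1e-2, a=1.0, sigma=1.0, k=0.04))
```

Small values of the Langmuir number correspond to vorticity production dominating its diffusion, the regime in which the circulation cells are strong.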
Theoretically, LC size increases with the wind speed unless limited by density discontinuities at the pycnocline. However, the visibility of the surface effects of LC can be limited during strong winds by breaking waves that disperse the materials present at the surface. The surface effects of LC are therefore most likely to be visible at winds stronger than a critical wind speed of 3 m/s, but not too strong. Moreover, previous studies have shown that organisms and materials can aggregate in different regions within LC, such as the downwelling current in the convergent zone, the upwelling current in the divergent zone, the retention zone in the LC vortex, and the region between the upwelling and downwelling zones. Similarly, LC are found to have a higher windward surface current in convergent zones due to jet-like flow. This faster-moving convergent region at the water surface can enhance the transport of organisms and materials in the direction of the wind. Effect on plants. In 1927, Langmuir saw the organized rows of "Sargassum natans" while crossing the Sargasso Sea in the Atlantic Ocean. Unlike active swimmers such as animals and zooplankton, plants and phytoplankton are usually passive bodies in water, and their aggregation is determined by the flow behavior. In windrows, concentrated planktonic organisms color the water and indicate the presence of LC. Greater variability has been observed in plankton tows collected along the wind direction than in samples collected perpendicular to the wind. One reason for such variation could be LC, which produce convergence (high-sample) and in-between (low-sample) zones along an alongwind tow. Similarly, such a converging effect of LC has also been observed as a high-chlorophyll zone at about 100 m in Lake Tahoe, which could be due to oblique towing through LC. In addition, "Sargassum" gets carried from the surface to the benthos in the downwelling zone of LC and can lose buoyancy after sinking at depth for long enough. Some plants that are usually observed floating in water can become submerged during high-wind conditions due to the downwelling current of LC. LC could also lead to patchiness of positively buoyant dinoflagellates (including toxic red tide organisms) during blooms. Moreover, negatively buoyant phytoplankters, which would sink slowly in still water, have been observed to be retained in the euphotic zone, which may be due to suspension created by vertical convection cells. Furthermore, a broader study of Langmuir supercells, in which the circulation can reach the seafloor, observed the aggregation of the macroalga "Colpomenia sp." on the sea floor of shallow waters (~5 m) in the Great Bahama Bank due to local wind speeds of around 8 to 13 m/s. Such LC could be responsible for the transport of carbon biomass from shallow water to the deep sea. This effect was evident as the concentration of the algae was found to decrease dramatically after the occurrence of LC, as observed from ocean color satellite imagery (NASA) during the period of the study. Such aggregation of negatively buoyant macroalgae on the sea floor is analogous to windrows of positively buoyant particles on the water surface due to LC. Effect on animals. While plants react passively to LC, animals can react to the LC itself, to the presence of plant/food aggregations, and to light. One such observation was the adaptation of "Physalia" to windrows containing entangling "Sargassum". "Physalia" tend to drift across the windrows, which also increases food or zooplankter availability in divergent zones. 
Moreover, studies in Lake Mendota have shown a good correlation between "Daphnia pulex" concentration and the appearance of foam lines. Similarly, significant differences were observed in catches of "Daphnia hyalina" when sampling in and out of foamlines in a South Wales lake, with greater numbers appearing in the divergent zone. Such distributions of particles and animals can be described using the mathematical model developed by Stommel, which suggested an area of retention in the upwelling zone for sinking particles and in the downwelling zone for positively buoyant particles. Indeed, zooplankton could become trapped in upwelling zones to the point where the animals are stimulated to swim downwards. A more detailed model was later developed by Stavn describing zooplankton aggregation, in which animal orientation, dorsal light reaction, and current velocity determine the region of concentration: the downwelling zone for slow currents, the upwelling zone for high currents, and the region between the two for intermediate currents. Such models have since been improved, for example by Titman &amp; Kilham's modification of Stommel's model to account for the difference between maximum downwelling and upwelling velocities, and by Evans &amp; Taylor, who discussed the instability of Stommel's retention regions due to swimming speed varying with depth, which produces spiral trajectories affecting the accumulation region. Nevertheless, high concentrations of planktonic organisms within LC can attract birds and fish. Schools of white bass "Roccus chrysops" were observed feeding upon "Daphnia" along the foam track. In contrast, lesser flamingos "Phoeniconaias minor" were observed feeding on bubble lines containing concentrated blue-green algae. Similarly, medusae were found to aggregate in a linear pattern (average spacing of 129 m) parallel with the wind in the Bering Sea, which could be due to large LCs. Such aggregation can affect the feeding and predation of medusae. Effect on surface tension. High concentrations of surfactants (surface-active substances) produced by phytoplankton can result in higher Marangoni stress in the converging regions of LC. Numerical simulations suggest that such surfactant-induced Marangoni stress can increase the size of the vortical structures, the vertical velocity, and the remixing of water and biological/chemical components in the local region compared to the surfactant-free case. Finally, more theoretical and experimental investigations are needed to confirm the significance of LC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu_T" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "u_*" }, { "math_id": 3, "text": "\n\\mathrm{La} = \\sqrt{\\frac{\\nu^3_Tk^6}{\\sigma a^2u^2_*k^4}}\n~ \\text{ or } ~\n\\sqrt{\\frac{\\nu_T^3\\beta^6}{u^2_*S_0\\beta^3}}\n" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "\\sigma" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\beta" }, { "math_id": 8, "text": "S_0" }, { "math_id": 9, "text": "\n\\frac{\\partial u_i}{\\partial t}+u_j \\, \\nabla_ju_i = -2\\varepsilon_{ijk}\\Omega_j(u^s_k+u_k) - \\nabla_i\\left(\\frac{P}{\\rho_0}+\\frac{1}{2}u^s_ju^s_j+u^s_ju_j\\right) +\\varepsilon_{ijk} u^s_j \\varepsilon_{k\\ell m} \\, \\nabla_\\ell u_m+g_i\\frac{\\rho}{\\rho_0}+\\nabla_j\\nu \\, \\nabla_ju_i \n" }, { "math_id": 10, "text": "\\nabla_i u_i = 0" }, { "math_id": 11, "text": "\\frac{\\partial \\rho}{\\partial t} + u_j \\, \\nabla_j\\rho = \\nabla_i\\kappa \\, \\nabla_i\\rho" }, { "math_id": 12, "text": "u_i" }, { "math_id": 13, "text": "\\Omega" }, { "math_id": 14, "text": "u^s_i" }, { "math_id": 15, "text": "P" }, { "math_id": 16, "text": "g_i" }, { "math_id": 17, "text": "\\rho" }, { "math_id": 18, "text": "\\rho_0" }, { "math_id": 19, "text": "\\nu" }, { "math_id": 20, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=14242868
1424309
Differential equation
Type of functional equation (mathematics) In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. History. Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work "Methodus fluxionum et Serierum Infinitarum", Newton listed three kinds of differential equations: formula_0 In all these cases, y is an unknown function of x (or of "x"1 and "x"2), and f is a given function. He solves these examples and others using infinite series and discusses the non-uniqueness of solutions. Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form formula_1 for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in "Théorie analytique de la chaleur" (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum. Example. In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. 
Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly. An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity. Types. Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts. Ordinary differential equations. An ordinary differential equation ("ODE") is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, "depends" on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to "more than" one independent variable. Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals. Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function). As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer. Partial differential equations. A partial differential equation ("PDE") is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. 
Stochastic partial differential equations generalize partial differential equations for modeling randomness. Non-linear differential equations. A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution. Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations. Equation order and degree. The order of the differential equation is the highest "order of derivative" of the unknown function that appears in the differential equation. For example, an equation containing only first-order derivatives is a "first-order differential equation", an equation containing the second-order derivative is a "second-order differential equation", and so on. When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function, or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation formula_2 is of degree one for the first meaning but not for the second one. Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation. Examples. In the first group of examples "u" is an unknown function of "x", and "c" and "ω" are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between "linear" and "nonlinear" differential equations, and between "homogeneous" differential equations and "heterogeneous" ones. A heterogeneous first-order linear ODE: formula_3 A homogeneous second-order linear ODE: formula_4 A homogeneous second-order linear constant-coefficient ODE describing the harmonic oscillator: formula_5 A heterogeneous first-order nonlinear ODE: formula_6 A second-order nonlinear ODE describing the motion of a pendulum of length "L": formula_7 In the next group of examples, the unknown function "u" depends on two variables "x" and "t" or "x" and "y". A homogeneous first-order linear PDE: formula_8 The Laplace equation, a homogeneous second-order linear PDE of elliptic type: formula_9 The Korteweg–de Vries equation, a third-order nonlinear PDE: formula_10 Existence of solutions. Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest. For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. 
Given any point formula_11 in the xy-plane, define some rectangular region formula_12, such that formula_13 and formula_11 is in the interior of formula_12. If we are given a differential equation formula_14 and the condition that formula_15 when formula_16, then there is locally a solution to this problem if formula_17 and formula_18 are both continuous on formula_12. This solution exists on some interval with its center at formula_19. The solution may not be unique. (See Ordinary differential equation for other results.) However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order: formula_20 such that formula_21 For any nonzero formula_22, if formula_23 and formula_24 are continuous on some interval containing formula_25, formula_26 exists and is unique. Connection to difference equations. The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation. Applications. The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods. Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation. 
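Returning to the falling-ball example from earlier, the velocity obeys dv/dt = g − kv (gravity minus air resistance proportional to velocity), which can be solved numerically and compared with its exact solution. A minimal sketch in Python; the drag coefficient k is an assumed illustrative value:

```python
import math

g = 9.81   # gravitational acceleration [m/s^2]
k = 0.5    # assumed drag coefficient per unit mass [1/s]

# dv/dt = g - k*v with v(0) = 0, integrated by the explicit Euler method
dt, v, t = 0.01, 0.0, 0.0
while t < 5.0:
    v += dt * (g - k * v)
    t += dt

v_exact = (g / k) * (1.0 - math.exp(-k * t))
print(v, v_exact)  # both approach the terminal velocity g/k = 19.62 m/s
```

The agreement between the two values illustrates how numerical methods approximate solutions when a closed-form expression is unavailable or unnecessary.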
The number of differential equations that have received a name in various scientific areas is a witness to the importance of the topic. See List of named differential equations. Software. Some CAS software can solve differential equations symbolically; for example, Maple provides the dsolve command, Mathematica provides DSolve, and SageMath provides desolve. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
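As an illustration of symbolic solving, the harmonic oscillator equation formula_5 from the examples above can be solved by the open-source SymPy library (shown here in place of the commercial systems named above; a minimal sketch):

```python
import sympy as sp

x, omega = sp.symbols('x'), sp.symbols('omega', positive=True)
u = sp.Function('u')

# u'' + omega^2 * u = 0, the harmonic oscillator equation
ode = sp.Eq(u(x).diff(x, 2) + omega**2 * u(x), 0)
print(sp.dsolve(ode, u(x)))
# u(x) = C1*sin(omega*x) + C2*cos(omega*x)
```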
[ { "math_id": 0, "text": "\\begin{align}\n \\frac {dy}{dx} &= f(x) \\\\[4pt]\n \\frac {dy}{dx} &= f(x, y) \\\\[4pt]\n x_1 \\frac {\\partial y}{\\partial x_1} &+ x_2 \\frac {\\partial y}{\\partial x_2} = y\n\\end{align}" }, { "math_id": 1, "text": "y'+ P(x)y = Q(x)y^n\\," }, { "math_id": 2, "text": "y'+y^2=0" }, { "math_id": 3, "text": " \\frac{du}{dx} = cu+x^2. " }, { "math_id": 4, "text": " \\frac{d^2u}{dx^2} - x\\frac{du}{dx} + u = 0. " }, { "math_id": 5, "text": " \\frac{d^2u}{dx^2} + \\omega^2u = 0. " }, { "math_id": 6, "text": " \\frac{du}{dx} = u^2 + 4. " }, { "math_id": 7, "text": " L\\frac{d^2u}{dx^2} + g\\sin u = 0. " }, { "math_id": 8, "text": " \\frac{\\partial u}{\\partial t} + t\\frac{\\partial u}{\\partial x} = 0. " }, { "math_id": 9, "text": " \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} = 0. " }, { "math_id": 10, "text": " \\frac{\\partial u}{\\partial t} = 6u\\frac{\\partial u}{\\partial x} - \\frac{\\partial^3 u}{\\partial x^3}. " }, { "math_id": 11, "text": "(a, b)" }, { "math_id": 12, "text": "Z" }, { "math_id": 13, "text": "Z = [l, m]\\times[n, p]" }, { "math_id": 14, "text": "\\frac{dy}{dx} = g(x, y)" }, { "math_id": 15, "text": "y = b" }, { "math_id": 16, "text": "x = a" }, { "math_id": 17, "text": "g(x, y)" }, { "math_id": 18, "text": "\\frac{\\partial g}{\\partial x}" }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "f_{n}(x)\\frac{d^n y}{dx^n} + \\cdots + f_{1}(x)\\frac{d y}{dx} + f_{0}(x)y = g(x)" }, { "math_id": 21, "text": "\\begin{align}\n y(x_{0}) &= y_{0}, &\n y'(x_{0}) &= y'_{0}, &\n y''(x_{0}) &= y''_{0}, &\n \\ldots\n\\end{align}" }, { "math_id": 22, "text": "f_{n}(x)" }, { "math_id": 23, "text": "\\{f_{0},f_{1},\\ldots\\}" }, { "math_id": 24, "text": "g" }, { "math_id": 25, "text": "x_{0}" }, { "math_id": 26, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=1424309
142432
Homology (mathematics)
Applying Algebraic structures to topological spaces In mathematics, the term homology, originally developed in algebraic topology, has three primary, closely related usages. The most direct usage of the term is to take the "homology of a chain complex", resulting in a sequence of abelian groups called "homology groups." This operation, in turn, allows one to associate various named "homologies" or "homology theories" to various other types of mathematical objects. Lastly, since there are many homology theories for topological spaces that produce the same answer, one also often speaks of the "homology of a topological space". (This latter notion of homology admits more intuitive descriptions for 1- or 2-dimensional topological spaces, and is sometimes referenced in popular mathematics.) There is also a related notion of the cohomology of a cochain complex, giving rise to various cohomology theories, in addition to the notion of the cohomology of a topological space. Homology of Chain Complexes. To take the homology of a chain complex, one starts with a chain complex, which is a sequence formula_0 of abelian groups formula_1 (whose elements are called chains) and group homomorphisms formula_2 (called boundary maps) such that the composition of any two consecutive maps is zero: formula_3 The formula_4th homology group formula_5 of this chain complex is then the quotient group formula_6 of cycles modulo boundaries, where the formula_4th group of cycles formula_7 is given by the kernel subgroup formula_8, and the formula_4th group of boundaries formula_9 is given by the image subgroup formula_10. One can optionally endow chain complexes with additional structure, for example by additionally taking the groups formula_11 to be modules over a coefficient ring formula_12, and taking the boundary maps formula_2 to be formula_12-module homomorphisms, resulting in homology groups formula_5 that are also quotient modules. Tools from homological algebra can be used to relate homology groups of different chain complexes. Homology Theories. To associate a "homology theory" to other types of mathematical objects, one first gives a prescription for associating chain complexes to that object, and then takes the homology of such a chain complex. For the homology theory to be valid, all such chain complexes associated to the same mathematical object must have the same homology. The resulting homology theory is often named according to the type of chain complex prescribed. For example, singular homology, Morse homology, Khovanov homology, and Hochschild homology are respectively obtained from singular chain complexes, Morse complexes, Khovanov complexes, and Hochschild complexes. In other cases, such as for group homology, there are multiple common methods to compute the same homology groups. In the language of category theory, a homology theory is a type of functor from the category of the mathematical object being studied to the category of abelian groups and group homomorphisms, or more generally to the category corresponding to the associated chain complexes. One can also formulate homology theories as derived functors on appropriate abelian categories, measuring the failure of an appropriate functor to be exact. One can describe this latter construction explicitly in terms of resolutions, or more abstractly from the perspective of derived categories or model categories. 
Regardless of how they are formulated, homology theories help provide information about the structure of the mathematical objects to which they are associated, and can sometimes help distinguish different objects. Homology of a Topological Space. Perhaps the most familiar usage of the term homology is for the "homology of a topological space". For sufficiently nice topological spaces and compatible choices of coefficient rings, any homology theory satisfying the Eilenberg-Steenrod axioms yields the same homology groups as the singular homology (see below) of that topological space, with the consequence that one often simply refers to the "homology" of that space, instead of specifying which homology theory was used to compute the homology groups in question. For 1-dimensional topological spaces, probably the simplest homology theory to use is graph homology, which could be regarded as a 1-dimensional special case of simplicial homology, the latter of which involves a decomposition of the topological space into simplices. (Simplices are a generalization of triangles to arbitrary dimension; for example, an edge in a graph is homeomorphic to a one-dimensional simplex, and a triangle-based pyramid is a 3-simplex.) Simplicial homology can in turn be generalized to singular homology, which allows more general maps of simplices into the topological space. Replacing simplices with disks of various dimensions results in a related construction called cellular homology. There are also other ways of computing these homology groups, for example via Morse homology, or by taking the output of the Universal Coefficient Theorem when applied to a cohomology theory such as Čech cohomology or (in the case of real coefficients) De Rham cohomology. Inspirations for homology (informal discussion). One of the ideas that led to the development of homology was the observation that certain low-dimensional shapes can be topologically distinguished by examining their "holes." For instance, a figure-eight shape has more holes than a circle formula_13, and a 2-torus formula_14 (a 2-dimensional surface shaped like an inner tube) has different holes from a 2-sphere formula_15 (a 2-dimensional surface shaped like a basketball). Studying topological features such as these led to the notion of the "cycles" that represent homology classes (the elements of homology groups). For example, the two embedded circles in a figure-eight shape provide examples of one-dimensional cycles, or 1-cycles, and the 2-torus formula_14 and 2-sphere formula_15 represent 2-cycles. Cycles form a group under the operation of "formal addition," which refers to adding cycles symbolically rather than combining them geometrically. Any formal sum of cycles is again called a cycle. Cycles and Boundaries (informal discussion). Explicit constructions of homology groups are somewhat technical. As mentioned above, an explicit realization of the homology groups formula_16 of a topological space formula_17 is defined in terms of the "cycles" and "boundaries" of a "chain complex" formula_0 associated to formula_17, where the type of chain complex depends on the choice of homology theory in use. These cycles and boundaries are elements of abelian groups, and are defined in terms of the boundary homomorphisms formula_18 of the chain complex, where each formula_19 is an abelian group, and the formula_2 are group homomorphisms that satisfy formula_20 for all formula_21. 
Since such constructions are somewhat technical, informal discussions of homology sometimes focus instead on topological notions that parallel some of the group-theoretic aspects of cycles and boundaries. For example, in the context of chain complexes, a boundary is any element of the image formula_10 of the boundary homomorphism formula_18, for some formula_21. In topology, the boundary of a space is technically obtained by taking the space's closure minus its interior, but it is also a notion familiar from examples, e.g., the boundary of the unit disk is the unit circle, or more topologically, the boundary of formula_22 is formula_13. Topologically, the boundary of the closed interval formula_23 is given by the disjoint union formula_24, and with respect to suitable orientation conventions, the oriented boundary of formula_23 is given by the union of a positively-oriented formula_25 with a negatively oriented formula_26 The simplicial chain complex analog of this statement is that formula_27. (Since formula_28 is a homomorphism, this implies formula_29 for any integer formula_30.) In the context of chain complexes, a cycle is any element of the kernel formula_8, for some formula_21. In other words, formula_31 is a cycle if and only if formula_32. The closest topological analog of this idea would be a shape that has "no boundary," in the sense that its boundary is the empty set. For example, since formula_33 and formula_34 have no boundary, one can associate cycles to each of these spaces. However, the chain complex notion of cycles (elements whose boundary is a "zero chain") is more general than the topological notion of a shape with no boundary. It is this topological notion of no boundary that people generally have in mind when they claim that cycles can intuitively be thought of as detecting holes. The idea is that for no-boundary shapes like formula_13, formula_15, and formula_14, it is possible in each case to glue on a larger shape for which the original shape is the boundary. For instance, starting with a circle formula_13, one could glue a 2-dimensional disk formula_22 to that formula_13 such that the formula_13 is the boundary of that formula_22. Similarly, given a two-sphere formula_15, one can glue a ball formula_35 to that formula_15 such that the formula_15 is the boundary of that formula_35. This phenomenon is sometimes described as saying that formula_15 has a formula_35-shaped "hole" or that it could be "filled in" with a formula_35. More generally, any shape with no boundary can be "filled in" with a cone, since given a space formula_36, one can glue formula_36 to the cone on formula_36, and then formula_36 will be the boundary of that cone. (For example, a cone on formula_13 is homeomorphic to a disk formula_22 whose boundary is that formula_13.) However, it is sometimes desirable to restrict to nicer spaces such as manifolds, and not every cone is homeomorphic to a manifold. Embedded representatives of 1-cycles, 3-cycles, and oriented 2-cycles all admit manifold-shaped holes, but for example the real projective plane formula_37 and complex projective plane formula_38 have nontrivial cobordism classes and therefore cannot be "filled in" with manifolds. On the other hand, the boundaries discussed in the homology of a topological space formula_39 are different from the boundaries of "filled in" holes, because the homology of a topological space formula_39 has to do with the original space formula_39, and not with new shapes built from gluing extra pieces onto formula_39. 
For example, any embedded circle formula_40 in formula_15 already bounds some embedded disk formula_41 in formula_15, so such formula_40 gives rise to a boundary class in the homology of formula_15. By contrast, no embedding of formula_13 into one of the two lobes of the figure-eight shape formula_42 gives a boundary, despite the fact that it is possible to glue a disk onto a figure-eight lobe. Homology groups. Given a sufficiently nice topological space formula_17, a choice of appropriate homology theory, and a chain complex formula_0 associated to formula_17 that is compatible with that homology theory, the formula_21th homology group formula_16 is then given by the quotient group formula_43 of formula_21-cycles (formula_21-dimensional cycles) modulo formula_21-dimensional boundaries. In other words, the elements of formula_16, called "homology classes", are equivalence classes whose representatives are formula_21-cycles, and any two cycles are regarded as equal in formula_16 if and only if they differ by the addition of a boundary. This also implies that the "zero" element of formula_16 is given by the group of formula_21-dimensional boundaries, which also includes formal sums of such boundaries. Informal examples. The homology of a topological space "X" is a set of topological invariants of "X" represented by its "homology groups" formula_44 where the formula_45 homology group formula_46 describes, informally, the number of holes in "X" with a "k"-dimensional boundary. A 0-dimensional-boundary hole is simply a gap between two components. Consequently, formula_47 describes the path-connected components of "X". A one-dimensional sphere formula_13 is a circle. It has a single connected component and a one-dimensional-boundary hole, but no higher-dimensional holes. The corresponding homology groups are given as formula_48 where formula_49 is the group of integers and formula_50 is the trivial group. The group formula_51 represents a finitely generated abelian group, with a single generator representing the one-dimensional hole contained in a circle. A two-dimensional sphere formula_15 has a single connected component, no one-dimensional-boundary holes, a two-dimensional-boundary hole, and no higher-dimensional holes. The corresponding homology groups are formula_52 In general, for an "n"-dimensional sphere formula_53 the homology groups are formula_54 A two-dimensional ball formula_55 is a solid disc. It has a single path-connected component, but in contrast to the circle, has no higher-dimensional holes. The corresponding homology groups are all trivial except for formula_56. In general, for an "n"-dimensional ball formula_57, formula_58 The torus is defined as a product of two circles formula_59. The torus has a single path-connected component, two independent one-dimensional holes (indicated by circles in red and blue) and one two-dimensional hole as the interior of the torus. The corresponding homology groups are formula_60 If the product of "n" copies of a topological space "X" is written as formula_61, then in general, for an "n"-dimensional torus formula_62, formula_63 (see Torus#n-dimensional torus and Betti number#More examples for more details). 
The two independent 1-dimensional holes form independent generators in a finitely generated abelian group, expressed as the product group formula_64 For the projective plane "P", a simple computation shows (where formula_65 is the cyclic group of order 2): formula_66 formula_67 corresponds, as in the previous examples, to the fact that there is a single connected component. formula_68 is a new phenomenon: intuitively, it corresponds to the fact that there is a single non-contractible "loop", but if we do the loop twice, it becomes contractible to zero. This phenomenon is called torsion. Construction of homology groups. The following text describes a general algorithm for constructing the homology groups. It may be easier for the reader to look at some simple examples first: graph homology and simplicial homology. The general construction begins with an object such as a topological space "X", on which one first defines a chain complex "C"("X") encoding information about "X". A chain complex is a sequence of abelian groups or modules formula_69 connected by homomorphisms formula_70, which are called boundary operators. That is, formula_71 where 0 denotes the trivial group and formula_72 for "i" &lt; 0. It is also required that the composition of any two consecutive boundary operators be trivial. That is, for all "n", formula_73 i.e., the constant map sending every element of formula_74 to the group identity in formula_75 The statement that the boundary of a boundary is trivial is equivalent to the statement that formula_76, where formula_77 denotes the image of the boundary operator and formula_78 its kernel. Elements of formula_79 are called boundaries and elements of formula_80 are called cycles. Since each chain group "Cn" is abelian, all its subgroups are normal. Because formula_78 is a subgroup of "Cn", it is abelian, and since formula_81 it follows that formula_77 is a normal subgroup of formula_78. Then one can create the quotient group formula_82 called the "n"th homology group of "X". The elements of "Hn"("X") are called homology classes. Each homology class is an equivalence class of cycles, and two cycles in the same homology class are said to be homologous. A chain complex is said to be exact if the image of the ("n"+1)th map is always equal to the kernel of the "n"th map. The homology groups of "X" therefore measure "how far" the chain complex associated to "X" is from being exact. The reduced homology groups of a chain complex "C"("X") are defined as homologies of the augmented chain complex formula_83 where the boundary operator formula_84 is formula_85 for a combination formula_86 of points formula_87, which are the fixed generators of "C"0. The reduced homology groups formula_88 coincide with formula_89 for formula_90 The extra formula_49 in the chain complex represents the unique map formula_91 from the empty simplex to "X". Computing the cycle formula_92 and boundary formula_93 groups is usually rather difficult since they have a very large number of generators. On the other hand, there are tools which make the task easier. The "simplicial homology" groups "Hn"("X") of a "simplicial complex" "X" are defined using the simplicial chain complex "C"("X"), with "Cn"("X") the free abelian group generated by the "n"-simplices of "X". See simplicial homology for details. The "singular homology" groups "Hn"("X") are defined for any topological space "X", and agree with the simplicial homology groups for a simplicial complex. 
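The construction above can be carried out concretely for a small simplicial complex. Below is a minimal sketch in Python using SymPy to compute the Betti numbers (the ranks of the free parts of the homology groups) of the boundary of a triangle, which is a triangulation of the circle; the matrix conventions are illustrative choices:

```python
from sympy import Matrix

# Vertices v0, v1, v2 and oriented edges [v0,v1], [v0,v2], [v1,v2].
# Column j of d1 records the boundary (head minus tail) of edge j.
d1 = Matrix([[-1, -1,  0],
             [ 1,  0, -1],
             [ 0,  1,  1]])
d2 = Matrix(3, 0, [])  # no 2-simplices, so d2 is the zero map

betti0 = d1.rows - d1.rank()                # dim C0 - rank d1 = 3 - 2
betti1 = (d1.cols - d1.rank()) - d2.rank()  # dim ker d1 - rank d2 = 1 - 0
print(betti0, betti1)  # 1 1, matching H0 = Z and H1 = Z for the circle
```

Gluing in the 2-simplex that fills the triangle would make d2 the column (1, −1, 1), whose image is exactly the 1-cycle above; the first homology of the resulting disc is then trivial.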
Cohomology groups are formally similar to homology groups: one starts with a cochain complex, which is the same as a chain complex but whose arrows, now denoted formula_94, point in the direction of increasing "n" rather than decreasing "n"; then the groups formula_95 of "cocycles" and formula_96 of "coboundaries" follow from the same description. The "n"th cohomology group of "X" is then the quotient group formula_97 in analogy with the "n"th homology group. Homology vs. homotopy. The nth homotopy group formula_98 of a topological space formula_17 is the group of homotopy classes of basepoint-preserving maps from the formula_21-sphere formula_99 to formula_17, under the group operation of concatenation. The most fundamental homotopy group is the fundamental group formula_100. For connected formula_17, the Hurewicz theorem describes a homomorphism formula_101 called the Hurewicz homomorphism. For formula_102, this homomorphism can be complicated, but when formula_103, the Hurewicz homomorphism coincides with abelianization. That is, formula_104 is surjective and its kernel is the commutator subgroup of formula_100, with the consequence that formula_105 is isomorphic to the abelianization of formula_100. Higher homotopy groups are sometimes difficult to compute. For instance, the homotopy groups of spheres are poorly understood and are not known in general, in contrast to the straightforward description given above for the homology groups. For an formula_103 example, suppose formula_17 is the figure eight. As usual, its first homotopy group, or fundamental group, formula_100 is the group of homotopy classes of directed loops starting and ending at a predetermined point (e.g. its center). It is isomorphic to the free group of rank 2, formula_106, which is not commutative: looping around the lefthand cycle and then around the righthand cycle is different from looping around the righthand cycle and then looping around the lefthand cycle. By contrast, the figure eight's first homology group formula_107 is abelian. To express this explicitly in terms of homology classes of cycles, one could take the homology class formula_108 of the lefthand cycle and the homology class formula_109 of the righthand cycle as basis elements of formula_105, allowing us to write formula_110. Types of homology. The different types of homology theory arise from functors mapping from various categories of mathematical objects to the category of chain complexes. In each case the composition of the functor from objects to chain complexes and the functor from chain complexes to homology groups defines the overall homology functor for the theory. Simplicial homology. The motivating example comes from algebraic topology: the simplicial homology of a simplicial complex "X". Here the chain group "Cn" is the free abelian group or module whose generators are the "n"-dimensional oriented simplexes of "X". The orientation is captured by ordering the complex's vertices and expressing an oriented simplex formula_111 as an "n"-tuple formula_112 of its vertices listed in increasing order (i.e. formula_113 in the complex's vertex ordering, where formula_114 is the formula_115th vertex appearing in the tuple). The mapping formula_116 from "Cn" to "Cn−1" is called the boundary mapping and sends the simplex formula_117 to the formal sum formula_118 which is considered 0 if formula_119 This behavior on the generators induces a homomorphism on all of "Cn" as follows. 
Given an element formula_31, write it as the sum of generators formula_120 where formula_121 is the set of "n"-simplexes in "X" and the "mi" are coefficients from the ring over which "Cn" is defined (usually the integers, unless otherwise specified). Then define formula_122 The dimension of the "n"-th homology of "X" turns out to be the number of "holes" in "X" at dimension "n". It may be computed by putting matrix representations of these boundary mappings in Smith normal form. Singular homology. Using the simplicial homology example as a model, one can define a "singular homology" for any topological space "X". A chain complex for "X" is defined by taking "Cn" to be the free abelian group (or free module) whose generators are all continuous maps from "n"-dimensional simplices into "X". The homomorphisms ∂"n" arise from the boundary maps of simplices. Group homology. In abstract algebra, one uses homology to define derived functors, for example the Tor functors. Here one starts with some covariant additive functor "F" and some module "X". The chain complex for "X" is defined as follows: first find a free module formula_123 and a surjective homomorphism formula_124 Then one finds a free module formula_125 and a surjective homomorphism formula_126 Continuing in this fashion, a sequence of free modules formula_127 and homomorphisms formula_128 can be defined. By applying the functor "F" to this sequence, one obtains a chain complex; the homology formula_129 of this complex depends only on "F" and "X" and is, by definition, the "n"-th derived functor of "F", applied to "X". A common use of group (co)homology formula_130 is to classify the possible extension groups "E" which contain a given "G"-module "M" as a normal subgroup and have a given quotient group "G", so that formula_131 Other homology theories. Homology functors. Chain complexes form a category: a morphism from the chain complex (formula_132) to the chain complex (formula_133) is a sequence of homomorphisms formula_134 such that formula_135 for all "n". The "n"-th homology "Hn" can be viewed as a covariant functor from the category of chain complexes to the category of abelian groups (or modules). If the chain complex depends on the object "X" in a covariant manner (meaning that any morphism formula_136 induces a morphism from the chain complex of "X" to the chain complex of "Y"), then the "Hn" are covariant functors from the category that "X" belongs to into the category of abelian groups (or modules). The only difference between homology and cohomology is that in cohomology the chain complexes depend in a "contravariant" manner on "X", and that therefore the homology groups (which are called "cohomology groups" in this context and denoted by "Hn") form "contravariant" functors from the category that "X" belongs to into the category of abelian groups or modules. Properties. If (formula_132) is a chain complex such that all but finitely many "An" are zero, and the others are finitely generated abelian groups (or finite-dimensional vector spaces), then we can define the "Euler characteristic" formula_137 (using the rank in the case of abelian groups and the Hamel dimension in the case of vector spaces). It turns out that the Euler characteristic can also be computed on the level of homology: formula_138 and, especially in algebraic topology, this provides two ways to compute the important invariant formula_139 for the object "X" which gave rise to the chain complex. 
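The Smith normal form computation mentioned above can be illustrated on a tiny example that exhibits torsion. The standard CW structure on the projective plane has one cell in each of dimensions 0, 1 and 2, with the 2-cell attached by a degree-2 map, so the cellular boundary maps are the 1×1 matrices d2 = (2) and d1 = (0). A minimal sketch in Python, assuming SymPy's smith_normal_form helper:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

d2 = Matrix([[2]])   # the 2-cell wraps twice around the 1-cell
d1 = Matrix([[0]])

print(smith_normal_form(d2, domain=ZZ))  # Matrix([[2]]): a Z/2 factor in H1

b0 = 1 - d1.rank()                # dim C0 - rank d1 = 1
b1 = (1 - d1.rank()) - d2.rank()  # dim ker d1 - rank d2 = 0 (pure torsion)
b2 = 1 - d2.rank()                # dim ker d2 = 0
print(b0 - b1 + b2)               # Euler characteristic 1 - 0 + 0 = 1
```

The invariant factor 2 is exactly the Z/2 torsion in the first homology of the projective plane, and the alternating sum of Betti numbers reproduces the Euler characteristic computed from the chain groups, 1 − 1 + 1 = 1.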
Every short exact sequence formula_140 of chain complexes gives rise to a long exact sequence of homology groups formula_141 All maps in this long exact sequence are induced by the maps between the chain complexes, except for the maps formula_142 The latter are called connecting homomorphisms and are provided by the zig-zag lemma. This lemma can be applied to homology in numerous ways that aid in calculating homology groups, such as the theories of relative homology and Mayer-Vietoris sequences. Applications. Application in pure mathematics. Notable theorems proved using homology include the Brouwer fixed point theorem, invariance of domain, the hairy ball theorem, and the Borsuk–Ulam theorem. Application in science and engineering. In topological data analysis, data sets are regarded as a point cloud sampling of a manifold or algebraic variety embedded in Euclidean space. By linking nearest neighbor points in the cloud into a triangulation, a simplicial approximation of the manifold is created and its simplicial homology may be calculated. Finding techniques to robustly calculate homology using various triangulation strategies over multiple length scales is the topic of persistent homology. In sensor networks, sensors may communicate information via an ad-hoc network that dynamically changes in time. To understand the global context of this set of local measurements and communication paths, it is useful to compute the homology of the network topology to evaluate, for instance, holes in coverage. In dynamical systems theory in physics, Poincaré was one of the first to consider the interplay between the invariant manifold of a dynamical system and its topological invariants. Morse theory relates the dynamics of a gradient flow on a manifold to, for example, its homology. Floer homology extended this to infinite-dimensional manifolds. The KAM theorem established that periodic orbits can follow complex trajectories; in particular, they may form braids that can be investigated using Floer homology. In one class of finite element methods, boundary-value problems for differential equations involving the Hodge-Laplace operator may need to be solved on topologically nontrivial domains, for example, in electromagnetic simulations. In these simulations, the solution is aided by fixing the cohomology class of the solution based on the chosen boundary conditions and the homology of the domain. FEM domains can be triangulated, from which the simplicial homology can be calculated. Software. Various software packages have been developed for the purposes of computing homology groups of finite cell complexes. Linbox is a C++ library for performing fast matrix operations, including Smith normal form; it interfaces with both Gap and Maple. Chomp, CAPD::Redhom and Perseus are also written in C++. All three implement pre-processing algorithms based on simple-homotopy equivalence and discrete Morse theory to perform homology-preserving reductions of the input cell complexes before resorting to matrix algebra. Kenzo is written in Lisp, and in addition to homology it may also be used to generate presentations of homotopy groups of finite simplicial complexes. Gmsh includes a homology solver for finite element meshes, which can generate cohomology bases directly usable by finite element software. Some non-homology-based discussions of surfaces. Origins. Homology theory can be said to start with the Euler polyhedron formula, or Euler characteristic. 
This was followed by Riemann's definition of genus and "n"-fold connectedness numerical invariants in 1857 and Betti's proof in 1871 of the independence of "homology numbers" from the choice of basis. Surfaces. On the ordinary sphere formula_15, the curve "b" in the diagram can be shrunk to the pole, and even the equatorial great circle "a" can be shrunk in the same way. The Jordan curve theorem shows that any closed curve such as "c" can be similarly shrunk to a point. This implies that formula_15 has trivial fundamental group, so as a consequence, it also has trivial first homology group. The torus formula_14 has closed curves which cannot be continuously deformed into each other; for example, in the diagram none of the cycles "a", "b" or "c" can be deformed into one another. In particular, cycles "a" and "b" cannot be shrunk to a point whereas cycle "c" can. If the torus surface is cut along both "a" and "b", it can be opened out and flattened into a rectangle or, more conveniently, a square. One opposite pair of sides represents the cut along "a", and the other opposite pair represents the cut along "b". The edges of the square may then be glued back together in different ways. The square can be twisted to allow edges to meet in the opposite direction, as shown by the arrows in the diagram. The various ways of gluing the sides yield just four topologically distinct surfaces: formula_152 is the Klein bottle, which is a torus with a twist in it (in the square diagram, the twist can be seen as the reversal of the bottom arrow). It is a theorem that the re-glued surface must self-intersect (when immersed in Euclidean 3-space). Like the torus, cycles "a" and "b" cannot be shrunk while "c" can be. But unlike the torus, following "b" forwards right round and back reverses left and right, because "b" happens to cross over the twist given to one join. If an equidistant cut on one side of "b" is made, it returns on the other side and goes round the surface a second time before returning to its starting point, cutting out a twisted Möbius strip. Because local left and right can be arbitrarily re-oriented in this way, the surface as a whole is said to be non-orientable. The projective plane formula_153 has both joins twisted. The uncut form, generally represented as the Boy surface, is visually complex, so a hemispherical embedding is shown in the diagram, in which antipodal points around the rim such as "A" and "A′" are identified as the same point. Again, "a" is non-shrinkable while "c" is. If "b" were only wound once, it would also be non-shrinkable and reverse left and right. However, it is wound a second time, which swaps right and left back again; it can be shrunk to a point and is homologous to "c". Cycles can be joined or added together, as "a" and "b" on the torus were when it was cut open and flattened down. In the Klein bottle diagram, "a" goes round one way and −"a" goes round the opposite way. If "a" is thought of as a cut, then −"a" can be thought of as a gluing operation. Making a cut and then re-gluing it does not change the surface, so "a" + (−"a") = 0. But now consider two "a"-cycles. Since the Klein bottle is nonorientable, one of them can be transported all the way round the bottle (along the "b"-cycle), and it will come back as −"a". This is because the Klein bottle is made from a cylinder, whose "a"-cycle ends are glued together with opposite orientations. Hence 2"a" = "a" + "a" = "a" + (−"a") = 0. This phenomenon is called torsion.
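The torsion can be verified algebraically. A hedged sketch, assuming the standard CW structure on the Klein bottle — one vertex, two 1-cells "a" and "b", and one 2-cell whose attaching word "abab"⁻¹ abelianizes to 2"a" — together with SymPy's smith_normal_form:

```python
# Hedged sketch: H_1 of the Klein bottle from a CW boundary matrix.
# Cells: one vertex, two loops a and b, one 2-cell f whose attaching
# word a b a b^-1 abelianizes to 2a + 0b, so d2(f) = 2a.
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

d2 = Matrix([[2],   # coefficient of a in the boundary of f
             [0]])  # coefficient of b

# Both 1-cells are loops at the single vertex, so d1 = 0 and ker d1 = Z^2;
# H_1 is Z^2 modulo the image of d2, read off from the Smith normal form.
print(smith_normal_form(d2))  # invariant factor 2 in the a-direction

# Hence H_1 = Z (generated by b) + Z/2 (generated by a): exactly 2a = 0.
```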
Similarly, in the projective plane, following the unshrinkable cycle "b" round twice remarkably creates a trivial cycle which "can" be shrunk to a point; that is, "b" + "b" = 0. Because "b" must be followed around twice to achieve a zero cycle, the surface is said to have a torsion coefficient of 2. However, following a "b"-cycle around twice in the Klein bottle gives simply "b" + "b" = 2"b", since this cycle lives in a torsion-free homology class. This corresponds to the fact that in the fundamental polygon of the Klein bottle, only one pair of sides is glued with a twist, whereas in the projective plane both sides are twisted. A square is a contractible topological space, which implies that it has trivial homology. Consequently, additional cuts disconnect it. The square is not the only shape in the plane that can be glued into a surface. Gluing opposite sides of an octagon, for example, produces a surface with two holes. In fact, all closed surfaces can be produced by gluing the sides of some polygon and all even-sided polygons (2"n"-gons) can be glued to make different manifolds. Conversely, a closed surface with "n" non-zero classes can be cut into a 2"n"-gon. Variations are also possible; for example, a hexagon may also be glued to form a torus. The first recognisable theory of homology was published by Henri Poincaré in his seminal paper "Analysis situs", "J. Ecole polytech." (2) 1. 1–121 (1895). The paper introduced homology classes and relations. The possible configurations of orientable cycles are classified by the Betti numbers of the manifold (Betti numbers are a refinement of the Euler characteristic). Classifying the non-orientable cycles requires additional information about torsion coefficients. The complete classification of 1- and 2-manifolds is given in the table. Notes: 1. For a non-orientable surface, a hole is equivalent to two cross-caps. 2. Any closed 2-manifold can be realised as the connected sum of "g" tori and "c" projective planes, where the 2-sphere formula_15 is regarded as the empty connected sum. Homology is preserved by the operation of connected sum. In a search for increased rigour, Poincaré went on to develop the simplicial homology of a triangulated manifold and to create what is now called a simplicial chain complex. Chain complexes (since greatly generalized) form the basis for most modern treatments of homology. Emmy Noether and, independently, Leopold Vietoris and Walther Mayer further developed the theory of algebraic homology groups in the period 1925–28. The new combinatorial topology formally treated topological classes as abelian groups. Homology groups are finitely generated abelian groups, and homology classes are elements of these groups. The Betti numbers of the manifold are the ranks of the free parts of the homology groups, and in the special case of surfaces, the torsion part of the homology group occurs only for non-orientable cycles. The subsequent spread of homology groups brought a change of terminology and viewpoint from "combinatorial topology" to "algebraic topology". Algebraic homology remains the primary method of classifying manifolds. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " (C_\\bullet, d_\\bullet)" }, { "math_id": 1, "text": "C_{n}" }, { "math_id": 2, "text": "d_n" }, { "math_id": 3, "text": " C_\\bullet: \\cdots \\longrightarrow \nC_{n+1} \\stackrel{d_{n+1}}{\\longrightarrow}\nC_n \\stackrel{d_n}{\\longrightarrow}\nC_{n-1} \\stackrel{d_{n-1}}{\\longrightarrow}\n\\cdots, \\quad d_n \\circ d_{n+1}=0." }, { "math_id": 4, "text": "n\n" }, { "math_id": 5, "text": "H_{n}" }, { "math_id": 6, "text": "H_n = Z_n/B_n" }, { "math_id": 7, "text": "Z_n\n" }, { "math_id": 8, "text": "Z_n := \\ker d_n :=\\{c \\in C_n \\,|\\; d_n(c) = 0\\}" }, { "math_id": 9, "text": "B\n_n\n" }, { "math_id": 10, "text": "B_n := \\mathrm{im}\\, d_{n+1} :=\\{d_{n+1}(c)\\,|\\; c\\in C_{n+1}\\}" }, { "math_id": 11, "text": "C_n\n" }, { "math_id": 12, "text": "R\n" }, { "math_id": 13, "text": "S^1" }, { "math_id": 14, "text": "T^2" }, { "math_id": 15, "text": "S^2" }, { "math_id": 16, "text": "H_n(X)" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "d_n: C_n \\to C_{n-1}" }, { "math_id": 19, "text": "C_n" }, { "math_id": 20, "text": "d_{n-1} \\circ d_n=0" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "D^2" }, { "math_id": 23, "text": "[0,1]" }, { "math_id": 24, "text": "\\{0\\} \\, \\amalg \\, \\{1\\} " }, { "math_id": 25, "text": "\\{1\\} " }, { "math_id": 26, "text": "\\{0\\}. " }, { "math_id": 27, "text": "d_1([0,1]) = \\{1\\} - \\{0\\} " }, { "math_id": 28, "text": "d_1 " }, { "math_id": 29, "text": "d_1(k\\cdot[0,1]) = k\\cdot\\{1\\} - k\\cdot\\{0\\} " }, { "math_id": 30, "text": "k " }, { "math_id": 31, "text": "c \\in C_n" }, { "math_id": 32, "text": "d_n(c) = 0" }, { "math_id": 33, "text": "S^1, S^2 " }, { "math_id": 34, "text": "T^2 " }, { "math_id": 35, "text": "B^3" }, { "math_id": 36, "text": "Y\n" }, { "math_id": 37, "text": "\\mathbb{RP}^2" }, { "math_id": 38, "text": "\\mathbb{CP}^2" }, { "math_id": 39, "text": "X\n" }, { "math_id": 40, "text": "C" }, { "math_id": 41, "text": "D" }, { "math_id": 42, "text": "X_8\n" }, { "math_id": 43, "text": "H_n(X)=Z_n/B_n" }, { "math_id": 44, "text": "H_0(X), H_1(X), H_2(X), \\ldots" }, { "math_id": 45, "text": "k^{\\rm th}" }, { "math_id": 46, "text": "H_k(X)" }, { "math_id": 47, "text": "H_0(X)" }, { "math_id": 48, "text": "H_k\\left(S^1\\right) = \\begin{cases}\n \\Z & k = 0, 1 \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 49, "text": "\\Z" }, { "math_id": 50, "text": "\\{0\\}" }, { "math_id": 51, "text": "H_1\\left(S^1\\right) = \\Z" }, { "math_id": 52, "text": "H_k\\left(S^2\\right) = \\begin{cases}\n \\Z & k = 0, 2 \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 53, "text": "S^n," }, { "math_id": 54, "text": "H_k\\left(S^n\\right) = \\begin{cases}\n \\Z & k = 0, n \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 55, "text": "B^2" }, { "math_id": 56, "text": "H_0\\left(B^2\\right) = \\Z" }, { "math_id": 57, "text": "B^n," }, { "math_id": 58, "text": "H_k\\left(B^n\\right) = \\begin{cases}\n \\Z & k = 0 \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 59, "text": "T^2 = S^1 \\times S^1" }, { "math_id": 60, "text": "H_k(T^2) = \\begin{cases}\n \\Z & k = 0, 2 \\\\\n \\Z \\times \\Z & k = 1 \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 61, "text": "X^n" }, { "math_id": 62, "text": "T^n = (S^1)^n" }, { "math_id": 63, "text": "H_k(T^n) = \\begin{cases}\n \\Z^\\binom{n}{k} & 0 \\le k \\le n \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 64, "text": "\\Z \\times \\Z." 
}, { "math_id": 65, "text": "\\Z_2" }, { "math_id": 66, "text": "H_k(P) = \\begin{cases}\n \\Z & k = 0 \\\\\n \\Z_2 & k = 1 \\\\\n \\{0\\} & \\text{otherwise}\n\\end{cases}" }, { "math_id": 67, "text": "H_0(P) = \\Z" }, { "math_id": 68, "text": "H_1(P) = \\Z_2" }, { "math_id": 69, "text": "C_0, C_1, C_2, \\ldots" }, { "math_id": 70, "text": "\\partial_n : C_n \\to C_{n-1}," }, { "math_id": 71, "text": "\n\\dotsb\n\\overset{\\partial_{n+1}}{\\longrightarrow\\,} C_n\n\\overset{\\partial_n}{\\longrightarrow\\,} C_{n-1}\n\\overset{\\partial_{n-1}}{\\longrightarrow\\,} \\dotsb\n\\overset{\\partial_2}{\\longrightarrow\\,} C_1\n\\overset{\\partial_1}{\\longrightarrow\\,} C_0\n\\overset{\\partial_0}{\\longrightarrow\\,} 0\n" }, { "math_id": 72, "text": "C_i\\equiv0" }, { "math_id": 73, "text": "\\partial_n \\circ \\partial_{n+1} = 0_{n+1, n-1}," }, { "math_id": 74, "text": "C_{n+1}" }, { "math_id": 75, "text": "C_{n-1}." }, { "math_id": 76, "text": "\\mathrm{im}(\\partial_{n+1})\\subseteq\\ker(\\partial_n)" }, { "math_id": 77, "text": "\\mathrm{im}(\\partial_{n+1})" }, { "math_id": 78, "text": "\\ker(\\partial_n)" }, { "math_id": 79, "text": "B_n(X) = \\mathrm{im}(\\partial_{n+1})" }, { "math_id": 80, "text": "Z_n(X) = \\ker(\\partial_n)" }, { "math_id": 81, "text": "\\mathrm{im}(\\partial_{n+1}) \\subseteq\\ker(\\partial_n)" }, { "math_id": 82, "text": "H_n(X) := \\ker(\\partial_n) / \\mathrm{im}(\\partial_{n+1}) = Z_n(X)/B_n(X)," }, { "math_id": 83, "text": "\n\\dotsb\n\\overset{\\partial_{n+1}}{\\longrightarrow\\,} C_n\n\\overset{\\partial_n}{\\longrightarrow\\,} C_{n-1}\n\\overset{\\partial_{n-1}}{\\longrightarrow\\,} \\dotsb\n\\overset{\\partial_2}{\\longrightarrow\\,} C_1\n\\overset{\\partial_1}{\\longrightarrow\\,} C_0\n\\overset{\\epsilon}{\\longrightarrow\\,} \\Z\n{\\longrightarrow\\,} 0\n" }, { "math_id": 84, "text": "\\epsilon" }, { "math_id": 85, "text": "\\epsilon \\left(\\sum_i n_i \\sigma_i\\right) = \\sum_i n_i" }, { "math_id": 86, "text": "\\sum n_i \\sigma_i," }, { "math_id": 87, "text": "\\sigma_i," }, { "math_id": 88, "text": "\\tilde{H}_i(X)" }, { "math_id": 89, "text": "H_i(X)" }, { "math_id": 90, "text": "i \\neq 0." 
}, { "math_id": 91, "text": "[\\emptyset] \\longrightarrow X" }, { "math_id": 92, "text": "Z_n(X)" }, { "math_id": 93, "text": "B_n(X)" }, { "math_id": 94, "text": "d_n," }, { "math_id": 95, "text": "\\ker\\left(d^n\\right) = Z^n(X)" }, { "math_id": 96, "text": "\\mathrm{im}\\left(d^{n-1}\\right) = B^n(X)" }, { "math_id": 97, "text": "H^n(X) = Z^n(X)/B^n(X)," }, { "math_id": 98, "text": "\\pi_n(X)" }, { "math_id": 99, "text": "S^n" }, { "math_id": 100, "text": "\\pi_1(X)" }, { "math_id": 101, "text": "h_*: \\pi_n(X) \\to H_n(X)" }, { "math_id": 102, "text": "n>1" }, { "math_id": 103, "text": "n=1" }, { "math_id": 104, "text": "h_*: \\pi_1(X) \\to H_1(X)" }, { "math_id": 105, "text": "H_1(X)" }, { "math_id": 106, "text": "\\pi_1(X) \\cong \\mathbb{Z} * \\mathbb{Z}" }, { "math_id": 107, "text": "H_1(X)\\cong \\mathbb{Z} \\times \\mathbb{Z}" }, { "math_id": 108, "text": "l" }, { "math_id": 109, "text": "r" }, { "math_id": 110, "text": "H_1(X)=\\{a_l l + a_r r\\,|\\; a_l, a_r \\in \\mathbb{Z}\\} " }, { "math_id": 111, "text": "\\sigma" }, { "math_id": 112, "text": "(\\sigma[0], \\sigma[1], \\dots, \\sigma[n])" }, { "math_id": 113, "text": "\\sigma[0] < \\sigma[1] < \\cdots < \\sigma[n]" }, { "math_id": 114, "text": "\\sigma[i]" }, { "math_id": 115, "text": "i" }, { "math_id": 116, "text": "\\partial_n" }, { "math_id": 117, "text": "\\sigma = (\\sigma[0], \\sigma[1], \\dots, \\sigma[n])" }, { "math_id": 118, "text": "\\partial_n(\\sigma) = \\sum_{i=0}^n (-1)^i \\left (\\sigma[0], \\dots, \\sigma[i-1], \\sigma[i+1], \\dots, \\sigma[n] \\right )," }, { "math_id": 119, "text": "n = 0." }, { "math_id": 120, "text": "c = \\sum_{\\sigma_i \\in X_n} m_i \\sigma_i," }, { "math_id": 121, "text": "X_n" }, { "math_id": 122, "text": "\\partial_n(c) = \\sum_{\\sigma_i \\in X_n} m_i \\partial_n(\\sigma_i)." }, { "math_id": 123, "text": "F_1" }, { "math_id": 124, "text": "p_1 : F_1 \\to X." }, { "math_id": 125, "text": "F_2" }, { "math_id": 126, "text": "p_2 : F_2 \\to \\ker\\left(p_1\\right)." }, { "math_id": 127, "text": "F_n" }, { "math_id": 128, "text": "p_n" }, { "math_id": 129, "text": "H_n" }, { "math_id": 130, "text": "H^2(G, M)" }, { "math_id": 131, "text": "G = E / M." }, { "math_id": 132, "text": "d_n : A_n \\to A_{n-1}" }, { "math_id": 133, "text": "e_n : B_n \\to B_{n-1}" }, { "math_id": 134, "text": "f_n : A_n \\to B_n" }, { "math_id": 135, "text": "f_{n-1} \\circ d_n = e_n \\circ f_n" }, { "math_id": 136, "text": "X \\to Y" }, { "math_id": 137, "text": "\\chi = \\sum (-1)^n \\, \\mathrm{rank}(A_n)" }, { "math_id": 138, "text": "\\chi = \\sum (-1)^n \\, \\mathrm{rank}(H_n)" }, { "math_id": 139, "text": "\\chi" }, { "math_id": 140, "text": "0 \\rightarrow A \\rightarrow B \\rightarrow C \\rightarrow 0" }, { "math_id": 141, "text": "\\cdots \\to H_n(A) \\to H_n(B) \\to H_n(C) \\to H_{n-1}(A) \\to H_{n-1}(B) \\to H_{n-1}(C) \\to H_{n-2}(A) \\to \\cdots" }, { "math_id": 142, "text": "H_n(C) \\to H_{n-1}(A)" }, { "math_id": 143, "text": "a \\in B^n" }, { "math_id": 144, "text": "f(a) = a." }, { "math_id": 145, "text": "\\R^n" }, { "math_id": 146, "text": "f : U \\to \\R^n" }, { "math_id": 147, "text": "V = f(U)" }, { "math_id": 148, "text": "k \\geq 1" }, { "math_id": 149, "text": "U \\subseteq \\R^m" }, { "math_id": 150, "text": "V \\subseteq \\R^n" }, { "math_id": 151, "text": "m = n." }, { "math_id": 152, "text": "K^2" }, { "math_id": 153, "text": "P^2" } ]
https://en.wikipedia.org/wiki?curid=142432
14245076
Hundred-dollar, Hundred-digit Challenge problems
The Hundred-dollar, Hundred-digit Challenge problems are 10 problems in numerical mathematics published in 2002 by Nick Trefethen (2002). A $100 prize was offered to whoever produced the most accurate solutions, measured up to 10 significant digits. The deadline for the contest was May 20, 2002. In the end, 20 teams solved all of the problems perfectly within the required precision, and an anonymous donor aided in producing the required prize monies. The challenge and its solutions were described in detail in the book (Folkmar Bornemann, Dirk Laurie & Stan Wagon et al. 2004). The problems. From Trefethen (2002): Solutions. These answers have been assigned identifiers in the On-Line Encyclopedia of Integer Sequences.
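To give the flavour of the problems, one of them asks for the formula_2 norm formula_3 of the infinite matrix whose entries fill the anti-diagonals with 1, 1/2, 1/3, ... (formula_1 below). A hedged sketch of the naive attack by truncation follows; the truncation sizes are illustrative, and plain truncation only stabilizes the leading digits — the contest-level answer required convergence acceleration.

```python
# Hedged sketch: estimate the 2-norm of the infinite matrix by truncation.
# With 0-based indices, entry (i, j) sits at position T(i+j) + i + 1 in the
# sequence 1, 1/2, 1/3, ..., where T(k) = k(k+1)/2 counts earlier diagonals.
import numpy as np

def truncated_norm(N: int) -> float:
    i, j = np.indices((N, N))
    k = i + j
    A = 1.0 / (k * (k + 1) // 2 + i + 1)
    return np.linalg.norm(A, 2)   # largest singular value of the block

for N in (100, 400, 1600):
    print(N, truncated_norm(N))   # the leading digits stabilise slowly
```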
[ { "math_id": 0, "text": " \\lim_{\\varepsilon \\to 0}\\int_\\varepsilon^1 x^{-1} \\cos\\left(x^{-1} \\log x\\right)\\,dx" }, { "math_id": 1, "text": "a_{11}=1, a_{12}=1/2, a_{21}=1/3, a_{13}=1/4, a_{22}=1/5, a_{31}=1/6, \\dots " }, { "math_id": 2, "text": "\\ell^2" }, { "math_id": 3, "text": "||A||" }, { "math_id": 4, "text": "\\exp\\left(\\sin\\left(50x\\right)\\right) + \\sin\\left(60e^y\\right) + \\sin\\left(70 \\sin x\\right)+\\sin\\left(\\sin\\left(80y\\right)\\right) - \\sin\\left(10\\left(x+y\\right)\\right) + 1/4\\left(x^2 + y^2\\right)" }, { "math_id": 5, "text": "f(z)=1/\\Gamma(z)" }, { "math_id": 6, "text": "\\Gamma(z)" }, { "math_id": 7, "text": "p(z)" }, { "math_id": 8, "text": "f(z)" }, { "math_id": 9, "text": "||.||_\\infty" }, { "math_id": 10, "text": "||f-p||_\\infty" }, { "math_id": 11, "text": "(0,0)" }, { "math_id": 12, "text": "1/4" }, { "math_id": 13, "text": "1/4+\\varepsilon" }, { "math_id": 14, "text": "1/4-\\varepsilon" }, { "math_id": 15, "text": "1/2" }, { "math_id": 16, "text": "\\varepsilon" }, { "math_id": 17, "text": "a_{ij}" }, { "math_id": 18, "text": "|i-j|=1, 2, 4, 8, \\dots, 16384" }, { "math_id": 19, "text": "A^{-1}" }, { "math_id": 20, "text": "[-1,1]\\times [-1,1]" }, { "math_id": 21, "text": "u=0" }, { "math_id": 22, "text": "t=0" }, { "math_id": 23, "text": "u=5" }, { "math_id": 24, "text": "u_{t} = \\Delta u" }, { "math_id": 25, "text": "u=1" }, { "math_id": 26, "text": "I(\\alpha)=\\int_0^2\\left[2+\\sin\\left(10\\alpha\\right)\\right]x^\\alpha \\sin\\left(\\alpha/\\left(2-x\\right)\\right)\\,dx" } ]
https://en.wikipedia.org/wiki?curid=14245076
1424521
Quantum correlation
In quantum mechanics, quantum correlation is the expected value of the product of the alternative outcomes. In other words, it is the expected change in physical characteristics as one quantum system passes through an interaction site. In John Bell's 1964 paper that inspired the Bell test, it was assumed that the outcomes A and B could each only take one of two values, −1 or +1. It followed that the product, too, could only be −1 or +1, so that the average value of the product would be formula_0 where, for example, N++ is the number of simultaneous instances ("coincidences") of the outcome +1 on both sides of the experiment. However, in actual experiments, detectors are not perfect and produce many null outcomes. The correlation can still be estimated using the sum of coincidences, since clearly zeros do not contribute to the average, but in practice, instead of dividing by Ntotal, it is customary to divide by formula_1, the total number of observed coincidences. The legitimacy of this method relies on the assumption that the observed coincidences constitute a fair sample of the emitted pairs. Following local realist assumptions as in Bell's paper, the estimated quantum correlation converges after a sufficient number of trials to formula_2 where "a" and "b" are detector settings and λ is the hidden variable, drawn from a distribution ρ(λ). The quantum correlation is the key statistic in the CHSH inequality and some of the other Bell inequalities, tests that open the way for experimental discrimination between quantum mechanics and local realism or local hidden-variable theory. Outside Bell test experiments. Quantum correlations give rise to various phenomena, including interference of particles separated in time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
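As a sanity check on the estimator, the toy simulation below assumes the textbook singlet-state prediction E("a", "b") = −cos("a" − "b") for spin measurements along directions at angles "a" and "b" — an assumption supplied here for illustration, not a statement from the article — and confirms that the coincidence-count average recovers it.

```python
# Toy Monte Carlo for the coincidence-count estimator of the correlation.
# Assumes the singlet-state prediction E(a, b) = -cos(a - b), i.e. the
# outcomes agree (+1,+1 or -1,-1) with probability (1 - cos(a - b)) / 2.
import numpy as np

def estimate_correlation(a: float, b: float, trials: int = 200_000) -> float:
    rng = np.random.default_rng(1)
    p_same = (1 - np.cos(a - b)) / 2
    same = rng.random(trials) < p_same
    products = np.where(same, 1, -1)   # product of the two +/-1 outcomes
    # (N++ - N+- - N-+ + N--) / N_total is simply the mean of the products.
    return products.mean()

a, b = 0.0, np.pi / 4
print(estimate_correlation(a, b), -np.cos(a - b))  # both close to -0.7071
```

Run over the four standard CHSH setting pairs (0 and π/2 on one side, π/4 and 3π/4 on the other), the same estimator reproduces the quantum value S = 2√2 ≈ 2.83, above the local-realist bound of 2.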
[ { "math_id": 0, "text": "\\frac{N_{++}- N_{+-}- N_{-+}+N_{--}}{N_{total}}" }, { "math_id": 1, "text": "N_{++} + N_{+-}+ N_{-+} + N_{--}" }, { "math_id": 2, "text": "QC(a, b) = \\int d \\lambda \\rho (\\lambda) A(a, \\lambda)B(b, \\lambda) " } ]
https://en.wikipedia.org/wiki?curid=1424521