14456948
Multiple inositol-polyphosphate phosphatase
The enzyme multiple inositol-polyphosphate phosphatase (EC 3.1.3.62) catalyzes the reaction "myo"-inositol hexakisphosphate + H2O formula_0 "myo"-inositol pentakisphosphate (mixed isomers) + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is 1-"myo"-inositol-hexakisphosphate 5-phosphohydrolase. Other names in common use include inositol (1,3,4,5)-tetrakisphosphate 3-phosphatase, inositol 1,3,4,5-tetrakisphosphate 3-phosphomonoesterase, inositol 1,3,4,5-tetrakisphosphate-5-phosphomonoesterase, inositol tetrakisphosphate phosphomonoesterase, inositol-1,3,4,5-tetrakisphosphate 3-phosphatase, and MIPP. This enzyme participates in inositol phosphate metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14456948
14456982
N-acetylgalactosaminoglycan deacetylase
The enzyme "N"-acetylgalactosaminoglycan deacetylase (EC 3.1.1.58) catalyzes the reaction "N"-acetyl--galactosaminoglycan + H2O formula_0 -galactosaminoglycan + acetate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is N"-acetyl--galactosaminoglycan acetylhydrolase. Other names in common use include polysaccharide deacetylase, Vi-polysaccharide deacetylase, and N"-acetyl galactosaminoglycan deacetylase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14456982
14456997
N-acetylglucosamine-1-phosphodiester alpha-N-acetylglucosaminidase
The enzyme "N"-acetylglucosamine-1-phosphodiester α-"N"-acetylglucosaminidase (EC 3.1.4.45) catalyzes the reaction glycoprotein "N"-acetyl--glucosaminyl-phospho--mannose + H2O formula_0 "N"-acetyl--glucosamine + glycoprotein phospho--mannose This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric diester bonds. The systematic name is glycoprotein-"N"-acetyl--glucosaminyl-phospho--mannose "N"-acetyl--glucosaminylphosphohydrolase. Other names in common use include α-"N"-acetylglucosaminyl phosphodiesterase, lysosomal α-"N"-acetylglucosaminidase, phosphodiester glycosidase, α-"N"-acetyl--glucosamine-1-phosphodiester, "N"-acetylglucosaminidase, 2-acetamido-2-deoxy-α--glucose 1-phosphodiester, and acetamidodeoxyglucohydrolase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14456997
14457028
N-acylneuraminate-9-phosphatase
The enzyme "N"-acylneuraminate-9-phosphatase (EC 3.1.3.29) catalyzes the reaction "N"-acylneuraminate 9-phosphate + H2O formula_0 "N"-acylneuraminate + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is N"-acylneuraminate-9-phosphate phosphohydrolase. Other names in common use include acylneuraminate 9-phosphatase, N"-acylneuraminic acid 9-phosphate phosphatase, and "N"-acylneuraminic (sialic) acid 9-phosphatase. This enzyme participates in aminosugars metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457028
14457043
Oleoyl-(acyl-carrier-protein) hydrolase
The enzyme oleoyl-[acyl-carrier-protein] hydrolase (EC 3.1.2.14) catalyzes the reaction an oleoyl-[acyl-carrier-protein] + H2O formula_0 an [acyl-carrier-protein] + oleate This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name is oleoyl-[acyl-carrier-protein] hydrolase. Other names in common use include acyl-[acyl-carrier-protein] hydrolase, acyl-ACP-hydrolase, acyl-acyl carrier protein hydrolase, oleoyl-ACP thioesterase, and oleoyl-acyl carrier protein thioesterase. It participates in fatty acid biosynthesis. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2OWN and 2PFF. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457043
14457087
Palmitoyl(protein) hydrolase
Palmitoyl protein hydrolase/thioesterase is an enzyme (EC 3.1.2.22) that removes thioester-linked fatty acyl groups such as palmitate from modified cysteine residues in proteins or peptides during lysosomal degradation. It catalyzes the reaction palmitoyl-[protein] + H2O formula_0 palmitate + [protein] This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name is palmitoyl-[protein] hydrolase. Other names in common use include palmitoyl-protein thioesterase, and palmitoyl-(protein) hydrolase. This enzyme participates in fatty acid elongation in mitochondria. Neuronal ceroid lipofuscinoses (NCL) represent a group of encephalopathies that occur in 1 in 12,500 children. Mutations in the palmitoyl protein thioesterase gene cause infantile neuronal ceroid lipofuscinosis (INCL). The most common mutation results in intracellular accumulation of the polypeptide and undetectable enzyme activity in the brain. Direct sequencing of cDNAs derived from brain RNA of INCL patients has shown a missense transversion of A to T at nucleotide position 364, which results in substitution of Trp for Arg at position 122 in the protein; Arg-122 is immediately adjacent to a lipase consensus sequence that contains the putative active site Ser of PPT. The occurrence of this and two other independent mutations in the PPT gene strongly suggests that defects in this gene cause INCL. Examples. Human proteins containing this domain include: Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1EH5, 1EI9, 1EXW, and 1PJA. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457087
14457112
Orsellinate-depside hydrolase
The enzyme orsellinate-depside hydrolase (EC 3.1.1.40) catalyzes the reaction orsellinate depside + H2O formula_0 2 orsellinate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is orsellinate-depside hydrolase. This enzyme is also called lecanorate hydrolase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457112
14457164
Phenylacetyl-CoA hydrolase
The enzyme phenylacetyl-CoA hydrolase (EC 3.1.2.25) catalyzes the reaction phenylglyoxylyl-CoA + H2O formula_0 phenylglyoxylate + CoA This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name of this enzyme class is phenylglyoxylyl-CoA hydrolase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457164
14457182
Phorbol-diester hydrolase
The enzyme phorbol-diester hydrolase (EC 3.1.1.51) catalyzes the reaction phorbol 12,13-dibutanoate + H2O formula_0 phorbol 13-butanoate + butanoate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is 12,13-diacylphorbate 12-acylhydrolase. Other names in common use include diacylphorbate 12-hydrolase, phorbol-12,13-diester 12-ester hydrolase, and PDEH. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457182
14457196
Phosphatidate phosphatase
The enzyme phosphatidate phosphatase (PAP, EC 3.1.3.4) is a key regulatory enzyme in lipid metabolism, catalyzing the conversion of phosphatidate to diacylglycerol: a 1,2-diacylglycerol 3-phosphate + H2O formula_0 a 1,2-diacyl-"sn"-glycerol + phosphate The reverse conversion is catalyzed by the enzyme diacylglycerol kinase, which replaces the hydroxyl group on diacylglycerol with a phosphate from ATP, generating ADP in the process. In yeast, the forward direction is Mg2+-dependent, while the reverse process is Ca2+-dependent. PAP1, a cytosolic phosphatidate phosphatase found in the lung, is also Mg2+-dependent, but PAP2, a six-transmembrane-domain integral protein found in the plasma membrane, is not. Role in the regulation of lipid flux. Phosphatidate phosphatase regulates lipid metabolism in several ways. In short, it is a key player in controlling the overall flux of triacylglycerols to phospholipids and vice versa, also exerting control through the generation and degradation of lipid-signaling molecules related to phosphatidate. When the phosphatase is active, the diacylglycerols it forms can go on to form any of several products, including phosphatidylethanolamine, phosphatidylcholine, phosphatidylserine, and triacylglycerol. Phospholipids can be formed from diacylglycerol through reaction with activated alcohols, and triacylglycerols can be formed from diacylglycerols through reaction with fatty acyl-CoA molecules. When phosphatidate phosphatase is inactive, diacylglycerol kinase catalyzes the reverse conversion, allowing phosphatidate to accumulate as it brings down diacylglycerol levels. Phosphatidate can then be converted into an activated form, CDP-diacylglycerol, by liberation of pyrophosphate from a CTP molecule, or into cardiolipin. CDP-diacylglycerol is a principal precursor used by the body in phospholipid synthesis. Furthermore, because both phosphatidate and diacylglycerol function as secondary messengers, phosphatidate phosphatase is able to exert extensive and intricate control of lipid metabolism far beyond its local effect on phosphatidate and diacylglycerol concentrations and the resulting effect on the direction of lipid flux as outlined above. Enzyme regulation. Phosphatidate phosphatase is up-regulated by CDP-diacylglycerol, phosphatidylinositol (formed from reaction of CDP-diacylglycerol with inositol), and cardiolipin. It is down-regulated by sphingosine and dihydrosphingosine. This makes sense in the context of the discussion above: a build-up of products that are formed from phosphatidate up-regulates the phosphatase, the enzyme that consumes phosphatidate, thereby acting as a signal that phosphatidate is in abundance and causing its consumption. At the same time, a build-up of products that are formed from DAG down-regulates the enzyme that forms diacylglycerol, thereby acting as a signal that diacylglycerol is in abundance and its production should be slowed. Classification. PAP belongs to the family of enzymes known as hydrolases, and more specifically to the hydrolases that act on phosphoric monoester bonds. This enzyme participates in 4 metabolic pathways: glycerolipid, glycerophospholipid, ether lipid, and sphingolipid metabolism. Nomenclature. The systematic name is diacylglycerol-3-phosphate phosphohydrolase. Other names in common use include: Types. There are several different genes that code for phosphatidate phosphatases. 
They fall into one of two types (type I and type II), depending on their cellular localization and substrate specificity. Type I. Type I phosphatidate phosphatases are soluble enzymes that can associate with membranes. They are found mainly in the cytosol and the nucleus. Encoded by a group of genes named "Lipin", they are specific to phosphatidate as a substrate. They are speculated to be involved in the "de novo" synthesis of glycerolipids. Each of the 3 "Lipin" proteins found in mammals—"Lipin1, Lipin2," and "Lipin3"—has unique tissue expression motifs and distinct physiological functions. Regulation. Regulation of mammalian "Lipin" PAP enzymes occurs at the transcriptional level. For example, "Lipin1" is induced by glucocorticoids during adipocyte differentiation as well as in cells that are experiencing proliferation of the endoplasmic reticulum (ER). "Lipin2", on the other hand, is repressed during adipocyte differentiation. Lipin is phosphorylated in response to insulin in skeletal muscle and adipocytes, linking the physiologic action of insulin to fat cell differentiation. Lipin phosphorylation is inhibited by treatment with rapamycin, suggesting that mTOR controls signal transduction feeding into lipin and may partially explain the dyslipidemia resulting from rapamycin therapy. Type II. Type II phosphatidate phosphatases are transmembrane enzymes found mainly in the plasma membrane. They can dephosphorylate other substrates besides phosphatidate, and therefore are also known as lipid phosphate phosphatases. Their main role is in lipid signaling and in phospholipid head-group remodeling. One example of a type II phosphatidate phosphatase is PgpB (PDBe: 5jwy). PgpB is one of three integral membrane phosphatases in "Escherichia coli" that catalyze the dephosphorylation of phosphatidylglycerol phosphate (PGP) to phosphatidylglycerol (PG). The other two are PgpA and PgpC. While all three catalyze the reaction from PGP to PG, their amino acid sequences are dissimilar and it is predicted that their active sites open to different sides of the cytoplasmic membrane. PG accounts for approximately 20% of the total membrane lipid composition in the inner membrane of bacteria. PgpB is competitively inhibited by phosphatidylethanolamine (PE), a phospholipid formed from DAG; this is therefore an example of negative feedback regulation. The enzyme active site contains a catalytic triad, Asp-211, His-207, and His-163, that establishes a charge relay system. This catalytic triad is essential for the dephosphorylation of lysophosphatidic acid, phosphatidic acid, and sphingosine-1-phosphate, but is not essential in its entirety for the enzyme's native substrate, phosphatidylglycerol phosphate; His-207 alone is sufficient to hydrolyze PGP. In cartoon depictions of the PgpB structure, its six transmembrane alpha helices can be seen. Of the three PGP phosphatases discussed above, PgpB is the only one to have multiple transmembrane alpha helices. Genes. Human genes that encode phosphatidate phosphatases include: Pathology. "Lipin"-1 deficiency in mice results in lipodystrophy, insulin resistance, and neuropathy. In humans, variations in "Lipin"-1 expression levels can result in altered insulin sensitivity, hypertension, and risk for metabolic syndrome. Serious mutations in "Lipin"-2 lead to an inflammatory disorder in humans. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457196
14457220
Phosphatidylglycerophosphatase
The enzyme phosphatidylglycerophosphatase (EC 3.1.3.27) catalyzes the following reaction: phosphatidylglycerophosphate + H2O formula_0 phosphatidylglycerol + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is phosphatidylglycerophosphate phosphohydrolase. Other names in common use include phosphatidylglycerol phosphate phosphatase, phosphatidylglycerol phosphatase, and PGP phosphatase. It participates in glycerophospholipid metabolism. This is a family of proteins that acts as a mitochondrial phosphatase in cardiolipin biosynthesis. Cardiolipin is a unique dimeric phosphoglycerolipid predominantly present in mitochondrial membranes. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457220
14457244
Phosphatidylinositol-3,4-bisphosphate 4-phosphatase
The enzyme phosphatidylinositol-3,4-bisphosphate 4-phosphatase (EC 3.1.3.66) catalyzes the reaction 1-phosphatidyl-"myo"-inositol 3,4-bisphosphate + H2O formula_0 1-phosphatidyl-1-"myo"-inositol 3-phosphate + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is 1-phosphatidyl-1-"myo"-inositol-3,4-bisphosphate 4-phosphohydrolase. Other names in common use include inositol-3,4-bisphosphate 4-phosphatase, D-"myo"-inositol-3,4-bisphosphate 4-phosphohydrolase, phosphoinositide 4-phosphatase, inositol polyphosphate 4-phosphatase, and inositol polyphosphate 4-phosphatase type II. This enzyme participates in inositol phosphate metabolism and phosphatidylinositol signaling system. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457244
14457265
Phosphatidylinositol-3-phosphatase
The enzyme phosphatidylinositol-3-phosphatase (EC 3.1.3.64) catalyzes the reaction 1-phosphatidyl-1-"myo"-inositol 3-phosphate + H2O formula_0 1-phosphatidyl-1-"myo"-inositol + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is 1-phosphatidyl-1-"myo"-inositol-3-phosphate 3-phosphohydrolase. Other names in common use include inositol-1,3-bisphosphate 3-phosphatase, inositol 1,3-bisphosphate phosphatase, inositol-polyphosphate 3-phosphatase, D-"myo"-inositol-1,3-bisphosphate 3-phosphohydrolase, and phosphatidyl-3-phosphate 3-phosphohydrolase. This enzyme participates in inositol phosphate metabolism and phosphatidylinositol signaling system. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1LW3 and 1M7R. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457265
14457289
Phosphatidylinositol deacylase
The enzyme phosphatidylinositol deacylase (EC 3.1.1.52) catalyzes the reaction 1-phosphatidyl-D-"myo"-inositol + H2O formula_0 1-acylglycerophosphoinositol + a carboxylate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is 1-phosphatidyl-D-"myo"-inositol 2-acylhydrolase. Other names in common use include phosphatidylinositol phospholipase A2, and phospholipase A2. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457289
14457308
Phosphoenolpyruvate phosphatase
The enzyme phospho"enol"pyruvate phosphatase (EC 3.1.3.60) catalyzes the reaction phospho"enol"pyruvate + H2O formula_0 pyruvate + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is phospho"enol"pyruvate phosphohydrolase. This enzyme is also called PEP phosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457308
14457326
Phosphoglycerate phosphatase
The enzyme phosphoglycerate phosphatase (EC 3.1.3.20) catalyzes the reaction D-glycerate 2-phosphate + H2O formula_0 D-glycerate + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is D-glycerate-2-phosphate phosphohydrolase. Other names in common use include D-2-phosphoglycerate phosphatase, and glycerophosphate phosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457326
14457331
Lead–lead dating
Lead–lead dating is a method for dating geological samples, normally based on 'whole-rock' samples of material such as granite. For most dating requirements it has been superseded by uranium–lead dating (U–Pb dating), but in certain specialized situations (such as dating meteorites and determining the age of the Earth) it is more important than U–Pb dating. Decay equations for common Pb–Pb dating. There are three stable "daughter" Pb isotopes that result from the radioactive decay of uranium and thorium in nature: 206Pb, 207Pb, and 208Pb. 204Pb is the only non-radiogenic lead isotope and is therefore not one of the daughter isotopes. These daughter isotopes are the final decay products of the U and Th radioactive decay chains beginning from 238U, 235U and 232Th respectively. With the progress of time, the final decay products accumulate as the parent isotopes decay at a constant rate. This shifts the ratio of radiogenic Pb versus non-radiogenic 204Pb (207Pb/204Pb or 206Pb/204Pb) in favor of radiogenic 207Pb or 206Pb. This can be expressed by the following decay equations: formula_0 formula_1 where the subscripts P and I refer to present-day and initial Pb isotope ratios, λ235 and λ238 are the decay constants for 235U and 238U, and t is the age. The concept of common Pb–Pb dating (also referred to as whole-rock lead isotope dating) was deduced through mathematical manipulation of the above equations. It was established by dividing the first equation above by the second, under the assumption that the U/Pb system was undisturbed. The rearranged equation is: formula_2 where the factor of 137.88 is the present-day 238U/235U ratio. As is evident from the equation, the initial Pb isotope ratios and the age of the system are the two factors which determine the present-day Pb isotope composition. If the sample behaved as a closed system, then graphing the difference between the present and initial ratios of 207Pb/204Pb versus 206Pb/204Pb should produce a straight line. The distance a point moves along this line depends on the U/Pb ratio of the sample, whereas the slope of the line depends on the time since Earth's formation. This was first established by Nier et al. in 1941. The development of the Geochron database. The development of the Geochron database is mainly attributed to Clair Cameron Patterson's application of Pb–Pb dating to meteorites in 1956. The Pb ratios of three stony and two iron meteorites were measured. Dating the meteorites would then help Patterson determine not only their age but also the age of Earth's formation. By dating meteorites Patterson was directly dating the age of various planetesimals. Assuming the process of elemental differentiation is identical on Earth as it is on other planets, the cores of these planetesimals would be depleted of uranium and thorium, while the crust and mantle would contain higher U/Pb ratios. As planetesimals collided, various fragments were scattered and produced meteorites. Iron meteorites were identified as pieces of the core, while stony meteorites were segments of the mantle and crustal units of these various planetesimals. Samples of the Canyon Diablo iron meteorite (Meteor Crater, Arizona) were found to have the least radiogenic composition of any material in the solar system. The U/Pb ratio was so low that no radiogenic decay was detected in the isotopic composition. As illustrated in figure 1, this point defines the lower (left) end of the isochron. 
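A quick numerical check of the combined equation above can be done in a few lines of Python. This sketch is not part of the source article; it simply evaluates the right-hand side using the standard decay constants for 235U and 238U (about 9.8485e-10 and 1.55125e-10 per year, respectively) for an assumed age of 4.55 billion years:

import math

LAMBDA_235 = 9.8485e-10   # decay constant of 235U, per year
LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year

def isochron_slope(t_years, u238_u235=137.88):
    # Slope of the 207Pb/204Pb versus 206Pb/204Pb isochron for age t:
    # (1 / (238U/235U)) * (e^(lambda235 * t) - 1) / (e^(lambda238 * t) - 1)
    return math.expm1(LAMBDA_235 * t_years) / math.expm1(LAMBDA_238 * t_years) / u238_u235

print(isochron_slope(4.55e9))  # about 0.62

For t = 4.55 Byr the slope comes out near 0.62, consistent with the meteorite isochron discussed below.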
Therefore, troilite found in Canyon Diablo represents the primeval lead isotope composition of the solar system, dating back to 4.55 ± 0.07 Byr. Stony meteorites, however, exhibited very high 207Pb/204Pb versus 206Pb/204Pb ratios, indicating that these samples came from the crust or mantle of the planetesimal. Together, these samples define an isochron whose slope gives the age of the meteorites as 4.55 Byr. Patterson also analyzed terrestrial sediment collected from the ocean floor, which was believed to be representative of the Bulk Earth composition. Because the isotope composition of this sample plotted on the meteorite isochron, it suggested that Earth had the same age and origin as meteorites, therefore solving the age of the Earth and giving rise to the name 'geochron'. Lead isotope isochron diagram used by C. C. Patterson to determine the age of the Earth in 1956. The animation shows progressive growth over 4550 million years (Myr) of the lead isotope ratios for two stony meteorites (Nuevo Laredo and Forest City) from initial lead isotope ratios matching those of the Canyon Diablo iron meteorite. Precise Pb–Pb dating of meteorites. Chondrules and calcium–aluminium-rich inclusions (CAIs) are spherical particles that make up chondritic meteorites and are believed to be the oldest objects in the Solar System. Hence precise dating of these objects is important to constrain the early evolution of the Solar System and the age of the Earth. The U–Pb dating method can yield the most precise ages for early Solar System objects due to the optimal half-life of 238U. However, the absence of zircon or other uranium-rich minerals in chondrites, and the presence of initial non-radiogenic Pb (common Pb), rules out direct use of the U–Pb concordia method. Therefore, the most precise dating method for these meteorites is the Pb–Pb method, which allows a correction for common Pb. When the abundance of 204Pb is relatively low, this isotope has larger measurement errors than the other Pb isotopes, leading to a very strong correlation of errors between the measured ratios. This makes it difficult to determine the analytical uncertainty on the age. To avoid this problem, researchers developed an 'alternative Pb–Pb isochron diagram' (see figure) with reduced error correlation between the measured ratios. In this diagram the 204Pb/206Pb ratio (the reciprocal of the normal ratio) is plotted on the x-axis, so that a point on the y-axis (zero 204Pb/206Pb) would have infinitely radiogenic Pb. The ratio plotted on the y-axis is the 207Pb/206Pb ratio, corresponding to the slope of a normal Pb–Pb isochron, which yields the age. The most accurate ages are produced by samples near the y-axis, which was achieved by step-wise leaching and analysis of the samples. Previously, when applying the alternative Pb–Pb isochron diagram, the 238U/235U isotope ratio was assumed to be invariant among meteoritic material. However, it has been shown that 238U/235U ratios vary among meteoritic material. To accommodate this, U-corrected Pb–Pb dating analysis is used to generate ages for the oldest solid material in the Solar System, using a revised 238U/235U value of 137.786 ± 0.013 to represent the mean 238U/235U isotope ratio of bulk inner Solar System materials. U-corrected Pb–Pb dating has produced an age of 4567.35 ± 0.28 My for CAIs (A) and ages between 4567.32 ± 0.42 and 4564.71 ± 0.30 My for chondrules (B and C) (see figure). 
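The size of the U correction described above can be illustrated with a short Python sketch (again, not from the source article; the "measured" slope here is synthetic). It inverts the slope equation by bisection and compares the age obtained with the old 238U/235U value of 137.88 against the revised value of 137.786:

import math

LAMBDA_235 = 9.8485e-10   # per year
LAMBDA_238 = 1.55125e-10  # per year

def pb_slope(t_years, u238_u235):
    # Radiogenic 207Pb*/206Pb* ratio predicted for age t
    return math.expm1(LAMBDA_235 * t_years) / math.expm1(LAMBDA_238 * t_years) / u238_u235

def age_from_slope(slope, u238_u235, lo=1.0e9, hi=6.0e9):
    # pb_slope increases monotonically with t, so bisection converges
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pb_slope(mid, u238_u235) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

slope = pb_slope(4.5673e9, 137.88)  # synthetic "measured" ratio
shift = age_from_slope(slope, 137.88) - age_from_slope(slope, 137.786)
print(round(shift / 1.0e6, 2), "Myr")

The computed shift is on the order of 1 Myr, consistent with the revised ratio producing slightly younger ages for the oldest Solar System solids.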
This supports the idea that CAI crystallization and chondrule formation occurred around the same time during the formation of the solar system. However, chondrules continued to form for approximately 3 My after CAIs. Hence the best age for the original formation of the Solar System is 4567.7 My. This date also represents the time of initiation of planetary accretion. Successive collisions between accreted bodies led to the formation of larger and larger planetesimals, finally forming the Earth–Moon system in a giant impact event. The age difference between CAIs and chondrules measured in these studies verifies the chronology of the early Solar System derived from extinct short-lived nuclide methods such as 26Al–26Mg, thus improving our understanding of the development of the Solar System and the formation of the Earth. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " {\\left(\\frac\\ce{^{207}Pb}\\ce{^{204}Pb}\\right)_{P}} = {\\left(\\frac\\ce{^{207}Pb}\\ce{^{204}Pb}\\right)_{I}} + {\\left(\\frac\\ce{^{235}U}\\ce{^{204}Pb}\\right)_{P}} {\\left({e^{\\lambda_{235}t}-1}\\right)} " }, { "math_id": 1, "text": " {\\left(\\frac\\ce{^{206}Pb}\\ce{^{204}Pb}\\right)_{P}} = {\\left(\\frac\\ce{^{206}Pb}\\ce{^{204}Pb}\\right)_{I}} + {\\left(\\frac\\ce{^{238}U}\\ce{^{204}Pb}\\right)_{P}} {\\left({e^{\\lambda_{238}t}-1}\\right)} " }, { "math_id": 2, "text": " \\left[\\frac{\\left(\\frac\\ce{^{207}Pb}\\ce{^{204}Pb}\\right)_{P}-\\left(\\frac\\ce{^{207}Pb}\\ce{^{204}Pb}\\right)_{I}}{\\left(\\frac\\ce{^{206}Pb}\\ce{^{204}Pb}\\right)_{P}-\\left(\\frac\\ce{^{206}Pb}\\ce{^{204}Pb}\\right)_{I}}\\right]= {\\left(\\frac{1}{137.88}\\right)}{\\left(\\frac{e^{\\lambda_{235}t}-1}{e^{\\lambda_{238}t}-1}\\right)}" } ]
https://en.wikipedia.org/wiki?curid=14457331
14457374
Phosphoinositide 5-phosphatase
The enzyme phosphoinositide 5-phosphatase (EC 3.1.3.36) catalyzes the reaction 1-phosphatidyl-1-"myo"-inositol 4,5-bisphosphate + H2O formula_0 1-phosphatidyl-1-"myo"-inositol 4-phosphate + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is phosphatidyl-"myo"-inositol-4,5-bisphosphate 4-phosphohydrolase. Other names in common use include type II inositol polyphosphate 5-phosphatase, triphosphoinositide phosphatase, IP3 phosphatase, PtdIns(4,5)P2 phosphatase, triphosphoinositide phosphomonoesterase, diphosphoinositide phosphatase, inositol 1,4,5-triphosphate 5-phosphomonoesterase, inositol triphosphate 5-phosphomonoesterase, phosphatidylinositol-bisphosphatase, phosphatidyl-"myo"-inositol-4,5-bisphosphate phosphatase, phosphatidylinositol 4,5-bisphosphate phosphatase, polyphosphoinositol lipid 5-phosphatase, and phosphatidyl-inositol-bisphosphate phosphatase. This enzyme participates in inositol phosphate metabolism and phosphatidylinositol signaling system. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1UFW, 1W80, 2DNR, and 2QV2. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457374
14457412
(phosphorylase) phosphatase
The enzyme phosphorylase a phosphatase (EC 3.1.3.17) catalyzes the reaction [phosphorylase "a"] + 4 H2O formula_0 2 [phosphorylase "b"] + 4 phosphate It is synonymous with Protein phosphatase 1. This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is [phosphorylase "a"] phosphohydrolase. Other names in common use include PR-enzyme, phosphorylase "a" phosphatase, glycogen phosphorylase phosphatase, protein phosphatase C, and type 1 protein phosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457412
14457429
Phosphoserine phosphatase
The enzyme phosphoserine phosphatase (EC 3.1.3.3) catalyzes the reaction "O"-phospho-L(or D)-serine + H2O formula_0 L(or D)-serine + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is "O"-phosphoserine phosphohydrolase. This enzyme participates in glycine, serine and threonine metabolism. Structural studies. As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1F5S, 1J97, 1L7M, 1L7N, 1L7O, 1L7P, 1L8L, 1L8O, 1NNL, 2J6Y, 2J6Z, and 2J70. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457429
14457522
Polyneuridine-aldehyde esterase
The enzyme polyneuridine-aldehyde esterase (EC 3.1.1.78) catalyzes the following reaction: polyneuridine aldehyde + H2O formula_0 16-epivellosimine + CO2 + methanol This enzyme participates in indole and ipecac alkaloid biosynthesis. Nomenclature. This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is polyneuridine aldehyde hydrolase (decarboxylating). Other names in common use include: Homologues. This enzyme is found in various forms in plant species such as "Arabidopsis thaliana", "Glycine max" (soybean), "Vitis vinifera" (wine grape), and "Solanum lycopersicum" (tomato), among others. Polyneuridine-aldehyde esterase also appears in select bacteria, including "Enterobacter cloacae". Structure. The secondary structure of this enzyme consists mainly of α helices. In its native form, the enzyme has a tertiary structure with two main lobes. Reaction. Polyneuridine-aldehyde esterase catalyzes the hydrolysis of the methyl ester in polyneuridine aldehyde to form polyneuridine β-aldehydoacid and methanol. The carboxylic acid in the product spontaneously undergoes decarboxylation, yielding 16-epivellosimine and carbon dioxide. Mechanism. The mechanism of the hydrolysis performed by polyneuridine-aldehyde esterase is not known. It has been suggested that the enzyme utilizes a catalytic triad composed of Ser-87, Asp-216 and His-244. The order of these catalytic residues matches that found in enzymes of the α/β hydrolase family, so polyneuridine-aldehyde esterase may be a novel member of the α/β hydrolase group. Broader significance. This enzyme is part of the pathway of indole alkaloid biosynthesis. The indole alkaloids that result from this metabolic pathway are used by many plant species as a defense against herbivores and parasites. Open questions. The precise mechanism by which this enzyme performs its function is still unknown. As noted above, researchers have proposed how polyneuridine-aldehyde esterase catalyzes the decomposition of polyneuridine aldehyde, but a mechanism has not yet been confirmed. Because its precise mechanism is not fully understood, this enzyme cannot yet be definitively grouped into a mechanistic enzyme family; based on the proposed mechanism, parallels can be drawn between polyneuridine-aldehyde esterase and other enzymes. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457522
14457542
Polynucleotide 3'-phosphatase
The enzyme polynucleotide 3′-phosphatase (EC 3.1.3.32) catalyzes the reaction a 3′-phosphopolynucleotide + H2O formula_0 a polynucleotide + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is polynucleotide 3′-phosphohydrolase. Other names in common use include 2′(3′)-polynucleotidase, DNA 3′-phosphatase, deoxyribonucleate 3′-phosphatase, and 5′-polynucleotide kinase 3′-phosphatase. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1UJX and 2BRF. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457542
14457555
Polynucleotide 5'-phosphatase
The enzyme polynucleotide 5′-phosphatase (RNA 5′-triphosphatase, RTPase, EC 3.1.3.33) is an enzyme that catalyzes the reaction a 5′-phosphopolynucleotide + H2O formula_0 a polynucleotide + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is polynucleotide 5′-phosphohydrolase. This enzyme is also called 5′-polynucleotidase. The only specific molecular function known is the catalysis of the reaction: a 5′-end triphospho-(purine-ribonucleotide) in mRNA + H2O = a 5′-end diphospho-(purine-ribonucleoside) in mRNA + phosphate RTPases cleave the 5′-terminal γ-β phosphoanhydride bond of nascent messenger RNA molecules, enabling the addition of a five-prime cap as part of post-transcriptional modifications. RTPases generate 5′-diphosphate-ended mRNA and a phosphate ion from 5′-triphosphate-ended precursor mRNA. mRNA guanylyltransferase then adds a backwards guanosine monophosphate (GMP) group from GTP, generating pyrophosphate, and mRNA (guanine-N7-)-methyltransferase methylates the guanine to form the final 5′-cap structure. There are two families of RTPases known so far: Structural studies. As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1D8H, 1D8I, 1I9S, 1I9T, and 1YN9. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457555
14457574
Prenyl-diphosphatase
The enzyme prenyl-diphosphatase (EC 3.1.7.1) catalyzes the reaction prenyl diphosphate + H2O formula_0 prenol + diphosphate This enzyme belongs to the family of hydrolases, specifically those acting on diphosphoric monoester bonds. The systematic name is prenyl-diphosphate diphosphohydrolase. Other names in common use include prenyl-pyrophosphatase, prenol pyrophosphatase, and prenylphosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457574
14457589
Protein-glutamate methylesterase
The enzyme protein-glutamate methylesterase (EC 3.1.1.61) catalyzes the reaction protein L-glutamate "O"5-methyl ester + H2O formula_0 protein L-glutamate + methanol This enzyme is a demethylase, and more specifically it belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is protein-L-glutamate-"O"5-methyl-ester acylhydrolase. Other names in common use include chemotaxis-specific methylesterase, methyl-accepting chemotaxis protein methyl-esterase, CheB methylesterase, methylesterase CheB, protein methyl-esterase, protein carboxyl methylesterase, PME, protein methylesterase, and protein-L-glutamate-5-"O"-methyl-ester acylhydrolase. This enzyme participates in 3 metabolic pathways: two-component system - general, bacterial chemotaxis - general, and bacterial chemotaxis - organism-specific. CheB is part of a two-component signal transduction system. These systems enable bacteria to sense, respond, and adapt to a wide range of environments, stressors, and growth conditions. Two-component systems are composed of a sensor histidine kinase (HK) and its cognate response regulator (RR). The HK catalyzes its own autophosphorylation followed by the transfer of the phosphoryl group to the receiver domain on the RR; phosphorylation of the RR usually activates an attached output domain, in this case a methylesterase domain. CheB is involved in chemotaxis. CheB methylesterase is responsible for removing the methyl group from the gamma-glutamyl methyl ester residues in the methyl-accepting chemotaxis proteins (MCP). CheB is regulated through phosphorylation by CheA. The "N"-terminal region of the protein is similar to that of other regulatory components of sensory transduction systems. Structural studies. As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1A2O and 1CHD. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457589
14457607
Pyridoxal phosphatase
The enzyme pyridoxal phosphatase (EC 3.1.3.74) catalyzes the reaction pyridoxal 5′-phosphate + H2O formula_0 pyridoxal + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is pyridoxal-5′-phosphate phosphohydrolase. Other names in common use include vitamin B6 (pyridoxine) phosphatase, PLP phosphatase, vitamin B6-phosphate phosphatase, and PNP phosphatase. This enzyme participates in vitamin B6 metabolism. Structural studies. As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 2CFR, 2CFS, 2CFT, 2OYC, 2P27, and 2P69. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457607
14457627
(pyruvate dehydrogenase (acetyl-transferring))-phosphatase
The enzyme [pyruvate dehydrogenase (acetyl-transferring)]-phosphatase (EC 3.1.3.43) catalyzes the reaction [pyruvate dehydrogenase (acetyl-transferring)] phosphate + H2O formula_0 [pyruvate dehydrogenase (acetyl-transferring)] + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is [pyruvate dehydrogenase (acetyl-transferring)]-phosphate phosphohydrolase. Other names in common use include pyruvate dehydrogenase phosphatase, phosphopyruvate dehydrogenase phosphatase, [pyruvate dehydrogenase (lipoamide)]-phosphatase, and [pyruvate dehydrogenase (lipoamide)]-phosphate phosphohydrolase. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2PNQ. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457627
14457649
(pyruvate kinase)-phosphatase
The enzyme [pyruvate kinase]-phosphatase (EC 3.1.3.49) catalyzes the reaction [pyruvate kinase] phosphate + H2O formula_0 [pyruvate kinase] + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is [ATP:pyruvate 2-"O"-phosphotransferase]-phosphate phosphohydrolase. This enzyme is also called pyruvate kinase phosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457649
14457661
Retinyl-palmitate esterase
In enzymology, a retinyl-palmitate esterase (EC 3.1.1.21) is an enzyme that catalyzes the chemical reaction retinyl palmitate + H2O formula_0 retinol + palmitate Thus, the two substrates of this enzyme are retinyl palmitate and H2O, whereas its two products are retinol and palmitate. This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name of this enzyme class is retinyl-palmitate palmitohydrolase. Other names in common use include retinyl palmitate hydrolase, retinyl palmitate hydrolyase, and retinyl ester hydrolase. This enzyme participates in retinol metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457661
14457697
Serine-ethanolaminephosphate phosphodiesterase
The enzyme serine-ethanolaminephosphate phosphodiesterase (EC 3.1.4.13) catalyzes the reaction serine phosphoethanolamine + H2O formula_0 serine + ethanolamine phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric diester bonds. The systematic name is serine-phosphoethanolamine ethanolaminephosphohydrolase. Other names in common use include serine ethanolamine phosphodiester phosphodiesterase, and SEP diesterase. This enzyme participates in glycerophospholipid metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457697
14457705
S-formylglutathione hydrolase
The enzyme "S"-formylglutathione hydrolase (EC 3.1.2.12) catalyzes the reaction S-formylglutathione + H2O formula_0 glutathione + formate This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name is "S"-formylglutathione hydrolase. It participates in Methane Metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457705
14457732
Sinapine esterase
The enzyme sinapine esterase (EC 3.1.1.49) catalyzes the reaction sinapoylcholine + H2O formula_0 sinapate + choline This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name of this enzyme class is sinapoylcholine sinapohydrolase. This enzyme is also called aromatic choline esterase. This enzyme participates in phenylpropanoid biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457732
14457742
(S)-methylmalonyl-CoA hydrolase
Class of enzymes The enzyme ("S")-methylmalonyl-CoA hydrolase (EC 3.1.2.17) catalyzes the reaction ("S")-methylmalonyl-CoA + H2O formula_0 methylmalonate + CoA This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name of this enzyme class is ("S")-methylmalonyl-CoA hydrolase. This enzyme is also called -methylmalonyl-coenzyme A hydrolase. This enzyme participates in propanoate metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457742
14457757
Sorbitol-6-phosphatase
The enzyme sorbitol-6-phosphatase (EC 3.1.3.50) catalyzes the reaction sorbitol 6-phosphate + H2O formula_0 sorbitol + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is sorbitol-6-phosphate phosphohydrolase. This enzyme is also called sorbitol-6-phosphate phosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457757
14457768
S-succinylglutathione hydrolase
The enzyme "S"-succinylglutathione hydrolase (EC 3.1.2.13) catalyzes the reaction "S"-succinylglutathione + H2O formula_0 glutathione + succinate This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name is "S"-succinylglutathione hydrolase. References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457768
14457779
Steroid-lactonase
The enzyme steroid-lactonase (EC 3.1.1.37) catalyzes the reaction testololactone + H2O formula_0 testolate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is testololactone lactonohydrolase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457779
14457794
Sterol esterase
The enzyme sterol esterase (EC 3.1.1.13) catalyzes the reaction a sterol ester + H2O formula_0 a sterol + a fatty acid This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is steryl-ester acylhydrolase. Other names in common use include cholesterol esterase, cholesteryl ester synthase, triterpenol esterase, cholesteryl esterase, cholesteryl ester hydrolase, sterol ester hydrolase, cholesterol ester hydrolase, cholesterase, and acylcholesterol lipase. This enzyme participates in bile acid biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457794
14457810
Streptomycin-6-phosphatase
The enzyme streptomycin-6-phosphatase (EC 3.1.3.39) catalyzes the reaction streptomycin 6-phosphate + H2O formula_0 streptomycin + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is streptomycin-6-phosphate phosphohydrolase. Other names in common use include streptomycin 6-phosphate phosphatase, streptomycin 6-phosphate phosphohydrolase, and streptomycin-6-"P" phosphohydrolase. This enzyme participates in streptomycin biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457810
14457829
Succinyl-CoA hydrolase
The enzyme succinyl-CoA hydrolase (EC 3.1.2.3) catalyzes the reaction succinyl-CoA + H2O formula_0 CoA + succinate This enzyme belongs to the family of hydrolases, specifically those acting on thioester bonds. The systematic name is succinyl-CoA hydrolase. Other names in common use include succinyl-CoA acylase, succinyl coenzyme A hydrolase, and succinyl coenzyme A deacylase. This enzyme participates in the tricarboxylic acid cycle. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457829
14457847
Sucrose-phosphatase
The enzyme sucrose-phosphatase (EC 3.1.3.24) catalyzes the reaction sucrose 6"F"-phosphate + H2O formula_0 sucrose + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is sucrose-6"F"-phosphate phosphohydrolase. Other names in common use include sucrose 6-phosphate hydrolase, sucrose-phosphate hydrolase, sucrose-phosphate phosphohydrolase, and sucrose-6-phosphatase. This enzyme participates in starch and sucrose metabolism. Structural studies. As of late 2007, 9 structures have been solved for this class of enzymes, with PDB accession codes 1S2O, 1TJ3, 1TJ4, 1TJ5, 1U2S, 1U2T, 2B1Q, 2B1R, and 2D2V. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457847
14457863
Sugar-phosphatase
The enzyme sugar-phosphatase (EC 3.1.3.23) catalyzes the reaction sugar phosphate + H2O formula_0 sugar + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is sugar-phosphate phosphohydrolase. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2HF2. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457863
14457880
Sugar-terminal-phosphatase
The enzyme sugar-terminal-phosphatase (EC 3.1.3.58) catalyzes the chemical reaction D-glucose 6-phosphate + H2O formula_0 D-glucose + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is sugar-ω-phosphate phosphohydrolase. This enzyme is also called xylitol-5-phosphatase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457880
14457924
Thymidylate 5'-phosphatase
The enzyme thymidylate 5′-phosphatase (EC 3.1.3.35) catalyzes the reaction thymidylate + H2O formula_0 thymidine + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is thymidylate 5′-phosphohydrolase. Other names in common use include thymidylate 5′-nucleotidase, deoxythymidylate 5′-nucleotidase, thymidylate nucleotidase, deoxythymidylic 5′-nucleotidase, deoxythymidylate phosphohydrolase, and dTMPase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457924
14457942
Trehalose-phosphatase
The enzyme trehalose-phosphatase (EC 3.1.3.12) catalyzes the reaction α,α-trehalose 6-phosphate + H2O formula_0 α,α-trehalose + phosphate This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is α,α-trehalose-6-phosphate phosphohydrolase. Other names in common use include trehalose 6-phosphatase, trehalose 6-phosphate phosphatase, and trehalose-6-phosphate phosphohydrolase. This enzyme participates in starch and sucrose metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457942
14457962
Triacetate-lactonase
The enzyme triacetate-lactonase (EC 3.1.1.38) catalyzes the reaction triacetate lactone + H2O formula_0 triacetate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is triacetolactone lactonohydrolase. Other names in common use include triacetic lactone hydrolase, triacetic acid lactone hydrolase, TAL hydrolase, and triacetate lactone hydrolase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457962
14457980
Tropinesterase
The enzyme tropinesterase (EC 3.1.1.10) catalyzes the reaction atropine + H2O formula_0 tropine + tropate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is atropine acylhydrolase. Other names in common use include tropine esterase, atropinase, and atropine esterase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14457980
14458
Hail
Hail is a form of solid precipitation. It is distinct from ice pellets (American English "sleet"), though the two are often confused. It consists of balls or irregular lumps of ice, each of which is called a hailstone. Ice pellets generally fall in cold weather, while hail growth is greatly inhibited during low surface temperatures. Unlike other forms of water ice precipitation, such as graupel (which is made of rime ice), ice pellets (which are smaller and translucent), and snow (which consists of tiny, delicately crystalline flakes or needles), hailstones usually measure between and in diameter. The METAR reporting code for hail or greater is GR, while smaller hailstones and graupel are coded GS. Hail is possible within most thunderstorms (as it is produced by cumulonimbus), as well as within of the parent storm. Hail formation requires environments of strong, upward motion of air within the parent thunderstorm (similar to tornadoes) and lowered heights of the freezing level. In the mid-latitudes, hail forms near the interiors of continents, while, in the tropics, it tends to be confined to high elevations. There are methods available to detect hail-producing thunderstorms using weather satellites and weather radar imagery. Hailstones generally fall at higher speeds as they grow in size, though complicating factors such as melting, friction with air, wind, and interaction with rain and other hailstones can slow their descent through Earth's atmosphere. Severe weather warnings are issued for hail when the stones reach a damaging size, as hail can cause serious damage to human-made structures and, most commonly, farmers' crops. Definition. Any thunderstorm which produces hail that reaches the ground is known as a hailstorm. An ice crystal with a diameter of > is considered a hailstone. Hailstones can grow to and weigh more than . Unlike ice pellets, hailstones are often layered and can be irregular and clumped together. Hail is composed of transparent ice or alternating layers of transparent and translucent ice at least thick, which are deposited upon the hailstone as it travels through the cloud, suspended aloft by air with strong upward motion until its weight overcomes the updraft and it falls to the ground. Although the diameter of hail is varied, in the United States, the average observation of damaging hail is between and golf-ball-sized . Stones larger than are usually considered large enough to cause damage. The Meteorological Service of Canada issues severe thunderstorm warnings when hail that size or above is expected. The US National Weather Service has a diameter threshold, effective January 2010, an increase over the previous threshold of hail. Other countries have different thresholds according to local sensitivity to hail; for instance, grape-growing areas could be adversely impacted by smaller hailstones. Hailstones can be very large or very small, depending on how strong the updraft is: weaker hailstorms produce smaller hailstones than stronger hailstorms (such as supercells), as the more powerful updrafts in a stronger storm can keep larger hailstones aloft. Formation. Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid-water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing (). These types of strong updrafts can also indicate the presence of a tornado. 
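The claim above that larger hailstones fall faster can be illustrated with a simple drag balance. The following Python sketch is not from the source article: it treats a hailstone as a smooth sphere and uses illustrative values for the drag coefficient and densities, ignoring the melting, friction, wind, and collision effects mentioned above:

import math

def terminal_velocity(d_m, drag_coeff=0.6, rho_ice=900.0, rho_air=1.2, g=9.81):
    # Speed (m/s) at which drag on a falling sphere balances its weight:
    # v = sqrt(4 * g * d * (rho_ice - rho_air) / (3 * Cd * rho_air))
    return math.sqrt(4.0 * g * d_m * (rho_ice - rho_air) / (3.0 * drag_coeff * rho_air))

for d_cm in (1, 3, 5, 8):
    print(f"{d_cm} cm hailstone: ~{terminal_velocity(d_cm / 100.0):.0f} m/s")

Under these assumptions the fall speed scales with the square root of the diameter, roughly 13 m/s for a 1 cm stone and 36 m/s for an 8 cm stone.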
The growth rate of hailstones is impacted by factors such as higher elevation, lower freezing zones, and wind shear. Layer nature of the hailstones. Like other precipitation in cumulonimbus clouds, hail begins as water droplets. As the droplets rise and the temperature goes below freezing, they become supercooled water and will freeze on contact with condensation nuclei. A cross-section through a large hailstone shows an onion-like structure. This means that the hailstone is made of thick and translucent layers, alternating with layers that are thin, white and opaque. Former theory suggested that hailstones were subjected to multiple descents and ascents, falling into a zone of humidity and refreezing as they were uplifted. This up and down motion was thought to be responsible for the successive layers of the hailstone. New research, based on theory as well as field study, has shown this is not necessarily true. The storm's updraft, with upwardly directed wind speeds as high as , blows the forming hailstones up the cloud. As the hailstone ascends, it passes into areas of the cloud where the concentration of humidity and supercooled water droplets varies. The hailstone's growth rate changes depending on the variation in humidity and supercooled water droplets that it encounters. The accretion rate of these water droplets is another factor in the hailstone's growth. When the hailstone moves into an area with a high concentration of water droplets, it captures the latter and acquires a translucent layer. Should the hailstone move into an area where mostly water vapor is available, it acquires a layer of opaque white ice. Furthermore, the hailstone's speed depends on its position in the cloud's updraft and its mass. This determines the varying thicknesses of the layers of the hailstone. The accretion rate of supercooled water droplets onto the hailstone depends on the relative velocities between these water droplets and the hailstone itself. This means that generally the larger hailstones will form some distance from the stronger updraft, where they can pass more time growing. As the hailstone grows, it releases latent heat, which keeps its exterior in a liquid phase. Because it undergoes "wet growth", the outer layer is "sticky" (i.e. more adhesive), so a single hailstone may grow by collision with other smaller hailstones, forming a larger entity with an irregular shape. Hail can also undergo "dry growth", in which the latent heat release through freezing is not enough to keep the outer layer in a liquid state. Hail forming in this manner appears opaque due to small air bubbles that become trapped in the stone during rapid freezing. These bubbles coalesce and escape during the "wet growth" mode, and the hailstone is more clear. The mode of growth for a hailstone can change throughout its development, and this can result in distinct layers in a hailstone's cross-section. The hailstone will keep rising in the thunderstorm until its mass can no longer be supported by the updraft. This may take at least 30 minutes, based on the force of the updrafts in the hail-producing thunderstorm, whose top is usually greater than 10 km high. It then falls toward the ground while continuing to grow, based on the same processes, until it leaves the cloud. It will later begin to melt as it passes into air above freezing temperature. Thus, a unique trajectory in the thunderstorm is sufficient to explain the layer-like structure of the hailstone. 
The only case in which multiple trajectories can be discussed is in a multicellular thunderstorm, where the hailstone may be ejected from the top of the "mother" cell and captured in the updraft of a more intense "daughter" cell. This, however, is an exceptional case. Factors favoring hail. Hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Movement of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater altitude. Hail in the tropics occurs mainly at higher elevations. Hail growth becomes vanishingly small when air temperatures fall below , as supercooled water droplets become rare at these temperatures. Around thunderstorms, hail is most likely within the cloud at elevations above . Between and , 60% of hail is still within the thunderstorm, though 40% now lies within the clear air under the anvil. Below , hail is equally distributed in and around a thunderstorm to a distance of . Climatology. Hail occurs most frequently within continental interiors at mid-latitudes and is less common in the tropics, despite a much higher frequency of thunderstorms than in the mid-latitudes. Hail is also much more common along mountain ranges because mountains force horizontal winds upwards (known as orographic lifting), thereby intensifying the updrafts within thunderstorms and making hail more likely. The higher elevations also result in there being less time available for hail to melt before reaching the ground. One of the more common regions for large hail is across mountainous northern India, which reported one of the highest hail-related death tolls on record in 1888. China also experiences significant hailstorms. Central Europe and southern Australia also experience numerous hailstorms. Regions where hailstorms frequently occur are southern and western Germany, northern and eastern France, southern and eastern Benelux, and northern Italy. In southeastern Europe, Croatia and Serbia experience frequent occurrences of hail. Some Mediterranean countries register their maximum frequency of hail during the fall season. In North America, hail is most common in the area where Colorado, Nebraska, and Wyoming meet, known as "Hail Alley". Hail in this region occurs between the months of March and October during the afternoon and evening hours, with the bulk of the occurrences from May through September. Cheyenne, Wyoming is North America's most hail-prone city with an average of nine to ten hailstorms per season. To the north of this area and also just downwind of the Rocky Mountains is the Hailstorm Alley region of Alberta, which also experiences an increased incidence of significant hail events. Hailstorms are also common in several regions of South America, particularly in the temperate latitudes. The central region of Argentina, extending from the Mendoza region eastward towards Córdoba, experiences some of the most frequent hailstorms in the world, with 10–30 storms per year on average.
The Patagonia region of southern Argentina also sees frequent hailstorms, though this may be partially due to graupel (small hail) being counted as hail in this colder region. The triple border region between the Brazilian states of Paraná and Santa Catarina and neighboring Argentina, in southern Brazil, is another area known for damaging hailstorms. Hailstorms are also common in parts of Paraguay, Uruguay, and Bolivia that border the high-frequency hail regions of northern Argentina. The high frequency of hailstorms in these areas of South America is attributed to the region's orographic forcing of convection, combined with moisture transport from the Amazon and instability created by temperature contrasts between the surface and upper atmosphere. In Colombia, the cities of Bogotá and Medellín also see frequent hailstorms due to their high elevation. Southern Chile also sees persistent hail from mid-April through October. Short-term detection. Weather radar is a very useful tool to detect the presence of hail-producing thunderstorms. However, radar data has to be complemented by a knowledge of current atmospheric conditions which can allow one to determine if the current atmosphere is conducive to hail development. Modern radar scans many angles around the site. Reflectivity values at multiple angles above ground level in a storm are proportional to the precipitation rate at those levels. Summing reflectivities through the depth of the storm gives the vertically integrated liquid (VIL), an estimate of the liquid water content in the cloud. Research shows that hail development in the upper levels of the storm is related to the evolution of VIL. VIL divided by the vertical extent of the storm, called VIL density, has a relationship with hail size, although this varies with atmospheric conditions and therefore is not highly accurate. Traditionally, hail size and probability can be estimated from radar data by computer using algorithms based on this research. Some algorithms include the height of the freezing level to estimate the melting of the hailstone and what would be left on the ground. Certain patterns of reflectivity are important clues for the meteorologist as well. The three body scatter spike is an example. This is the result of energy from the radar hitting hail and being deflected to the ground, where it is deflected back to the hail and then to the radar. Because this energy takes more time to travel from the hail to the ground and back than energy returning directly from the hail to the radar, the echo appears farther from the radar than the actual location of the hail on the same radial path, forming a cone of weaker reflectivities. More recently, the polarization properties of weather radar returns have been analyzed to differentiate between hail and heavy rain. The use of differential reflectivity (formula_0), in combination with horizontal reflectivity (formula_1), has led to a variety of hail classification algorithms. Visible satellite imagery is beginning to be used to detect hail, but false alarm rates remain high using this method.
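As a rough illustration of the VIL and VIL-density quantities described above, the sketch below integrates a single reflectivity profile using the commonly cited Greene–Clark relation. The profile values, layer spacing, and the use of the echo top as the storm depth are illustrative assumptions; operational radar algorithms apply far more quality control.

```python
import numpy as np

def vil_from_profile(dbz, heights_m):
    """Vertically integrated liquid (kg/m^2) from one reflectivity profile.

    Sketch of the Greene-Clark integration:
    VIL = sum of 3.44e-6 * ((Z_i + Z_{i+1}) / 2)^(4/7) * dh,
    with Z in mm^6/m^3. dbz and heights_m are aligned, bottom to top.
    """
    z_lin = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> linear Z
    z_mid = 0.5 * (z_lin[:-1] + z_lin[1:])                 # layer means
    dh = np.diff(np.asarray(heights_m, dtype=float))       # layer depths (m)
    return float(np.sum(3.44e-6 * z_mid ** (4.0 / 7.0) * dh))

# Toy profile with a strong mid-level core, as in a hail-bearing storm.
dbz = [45, 55, 60, 58, 50, 35]
heights = [1000, 3000, 5000, 7000, 9000, 11000]
vil = vil_from_profile(dbz, heights)
vil_density = 1000.0 * vil / heights[-1]  # g/m^3, taking echo top as depth
print(f"VIL = {vil:.1f} kg/m^2, VIL density = {vil_density:.2f} g/m^3")
```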
Size and terminal velocity. The size of hailstones is best determined by measuring their diameter with a ruler. In the absence of a ruler, hailstone size is often visually estimated by comparing its size to that of known objects, such as coins. Using objects such as hen's eggs, peas, and marbles for comparing hailstone sizes is imprecise, due to their varied dimensions. The UK organisation TORRO also publishes scales for both hailstones and hailstorms. When observed at an airport, METAR code is used within a surface weather observation which relates to the size of the hailstone. Within METAR code, GR is used to indicate larger hail, of a diameter of at least . GR is derived from the French word "grêle". Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word "grésil". Terminal velocity of hail, or the speed at which hail is falling when it strikes the ground, varies. It is estimated that a hailstone of in diameter falls at a rate of , while stones the size of in diameter fall at a rate of . Hailstone velocity is dependent on the size of the stone, its drag coefficient, the motion of wind it is falling through, collisions with raindrops or other hailstones, and melting as the stones fall through a warmer atmosphere. As hailstones are not perfect spheres, it is difficult to accurately calculate their drag coefficient, and thus their speed.
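The size dependence of fall speed can be sketched with the standard terminal-velocity balance between weight and aerodynamic drag. The densities and drag coefficient below are rough assumptions for a smooth ice sphere, so real hailstones (irregular, melting, and buffeted by wind) will deviate from these numbers.

```python
import math

def hail_terminal_velocity(d_m, cd=0.6, rho_ice=900.0, rho_air=1.2):
    """Terminal speed (m/s) of an ice sphere of diameter d_m metres.

    From weight = drag:
    (pi/6) d^3 rho_ice g = 0.5 rho_air cd (pi/4) d^2 v^2
    => v = sqrt(4 g d rho_ice / (3 rho_air cd)).
    """
    g = 9.81
    return math.sqrt(4.0 * g * d_m * rho_ice / (3.0 * rho_air * cd))

for d_cm in (1.0, 2.5, 5.0, 8.0):
    v = hail_terminal_velocity(d_cm / 100.0)
    print(f"{d_cm:4.1f} cm -> ~{v:4.1f} m/s ({v * 3.6:5.1f} km/h)")
```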
Size comparisons to objects. In the United States, the National Weather Service reports hail size as a comparison to everyday objects. Hailstones larger than 1 inch in diameter are denoted as "severe." Hail records. Megacryometeors, large rocks of ice that are not associated with thunderstorms, are not officially recognized by the World Meteorological Organization as "hail", which are aggregations of ice associated with thunderstorms, and therefore records of extreme characteristics of megacryometeors are not given as hail records. Hazards. Hail can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and most commonly, crops. Hail damage to roofs often goes unnoticed until further structural damage is seen, such as leaks or cracks. It is hardest to recognize hail damage on shingled roofs and flat roofs, but all roofs have their own hail damage detection problems. Metal roofs are fairly resistant to hail damage, but may accumulate cosmetic damage in the form of dents and damaged coatings. Hail is one of the most significant thunderstorm hazards to aircraft. When hailstones exceed in diameter, planes can be seriously damaged within seconds. The hailstones accumulating on the ground can also be hazardous to landing aircraft. Hail is a common nuisance to drivers of automobiles, severely denting the vehicle and cracking or even shattering windshields and windows unless parked in a garage or covered with a shielding material. Wheat, corn, soybeans, and tobacco are the most sensitive crops to hail damage. Hail is one of Canada's most expensive hazards. Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest known incidents occurred around the 9th century in Roopkund, Uttarakhand, India, where 200 to 600 nomads seem to have died of injuries from hail the size of cricket balls. Accumulations. Narrow zones where hail accumulates on the ground in association with thunderstorm activity are known as hail streaks or hail swaths, which can be detectable by satellite after the storms pass by. Hailstorms normally last from a few minutes up to 15 minutes. Accumulating hail storms can blanket the ground with over of hail, cause thousands to lose power, and bring down many trees. Flash flooding and mudslides within areas of steep terrain can be a concern with accumulating hail. Depths of up to have been reported. A landscape covered in accumulated hail generally resembles one covered in accumulated snow, and any significant accumulation of hail has the same restrictive effects as snow accumulation, albeit over a smaller area, on transport and infrastructure. Accumulated hail can also cause flooding by blocking drains, and hail can be carried in the floodwater, turning into a snow-like slush which is deposited at lower elevations. On somewhat rare occasions, a thunderstorm can become stationary or nearly so while prolifically producing hail, and significant depths of accumulation do occur; this tends to happen in mountainous areas, such as the July 29, 2010 case of a foot of hail accumulation in Boulder County, Colorado. On June 5, 2015, hail up to four feet deep fell on one city block in Denver, Colorado. The hailstones, described as between the size of bumble bees and ping pong balls, were accompanied by rain and high winds. The hail fell in only that one area, leaving the surrounding area untouched. It fell for one and a half hours between 10:00 pm and 11:30 pm. A meteorologist for the National Weather Service in Boulder said, "It's a very interesting phenomenon. We saw the storm stall. It produced copious amounts of hail in one small area. It's a meteorological thing." Tractors used to clear the area filled more than 30 dump truck loads of hail. Research focused on four individual days that accumulated more than of hail in 30 minutes on the Colorado Front Range has shown that these events share similar patterns in observed synoptic weather, radar, and lightning characteristics, suggesting the possibility of predicting these events prior to their occurrence. A fundamental problem in continuing research in this area is that, unlike hail diameter, hail depth is not commonly reported. The lack of data leaves researchers and forecasters in the dark when trying to verify operational methods. A cooperative effort between the University of Colorado and the National Weather Service is in progress. The joint project's goal is to enlist the help of the general public to develop a database of hail accumulation depths. Suppression and prevention. During the Middle Ages, people in Europe used to ring church bells and fire cannons to try to prevent hail, and the subsequent damage to crops. Updated versions of this approach are available as modern hail cannons. Cloud seeding after World War II was done to eliminate the hail threat, particularly across the Soviet Union, where it was claimed that a 70–98% reduction in crop damage from hail storms was achieved by deploying silver iodide in clouds using rockets and artillery shells. However, these effects have not been replicated in randomized trials conducted in the West. Hail suppression programs have been undertaken by 15 countries between 1965 and 2005. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "Z_{dr}" }, { "math_id": 1, "text": "Z_{h}" } ]
https://en.wikipedia.org/wiki?curid=14458
14458002
Uronolactonase
Class of enzymes The enzyme uronolactonase (EC 3.1.1.19) catalyzes the reaction D-glucurono-6,2-lactone + H2O formula_0 D-glucuronate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is D-glucurono-6,2-lactone lactonohydrolase. It is also called glucuronolactonase. It participates in ascorbate and aldarate metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14458002
14458019
Wax-ester hydrolase
The enzyme wax-ester hydrolase (EC 3.1.1.50) catalyzes the reaction a wax ester + H2O formula_0 a long-chain alcohol + a long-chain carboxylate Thus, the two substrates of this enzyme are wax ester and H2O, whereas its two products are long-chain alcohol and long-chain carboxylate. This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name of this enzyme class is wax-ester acylhydrolase. Other names in common use include jojoba wax esterase, and WEH. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14458019
14458034
Xylono-1,4-lactonase
The enzyme xylono-1,4-lactonase (EC 3.1.1.68) catalyzes the reaction D-xylono-1,4-lactone + H2O formula_0 D-xylonate This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name of this enzyme class is D-xylono-1,4-lactone lactonohydrolase. Other names in common use include xylono-γ-lactonase and xylonolactonase. This enzyme participates in pentose and glucuronate interconversions. References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14458034
14459043
Influence line
In engineering, an influence line graphs the variation of a function (such as the shear or moment felt in a structural member) at a specific point on a beam or truss caused by a unit load placed at any point along the structure. Common functions studied with influence lines include reactions (forces that the structure's supports must apply for the structure to remain static), shear, moment, and deflection (deformation). Influence lines are important in designing beams and trusses used in bridges, crane rails, conveyor belts, floor girders, and other structures where loads will move along their span. The influence lines show where a load will create the maximum effect for any of the functions studied. Influence lines are both scalar and additive. This means that they can be used even when the load that will be applied is not a unit load or if there are multiple loads applied. To find the effect of any non-unit load on a structure, the ordinate results obtained by the influence line are multiplied by the magnitude of the actual load to be applied. The entire influence line can be scaled, or just the maximum and minimum effects experienced along the line. The scaled maximum and minimum are the critical magnitudes that must be designed for in the beam or truss. In cases where multiple loads may be in effect, influence lines for the individual loads may be added together to obtain the total effect felt by the structure at a given point. When adding the influence lines together, it is necessary to include the appropriate offsets due to the spacing of loads across the structure. For example, suppose a truck load is applied to the structure, with rear axle B three feet behind front axle A; then the effect of A at "x" feet along the structure must be added to the effect of B at ("x" – 3) feet along the structure, not the effect of B at "x" feet along the structure. Many loads are distributed rather than concentrated. Influence lines can be used with either concentrated or distributed loadings. For a concentrated (or point) load, a unit point load is moved along the structure. For a distributed load of a given width, a unit-distributed load of the same width is moved along the structure, noting that as the load nears the ends and moves off the structure only part of the total load is carried by the structure. The effect of the distributed unit load can also be obtained by integrating the point load's influence line over the corresponding length of the structure. When the restraint corresponding to the studied function is released, as in the Müller-Breslau principle described below, a determinate structure becomes a mechanism, whereas an indeterminate structure merely becomes determinate. Demonstration from Betti's theorem. Influence lines are based on Betti's theorem. From there, consider two external force systems, formula_0 and formula_1, each one associated with a displacement field whose displacements measured at the forces' points of application are represented by formula_2 and formula_3. Consider that the formula_0 system represents actual forces applied to the structure, which are in equilibrium. Consider that the formula_1 system is formed by a single force, formula_4. The displacement field formula_3 associated with this force is defined by releasing the structural restraints acting on the point where formula_4 is applied and imposing a relative unit displacement that is kinematically admissible in the negative direction, represented as formula_5. From Betti's theorem, we obtain the following result: formula_6 Concept. 
When designing a beam or truss, it is necessary to design for the scenarios causing the maximum expected reactions, shears, and moments within the structural members to ensure that no member fails during the life of the structure. When dealing with dead loads (loads that never move, such as the weight of the structure itself), this is relatively easy because the loads are easy to predict and plan for. For live loads (any load that moves during the life of the structure, such as furniture and people), it becomes much harder to predict where the loads will be or how concentrated or distributed they will be throughout the life of the structure. Influence lines graph the response of a beam or truss as a unit load travels across it. The influence line helps designers find where to place a live load in order to calculate the maximum resulting response for each of the following functions: reaction, shear, or moment. The designer can then scale the influence line by the greatest expected load to calculate the maximum response of each function for which the beam or truss must be designed. Influence lines can also be used to find the responses of other functions (such as deflection or axial force) to the applied unit load, but these uses of influence lines are less common. Methods for constructing influence lines. There are three methods used for constructing the influence line. The first is to tabulate the influence values for multiple points along the structure, then use those points to create the influence line. The second is to determine the influence-line equations that apply to the structure, thereby solving for all points along the influence line in terms of "x", where "x" is the number of feet from the start of the structure to the point where the unit load is applied. The third method is called Müller-Breslau's principle. It creates a qualitative influence line. This influence line will still provide the designer with an accurate idea of where the unit load will produce the largest response of a function at the point being studied, but it cannot be used directly to calculate the magnitude of that response, whereas the influence lines produced by the first two methods can. Tabulate values. To tabulate the influence values with respect to some point A on the structure, a unit load must be placed at various points along the structure. Statics is used to calculate what the value of the function (reaction, shear, or moment) is at point A. Typically an upwards reaction is seen as positive. Shear and moments are given positive or negative values according to the same conventions used for shear and moment diagrams. R. C. Hibbeler states, in his book "Structural Analysis", “All statically determinate beams will have influence lines that consist of straight line segments.” Therefore, it is possible to minimize the number of computations by recognizing the points that will cause a change in the slope of the influence line and only calculating the values at those points. The slope of the influence line can change at supports, mid-spans, and joints. An influence line for a given function, such as a reaction, axial force, shear force, or bending moment, is a graph that shows the variation of that function at any given point on a structure due to the application of a unit load at any point on the structure. An influence line for a function differs from a shear, axial, or bending moment diagram. 
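As a concrete sketch of the tabulation and influence-line-equation methods, the following computes influence values for a simply supported beam (a standard textbook case; the span, section location, and load positions are illustrative assumptions, and signs follow the usual beam conventions). The prose that follows describes the same generation procedure in words.

```python
L = 10.0  # span; illustrative
a = 4.0   # section where shear and moment influence values are evaluated

def reaction_A(x):
    """Influence of a unit load at position x on the left-support reaction."""
    return 1.0 - x / L

def shear_at_a(x):
    """Influence on the shear at section a (discontinuous at x = a)."""
    return -x / L if x < a else 1.0 - x / L

def moment_at_a(x):
    """Influence on the bending moment at section a (peaks at x = a)."""
    return x * (L - a) / L if x <= a else a * (L - x) / L

# Tabulate at the supports, the section, and a few intermediate points.
for x in [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]:
    print(f"x={x:5.1f}  R_A={reaction_A(x):+.2f}  "
          f"V_a={shear_at_a(x):+.2f}  M_a={moment_at_a(x):+.2f}")
```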
Influence lines can be generated by independently applying a unit load at several points on a structure and determining the value of the function due to this load, i.e. the shear, axial force, or moment at the desired location. The calculated values for each function are then plotted where the load was applied and then connected together to generate the influence line for the function. Once the influence values have been tabulated, the influence line for the function at point A can be drawn in terms of "x". First, the tabulated values must be plotted. For the sections in between the tabulated points, interpolation is required. Therefore, straight lines may be drawn to connect the points. Once this is done, the influence line is complete. Influence-line equations. It is possible to create equations defining the influence line across the entire span of a structure. This is done by solving for the reaction, shear, or moment at the point A caused by a unit load placed at "x" feet along the structure instead of a specific distance. This method is similar to the tabulated values method, but rather than obtaining a numeric solution, the outcome is an equation in terms of "x". It is important to understand where the slope of the influence line changes for this method because the influence-line equation will change for each linear section of the influence line. Therefore, the complete equation is a piecewise linear function with a separate influence-line equation for each linear section of the influence line. Müller-Breslau's Principle. According to www.public.iastate.edu, “The Müller-Breslau Principle can be utilized to draw qualitative influence lines, which are directly proportional to the actual influence line.” Instead of moving a unit load along a beam, the Müller-Breslau Principle finds the deflected shape of the beam caused by first releasing the beam at the point being studied, and then applying the function (reaction, shear, or moment) being studied to that point. The principle states that the influence line of a function will have a scaled shape that is the same as the deflected shape of the beam when the beam is acted upon by the function. To understand how the beam deflects under the function, it is necessary to remove the beam's capacity to resist the function. Below are explanations of how to find the influence lines of a simply supported, rigid beam. * When determining the reaction caused at a support, the support is replaced with a roller, which cannot resist a vertical reaction. Then an upward (positive) reaction is applied to the point where the support was. Since the support has been removed, the beam will rotate upwards, and since the beam is rigid, it will create a triangle with the point at the second support. If the beam extends beyond the second support as a cantilever, a similar triangle will be formed below the cantilever's position. This means that the reaction's influence line will be a straight, sloping line with a value of zero at the location of the second support. * When determining the shear caused at some point B along the beam, the beam must be cut and a roller-guide (which is able to resist moments but not shear) must be inserted at point B. Then, by applying a positive shear to that point, it can be seen that the left side will rotate down, but the right side will rotate up. This creates a discontinuous influence line that reaches zero at the supports and whose slope is equal on either side of the discontinuity. 
If point B is at a support, then the deflection between point B and any other supports will still create a triangle, but if the beam is cantilevered, then the entire cantilevered side will move up or down, creating a rectangle. * When determining the moment caused at some point B along the beam, a hinge will be placed at point B, releasing the beam's moment resistance there while shear is still resisted. Then, when a positive moment is placed at point B, both sides of the beam will rotate up. This will create a continuous influence line, but the slopes will be equal and opposite on either side of the hinge at point B. Since the beam is simply supported, its end supports (pins) cannot resist moment; therefore, it can be observed that the supports will never experience moments in a static situation regardless of where the load is placed. The Müller-Breslau Principle can only produce qualitative influence lines. This means that engineers can use it to determine where to place a load to incur the maximum of a function, but the magnitude of that maximum cannot be calculated from the influence line. Instead, the engineer must use statics to solve for the function's value in that loading case. Alternate loading cases. Multiple loads. The simplest loading case is a single point load, but influence lines can also be used to determine responses due to multiple loads and distributed loads. Sometimes it is known that multiple loads will occur at some fixed distance apart. For example, on a bridge the wheels of cars or trucks create point loads that act at relatively standard distances. To calculate the response of a function to all these point loads using an influence line, the results found with the influence line can be scaled for each load, and then the scaled magnitudes can be summed to find the total response that the structure must withstand. The point loads can have different magnitudes themselves, but even if they apply the same force to the structure, it will be necessary to scale them separately because they act at different distances along the structure. For example, if a car's wheels are 10 feet apart, then when the first set is 13 feet onto the bridge, the second set will be only 3 feet onto the bridge. If the first set of wheels is 7 feet onto the bridge, the second set has not yet reached the bridge, and therefore only the first set is placing a load on the bridge. Also, if, between two loads, one of the loads is heavier, the loads must be examined in both loading orders (the larger load on the right and the larger load on the left) to ensure that the maximum effect is found. If there are three or more loads, then the number of cases to be examined increases. Distributed loads. Many loads do not act as point loads, but instead act over an extended length or area as distributed loads. For example, a tractor with continuous tracks will apply a load distributed over the length of each track. To find the effect of a distributed load, the designer can integrate an influence line, found using a point load, over the affected distance of the structure. For example, if a three-foot-long track acts between 5 feet and 8 feet along a beam, the influence line of that beam must be integrated between 5 and 8 feet. The integration of the influence line gives the effect that would be felt if the distributed load had a unit magnitude. Therefore, after integrating, the designer must still scale the results to get the actual effect of the distributed load, as sketched below. 
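Continuing the simply-supported-beam sketch from the tabulation example, the integration step can be carried out numerically; the 5-to-8 interval mirrors the track example above, while the load intensity and beam dimensions remain illustrative assumptions.

```python
L, a = 10.0, 4.0  # same illustrative beam and section as the earlier sketch

def moment_at_a(x):
    """Moment influence at section a for a unit load at position x."""
    return x * (L - a) / L if x <= a else a * (L - x) / L

# Integrate the influence line between 5 and 8 with the trapezoidal rule.
n = 1000
xs = [5.0 + 3.0 * i / n for i in range(n + 1)]
ys = [moment_at_a(x) for x in xs]
unit_effect = sum((ys[i] + ys[i + 1]) / 2.0 * (xs[i + 1] - xs[i])
                  for i in range(n))

w = 2.0  # actual load intensity (force per unit length); assumed
print(f"moment at a due to the strip load: {w * unit_effect:.3f}")  # ~8.400
```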
Indeterminate structures. While the influence lines of statically determinate structures (as mentioned above) are made up of straight line segments, the same is not true for indeterminate structures. Indeterminate structures are not considered rigid; therefore, the influence lines drawn for them will not be straight lines but rather curves. The methods above can still be used to determine the influence lines for the structure, but the work becomes much more complex as the properties of the beam itself must be taken into consideration. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "F^P_i" }, { "math_id": 1, "text": "F^Q_i" }, { "math_id": 2, "text": "d^P_i" }, { "math_id": 3, "text": "d^Q_i" }, { "math_id": 4, "text": "F^Q" }, { "math_id": 5, "text": "d^Q_1 = -1" }, { "math_id": 6, "text": "\n-F^P_1 + \\sum^n_{i=2}F^P_id^Q_i = F^Q\\times 0 \\iff F^P_1 = \\sum^n_{i=2}F^P_id^Q_i\n" } ]
https://en.wikipedia.org/wiki?curid=14459043
1446277
Bessel's inequality
Theorem on orthonormal sequences In mathematics, especially functional analysis, Bessel's inequality is a statement about the coefficients of an element formula_0 in a Hilbert space with respect to an orthonormal sequence. The inequality was derived by F.W. Bessel in 1828. Let formula_1 be a Hilbert space, and suppose that formula_2 is an orthonormal sequence in formula_1. Then, for any formula_0 in formula_1 one has formula_3 where ⟨·,·⟩ denotes the inner product in the Hilbert space formula_1. If we define the infinite sum formula_4 consisting of the sum of the projections (vector resolutes) of formula_0 onto each direction formula_5, Bessel's inequality tells us that this series converges. One can think of this as asserting that there exists an element formula_6 of formula_1 that is built entirely from the potential basis formula_7. For a complete orthonormal sequence (that is, for an orthonormal sequence that is a basis), we have Parseval's identity, which replaces the inequality with an equality (and consequently formula_8 with formula_0). Bessel's inequality follows from the identity formula_9 which holds for any natural "n". References. <templatestyles src="Reflist/styles.css" /> External links. "This article incorporates material from Bessel inequality on PlanetMath, which is licensed under the ."
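As a quick numerical sanity check of the inequality stated above, the finite-dimensional sketch below compares the sum of squared coefficients against the squared norm; the ambient dimension, the number of orthonormal vectors, and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 5  # ambient dimension and length of the orthonormal sequence

# Columns of q form an orthonormal sequence e_1, ..., e_k in R^n.
q, _ = np.linalg.qr(rng.standard_normal((n, k)))
x = rng.standard_normal(n)

coeffs = q.T @ x                  # inner products <x, e_j>
lhs = float(np.sum(coeffs ** 2))  # sum of squared coefficients
rhs = float(x @ x)                # ||x||^2
print(lhs <= rhs + 1e-12)         # True: Bessel's inequality holds
```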
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "e_1, e_2, ..." }, { "math_id": 3, "text": "\\sum_{k=1}^{\\infty}\\left\\vert\\left\\langle x,e_k\\right\\rangle \\right\\vert^2 \\le \\left\\Vert x\\right\\Vert^2," }, { "math_id": 4, "text": "x' = \\sum_{k=1}^{\\infty}\\left\\langle x,e_k\\right\\rangle e_k, " }, { "math_id": 5, "text": "e_k" }, { "math_id": 6, "text": "x' \\in H" }, { "math_id": 7, "text": "e_1, e_2, \\dots" }, { "math_id": 8, "text": "x'" }, { "math_id": 9, "text": "\\begin{align}\n0 \\leq \\left\\| x - \\sum_{k=1}^n \\langle x, e_k \\rangle e_k\\right\\|^2 &= \\|x\\|^2 - 2 \\sum_{k=1}^n \\operatorname{Re} \\langle x, \\langle x, e_k \\rangle e_k \\rangle + \\sum_{k=1}^n | \\langle x, e_k \\rangle |^2 \\\\\n&= \\|x\\|^2 - 2 \\sum_{k=1}^n |\\langle x, e_k \\rangle |^2 + \\sum_{k=1}^n | \\langle x, e_k \\rangle |^2 \\\\\n&= \\|x\\|^2 - \\sum_{k=1}^n | \\langle x, e_k \\rangle |^2,\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1446277
14463
Harmonic mean
Inverse of the average of the inverses of a set of numbers In mathematics, the harmonic mean is one of several kinds of average, and in particular, one of the Pythagorean means. It is sometimes appropriate for situations when the average rate is desired. The harmonic mean can be expressed as the reciprocal of the arithmetic mean of the reciprocals of the given set of observations. As a simple example, the harmonic mean of 1, 4, and 4 is formula_0 Definition. The harmonic mean "H" of the positive real numbers formula_1 is defined to be formula_2 It is the reciprocal of the arithmetic mean of the reciprocals, and vice versa: formula_3 where the arithmetic mean is defined as formula_4 The harmonic mean is a Schur-concave function, and dominated by the minimum of its arguments, in the sense that for any positive set of arguments, formula_5. Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged). The harmonic mean is also concave, which is an even stronger property than Schur-concavity. One has to take care to only use positive numbers though, since the mean fails to be concave if negative values are used. Relationship with other means. For all "positive" data sets "containing at least one pair of nonequal values", the harmonic mean is always the least of the three Pythagorean means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between. (If all values in a nonempty data set are equal, the three means are always equal to one another; e.g., the harmonic, geometric, and arithmetic means of {2, 2, 2} are all 2.) It is the special case "M"−1 of the power mean: formula_6 Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones. The arithmetic mean is often mistakenly used in places calling for the harmonic mean. In the speed example below for instance, the arithmetic mean of 40 is incorrect, and too big. The harmonic mean is related to the other Pythagorean means, as seen in the equation below. This can be seen by interpreting the denominator to be the arithmetic mean of the product of numbers "n" times but each time omitting the "j"-th term. That is, for the first term, we multiply all "n" numbers except the first; for the second, we multiply all "n" numbers except the second; and so on. The numerator, excluding the "n", which goes with the arithmetic mean, is the geometric mean to the power "n". Thus the "n"-th harmonic mean is related to the "n"-th geometric and arithmetic means. The general formula is formula_7 If a set of non-identical numbers is subjected to a mean-preserving spread — that is, two or more elements of the set are "spread apart" from each other while leaving the arithmetic mean unchanged — then the harmonic mean always decreases. Harmonic mean of two or three numbers. Two numbers. For the special case of just two numbers, formula_8 and formula_9, the harmonic mean can be written formula_10 or formula_11 In this special case, the harmonic mean is related to the arithmetic mean formula_12 and the geometric mean formula_13 by formula_14 Since formula_15 by the inequality of arithmetic and geometric means, this shows for the "n" = 2 case that "H" ≤ "G" (a property that in fact holds for all "n"). 
It also follows that formula_16, meaning the two numbers' geometric mean equals the geometric mean of their arithmetic and harmonic means. Three numbers. For the special case of three numbers, formula_8, formula_9 and formula_17, the harmonic mean can be written formula_18 Three positive numbers "H", "G", and "A" are respectively the harmonic, geometric, and arithmetic means of three positive numbers if and only if the following inequality holds formula_19 Weighted harmonic mean. If a set of weights formula_20, ..., formula_21 is associated with the data set formula_8, ..., formula_22, the weighted harmonic mean is defined by formula_23 The unweighted harmonic mean can be regarded as the special case where all of the weights are equal. Examples. In physics. Average speed. In many situations involving rates and ratios, the harmonic mean provides the correct average. For instance, if a vehicle travels a certain distance "d" outbound at a speed "x" (e.g. 60 km/h) and returns the same distance at a speed "y" (e.g. 20 km/h), then its average speed is the harmonic mean of "x" and "y" (30 km/h), not the arithmetic mean (40 km/h). The total travel time is the same as if it had traveled the whole distance at that average speed. This can be proven as follows: the average speed for the entire journey is the total distance 2"d" divided by the total time "d"/"x" + "d"/"y", giving 2"xy"/("x" + "y"), which is exactly the harmonic mean of "x" and "y". However, if the vehicle travels for a certain amount of "time" at a speed "x" and then the same amount of time at a speed "y", then its average speed is the arithmetic mean of "x" and "y", which in the above example is 40 km/h. In this case the average speed for the entire journey equals the total distance divided by the total time: ("xt" + "yt")/(2"t") = ("x" + "y")/2, where "t" is the duration of each leg. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same "distance", then the average speed is the "harmonic" mean of all the sub-trip speeds; and if each sub-trip takes the same amount of "time", then the average speed is the "arithmetic" mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed. For the arithmetic mean, the speed of each portion of the trip is weighted by the duration of that portion, while for the harmonic mean, the corresponding weight is the distance. In both cases, the resulting formula reduces to dividing the total distance by the total time.) However, one may avoid the use of the harmonic mean for the case of "weighting by distance". Pose the problem as finding "slowness" of the trip where "slowness" (in hours per kilometre) is the inverse of speed. When trip slowness is found, invert it so as to find the "true" average trip speed. For each trip segment i, the slowness si = 1/speedi. Then take the weighted arithmetic mean of the si's weighted by their respective distances (optionally with the weights normalized so they sum to 1 by dividing them by trip length). This gives the true average slowness (in time per kilometre). It turns out that this procedure, which can be done with no knowledge of the harmonic mean, amounts to the same mathematical operations as one would use in solving this problem by using the harmonic mean. Thus it illustrates why the harmonic mean works in this case. 
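The round-trip example can be checked directly with a short sketch using Python's standard library:

```python
from statistics import harmonic_mean

out_speed, back_speed = 60.0, 20.0  # km/h, as in the example above
print(harmonic_mean([out_speed, back_speed]))  # 30.0, not 40.0

# Cross-check from first principles for a 60 km leg each way:
d = 60.0
total_time = d / out_speed + d / back_speed  # 1 h out + 3 h back
print(2 * d / total_time)  # 30.0 km/h
```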
Density. Similarly, if one wishes to estimate the density of an alloy given the densities of its constituent elements and their mass fractions (or, equivalently, percentages by mass), then the predicted density of the alloy (exclusive of typically minor volume changes due to atom packing effects) is the weighted harmonic mean of the individual densities, weighted by mass, rather than the weighted arithmetic mean as one might at first expect. To use the weighted arithmetic mean, the densities would have to be weighted by volume. Applying dimensional analysis to the problem while labeling the mass units by element and making sure that only like element-masses cancel makes this clear. Electricity. If one connects two electrical resistors in parallel, one having resistance "x" (e.g., 60 Ω) and one having resistance "y" (e.g., 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of "x" and "y" (48 Ω): the equivalent resistance, in either case, is 24 Ω (one-half of the harmonic mean). This same principle applies to capacitors in series or to inductors in parallel. However, if one connects the resistors in series, then the average resistance is the arithmetic mean of "x" and "y" (50 Ω), with total resistance equal to twice this, the sum of "x" and "y" (100 Ω). This principle applies to capacitors in parallel or to inductors in series. As with the previous example, the same principle applies when more than two resistors, capacitors or inductors are connected, provided that all are in parallel or all are in series. The "conductivity effective mass" of a semiconductor is also defined as the harmonic mean of the effective masses along the three crystallographic directions. Optics. As with other optical equations, the thin lens equation 1/"f" = 1/"u" + 1/"v" can be rewritten such that the focal length "f" is one-half of the harmonic mean of the distances of the subject "u" and object "v" from the lens. Two thin lenses of focal length "f"1 and "f"2 in series are equivalent to two thin lenses of focal length "f"hm, their harmonic mean, in series. Expressed as optical power, two thin lenses of optical powers "P"1 and "P"2 in series are equivalent to two thin lenses of optical power "P"am, their arithmetic mean, in series. In finance. The weighted harmonic mean is the preferable method for averaging multiples, such as the price–earnings ratio (P/E). If these ratios are averaged using a weighted arithmetic mean, high data points are given greater weights than low data points. The weighted harmonic mean, on the other hand, correctly weights each data point. The simple weighted arithmetic mean when applied to non-price normalized ratios such as the P/E is biased upwards and cannot be numerically justified, since it is based on equalized earnings; just as vehicle speeds cannot be averaged for a roundtrip journey (see above). For example, consider two firms, one with a market capitalization of $150 billion and earnings of $5 billion (P/E of 30) and one with a market capitalization of $1 billion and earnings of $1 million (P/E of 1000). Consider an index made of the two stocks, with 30% invested in the first and 70% invested in the second. We want to calculate the P/E ratio of this index. Using the weighted arithmetic mean (incorrect): formula_24 Using the weighted harmonic mean (correct): formula_25 Thus, the correct P/E of 93.46 of this index can only be found using the weighted harmonic mean, while the weighted arithmetic mean will significantly overestimate it. 
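The index example can be verified in a few lines (a sketch using the figures quoted above):

```python
weights = [0.3, 0.7]  # index weights for the two stocks
pe = [30.0, 1000.0]   # price-earnings ratios

# Weighted arithmetic mean (incorrect for ratios such as P/E):
arith = sum(w * r for w, r in zip(weights, pe))
# Weighted harmonic mean (correct):
harm = sum(weights) / sum(w / r for w, r in zip(weights, pe))
print(arith, round(harm, 2))  # 709.0 93.46
```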
In geometry. In any triangle, the radius of the incircle is one-third of the harmonic mean of the altitudes. For any point P on the minor arc BC of the circumcircle of an equilateral triangle ABC, with distances "q" and "t" from B and C respectively, and with the intersection of PA and BC being at a distance "y" from point P, we have that "y" is half the harmonic mean of "q" and "t". In a right triangle with legs "a" and "b" and altitude "h" from the hypotenuse to the right angle, "h"² is half the harmonic mean of "a"² and "b"². Let "t" and "s" ("t" > "s") be the sides of the two inscribed squares in a right triangle with hypotenuse "c". Then "s"² equals half the harmonic mean of "c"² and "t"². Let a trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC. (This is provable using similar triangles.) One application of this trapezoid result is in the crossed ladders problem, where two ladders lie oppositely across an alley, each with feet at the base of one sidewall, with one leaning against a wall at height "A" and the other leaning against the opposite wall at height "B". The ladders cross at a height of "h" above the alley floor. Then "h" is half the harmonic mean of "A" and "B". This result still holds if the walls are slanted but still parallel and the "heights" "A", "B", and "h" are measured as distances from the floor along lines parallel to the walls. This can be proved easily using the area formula of a trapezoid and area addition formula. In an ellipse, the semi-latus rectum (the distance from a focus to the ellipse along a line parallel to the minor axis) is the harmonic mean of the maximum and minimum distances of the ellipse from a focus. In other sciences. In computer science, specifically information retrieval and machine learning, the harmonic mean of the precision (true positives per predicted positive) and the recall (true positives per real positive) is often used as an aggregated performance score for the evaluation of algorithms and systems: the F-score (or F-measure). This is used in information retrieval because only the positive class is of relevance, while the number of negatives, in general, is large and unknown. It is thus a trade-off as to whether the correct positive predictions should be measured in relation to the number of predicted positives or the number of real positives, so it is measured versus a putative number of positives that is an arithmetic mean of the two possible denominators. 
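For illustration, the balanced F-score is just the two-argument harmonic mean applied to precision and recall; the sample values below are arbitrary:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (undefined if both are zero)."""
    return 2.0 * precision * recall / (precision + recall)

print(f1_score(0.9, 0.5))  # 0.642..., pulled toward the weaker score
```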
A consequence arises from basic algebra in problems where people or systems work together. As an example, if a gas-powered pump can drain a pool in 4 hours and a battery-powered pump can drain the same pool in 6 hours, then it will take both pumps 6 × 4/(6 + 4) hours, which is equal to 2.4 hours, to drain the pool together. This is one-half of the harmonic mean of 6 and 4: 4.8. That is, the appropriate average for the two types of pump is the harmonic mean, and with one pair of pumps (two pumps), it takes half this harmonic mean time, while with two pairs of pumps (four pumps) it would take a quarter of this harmonic mean time. In hydrology, the harmonic mean is similarly used to average hydraulic conductivity values for a flow that is perpendicular to layers (e.g., geologic or soil); flow parallel to layers uses the arithmetic mean. This apparent difference in averaging is explained by the fact that hydrology uses conductivity, which is the inverse of resistivity. In sabermetrics, a baseball player's Power–speed number is the harmonic mean of their home run and stolen base totals. In population genetics, the harmonic mean is used when calculating the effects of fluctuations in the census population size on the effective population size. The harmonic mean takes into account the fact that events such as population bottleneck increase the rate of genetic drift and reduce the amount of genetic variation in the population. This is a result of the fact that following a bottleneck very few individuals contribute to the gene pool, limiting the genetic variation present in the population for many generations to come. When considering fuel economy in automobiles, two measures are commonly used – miles per gallon (mpg), and litres per 100 km. As the dimensions of these quantities are the inverse of each other (one is distance per volume, the other volume per distance), when taking the mean value of the fuel economy of a range of cars one measure will produce the harmonic mean of the other – i.e., converting the mean value of fuel economy expressed in litres per 100 km to miles per gallon will produce the harmonic mean of the fuel economy expressed in miles per gallon. For calculating the average fuel consumption of a fleet of vehicles from the individual fuel consumptions, the harmonic mean should be used if the fleet uses miles per gallon, whereas the arithmetic mean should be used if the fleet uses litres per 100 km. In the USA the CAFE standards (the federal automobile fuel consumption standards) make use of the harmonic mean. In chemistry and nuclear physics the average mass per particle of a mixture consisting of different species (e.g., molecules or isotopes) is given by the harmonic mean of the individual species' masses weighted by their respective mass fraction. Beta distribution. The harmonic mean of a beta distribution with shape parameters "α" and "β" is: formula_26 The harmonic mean with "α" < 1 is undefined because its defining expression is not bounded in [0, 1]. Letting "α" = "β" formula_27 showing that for "α" = "β" the harmonic mean ranges from 0 for "α" = "β" = 1, to 1/2 for "α" = "β" → ∞. The following are the limits with one parameter finite (non-zero) and the other parameter approaching these limits: formula_28 With the geometric mean the harmonic mean may be useful in maximum likelihood estimation in the four parameter case. A second harmonic mean ("H"1 − X) also exists for this distribution formula_29 This harmonic mean with "β" < 1 is undefined because its defining expression is not bounded in [ 0, 1 ]. Letting "α" = "β" in the above expression formula_30 showing that for "α" = "β" the harmonic mean ranges from 0, for "α" = "β" = 1, to 1/2, for "α" = "β" → ∞. The following are the limits with one parameter finite (non-zero) and the other approaching these limits: formula_31 Although both harmonic means are asymmetric, when "α" = "β" the two means are equal. Lognormal distribution. The harmonic mean ( "H" ) of the lognormal distribution of a random variable "X" is formula_32 where "μ" and "σ"2 are the parameters of the distribution, i.e. the mean and variance of the distribution of the natural logarithm of "X". The harmonic and arithmetic means of the distribution are related by formula_33 where "C"v and "μ"* are the coefficient of variation and the mean of the distribution respectively. 
The geometric ("G"), arithmetic and harmonic means of the distribution are related by formula_34 Pareto distribution. The harmonic mean of type 1 Pareto distribution is formula_35 where "k" is the scale parameter and "α" is the shape parameter. Statistics. For a random sample, the harmonic mean is calculated as above. Both the mean and the variance may be infinite (if it includes at least one term of the form 1/0). Sample distributions of mean and variance. The mean of the sample "m" is asymptotically distributed normally with variance "s"2. formula_36 The variance of the mean itself is formula_37 where "m" is the arithmetic mean of the reciprocals, "x" are the variates, "n" is the population size and "E" is the expectation operator. Delta method. Assuming that the variance is not infinite and that the central limit theorem applies to the sample then using the delta method, the variance is formula_38 where "H" is the harmonic mean, "m" is the arithmetic mean of the reciprocals formula_39 "s"2 is the variance of the reciprocals of the data formula_40 and "n" is the number of data points in the sample. Jackknife method. A jackknife method of estimating the variance is possible if the mean is known. This method is the usual 'delete 1' rather than the 'delete m' version. This method first requires the computation of the mean of the sample ("m") formula_41 where "x" are the sample values. A series of value "wi" is then computed where formula_42 The mean ("h") of the "w"i is then taken: formula_43 The variance of the mean is formula_44 Significance testing and confidence intervals for the mean can then be estimated with the t test. Size biased sampling. Assume a random variate has a distribution "f"( "x" ). Assume also that the likelihood of a variate being chosen is proportional to its value. This is known as length based or size biased sampling. Let "μ" be the mean of the population. Then the probability density function "f"*( "x" ) of the size biased population is formula_45 The expectation of this length biased distribution E*( "x" ) is formula_46 where "σ"2 is the variance. The expectation of the harmonic mean is the same as the non-length biased version E( "x" ) formula_47 The problem of length biased sampling arises in a number of areas including textile manufacture pedigree analysis and survival analysis Akman "et al." have developed a test for the detection of length based bias in samples. Shifted variables. If "X" is a positive random variable and "q" > 0 then for all "ε" > 0 formula_48 Moments. Assuming that "X" and E("X") are > 0 then formula_49 This follows from Jensen's inequality. Gurland has shown that for a distribution that takes only positive values, for any "n" > 0 formula_50 Under some conditions formula_51 where ~ means approximately equal to. Sampling properties. Assuming that the variates ("x") are drawn from a lognormal distribution there are several possible estimators for "H": formula_52 where formula_53 formula_54 Of these "H"3 is probably the best estimator for samples of 25 or more. Bias and variance estimators. A first order approximation to the bias and variance of "H"1 are formula_55 where "C"v is the coefficient of variation. Similarly a first order approximation to the bias and variance of "H"3 are formula_56 In numerical experiments "H"3 is generally a superior estimator of the harmonic mean than "H"1. "H"2 produces estimates that are largely similar to "H"1. Notes. 
Notes. The Environmental Protection Agency recommends the use of the harmonic mean in setting maximum toxin levels in water. In geophysical reservoir engineering studies, the harmonic mean is widely used. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\left(\\frac{1^{-1} + 4^{-1} + 4^{-1}}{3}\\right)^{-1} = \\frac{3}{\\frac{1}{1} + \\frac{1}{4} + \\frac{1}{4}} = \\frac{3}{1.5} = 2\\,." }, { "math_id": 1, "text": "x_1, x_2, \\ldots, x_n" }, { "math_id": 2, "text": "H(x_1, x_2, \\ldots, x_n) = \\frac{n}{\\displaystyle \\frac1{x_1} + \\frac1{x_2} + \\cdots + \\frac1{x_n}} = \\frac{n}{\\displaystyle \\sum_{i=1}^n \\frac1{x_i}}." }, { "math_id": 3, "text": "\\begin{align}\nH(x_1, x_2, \\ldots, x_n) &= \\frac{1}{\\displaystyle A\\left(\\frac1{x_1}, \\frac1{x_2}, \\ldots \\frac1{x_n}\\right)}, \\\\\nA(x_1, x_2, \\ldots, x_n) &= \\frac{1}{\\displaystyle H\\left(\\frac1{x_1}, \\frac1{x_2}, \\ldots \\frac1{x_n}\\right)},\n\\end{align}" }, { "math_id": 4, "text": "A(x_1, x_2, \\ldots, x_n) = \\tfrac1n \\sum_{i=1}^n x_i." }, { "math_id": 5, "text": "\\min(x_1 \\ldots x_n) \\le H(x_1 \\ldots x_n) \\le n \\min(x_1 \\ldots x_n)" }, { "math_id": 6, "text": "H\\left(x_1, x_2, \\ldots, x_n\\right) = M_{-1}\\left(x_1, x_2, \\ldots, x_n\\right) = \\frac{n}{x_1^{-1} + x_2^{-1} + \\cdots + x_n^{-1}}" }, { "math_id": 7, "text": "H\\left(x_1, \\ldots, x_n\\right) =\n \\frac{\\left(G\\left(x_1, \\ldots, x_n\\right)\\right)^n}\n {A\\left(x_2 x_3 \\cdots x_n, x_1 x_3 \\cdots x_n, \\ldots, x_1 x_2 \\cdots x_{n-1}\\right)} =\n \\frac{\\left(G\\left(x_1, \\ldots, x_n\\right)\\right)^n}\n {A\\left(\n \\frac{1}{x_1} {\\prod\\limits_{i=1}^n x_i},\n \\frac{1}{x_2} {\\prod\\limits_{i=1}^n x_i},\n \\ldots,\n \\frac{1}{x_n} {\\prod\\limits_{i=1}^n x_i}\n \\right)}.\n" }, { "math_id": 8, "text": "x_1" }, { "math_id": 9, "text": "x_2" }, { "math_id": 10, "text": "H = \\frac{2x_1 x_2}{x_1 + x_2} \\qquad " }, { "math_id": 11, "text": " \\qquad \\frac{1}{H} = \\frac{(1/x_1) + (1/x_2)}{2}." }, { "math_id": 12, "text": "A = \\frac{x_1 + x_2}{2}" }, { "math_id": 13, "text": "G = \\sqrt{x_1 x_2}," }, { "math_id": 14, "text": "H = \\frac{G^2}{A} = G\\left(\\frac{G}{A}\\right)." }, { "math_id": 15, "text": "\\tfrac{G}{A} \\le 1" }, { "math_id": 16, "text": "G = \\sqrt{AH}" }, { "math_id": 17, "text": "x_3" }, { "math_id": 18, "text": "H = \\frac{3 x_1 x_2 x_3}{x_1 x_2 + x_1 x_3 + x_2 x_3}." }, { "math_id": 19, "text": "\\frac{A^3}{G^3} + \\frac{G^3}{H^3} + 1 \\le \\frac3{4} \\left(1 + \\frac{A}{H}\\right)^2." 
}, { "math_id": 20, "text": "w_1" }, { "math_id": 21, "text": "w_n" }, { "math_id": 22, "text": "x_n" }, { "math_id": 23, "text": "\n H = \\frac{\\sum\\limits_{i=1}^n w_i}{\\sum\\limits_{i=1}^n \\frac{w_i}{x_i}}\n = \\left( \\frac{\\sum\\limits_{i=1}^n w_i x_i^{-1}}{\\sum\\limits_{i=1}^n w_i} \\right)^{-1}.\n" }, { "math_id": 24, "text": "P/E = 0.3 \\times 30 + 0.7 \\times 1000 = 709" }, { "math_id": 25, "text": "P/E = \\frac{0.3 + 0.7}{0.3/30 + 0.7/1000} \\approx 93.46" }, { "math_id": 26, "text": "H = \\frac{\\alpha - 1}{\\alpha + \\beta - 1} \\text{ conditional on } \\alpha > 1 \\, \\, \\& \\, \\, \\beta > 0 " }, { "math_id": 27, "text": "H = \\frac{\\alpha - 1}{2 \\alpha - 1}" }, { "math_id": 28, "text": "\\begin{align}\n \\lim_{\\alpha \\to 0} H &= \\text{ undefined } \\\\\n \\lim_{\\alpha \\to 1} H &= \\lim_{\\beta \\to \\infty} H = 0 \\\\\n \\lim_{\\beta \\to 0} H &= \\lim_{\\alpha \\to \\infty} H = 1\n\\end{align}" }, { "math_id": 29, "text": "H_{1-X} = \\frac{\\beta - 1}{\\alpha + \\beta - 1} \\text{ conditional on } \\beta > 1 \\, \\, \\& \\, \\, \\alpha > 0" }, { "math_id": 30, "text": "H_{1-X} = \\frac{\\beta - 1}{2 \\beta - 1} " }, { "math_id": 31, "text": "\\begin{align}\n \\lim_{\\beta \\to 0} H_{1-X} &= \\text{ undefined } \\\\\n \\lim_{\\beta \\to 1} H_{1-X} &= \\lim_{\\alpha \\to \\infty} H_{1-X} = 0 \\\\\n \\lim_{\\alpha \\to 0} H_{1-X} &= \\lim_{\\beta \\to \\infty} H_{1-X} = 1\n\\end{align}" }, { "math_id": 32, "text": "H = \\exp \\left( \\mu - \\frac{1}{2} \\sigma^2 \\right)," }, { "math_id": 33, "text": "\\frac{\\mu^*}{H} = 1 + C_v^2 \\, ," }, { "math_id": 34, "text": "H \\mu^* = G^2." }, { "math_id": 35, "text": "H = k \\left( 1 + \\frac{1}{\\alpha} \\right)" }, { "math_id": 36, "text": "s^2 = \\frac{m \\left[\\operatorname{E}\\left(\\frac{1}{x} - 1\\right)\\right]}{m^2 n}" }, { "math_id": 37, "text": "\\operatorname{Var}\\left(\\frac{1}{x}\\right) = \\frac{m \\left[\\operatorname{E}\\left(\\frac{1}{x} - 1\\right)\\right]}{n m^2}" }, { "math_id": 38, "text": "\\operatorname{Var}(H) = \\frac{1}{n}\\frac{s^2}{m^4}" }, { "math_id": 39, "text": "m = \\frac{1}{n} \\sum{ \\frac{1}{x} }." }, { "math_id": 40, "text": "s^2 = \\operatorname{Var}\\left( \\frac{1}{x} \\right) " }, { "math_id": 41, "text": "m = \\frac{n}{ \\sum{ \\frac{1}{x} } }" }, { "math_id": 42, "text": "w_i = \\frac{n - 1}{ \\sum_{j \\neq i} \\frac{1}{x} }." }, { "math_id": 43, "text": "h = \\frac{1}{n} \\sum{w_i}" }, { "math_id": 44, "text": "\\frac{n - 1}{n} \\sum{(m - w_i)}^2." }, { "math_id": 45, "text": "f^*(x) = \\frac{x f(x)}{\\mu}" }, { "math_id": 46, "text": "\\operatorname{E}^*(x) = \\mu \\left[ 1 + \\frac{\\sigma^2}{\\mu^2} \\right]" }, { "math_id": 47, "text": " E^*( x^{ -1 } ) = E( x )^{ -1 } " }, { "math_id": 48, "text": "\\operatorname{Var} \\left[\\frac{1}{(X + \\epsilon)^q}\\right] < \\operatorname{Var} \\left(\\frac{1}{X^q}\\right) ." }, { "math_id": 49, "text": "\\operatorname{E}\\left[ \\frac{1}{X} \\right] \\ge \\frac{1}{ \\operatorname{E}(X) }" }, { "math_id": 50, "text": "\\operatorname{E} \\left(X^{-1}\\right) \\ge \\frac{\\operatorname{E} \\left(X^{n-1}\\right)}{\\operatorname{E}\\left(X^n\\right)} ." 
}, { "math_id": 51, "text": "\\operatorname{E}(a + X)^{-n} \\sim \\operatorname{E}\\left(a + X^{-n}\\right)" }, { "math_id": 52, "text": "\\begin{align}\n H_1 &= \\frac{n}{ \\sum\\left(\\frac{1}{x}\\right) } \\\\\n H_2 &= \\frac{\\left( \\exp\\left[ \\frac{1}{n} \\sum \\log_e(x) \\right] \\right)^2}{ \\frac{1}{n} \\sum(x) } \\\\\n H_3 &= \\exp \\left(m - \\frac{1}{2} s^2 \\right)\n\\end{align}" }, { "math_id": 53, "text": "m = \\frac{1}{n} \\sum \\log_e (x)" }, { "math_id": 54, "text": "s^2 = \\frac{1}{n} \\sum \\left(\\log_e (x) - m\\right)^2" }, { "math_id": 55, "text": "\\begin{align}\n \\operatorname{bias}\\left[ H_1 \\right] &= \\frac{H C_v}{n} \\\\\n \\operatorname{Var}\\left[ H_1 \\right] &= \\frac{H^2 C_v}{n}\n\\end{align}" }, { "math_id": 56, "text": "\\begin{align}\n \\frac{H \\log_e \\left(1 + C_v\\right)}{2n} \\left[ 1 + \\frac{1 + C_v^2}{2} \\right] \\\\\n \\frac{H \\log_e \\left(1 + C_v\\right)}{n} \\left[ 1 + \\frac{1 + C_v^2}{4} \\right]\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=14463
14463033
Free surface
Surface of a fluid that is subject to zero parallel shear stress In physics, a free surface is the surface of a fluid that is subject to zero parallel shear stress, such as the interface between two homogeneous fluids. An example of two such homogeneous fluids would be a body of water (liquid) and the air in the Earth's atmosphere (gas mixture). Unlike liquids, gases cannot form a free surface on their own. Fluidized/liquefied solids, including slurries, granular materials, and powders may form a free surface. A liquid in a gravitational field will form a free surface if unconfined from above. Under mechanical equilibrium this free surface must be perpendicular to the forces acting on the liquid; if not, there would be a force along the surface, and the liquid would flow in that direction. Thus, on the surface of the Earth, all free surfaces of liquids are horizontal unless disturbed (except near solids dipping into them, where surface tension distorts the surface in a region called the meniscus). In a free liquid that is not affected by outside forces such as a gravitational field, only internal attractive forces play a role (e.g. Van der Waals forces, hydrogen bonds). Its free surface will assume the shape with the least surface area for its volume: a perfect sphere. Such behaviour can be expressed in terms of surface tension. It can be demonstrated experimentally by observing a large globule of oil placed below the surface of a mixture of water and alcohol having the same density so the oil has neutral buoyancy. Flatness. Flatness refers to the shape of a liquid's free surface. On Earth, the flatness of a liquid is a function of the curvature of the planet, and from trigonometry, can be found to deviate from true flatness by approximately 19.6 nanometers over an area of 1 square meter, a deviation which is dominated by the effects of surface tension. This calculation uses Earth's mean radius at sea level; however, a liquid will be slightly flatter at the poles. Over large distances or at planetary scale, the surface of an undisturbed liquid tends to conform to equigeopotential surfaces; for example, mean sea level follows approximately the geoid. Waves. If the free surface of a liquid is disturbed, waves are produced on the surface. These waves are not elastic waves due to any elastic force; they are gravity waves caused by the force of gravity tending to bring the surface of the disturbed liquid back to its horizontal level. Momentum causes the wave to overshoot, thus oscillating and spreading the disturbance to the neighboring portions of the surface. The velocity of the surface waves varies as the square root of the wavelength if the liquid is deep; therefore long waves on the sea go faster than short ones. Very minute waves or ripples are not due to gravity but to capillary action, and have properties different from those of the longer ocean surface waves, because the surface is increased in area by the ripples and the capillary forces are in this case large compared with the gravitational forces. Capillary ripples are damped both by sub-surface viscosity and by surface rheology.
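The 19.6 nanometer flatness figure above can be reproduced with the sagitta approximation. The following Python sketch is an added illustration, not part of the original text; it assumes Earth's mean radius of 6371 km and a 1 m wide surface:

```python
# Deviation of a liquid surface from a flat plane due to Earth's curvature,
# over a 1 m wide span, using the sagitta approximation h ~ r^2 / (2R).
R = 6.371e6      # Earth's mean radius in meters (assumed value)
r = 0.5          # half of a 1 m span, in meters
h = r**2 / (2 * R)
print(f"{h * 1e9:.1f} nm")   # ~19.6 nm
```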
Rotation. If a liquid is contained in a cylindrical vessel and is rotating around a vertical axis coinciding with the axis of the cylinder, the free surface will assume a parabolic surface of revolution known as a paraboloid. The free surface at each point is at a right angle to the force acting at it, which is the resultant of the force of gravity and the centrifugal force from the motion of each point in a circle. Since the main mirror in a telescope must be parabolic, this principle is used to create liquid-mirror telescopes. Consider a cylindrical container filled with liquid rotating about the "z" axis. In cylindrical coordinates, the equations of motion are: formula_0 where formula_1 is the pressure, formula_2 is the density of the fluid, formula_3 is the distance from the axis of rotation, formula_4 is the angular frequency, and formula_5 is the gravitational acceleration. Taking a surface of constant pressure formula_6, the total differential becomes formula_7 Integrating, the equation for the free surface becomes formula_8 where formula_9 is the distance of the free surface from the bottom of the container along the axis of rotation. If one integrates the volume of the paraboloid formed by the free surface and then solves for the original height, one can find the height of the fluid along the centerline of the cylindrical container: formula_10 The equation of the free surface at any distance formula_3 from the center becomes formula_11 If a free liquid is rotating about an axis, the free surface will take the shape of an oblate spheroid: the approximate shape of the Earth due to its equatorial bulge. References. <templatestyles src="Reflist/styles.css" />
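As an added numerical illustration of the free-surface formula above (the cylinder radius, fill height, and rotation rate below are assumed values, not taken from the text):

```python
import numpy as np

# Evaluate z_s(r) = h0 - omega^2/(4g) (R^2 - 2 r^2) for assumed parameters.
g = 9.81                     # gravitational acceleration, m/s^2
R = 0.10                     # cylinder radius, m (assumed)
h0 = 0.15                    # undisturbed liquid height, m (assumed)
omega = 2 * np.pi * 2.0      # angular frequency for 2 revolutions per second

r = np.linspace(0.0, R, 6)
z_s = h0 - omega**2 / (4 * g) * (R**2 - 2 * r**2)
print(z_s)                   # lowest on the axis (r = 0), highest at the wall (r = R)
```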
[ { "math_id": 0, "text": "\\frac{\\partial P}{\\partial r} = \\rho r \\omega^2, \\quad\n \\frac{\\partial P}{\\partial \\theta} = 0, \\quad\n \\frac{\\partial P}{\\partial z} = -\\rho g," }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "\\omega" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "(dP = 0)" }, { "math_id": 7, "text": "dP = \\rho r \\omega^2 dr - \\rho g dz \\to \\frac{dz_\\text{isobar}}{dr} = \\frac{r \\omega^2}{g}." }, { "math_id": 8, "text": "z_s = \\frac{\\omega^2}{2g} r^2 + h_c," }, { "math_id": 9, "text": "h_c" }, { "math_id": 10, "text": "h_c = h_0 - \\frac{\\omega^2 R^2}{4g}." }, { "math_id": 11, "text": "z_s = h_0 - \\frac{\\omega^2}{4g} (R^2 - 2 r^2)." }, { "math_id": 12, "text": "\\frac{Dp}{Dt} = 0.\n" } ]
https://en.wikipedia.org/wiki?curid=14463033
14463498
Eilenberg–Mazur swindle
Method of proof involving paradoxical properties of infinite sums In mathematics, the Eilenberg–Mazur swindle, named after Samuel Eilenberg and Barry Mazur, is a method of proof that involves paradoxical properties of infinite sums. In geometric topology it was introduced by Mazur (1959, 1961) and is often called the Mazur swindle. In algebra it was introduced by Samuel Eilenberg and is known as the Eilenberg swindle or Eilenberg telescope (see telescoping sum). The Eilenberg–Mazur swindle is similar to the following well-known joke "proof" that 1 = 0: 1 = 1 + (−1 + 1) + (−1 + 1) + ... = 1 − 1 + 1 − 1 + ... = (1 − 1) + (1 − 1) + ... = 0 This "proof" is not valid as a claim about real numbers because Grandi's series 1 − 1 + 1 − 1 + ... does not converge, but the analogous argument can be used in some contexts where there is some sort of "addition" defined on some objects for which infinite sums do make sense, to show that if "A" + "B" = 0 then "A" = "B" = 0. Mazur swindle. In geometric topology the addition used in the swindle is usually the connected sum of knots or manifolds. Example: A typical application of the Mazur swindle in geometric topology is the proof that the sum of two non-trivial knots "A" and "B" is non-trivial. For knots it is possible to take infinite sums by making the knots smaller and smaller, so if "A" + "B" is trivial then formula_0 so "A" is trivial (and "B" by a similar argument). The infinite sum of knots is usually a wild knot, not a tame knot. See the references for more geometric examples. Example: The oriented "n"-manifolds have an addition operation given by connected sum, with 0 the "n"-sphere. If "A" + "B" is the "n"-sphere, then "A" + "B" + "A" + "B" + ... is Euclidean space so the Mazur swindle shows that the connected sum of "A" and Euclidean space is Euclidean space, which shows that "A" is the 1-point compactification of Euclidean space and therefore "A" is homeomorphic to the "n"-sphere. (This does not show in the case of smooth manifolds that "A" is diffeomorphic to the "n"-sphere, and in some dimensions, such as 7, there are examples of exotic spheres "A" with inverses that are not diffeomorphic to the standard "n"-sphere.) Eilenberg swindle. In algebra the addition used in the swindle is usually the direct sum of modules over a ring. Example: A typical application of the Eilenberg swindle in algebra is the proof that if "A" is a projective module over a ring "R" then there is a free module "F" with "A" ⊕ "F" ≅ "F". To see this, choose a module "B" such that "A" ⊕ "B" is free, which can be done as "A" is projective, and put "F" = "B" ⊕ "A" ⊕ "B" ⊕ "A" ⊕ "B" ⊕ ⋯, so that "A" ⊕ "F" = "A" ⊕ ("B" ⊕ "A") ⊕ ("B" ⊕ "A") ⊕ ⋯ = ("A" ⊕ "B") ⊕ ("A" ⊕ "B") ⊕ ⋯ ≅ "F". Example: Finitely generated free modules over commutative rings "R" have a well-defined natural number as their dimension which is additive under direct sums, and are isomorphic if and only if they have the same dimension. This is false for some noncommutative rings, and a counterexample can be constructed using the Eilenberg swindle as follows. Let "X" be an abelian group such that "X" ≅ "X" ⊕ "X" (for example the direct sum of an infinite number of copies of any nonzero abelian group), and let "R" be the ring of endomorphisms of "X". Then the left "R"-module "R" is isomorphic to the left "R"-module "R" ⊕ "R".
Example: If "A" and "B" are any groups then the Eilenberg swindle can be used to construct a ring "R" such that the group rings "R"["A"] and "R"["B"] are isomorphic rings: take "R" to be the group ring of the restricted direct product of infinitely many copies of "A" ⨯ "B". Other examples. The proof of the Cantor–Bernstein–Schroeder theorem might be seen as an antecedent of the Eilenberg–Mazur swindle. In fact, the ideas are quite similar. If there are injections of sets from "X" to "Y" and from "Y" to "X", this means that formally we have "X" = "Y" + "A" and "Y" = "X" + "B" for some sets "A" and "B", where + means disjoint union and = means there is a bijection between two sets. Expanding the former with the latter, "X" = "X" + "A" + "B". In this bijection, let "Z" consist of those elements of the left hand side that correspond to an element of "X" on the right hand side. This bijection then expands to the bijection "X" = "A" + "B" + "A" + "B" + ⋯ + "Z". Substituting the right hand side for "X" in "Y" = "B" + "X" gives the bijection "Y" = "B" + "A" + "B" + "A" + ⋯ + "Z". Switching every adjacent pair "B" + "A" yields "Y" = "A" + "B" + "A" + "B" + ⋯ + "Z". Composing the bijection for "X" with the inverse of the bijection for "Y" then yields "X" = "Y". This argument depended on the bijections "A" + "B" = "B" + "A" and "A" + ("B" + "C") = ("A" + "B") + "C" as well as the well-definedness of infinite disjoint union.
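The back-and-forth argument above is effectively an algorithm. The following Python sketch is an added illustration, not from the source; it handles only finite sets, and all names are made up. It builds the bijection from two injections by tracing each element's chain of predecessors:

```python
# Sketch of the Cantor-Bernstein-Schroeder construction for finite sets, with
# the injections f: X -> Y and g: Y -> X given as dicts. An element x is mapped
# by f if its chain of predecessors x = g(y), y = f(x'), ... stops in X (or
# cycles), and by the inverse of g otherwise.
def csb_bijection(X, f, g):
    g_inv = {v: k for k, v in g.items()}
    f_inv = {v: k for k, v in f.items()}

    def map_by_f(x):
        side, cur, seen = 'X', x, set()
        while True:
            if (side, cur) in seen:
                return True                  # a cycle: f works on all of it
            seen.add((side, cur))
            if side == 'X':
                if cur not in g_inv:
                    return True              # chain stops in X: use f
                side, cur = 'Y', g_inv[cur]
            else:
                if cur not in f_inv:
                    return False             # chain stops in Y: use g^{-1}
                side, cur = 'X', f_inv[cur]

    return {x: f[x] if map_by_f(x) else g_inv[x] for x in X}

# Example usage: two injections between three-element sets.
print(csb_bijection({1, 2, 3},
                    {1: 'b', 2: 'c', 3: 'a'},
                    {'a': 2, 'b': 3, 'c': 1}))
```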
[ { "math_id": 0, "text": "A=A+(B+A)+(B+A)+\\cdots = (A+B)+(A+B)+\\cdots=0\\," } ]
https://en.wikipedia.org/wiki?curid=14463498
14464469
Outline of black holes
Overview of and topical guide to black holes The following outline is provided as an overview of and topical guide to black holes: Black hole – mathematically defined region of spacetime exhibiting such a strong gravitational pull that no particle or electromagnetic radiation can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has an enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe. What type of thing is a black hole? A black hole can be described as all of the following: History of black holes. History of black holes References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=14464469
1446490
Temperature gradient
Temperature difference per unit of length A temperature gradient is a physical quantity that describes in which direction and at what rate the temperature changes most rapidly around a particular location. The spatial temperature gradient is a vector quantity with dimension of temperature difference per unit length. The SI unit is kelvin per meter (K/m). Temperature gradients in the atmosphere are important in the atmospheric sciences (meteorology, climatology and related fields). Mathematical description. Assuming that the temperature "T" is an intensive quantity, i.e., a single-valued, continuous and differentiable function of three-dimensional space (often called a scalar field), i.e., that formula_0 where "x", "y" and "z" are the coordinates of the location of interest, then the temperature gradient is the vector quantity defined as formula_1 Physical processes. Meteorology. Differences in air temperature between different locations are critical in weather forecasting and climate. The absorption of solar light at or near the planetary surface increases the temperature gradient and may result in convection (a major process of cloud formation, often associated with precipitation). Meteorological fronts are regions where the horizontal temperature gradient may reach relatively high values, as these are boundaries between air masses with rather distinct properties. Clearly, the temperature gradient may change substantially in time, as a result of diurnal or seasonal heating and cooling for instance. This is most pronounced during a temperature inversion: for instance, during the day the temperature at ground level may be cold while it is warmer higher up in the atmosphere. As day shifts to night the temperature may drop rapidly in some places, while other places at the same elevation stay warmer or cooler. This sometimes happens on the West Coast of the United States due to geography. Weathering. Expansion and contraction of rock caused by temperature changes during a wildfire may, through thermal stress weathering, result in thermal shock and subsequent structural failure. References. <templatestyles src="Reflist/styles.css" />
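As an added illustration of the definition above (the grid, spacings, and temperature field below are made up for the example), the gradient of a sampled temperature field can be estimated with finite differences:

```python
import numpy as np

# Estimate the temperature gradient of a sampled field T(x, y, z) on a
# regular 3-D grid by finite differences; the field here is illustrative.
x = np.linspace(0.0, 10.0, 21)   # meters
y = np.linspace(0.0, 10.0, 21)
z = np.linspace(0.0, 5.0, 11)
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')

T = 290.0 + 0.5 * X - 2.0 * Z    # a made-up linear field, in kelvin

dTdx, dTdy, dTdz = np.gradient(T, x, y, z)
print(dTdx[0, 0, 0], dTdy[0, 0, 0], dTdz[0, 0, 0])   # ~0.5, 0.0, -2.0 K/m
```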
[ { "math_id": 0, "text": "T=T(x,y,z)" }, { "math_id": 1, "text": "\\nabla T = \\begin{pmatrix}\n{\\frac{\\partial T}{\\partial x}}, \n{\\frac{\\partial T}{\\partial y}}, \n{\\frac{\\partial T}{\\partial z}}\n\\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=1446490
144652
Riemannian manifold
Smooth manifold with an inner product on each tangent space In differential geometry, a Riemannian manifold is a geometric space on which many geometric notions such as distance, angles, length, volume, and curvature are defined. Euclidean space, the formula_0-sphere, hyperbolic space, and smooth surfaces in three-dimensional space, such as ellipsoids and paraboloids, are all examples of Riemannian manifolds. Riemannian manifolds are named after German mathematician Bernhard Riemann, who first conceptualized them. Formally, a Riemannian metric (or just a metric) on a smooth manifold is a choice of inner product for each tangent space of the manifold. A Riemannian manifold is a smooth manifold together with a Riemannian metric. The techniques of differential and integral calculus are used to pull geometric data out of the Riemannian metric. For example, integration leads to the Riemannian distance function, whereas differentiation is used to define curvature and parallel transport. Any smooth surface in three-dimensional Euclidean space is a Riemannian manifold with a Riemannian metric coming from the way it sits inside the ambient space. The same is true for any submanifold of Euclidean space of any dimension. Although John Nash proved that every Riemannian manifold arises as a submanifold of Euclidean space, and although some Riemannian manifolds are naturally exhibited or defined in that way, the idea of a Riemannian manifold emphasizes the intrinsic point of view, which defines geometric notions directly on the abstract space itself without referencing an ambient space. In many instances, such as for hyperbolic space and projective space, Riemannian metrics are more naturally defined or constructed using the intrinsic point of view. Additionally, many metrics on Lie groups and homogeneous spaces are defined intrinsically by using group actions to transport an inner product on a single tangent space to the entire manifold, and many special metrics such as constant scalar curvature metrics and Kähler–Einstein metrics are constructed intrinsically using tools from partial differential equations. Riemannian geometry, the study of Riemannian manifolds, has deep connections to other areas of math, including geometric topology, complex geometry, and algebraic geometry. Applications include physics (especially general relativity and gauge theory), computer graphics, machine learning, and cartography. Generalizations of Riemannian manifolds include pseudo-Riemannian manifolds, Finsler manifolds, and sub-Riemannian manifolds. History. In 1827, Carl Friedrich Gauss discovered that the Gaussian curvature of a surface embedded in 3-dimensional space only depends on local measurements made within the surface (the first fundamental form). This result is known as the Theorema Egregium ("remarkable theorem" in Latin). A map that preserves the local measurements of a surface is called a local isometry. Call a property of a surface an intrinsic property if it is preserved by local isometries and call it an extrinsic property if it is not. In this language, the Theorema Egregium says that the Gaussian curvature is an intrinsic property of surfaces. Riemannian manifolds and their curvature were first introduced non-rigorously by Bernhard Riemann in 1854. However, they would not be formalized until much later. In fact, the more primitive concept of a smooth manifold was first explicitly defined only in 1913 in a book by Hermann Weyl. 
Élie Cartan introduced the Cartan connection, one of the first concepts of a connection. Levi-Civita defined the Levi-Civita connection, a special connection on a Riemannian manifold. Albert Einstein used the theory of pseudo-Riemannian manifolds (a generalization of Riemannian manifolds) to develop general relativity. Specifically, the Einstein field equations are constraints on the curvature of spacetime, which is a 4-dimensional pseudo-Riemannian manifold. Definition. Riemannian metrics and Riemannian manifolds. Let formula_1 be a smooth manifold. For each point formula_2, there is an associated vector space formula_3 called the tangent space of formula_1 at formula_4. Vectors in formula_3 are thought of as the vectors tangent to formula_1 at formula_4. However, formula_3 does not come equipped with an inner product, a measuring stick that gives tangent vectors a concept of length and angle. This is an important deficiency because calculus teaches that to calculate the length of a curve, the length of vectors tangent to the curve must be defined. A Riemannian metric puts a measuring stick on every tangent space. A "Riemannian metric" formula_5 on formula_1 assigns to each formula_4 a positive-definite inner product formula_6 in a smooth way (see the section on regularity below). This induces a norm formula_7 defined by formula_8. A smooth manifold formula_1 endowed with a Riemannian metric formula_5 is a "Riemannian manifold", denoted formula_9. A Riemannian metric is a special case of a metric tensor. A Riemannian metric is not to be confused with the distance function of a metric space, which is also called a metric. The Riemannian metric in coordinates. If formula_10 are smooth local coordinates on formula_1, the vectors formula_11 form a basis of the vector space formula_3 for any formula_12. Relative to this basis, one can define the Riemannian metric's components at each point formula_4 by formula_13. These formula_14 functions formula_15 can be put together into an formula_16 matrix-valued function on formula_17. The requirement that formula_18 is a positive-definite inner product then says exactly that this matrix-valued function is a symmetric positive-definite matrix at formula_4. In terms of the tensor algebra, the Riemannian metric can be written in terms of the dual basis formula_19 of the cotangent bundle as formula_20 Regularity of the Riemannian metric. The Riemannian metric formula_5 is "continuous" if its components formula_15 are continuous in any smooth coordinate chart formula_21 The Riemannian metric formula_5 is "smooth" if its components formula_22 are smooth in any smooth coordinate chart. One can consider many other types of Riemannian metrics in this spirit, such as Lipschitz Riemannian metrics or measurable Riemannian metrics. There are situations in geometric analysis in which one wants to consider non-smooth Riemannian metrics. See for instance (Gromov 1999) and (Shi and Tam 2002). However, in this article, formula_5 is assumed to be smooth unless stated otherwise. Musical isomorphism. In analogy to how an inner product on a vector space induces an isomorphism between a vector space and its dual given by formula_23, a Riemannian metric induces an isomorphism of bundles between the tangent bundle and the cotangent bundle. Namely, if formula_5 is a Riemannian metric, then formula_24 is an isomorphism of smooth vector bundles from the tangent bundle formula_25 to the cotangent bundle formula_26.
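As an added concrete illustration (the point and tangent vector are arbitrary choices, not from the text): once the matrix of components formula_15 at a point is known, the induced norm and the musical isomorphism are ordinary matrix operations.

```python
import numpy as np

# At a fixed point, the metric is just a symmetric positive-definite matrix
# (g_ij); here the round metric on the 2-sphere at theta = pi/3 (illustrative).
theta = np.pi / 3
g = np.array([[1.0, 0.0],
              [0.0, np.sin(theta)**2]])

v = np.array([2.0, 1.0])      # an arbitrary tangent vector in the coordinate basis
norm_v = np.sqrt(v @ g @ v)   # the induced norm sqrt(g_p(v, v))
v_flat = g @ v                # "lowering the index": components of g_p(v, .)
print(norm_v, v_flat)
```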
Isometries. An isometry is a function between Riemannian manifolds which preserves all of the structure of Riemannian manifolds. If two Riemannian manifolds have an isometry between them, they are called "isometric", and they are considered to be the same manifold for the purpose of Riemannian geometry. Specifically, if formula_9 and formula_27 are two Riemannian manifolds, a diffeomorphism formula_28 is called an "isometry" if formula_29, that is, if formula_30 for all formula_31 and formula_32 For example, translations and rotations are both isometries from Euclidean space (to be defined soon) to itself. One says that a smooth map formula_33 not assumed to be a diffeomorphism, is a "local isometry" if every formula_31 has an open neighborhood formula_17 such that formula_34 is an isometry (and thus a diffeomorphism). Volume. An oriented formula_0-dimensional Riemannian manifold formula_9 has a unique formula_0-form formula_35 called the "Riemannian volume form". The Riemannian volume form is preserved by orientation-preserving isometries. The volume form gives rise to a measure on formula_1 which allows measurable functions to be integrated. If formula_1 is compact, the "volume of formula_1" is formula_36. Examples. Euclidean space. Let formula_37 denote the standard coordinates on formula_38 The (canonical) "Euclidean metric" formula_39 is given by formula_40 or equivalently formula_41 or equivalently by its coordinate functions formula_42 where formula_43 is the Kronecker delta which together form the matrix formula_44 The Riemannian manifold formula_45 is called "Euclidean space". Submanifolds. Let formula_9 be a Riemannian manifold and let formula_48 be an immersed submanifold or an embedded submanifold of formula_1. The pullback formula_49 of formula_5 is a Riemannian metric on formula_50, and formula_51 is said to be a "Riemannian submanifold" of formula_9. In the case where formula_52, the map formula_48 is given by formula_53 and the metric formula_49 is just the restriction of formula_5 to vectors tangent along formula_50. In general, the formula for formula_49 is formula_54 where formula_55 is the pushforward of formula_56 by formula_57 Examples: The formula_0-sphere formula_58 is a smooth embedded submanifold of Euclidean space formula_47. The Riemannian metric this induces on formula_46 is called the "round metric" or "standard metric". For positive real numbers formula_59, the ellipsoid formula_60 is a smooth embedded submanifold of Euclidean space formula_61. The graph of a smooth function formula_62 is a smooth embedded submanifold of formula_63. If formula_9 is a Riemannian manifold and formula_64 is a smooth covering map, then formula_65 together with the pullback of formula_5 is a Riemannian manifold. On the other hand, if formula_50 already has a Riemannian metric formula_66, then the immersion (or embedding) formula_48 is called an "isometric immersion" (or "isometric embedding") if formula_67. Hence isometric immersions and isometric embeddings are Riemannian submanifolds. Products. Let formula_9 and formula_27 be two Riemannian manifolds, and consider the product manifold formula_68. The Riemannian metrics formula_5 and formula_69 naturally put a Riemannian metric formula_70 on formula_71 which can be described in a few ways. Considering the natural decomposition formula_72 one may define formula_73. If formula_74 is a smooth coordinate chart on formula_1 and formula_75 is a smooth coordinate chart on formula_50, then formula_76 is a smooth coordinate chart on formula_77 Let formula_78 be the representation of formula_5 in the chart formula_74 and let formula_79 be the representation of formula_69 in the chart formula_75. The representation of formula_70 in the chart formula_80 is formula_81 where formula_82. For example, consider the formula_0-torus formula_83. If each copy of formula_84 is given the round metric, the product Riemannian manifold formula_85 is called the "flat torus". As another example, the Riemannian product formula_86, where each copy of formula_87 has the Euclidean metric, is isometric to formula_88 with the Euclidean metric. Positive combinations of metrics. Let formula_89 be Riemannian metrics on formula_90 If formula_91 are any positive smooth functions on formula_1, then formula_92 is another Riemannian metric on formula_90
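As an added illustration of the induced metric on a Riemannian submanifold (a sketch using the standard spherical parametrization, not part of the original text), the round metric on the 2-sphere can be computed symbolically as a pullback of the Euclidean metric:

```python
import sympy as sp

# The metric a submanifold inherits can be computed in coordinates as
# i*g = J^T J, where J is the Jacobian of the inclusion. Here: S^2 in R^3.
theta, phi = sp.symbols('theta phi')
inclusion = sp.Matrix([sp.sin(theta) * sp.cos(phi),
                       sp.sin(theta) * sp.sin(phi),
                       sp.cos(theta)])
J = inclusion.jacobian([theta, phi])
g_induced = sp.simplify(J.T * J)   # entries are <d_i X, d_j X>
print(g_induced)                   # Matrix([[1, 0], [0, sin(theta)**2]])
```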
Every smooth manifold admits a Riemannian metric. Theorem: Every smooth manifold admits a (non-canonical) Riemannian metric. This is a fundamental result. Although much of the basic theory of Riemannian metrics can be developed using only that a smooth manifold is a locally Euclidean topological space, for this result it is necessary to use that smooth manifolds are Hausdorff and paracompact. The reason is that the proof makes use of a partition of unity. An alternative proof uses the Whitney embedding theorem to embed formula_1 into Euclidean space and then pulls back the metric from Euclidean space to formula_1. On the other hand, the Nash embedding theorem states that, given any smooth Riemannian manifold formula_93 there is an embedding formula_94 for some formula_50 such that the pullback by formula_95 of the standard Riemannian metric on formula_96 is formula_97 That is, the entire structure of a smooth Riemannian manifold can be encoded by a diffeomorphism to a certain embedded submanifold of some Euclidean space. Therefore, one could argue that nothing can be gained from the consideration of abstract smooth manifolds and their Riemannian metrics. However, there are many natural smooth Riemannian manifolds, such as the set of rotations of three-dimensional space and hyperbolic space, of which any representation as a submanifold of Euclidean space will fail to represent their remarkable symmetries and properties as clearly as their abstract presentations do. Metric space structure. An "admissible curve" is a piecewise smooth curve formula_98 whose velocity formula_99 is nonzero everywhere it is defined. The nonnegative function formula_100 is defined on the interval formula_101 except for at finitely many points. The length formula_102 of an admissible curve formula_98 is defined as formula_103 The integrand is bounded and continuous except at finitely many points, so it is integrable. For "formula_9" a connected Riemannian manifold, define formula_104 by formula_105 Theorem: formula_106 is a metric space, and the metric topology on formula_106 coincides with the topology on formula_1. Although the length of a curve is given by an explicit formula, it is generally impossible to write out the distance function formula_109 by any explicit means. In fact, if formula_1 is compact, there always exist points where formula_110 is non-differentiable, and it can be remarkably difficult to even determine the location or nature of these points, even in seemingly simple cases such as when formula_9 is an ellipsoid. If one works with Riemannian metrics that are merely continuous but possibly not smooth, the length of an admissible curve and the Riemannian distance function are defined exactly the same, and, as before, formula_106 is a metric space and the metric topology on formula_106 coincides with the topology on formula_1. Diameter. The "diameter" of the metric space formula_106 is formula_111 The Hopf–Rinow theorem shows that if formula_106 is complete and has finite diameter, it is compact. Conversely, if formula_106 is compact, then the function formula_110 has a maximum, since it is a continuous function on a compact metric space. This proves the following. If formula_106 is complete, then it is compact if and only if it has finite diameter. This is not the case without the completeness assumption; for counterexamples one could consider any open bounded subset of a Euclidean space with the standard Riemannian metric. It is also not true that "any" complete metric space of finite diameter must be compact; it matters that the metric space came from a Riemannian manifold.
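As an added numerical illustration of the length functional (the curve below, the equator of the round 2-sphere, is an arbitrary choice, not from the text):

```python
import numpy as np

# Approximate the Riemannian length of an admissible curve on the round
# 2-sphere, L = integral of sqrt(theta'^2 + sin(theta)^2 phi'^2) dt.
t = np.linspace(0.0, 1.0, 2001)
theta = np.full_like(t, np.pi / 2)   # stay on the equator
phi = 2 * np.pi * t                  # wind once around

dtheta = np.gradient(theta, t)
dphi = np.gradient(phi, t)
speed = np.sqrt(dtheta**2 + np.sin(theta)**2 * dphi**2)

# trapezoidal rule for the length integral
length = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
print(length)                        # ~ 2*pi, the circumference of the equator
```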
Connections, geodesics, and curvature. Connections. An (affine) connection is an additional structure on a Riemannian manifold that defines differentiation of one vector field with respect to another. Connections contain geometric data, and two Riemannian manifolds with different connections have different geometry. Let formula_112 denote the space of vector fields on formula_1. An "(affine) connection" formula_113 on formula_1 is a bilinear map formula_114 such that for every formula_115: formula_116 and formula_117 (the first condition says that formula_113 is linear over smooth functions in its first argument; the second is the Leibniz rule in the second argument). The expression formula_118 is called the "covariant derivative of formula_119 with respect to formula_120". Levi-Civita connection. Thankfully, there is a natural connection associated to a Riemannian manifold called the Levi-Civita connection. A connection formula_121 is said to "preserve the metric" if formula_122 A connection formula_121 is "torsion-free" if formula_123 where formula_124 is the Lie bracket. A "Levi-Civita connection" is a torsion-free connection that preserves the metric. Once a Riemannian metric is fixed, there exists a unique Levi-Civita connection. Note that the definition of preserving the metric uses the regularity of formula_5. Covariant derivative along a curve. If formula_98 is a smooth curve, a "smooth vector field along formula_108" is a smooth map formula_125 such that formula_126 for all formula_127. The set formula_128 of smooth vector fields along formula_108 is a vector space under pointwise vector addition and scalar multiplication. One can also pointwise multiply a smooth vector field along formula_108 by a smooth function formula_129: formula_130 for formula_131 Let formula_120 be a smooth vector field along formula_108. If formula_132 is a smooth vector field on a neighborhood of the image of formula_108 such that formula_133, then formula_132 is called an "extension of formula_120". Given a fixed connection formula_121 on formula_1 and a smooth curve formula_98, there is a unique operator formula_134, called the "covariant derivative along formula_108", such that formula_135 formula_136 and formula_137 whenever formula_132 is an extension of formula_120; the first two conditions say that formula_134 is linear and satisfies a product rule, and the third ties it to the connection formula_121. Geodesics. Geodesics are curves with no intrinsic acceleration. Equivalently, geodesics are curves that locally take the shortest path between two points. They are the generalization of straight lines in Euclidean space to arbitrary Riemannian manifolds. An ant living in a Riemannian manifold walking straight ahead without making any effort to accelerate or turn would trace out a geodesic. Fix a connection formula_121 on formula_1. Let formula_98 be a smooth curve. The "acceleration of formula_108" is the vector field formula_138 along formula_108. If formula_139 for all formula_140, formula_108 is called a "geodesic". For every formula_2 and formula_141, there exists a geodesic formula_142 defined on some open interval formula_143 containing 0 such that formula_144 and formula_145. Any two such geodesics agree on their common domain. Taking the union over all open intervals formula_143 containing 0 on which a geodesic satisfying formula_144 and formula_145 exists, one obtains a geodesic called a "maximal geodesic" of which every geodesic satisfying formula_144 and formula_145 is a restriction. Every curve formula_98 that has the shortest length of any admissible curve with the same endpoints as formula_108 is a geodesic (in a unit-speed reparameterization).
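As an added symbolic illustration (using the round metric on the 2-sphere, not part of the original text), the Levi-Civita connection can be computed in coordinates through its Christoffel symbols:

```python
import sympy as sp

# Christoffel symbols of the Levi-Civita connection for the round metric on
# S^2: Gamma^k_ij = (1/2) sum_l g^{kl} (d_i g_jl + d_j g_il - d_l g_ij).
theta, phi = sp.symbols('theta phi')
u = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()

def christoffel(k, i, j):
    return sp.simplify(sum(g_inv[k, l] * (sp.diff(g[j, l], u[i])
                                          + sp.diff(g[i, l], u[j])
                                          - sp.diff(g[i, j], u[l])) / 2
                           for l in range(2)))

print(christoffel(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))   # Gamma^phi_{theta phi} = cos(theta)/sin(theta)
```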
Hopf–Rinow theorem. The Riemannian manifold formula_1 with its Levi-Civita connection is "geodesically complete" if the domain of every maximal geodesic is formula_150. The plane formula_146 is geodesically complete. On the other hand, the punctured plane formula_151 with the restriction of the Riemannian metric from formula_146 is not geodesically complete as the maximal geodesic with initial conditions formula_148, formula_149 does not have domain formula_87. The Hopf–Rinow theorem characterizes geodesically complete manifolds. Theorem: Let formula_9 be a connected Riemannian manifold. The following are equivalent: the metric space formula_106 is complete, all closed and bounded subsets of formula_1 are compact, and formula_1 is geodesically complete. Parallel transport. In Euclidean space, all tangent spaces are canonically identified with each other via translation, so it is easy to move vectors from one tangent space to another. Parallel transport is a way of moving vectors from one tangent space to another along a curve in the setting of a general Riemannian manifold. Given a fixed connection, there is a unique way to do parallel transport. Specifically, call a smooth vector field formula_152 along a smooth curve formula_108 "parallel along formula_108" if formula_153 identically. Fix a curve formula_98 with formula_144 and formula_154. To parallel transport a vector formula_141 to a vector in formula_155 along formula_108, first extend formula_56 to a vector field parallel along formula_108, and then take the value of this vector field at formula_156. The images below show parallel transport induced by the Levi-Civita connection associated to two different Riemannian metrics on the punctured plane formula_157. The curve the parallel transport is done along is the unit circle. In polar coordinates, the metric on the left is the standard Euclidean metric formula_158, while the metric on the right is formula_159. This second metric has a singularity at the origin, so it does not extend past the puncture, but the first metric extends to the entire plane. Warning: This is parallel transport on the punctured plane "along" the unit circle, not parallel transport "on" the unit circle. Indeed, in the first image, the vectors fall outside of the tangent space to the unit circle.
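The two parallel-transport pictures just described can be reproduced numerically. The following sketch is an added illustration (SciPy is assumed to be available, and the initial vector is arbitrary); it integrates the parallel transport equation along the unit circle for both metrics:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parallel transport of the coordinate components (V^r, V^theta) along the
# unit circle gamma(t) = (r=1, theta=t), obeying
# dV^k/dt + Gamma^k_{ij} gamma'^i V^j = 0, for each of the two metrics above.

def euclidean_rhs(t, V):
    # dr^2 + r^2 dtheta^2: Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r
    return [V[1], -V[0]]          # evaluated on r = 1 with theta' = 1

def singular_rhs(t, V):
    # dr^2 + dtheta^2: all Christoffel symbols vanish
    return [0.0, 0.0]

V0 = [1.0, 0.0]                   # an arbitrary initial tangent vector
for rhs in (euclidean_rhs, singular_rhs):
    sol = solve_ivp(rhs, (0.0, 2 * np.pi), V0, t_eval=[np.pi, 2 * np.pi],
                    rtol=1e-10, atol=1e-10)
    print(sol.y[:, 0], sol.y[:, 1])
# Euclidean metric: components rotate, (-1, 0) at t = pi and back to (1, 0) at
# 2*pi (the vector is fixed in the plane while the polar frame turns).
# Singular metric: components stay (1, 0), so the vector turns with the frame.
```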
A Riemannian manifold is said to have "constant curvature" κ if every sectional curvature equals the number κ. This is equivalent to the condition that, relative to any coordinate chart, the Riemann curvature tensor can be expressed in terms of the metric tensor as formula_168 This implies that the Ricci curvature is given by "R""jk" ("n" – 1)"κg""jk" and the scalar curvature is "n"("n" – 1)"κ", where n is the dimension of the manifold. In particular, every Riemannian manifold of constant curvature is an Einstein manifold, thereby having constant scalar curvature. As found by Bernhard Riemann in his 1854 lecture introducing Riemannian geometry, the locally-defined Riemannian metric formula_169 has constant curvature κ. Any two Riemannian manifolds of the same constant curvature are locally isometric, and so it follows that any Riemannian manifold of constant curvature κ can be covered by coordinate charts relative to which the metric has the above form. A "Riemannian space form" is a Riemannian manifold with constant curvature which is additionally connected and geodesically complete. A Riemannian space form is said to be a "spherical space form" if the curvature is positive, a "Euclidean space form" if the curvature is zero, and a "hyperbolic space form" or "hyperbolic manifold" if the curvature is negative. In any dimension, the sphere with its standard Riemannian metric, Euclidean space, and hyperbolic space are Riemannian space forms of constant curvature 1, 0, and –1 respectively. Furthermore, the Killing–Hopf theorem says that any simply-connected spherical space form is homothetic to the sphere, any simply-connected Euclidean space form is homothetic to Euclidean space, and any simply-connected hyperbolic space form is homothetic to hyperbolic space. Using the covering manifold construction, any Riemannian space form is isometric to the quotient manifold of a simply-connected Riemannian space form, modulo a certain group action of isometries. For example, the isometry group of the n-sphere is the orthogonal group O("n" + 1). Given any finite subgroup G thereof in which only the identity matrix possesses 1 as an eigenvalue, the natural group action of the orthogonal group on the n-sphere restricts to a group action of G, with the quotient manifold "S""n" / "G" inheriting a geodesically complete Riemannian metric of constant curvature 1. Up to homothety, every spherical space form arises in this way; this largely reduces the study of spherical space forms to problems in group theory. For instance, this can be used to show directly that every even-dimensional spherical space form is homothetic to the standard metric on either the sphere or real projective space. There are many more odd-dimensional spherical space forms, although there are known algorithms for their classification. The list of three-dimensional spherical space forms is infinite but explicitly known, and includes the lens spaces and the Poincaré dodecahedral space. The case of Euclidean and hyperbolic space forms can likewise be reduced to group theory, based on study of the isometry group of Euclidean space and hyperbolic space. For example, the class of two-dimensional Euclidean space forms includes Riemannian metrics on the Klein bottle, the Möbius strip, the torus, the cylinder "S"1 × ℝ, along with the Euclidean plane. Unlike the case of two-dimensional spherical space forms, in some cases two space form structures on the same manifold are not homothetic. 
The case of two-dimensional hyperbolic space forms is even more complicated, having to do with Teichmüller space. In three dimensions, the Euclidean space forms are known, while the geometry of hyperbolic space forms in three and higher dimensions remains an area of active research known as hyperbolic geometry. Riemannian metrics on Lie groups. Left-invariant metrics on Lie groups. Let G be a Lie group, such as the group of rotations in three-dimensional space. Using the group structure, any inner product on the tangent space at the identity (or any other particular tangent space) can be transported to all other tangent spaces to define a Riemannian metric. Formally, given an inner product "g""e" on the tangent space at the identity, the inner product on the tangent space at an arbitrary point p is defined by formula_170 where for arbitrary x, "L""x" is the left multiplication map "G" → "G" sending a point y to "xy". Riemannian metrics constructed this way are "left-invariant"; right-invariant Riemannian metrics could be constructed likewise using the right multiplication map instead. The Levi-Civita connection and curvature of a general left-invariant Riemannian metric can be computed explicitly in terms of "g""e", the adjoint representation of G, and the Lie algebra associated to G. These formulas simplify considerably in the special case of a Riemannian metric which is "bi-invariant" (that is, simultaneously left- and right-invariant). All left-invariant metrics have constant scalar curvature. Left- and bi-invariant metrics on Lie groups are an important source of examples of Riemannian manifolds. Berger spheres, constructed as left-invariant metrics on the special unitary group SU(2), are among the simplest examples of the collapsing phenomena, in which a simply-connected Riemannian manifold can have small volume without having large curvature. They also give an example of a Riemannian metric which has constant scalar curvature but which is not Einstein, or even of parallel Ricci curvature. Hyperbolic space can be given a Lie group structure relative to which the metric is left-invariant. Any bi-invariant Riemannian metric on a Lie group has nonnegative sectional curvature, giving a variety of such metrics: a Lie group can be given a bi-invariant Riemannian metric if and only if it is the product of a compact Lie group with an abelian Lie group. Homogeneous spaces. A Riemannian manifold ("M", "g") is said to be "homogeneous" if for every pair of points x and y in M, there is some isometry f of the Riemannian manifold sending x to y. This can be rephrased in the language of group actions as the requirement that the natural action of the isometry group is transitive. Every homogeneous Riemannian manifold is geodesically complete and has constant scalar curvature. Up to isometry, all homogeneous Riemannian manifolds arise by the following construction. Given a Lie group G with compact subgroup K which does not contain any nontrivial normal subgroup of G, fix any complemented subspace W of the Lie algebra of K within the Lie algebra of G. If this subspace is invariant under the linear map ad"G"("k"): "W" → "W" for any element k of K, then G-invariant Riemannian metrics on the coset space "G"/"K" are in one-to-one correspondence with those inner products on W which are invariant under ad"G"("k"): "W" → "W" for every element k of K. Each such Riemannian metric is homogeneous, with G naturally viewed as a subgroup of the full isometry group. 
The above example of Lie groups with left-invariant Riemannian metrics arises as a very special case of this construction, namely when K is the trivial subgroup containing only the identity element. The calculations of the Levi-Civita connection and the curvature referenced there can be generalized to this context, where now the computations are formulated in terms of the inner product on W, the Lie algebra of G, and the direct sum decomposition of the Lie algebra of G into the Lie algebra of K and W. This reduces the study of the curvature of homogeneous Riemannian manifolds largely to algebraic problems. This reduction, together with the flexibility of the above construction, makes the class of homogeneous Riemannian manifolds very useful for constructing examples. Symmetric spaces. A connected Riemannian manifold ("M", "g") is said to be "symmetric" if for every point p of M there exists some isometry of the manifold with p as a fixed point and for which the negation of the differential at p is the identity map. Every Riemannian symmetric space is homogeneous, and consequently is geodesically complete and has constant scalar curvature. However, Riemannian symmetric spaces also have a much stronger curvature property not possessed by most homogeneous Riemannian manifolds, namely that the Riemann curvature tensor and Ricci curvature are parallel. Riemannian manifolds with this curvature property, which could loosely be phrased as "constant Riemann curvature tensor" (not to be confused with constant curvature), are said to be "locally symmetric". This property nearly characterizes symmetric spaces; Élie Cartan proved in the 1920s that a locally symmetric Riemannian manifold which is geodesically complete and simply-connected must in fact be symmetric. Many of the fundamental examples of Riemannian manifolds are symmetric. The most basic include the sphere and real projective spaces with their standard metrics, along with hyperbolic space. The complex projective space, quaternionic projective space, and Cayley plane are analogues of the real projective space which are also symmetric, as are complex hyperbolic space, quaternionic hyperbolic space, and Cayley hyperbolic space, which are instead analogues of hyperbolic space. Grassmannian manifolds also carry natural Riemannian metrics making them into symmetric spaces. Among the Lie groups with left-invariant Riemannian metrics, those which are bi-invariant are symmetric. Based on their algebraic formulation as special kinds of homogeneous spaces, Cartan achieved an explicit classification of symmetric spaces which are "irreducible", referring to those which cannot be locally decomposed as product spaces. Every such space is an example of an Einstein manifold; among them only the one-dimensional manifolds have zero scalar curvature. These spaces are important from the perspective of Riemannian holonomy. As found in the 1950s by Marcel Berger, any Riemannian manifold which is simply-connected and irreducible is either a symmetric space or has Riemannian holonomy belonging to a list of only seven possibilities. Six of the seven exceptions to symmetric spaces in Berger's classification fall into the fields of Kähler geometry, quaternion-Kähler geometry, G2 geometry, and Spin(7) geometry, each of which study Riemannian manifolds equipped with certain extra structures and symmetries. The seventh exception is the study of 'generic' Riemannian manifolds with no particular symmetry, as reflected by the maximal possible holonomy group. 
Infinite-dimensional manifolds. The statements and theorems above are for finite-dimensional manifolds, that is, manifolds whose charts map to open subsets of formula_171 These can be extended, to a certain degree, to infinite-dimensional manifolds; that is, manifolds that are modeled after a topological vector space; for example, Fréchet, Banach, and Hilbert manifolds. Definitions. Riemannian metrics are defined in a way similar to the finite-dimensional case. However, there is a distinction between two types of Riemannian metrics: a Riemannian metric is called "strong" if the map it induces from each tangent space to its continuous dual space is an isomorphism, and "weak" if this map is merely injective. Metric space structure. Length of curves and the Riemannian distance function formula_193 are defined in a way similar to the finite-dimensional case. The distance function formula_109, called the "geodesic distance", is always a pseudometric (a metric that does not separate points), but it may not be a metric. In the finite-dimensional case, the proof that the Riemannian distance function separates points uses the existence of a pre-compact open set around any point. In the infinite-dimensional case, open sets are no longer pre-compact, so the proof fails. Hopf–Rinow theorem. In the case of strong Riemannian metrics, one part of the finite-dimensional Hopf–Rinow theorem still holds. Theorem: Let formula_184 be a strong Riemannian manifold. Then metric completeness (in the metric formula_109) implies geodesic completeness. However, a geodesically complete strong Riemannian manifold might not be metrically complete and it might have closed and bounded subsets that are not compact. Further, a strong Riemannian manifold for which all closed and bounded subsets are compact might not be geodesically complete. If formula_5 is a weak Riemannian metric, then no notion of completeness implies the other in general. See also. <templatestyles src="Div col/styles.css"/> References. Notes. <templatestyles src="Reflist/styles.css" /> Sources. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "p \\in M" }, { "math_id": 3, "text": "T_pM" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "g_p : T_pM \\times T_pM \\to \\mathbb R" }, { "math_id": 7, "text": " \\|\\cdot\\|_p : T_pM \\to \\mathbb R" }, { "math_id": 8, "text": "\\|v\\|_p = \\sqrt{g_p(v,v)}" }, { "math_id": 9, "text": "(M,g)" }, { "math_id": 10, "text": "(x^1,\\ldots,x^n):U\\to\\mathbb{R}^n" }, { "math_id": 11, "text": "\\left\\{\\frac{\\partial}{\\partial x^1}\\Big|_p,\\dotsc, \\frac{\\partial}{\\partial x^n}\\Big|_p\\right\\}" }, { "math_id": 12, "text": "p\\in U" }, { "math_id": 13, "text": "g_{ij}|_p:=g_p\\left(\\left.\\frac{\\partial }{\\partial x^i}\\right|_p,\\left.\\frac{\\partial }{\\partial x^j}\\right|_p\\right)" }, { "math_id": 14, "text": "n^2" }, { "math_id": 15, "text": "g_{ij}:U\\to\\mathbb{R}" }, { "math_id": 16, "text": "n\\times n" }, { "math_id": 17, "text": "U" }, { "math_id": 18, "text": "g_p" }, { "math_id": 19, "text": "\\{ dx^1, \\ldots, dx^n \\}" }, { "math_id": 20, "text": " g=\\sum_{i,j}g_{ij} \\, dx^i \\otimes dx^j." }, { "math_id": 21, "text": "(U,x)." }, { "math_id": 22, "text": "g_{ij}" }, { "math_id": 23, "text": "v \\mapsto \\langle v, \\cdot \\rangle" }, { "math_id": 24, "text": "(p,v) \\mapsto g_p(v,\\cdot)" }, { "math_id": 25, "text": "TM" }, { "math_id": 26, "text": "T^*M" }, { "math_id": 27, "text": "(N,h)" }, { "math_id": 28, "text": "f:M\\to N" }, { "math_id": 29, "text": "g=f^\\ast h" }, { "math_id": 30, "text": "g_p(u,v)=h_{f(p)}(df_p(u),df_p(v))" }, { "math_id": 31, "text": "p\\in M" }, { "math_id": 32, "text": "u,v\\in T_pM." }, { "math_id": 33, "text": "f:M\\to N," }, { "math_id": 34, "text": "f:U\\to f(U)" }, { "math_id": 35, "text": "dV_g" }, { "math_id": 36, "text": "\\int_M dV_g" }, { "math_id": 37, "text": "x^1,\\ldots,x^n" }, { "math_id": 38, "text": "\\mathbb{R}^n." }, { "math_id": 39, "text": "g^\\text{can}" }, { "math_id": 40, "text": "g^\\text{can}\\left(\\sum_i a_i \\frac{\\partial}{\\partial x^i}, \\sum_j b_j \\frac{\\partial}{\\partial x^j} \\right) = \\sum_i a_i b_i" }, { "math_id": 41, "text": "g^\\text{can} = (dx^1)^2 + \\cdots + (dx^n)^2" }, { "math_id": 42, "text": "g_{ij}^\\text{can} = \\delta_{ij}" }, { "math_id": 43, "text": "\\delta_{ij}" }, { "math_id": 44, "text": "(g_{ij}^\\text{can}) = \\begin{pmatrix}\n1 & 0 & \\cdots & 0 \\\\\n0 & 1 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & \\cdots & 1\n\\end{pmatrix}." }, { "math_id": 45, "text": "(\\mathbb{R}^n,g^\\text{can})" }, { "math_id": 46, "text": "S^n" }, { "math_id": 47, "text": "\\mathbb R^{n+1}" }, { "math_id": 48, "text": "i : N \\to M" }, { "math_id": 49, "text": "i^*g" }, { "math_id": 50, "text": "N" }, { "math_id": 51, "text": "(N, i^*g)" }, { "math_id": 52, "text": "N \\subseteq M" }, { "math_id": 53, "text": "i(x) = x" }, { "math_id": 54, "text": "i^*g_p(v,w) = g_{i(p)} \\big( di_p(v), di_p(w) \\big), " }, { "math_id": 55, "text": "di_p(v)" }, { "math_id": 56, "text": "v" }, { "math_id": 57, "text": "i." 
}, { "math_id": 58, "text": "S^n=\\{x\\in\\mathbb{R}^{n+1}:(x^1)^2+\\cdots+(x^{n+1})^2=1\\}" }, { "math_id": 59, "text": "a,b,c" }, { "math_id": 60, "text": "\\left\\{x \\in \\mathbb R^3 : \\frac{x^2}{a^2} + \\frac{y^2}{b^2} + \\frac{z^2}{c^2} = 1 \\right\\}" }, { "math_id": 61, "text": "\\mathbb R^3" }, { "math_id": 62, "text": "f:\\mathbb{R}^n\\to\\mathbb{R}" }, { "math_id": 63, "text": "\\mathbb{R}^{n+1}" }, { "math_id": 64, "text": "\\widetilde{M}\\to M" }, { "math_id": 65, "text": "\\widetilde M" }, { "math_id": 66, "text": "\\tilde g" }, { "math_id": 67, "text": "\\tilde g = i^* g" }, { "math_id": 68, "text": "M\\times N" }, { "math_id": 69, "text": "h" }, { "math_id": 70, "text": "\\widetilde{g}" }, { "math_id": 71, "text": "M\\times N," }, { "math_id": 72, "text": "T_{(p,q)}(M\\times N) \\cong T_pM \\oplus T_qN," }, { "math_id": 73, "text": "\\widetilde{g}_{p,q} ((u_1, u_2), (v_1, v_2)) = g_p(u_1, v_1) + h_q(u_2, v_2)." }, { "math_id": 74, "text": "(U,x)" }, { "math_id": 75, "text": "(V,y)" }, { "math_id": 76, "text": "(U \\times V, (x,y))" }, { "math_id": 77, "text": "M \\times N." }, { "math_id": 78, "text": "g_U" }, { "math_id": 79, "text": "h_V" }, { "math_id": 80, "text": "(U \\times V,(x,y))" }, { "math_id": 81, "text": "\\widetilde{g} = \\sum_{ij} \\widetilde{g}_{ij} \\, dx^i \\, dx^j" }, { "math_id": 82, "text": "(\\widetilde{g}_{ij}) = \\begin{pmatrix} g_U & 0 \\\\ 0 & h_V \\end{pmatrix}." }, { "math_id": 83, "text": "T^n = S^1\\times\\cdots\\times S^1" }, { "math_id": 84, "text": "S^1" }, { "math_id": 85, "text": "T^n" }, { "math_id": 86, "text": "\\mathbb R \\times \\cdots \\times \\mathbb R" }, { "math_id": 87, "text": "\\mathbb R" }, { "math_id": 88, "text": "\\mathbb R^n" }, { "math_id": 89, "text": "g_1, \\ldots, g_k" }, { "math_id": 90, "text": "M." }, { "math_id": 91, "text": "f_1, \\ldots, f_k" }, { "math_id": 92, "text": "f_1 g_1 + \\ldots + f_k g_k" }, { "math_id": 93, "text": "(M,g)," }, { "math_id": 94, "text": "F:M\\to\\mathbb{R}^N" }, { "math_id": 95, "text": "F" }, { "math_id": 96, "text": "\\mathbb{R}^N" }, { "math_id": 97, "text": "g." }, { "math_id": 98, "text": "\\gamma : [0,1] \\to M" }, { "math_id": 99, "text": "\\gamma'(t) \\in T_{\\gamma(t)}M" }, { "math_id": 100, "text": "t\\mapsto\\|\\gamma'(t)\\|_{\\gamma(t)}" }, { "math_id": 101, "text": "[0,1]" }, { "math_id": 102, "text": "L(\\gamma)" }, { "math_id": 103, "text": "L(\\gamma)=\\int_0^1 \\|\\gamma'(t)\\|_{\\gamma(t)} \\, dt." }, { "math_id": 104, "text": "d_g:M\\times M\\to[0,\\infty)" }, { "math_id": 105, "text": "d_g(p,q) = \\inf \\{ L(\\gamma) : \\gamma \\text{ an admissible curve with } \\gamma(0) = p, \\gamma(1) = q \\}." }, { "math_id": 106, "text": "(M,d_g)" }, { "math_id": 107, "text": "\\lambda" }, { "math_id": 108, "text": "\\gamma" }, { "math_id": 109, "text": "d_g" }, { "math_id": 110, "text": "d_g:M\\times M\\to\\mathbb{R}" }, { "math_id": 111, "text": "\\operatorname{diam}(M,d_g)=\\sup\\{d_g(p,q):p,q\\in M\\}." 
}, { "math_id": 112, "text": "\\mathfrak X(M)" }, { "math_id": 113, "text": "\\nabla : \\mathfrak X(M) \\times \\mathfrak X(M) \\to \\mathfrak X(M)" }, { "math_id": 114, "text": "(X,Y) \\mapsto \\nabla_X Y" }, { "math_id": 115, "text": "f \\in C^\\infty(M)" }, { "math_id": 116, "text": "\\nabla_{f_1 X_1 + f_2 X_2} Y = f_1 \\,\\nabla_{X_1} Y + f_2 \\, \\nabla_{X_2} Y, " }, { "math_id": 117, "text": "\\nabla_X fY=X(f)Y+ f\\,\\nabla_X Y" }, { "math_id": 118, "text": "\\nabla_X Y" }, { "math_id": 119, "text": "Y" }, { "math_id": 120, "text": "X" }, { "math_id": 121, "text": "\\nabla" }, { "math_id": 122, "text": "X\\bigl(g(Y,Z)\\bigr) = g(\\nabla_X Y, Z) + g(Y, \\nabla_X Z)" }, { "math_id": 123, "text": "\\nabla_X Y - \\nabla_Y X = [X,Y], " }, { "math_id": 124, "text": "[\\cdot,\\cdot]" }, { "math_id": 125, "text": "X : [0,1] \\to TM" }, { "math_id": 126, "text": "X(t) \\in T_{\\gamma(t)}M" }, { "math_id": 127, "text": "t \\in [0,1]" }, { "math_id": 128, "text": "\\mathfrak X(\\gamma)" }, { "math_id": 129, "text": "f : [0,1] \\to \\mathbb R" }, { "math_id": 130, "text": "(fX)(t) = f(t)X(t)" }, { "math_id": 131, "text": "X \\in \\mathfrak X(\\gamma)." }, { "math_id": 132, "text": "\\tilde X" }, { "math_id": 133, "text": "X(t) = \\tilde X_{\\gamma(t)}" }, { "math_id": 134, "text": "D_t : \\mathfrak X(\\gamma) \\to \\mathfrak X(\\gamma)" }, { "math_id": 135, "text": "D_t(aX+bY) = a\\,D_tX + b\\,D_tY," }, { "math_id": 136, "text": "D_t(fX) = f'X + f\\,D_tX," }, { "math_id": 137, "text": "D_tX(t) = \\nabla_{\\gamma'(t)} \\tilde X" }, { "math_id": 138, "text": "D_t\\gamma'" }, { "math_id": 139, "text": "D_t\\gamma' = 0" }, { "math_id": 140, "text": "t" }, { "math_id": 141, "text": "v \\in T_pM" }, { "math_id": 142, "text": "\\gamma : I \\to M" }, { "math_id": 143, "text": "I" }, { "math_id": 144, "text": "\\gamma(0) = p" }, { "math_id": 145, "text": "\\gamma'(0) = v" }, { "math_id": 146, "text": "\\mathbb R^2" }, { "math_id": 147, "text": "S^2" }, { "math_id": 148, "text": "p = (1,1)" }, { "math_id": 149, "text": "v = (1,1)" }, { "math_id": 150, "text": "(-\\infty,\\infty)" }, { "math_id": 151, "text": "\\mathbb{R}^2\\smallsetminus\\{(0,0)\\}" }, { "math_id": 152, "text": "V" }, { "math_id": 153, "text": "D_t V = 0" }, { "math_id": 154, "text": "\\gamma(1) = q" }, { "math_id": 155, "text": "T_qM" }, { "math_id": 156, "text": "q" }, { "math_id": 157, "text": "\\mathbb R^2 \\smallsetminus \\{0,0\\}" }, { "math_id": 158, "text": "dx^2 + dy^2 = dr^2 + r^2 \\, d\\theta^2" }, { "math_id": 159, "text": "dr^2 + d\\theta^2" }, { "math_id": 160, "text": "R : \\mathfrak X(M) \\times \\mathfrak X(M) \\times \\mathfrak X(M) \\to \\mathfrak X(M)" }, { "math_id": 161, "text": "R(X, Y)Z = \\nabla_X\\nabla_Y Z - \\nabla_Y \\nabla_X Z - \\nabla_{[X, Y]} Z" }, { "math_id": 162, "text": "[X, Y]" }, { "math_id": 163, "text": "(1,3)" }, { "math_id": 164, "text": "Ric(X,Y) = \\operatorname{tr}(Z \\mapsto R(Z,X)Y)" }, { "math_id": 165, "text": "\\operatorname{tr}" }, { "math_id": 166, "text": "Ric" }, { "math_id": 167, "text": "Ric = \\lambda g" }, { "math_id": 168, "text": "R_{ijkl}=\\kappa(g_{il}g_{jk}-g_{ik}g_{jl})." }, { "math_id": 169, "text": "\\frac{dx_1^2+\\cdots+dx_n^2}{(1+\\frac{\\kappa}{4}(x_1^2+\\cdots+x_n^2))^2}" }, { "math_id": 170, "text": "g_p(u,v)=g_e(dL_{p^{-1}}(u),dL_{p^{-1}}(v))," }, { "math_id": 171, "text": "\\R^n." 
}, { "math_id": 172, "text": "g : TM \\times TM \\to \\R," }, { "math_id": 173, "text": "x \\in M" }, { "math_id": 174, "text": "g_x : T_xM \\times T_xM \\to \\R" }, { "math_id": 175, "text": "T_xM." }, { "math_id": 176, "text": "g_x" }, { "math_id": 177, "text": "T_xM" }, { "math_id": 178, "text": "(H, \\langle \\,\\cdot, \\cdot\\, \\rangle)" }, { "math_id": 179, "text": "x \\in H," }, { "math_id": 180, "text": "H" }, { "math_id": 181, "text": "T_xH." }, { "math_id": 182, "text": "g_x(u,v) = \\langle u, v \\rangle" }, { "math_id": 183, "text": "x, u, v \\in H" }, { "math_id": 184, "text": "(M, g)" }, { "math_id": 185, "text": "\\operatorname{Diff}(M)" }, { "math_id": 186, "text": "\\mu" }, { "math_id": 187, "text": "L^2" }, { "math_id": 188, "text": "G" }, { "math_id": 189, "text": "f\\in \\operatorname{Diff}(M)," }, { "math_id": 190, "text": "u, v \\in T_f\\operatorname{Diff}(M)." }, { "math_id": 191, "text": "x \\in M, u(x) \\in T_{f(x)}M" }, { "math_id": 192, "text": "G_f(u,v) = \\int _M g_{f(x)} (u(x),v(x)) \\, d\\mu (x)" }, { "math_id": 193, "text": "d_g : M \\times M \\to [0,\\infty)" } ]
https://en.wikipedia.org/wiki?curid=144652
14467558
Surface force
Surface force, denoted "fs", is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area. formula_0, where "f" = force, "p" = pressure, and "A" = area on which a uniform pressure acts. Examples. Pressure related surface force. Since pressure is formula_1, and area is formula_2, a pressure of formula_3 over an area of formula_4 will produce a surface force of formula_5.
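The relation "f" = "p" · "A" is simple enough to express directly in code. The following is a minimal Python sketch reproducing the worked example from the text; the function name is chosen here purely for illustration.

```python
def surface_force(pressure_pa, area_m2):
    """Surface force (N) from a uniform pressure (Pa) acting over an area (m^2)."""
    return pressure_pa * area_m2

# The example from the text: 5 Pa acting over 20 m^2 gives 100 N
print(surface_force(5.0, 20.0))  # 100.0
```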
[ { "math_id": 0, "text": " f_s=p \\cdot A \\ " }, { "math_id": 1, "text": " \\frac{\\mathit{force}}{\\mathit{area}}=\\mathrm{\\frac{N}{m^2}} " }, { "math_id": 2, "text": " (length)\\cdot(width) = \\mathrm{m \\cdot m }= \\mathrm{m^2} " }, { "math_id": 3, "text": " 5\\ \\mathrm{\\frac{N}{m^2}} = 5\\ \\mathrm{Pa} " }, { "math_id": 4, "text": " 20\\ \\mathrm{m^2} " }, { "math_id": 5, "text": " (5\\ \\mathrm{Pa}) \\cdot (20\\ \\mathrm{m^2}) = 100\\ \\mathrm{N} " } ]
https://en.wikipedia.org/wiki?curid=14467558
1446801
Ewens's sampling formula
In population genetics, Ewens's sampling formula describes the probabilities associated with counts of how many different alleles are observed a given number of times in the sample. Definition. Ewens's sampling formula, introduced by Warren Ewens, states that under certain conditions (specified below), if a random sample of "n" gametes is taken from a population and classified according to the gene at a particular locus, then the probability that there are "a"1 alleles represented once in the sample, and "a"2 alleles represented twice, and so on, is formula_0 for some positive number "θ" representing the population mutation rate, whenever formula_1 is a sequence of nonnegative integers such that formula_2 The phrase "under certain conditions" used above is made precise by the following assumptions: the sample size "n" is small compared with the size of the whole population; the population is in statistical equilibrium under mutation and genetic drift, with the role of selection at the locus in question being negligible; and every mutant allele is novel (the infinite-alleles model). This is a probability distribution on the set of all partitions of the integer "n". Among probabilists and statisticians it is often called the multivariate Ewens distribution. Mathematical properties. When "θ" = 0, the probability is 1 that all "n" genes are the same. When "θ" = 1, then the distribution is precisely that of the integer partition induced by a uniformly distributed random permutation. As "θ" → ∞, the probability that no two of the "n" genes are the same approaches 1. This family of probability distributions enjoys the property that if after the sample of "n" is taken, "m" of the "n" gametes are chosen without replacement, then the resulting probability distribution on the set of all partitions of the smaller integer "m" is just what the formula above would give if "m" were put in place of "n". The Ewens distribution arises naturally from the Chinese restaurant process.
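The formula can be evaluated directly. Below is a minimal Python sketch (the function name and example configuration are illustrative, not part of the original formula) that computes the probability of a configuration ("a"1, ..., "a""n") for a given "θ". For "θ" = 1 and "n" = 3, the configuration with one singleton allele and one doubleton has probability 1/2, matching the "θ" = 1 permutation property mentioned above.

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Ewens's sampling formula: probability of the configuration a,
    where a[j-1] is the number of alleles represented exactly j times."""
    n = sum(j * a_j for j, a_j in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))  # theta(theta+1)...(theta+n-1)
    config = prod(theta ** a_j / (j ** a_j * factorial(a_j))
                  for j, a_j in enumerate(a, start=1))
    return factorial(n) / rising * config

print(ewens_probability([1, 1, 0], theta=1.0))  # 0.5
```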
[ { "math_id": 0, "text": "\\operatorname{Pr}(a_1,\\dots,a_n; \\theta)={n! \\over \\theta(\\theta+1)\\cdots(\\theta+n-1)}\\prod_{j=1}^n{\\theta^{a_j} \\over j^{a_j} a_j!}," }, { "math_id": 1, "text": "a_1, \\ldots, a_n" }, { "math_id": 2, "text": "a_1+2a_2+3a_3+\\cdots+na_n=\\sum_{i=1}^{n} i a_i = n.\\," } ]
https://en.wikipedia.org/wiki?curid=1446801
14469114
Rhenium–osmium dating
Radiometric dating method using rhenium-187 and osmium-187 Rhenium–osmium dating is a form of radiometric dating based on the beta decay of the isotope 187Re to 187Os. This normally occurs with a half-life of 41.6 × 10⁹ y, but studies using fully ionised 187Re atoms have found that this can decrease to only 33 y. Both rhenium and osmium are strongly siderophilic (iron loving), while Re is also chalcophilic (sulfur loving), making it useful in dating sulfide ores such as gold and Cu–Ni deposits. This dating method is based on an isochron calculated from isotopic ratios measured using N-TIMS (Negative Thermal Ionization Mass Spectrometry). Rhenium–osmium isochron. Rhenium–osmium dating is carried out by the isochron dating method. Isochrons are created by analysing several samples believed to have formed at the same time from a common source. The Re–Os isochron plots the ratio of radiogenic 187Os to non-radiogenic 188Os against the ratio of the parent isotope 187Re to the non-radiogenic isotope 188Os. The stable and relatively abundant osmium isotope 188Os is used to normalize the radiogenic isotope in the isochron. The Re–Os isochron is defined by the following equation: formula_0 where: "t" is the age of the sample, λ is the decay constant of 187Re, ("e"λ"t"−1) is the slope of the isochron which defines the age of the system. A good example of an application of the Re–Os isochron method is a study on the dating of a gold deposit in the Witwatersrand mining camp, South Africa. Rhenium–osmium isotopic evolution. Rhenium and osmium were strongly refractory and siderophile during the initial accretion of the Earth, which caused both elements to preferentially enter the Earth's core. Thus the two elements should be depleted in the silicate Earth, yet the 187Os / 188Os ratio of the mantle is chondritic. The reason for this apparent contradiction lies in the different behavior of Re and Os in partial melt events. Re tends to enter the melt phase (incompatible) while Os remains in the solid residue (compatible). This causes high ratios of Re/Os in oceanic crust (which is derived from partial melting of mantle) and low ratios of Re/Os in the lower mantle. In this regard, the Re–Os system is extremely helpful for studying the geochemical evolution of mantle rocks and for defining the chronology of mantle differentiation. Peridotite xenoliths, which are thought to sample the upper mantle, sometimes contain supra-chondritic Os-isotopic ratios. This is thought to be evidence of subducted ancient high Re/Os basaltic crust that is being recycled back into the mantle. This combination of radiogenic (187Os that was created by decay of 187Re) and nonradiogenic melts helps to support the theory of at least two Os-isotopic reservoirs in the mantle. The volume of both these reservoirs is thought to be around 5–10% of the whole mantle. The first reservoir is characterized by depletion in Re and proxies for melt fertility (such as concentrations of elements like Ca and Al). The second reservoir is chondritic in composition. Direct measurement of the age of continental crust through Re–Os dating is difficult. Infiltration of xenoliths by their commonly Re-rich magma alters the true elemental Re/Os ratios. Instead, determining model ages can be done in two ways: "Re depletion" model ages or the "melting age" model.
The former finds the minimum age of the extraction event assuming the elemental Re/Os ratio equals 0 (komatiite residues have Re/Os of 0, so this is assuming a xenolith was extracted from a near-komatiite melt). The latter gives the age of the melting event inferred from the point when a melt proxy like Al2O3 is equal to 0 (ancient subcontinental lithosphere has weight percentages of CaO and Al2O3 ranging from 0 to 2%). Pt–Re–Os systematics. The radioactive decay of 190Pt to 186Os has a half-life of 4.83(3) × 10¹¹ years (far longer than the age of the Earth, so only a tiny fraction of 190Pt has decayed). However, in-situ 187Os / 188Os and 186Os / 188Os of modern plume-related magmas show simultaneous enrichment, which implies a source that is supra-chondritic in Pt/Os and Re/Os. Since both parental isotopes have extremely long half-lives, the Os-isotope rich reservoir must be very old to allow enough time for the daughter isotopes to form. These observations are interpreted to support the theory that the Archean subducted crust contributed Os-isotope rich melts back into the mantle. References. <templatestyles src="Reflist/styles.css" />
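Given a measured isochron slope, the age follows by inverting ("e"λ"t" − 1): "t" = ln(1 + slope)/λ. The short Python sketch below uses the 41.6 × 10⁹ y half-life quoted above to obtain λ; the slope value is hypothetical and chosen only to illustrate the arithmetic.

```python
import math

HALF_LIFE_RE187 = 41.6e9                      # half-life of 187Re, in years
LAMBDA_RE187 = math.log(2) / HALF_LIFE_RE187  # decay constant, per year

def isochron_age(slope):
    """Age in years from the slope e^(lambda*t) - 1 of a Re-Os isochron."""
    return math.log(1.0 + slope) / LAMBDA_RE187

# A hypothetical slope of 0.05 corresponds to an age of about 2.9 billion years
print(f"{isochron_age(0.05) / 1e9:.2f} Gyr")  # 2.93 Gyr
```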
[ { "math_id": 0, "text": "\\left(\\frac{{}^{187}\\mathrm{Os}}{{}^{188}\\mathrm{Os}}\\right)_{\\mathrm{present}} = \\left(\\frac{{}^{187}\\mathrm{Os}}{{}^{188}\\mathrm{Os}}\\right)_{\\mathrm{initial}} + \\left(\\frac{{}^{187}\\mathrm{Re}}{{}^{188}\\mathrm{Os}}\\right) \\cdot (e^{\\lambda t}-1)," } ]
https://en.wikipedia.org/wiki?curid=14469114
14469299
Autonomous convergence theorem
In mathematics, an autonomous convergence theorem is one of a family of related theorems which specify conditions guaranteeing global asymptotic stability of a continuous autonomous dynamical system. History. The Markus–Yamabe conjecture was formulated as an attempt to give conditions for global stability of continuous dynamical systems in two dimensions. However, the Markus–Yamabe conjecture does not hold for dimensions higher than two, a problem which autonomous convergence theorems attempt to address. The first autonomous convergence theorem was constructed by Russell Smith. This theorem was later refined by Michael Li and James Muldowney. An example autonomous convergence theorem. A comparatively simple autonomous convergence theorem is as follows: Let formula_0 be a vector in some space formula_1, evolving according to an autonomous differential equation formula_2. Suppose that formula_3 is convex and forward invariant under formula_4, and that there exists a fixed point formula_5 such that formula_6. If there exists a logarithmic norm formula_7 such that the Jacobian formula_8 satisfies formula_9 for all values of formula_0, then formula_10 is the only fixed point, and it is globally asymptotically stable. This autonomous convergence theorem is very closely related to the Banach fixed-point theorem. How autonomous convergence works. Note: this is an intuitive description of how autonomous convergence theorems guarantee stability, not a strictly mathematical description. The key point in the example theorem given above is the existence of a negative logarithmic norm, which is derived from a vector norm. The vector norm effectively measures the distance between points in the vector space on which the differential equation is defined, and the negative logarithmic norm means that distances between points, as measured by the corresponding vector norm, are decreasing with time under the action of formula_4. So long as the trajectories of all points in the phase space are bounded, all trajectories must therefore eventually converge to the same point. The autonomous convergence theorems by Russell Smith, Michael Li and James Muldowney work in a similar manner, but they rely on showing that the area of two-dimensional shapes in phase space decrease with time. This means that no periodic orbits can exist, as all closed loops must shrink to a point. If the system is bounded, then according to Pugh's closing lemma there can be no chaotic behaviour either, so all trajectories must eventually reach an equilibrium. Michael Li has also developed an extended autonomous convergence theorem which is applicable to dynamical systems containing an invariant manifold.
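As a concrete illustration of the example theorem, the Python sketch below checks the logarithmic norm condition numerically for a hypothetical two-dimensional system (the system and the sampling grid are invented for illustration). It uses the logarithmic norm induced by the Euclidean norm, for which μ₂("A") equals the largest eigenvalue of ("A" + "A"ᵀ)/2; if this stays negative, the theorem's hypothesis is satisfied.

```python
import numpy as np

def log_norm_2(A):
    """Logarithmic norm induced by the Euclidean norm:
    the largest eigenvalue of the symmetric part (A + A^T) / 2."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

# Hypothetical system: x' = -2x + sin(y), y' = cos(x) - 3y
def jacobian(x, y):
    return np.array([[-2.0,        np.cos(y)],
                     [-np.sin(x), -3.0      ]])

# Sample a region of state space; for this system mu_2(J) is negative
# everywhere (analytically at most about -1.38), so the condition holds
worst = max(log_norm_2(jacobian(x, y))
            for x in np.linspace(-5, 5, 51)
            for y in np.linspace(-5, 5, 51))
print(worst)
```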
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "X \\subseteq \\mathbb{R}^n" }, { "math_id": 2, "text": "\\dot{x} = f(x)" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\hat{x} \\in X" }, { "math_id": 6, "text": "f(\\hat{x}) = 0" }, { "math_id": 7, "text": "\\mu" }, { "math_id": 8, "text": "J(x) = D_x f" }, { "math_id": 9, "text": "\\mu(J(x)) < 0" }, { "math_id": 10, "text": "\\hat{x}" } ]
https://en.wikipedia.org/wiki?curid=14469299
1446962
Chromatic circle
Clock diagram for displaying relationships among pitch classes The chromatic circle is a clock diagram for displaying relationships among the equal-tempered pitch classes making up a given equal temperament tuning's chromatic scale on a circle. Explanation. If one starts on any equal-tempered pitch and repeatedly ascends by the musical interval of a semitone, one will eventually land on a pitch with the same pitch class as the initial one, having passed through all the other equal-tempered chromatic pitch classes in between. Since the space is circular, it is also possible to descend by semitone. The chromatic circle is useful because it represents melodic distance, which is often correlated with physical distance on musical instruments. For instance, assuming 12-tone equal temperament, to move from any C on a keyboard to the nearest E, one must move up four semitones, corresponding to four clockwise steps on the chromatic circle. One can also move "down" by eight semitones, corresponding to eight counterclockwise steps on the pitch class circle. Larger motions in pitch space can be represented in pitch class space by paths that "wrap around" the chromatic circle one or more times. For any positive integer "N", one can represent all of the equal-tempered pitch classes of "N"-tone equal temperament by the cyclic group of order "N", or equivalently, the residue classes modulo "N", Z/NZ. For example, in twelve-tone equal temperament, the group formula_0 has four generators, which can be identified with the ascending and descending semitones and the ascending and descending perfect fifths. In other tunings, such as 31 equal temperament, many more generators are possible. The semitonal generator gives rise to the chromatic circle, while the perfect fourth and perfect fifth give rise to the circle of fifths. Comparison with circle of fifths. A key difference between the chromatic circle and the circle of fifths is that the former is truly a continuous space: every point on the circle corresponds to a conceivable pitch class, and every conceivable pitch class corresponds to a point on the circle. By contrast, the circle of fifths is fundamentally a "discrete" structure, and there is no obvious way to assign pitch classes to each of its points. References. <templatestyles src="Reflist/styles.css" />
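The arithmetic of the chromatic circle is just addition modulo 12. The Python sketch below (names are illustrative) reproduces the C-to-E example from the text: four clockwise steps or eight counterclockwise steps.

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def circle_steps(from_pc, to_pc):
    """Clockwise and counterclockwise semitone steps between two
    pitch classes on the 12-tone chromatic circle (arithmetic mod 12)."""
    i, j = PITCH_CLASSES.index(from_pc), PITCH_CLASSES.index(to_pc)
    up = (j - i) % 12
    down = (12 - up) % 12
    return up, down

print(circle_steps("C", "E"))  # (4, 8): up four semitones or down eight
```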
[ { "math_id": 0, "text": " Z_{12} " } ]
https://en.wikipedia.org/wiki?curid=1446962
1446965
Net domestic product
The net domestic product (NDP) equals the gross domestic product (GDP) minus depreciation on a country's capital goods. formula_0 Net domestic product accounts for capital that has been consumed over the year in the form of housing, vehicle, or machinery deterioration. The depreciation accounted for is often referred to as "capital consumption allowance" and represents the amount of capital that would be needed to replace those depreciated assets. The portion of investment spending that is used to replace worn-out and obsolete equipment — depreciation — while essential for maintaining the level of output, does not increase the economy's capacities in any way. If GDP were to grow simply as a result of the fact that more money was being spent to maintain the capital stock because of increased depreciation, it would not mean that anyone had been made better off. Because of this, some economists view NDP as a better measure of social and economic well-being than GDP. If the country is not able to replace the capital stock lost through depreciation, then GDP will fall. In addition, a growing gap between GDP and NDP indicates increasing obsolescence of capital goods, while a narrowing gap means that the condition of capital stock in the country is improving. Depreciation reduces the value of capital, which is why it is subtracted from GDP to obtain NDP. References. <templatestyles src="Reflist/styles.css" />
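Since the identity is a simple subtraction, a one-line function suffices; the figures below are hypothetical and serve only to illustrate the relation.

```python
def net_domestic_product(gdp, depreciation):
    """NDP = GDP minus depreciation (the capital consumption allowance)."""
    return gdp - depreciation

# Hypothetical economy: GDP of 20 and depreciation of 3, in the same currency units
print(net_domestic_product(20.0, 3.0))  # 17.0
```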
[ { "math_id": 0, "text": "GDP - D = NDP" } ]
https://en.wikipedia.org/wiki?curid=1446965
144716
SAT
Standardized test widely used for college admissions in the United States The SAT is a standardized test widely used for college admissions in the United States. Since its debut in 1926, its name and scoring have changed several times. For much of its history, it was called the Scholastic Aptitude Test and had two components, Verbal and Mathematical, each of which was scored on a range from 200 to 800. Later it was called the Scholastic Assessment Test, then the SAT I: Reasoning Test, then the SAT Reasoning Test, then simply the SAT. The SAT is wholly owned, developed, and published by the College Board, a private, not-for-profit organization in the United States. It is administered on behalf of the College Board by the Educational Testing Service, another non-profit organization, which until shortly before the 2016 redesign of the SAT developed the test and maintained a repository of items (test questions) as well. The test is intended to assess students' readiness for college. Although the test was originally designed not to be aligned with high school curricula, several adjustments were made for the version of the SAT introduced in 2016. College Board president David Coleman added that he wanted to make the test reflect more closely what students learn in high school with the new Common Core standards, which have been adopted by the District of Columbia and many states. Many students prepare for the SAT using books, classes, online courses, and tutoring, which are offered by a variety of companies and organizations. One of the best-known such companies is Kaplan, Inc., which has offered SAT preparation courses since 1946. Starting with the 2015–16 school year, the College Board began working with Khan Academy to provide free online SAT preparation courses. Historically, starting around 1937, the tests offered under the SAT banner also included optional subject-specific SAT Subject Tests, which were called SAT Achievement Tests until 1993 and then were called SAT II: Subject Tests until 2005; these were discontinued after June 2021. After June 2021, with some exceptions, the SAT no longer has an essay section. In the past, the test was taken using paper forms that were filled in using a number 2 pencil and were scored (except for hand-written response sections) using Scantron-type optical mark recognition technology. Starting in March 2023 for international test-takers and March 2024 for those within the U.S., the testing is administered using a computer program called Bluebook running on a laptop or tablet computer brought by the student or provided at the testing site. The test was also made adaptive, customizing the questions that are presented to the student based on how they perform on questions asked earlier in the test, and shortened from three hours to two hours and 14 minutes. While a considerable amount of research has been done on the SAT, many questions and misconceptions remain. Outside of college admissions, the SAT is also used by researchers studying human intelligence in general and intellectual precociousness in particular, and by some employers in the recruitment process. Function. The SAT is typically taken by high school juniors and seniors. The College Board states that the SAT is intended to measure literacy, numeracy and writing skills that are needed for academic success in college. They state that the SAT assesses how well the test-takers analyze and solve problems—skills they learned in school that they will need in college.
The College Board also claims that the SAT, in combination with high school grade point average (GPA), provides a better indicator of success in college than high school grades alone, as measured by college freshman GPA. Various studies conducted over the lifetime of the SAT show a statistically significant increase in correlation of high school grades and college freshman grades when the SAT is factored in. The predictive validity and powers of the SAT are topics of research in psychometrics. The SAT is a norm-referenced test intended to yield scores that follow a bell curve distribution among test-takers. To achieve this distribution, test designers include challenging multiple-choice questions with plausible but incorrect options, known as "distractors", exclude questions that a majority of students answer correctly, and impose tight time constraints during the examination. There are substantial differences in funding, curricula, grading, and difficulty among U.S. secondary schools due to U.S. federalism, local control, and the prevalence of private, distance, and home-schooled students. SAT (and ACT) scores are intended to supplement the secondary school record and help admission officers put local data—such as course work, grades, and class rank—in a national perspective. Historically, the SAT was more widely used by students living in coastal states and the ACT was more widely used by students in the Midwest and South; in recent years, however, an increasing number of students on the East and West coasts have been taking the ACT. Since 2007, all four-year colleges and universities in the United States that require a test as part of an application for admission will accept either the SAT or ACT, and as of Fall 2022, more than 1400 four-year colleges and universities did not require any standardized test scores at all for admission, though some of them were planning to apply this policy only temporarily due to the coronavirus pandemic. SAT test-takers are given two hours and 14 minutes to complete the test (plus a 10-minute break between the Reading and Writing section and the Math section), and as of 2024 the test costs US$60.00, plus additional fees for late test registration, registration by phone, registration changes, rapid delivery of results, delivery of results to more than four institutions, result deliveries ordered more than nine days after the test, and testing administered outside the United States, as applicable. Fee waivers are offered to low-income students within the U.S. and its territories. Scores on the SAT range from 400 to 1600, combining test results from two 200-to-800-point sections: the Mathematics section and the Evidence-Based Reading and Writing section. Although taking the SAT, or its competitor the ACT, is required for freshman entry to many colleges and universities in the United States, during the late 2010s, many institutions made these entrance exams optional, but this did not stop students from attempting to achieve high scores, as they and their parents were skeptical of what "optional" means in this context. In fact, the test-taking population was increasing steadily. And while this may have resulted in a long-term decline in scores, experts cautioned against using this to gauge the scholastic levels of the entire U.S. population. Structure. The SAT has two main sections, namely Evidence-Based Reading and Writing (EBRW, normally known as the "English" portion of the test) and the Math section.
These are both further broken down into four sections: Reading, Writing and Language, Math (no calculator), and Math (calculator allowed). Until the summer of 2021, the test taker was also optionally able to write an essay which, in that case, was the fifth test section. (The essay was dropped after June 2021, except in a few states and school districts.) The total time for the scored portion of the SAT is two hours and 14 minutes. Some test takers who are not taking the essay may also have a fifth section, which is used, at least in part, for the pretesting of questions that may appear on future administrations of the SAT. (These questions are not included in the computation of the SAT score.) Two section scores result from taking the SAT: Evidence-Based Reading and Writing, and Math. Section scores are reported on a scale of 200 to 800, and each section score is a multiple of ten. A total score for the SAT is calculated by adding the two section scores, resulting in total scores that range from 400 to 1600. In addition to the two section scores, three "test" scores on a scale of 10 to 40 are reported, one for each of Reading, Writing and Language, and Math, with an increment of 1 for Reading / Writing and Language, and 0.5 for Math. There are also two cross-test scores that each range from 10 to 40 points: Analysis in History/Social Studies and Analysis in Science. The essay, if taken, was scored separately from the two section scores. Two people score each essay by each awarding 1 to 4 points in each of three categories: Reading, Analysis, and Writing. These two scores from the different examiners are then combined to give a total score from 2 to 8 points per category. Though sometimes people quote their essay score out of 24, the College Board themselves do not combine the different categories to give one essay score, instead giving a score for each category. There is no penalty or negative marking for guessing on the SAT: scores are based on the number of questions answered correctly. The optional essay was last featured in the June 2021 administration. College Board said it discontinued the essay section because "there are other ways for students to demonstrate their mastery of essay writing," including the test's reading and writing portion. It also acknowledged that the COVID-19 pandemic had played a role in the change, accelerating 'a process already underway'. Reading Test. The Reading Test of the SAT contains one section of 52 questions and a time limit of 65 minutes. All questions are multiple-choice and based on reading passages. Tables, graphs, and charts may accompany some passages, but no math is required to correctly answer the corresponding questions. There are five passages (up to two of which may be a pair of smaller passages) on the Reading Test and ten or eleven questions per passage or passage pair. SAT Reading passages draw from three main fields: history, social studies, and science. Each SAT Reading Test always includes: one passage from U.S. or world literature; one passage from either a U.S. founding document or a related text; one passage about economics, psychology, sociology, or another social science; and, two science passages. Answers to all of the questions are based only on the content stated in or implied by the passage or passage pair. The Reading Test contributes (with the Writing and Language Test) to two subscores, each ranging from 1 to 15 points: Command of Evidence and Words in Context. Writing and Language Test.
The Writing and Language Test of the SAT is made up of one section with 44 multiple-choice questions and a time limit of 35 minutes. As with the Reading Test, all questions are based on reading passages which may be accompanied by tables, graphs, and charts. The test taker will be asked to read the passages and suggest corrections or improvements for the contents underlined. Reading passages on this test range in content from topic arguments to nonfiction narratives in a variety of subjects. The skills being evaluated include: increasing the clarity of argument; improving word choice; improving analysis of topics in social studies and science; changing sentence or word structure to increase organizational quality and impact of writing; and, fixing or improving sentence structure, word usage, and punctuation. The Writing and Language Test reports two subscores, each ranging from 1 to 15 points: Expression of Ideas and Standard English Conventions. Mathematics. The mathematics portion of the SAT is divided into two sections: Math Test – No Calculator and Math Test – Calculator. In total, the SAT math test is 80 minutes long and includes 58 questions: 45 multiple choice questions and 13 grid-in questions. The multiple choice questions have four possible answers; the grid-in questions are free response and require the test taker to provide an answer. Several scores are provided to the test taker for the math test. A subscore (on a scale of 1 to 15) is reported for each of three categories of math content: Heart of Algebra; Problem Solving and Data Analysis; and Passport to Advanced Math. A test score for the math test is reported on a scale of 10 to 40, with an increment of 0.5, and a section score (equal to the test score multiplied by 20) is reported on a scale of 200 to 800. Calculator use. All scientific and most graphing calculators, including Computer Algebra System (CAS) calculators, are permitted on the SAT Math – Calculator section only. However, with the change to the Digital SAT during 2023 and 2024, a graphing calculator may be used throughout the entire test and is accessible through the test application program. All four-function calculators are allowed as well; however, these devices are not recommended. Mobile phone and smartphone calculators, calculators with typewriter-like (QWERTY) keyboards, laptops and other portable computers, and calculators capable of accessing the Internet are not permitted. Research was conducted by the College Board to study the effect of calculator use on SAT I: Reasoning Test math scores. The study found that performance on the math section was associated with the extent of calculator use: those using calculators on about one third to one half of the items averaged higher scores than those using calculators more or less frequently. However, the effect was "more likely to have been the result of able students using calculators differently than less able students rather than calculator use per se." There is some evidence that the frequent use of a calculator in school outside of the testing situation has a positive effect on test performance compared to those who do not use calculators in school. Style of questions. Most of the questions on the SAT, except for the grid-in math responses, are multiple choice; all multiple-choice questions have four answer choices, one of which is correct. Thirteen of the questions on the math portion of the SAT (about 22% of all the math questions) are not multiple choice. They instead require the test taker to bubble in a number in a four-column grid. All questions on each section of the SAT are weighted equally.
For each correct answer, one raw point is added. No points are deducted for incorrect answers. The final score is derived from the raw score; the precise conversion chart varies between test administrations. Logistics. Frequency. The SAT is offered seven times a year in the United States: in August, October, November, December, March, May, and June. For international students, the SAT is offered four times a year: in October, December, March and May (2020 exception: to cover the worldwide May cancellation, an additional September exam was introduced, and August was made available to international test-takers as well). The test is typically offered on the first Saturday of the month for the October, November, December, May, and June administrations. The test was taken by 1,913,742 high school graduates in the class of 2023. Candidates wishing to take the test may register online at the College Board's website or by mail at least three weeks before the test date. Fees. As of 2022, the SAT costs US$60.00, plus additional fees if testing outside the United States. The College Board makes fee waivers available for low-income students. Additional fees apply for late registration, standby testing, registration changes, scores by telephone, and extra score reports (beyond the four provided for free). Accommodation for candidates with disabilities. Students with verifiable disabilities, including physical and learning disabilities, are eligible to take the SAT with accommodations. The standard time increase for students requiring additional time due to learning disabilities or physical handicaps is time + 50%; time + 100% is also offered. Change from paper-based to digital. In January 2022, College Board announced that the SAT would change from paper-based to digital (computer-based). International (non-U.S.) testing centers began using the digital format on March 11, 2023. The December 2023 SAT was the last SAT test offered on paper. The switch to the digital format occurred on March 9, 2024, in the U.S. The digital SAT takes about an hour less to do than the paper-based test (two hours vs. three). It is administered in an official test center, as before, but the students use their own testing devices (a portable computer or tablet). If a student cannot bring his or her own device, one can be requested from College Board. Before the test, College Board's "Bluebook" app must have been successfully installed on the testing device. The new test is adaptive, meaning that students have two modules per section (reading/writing and math), with the second module being adaptive to the demonstrated level based on the results from the first module. On the reading and writing sections, the questions will have shorter passages for each question. On the math sections, the word problems will be more concise. Students have a ten-minute break after the first two English modules and before the two math modules. A timer is built into the testing software and will automatically begin once the student finishes the second English module. New tools such as a question flagger, a timer, and an integrated graphing calculator are included in the new test as well. Scaled scores and percentiles. Students receive their online score reports approximately two to three weeks after test administration (longer for mailed, paper scores).
Included in the report is the total score (the sum of the two section scores, with each section graded on a scale of 200–800) and three subscores (in reading, writing, and analysis, each on a scale of 2–8) for the optional essay. Students may also receive, for an additional fee, various score verification services, including (for select test administrations) the Question and Answer Service, which provides the test questions, the student's answers, the correct answers, and the type and difficulty of each question. In addition, students receive two percentile scores, each of which is defined by the College Board as the percentage of students in a comparison group with equal or lower test scores. One of the percentiles, called the "Nationally Representative Sample Percentile", uses as a comparison group all 11th and 12th graders in the United States, regardless of whether or not they took the SAT. This percentile is theoretical and is derived using methods of statistical inference. The second percentile, called the "SAT User Percentile", uses actual scores from a comparison group of recent United States students that took the SAT. For example, for the school year 2019–2020, the SAT User Percentile was based on the test scores of students in the graduating classes of 2018 and 2019 who took the SAT (specifically, the 2016 revision) during high school. Students receive both types of percentiles for their total score as well as their section scores. Percentiles for total scores (2006). The following chart summarizes the original percentiles used for the version of the SAT administered in March 2005 through January 2016. These percentiles used students in the graduating class of 2006 as the comparison group. Percentiles for verbal and math scores (1969–70). The mean verbal score was 461 for students taking the SAT, 383 for the sample of all students. The mathematical scores for 1969–70 were broken out by gender rather than reported as a whole; the mean math score for boys was 415, for girls 378. The differences for the nationally sampled population for math (not shown in table) were similar to those for the verbal section. Ceilings and trends. The version of the SAT administered before April 1995 had a very high ceiling. For example, in the 1985–1986 school year, only 9 students out of 1.7 million test takers obtained a score of 1600. In 2015, the average score for the Class of 2015 was 1490 out of a maximum 2400. That was down 7 points from the previous class's mark and was the lowest composite score of the past decade. SAT–ACT score comparisons. The College Board and ACT, Inc., conducted a joint study of students who took both the SAT and the ACT between September 2004 (for the ACT) or March 2005 (for the SAT) and June 2006. Tables were provided to concord scores for students taking the SAT after January 2005 and before March 2016. In May 2016, the College Board released concordance tables to concord scores on the SAT used from March 2005 through January 2016 to the SAT used since March 2016, as well as tables to concord scores on the SAT used since March 2016 to the ACT. In 2018, the College Board, in partnership with the ACT, introduced a new concordance table to better compare how a student would fare on one test versus the other. This is now considered the official concordance to be used by college professionals and is replacing the one from 2016. The new concordance no longer features the old SAT (out of 2,400), just the new SAT (out of 1,600) and the ACT (out of 36).
As of 2018, the most appropriate corresponding SAT score point for the given ACT score is also shown in the table below. Elucidation. Preparation. Pioneered by Stanley Kaplan in 1946 with a 64-hour course, SAT preparation has become a highly lucrative field. Many companies and organizations offer test preparation in the form of books, classes, online courses, and tutoring. The test preparation industry began almost simultaneously with the introduction of university entrance exams in the U.S. and flourished from the start. Test-preparation scams are a genuine problem for parents and students. In general, East Asian Americans, especially Korean Americans, are the most likely to take private SAT preparation courses, while African Americans typically rely more on one-on-one tutoring for remedial learning. Nevertheless, the College Board maintains that the SAT is essentially uncoachable, and research by the College Board and the National Association of College Admission Counseling suggests that tutoring courses result in an average increase of about 20 points on the math section and 10 points on the verbal section. Indeed, researchers have shown time and again that preparation courses tend to offer at best a modest boost to test scores. Like IQ scores, which are a strong correlate, SAT scores tend to be stable over time, meaning SAT preparation courses offer only a limited advantage. An early meta-analysis (from 1983) found similar results and noted "the size of the coaching effect estimated from the matched or randomized studies (10 points) seems too small to be practically important." Statisticians Ben Domingue and Derek C. Briggs examined data from the Education Longitudinal Survey of 2002 and found that the effects of coaching were only statistically significant for mathematics; moreover, coaching had a greater effect on certain students than others, especially those who have taken rigorous courses and those of high socioeconomic status. A 2012 systematic literature review estimated a coaching effect of 23 and 32 points for the math and verbal tests, respectively. A 2016 meta-analysis estimated the effect size to be 0.09 and 0.16 for the verbal and math sections respectively, although there was a large degree of heterogeneity. Meanwhile, a 2011 study found the effects of one-on-one tutoring to be minimal among all ethnic groups. Public misunderstanding of how to prepare for the SAT continues to be exploited by the preparation industry. While there is a link between family background and taking an SAT preparation course, not all students benefit equally from such an investment. In fact, any average gains in SAT scores due to such courses are primarily due to improvements among East Asian Americans. When this group is broken down even further, Korean Americans are more likely to take SAT prep courses than Chinese Americans, taking full advantage of their church communities and ethnic economy. The College Board announced a partnership with the non-profit organization Khan Academy to offer free test-preparation materials starting in the 2015–16 academic year to help level the playing field for students from low-income families. Students may also bypass costly preparation programs using the more affordable official guide from the College Board and with solid studying habits.
The College Board also offers a test called the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT), and there is some evidence that taking the PSAT at least once can help students do better on the SAT; moreover, as with the SAT, top scorers on the PSAT could earn scholarships. According to cognitive scientist Sian Beilock, 'choking', or substandard performance on important occasions, such as taking the SAT, can be prevented by doing plenty of practice questions and proctored exams to improve procedural memory, making use of the booklet to write down intermediate steps to avoid overloading working memory, and writing a diary entry about one's anxieties on the day of the exam to enhance self-empathy and positive self-image. Sleep hygiene is important, as the quality of sleep during the days leading up to the exam can improve performance. Moreover, it has been shown that later class times (8:30 am rather than 7:30 am), which better suit the shifted circadian rhythm of teenagers, can raise SAT scores enough to change the tier of the colleges and universities a student might be admitted to. In the wake of the COVID-19 pandemic, a large number of American colleges and universities decided to make standardized test scores optional for prospective students. Nevertheless, many students still chose to take the SAT and to enroll in preparation programs, which continued to be profitable. Predictive validity and powers. In 2009, education researchers Richard C. Atkinson and Saul Geiser from the University of California (UC) system argued that high school GPA is better than the SAT at predicting college grades regardless of high school type or quality. In its 2020 report, the UC academic senate found that the SAT was better than high school GPA at predicting first year GPA, and just as good as high school GPA at predicting undergraduate GPA, first year retention, and graduation. This predictive validity was found to hold across demographic groups, with the report noting that standardized test scores were actually "better predictors of success for students who are Underrepresented Minority students (URMs), who are first-generation, or whose families are low-income." A series of College Board reports point to similar predictive validity across demographic groups. But a month after the UC academic senate report, Saul Geiser disputed the UC academic senate's findings, saying "that the Senate claims are 'spurious', based on a fundamental error of omitting student demographics in the prediction model". He indicated that when high school GPA is combined with demographics in the prediction model, the SAT is a less reliable predictor. Li Cai, a UCLA professor who directs the National Center for Research on Evaluation, Standards, and Student Testing, indicated that the UC Academic Senate did include student demographics by using a different and simpler model for the public to understand and that the discriminatory impacts of the SAT are compensated during the admissions process. Jesse Rothstein, a UC Berkeley professor of public policy and economics, countered Li's claim, mentioning that the UC academic senate "got a lot of things wrong about the SAT", overstated the value of the SAT, and had "no basis for its conclusion that UC admissions 'compensate' for test score gaps between groups." However, by analyzing their own institutional data, Brown, Yale, and Dartmouth universities reached the conclusion that SAT scores are more reliable predictors of collegiate success than GPA.
Furthermore, the scores allow them to identify "more" potentially qualified students from disadvantaged backgrounds than they otherwise would. At the University of Texas at Austin, students who declined to submit SAT scores when such scores were optional performed more poorly than their peers who did. These results were replicated by a study conducted by the non-profit organization Opportunity Insights analyzing data from Ivy League institutions (Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, Princeton University, the University of Pennsylvania, and Yale University) plus Stanford University, the Massachusetts Institute of Technology, and the University of Chicago. A 2009 study found that SAT or ACT scores along with high-school GPAs are strong predictors of cumulative university GPAs. In particular, those with standardized test scores in the 50th percentile or better had a two-thirds chance of having a cumulative university GPA in the top half. A 2010 meta-analysis by researchers from the University of Minnesota offered evidence that standardized admissions tests such as the SAT predicted not only freshman GPA but also overall collegiate GPA. A 2012 study from the same university using a multi-institutional data set revealed that even after controlling for socioeconomic status and high-school GPA, SAT scores were still as capable of predicting freshman GPA among university or college students. A 2019 study with a sample size of around a quarter of a million students suggests that together, SAT scores and high-school GPA offer an excellent predictor of freshman collegiate GPA and second-year retention. In 2018, psychologists Oren R. Shewach, Kyle D. McNeal, Nathan R. Kuncel, and Paul R. Sackett showed that both high-school GPA and SAT scores predict enrollment in advanced collegiate courses, even after controlling for Advanced Placement credits. Education economist Jesse M. Rothstein indicated in 2005 that high-school average SAT scores were better at predicting freshman university GPAs compared to individual SAT scores. In other words, a student's SAT scores were not as informative with regards to future academic success as his or her high school's average. In contrast, individual high-school GPAs were a better predictor of collegiate success than average high-school GPAs. Furthermore, an admissions officer who failed to take average SAT scores into account would risk overestimating the future performance of a student from a low-scoring school and underestimating that of a student from a high-scoring school. While the SAT is correlated with intelligence and as such estimates individual differences, it does not have anything to say about "effective cognitive performance" or what intelligent people do. Nor does it measure non-cognitive traits associated with academic success such as positive attitudes or conscientiousness. Psychometricians Thomas R. Coyle and David R. Pillow showed in 2008 that the SAT predicts college GPA even after removing the general factor of intelligence ("g"), with which it is highly correlated. Like other standardized tests such as the ACT or the GRE, the SAT is a traditional method for assessing the academic aptitude of students who have had vastly different educational experiences and as such is focused on the common materials that the students could reasonably be expected to have encountered throughout the course of study. 
As such, the mathematics section contains no materials above the precalculus level, for instance. Psychologist Raymond Cattell referred to this as testing for "historical" rather than "current" crystallized intelligence. Psychologist Scott Barry Kaufman further noted that the SAT can only measure a snapshot of a person's performance at a particular moment in time. Educational psychologists Jonathan Wai, David Lubinski, and Camilla Benbow observed that one way to increase the predictive validity of the SAT is by assessing the student's spatial reasoning ability, as the SAT at present does not contain any questions to that effect. Spatial reasoning skills are important for success in STEM. A 2006 study led by psychometrician Robert Sternberg found that the ability of SAT scores and high-school GPAs to predict collegiate performance could further be enhanced by additional assessments of analytical, creative, and practical thinking. Experimental psychologist Meredith Frey noted that while advances in education research and neuroscience can help incrementally improve the ability to predict scholastic achievement in the future, the SAT or other standardized tests likely will remain a valuable tool to build upon. In a 2014 op-ed for "The New York Times", psychologist John D. Mayer called the predictive powers of the SAT "an astonishing achievement" and cautioned against making it and other standardized tests optional. Research by psychometricians David Lubinski, Camilla Benbow, and their colleagues has shown that the SAT could even predict life outcomes beyond university. Difficulty and relative weight. The SAT rigorously assesses students' mental stamina, memory, speed, accuracy, and capacity for abstract and analytical reasoning. For American universities and colleges, standardized test scores are among the most important factors in admissions, second only to high-school GPAs. By international standards, however, the SAT is not that difficult. For example, South Korea's College Scholastic Ability Test (CSAT) and Finland's Matriculation Examination are both longer, tougher, and count for more towards the admissibility of a student to university. In many countries around the world, exams, including university entrance exams, are the sole deciding factor of admission; school grades are simply irrelevant. In China and India, doing well on the Gaokao or the IIT-JEE, respectively, enhances the social status of the students and their families. In an article from 2012, educational psychologist Jonathan Wai argued that the SAT was too easy to be useful to the most competitive of colleges and universities, whose applicants typically had brilliant high-school GPAs and standardized test scores. Admissions officers therefore had the burden of differentiating the top scorers from one another, not knowing whether or not the students' perfect or near-perfect scores truly reflected their scholastic aptitudes. He suggested that the College Board make the SAT more difficult, which would raise the measurement ceiling of the test, allowing the top schools to identify the best and brightest among the applicants. At that time, the College Board was already working on making the SAT tougher. The changes were announced in 2014 and implemented in 2016. After realizing the June 2018 test was easier than usual, the College Board made adjustments resulting in lower-than-expected scores, prompting complaints from the students, though some understood this was to ensure fairness.
In its analysis of the incident, the Princeton Review supported the idea of curving grades, but pointed out that the test was incapable of distinguishing students in the 86th percentile (650 points) or higher in mathematics. The Princeton Review also noted that this particular curve was unusual in that it offered no cushion against careless or last-minute mistakes for high-achieving students. The Review posted a similar blog post about the August 2019 SAT, when the same thing happened and the College Board responded in the same manner, noting, "A student who misses two questions on an easier test should not get as good a score as a student who misses two questions on a hard test. Equating takes care of that issue." It also cautioned students against retaking the SAT immediately, for they might be disappointed again, and recommended that instead, they give themselves some "leeway" before trying again. Recognition. The College Board claims that outside of the United States, the SAT is considered for university admissions in approximately 70 countries, as of the 2023–24 academic year. Association with general cognitive ability. In a 2000 study, psychometrician Ann M. Gallagher and her colleagues found that only the top students made use of intuitive reasoning in solving problems encountered on the mathematics section of the SAT. Cognitive psychologists Brenda Hannon and Mary McNaughton-Cassill discovered that having a good working memory, the ability of knowledge integration, and low levels of test anxiety predicts high performance on the SAT. Frey and Detterman (2004) investigated associations of SAT scores with intelligence test scores. Using an estimate of general mental ability, or "g", based on the Armed Services Vocational Aptitude Battery, they found SAT scores to be highly correlated with "g" (r = .82, or .857 when adjusted for non-linearity) in a sample taken from a 1979 national probability survey. Additionally, they investigated the correlation between SAT results, using the revised and recentered form of the test, and scores on the Raven's Advanced Progressive Matrices, a test of fluid intelligence (reasoning), this time using a non-random sample. They found that the correlation of SAT results with scores on the Raven's Advanced Progressive Matrices was 0.483; they estimated that this correlation would have been about 0.72 were it not for the restriction of ability range in the sample. They also noted that there appeared to be a ceiling effect on the Raven's scores which may have suppressed the correlation. Beaujean and colleagues (2006) have reached similar conclusions to those reached by Frey and Detterman. Because the SAT is strongly correlated with general intelligence, it can be used as a proxy to measure intelligence, especially when the time-consuming traditional methods of assessment are unavailable. Psychometrician Linda Gottfredson noted that the SAT is effective at identifying intellectually gifted college-bound students. 
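Frey and Detterman's adjustment from the observed 0.483 to an estimated 0.72 is an instance of the standard psychometric correction for restriction of range (Thorndike's case 2). A minimal sketch in Python; the standard-deviation ratio of 1.9 is a hypothetical value chosen purely so the numbers line up for illustration, not a figure taken from the study:

```python
import math

def correct_range_restriction(r: float, u: float) -> float:
    """Thorndike's case-2 correction for direct range restriction.

    r: correlation observed in the restricted sample.
    u: ratio of the unrestricted SD to the restricted SD of the
       selection variable (u > 1 widens the range back out).
    """
    return r * u / math.sqrt(1 - r**2 + (r * u)**2)

# Observed r = 0.483 with an illustrative SD ratio of 1.9 lands near
# the 0.72 that Frey and Detterman estimated for the full-range case.
print(round(correct_range_restriction(0.483, 1.9), 3))  # 0.723
```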
For decades many critics have accused designers of the verbal SAT of cultural bias as an explanation for the disparity in scores between poorer and wealthier test-takers, with the biggest critics coming from the University of California system. A famous example of this perceived bias in the SAT I was the oarsman–regatta analogy question, which is no longer part of the exam. The object of the question was to find the pair of terms that had the relationship most similar to the relationship between "runner" and "marathon". The correct answer was "oarsman" and "regatta". The choice of the correct answer was thought to have presupposed students' familiarity with rowing, a sport popular with the wealthy. However, for psychometricians, analogy questions are a useful tool to gauge the mental abilities of students, for, even if the meanings of two words are unclear, a student with sufficiently strong analytical thinking skills should still be able to identify their relationships. Analogy questions were removed in 2005. In their place are questions that provide more contextual information should the students be ignorant of the relevant definition of a word, making it easier for them to guess the correct answer. Association with college or university majors and rankings. In 2010, physicists Stephen Hsu and James Schombert of the University of Oregon examined five years of student records at their school and discovered that the academic standing of students majoring in mathematics or physics (but not biology, English, sociology, or history) was strongly dependent on SAT mathematics scores. Students with SAT mathematics scores below 600 were highly unlikely to excel as a mathematics or physics major. Nevertheless, they found no such pattern between SAT verbal scores, or combined SAT verbal and mathematics scores, and the other aforementioned subjects. In 2015, educational psychologist Jonathan Wai of Duke University analyzed average test scores from the Army General Classification Test in 1946 (10,000 students), the Selective Service College Qualification Test in 1952 (38,420), Project Talent in the early 1970s (400,000), the Graduate Record Examination between 2002 and 2005 (over 1.2 million), and the SAT Math and Verbal in 2014 (1.6 million). Wai identified one consistent pattern: those with the highest test scores tended to pick the physical sciences and engineering as their majors while those with the lowest were more likely to choose education and agriculture. (See figure below.) A 2020 paper by Laura H. Gunn and her colleagues examining data from 1389 institutions across the United States unveiled strong positive correlations between the average SAT percentiles of incoming students and the shares of graduates majoring in STEM and the social sciences. On the other hand, they found negative correlations between the former and the shares of graduates in psychology, theology, law enforcement, recreation and fitness. Various researchers have established that average SAT or ACT scores and college rankings in the "U.S. News & World Report" are highly correlated, almost 0.9. Between the 1980s and the 2010s, the U.S. population grew while universities and colleges did not expand their capacities as substantially. As a result, admissions rates fell considerably, meaning it has become more difficult to get admitted to a school whose alumni include one's parents. On top of that, high-scoring students nowadays are much more likely to leave their hometowns in pursuit of higher education at prestigious institutions. Consequently, standardized tests, such as the SAT, are a more reliable measure of selectivity than admissions rates. Still, when Michael J. Petrilli and Pedro Enamorado analyzed the SAT composite scores (math and verbal) of incoming freshman classes of 1985 and 2016 of the top universities and liberal arts colleges in the United States, they found that the median scores of new students increased by 93 points for their sample, from 1216 to 1309. 
In particular, fourteen institutions saw an increase of at least 150 points, including the University of Notre Dame (from 1290 to 1440, or 150 points) and Elon College (from 952 to 1192, or 240 points). Association with types of schooling. While there seems to be evidence that private schools tend to produce students who do better on standardized tests such as the ACT or the SAT, Keven Duncan and Jonathan Sandy showed, using data from the National Longitudinal Surveys of Youth, that when student characteristics, such as age, race, and sex (7%), family background (45%), school quality (26%), and other factors were taken into account, the advantage of private schools diminished by 78%. The researchers concluded that students attending private schools already had the attributes associated with high scores on their own. Association with educational and societal standings and outcomes. Research from the University of California system published in 2001 analyzing data on their undergraduates from Fall 1996 through Fall 1999, inclusive, found that the SAT II was the single best predictor of collegiate success in the sense of freshman GPA, followed by high-school GPA, and finally the SAT I. After controlling for family income and parental education, the already low ability of the SAT I to measure aptitude and college readiness fell sharply, while the more substantial predictive abilities of high-school GPA and the SAT II each remained undiminished (and even slightly increased). The University of California system required both the SAT I and the SAT II from applicants to the UC system during the four academic years of the study. This analysis is heavily publicized but is contradicted by many studies. There is evidence that the SAT is correlated with societal and educational outcomes, including finishing a four-year university program. A 2012 paper from psychologists at the University of Minnesota analyzing multi-institutional data sets suggested that the SAT maintained its ability to predict collegiate performance even after controlling for socioeconomic status (as measured by the combination of parental educational attainment and income) and high-school GPA. This means that SAT scores were not merely a proxy for measuring socioeconomic status, the researchers concluded. This finding has been replicated and shown to hold across racial or ethnic groups and for both sexes. Moreover, the Minnesota researchers found that the socioeconomic status distributions of the student bodies of the schools examined reflected those of their respective applicant pools. Because of what it measures, a person's SAT scores cannot be separated from their socioeconomic background. However, the correlation between SAT scores and parental income or socioeconomic status should not be taken to mean causation. It could be that high scorers have intelligent parents who work cognitively demanding jobs and as such earn higher salaries. In addition, the correlation is only significant between biological families, not adoptive ones, suggesting that this might be due to genetic heritage, not economic wealth. In 2007, Rebecca Zwick and Jennifer Greif Green observed that a typical analysis did not take into account the heterogeneity of the high schools attended by the students in terms of not just the socioeconomic statuses of the student bodies but also the standards of grading. 
Zwick and Greif Green proceeded to show that when these were accounted for, the correlation between family socioeconomic status and classroom grades and rank increased whereas that between socioeconomic status and SAT scores fell. They concluded that school grades and SAT scores were similarly associated with family income. According to the College Board, in 2019, 56% of the test takers had parents with a university degree, 27% parents with no more than a high-school diploma, and about 9% who did not graduate from high school. (8% did not respond to the question.) Association with family structures. One of the proposed partial explanations for the gap between Asian- and European-American students in educational achievement, as measured for example by the SAT, is the general tendency of Asians to come from stable two-parent households. In their 2018 analysis of data from the National Longitudinal Surveys of the Bureau of Labor Statistics, economists Adam Blandin, Christopher Herrington, and Aaron Steelman concluded that family structure played an important role in determining educational outcomes in general and SAT scores in particular. Families with only one parent who has no degrees were designated 1L, with two parents but no degrees 2L, and two parents with at least one degree between them 2H. Children from 2H families held a significant advantage over those from 1L families, and this gap grew between 1990 and 2010. Because the median SAT composite scores (verbal and mathematics) for 2H families grew by 20 points while those of 1L families fell by one point, the gap between them increased by 21 points, or a fifth of one standard deviation. Sex differences. In performance. In 2013, the American College Testing Board released a report stating that boys outperformed girls on the mathematics section of the test, a significant gap that has persisted for over 35 years. As of 2015, boys on average earned 32 points more than girls on the SAT mathematics section. Among those scoring in the 700–800 range, the male-to-female ratio was 1.6:1. In 2014, psychologist Stephen Ceci and his collaborators found boys did better than girls across the percentiles. For example, a girl scoring in the top 10% of her sex would only be in the top 20% among the boys. In 2010, psychologist Jonathan Wai and his colleagues showed, by analyzing data from three decades involving 1.6 million intellectually gifted seventh graders from the Duke University Talent Identification Program (TIP), that in the 1980s the gender gap in the mathematics section of the SAT among students scoring in the top 0.01% was 13.5:1 in favor of boys but dropped to 3.8:1 by the 1990s. The dramatic sex ratio from the 1980s replicated the finding of a different study using a sample from Johns Hopkins University. This ratio is similar to that observed for the ACT mathematics and science scores between the early 1990s and the late 2000s. It remained largely unaltered at the end of the 2000s. Sex differences in SAT mathematics scores began making themselves apparent at the level of 400 points and above. In the late 2000s, for every female who scored a perfect 800 on the SAT mathematics test, there were two males. Some researchers point to evidence in support of greater male variability in verbal and quantitative reasoning skills. Greater male variability has been found in body weight, height, and cognitive abilities across cultures, leading to a larger number of males in the lowest and highest distributions of testing. Consequently, a higher number of males are found in both the upper and lower extremes of the performance distributions of the mathematics sections of standardized tests such as the SAT, resulting in the observed gender discrepancy. This is at odds, however, with the tendency of girls to earn higher classroom grades than boys, indicating that they do not lack scholastic aptitude; boys, for their part, tend to do better on standardized test questions not directly related to the curriculum. 
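The tail effect described above can be illustrated with a toy normal model: even a modest mean gap combined with slightly greater variability produces a large imbalance far out in the right tail. A minimal sketch in Python; the 0.2 SD mean gap and the 1.1 SD ratio are hypothetical parameters chosen for illustration, not estimates taken from the studies cited here:

```python
from statistics import NormalDist

def tail_ratio(cutoff: float, mean_gap: float, sd_ratio: float) -> float:
    """Ratio of group A to group B above a cutoff (in group-B SD units),
    when A's mean is mean_gap SDs higher and A's SD is sd_ratio times B's.
    Assumes equal group sizes and normally distributed scores."""
    share_a = 1 - NormalDist(mean_gap, sd_ratio).cdf(cutoff)
    share_b = 1 - NormalDist(0.0, 1.0).cdf(cutoff)
    return share_a / share_b

# Three SDs out, these hypothetical inputs already yield roughly 4:1.
print(round(tail_ratio(3.0, 0.2, 1.1), 1))  # 4.0
```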
On the other hand, Wai and his colleagues found that both sexes in the top 5% appeared to be more or less at parity when it comes to the verbal section of the SAT, though girls have gained a slight but noticeable edge over boys starting in the mid-1980s. Psychologist David Lubinski, who conducted longitudinal studies of seventh graders who scored exceptionally high on the SAT, found a similar result. Girls generally had better verbal reasoning skills and boys mathematical skills. This reflects other research on the cognitive ability of the general population rather than just the 95th percentile and up. Although aspects of testing such as stereotype threat are a concern, research on the predictive validity of the SAT has demonstrated that it tends to be a more accurate predictor of female GPA in university as compared to male GPA. In strategizing. Mathematical problems on the SAT can be broadly categorized into two groups: conventional and unconventional. Conventional problems can be handled routinely via familiar formulas or algorithms while unconventional ones require more creative thought in order to make unusual use of familiar methods of solution or to come up with the specific insights necessary for solving those problems. In 2000, ETS psychometrician Ann M. Gallagher and her colleagues analyzed how students handled disclosed SAT mathematics questions in self-reports. They found that for both sexes, the most favored approach was to use formulas or algorithms learned in class. When that failed, however, males were more likely than females to identify the suitable methods of solution. Previous research suggested that males were more likely to explore unusual paths to solution whereas females tended to stick to what they had learned in class, and that females were more likely to identify the appropriate approaches if such required nothing more than mastery of classroom materials. In confidence. Older versions of the SAT did ask students how confident they were in their mathematical aptitude and verbal reasoning ability, specifically, whether or not they believed they were in the top 10%. Devin G. Pope analyzed data from over four million test takers from the late 1990s to the early 2000s and found that high scorers were more likely to be confident they were in the top 10%, with the top scorers reporting the highest levels of confidence. But there were some noticeable gaps between the sexes. Men tended to be much more confident in their mathematical aptitude than women. For example, among those who scored 700 on the mathematics section, 67% of men answered they believed they were in the top 10% whereas only 56% of women did the same. Women, on the other hand, were slightly more confident in their verbal reasoning ability than men. In glucose metabolism. Cognitive neuroscientists Richard Haier and Camilla Persson Benbow employed positron emission tomography (PET) scans to investigate the rate of glucose metabolism among students who have taken the SAT. 
They found that among men, those with higher SAT mathematics scores exhibited higher rates of glucose metabolism in the temporal lobes than those with lower scores, contradicting the brain-efficiency hypothesis. This trend, however, was not found among women, for whom the researchers could not find any cortical regions associated with mathematical reasoning. Both sexes scored the same on average in their sample and had the same rates of cortical glucose metabolism overall. According to Haier and Benbow, this is evidence of structural differences in the brain between the sexes. Association with race and ethnicity. A 2001 meta-analysis of the results of 6,246,729 participants tested for cognitive ability or aptitude found a difference in average scores between black and white students of around 1.0 standard deviation, with comparable results for the SAT (2.4 million test takers). Similarly, on average, Hispanic and Amerindian students perform on the order of one standard deviation lower on the SAT than white and Asian students. Mathematics appears to be the more difficult part of the exam. In 1996, the black-white gap in the mathematics section was 0.91 standard deviations, but by 2020, it fell to 0.79. In 2013, Asian Americans as a group scored 0.38 standard deviations higher than whites in the mathematics section. Some researchers believe that the difference in scores is closely related to the overall achievement gap in American society between students of different racial groups. This gap may be explainable in part by the fact that students of disadvantaged racial groups tend to go to schools of lower educational quality. This view is supported by evidence that the black-white gap is higher in cities and neighborhoods that are more racially segregated. Other research cites poorer minority proficiency in key coursework relevant to the SAT (English and math), as well as peer pressure against students who try to focus on their schoolwork ("acting white"). Cultural issues are also evident among black students in wealthier households, with high-achieving parents. John Ogbu, a Nigerian-American professor of anthropology, concluded that instead of looking to their parents as role models, black youth chose other models like rappers and did not make an effort to be good students. One set of studies has reported differential item functioning, namely, that some test questions function differently based on the racial group of the test taker, reflecting between-group differences in the ability to understand certain test questions or to acquire the knowledge required to answer them. In 2003, Freedle published data showing that black students have had a slight advantage on the verbal questions that are labeled as difficult on the SAT, whereas white and Asian students tended to have a slight advantage on questions labeled as easy. Freedle argued that these findings suggest that "easy" test items use vocabulary that is easier to understand for white middle class students than for minorities, who often use a different language in the home environment, whereas the difficult items use complex language learned only through lectures and textbooks, giving both student groups equal opportunity to acquire it. The study was severely criticized by the ETS board, but the findings were replicated in a subsequent study by Santelices and Wilson in 2010. There is no evidence that SAT scores systematically underestimate future performance of minority students. 
However, the predictive validity of the SAT has been shown to depend on the dominant ethnic and racial composition of the college. Some studies have also shown that African-American students under-perform in college relative to their white peers with the same SAT scores; researchers have argued that this is likely because white students tend to benefit from social advantages outside of the educational environment (for example, high parental involvement in their education, inclusion in campus academic activities, positive bias from same-race teachers and peers) which result in better grades. Christopher Jencks concludes that as a group, African Americans have been harmed by the introduction of standardized entrance exams such as the SAT. This, according to him, is not because the tests themselves are flawed, but because of labeling bias and selection bias; the tests measure the skills that African Americans are less likely to develop in their socialization, rather than the skills they are more likely to develop. Furthermore, standardized entrance exams are often labeled as tests of general ability, rather than of certain aspects of ability. Thus, a situation is produced in which African-American ability is consistently underestimated within the education and workplace environments, contributing in turn to selection bias against them which exacerbates underachievement. Among the major racial or ethnic groups of the United States, gaps in SAT mathematics scores are the greatest at the tails, with Hispanic and Latino Americans being the most likely to score at the lowest range and Asian Americans the highest. In addition, there is some evidence suggesting that if the test contained more questions of both the easy and difficult varieties, which would increase the variability of the scores, the gaps would be even wider. Given the distribution for Asians, for example, many could score higher than 800 if the test allowed them to. (See figure below.) In 2020, education worldwide was disrupted by the COVID-19 pandemic, and indeed the performance of students in the United States on standardized tests, such as the SAT, suffered. Yet the gaps persisted. According to the College Board, in 2020, while 83% of Asian students met the benchmark of college readiness in reading and writing and 80% in mathematics, only 44% and 21% of black students, respectively, met those benchmarks. Among whites, 79% met the benchmark for reading and writing and 59% did mathematics. For Hispanics and Latinos, the numbers were 53% and 30%, respectively. (See figure below.) Test-taking population. By analyzing data from the National Center for Education Statistics, economists Ember Smith and Richard Reeves of the Brookings Institution deduced that the number of students taking the SAT increased faster than both the population and the number of high-school graduates between 2000 and 2020. The increase was especially pronounced among Hispanics and Latinos. Even among whites, whose number of high-school graduates was shrinking, the number of SAT takers rose. In 2015, for example, 1.7 million students took the SAT, up from 1.6 million in 2013. But in 2019, a record-breaking 2.2 million students took the exam, compared to 2.1 million in 2018, another record-breaking year. The rise in the number of students taking the SAT was due in part to many school districts offering to administer the SAT during school days, often at no further cost to the students. 
Some require students to take the SAT, regardless of whether or not they are going to college. However, in 2021, in the wake of the COVID-19 pandemic and the optional status of the SAT at many colleges and universities, only 1.5 million students took the test. But as testing centers reopened, ambitious students chose to take the SAT or the ACT to make themselves stand out from the competition regardless of the admissions policies of their preferred schools. Among the class of 2023, 1.9 million students took the test. Psychologists Jean Twenge, W. Keith Campbell, and Ryne A. Sherman analyzed vocabulary test scores on the U.S. General Social Survey (formula_0) and found that, after correcting for education, the use of sophisticated vocabulary declined between the mid-1970s and the mid-2010s across all levels of education, from below high school to graduate school. However, they cautioned against the use of SAT verbal scores to track the decline, for while the College Board reported that SAT verbal scores had been decreasing, these scores were an imperfect measure of the vocabulary level of the nation as a whole, because the test-taking demographic has changed and because more students took the SAT in the 2010s than in the 1970s, meaning there were more with limited ability who took it. However, as the frequency of reading for pleasure and the level of reading comprehension among American high-school students continue to decline, students who take the SAT might struggle to do well, even if reforms have been introduced to shorten the duration of the test and to reduce the number of questions associated with a given passage in the verbal portion of the test. Use in non-collegiate contexts. By high-IQ societies. Certain high IQ societies, like Mensa, Intertel, the Prometheus Society and the Triple Nine Society, use scores from certain years as one of their admission tests. For instance, Intertel accepts scores (verbal and math combined) of at least 1300 on tests taken through January 1994; the Triple Nine Society accepts scores of 1450 or greater on SAT tests taken before April 1995, and scores of at least 1520 on tests taken between April 1995 and February 2005. Mensa accepts qualifying SAT scores earned on or before January 31, 1994. By researchers. Because it is strongly correlated with general intelligence, the SAT has often been used as a proxy to measure intelligence by researchers, especially since 2004. In particular, scientists studying mathematically gifted individuals have been using the mathematics section of the SAT to identify subjects for their research. A growing body of research indicates that SAT scores can predict individual success decades into the future, for example in terms of income and occupational achievements. A longitudinal study published in 2005 by educational psychologists Jonathan Wai, David Lubinski, and Camilla Benbow suggests that among the intellectually precocious (the top 1%), those with higher scores in the mathematics section of the SAT at the age of 12 were more likely to earn a PhD in the STEM fields, to have a publication, to register a patent, or to secure university tenure. Wai further showed that an individual's academic ability, as measured by the average SAT or ACT scores of the institution attended, predicted individual differences in income, even among the richest people of all, and membership in the 'American elite', namely Fortune 500 CEOs, billionaires, federal judges, and members of Congress. 
Wai concluded that the American elite was also the cognitive elite. Gregory Park, Lubinski, and Benbow gave statistical evidence that intellectually gifted adolescents, as identified by SAT scores, could be expected to accomplish great feats of creativity in the future, both in the arts and in STEM. The SAT is sometimes given to students at age 12 or 13 by organizations such as the Study of Mathematically Precocious Youth (SMPY), Johns Hopkins Center for Talented Youth, and the Duke University Talent Identification Program (TIP) to select, study, and mentor students of exceptional ability, that is, those in the top one percent. Among SMPY participants, those within the top quartile, as indicated by the SAT composite score (mathematics and verbal), were markedly more likely to have a doctoral degree, to have at least one publication in STEM, to earn income in the 95th percentile, to have at least one literary publication, or to register at least one patent than those in the bottom quartile. Duke TIP participants generally picked career tracks in STEM should they be stronger in mathematics, as indicated by SAT mathematics scores, or the humanities if they possessed greater verbal ability, as indicated by SAT verbal scores. For comparison, the bottom SMPY quartile is five times more likely than the average American to have a patent. Meanwhile, as of 2016, the share of doctorate holders among SMPY participants was 44% and among Duke TIP participants 37%, compared to two percent of the general U.S. population. Consequently, the notion that beyond a certain point, differences in cognitive ability as measured by standardized tests such as the SAT cease to matter is gainsaid by the evidence. In the 2010 paper which showed that the sex gap in SAT mathematics scores had dropped dramatically between the early 1980s and the early 1990s but had persisted for the next two decades or so, Wai and his colleagues argued that "sex differences in abilities in the extreme right tail should not be dismissed as no longer part of the explanation for the dearth of women in math-intensive fields of science." By employers. Cognitive ability is correlated with job training outcomes and job performance. As such, some employers rely on SAT scores to assess the suitability of a prospective recruit, especially if the person has limited work experience. There is nothing new about this practice. Major companies and corporations have spent princely sums on learning how to avoid hiring errors and have decided that standardized test scores are a valuable tool in deciding whether or not a person is fit for the job. In some cases, a company might need to hire someone to handle proprietary materials of its own making, such as computer software. But since the ability to work with such materials cannot be assessed via external certification, it makes sense for such a firm to rely on something that is a proxy of measuring general intelligence. In other cases, a firm may not care about academic background but needs to assess a prospective recruit's quantitative reasoning ability, which makes standardized test scores useful. Several companies, especially those considered the most prestigious in industries such as investment banking and management consulting, including Goldman Sachs and McKinsey, have been reported to ask prospective job candidates about their SAT scores. Nevertheless, some other top employers, such as Google, have eschewed the use of SAT or other standardized test scores unless the potential employee is a recent graduate. 
Google's Laszlo Bock explained to "The New York Times", "We found that they don't predict anything." Educational psychologist Jonathan Wai suggested this might be due to the inability of the SAT to differentiate the intellectual capacities of those at the extreme right end of the distribution of intelligence. Wai told "The New York Times", "Today the SAT is actually too easy, and that's why Google doesn't see a correlation. Every single person they get through the door is a super-high scorer." Perception. Math–verbal achievement gap. In 2002, "New York Times" columnist Richard Rothstein argued that U.S. math averages on the SAT and ACT had continued their decade-long rise while national verbal averages on the same tests were floundering. Optional SAT. During the 1960s and 1970s, there was a movement to drop achievement scores. In time, the countries, states, and provinces that reintroduced them agreed that academic standards had dropped and that students had studied less and taken their education less seriously. Testing requirements were reinstated in some places after research concluded that these high-stakes tests produced benefits that outweighed the costs. However, in a 2001 speech to the American Council on Education, Richard C. Atkinson, the president of the University of California, urged the dropping of aptitude tests such as the SAT I but not achievement tests such as the SAT II as a college admissions requirement. Atkinson's critique of the predictive validity and power of the SAT has been contested by the University of California academic senate. In April 2020, the academic senate, which consisted of faculty members, voted 51–0 to restore the requirement of standardized test scores, but the governing board overruled the academic senate and declined to reinstate the test requirement. Because of the size of the Californian population, this decision might have an impact on U.S. higher education at large; schools looking to admit Californian students could have a harder time. During the 2010s, over 1,230 American universities and colleges opted to stop requiring the SAT and the ACT for admissions, according to FairTest, an activist group opposing standardized entrance exams. Most, however, were small colleges, with the notable exceptions of the University of California system and the University of Chicago. Also on the list were institutions catering to niche students, such as religious colleges, arts and music conservatories, or nursing schools, and the majority of institutions in the Northeastern United States. In the wake of the COVID-19 pandemic, around 1,600 institutions decided to waive the requirement of the SAT or the ACT for admissions because it was challenging both to administer and to take these tests, resulting in many cancellations. Some schools chose to make them optional on a temporary basis only, either for just one year, as in the case of Princeton University, or three, like the College of William & Mary. Others dropped the requirement completely. Some schools extended their moratorium on standardized entrance exams in 2021. This did not stop highly ambitious students from taking them, however, as many parents and teenagers were skeptical of the "optional" status of university entrance exams and wanted to make their applications more likely to catch the attention of admission officers. This led to complaints of registration sites crashing in the summer of 2020. 
On the other hand, the number of students applying to the more competitive schools that had made SAT and ACT scores optional increased dramatically because the students thought they stood a chance. Ivy League institutions saw double-digit increases in the number of applications, as high as 51% in the case of Columbia University, while their admission rates, already in the single digits, fell, e.g. from 4.9% in 2020 to just 3.4% in 2021 at Harvard University. At the same time, interest in lower-status schools that did the same thing dropped precipitously; the college application process remains driven primarily by the preference for elite schools. 44% of students who used the Common Application—accepted by over 900 colleges and universities as of 2021—submitted SAT or ACT scores in the 2020–21 academic year, down from 77% in 2019–20. Those who did submit their test scores tended to hail from high-income families, to have at least one university-educated parent, and to be white or Asian. Despite the fallout from Operation Varsity Blues, in which many wealthy parents were found to have illegally intervened to raise their children's standardized test scores, the SAT and the ACT remain popular among American parents and college-bound seniors, who are skeptical of the process of "holistic admissions" because they think it is rather opaque, as schools try to assess characteristics not easily discerned via a number; hence the growth in the number of test takers attempting to make themselves more competitive, even as an increasing number of schools declare the tests optional. While holistic admissions might seem like a plausible alternative, the process of applying can be rather stressful for students and parents, and many get upset once they learn that someone else got into the school that rejected them despite having lower SAT scores and GPAs. Holistic admissions notwithstanding, when merit-based scholarships are considered, standardized test scores might be the tiebreakers, as these are highly competitive. Scholarships and financial aid could help students and their parents significantly cut the cost of higher education, especially in times of economic hardship. Moreover, the most selective schools might have no better options than using standardized test scores in order to quickly prune the number of applications worth considering, for holistic admissions consume valuable time and other resources. Following the 2023 ruling by the Supreme Court of the United States against race-based admissions as a form of affirmative action, a number of schools have signaled their intent to continue pursuing ethnic diversity. One way for them to adapt to the new legal reality is to drop the requirement of standardized testing, making it more difficult for potential plaintiffs (Asian Americans in the twin cases of "SFFA v. Harvard" and "SFFA v. UNC") to find concrete evidence for their allegations of discrimination. On one hand, making the SAT and the ACT optional for admissions enables schools to attract a larger pool of applicants of a variety of socioeconomic backgrounds. On the other hand, letters of recommendation are not a good indicator of collegiate performance, and grade inflation is a genuine problem. If standardized tests were taken out of the picture, school grades would become more important, thereby incentivizing grade inflation. 
In fact, grades in American high schools have been inflating by noticeable amounts due to pressure from parents, creating an apparent oversupply of high achievers that makes it harder for actual high-performing students to stand out, especially if they are from low-income families. Schools that made the SAT optional therefore lost an objective measure of academic aptitude and readiness, and they will have to formulate a new methodology for admissions or develop their own entrance exams. Given that the selectivity of a school a student applies to is correlated with the resources of his or her high school—measured in terms of the availability of rigorous courses, such as AP classes, and the socioeconomic statuses of the student body—making the SAT optional might exacerbate social inequities. Furthermore, since the costs of attending institutions of higher learning in the United States are high, eliminating the SAT requirement could make said institutions more likely to admit under-performing students, who might have to be removed for their low academic standing and who might be saddled with debt after attending. Another criticism of making the SAT optional is that subjective measures of an applicant's suitability, such as application essays, could become more important, making it easier for the rich to gain admissions at the expense of the poor, because their school counselors are more capable of writing good letters of recommendation and they can afford to hire external help to boost their applications. It was due to these concerns that the Massachusetts Institute of Technology (MIT) decided to reinstate its SAT requirement in 2022. Many other universities across the U.S. followed suit in 2024. However, the University of North Carolina system will only require SAT or ACT scores from applicants whose high-school GPA is below 2.8, while the University of California system will continue to be test-blind. Writing section. In 2005, MIT Writing Director Les Perelman plotted essay length versus essay score on the new SAT from released essays and found a high correlation between them. After studying over 50 graded essays, he found that longer essays consistently produced higher scores. In fact, he argued that by simply gauging the length of an essay, without reading it, the score of that essay could likely be determined correctly over 90% of the time. He also discovered that several of these essays were full of factual errors; the College Board does not claim to grade for factual accuracy. Perelman, along with the National Council of Teachers of English, also criticized the 25-minute writing section of the test for damaging standards of writing teaching in the classroom. They say that writing teachers training their students for the SAT will not focus on revision, depth, or accuracy, but will instead produce long, formulaic, and wordy pieces. "You're getting teachers to train students to be bad writers", concluded Perelman. On January 19, 2021, the College Board announced that the SAT would no longer offer the optional essay section after the June 2021 administration. History. The College Board, the not-for-profit organization that owns the SAT, was organized at the beginning of the 20th century to provide uniform entrance exams for its member colleges, whose matriculating students often came from boarding and private day schools found in the Northeastern United States. The exams were essay-based, graded by hand, and required several days for the student to take them. 
By the early 1920s, the increasing interest in intelligence tests as a means of selection convinced the College Board to form a commission to produce such a test for college admission purposes. The leader of the commission was Carl Brigham, a psychologist at Princeton University, who originally saw the value of these types of tests through the lens of eugenic thought. On June 23, 1926, the first SAT, then known as the Scholastic Aptitude Test, was administered to 8,040 students, 60% of whom were male; many were applying to Yale University (26%) and Smith College (27%). In 1934, James Conant and Henry Chauncey used the SAT as a means to identify recipients, besides those from the traditional northeastern private schools, for scholarships to Harvard University. By 1942, the College Board suspended the use of the essay exams, replacing them with the SAT, due in part to the success of Harvard's SAT program and in part to the constraints imposed by the onset of World War II. At this time, the SAT was standardized so that a test score received by a student in one year could be directly compared to a score received by a student in another year. Test scores ranged from 200 to 800 on each of two test sections (verbal and math), and the same reference group of students was used to standardize the SAT until 1995. After the war, due to several factors including the formation of the Educational Testing Service, the use of the SAT increased rapidly: by 1951, about 80,000 SATs were taken, rising to about 1.5 million in 1971. During this time, changes made to the content of the SAT were relatively minor, and included the introduction of sentence completion questions and "quantitative comparison" math questions as well as changes in the timing of the test. In 1994, however, the SAT was substantially changed in an attempt to make the test more closely reflect the work done by students in school and the skills that they would need in college. Among other changes, antonym questions were removed from the verbal section, and free response questions were added to the math section along with the use of calculators. In 1995, after nearly forty years of declining scores, the SAT was recalibrated by the addition of approximately 100 points to each score to compensate for the decline in what constituted an average score. In 2005, the SAT was changed again, in part due to criticism of the test by the University of California system, which said that the test was not closely enough aligned to high school curricula. Along with the elimination of analogies from the verbal section and quantitative comparison items from the math section, a new writing section with an essay was added. The changes introduced an additional section score, increasing the maximum SAT score to 2400. In early 2016, the SAT changed again in the interest of alignment with typical high school curricula. The changes included making the essay optional (and returning the maximum score to 1600), changing all multiple-choice questions from having five answer options to four, and the removal of the penalty for wrong answers (rights-only scoring). The essay was completely removed from the SAT by mid-2021, in the interest of reducing demands on students in the context of the COVID-19 pandemic. Name changes. The SAT has been renamed several times since its introduction in 1926. It was originally known as the Scholastic Aptitude Test. 
In 1990, a commission set up by the College Board to review the proposed changes to the SAT program recommended that the meaning of the initialism SAT be changed to "Scholastic Assessment Test" because a "test that integrates measures of achievement as well as developed ability can no longer be accurately described as a test of aptitude". In 1993, the College Board changed the name of the test to SAT I: Reasoning Test; at the same time, the name of the SAT Achievement Tests was changed to SAT II: Subject Tests. The Reasoning Test and Subject Tests were to be collectively known as the Scholastic Assessment Tests. According to the president of the College Board at the time, the name change was meant "to correct the impression among some people that the SAT measures something that is innate and impervious to change regardless of effort or instruction." The new SAT debuted in March 1994, and was referred to as the Scholastic Assessment Test by major news organizations. However, in 1997, the College Board announced that the SAT could not properly be called the Scholastic Assessment Test, and that the letters SAT did not stand for anything. In 2004, the Roman numeral in SAT I: Reasoning Test was dropped, making SAT Reasoning Test the name of the SAT. The "Reasoning Test" portion of the name was eliminated following the exam's 2016 redesign; it is now simply called the SAT. Reuse of old SAT exams. The College Board has been accused of wholesale reuse of old SAT exams previously given in the United States. The recycling of questions from previous exams has been exploited to enable cheating and has called into question the validity of some students' test scores. Test preparation companies in Asia have been found to provide test questions to students within hours of a new SAT exam's administration. On August 25, 2018, the SAT test given in the United States was discovered to be a recycled October 2017 international SAT test given in China. The leaked PDF file was on the internet before the August 25, 2018, exam. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n = 29,912" } ]
https://en.wikipedia.org/wiki?curid=144716
1447276
Maximal evenness
Concept in music theory In scale (music) theory, a maximally even set (scale) is one in which every generic interval comes in either one specific interval size or two sizes that are consecutive integers; in other words, a scale whose notes (pcs) are "spread out as much as possible." This property was first described by John Clough and Jack Douthett. Clough and Douthett also introduced the maximally even algorithm. For a chromatic cardinality "c" and pc-set cardinality "d", a maximally even set is formula_0 where "k" ranges from 0 to "d" − 1 and "m", 0 ≤ "m" ≤ "c" − 1, is fixed; the bracket pair denotes the floor function. A discussion of these concepts can be found in Timothy Johnson's book on the mathematical foundations of diatonic scale theory. Jack Douthett and Richard Krantz introduced maximally even sets to the mathematics literature. A scale is said to have Myhill's property if every generic interval comes in two specific interval sizes, and a scale with Myhill's property is said to be a well-formed scale. The diatonic collection is both a well-formed scale and maximally even. The whole-tone scale is also maximally even, but it is not well-formed, since each generic interval comes in only one size. Second-order maximal evenness is maximal evenness of a subcollection of a larger collection that is maximally even. Diatonic triads and seventh chords possess second-order maximal evenness, being maximally even with regard to the maximally even diatonic scale—but they are not maximally even with regard to the chromatic scale. (ibid, p. 115) This nested quality resembles Fred Lerdahl's "reductional format" for pitch space from the bottom up (Lerdahl, 1992). In a dynamical approach, spinning concentric circles and iterated maximally even sets have been constructed. This approach has implications in Neo-Riemannian theory, and leads to some interesting connections between diatonic and chromatic theory. Emmanuel Amiot has discovered yet another way to define maximally even sets by employing discrete Fourier transforms. Carey, Norman and Clampitt, David (1989). "Aspects of Well-Formed Scales", Music Theory Spectrum 11: 187–206. References. <templatestyles src="Reflist/styles.css" />
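The Clough–Douthett formula above lends itself to direct computation. A minimal sketch in Python (illustrative only): with c = 12, d = 7, and m = 5 it reproduces the major-scale diatonic collection, and with d = 6 the whole-tone scale mentioned above.

```python
def maximally_even(c: int, d: int, m: int = 0) -> list:
    """Clough-Douthett maximally even set: d pitch classes spread out
    as evenly as possible within a c-note chromatic universe."""
    return [(c * k + m) // d for k in range(d)]  # // is the floor function

# c = 12, d = 7, m = 5 yields the C major diatonic collection.
print(maximally_even(12, 7, 5))  # [0, 2, 4, 5, 7, 9, 11]
# d = 6 yields the whole-tone scale, also maximally even.
print(maximally_even(12, 6))     # [0, 2, 4, 6, 8, 10]
```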
[ { "math_id": 0, "text": "D = {\\left \\lfloor \\frac{ck + m}{d} \\right \\rfloor }" } ]
https://en.wikipedia.org/wiki?curid=1447276
1447420
Consumption of fixed capital
Consumption of fixed capital (CFC) is a term used in business accounts, tax assessments and national accounts for depreciation of fixed assets. CFC is used in preference to "depreciation" to emphasize that fixed capital is used up in the process of generating new output, and because unlike depreciation it is not valued at historic cost but at current market value (so-called "economic depreciation"); CFC may also include other expenses incurred in using or installing fixed assets beyond actual depreciation charges. Normally the term applies only to "producing" enterprises, but sometimes it applies also to real estate assets. CFC refers to a depreciation charge (or "write-off") against the gross income of a producing enterprise, which reflects the decline in value of the fixed capital with which it operates. Fixed assets will decline in value after they are purchased for use in production, due to wear and tear, changed market valuation and possibly market obsolescence. Thus, CFC represents a "compensation" for the loss of value of fixed assets to an enterprise. According to the 2008 manual of the United Nations System of National Accounts, <templatestyles src="Template:Blockquote/styles.css" />"Consumption of fixed capital is the decline, during the course of the accounting period, in the current value of the stock of fixed assets owned and used by a producer as a result of physical deterioration, normal obsolescence or normal accidental damage. The term depreciation is often used in place of consumption of fixed capital but it is avoided in the SNA because in commercial accounting the term depreciation is often used in the context of writing off historic costs whereas in the SNA consumption of fixed capital is dependent on the current value of the asset." — UNSNA 2008, section H, p. 123 CFC tends to increase as the asset gets older, even if the efficiency and rental remain constant to the end. The larger the depreciation write-off, the lower the taxable income of a business. Consequently, business owners consider this accounting entry as very important; after all, it affects both their income and their ability to invest. Capital Consumption Allowance. The Capital Consumption Allowance (CCA) is the portion of the gross domestic product (GDP) which is due to depreciation. The Capital Consumption Allowance measures the amount of expenditure that a country needs to undertake in order to maintain, as opposed to grow, its productivity. The CCA can be thought of as representing the wear-and-tear on the country's physical capital, together with the investment needed to maintain the level of human capital (e.g. to educate the workers needed to replace retirees). Calculation. Gross domestic product (GDP) equals net domestic product (NDP) + CCA (Capital Consumption Allowance): formula_0 Valuation. How large the depreciation charge actually will be depends mainly on the depreciation rates which enterprises are "officially permitted" to charge for tax purposes (usually fixed by law), and on how fixed assets themselves are "valued" for accounting purposes. This makes the assessment of CFC quite complex, because fixed assets may be valued in different ways, for instance at historic cost or at current replacement cost. By how much, then, do fixed assets used in production truly "decline" in value within an accounting period? How should they be valued? This can be arguable and very difficult to answer, and in practice, various conventions are adopted by accountants and auditors within the framework of legal rules and economic theory. 
In addition, the depreciation schedules imposed by tax departments may "differ" from the "actual" depreciation of business assets at market rates. Often, governments permit depreciation write-offs "higher" than true depreciation, to provide an incentive to enterprises for new investment. But this is not always the case; the permitted rate might sometimes be lower than the real market-based rate. Furthermore, businesses might engage in creative accounting and deliberately state their assets and liabilities held at a balance date, or interpret the figures in some other way, to increase the amount of depreciation write-offs, and thus boost their after-tax income (how this is done will depend a lot on tax law). For all these reasons, economists distinguish between different kinds of depreciation rates, arguing that the "true" consumption of fixed capital is really the "economic" depreciation, assessed by relating financial data to mathematical models, to arrive at a figure that "seems credible". The economic depreciation rate is based on observations of the average selling prices of assets at different ages. It is therefore a market-based depreciation rate, i.e. it is based on what an asset of a given age would currently sell for in the market. In national accounts. In national accounts, CFC is a component of value added or Gross Domestic Product, and regarded as a cost of production. It is defined in general terms as the decline, in an accounting period, of the current value of the stock of fixed assets owned and used by a producer as a result of physical deterioration, normal obsolescence or normal accidental damage. The UNSNA manual notes that "The consumption of fixed capital is one of the most important elements in the System... It may account for 10 per cent or more of total GDP." CFC is defined "in a way that is theoretically appropriate and relevant for purposes of economic analysis". Its value may therefore diverge considerably from depreciation actually recorded in business accounts, or as allowed for taxation purposes, especially if there is price inflation. In principle, CFC is calculated using the actual or estimated prices and rentals of fixed assets prevailing at the time the production takes place, and not at the times fixed assets were originally acquired. The "historic costs" of fixed assets, i.e., the prices originally paid for them, may become quite irrelevant for the calculation of consumption of fixed capital, if prices change sufficiently over time. Unlike depreciation as calculated in business accounts, CFC in national accounts is, in principle, "not" a method of allocating the costs of past expenditures on fixed assets over subsequent accounting periods. Rather, fixed assets at a given moment in time are valued according to the remaining benefits derived from their use. Depreciation charges in business accounts are adjusted in national accounts from historic costs to "current" prices, in conjunction with estimates of the capital stock. In addition to gross measures of output and income such as GDP and gross national income (GNI), National Accounts include net measures such as net domestic product (NDP) and net national income (NNI), derived by deducting CFC from the corresponding gross measure. GDP is the most accurate measure of aggregate economic activity. However, NNI represents the income actually available to finance consumption and new investment (excluding the replacement of capital consumed in production). 
It is therefore a more accurate measure of economic welfare. Inclusions. In UNSNA, included are: Exclusions. In UNSNA, excluded are: Gross and net capital stocks. In UNSNA, the value at current prices of the "gross capital stock" is obtained by using price indices for fixed assets at current replacement cost, irrespective of the age of the assets. The net, or written-down, value of a fixed capital asset is equal to its current replacement cost, less CFC accrued up to that point in time. Criticism. The main criticism made of the way national accounts value CFC is that, in trying to arrive at an "economic" concept and magnitude of depreciation, they arrive at figures which are at variance with standard accounting practices. The business income cited in the social account is not the business income reported in profit and loss statements, but an economic income measure which is derived from accounting business income. Thus, the criticism centres both on the valuation principles used, and on the additional items included in the aggregate which are not directly related to depreciation charges in business accounts. Yet the whole computation affects the aggregate profit figures provided. Because of the way CFC is calculated, aggregate profit (or operating surplus, the residual item in the product account) is likely to differ from the accounting profit calculation, which is usually derived from tax data. In Marxian economics, the official concept of CFC is also disputed, because it is argued that CFC really should refer to the value transferred by living labor from fixed assets to new output. Consequently, operating expenditures associated with fixed assets other than depreciation should be regarded either as circulating constant capital, faux frais of production or surplus value, depending on the case. Furthermore, the measured difference between economic depreciation and actual depreciation charges will either raise or lower the magnitude of total surplus value.
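As a small numeric sketch of the identity in the Calculation section above (the figures below are invented purely for illustration):

```python
# Illustrative (made-up) figures, in billions of currency units.
gdp = 21000.0  # gross domestic product
cca = 3300.0   # capital consumption allowance (consumption of fixed capital)

ndp = gdp - cca        # net domestic product: GDP less depreciation
cca_share = cca / gdp  # fraction of output needed just to maintain the capital stock

print(f"NDP = {ndp:.1f}")                     # NDP = 17700.0
print(f"CFC share of GDP = {cca_share:.1%}")  # CFC share of GDP = 15.7%
```

A share of this order is consistent with the UNSNA remark quoted above that CFC "may account for 10 per cent or more of total GDP".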
[ { "math_id": 0, "text": "GDP = NDP + CCA." } ]
https://en.wikipedia.org/wiki?curid=1447420
14478153
Stars and bars (combinatorics)
Graphical aid for deriving some concepts in combinatorics In the context of combinatorial mathematics, stars and bars (also called "sticks and stones", "balls and bars", and "dots and dividers") is a graphical aid for deriving certain combinatorial theorems. It can be used to solve many simple counting problems, such as how many ways there are to put n indistinguishable balls into k distinguishable bins. The coefficients given by theorems one and two are those used for the two different support ranges of the negative binomial probability distribution. Statements of theorems. The stars and bars method is often introduced specifically to prove the following two theorems of elementary combinatorics concerning the number of solutions to an equation. Theorem one. For any pair of positive integers n and k, the number of k-tuples of positive integers whose sum is n is equal to the number of ("k" − 1)-element subsets of a set with "n" − 1 elements. For example, if "n" = 10 and "k" = 4, the theorem gives the number of solutions to "x"1 + "x"2 + "x"3 + "x"4 = 10 (with "x"1, "x"2, "x"3, "x"4 &gt; 0) as the binomial coefficient formula_0 This corresponds to compositions of an integer. Theorem two. For any pair of positive integers n and k, the number of k-tuples of non-negative integers whose sum is n is equal to the number of multisets of cardinality "n" taken from a set of size "k", or equivalently, the number of multisets of cardinality "k" − 1 taken from a set of size "n" + 1. For example, if "n" = 10 and "k" = 4, the theorem gives the number of solutions to "x"1 + "x"2 + "x"3 + "x"4 = 10 (with "x"1, "x"2, "x"3, "x"4 formula_1) as: formula_2 formula_3 formula_4 This corresponds to weak compositions of an integer. Proofs via the method of stars and bars. Theorem one proof. Suppose there are "n" objects (represented here by stars) to be placed into "k" bins, such that all bins contain at least one object. The bins are distinguishable (say they are numbered 1 to "k") but the "n" stars are not (so configurations are only distinguished by the "number of stars" present in each bin). A configuration is thus represented by a "k"-tuple of positive integers, as in the statement of the theorem. For example, with "n" = 7 and "k" = 3, start by placing the stars in a line: Fig. 1: Seven objects, represented by stars The configuration is determined once it is known which star is the first to go to the second bin, which is the first to go to the third bin, and so on. This is indicated by placing "k" − 1 bars between the stars. Because no bin is allowed to be empty (all the variables are positive), there is at most one bar between any pair of stars. For example: Fig. 2: These two bars give rise to three bins containing 4, 1, and 2 objects There are "n" − 1 gaps between stars. A configuration is obtained by choosing "k" − 1 of these gaps to contain a bar; therefore there are formula_5 possible combinations. Theorem two proof. In this case, the weakened restriction of non-negativity instead of positivity means that we can place multiple bars between stars, as well as before the first star and after the last star. For example, when "n" = 7 and "k" = 5, the tuple (4, 0, 1, 2, 0) may be represented by the following diagram: Fig. 3: These four bars give rise to five bins containing 4, 0, 1, 2, and 0 objects To see that there are formula_6 possible arrangements, observe that any arrangement of stars and bars consists of a total of "n" + "k" − 1 objects, "n" of which are stars and "k" − 1 of which are bars.
Thus, we only need to choose "k" − 1 of the "n" + "k" − 1 positions to be bars (or, equivalently, choose "n" of the positions to be stars). Theorem 1 can now be restated in terms of Theorem 2, because the requirement that all the variables are positive is equivalent to pre-assigning each variable a "1", and asking for the number of solutions when each variable is non-negative. For example: formula_7 with formula_8 is equivalent to: formula_9 with formula_10 Proofs by generating functions. Both cases are very similar; we will look at the case formula_11 first. The 'bucket' becomes formula_12 This can also be written as formula_13 and the exponent of x tells us how many balls are placed in the bucket. Each additional bucket is represented by another formula_12, and so the final generating function is formula_14 As we only have n balls, we want the coefficient of formula_15 (written formula_16) of this function: formula_17 This is a well-known generating function: it generates the diagonals in Pascal's Triangle, and the coefficient of formula_15 is formula_18 For the case when formula_19, we need to add a factor of x to the numerator to indicate that at least one ball is in the bucket: formula_20 and the coefficient of formula_15 is formula_21 Examples. Many elementary word problems in combinatorics are resolved by the theorems above. Example 1. If one wishes to count the number of ways to distribute seven indistinguishable one-dollar coins among Amber, Ben, and Curtis so that each of them receives at least one dollar, one may observe that distributions are essentially equivalent to tuples of three positive integers whose sum is 7. (Here the first entry in the tuple is the number of coins given to Amber, and so on.) Thus stars and bars theorem 1 applies, with "n" = 7 and "k" = 3, and there are formula_22 ways to distribute the coins. Example 2. If "n" = 5, "k" = 4, and a set of size k is {a, b, c, d}, then ★|★★★||★ could represent either the multiset {a, b, b, b, d} or the 4-tuple (1, 3, 0, 1). The representation of any multiset for this example should use SAB2 with "n" = 5, "k" – 1 = 3 bars to give formula_23. Example 3. SAB2 allows for more bars than stars, which isn't permitted in SAB1. So, for example, 10 balls into 7 bins is formula_24, while 7 balls into 10 bins is formula_25, with 6 balls into 11 bins as formula_26 Example 4. If we have the infinite power series formula_27 we can use this method to compute the Cauchy product of m copies of the series. For the nth term of the expansion, we are picking n powers of x from m separate locations. Hence there are formula_28 ways to form our nth power: formula_29 Example 5. The graphical method was used by Paul Ehrenfest and Heike Kamerlingh Onnes—with symbol ε (quantum energy element) in place of a star and the symbol 0 in place of a bar—as a simple derivation of Max Planck's expression for the number of "complexions" for a system of "resonators" of a single frequency. By complexions (microstates) Planck meant distributions of P energy elements ε over N resonators. The number R of complexions is formula_30 The graphical representation of each possible distribution would contain P copies of the symbol ε and "N" – 1 copies of the symbol 0. In their demonstration, Ehrenfest and Kamerlingh Onnes took "N" = 4 and "P" = 7 ("i.e.", "R" = 120 combinations). They chose the 4-tuple (4, 2, 0, 1) as the illustrative example for this symbolic representation: εεεε0εε00ε.
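Both theorems can be checked by brute force for small parameters; the sketch below enumerates all tuples directly (exponential in "k", so only suitable for small inputs):

```python
from itertools import product
from math import comb

def count_tuples(n, k, positive):
    """Count k-tuples of integers summing to n, all positive or all non-negative."""
    lo = 1 if positive else 0
    return sum(1 for t in product(range(lo, n + 1), repeat=k) if sum(t) == n)

n, k = 10, 4
# Theorem one: positive solutions of x1 + ... + xk = n  ->  C(n-1, k-1)
assert count_tuples(n, k, positive=True) == comb(n - 1, k - 1) == 84
# Theorem two: non-negative solutions  ->  C(n+k-1, k-1)
assert count_tuples(n, k, positive=False) == comb(n + k - 1, k - 1) == 286
print("both counts verified for n = 10, k = 4")
```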
[ { "math_id": 0, "text": "\\binom{n - 1}{k - 1} = \\binom{10 - 1}{4 - 1} = \\binom{9}{3} = 84." }, { "math_id": 1, "text": "\\ge0" }, { "math_id": 2, "text": "\\left(\\!\\!{k\\choose n}\\!\\!\\right) = {k+n-1 \\choose n} = \\binom{13}{10} = 286" }, { "math_id": 3, "text": "\\left(\\!\\!{n+1\\choose k-1}\\!\\!\\right) = {n+1+k-1-1 \\choose k-1} = \\binom{13}{3} = 286" }, { "math_id": 4, "text": "\\binom{n + k - 1}{k - 1} = \\binom{10+4-1}{4 - 1} = \\binom{13}{3} = 286" }, { "math_id": 5, "text": "\\tbinom{n - 1}{k - 1}" }, { "math_id": 6, "text": "\\tbinom{n + k - 1}{k-1}" }, { "math_id": 7, "text": "x_1+x_2+x_3+x_4=10" }, { "math_id": 8, "text": "x_1,x_2,x_3,x_4>0" }, { "math_id": 9, "text": "x_1+x_2+x_3+x_4=6" }, { "math_id": 10, "text": "x_1,x_2,x_3,x_4\\ge0" }, { "math_id": 11, "text": "x_i\\ge0" }, { "math_id": 12, "text": "\\frac{1}{1-x}" }, { "math_id": 13, "text": "1+x+x^2+\\dots" }, { "math_id": 14, "text": "\\frac{1}{1-x}\\frac{1}{1-x}\\dots\\frac{1}{1-x} = \\frac{1}{(1-x)^k}" }, { "math_id": 15, "text": "x^n" }, { "math_id": 16, "text": "[x^n]:" }, { "math_id": 17, "text": "[x^n]: \\frac{1}{(1-x)^k}" }, { "math_id": 18, "text": "\\binom{n+k-1}{k-1}" }, { "math_id": 19, "text": "x_i>0" }, { "math_id": 20, "text": "\\frac{x}{1-x}\\frac{x}{1-x}\\dots\\frac{x}{1-x} = \\frac{x^k}{(1-x)^k}" }, { "math_id": 21, "text": "\\binom{n-1}{k-1}" }, { "math_id": 22, "text": "\\tbinom{7-1}{3-1} = 15" }, { "math_id": 23, "text": "\\tbinom{5+4-1}{4-1} = \\tbinom{8}{3} = 56" }, { "math_id": 24, "text": "\\tbinom{16}{6}" }, { "math_id": 25, "text": "\\tbinom{16}{9}" }, { "math_id": 26, "text": "\\tbinom{16}{10}=\\tbinom{16}{6}." }, { "math_id": 27, "text": "\\left[\\sum_{k=1}^{\\infty}x^k\\right]," }, { "math_id": 28, "text": "\\tbinom{n-1}{m-1}" }, { "math_id": 29, "text": "\\left[\\sum_{k=1}^{\\infty}x^k\\right]^{m} = \\sum_{n=m}^{\\infty}{{n-1} \\choose {m-1}}x^{n}\n" }, { "math_id": 30, "text": "R=\\frac {(N+P-1)!}{P!(N-1)!}. \\ " } ]
https://en.wikipedia.org/wiki?curid=14478153
1447904
Kerr–Newman metric
Solution of Einstein field equations The Kerr–Newman metric describes the spacetime geometry around a mass which is electrically charged and rotating. It is an electrovacuum solution which generalizes the Kerr metric (which describes an uncharged, rotating mass) by additionally taking into account the energy of an electromagnetic field, making it the most general asymptotically flat and stationary solution of the Einstein–Maxwell equations in general relativity. As an electrovacuum solution, it only includes those charges associated with the magnetic field; it does not include any free electric charges. Because observed astronomical objects do not possess an appreciable net electric charge (the magnetic fields of stars arise through other processes), the Kerr–Newman metric is primarily of theoretical interest. The model lacks description of infalling baryonic matter, light (null dusts) or dark matter, and thus provides an incomplete description of stellar mass black holes and active galactic nuclei. The solution, however, is of mathematical interest and provides a fairly simple cornerstone for further exploration. The Kerr–Newman solution is a special case of more general exact solutions of the Einstein–Maxwell equations with non-zero cosmological constant. History. In December of 1963, Roy Kerr and Alfred Schild found the Kerr–Schild metrics that gave all Einstein spaces that are exact linear perturbations of Minkowski space. In early 1964, Kerr looked for all Einstein–Maxwell spaces with this same property. By February of 1964, the special case where the Kerr–Schild spaces were charged (including the Kerr–Newman solution) was known, but the general case where the special directions were not geodesics of the underlying Minkowski space proved very difficult. The problem was given to George Debney to try to solve, but he gave it up by March 1964. Around this time, Ezra T. Newman found the solution for the charged Kerr case by guesswork. In 1965, Ezra "Ted" Newman found the axisymmetric solution of Einstein's field equation for a black hole which is both rotating and electrically charged. This formula for the metric tensor formula_0 is called the Kerr–Newman metric. It is a generalisation of the Kerr metric for an uncharged spinning point-mass, which had been discovered by Roy Kerr two years earlier. Four related solutions may be summarized as follows: a non-rotating, uncharged mass ("J" = 0, "Q" = 0) is described by the Schwarzschild metric; a non-rotating, charged mass ("J" = 0, "Q" ≠ 0) by the Reissner–Nordström metric; a rotating, uncharged mass ("J" ≠ 0, "Q" = 0) by the Kerr metric; and a rotating, charged mass ("J" ≠ 0, "Q" ≠ 0) by the Kerr–Newman metric, where "Q" represents the body's electric charge and "J" represents its spin angular momentum. Overview of the solution. Newman's result represents the simplest stationary, axisymmetric, asymptotically flat solution of Einstein's equations in the presence of an electromagnetic field in four dimensions. It is sometimes referred to as an "electrovacuum" solution of Einstein's equations. Any Kerr–Newman source has its rotation axis aligned with its magnetic axis. Thus, a Kerr–Newman source is different from commonly observed astronomical bodies, for which there is a substantial angle between the rotation axis and the magnetic moment. Specifically, neither the Sun nor any of the planets in the Solar System has magnetic fields aligned with the spin axis. Thus, while the Kerr solution describes the gravitational field of the Sun and planets, the magnetic fields arise by a different process. If the Kerr–Newman potential is considered as a model for a classical electron, it predicts an electron having not just a magnetic dipole moment, but also other multipole moments, such as an electric quadrupole moment.
An electron quadrupole moment has not yet been experimentally detected; it appears to be zero. In the "G" = 0 limit, the electromagnetic fields are those of a charged rotating disk inside a ring where the fields are infinite. The total field energy for this disk is infinite, and so this "G" = 0 limit does not solve the problem of infinite self-energy. Like the Kerr metric for an uncharged rotating mass, the Kerr–Newman interior solution exists mathematically but is probably not representative of the actual metric of a physically realistic rotating black hole, owing to instability of the Cauchy horizon under mass inflation driven by infalling matter. Although it represents a generalization of the Kerr metric, it is not considered very important for astrophysical purposes, since one does not expect that realistic black holes have a significant electric charge (they are expected to have a minuscule positive charge, but only because the proton has a much larger momentum than the electron, and is thus more likely to overcome electrostatic repulsion and be carried by momentum across the horizon). The Kerr–Newman metric defines a black hole with an event horizon only when the combined charge and angular momentum are sufficiently small: formula_1 An electron's angular momentum "J" and charge "Q" (suitably specified in geometrized units) both exceed its mass "M", in which case the metric has no event horizon. Thus, there can be no such thing as a black hole electron: only a naked spinning ring singularity. Such a metric has several seemingly unphysical properties, such as the ring's violation of the cosmic censorship hypothesis, and also the appearance of causality-violating closed timelike curves in the immediate vicinity of the ring. A 2009 paper by Russian theorist Alexander Burinskii considered an electron as a generalization of the previous models by Israel (1970) and Lopez (1984), which truncated the "negative" sheet of the Kerr–Newman metric, obtaining the source of the Kerr–Newman solution in the form of a relativistically rotating disk. Lopez's truncation regularized the Kerr–Newman metric by a cutoff at formula_2, replacing the singularity by a flat regular space-time, the so-called "bubble". Assuming that the Lopez bubble corresponds to a phase transition similar to the Higgs symmetry breaking mechanism, Burinskii showed that the gravity-created ring singularity is regularized, forming the superconducting core of the electron model, which should be described by the supersymmetric Landau–Ginzburg field model of phase transition. Omitting Burinskii's intermediate work, we come to the recent new proposal: to consider the negative sheet of the KN solution, truncated by Israel and Lopez, as the sheet of the positron. This modification unites the KN solution with the model of QED, and shows the important role of the Wilson lines formed by frame-dragging of the vector potential. As a result, the modified KN solution acquires a strong interaction with Kerr's gravity caused by the additional energy contribution of the electron-positron vacuum, and creates the Kerr–Newman relativistic circular string of Compton size. Limiting cases. The Kerr–Newman metric can be seen to reduce to other exact solutions in general relativity in limiting cases. It reduces to the Kerr metric as the charge "Q" goes to zero, to the Reissner–Nordström metric as the angular momentum "J" (or "a") goes to zero, to the Schwarzschild metric when both "Q" and "J" are zero, and to Minkowski space when the mass "M" vanishes as well. Alternately, if gravity is intended to be removed, Minkowski space arises if the gravitational constant "G" is zero, without taking the mass and charge to zero.
In this case, the electric and magnetic fields are more complicated than simply the fields of a charged magnetic dipole; the zero-gravity limit is not trivial. The metric. The Kerr–Newman metric describes the geometry of spacetime for a rotating charged black hole with mass "M", charge "Q" and angular momentum "J". The formula for this metric depends upon what coordinates or coordinate conditions are selected. Two forms are given below: Boyer–Lindquist coordinates and Kerr–Schild coordinates. The gravitational metric alone is not sufficient to determine a solution to the Einstein field equations; the electromagnetic stress tensor must be given as well. Both are provided in each section. Boyer–Lindquist coordinates. One way to express this metric is by writing down its line element in a particular set of spherical coordinates, also called Boyer–Lindquist coordinates: formula_3 where the coordinates ("r", "θ", "ϕ") are standard spherical coordinates, and the length scales: formula_4 formula_5 formula_6 have been introduced for brevity. Here "r"s is the Schwarzschild radius of the massive body, which is related to its total mass-equivalent "M" by formula_7 where "G" is the gravitational constant, and "r""Q" is a length scale corresponding to the electric charge "Q" of the mass formula_8 where "ε"0 is the vacuum permittivity. Electromagnetic field tensor in Boyer–Lindquist form. The electromagnetic potential in Boyer–Lindquist coordinates is formula_9 while the Maxwell tensor is defined by formula_10 In combination with the Christoffel symbols the second-order equations of motion can be derived with formula_11 where formula_12 is the charge per mass of the test particle. Kerr–Schild coordinates. The Kerr–Newman metric can be expressed in the Kerr–Schild form, using a particular set of Cartesian coordinates, proposed by Kerr and Schild in 1965. The metric is as follows. formula_13 formula_14 formula_15 formula_16 Notice that k is a unit vector. Here "M" is the constant mass of the spinning object, "Q" is the constant charge of the spinning object, "η" is the Minkowski metric, and "a" = "J"/"M" is a constant rotational parameter of the spinning object. It is understood that the vector formula_17 is directed along the positive z-axis, i.e. formula_18. The quantity "r" is not the radius, but rather is implicitly defined by the relation formula_19 Notice that the quantity "r" becomes the usual radius "R" formula_20 when the rotational parameter "a" approaches zero. In this form of solution, units are selected so that the speed of light is unity ("c" = 1). In order to provide a complete solution of the Einstein–Maxwell equations, the Kerr–Newman solution not only includes a formula for the metric tensor, but also a formula for the electromagnetic potential: formula_21 At large distances from the source ("R" ≫ "a"), these equations reduce to the Reissner–Nordström metric with: formula_22 In the Kerr–Schild form of the Kerr–Newman metric, the determinant of the metric tensor is everywhere equal to negative one, even near the source. Electromagnetic fields in Kerr–Schild form. The electric and magnetic fields can be obtained in the usual way by differentiating the four-potential to obtain the electromagnetic field strength tensor. It will be convenient to switch over to three-dimensional vector notation.
formula_23 The static electric and magnetic fields are derived from the vector potential and the scalar potential like this: formula_24 formula_25 Using the Kerr–Newman formula for the four-potential in the Kerr–Schild form, in the limit of the mass going to zero, yields the following concise complex formula for the fields: formula_26 formula_27 The quantity omega (formula_28) in this last equation is similar to the Coulomb potential, except that the radius vector is shifted by an imaginary amount. This complex potential was discussed as early as the nineteenth century, by the French mathematician Paul Émile Appell. Irreducible mass. The total mass-equivalent "M", which contains the electric field-energy and the rotational energy, and the irreducible mass "M"irr are related by formula_29 which can be inverted to obtain formula_30 In order to electrically charge and/or spin a neutral and static body, energy has to be applied to the system. Due to the mass–energy equivalence, this energy also has a mass-equivalent; therefore "M" is always higher than "M"irr. If, for example, the rotational energy of a black hole is extracted via the Penrose process, the remaining mass–energy will always stay greater than or equal to "M"irr. Important surfaces. Setting formula_31 to 0 and solving for formula_32 gives the inner and outer event horizons, which are located at the Boyer–Lindquist coordinates formula_33 Repeating this step with formula_34 gives the inner and outer ergosphere: formula_35 Equations of motion. For brevity, we further use nondimensionalized quantities normalized against formula_36, formula_37, formula_38 and formula_39, where formula_40 reduces to formula_41 and formula_42 to formula_43, and the equations of motion for a test particle of charge formula_12 become formula_44 formula_45 formula_46 formula_47 with formula_48 for the total energy and formula_49 for the axial angular momentum. formula_50 is the Carter constant: formula_51 where formula_52 is the poloidal component of the test particle's angular momentum, and formula_53 the orbital inclination angle. formula_54 and formula_55, with formula_56 for massless particles and formula_57 for particles with mass, are also conserved quantities. formula_58 is the frame-dragging-induced angular velocity. The shorthand term formula_59 is defined by formula_60 The relation between the coordinate derivatives formula_61 and the local 3-velocity formula_62 is formula_63 for the radial, formula_64 for the poloidal, formula_65 for the axial and formula_66 for the total local velocity, where formula_67 is the axial radius of gyration (local circumference divided by 2π), and formula_68 the gravitational time dilation component. The local radial escape velocity for a neutral particle is therefore formula_69 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
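A small numeric sketch of the horizon formulas above, in geometrized units ("G" = "c" = 1); the sample values are chosen arbitrarily for illustration:

```python
import math

def kerr_newman_horizons(M, a, rQ):
    """Inner/outer event horizons r_H(+-) = r_s/2 +- sqrt(r_s^2/4 - a^2 - rQ^2),
    with r_s = 2M in geometrized units. Returns None when the extremality bound
    a^2 + rQ^2 <= M^2 is violated (no event horizon, a naked ring singularity)."""
    rs = 2.0 * M
    disc = rs**2 / 4.0 - a**2 - rQ**2   # equals M^2 - a^2 - rQ^2
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return rs / 2.0 + root, rs / 2.0 - root

# Sub-extremal example: M = 1, a = 0.6, rQ = 0.5 (a^2 + rQ^2 = 0.61 < 1)
print(kerr_newman_horizons(1.0, 0.6, 0.5))   # approx (1.6245, 0.3755)
# Over-extremal example, as in the electron discussion above:
print(kerr_newman_horizons(1.0, 1.2, 0.5))   # None
```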
[ { "math_id": 0, "text": "g_{\\mu \\nu} \\!" }, { "math_id": 1, "text": "J^2/M^2 + Q^2 \\leq M^2." }, { "math_id": 2, "text": " r= r_e=e^2/2M " }, { "math_id": 3, "text": "c^{2} d\\tau^{2} = \n-\\left(\\frac{dr^2}{\\Delta} + d\\theta^2 \\right) \\rho^2 + \\left(c \\, dt - a \\sin^2 \\theta \\, d\\phi \\right)^2 \\frac{\\Delta}{\\rho^2} - \\left(\\left(r^2 + a^2 \\right) d\\phi - a c\\, dt \\right)^2 \\frac{\\sin^2 \\theta}{\\rho^2}," }, { "math_id": 4, "text": "a = \\frac{J}{Mc}\\,," }, { "math_id": 5, "text": "\\rho^{2}=r^2+a^2\\cos^2\\theta\\,," }, { "math_id": 6, "text": "\\Delta=r^2-r_\\text{s}r+a^2+r_Q^2\\,," }, { "math_id": 7, "text": "r_{s} = \\frac{2GM}{c^{2}}," }, { "math_id": 8, "text": "r_{Q}^{2} = \\frac{Q^{2}G}{4\\pi\\epsilon_{0} c^{4}}," }, { "math_id": 9, "text": "A_{\\mu}=\\left( \\frac{r \\ r_Q }{\\rho^2},0,0,-\\frac{\\ a \\ r \\ r_Q \\sin ^2 \\theta }{\\rho^2 } \\right) " }, { "math_id": 10, "text": "F_{\\mu\\nu} = \\frac{\\partial A_\\nu}{\\partial x^{\\mu}} - \\frac{\\partial A_\\mu}{\\partial x^{\\nu}} \\ \\to \\ F^{\\mu\\nu}=g^{\\mu\\sigma} \\ g^{\\nu\\kappa} \\ F_{\\sigma \\kappa}" }, { "math_id": 11, "text": "{{\\ddot x^i = - \\Gamma^i_{j k} \\ {\\dot x^j} \\ {\\dot x^k} + q \\ {F^{i k}} \\ {\\dot x^j}} \\ {g_{j k}}}," }, { "math_id": 12, "text": "q" }, { "math_id": 13, "text": "g_{\\mu \\nu} = \\eta_{\\mu \\nu} + fk_{\\mu}k_{\\nu} \\!" }, { "math_id": 14, "text": "f = \\frac{Gr^2}{r^4 + a^2z^2}\\left[2Mr - Q^2 \\right]" }, { "math_id": 15, "text": "\\mathbf{k} = ( k_{x} ,k_{y} ,k_{z} ) = \\left( \\frac{rx+ay}{r^2 + a^2} , \\frac{ry-ax}{r^2 + a^2}, \\frac{z}{r} \\right) " }, { "math_id": 16, "text": "k_{0} = 1. \\!" }, { "math_id": 17, "text": "\\vec{a}" }, { "math_id": 18, "text": "\\vec{a} = a \\hat{z}" }, { "math_id": 19, "text": "1 = \\frac{x^2+y^2}{r^2 + a^2} + \\frac{z^2}{r^2}." }, { "math_id": 20, "text": "r \\to R = \\sqrt{x^2 + y^2 + z^2}" }, { "math_id": 21, "text": "A_{\\mu} = \\frac{Qr^3}{r^4 + a^2z^2}k_{\\mu}" }, { "math_id": 22, "text": "A_{\\mu} = \\frac{Q}{R}k_{\\mu}" }, { "math_id": 23, "text": "A_{\\mu} = \\left(-\\phi, A_x, A_y, A_z \\right) \\," }, { "math_id": 24, "text": "\\vec{E} = - \\vec{\\nabla} \\phi \\," }, { "math_id": 25, "text": "\\vec{B} = \\vec{\\nabla} \\times \\vec{A} \\," }, { "math_id": 26, "text": "\\vec{E} + i\\vec{B} = -\\vec{\\nabla}\\Omega\\," }, { "math_id": 27, "text": "\\Omega = \\frac{Q}{\\sqrt{(\\vec{R}-i\\vec{a})^2}} \\," }, { "math_id": 28, "text": "\\Omega" }, { "math_id": 29, "text": "\nM_{\\rm irr} = \\frac{1}{2}\\sqrt{2 M^2-r^2_Q c^4/G^2+2 M \\sqrt{M^2-(r^2_Q +a^2) c^4/G^2}}\n" }, { "math_id": 30, "text": "\nM = \\frac{4 M_{\\rm irr}^2+r^2_Q c^4/G^2}{2\\sqrt{4 M_{\\rm irr}^2- a^2 c^4/G^2}}\n" }, { "math_id": 31, "text": "1 / g_{rr}" }, { "math_id": 32, "text": "r" }, { "math_id": 33, "text": "r_{\\text{H}}^{\\pm} = \\frac{r_{\\rm s}}{2} \\pm \\sqrt{\\frac{r_{\\rm s}^2}{4} - a^2 - r_Q^2}." }, { "math_id": 34, "text": "g_{tt}" }, { "math_id": 35, "text": "r_{\\text{E}}^{\\pm} = \\frac{r_{\\rm s}}{2} \\pm \\sqrt{\\frac{r_{\\rm s}^2}{4} - a^2 \\cos^2\\theta - r_Q^2}." 
}, { "math_id": 36, "text": "G" }, { "math_id": 37, "text": "M" }, { "math_id": 38, "text": "c" }, { "math_id": 39, "text": "4\\pi\\epsilon_0" }, { "math_id": 40, "text": "a" }, { "math_id": 41, "text": "Jc/G/M^2" }, { "math_id": 42, "text": "Q" }, { "math_id": 43, "text": "Q/(M\\sqrt{4\\pi\\epsilon_0G})" }, { "math_id": 44, "text": "\\dot t = \\frac{\\csc ^2 \\theta \\ ({L_z} (a \\ \\Delta \\sin ^2 \\theta -a \\ (a^2+r^2) \\sin ^2 \\theta )-q \\ Q \\ r \\ (a^2+r^2) \\sin ^2 \\theta +E ((a^2+r^2)^2 \\sin ^2 \\theta -a^2 \\Delta \\sin ^4 \\theta ))}{\\Delta \\rho^2 }" }, { "math_id": 45, "text": "\\dot r = \\pm \\frac{\\sqrt{((r^2+a^2) \\ E - a \\ L_z - q \\ Q \\ r)^2-\\Delta \\ (C+r^2)}}{\\rho^2}" }, { "math_id": 46, "text": "\\dot \\theta = \\pm \\frac{\\sqrt{C-(a \\cos \\theta)^2-(a \\ \\sin^2 \\theta \\ E-L_z)^2/\\sin^2 \\theta}}{\\rho^2}" }, { "math_id": 47, "text": "\\dot \\phi = \\frac{E \\ (a \\ \\sin^2 \\theta \\ (r^2+a^2)-a \\ \\sin^2 \\theta \\ \\Delta)+L_z \\ (\\Delta-a^2 \\ \\sin^2 \\theta)-q \\ Q \\ r \\ a \\ \\sin^2 \\theta}{\\rho^2 \\ \\Delta \\ \\sin^2\\theta}" }, { "math_id": 48, "text": "E" }, { "math_id": 49, "text": "L_z" }, { "math_id": 50, "text": "C" }, { "math_id": 51, "text": "C = p_{\\theta}^{2} + \\cos^{2}\\theta \\left( a^{2}(\\mu^2 - E^{2}) + \\frac{L_z^2}{ \\sin^2\\theta}\\right) = a^2 \\ (\\mu^2-E^2) \\ \\sin^2 \\delta + L_z^2 \\ \\tan^2 \\delta = {\\rm const}," }, { "math_id": 52, "text": "p_{\\theta} = \\dot \\theta \\ \\rho^2" }, { "math_id": 53, "text": "\\delta" }, { "math_id": 54, "text": "L_z = p_{\\phi}=-g_{\\phi \\phi} {\\dot{\\phi}}-g_{t \\phi} {\\dot{t}} - q \\ A_{\\phi} = \\frac{v^{\\phi} \\ \\bar R}{\\sqrt{1-\\mu^2 v^2}}+\\frac{(1-\\mu^2 v^2) \\ a \\ r \\ \\mho \\ q \\ \\sin ^2 \\theta }{\\Sigma } = {\\rm const.}" }, { "math_id": 55, "text": "E = -p_t =g_{tt} {\\dot{t}}+g_{t \\phi} {\\dot{\\phi}} + q \\ A_{t} = \\sqrt{\\frac{\\Delta \\ \\rho^2}{(1-\\mu^2 v^2) \\ \\chi}} + \\Omega \\ L_z +\\frac{\\mho \\ q \\ r }{\\Sigma} = {\\rm const.}" }, { "math_id": 56, "text": " \\mu^2=0 " }, { "math_id": 57, "text": " \\mu^2=1 " }, { "math_id": 58, "text": "\\Omega = -\\frac{g_{t\\phi}}{g_{\\phi\\phi}} = \\frac{a \\left(2 r-Q^2\\right)}{\\chi }" }, { "math_id": 59, "text": "\\chi" }, { "math_id": 60, "text": "\\chi = \\left(a ^2+r^2\\right)^2-a ^2 \\ \\sin ^2 \\theta \\ \\Delta." }, { "math_id": 61, "text": "\\dot r, \\ \\dot \\theta, \\ \\dot \\phi" }, { "math_id": 62, "text": "v" }, { "math_id": 63, "text": "v^{r} = \\dot r \\ \\sqrt{\\frac{\\rho^2 \\ (1-\\mu^2 v^2)}{\\Delta}}" }, { "math_id": 64, "text": "v^{\\theta} = \\dot \\theta \\ \\sqrt{\\rho^2 \\ (1-\\mu^2 v^2) }" }, { "math_id": 65, "text": "v^{\\phi} = \\frac{\\sqrt{1-\\mu^2 v^2} \\left(L_z \\ \\Sigma - a \\ q \\ Q \\ r \\left( 1-\\mu^2 v^2 \\right) \\sin^2 \\theta \\right)}{\\bar{R} \\ \\Sigma }" }, { "math_id": 66, "text": "v = \\frac{\\sqrt{\\dot t^2-\\varsigma^2}}{\\dot t} = \\sqrt{\\frac{\\chi \\ (E-L_z \\ \\Omega )^2 -\\Delta \\ \\rho^2}{\\chi \\ (E-L_z \\ \\Omega )^2}}" }, { "math_id": 67, "text": "\\bar R = \\sqrt{-g_{\\phi \\phi}} = \\sqrt{\\frac{\\chi}{\\rho^2}} \\ \\sin \\theta" }, { "math_id": 68, "text": "\\varsigma = \\sqrt{g^{t t}} = \\frac{\\chi }{\\Delta \\ \\rho^2}" }, { "math_id": 69, "text": "v_{\\rm esc}=\\frac{\\sqrt{\\varsigma^2-1}}{\\varsigma} ." } ]
https://en.wikipedia.org/wiki?curid=1447904
1447921
Pp-wave spacetime
In general relativity, the pp-wave spacetimes, or pp-waves for short, are an important family of exact solutions of Einstein's field equation. The term "pp" stands for "plane-fronted waves with parallel propagation", and was introduced in 1962 by Jürgen Ehlers and Wolfgang Kundt. Overview. The pp-wave solutions model radiation moving at the speed of light. This radiation may consist of electromagnetic radiation, gravitational radiation, or massless radiation of some other hypothetical kind, or any combination of these, so long as the radiation is all moving in the "same" direction. A special type of pp-wave spacetime, the plane wave spacetimes, provide the most general analogue in general relativity of the plane waves familiar to students of electromagnetism. In particular, in general relativity, we must take into account the gravitational effects of the energy density of the electromagnetic field itself. When we do this, "purely electromagnetic plane waves" provide the direct generalization of ordinary plane wave solutions in Maxwell's theory. Furthermore, in general relativity, disturbances in the gravitational field itself can propagate, at the speed of light, as "wrinkles" in the curvature of spacetime. Such "gravitational radiation" is the gravitational field analogue of electromagnetic radiation. In general relativity, the gravitational analogue of electromagnetic plane waves are precisely the vacuum solutions among the plane wave spacetimes. They are called gravitational plane waves. There are physically important examples of pp-wave spacetimes which are "not" plane wave spacetimes. In particular, the physical experience of an observer who whizzes by a gravitating object (such as a star or a black hole) at nearly the speed of light can be modelled by an "impulsive" pp-wave spacetime called the Aichelburg–Sexl ultraboost. The gravitational field of a beam of light is modelled, in general relativity, by a certain axi-symmetric pp-wave. An example of a pp-wave arising when gravity is coupled to matter is the gravitational field surrounding a neutral Weyl fermion: the system consists of a gravitational field that is a pp-wave, no electrodynamic radiation, and a massless spinor exhibiting axial symmetry. In the Weyl–Lewis–Papapetrou spacetime, there exists a complete set of exact solutions for both gravity and matter. Pp-waves were introduced by Hans Brinkmann in 1925 and have been rediscovered many times since, most notably by Albert Einstein and Nathan Rosen in 1937. Mathematical definition. A "pp-wave spacetime" is any Lorentzian manifold whose metric tensor can be described, with respect to Brinkmann coordinates, in the form formula_0 where formula_1 is any smooth function. This was the original definition of Brinkmann, and it has the virtue of being easy to understand. The definition which is now standard in the literature is more sophisticated. It makes no reference to any coordinate chart, so it is a coordinate-free definition. It states that any Lorentzian manifold which admits a "covariantly constant" null vector field formula_2 is a pp-wave spacetime. That is, the covariant derivative of formula_2 must vanish identically: formula_3 This definition was introduced by Ehlers and Kundt in 1962. To relate Brinkmann's definition to this one, take formula_4, the coordinate vector orthogonal to the hypersurfaces formula_5. In the "index-gymnastics" notation for tensor equations, the condition on formula_2 can be written formula_6. Neither of these definitions makes any mention of any field equation; in fact, they are "entirely independent of physics".
The vacuum Einstein equations are very simple for pp-waves, and in fact linear: the metric formula_0 obeys these equations if and only if formula_7. But the definition of a pp-wave spacetime does not impose this equation, so it is entirely mathematical and belongs to the study of pseudo-Riemannian geometry. In the next section we turn to "physical interpretations" of pp-wave spacetimes. Ehlers and Kundt gave several more coordinate-free characterizations, including: Physical interpretation. It is a purely mathematical fact that the characteristic polynomial of the Einstein tensor of any pp-wave spacetime vanishes identically. Equivalently, we can find a Newman–Penrose complex null tetrad such that the Ricci-NP scalars formula_8 (describing any matter or nongravitational fields which may be present in a spacetime) and the Weyl-NP scalars formula_9 (describing any gravitational field which may be present) each have only one nonvanishing component. Specifically, with respect to the NP tetrad formula_10 formula_11 formula_12 the only nonvanishing component of the Ricci spinor is formula_13 and the only nonvanishing component of the Weyl spinor is formula_14 This means that any pp-wave spacetime can be interpreted, in the context of general relativity, as a null dust solution. Also, the Weyl tensor always has Petrov type N, as may be verified by using the Bel criteria. In other words, pp-waves model various kinds of "classical" and "massless" radiation traveling at the local speed of light. This radiation can be gravitational, electromagnetic, Weyl fermions, or some hypothetical kind of massless radiation other than these three, or any combination of these. All this radiation is traveling in the same direction, and the null vector formula_4 plays the role of a wave vector. Relation to other classes of exact solutions. Unfortunately, the terminology concerning pp-waves, while fairly standard, is highly confusing and tends to promote misunderstanding. In any pp-wave spacetime, the covariantly constant vector field formula_2 always has identically vanishing optical scalars. Therefore, pp-waves belong to the Kundt class (the class of Lorentzian manifolds admitting a null congruence with vanishing optical scalars). Going in the other direction, pp-waves include several important special cases. From the form of the Ricci spinor given in the preceding section, it is immediately apparent that a pp-wave spacetime (written in the Brinkmann chart) is a vacuum solution if and only if formula_1 is a harmonic function (with respect to the spatial coordinates formula_15). Physically, these represent purely gravitational radiation propagating along the null rays formula_16. Ehlers and Kundt, and Sippel and Gönner, have classified vacuum pp-wave spacetimes by their autometry group, or group of "self-isometries". This is always a Lie group, and as usual it is easier to classify the underlying Lie algebras of Killing vector fields. It turns out that the most general pp-wave spacetime has only one Killing vector field, the null geodesic congruence formula_17. However, for various special forms of formula_1, there are additional Killing vector fields. The most important class of particularly symmetric pp-waves are the plane wave spacetimes, which were first studied by Baldwin and Jeffery. A plane wave is a pp-wave in which formula_1 is quadratic, and can hence be transformed to the simple form formula_18 Here, formula_19 are arbitrary smooth functions of formula_20.
Physically speaking, formula_21 describe the wave profiles of the two linearly independent polarization modes of gravitational radiation which may be present, while formula_22 describes the wave profile of any nongravitational radiation. If formula_23, we have the vacuum plane waves, which are often called plane gravitational waves. Equivalently, a plane wave is a pp-wave with at least a five-dimensional Lie algebra of Killing vector fields formula_24, including formula_25 and four more which have the form formula_26 where formula_27 formula_28 Intuitively, the distinction is that the wavefronts of plane waves are truly "planar"; all points on a given two-dimensional wavefront are equivalent. This is not quite true for more general pp-waves. Plane waves are important for many reasons; to mention just one, they are essential for the beautiful topic of colliding plane waves. A more general subclass consists of the axisymmetric pp-waves, which in general have a two-dimensional Abelian Lie algebra of Killing vector fields. These are also called "SG2 plane waves", because they are the second type in the symmetry classification of Sippel and Gönner. A limiting case of certain axisymmetric pp-waves yields the Aichelburg–Sexl ultraboost modeling an ultrarelativistic encounter with an isolated spherically symmetric object. J. D. Steele has introduced the notion of generalised pp-wave spacetimes. These are nonflat Lorentzian spacetimes which admit a self-dual covariantly constant null bivector field. The name is potentially misleading, since, as Steele points out, these are nominally a "special case" of nonflat pp-waves in the sense defined above. They are only a generalization in the sense that although the Brinkmann metric form is preserved, they are not necessarily the vacuum solutions studied by Ehlers and Kundt, Sippel and Gönner, etc. Another important special class of pp-waves are the sandwich waves. These have vanishing curvature except on some range formula_29, and represent a gravitational wave moving through a Minkowski spacetime background. Relation to other theories. Since they constitute a very simple and natural class of Lorentzian manifolds, defined in terms of a null congruence, it is not very surprising that they are also important in other relativistic classical field theories of gravitation. In particular, pp-waves are exact solutions in the Brans–Dicke theory, various higher curvature theories and Kaluza–Klein theories, and certain gravitation theories of J. W. Moffat. Indeed, B. O. J. Tupper has shown that the "common" vacuum solutions in general relativity and in the Brans–Dicke theory are precisely the vacuum pp-waves (but the Brans–Dicke theory admits further wavelike solutions). Hans-Jürgen Schmidt has reformulated the theory of (four-dimensional) pp-waves in terms of a "two-dimensional" metric-dilaton theory of gravity. Pp-waves also play an important role in the search for quantum gravity, because, as Gary Gibbons has pointed out, all loop-term quantum corrections vanish identically for any pp-wave spacetime. This means that studying tree-level quantizations of pp-wave spacetimes offers a glimpse into the yet unknown world of quantum gravity. It is natural to generalize pp-waves to higher dimensions, where they enjoy similar properties to those we have discussed. C. M. Hull has shown that such "higher-dimensional pp-waves" are essential building blocks for eleven-dimensional supergravity. Geometric and physical properties. PP-waves enjoy numerous striking properties.
Some of their more abstract mathematical properties have already been mentioned. In this section a few additional properties are presented. Consider an inertial observer in Minkowski spacetime who encounters a sandwich plane wave. Such an observer will experience some interesting optical effects. If he looks into the "oncoming" wavefronts at distant galaxies which have already encountered the wave, he will see their images undistorted. This must be the case, since he cannot know the wave is coming until it reaches his location, for it is traveling at the speed of light. However, this can be confirmed by direct computation of the optical scalars of the null congruence formula_16. Now suppose that after the wave passes, our observer turns about-face and looks through the "departing" wavefronts at distant galaxies which the wave has not yet reached. Now he sees their optical images sheared and magnified (or demagnified) in a time-dependent manner. If the wave happens to be a polarized "gravitational plane wave", he will see circular images alternately squeezed horizontally while expanded vertically, and squeezed vertically while expanded horizontally. This directly exhibits the characteristic effect of a gravitational wave in general relativity on light. The effect of a passing polarized gravitational plane wave on the relative positions of a cloud of (initially static) test particles will be qualitatively very similar. We might mention here that in general, the motion of test particles in pp-wave spacetimes can exhibit chaos. The fact that Einstein's field equation is nonlinear is well known. This implies that if you have two exact solutions, there is almost never any way to linearly superimpose them. Pp-waves provide a rare exception to this rule: if you have two pp-waves sharing the same covariantly constant null vector (the same geodesic null congruence, i.e. the same wave vector field), with metric functions formula_30 respectively, then formula_31 gives a third exact solution. Roger Penrose has observed that near a null geodesic, "every Lorentzian spacetime looks like a plane wave". To show this, he used techniques imported from algebraic geometry to "blow up" the spacetime so that the given null geodesic becomes the covariantly constant null geodesic congruence of a plane wave. This construction is called a Penrose limit. Penrose also pointed out that in a pp-wave spacetime, all the polynomial scalar invariants of the Riemann tensor "vanish identically", yet the curvature is almost never zero. This is because in four dimensions all pp-waves belong to the class of VSI spacetimes. Such a statement does not hold in higher dimensions, since there are higher-dimensional pp-waves of algebraic type II with non-vanishing polynomial scalar invariants. If you view the Riemann tensor as a second-rank tensor acting on bivectors, the vanishing of invariants is analogous to the fact that a nonzero null vector has vanishing squared length. Penrose was also the first to understand the strange nature of causality in pp-sandwich wave spacetimes. He showed that some or all of the null geodesics emitted at a given event will be refocused at a later event (or string of events). The details depend upon whether the wave is purely gravitational, purely electromagnetic, or neither.
In the case of plane waves, these gauge transformations allow us always to regard two colliding plane waves as having "parallel wavefronts", and thus the waves can be said to "collide head-on". This is an exact result in fully nonlinear general relativity which is analogous to a similar result concerning electromagnetic plane waves as treated in special relativity. Examples. There are many noteworthy "explicit" examples of pp-waves. Explicit examples of "axisymmetric pp-waves" include Explicit examples of "plane wave spacetimes" include Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References.
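Two of the statements above, that the vacuum condition formula_7 is linear and that it is satisfied by the quadratic plane-wave profile, can be checked symbolically; a sympy sketch:

```python
import sympy as sp

u, x, y = sp.symbols('u x y')
a, b, c, d = (sp.Function(name) for name in 'abcd')

def vacuum_residual(H):
    """Left-hand side of the vacuum condition H_xx + H_yy = 0."""
    return sp.simplify(sp.diff(H, x, 2) + sp.diff(H, y, 2))

# Quadratic plane-wave profile H(u,x,y) = a(u)(x^2 - y^2) + 2 b(u) x y
H1 = a(u) * (x**2 - y**2) + 2 * b(u) * x * y
print(vacuum_residual(H1))        # 0: a vacuum (gravitational) plane wave

# Linearity: the sum of two vacuum profiles is again a vacuum profile,
# matching the superposition remark for pp-waves sharing the same wave vector.
H2 = c(u) * (x**2 - y**2) + 2 * d(u) * x * y
print(vacuum_residual(H1 + H2))   # 0
```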
[ { "math_id": 0, "text": " ds^2 = H(u,x,y) \\, du^2 + 2 \\, du \\, dv + dx^2 + dy^2" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "\\nabla k = 0." }, { "math_id": 4, "text": "k = \\partial_v" }, { "math_id": 5, "text": "v=v_0" }, { "math_id": 6, "text": "k_{a ;b} = 0" }, { "math_id": 7, "text": " H_{xx} + H_{yy} = 0" }, { "math_id": 8, "text": "\\Phi_{ij}" }, { "math_id": 9, "text": "\\Psi_i" }, { "math_id": 10, "text": " \\vec{\\ell} = \\partial_u - H/2 \\, \\partial_v" }, { "math_id": 11, "text": " \\vec{n} = \\partial_v" }, { "math_id": 12, "text": " \\vec{m} = \\frac{1}{\\sqrt2} \\, \\left( \\partial_x + i \\, \\partial_y\\right)" }, { "math_id": 13, "text": " \\Phi_{00} = \\frac{1}{4} \\, \\left( H_{xx} + H_{yy} \\right)" }, { "math_id": 14, "text": " \\Psi_0 = \\frac{1}{4} \\, \\left( \\left( H_{xx}-H_{yy} \\right) + 2i \\, H_{xy} \\right)." }, { "math_id": 15, "text": "x,y" }, { "math_id": 16, "text": "\\partial_v" }, { "math_id": 17, "text": "k=\\partial_v" }, { "math_id": 18, "text": "H(u,x,y)=a(u) \\, (x^2-y^2) + 2 \\, b(u) \\, xy + c(u) \\, (x^2+y^2)" }, { "math_id": 19, "text": "a,b,c" }, { "math_id": 20, "text": "u" }, { "math_id": 21, "text": "a,b" }, { "math_id": 22, "text": "c" }, { "math_id": 23, "text": "c = 0" }, { "math_id": 24, "text": "X" }, { "math_id": 25, "text": "X = \\partial_v" }, { "math_id": 26, "text": " X = \\frac{\\partial}{\\partial u}(p x + q y) \\, \\partial_v \n+ p \\, \\partial_x + q \\, \\partial_y " }, { "math_id": 27, "text": " \\ddot{p} = -a p + b q - c p " }, { "math_id": 28, "text": " \\ddot{q} = a q - b p - c q. " }, { "math_id": 29, "text": "u_1 < u < u_2" }, { "math_id": 30, "text": "H_1, H_2" }, { "math_id": 31, "text": "H_1 + H_2" }, { "math_id": 32, "text": "S^3" } ]
https://en.wikipedia.org/wiki?curid=1447921
1447929
Gravitational plane wave
In general relativity, a gravitational plane wave is a special class of vacuum pp-wave spacetime, and may be defined in terms of Brinkmann coordinates by formula_0 Here, formula_1 can be any smooth functions; they control the waveform of the two possible polarization modes of gravitational radiation. In this context, these two modes are usually called the plus mode and the cross mode, respectively. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
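A sympy sketch assembling the metric components of this line element in Brinkmann coordinates ("u", "v", "x", "y"); note that the determinant comes out as −1 regardless of the waveform functions:

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y')
a, b = sp.Function('a'), sp.Function('b')
H = a(u) * (x**2 - y**2) + 2 * b(u) * x * y   # the bracketed waveform above

# ds^2 = H du^2 + 2 du dv + dx^2 + dy^2, as a matrix in the order (u, v, x, y)
g = sp.Matrix([
    [H, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])
print(sp.simplify(g.det()))   # -1, independent of a(u) and b(u)
```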
[ { "math_id": 0, "text": "ds^2=[a(u)(x^2-y^2)+2b(u)xy]du^2+2dudv+dx^2+dy^2" }, { "math_id": 1, "text": "a(u), b(u)" } ]
https://en.wikipedia.org/wiki?curid=1447929
144793
Denying the antecedent
Logical fallacy Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of inferring the inverse from an original statement. It is a type of mixed hypothetical syllogism in the form: If "P", then "Q". Not "P". Therefore, not "Q". which may also be phrased as formula_0 (P implies Q) formula_1 (therefore, not-P implies not-Q) Arguments of this form are invalid. Informally, this means that arguments of this form do not give good reason to establish their conclusions, even if their premises are true. In this example, a valid conclusion would be: ~P or Q. The name "denying the antecedent" derives from the premise "not "P"", which denies the "if" clause (antecedent) of the conditional premise. One way to demonstrate the invalidity of this argument form is with an example that has true premises but an obviously false conclusion. For example: If you are a ski instructor, then you have a job. You are not a ski instructor. Therefore, you have no job. That argument is intentionally bad, but arguments of the same form can sometimes seem superficially convincing, as in the following example offered by Alan Turing in the article "Computing Machinery and Intelligence": If each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines. However, men could still be machines that do not follow a definite set of rules. Thus, this argument (as Turing intends) is invalid. It is possible that an argument that denies the antecedent could be valid if the argument instantiates some other valid form. For example, if the claims "P" and "Q" express the same proposition, then the argument would be trivially valid, as it would beg the question. In everyday discourse, however, such cases are rare, typically only occurring when the "if-then" premise is actually an "if and only if" claim (i.e., a biconditional/equality). The following argument is not valid, but would be valid if the first premise were "If I can veto Congress, then I am the US President", since it would then be an instance of "modus tollens": If I am President of the United States, then I can veto Congress. I am not President. Therefore, I cannot veto Congress. [As written, this is a case of the fallacy of denying the antecedent, because it matches the formal schema given at the beginning; the form is assessed without regard to the content of the claims.] References. &lt;templatestyles src="Reflist/styles.css" /&gt;
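A brute-force truth-table check (a sketch) makes the invalidity concrete: there is an assignment under which both premises are true while the conclusion is false:

```python
from itertools import product

def implies(p, q):
    """Material conditional p -> q."""
    return (not p) or q

counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not p)   # both premises hold: P -> Q, and not-P
    and q                          # ...yet the conclusion not-Q fails
]
print(counterexamples)   # [(False, True)] -- e.g. not a ski instructor, yet employed
```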
[ { "math_id": 0, "text": "P \\rightarrow Q" }, { "math_id": 1, "text": "\\therefore \\neg P \\rightarrow \\neg Q" } ]
https://en.wikipedia.org/wiki?curid=144793
14479711
Self-concordant function
A self-concordant function is a function satisfying a certain differential inequality, which makes it particularly easy to optimize using Newton's method.Sub.6.2.4.2 A self-concordant barrier is a particular self-concordant function that is also a barrier function for a particular convex set. Self-concordant barriers are important ingredients in interior-point methods for optimization. Self-concordant functions. Multivariate self-concordant function. Here is the general definition of a self-concordant function.Def.2.0.1 Let "C" be a convex nonempty open set in R"n". Let "f" be a three-times continuously differentiable function defined on "C". We say that "f" is self-concordant on "C" if it satisfies the following properties: 1. "Barrier property": on any sequence of points in "C" that converges to a boundary point of "C", "f" converges to ∞. 2. "Differential inequality": for every point x in "C", and any direction h in R"n", let "g"h be the function "f" restricted to the direction h, that is: "g"h("t") = "f"(x+t*h). Then the one-dimensional function "g"h should satisfy the following differential inequality: formula_0. Equivalently: formula_1 Univariate self-concordant function. A function formula_2 is self-concordant on formula_3 if: formula_4 Equivalently: wherever formula_5, it satisfies formula_6, and it satisfies formula_7 elsewhere. Examples. Some functions that are not self-concordant: Self-concordant barriers. Here is the general definition of a self-concordant barrier (SCB).Def.3.1.1 Let "C" be a convex closed set in R"n" with a non-empty interior. Let "f" be a function from interior("C") to R. Let "M"&gt;0 be a real parameter. We say that "f" is an "M"-self-concordant barrier for "C" if it satisfies the following: 1. "f" is a self-concordant function on interior("C"). 2. For every point x in interior("C"), and any direction h in R"n", let "g"h be the function "f" restricted to the direction h, that is: "g"h("t") = "f"(x+t*h). Then the one-dimensional function "g"h should satisfy the following differential inequality: formula_25. Constructing SCBs. Due to the importance of SCBs in interior-point methods, it is important to know how to construct SCBs for various domains. In theory, it can be proved that "every" closed convex domain in Rn has a self-concordant barrier with parameter O("n"). But this "universal barrier" is given by some multivariate integrals, and it is too complicated for actual computations. Hence, the main goal is to construct SCBs that are efficiently computable.Sec.9.2.3.3 SCBs can be constructed from some "basic SCBs", which are combined to produce SCBs for more complex domains, using several "combination rules". Basic SCBs. Every constant is a self-concordant barrier for all R"n", with parameter M=0. It is the only self-concordant barrier for the entire space, and the only self-concordant barrier with "M" &lt; 1.Example 3.1.1 [Note that linear and quadratic functions are self-concordant functions, but they are "not" self-concordant barriers]. For the positive half-line formula_26 (formula_10), formula_27 is a self-concordant barrier with parameter formula_28. This can be proved directly from the definition. Substitution rule. Let "G" be a closed convex domain in "Rn", and "g" an "M"-SCB for "G". Let "x" = "Ay"+"b" be an affine mapping from Rk to Rn with its image intersecting the interior of "G". Let "H" be the inverse image of "G" under the mapping: "H" = {"y" in R"k" | "Ay+b" in "G"}. Let "h" be the composite function "h"("y") := g("Ay"+"b").
Then, "h" is an "M"-SCB for "H".Prop.3.1.1 For example, take "n"=1, "G" the positive half-line, and formula_29. For any "k", let "a" be a "k"-element vector and "b" a scalar. Let "H" = {"y" in Rk | "a"T"y+b" ≥ 0} = a "k"-dimensional half-space. By the substitution rule, formula_30 is a 1-SCB for "H". A more common format is "H" = {"x" in R"k" | "aTx" ≤ b}, for which the SCB is formula_31. The substitution rule can be extended from affine mappings to a certain class of "appropriate" mappings,Thm.9.1.1 and to quadratic mappings.Sub.9.3 Cartesian product rule. For all "i" in 1...,"m", let "Gi" be a closed convex domains in "Rni", and let "gi" be an "M"i-SCB for "Gi". Let "G" be the cartesian product of all "Gi". Let "g(x"1"...,xm)" := sum"i gi"("xi"). Then, "g" is a SCB for "G", with parameter sum"i Mi".Prop.3.1.1 For example, take all "Gi" to be the positive half-line, so that "G" is the positive orthant formula_32. Let formula_33 is an "m"-SCB for "G." We can now apply the substitution rule. We get that, for the polytope defined by the linear inequalities "aj"T"x" ≤ "bj" for "j" in 1...,"m", if it satisfies Slater's condition, then formula_34 is an "m"-SCB. The linear functions formula_35 can be replaced by quadratic functions. Intersection rule. Let "G"1...,"Gm" be closed convex domains in "Rn". For each "i" in 1...,"m", let "gi" be an "M"i-SCB for "Gi", and "ri" a real number. Let "G" be the intersection of all "Gi", and suppose its interior is nonempty. Let "g" := sum"i ri*gi". Then, "g" is a SCB for "G", with parameter sum"i ri*Mi".Prop.3.1.1 Therefore, if "G" is defined by a list of constraints, we can find a SCB for each constraint separately, and then simply sum them to get a SCB for "G". For example, suppose the domain is defined by "m" linear constraints of the form "ajTx" ≤ "bj", for "j" in 1...,"m". Then we can use the Intersection rule to construct the "m"-SCB formula_34 (the same one that we previously computed using the Cartesian product rule). SCBs for epigraphs. The epigraph of a function "f"("x") is the area above the graph of the function, that is, formula_36. The epigraph of "f" is a convex set if and only if "f" is a convex function. The following theorems present some functions "f" for which the epigraph has an SCB. Let "g"("t") be a 3-times continuously-differentiable concave function on "t"&gt;0, such that formula_37 is bounded by a constant (denoted 3*"b") for all "t"&gt;0. Let "G" be the 2-dimensional convex domain: formula_38Then, the function "f"("x","t") = -ln(f(t)-x) - max[1,b2]*ln(t) is a self-concordant barrier for "G", with parameter (1+max[1,b2]).Prop.9.2.1 Examples: We can now construct a SCB for the problem of minimizing the "p"-norm: formula_43, where "vj" are constant scalars, "uj" are constant vectors, and "p"&gt;0 is a constant. We first convert it into minimization of a linear objective: formula_44, with the constraints: formula_45for all "j" in ["m"]. For each constraint, we have a 4-SCB by the affine substitution rule. Using the Intersection rule, we get a (4"n")-SCB for the entire feasible domain. Similarly, let "g" be a 3-times continuously-differentiable convex function on the ray "x"&gt;0, such that: formula_46 for all "x"&gt;0. Let "G" be the 2-dimensional convex domain: closure({ ("t,x") in R2: x&gt;0, "t" ≥ "g"("x") }). Then, the function "f"("x","t") = -ln(t-f(x)) - max[1,b2]*ln(x) is a self-concordant barrier for G, with parameter (1+max[1,b2]).Prop.9.2.2 Examples: History. 
As mentioned in the "Bibliography Comments" of their 1994 book, self-concordant functions were introduced in 1988 by Yurii Nesterov and further developed with Arkadi Nemirovski. As explained in their basic observation was that the Newton method is affine invariant, in the sense that if for a function formula_61 we have Newton steps formula_62 then for a function formula_63 where formula_64 is a non-degenerate linear transformation, starting from formula_65 we have the Newton steps formula_66 which can be shown recursively formula_67. However, the standard analysis of the Newton method supposes that the Hessian of formula_68 is Lipschitz continuous, that is formula_69 for some constant formula_70. If we suppose that formula_68 is 3 times continuously differentiable, then this is equivalent to formula_71for all formula_72 where formula_73 . Then the left hand side of the above inequality is invariant under the affine transformation formula_74, however the right hand side is not. The authors note that the right hand side can be made also invariant if we replace the Euclidean metric by the scalar product defined by the Hessian of formula_68 defined as formula_75 for formula_76. They then arrive at the definition of a self concordant function as formula_77. Properties. Linear combination. If formula_78 and formula_79 are self-concordant with constants formula_80 and formula_81 and formula_82, then formula_83 is self-concordant with constant formula_84. Affine transformation. If formula_68 is self-concordant with constant formula_70 and formula_85 is an affine transformation of formula_86, then formula_87 is also self-concordant with parameter formula_70. Convex conjugate. If formula_68 is self-concordant, then its convex conjugate formula_88 is also self-concordant. Non-singular Hessian. If formula_68 is self-concordant and the domain of formula_68 contains no straight line (infinite in both directions), then formula_89 is non-singular. Conversely, if for some formula_90 in the domain of formula_68 and formula_91 we have formula_92, then formula_93 for all formula_94 for which formula_95 is in the domain of formula_68 and then formula_96 is linear and cannot have a maximum so all of formula_97 is in the domain of formula_68. We note also that formula_68 cannot have a minimum inside its domain. Applications. Among other things, self-concordant functions are useful in the analysis of Newton's method. Self-concordant "barrier functions" are used to develop the barrier functions used in interior point methods for convex and nonlinear optimization. The usual analysis of the Newton method would not work for barrier functions as their second derivative cannot be Lipschitz continuous, otherwise they would be bounded on any compact subset of formula_86. Self-concordant barrier functions Minimizing a self-concordant function. A self-concordant function may be minimized with a modified Newton method where we have a bound on the number of steps required for convergence. We suppose here that formula_68 is a "standard" self-concordant function, that is it is self-concordant with parameter formula_56. We define the "Newton decrement" formula_98 of formula_68 at formula_90 as the size of the Newton step formula_99 in the local norm defined by the Hessian of formula_68 at formula_90 formula_100 Then for formula_90 in the domain of formula_68, if formula_101 then it is possible to prove that the Newton iterate formula_102 will be also in the domain of formula_68. 
Minimizing a self-concordant function. A self-concordant function may be minimized with a modified Newton method where we have a bound on the number of steps required for convergence. We suppose here that formula_68 is a "standard" self-concordant function, that is, it is self-concordant with parameter formula_56. We define the "Newton decrement" formula_98 of formula_68 at formula_90 as the size of the Newton step formula_99 in the local norm defined by the Hessian of formula_68 at formula_90: formula_100 Then for formula_90 in the domain of formula_68, if formula_101, then it is possible to prove that the Newton iterate formula_102 will also be in the domain of formula_68. This is because, based on the self-concordance of formula_68, it is possible to give finite bounds on the value of formula_103. We further have formula_104 Then if we have formula_105, then it is also guaranteed that formula_106, so that we can continue to use the Newton method until convergence. Note that for formula_107 for some formula_108 we have quadratic convergence of formula_109 to 0 as formula_110. This then gives quadratic convergence of formula_111 to formula_112 and of formula_90 to formula_113, where formula_114, by the following theorem. If formula_101, then formula_115 formula_116 with the following definitions: formula_117 formula_118 formula_119 If we start the Newton method from some formula_120 with formula_121, then we have to start by using a "damped Newton method" defined by formula_122 For this it can be shown that formula_123 with formula_124 as defined previously. Note that formula_125 is an increasing function for formula_126, so that formula_127 for any formula_128; hence the value of formula_68 is guaranteed to decrease by a certain amount in each iteration, which also proves that formula_129 is in the domain of formula_68.

References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|g_h'''(x)| \\leq 2 g_h''(x)^{3/2}" }, { "math_id": 1, "text": "\\left. \\frac{d}{d\\alpha} \\nabla^2 f(x + \\alpha y) \\right|_{\\alpha = 0} \\preceq 2 \\sqrt{y^T \\nabla^2 f(x)\\,y} \\, \\nabla^2 f(x)" }, { "math_id": 2, "text": "f:\\mathbb{R} \\rightarrow \\mathbb{R}" }, { "math_id": 3, "text": "\\mathbb{R}" }, { "math_id": 4, "text": "|f'''(x)| \\leq 2 f''(x)^{3/2}" }, { "math_id": 5, "text": "f''(x) > 0" }, { "math_id": 6, "text": "\\left| \\frac{d}{dx} \\frac{1}{\\sqrt{f''(x)}} \\right| \\leq 1" }, { "math_id": 7, "text": "f'''(x) = 0" }, { "math_id": 8, "text": "f(x) = -\\log(-g(x))-\\log x" }, { "math_id": 9, "text": "g(x)" }, { "math_id": 10, "text": "x > 0" }, { "math_id": 11, "text": "| g'''(x) | \\leq 3g''(x)/x" }, { "math_id": 12, "text": "\\{ x \\mid x > 0, g(x) < 0 \\}" }, { "math_id": 13, "text": "g(x) = -x^p" }, { "math_id": 14, "text": "0 < p \\leq 1" }, { "math_id": 15, "text": "g(x) = -\\log x" }, { "math_id": 16, "text": "g(x) = x^p" }, { "math_id": 17, "text": "-1 \\leq p \\leq 0" }, { "math_id": 18, "text": "g(x) = (ax+b)^2 / x" }, { "math_id": 19, "text": "g" }, { "math_id": 20, "text": "g(x) + a x^2 + bx + c" }, { "math_id": 21, "text": "a \\geq 0" }, { "math_id": 22, "text": "f(x) = e^x" }, { "math_id": 23, "text": "f(x) = \\frac{1}{x^p}, x >0, p >0" }, { "math_id": 24, "text": "f(x) = |x^p|, p > 2" }, { "math_id": 25, "text": "|g_h'(x)| \\leq M^{1/2}\\cdot g_h''(x)^{1/2}" }, { "math_id": 26, "text": "\\mathbb R_+" }, { "math_id": 27, "text": "f(x) = -\\ln x" }, { "math_id": 28, "text": "M = 1" }, { "math_id": 29, "text": "g(x) = -\\ln x" }, { "math_id": 30, "text": "h(y) = -\\ln (a^T y+b)" }, { "math_id": 31, "text": "h(y) = -\\ln (b - a^T y)" }, { "math_id": 32, "text": "\\mathbb R_+^m" }, { "math_id": 33, "text": "g(x) = -\\sum_{i=1}^m \\ln x_i" }, { "math_id": 34, "text": "f(x) = -\\sum_{i=1}^m \\ln (b_j-a_j^T x)" }, { "math_id": 35, "text": "b_j-a_j^T x" }, { "math_id": 36, "text": "\\{ (x,t) \\in \\mathbb{R}^2: t\\geq f(x) \\}\n" }, { "math_id": 37, "text": "t\\cdot | g'''(t)| / |g''(t)| " }, { "math_id": 38, "text": "G=\\text{closure}(\\{ (x,t) \\in \\mathbb{R}^2: t>0, x \\leq g(t) \\}).\n" }, { "math_id": 39, "text": "G_1=\\{ (x,t) \\in \\mathbb{R}^2: (x_+)^p \\leq t \\}\n" }, { "math_id": 40, "text": "G_2=\\{ (x,t) \\in \\mathbb{R}^2: ([-x]_+)^p \\leq t \\}\n" }, { "math_id": 41, "text": "G = G_1\\cap G_2= \\{ (x,t) \\in \\mathbb{R}^2: |x|^p \\leq t \\}\n" }, { "math_id": 42, "text": "G=\\{ (x,t) \\in \\mathbb{R}^2: e^x \\leq t \\}\n" }, { "math_id": 43, "text": "\\min_x \\sum_{j=1}^n |v_j - x^T u_j|^p\n" }, { "math_id": 44, "text": "\\min_x \\sum_{j=1}^n t_j\n" }, { "math_id": 45, "text": "t_j \\geq |v_j - x^T u_j|^p \n" }, { "math_id": 46, "text": "x\\cdot |g'''(x)| / |g''(x)| \\leq 3 b " }, { "math_id": 47, "text": "G_1=\\{ (x,t) \\in \\mathbb{R}^2: x^{-p} \\leq t, x\\geq 0 \\}\n" }, { "math_id": 48, "text": "G=\\{ (x,t) \\in \\mathbb{R}^2: x\\ln x \\leq t, x\\geq 0 \\}\n" }, { "math_id": 49, "text": "\\{ (x,y) \\in \\mathbb R^{n-1} \\times \\mathbb R \\mid \\| x \\| \\leq y \\}" }, { "math_id": 50, "text": "f(x,y) = -\\log(y^2 - x^T x)" }, { "math_id": 51, "text": "f(A) = - \\log \\det A" }, { "math_id": 52, "text": "\\phi(x) > 0" }, { "math_id": 53, "text": "\\phi(x) = \\alpha +\\langle a, x \\rangle - \\frac{1}{2} \\langle Ax, x \\rangle" }, { "math_id": 54, "text": "A = A^T \\geq 0" }, { "math_id": 55, "text": "f(x) = -\\log \\phi(x)" }, { "math_id": 56, "text": "M = 2" }, { "math_id": 57, "text": "\\{ (x,y,z) \\in 
\\mathbb R^3 \\mid ye^{x/y} \\leq z, y > 0 \\}" }, { "math_id": 58, "text": "f(x,y,z) = -\\log (y \\log(z/y) - x) - \\log z - \\log y" }, { "math_id": 59, "text": "\\{ (x_1,x_2,y) \\in \\mathbb R_+^2 \\times \\mathbb R \\mid |y| \\leq x_1^{\\alpha} x_2^{1-\\alpha} \\}" }, { "math_id": 60, "text": "f(x_1,x_2,y) = -\\log(x_1^{2\\alpha} x_2^{2(1-\\alpha)} - y^2) - \\log x_1 - \\log x_2" }, { "math_id": 61, "text": "f(x)" }, { "math_id": 62, "text": "x_{k+1} = x_k - [f''(x_k)]^{-1}f'(x_k)" }, { "math_id": 63, "text": "\\phi(y) = f(Ay)" }, { "math_id": 64, "text": "A" }, { "math_id": 65, "text": "y_0 = A^{-1} x_0" }, { "math_id": 66, "text": "y_k = A^{-1} x_k" }, { "math_id": 67, "text": "y_{k+1} = y_k - [\\phi''(y_k)]^{-1} \\phi'(y_k) = y_k - [A^T f''(A y_k) A]^{-1} A^T f'(A y_k) = A^{-1} x_k - A^{-1}[f''(x_k)]^{-1} f'(x_k) = A^{-1} x_{k+1}" }, { "math_id": 68, "text": "f" }, { "math_id": 69, "text": "\\|f''(x) - f''(y)\\| \\leq M\\| x-y \\|" }, { "math_id": 70, "text": "M" }, { "math_id": 71, "text": "| \\langle f'''(x)[u]v, v \\rangle | \\leq M \\|u\\| \\|v\\|^2" }, { "math_id": 72, "text": "u,v \\in \\mathbb{R}^n" }, { "math_id": 73, "text": "f'''(x)[u] = \\lim_{\\alpha \\to 0} \\alpha^{-1} [f''(x + \\alpha u) - f''(x)]" }, { "math_id": 74, "text": "f(x) \\to \\phi(y) = f(A y), u \\to A^{-1} u, v \\to A^{-1} v" }, { "math_id": 75, "text": "\\| w \\|_{f''(x)} = \\langle f''(x)w, w \\rangle^{1/2}" }, { "math_id": 76, "text": "w \\in \\mathbb R^n" }, { "math_id": 77, "text": "| \\langle f'''(x)[u]u, u \\rangle | \\leq M \\langle f''(x) u, u \\rangle^{3/2}" }, { "math_id": 78, "text": "f_1" }, { "math_id": 79, "text": "f_2" }, { "math_id": 80, "text": "M_1" }, { "math_id": 81, "text": "M_2" }, { "math_id": 82, "text": "\\alpha,\\beta>0" }, { "math_id": 83, "text": "\\alpha f_1 + \\beta f_2" }, { "math_id": 84, "text": "\\max(\\alpha^{-1/2} M_1, \\beta^{-1/2} M_2)" }, { "math_id": 85, "text": "Ax + b" }, { "math_id": 86, "text": "\\mathbb R^n" }, { "math_id": 87, "text": "\\phi(x) = f(Ax+b)" }, { "math_id": 88, "text": "f^*" }, { "math_id": 89, "text": "f''" }, { "math_id": 90, "text": "x" }, { "math_id": 91, "text": "u \\in \\mathbb R^n, u \\neq 0" }, { "math_id": 92, "text": "\\langle f''(x) u, u \\rangle = 0" }, { "math_id": 93, "text": "\\langle f''(x + \\alpha u) u, u \\rangle = 0" }, { "math_id": 94, "text": "\\alpha" }, { "math_id": 95, "text": "x + \\alpha u" }, { "math_id": 96, "text": "f(x + \\alpha u)" }, { "math_id": 97, "text": "x + \\alpha u, \\alpha \\in \\mathbb R" }, { "math_id": 98, "text": "\\lambda_f(x)" }, { "math_id": 99, "text": "[f''(x)]^{-1} f'(x)" }, { "math_id": 100, "text": "\\lambda_f(x) = \\langle f''(x) [f''(x)]^{-1} f'(x), [f''(x)]^{-1} f'(x) \\rangle^{1/2} = \\langle [f''(x)]^{-1} f'(x), f'(x) \\rangle^{1/2}" }, { "math_id": 101, "text": "\\lambda_f(x) < 1" }, { "math_id": 102, "text": "x_+ = x - [f''(x)]^{-1}f'(x)" }, { "math_id": 103, "text": "f(x_+)" }, { "math_id": 104, "text": "\\lambda_f(x_+) \\leq \\Bigg( \\frac{\\lambda_f(x)}{1-\\lambda_f(x)} \\Bigg)^2" }, { "math_id": 105, "text": "\\lambda_f(x) < \\bar\\lambda = \\frac{3-\\sqrt 5}{2}" }, { "math_id": 106, "text": "\\lambda_f(x_+) < \\lambda_f(x)" }, { "math_id": 107, "text": "\\lambda_f(x_+) < \\beta" }, { "math_id": 108, "text": "\\beta \\in (0, \\bar\\lambda)" }, { "math_id": 109, "text": "\\lambda_f" }, { "math_id": 110, "text": "\\lambda_f(x_+) \\leq (1-\\beta)^{-2} \\lambda_f(x)^2" }, { "math_id": 111, "text": "f(x_k)" }, { "math_id": 112, "text": "f(x^*)" }, { "math_id": 113, "text": "x^*" }, { 
"math_id": 114, "text": "x^* = \\arg\\min f(x)" }, { "math_id": 115, "text": "\\omega(\\lambda_f(x)) \\leq f(x)-f(x^*) \\leq \\omega_*(\\lambda_f(x))" }, { "math_id": 116, "text": "\\omega'(\\lambda_f(x)) \\leq \\| x-x^* \\|_x \\leq \\omega_*'(\\lambda_f(x))" }, { "math_id": 117, "text": "\\omega(t) = t - \\log(1+t)" }, { "math_id": 118, "text": "\\omega_*(t) = -t-\\log(1-t)" }, { "math_id": 119, "text": "\\| u \\|_x = \\langle f''(x) u, u \\rangle^{1/2} " }, { "math_id": 120, "text": "x_0" }, { "math_id": 121, "text": "\\lambda_f(x_0) \\geq \\bar\\lambda" }, { "math_id": 122, "text": "x_{k+1} = x_k - \\frac{1}{1+\\lambda_f(x_k)}[f''(x_k)]^{-1}f'(x_k)" }, { "math_id": 123, "text": "f(x_{k+1}) \\leq f(x_k) - \\omega(\\lambda_f(x_k))" }, { "math_id": 124, "text": "\\omega " }, { "math_id": 125, "text": "\\omega(t)" }, { "math_id": 126, "text": "t > 0" }, { "math_id": 127, "text": "\\omega(t) \\geq \\omega(\\bar\\lambda)" }, { "math_id": 128, "text": "t \\geq \\bar\\lambda" }, { "math_id": 129, "text": "x_{k+1}" } ]
https://en.wikipedia.org/wiki?curid=14479711
14479902
Monogenic system
Type of system in classical mechanics In classical mechanics, a physical system is termed a monogenic system if the force acting on the system can be modelled in a particular, especially convenient mathematical form. The systems that are typically studied in physics are monogenic. The term was introduced by Cornelius Lanczos in his book "The Variational Principles of Mechanics" (1970). In Lagrangian mechanics, the property of being monogenic is a necessary condition for certain different formulations to be mathematically equivalent. If a physical system is both a holonomic system and a monogenic system, then it is possible to derive Lagrange's equations from d'Alembert's principle; it is also possible to derive Lagrange's equations from Hamilton's principle. Mathematical definition. In a physical system, if all forces, with the exception of the constraint forces, are derivable from a generalized scalar potential, and this generalized scalar potential is a function of generalized coordinates, generalized velocities, or time, then the system is a monogenic system. Expressed using equations, the exact relationship between the generalized force formula_0 and the generalized potential formula_1 is as follows: formula_2 where formula_3 is a generalized coordinate, formula_4 is a generalized velocity, and formula_5 is time. If the generalized potential in a monogenic system depends only on generalized coordinates, and not on generalized velocities and time, then the system is a conservative system. The relationship between the generalized force and the generalized potential is then as follows: formula_6. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
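For illustration, the generalized force can be evaluated symbolically. The following sketch (assuming Python with SymPy; the velocity-dependent potential and the constants k and c are an invented example, not taken from any source) applies the formula above; note that in this particular case the total-derivative term cancels the velocity-dependent contribution.

```python
import sympy as sp

# Hedged sketch: F = -dV/dq + d/dt(dV/dq') for the example potential
# V(q, q') = k*q**2/2 - c*q*q'  (k, c are made-up constants).
t = sp.symbols('t')
k, c = sp.symbols('k c', positive=True)
q = sp.Function('q')(t)
qd = q.diff(t)

V = k * q**2 / 2 - c * q * qd
F = -sp.diff(V, q) + sp.diff(sp.diff(V, qd), t)
print(sp.simplify(F))   # -k*q(t): the -c*q*q' term contributes nothing net
```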
[ { "math_id": 0, "text": "\\mathcal{F}_i" }, { "math_id": 1, "text": "\\mathcal{V}(q_1,\\ q_2,\\ \\dots,\\ q_N,\\ \\dot{q}_1,\\ \\dot{q}_2,\\ \\dots,\\ \\dot{q}_N,\\ t)" }, { "math_id": 2, "text": "\\mathcal{F}_i= - \\frac{\\partial \\mathcal{V}}{\\partial q_i}+\\frac{d}{dt}\\left(\\frac{\\partial \\mathcal{V}}{\\partial \\dot{q_i}}\\right);" }, { "math_id": 3, "text": "q_i" }, { "math_id": 4, "text": "\\dot{q_i} " }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "\\mathcal{F}_i= - \\frac{\\partial \\mathcal{V}}{\\partial q_i}" } ]
https://en.wikipedia.org/wiki?curid=14479902
14481648
Binder parameter
Kurtosis of the order parameter in statistical physics The Binder parameter or Binder cumulant in statistical physics, also known as the fourth-order cumulant formula_0, is defined as the kurtosis of the order parameter, "s". It was introduced by the Austrian theoretical physicist Kurt Binder. It is frequently used to determine phase transition points accurately in numerical simulations of various models. The phase transition point is usually identified by comparing the behavior of formula_1 as a function of the temperature for different values of the system size formula_2. The transition temperature is the unique point where the different curves cross in the thermodynamic limit. This behavior is based on the fact that in the critical region, formula_3, the Binder parameter behaves as formula_4, where formula_5. Accordingly, the cumulant may also be used to identify the universality class of the transition by determining the value of the critical exponent formula_6 of the correlation length. In the thermodynamic limit, at the critical point, the value of the Binder parameter depends on boundary conditions, the shape of the system, and the anisotropy of correlations.
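As an illustration, the cumulant can be estimated directly from samples of the order parameter. The following sketch uses synthetic data (assuming Python with NumPy; it does not reproduce any particular simulation): Gaussian fluctuations, as in the disordered phase, give formula_1 close to 0, while a sharply two-peaked ordered-phase distribution gives exactly 2/3.

```python
import numpy as np

# Hedged sketch with synthetic data: U_L = 1 - <s^4> / (3 <s^2>^2).
rng = np.random.default_rng(0)

s_hot = rng.normal(0.0, 1.0, 10**6)        # Gaussian fluctuations (T >> Tc)
s_cold = rng.choice([-1.0, 1.0], 10**6)    # two-delta distribution (T << Tc)

def binder(s):
    return 1.0 - np.mean(s**4) / (3.0 * np.mean(s**2)**2)

print(binder(s_hot))    # close to 0, since <s^4> = 3 <s^2>^2 for a Gaussian
print(binder(s_cold))   # exactly 2/3 for the two-delta distribution
```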
[ { "math_id": 0, "text": "U_L=1-\\frac{{\\langle s^4\\rangle}_L}{3{\\langle s^2\\rangle}^2_L}" }, { "math_id": 1, "text": "U" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "T\\approx T_c" }, { "math_id": 4, "text": "U(T,L)=b(\\epsilon L^{1/\\nu})" }, { "math_id": 5, "text": "\\epsilon=\\frac{T-T_c}{T}" }, { "math_id": 6, "text": "\\nu" } ]
https://en.wikipedia.org/wiki?curid=14481648
144823
Special linear group
Group of matrices with determinant 1 In mathematics, the special linear group SL("n", "R") of degree "n" over a commutative ring "R" is the set of "n" × "n" matrices with determinant 1, with the group operations of ordinary matrix multiplication and matrix inversion. This is the normal subgroup of the general linear group given by the kernel of the determinant formula_0 where "R"× is the multiplicative group of "R" (that is, "R" excluding 0 when "R" is a field). These elements are "special" in that they form an algebraic subvariety of the general linear group – they satisfy a polynomial equation (since the determinant is polynomial in the entries). When "R" is the finite field of order "q", the notation SL("n", "q") is sometimes used. Geometric interpretation. The special linear group SL("n", R) can be characterized as the group of "volume and orientation preserving" linear transformations of R"n"; this corresponds to the interpretation of the determinant as measuring change in volume and orientation. Lie subgroup. When "F" is R or C, SL("n", "F") is a Lie subgroup of GL("n", "F") of dimension "n"2 − 1. The Lie algebra formula_1 of SL("n", "F") consists of all "n" × "n" matrices over "F" with vanishing trace. The Lie bracket is given by the commutator. Topology. Any invertible matrix can be uniquely represented according to the polar decomposition as the product of a unitary matrix and a hermitian matrix with positive eigenvalues. The determinant of the unitary matrix is on the unit circle while that of the hermitian matrix is real and positive; since, for a matrix from the special linear group, the product of these two determinants must be 1, each of them must be 1. Therefore, a special linear matrix can be written as the product of a special unitary matrix (or special orthogonal matrix in the real case) and a positive definite hermitian matrix (or symmetric matrix in the real case) having determinant 1. Thus the topology of the group SL("n", C) is the product of the topology of SU("n") and the topology of the group of hermitian matrices of unit determinant with positive eigenvalues. A hermitian matrix of unit determinant and having positive eigenvalues can be uniquely expressed as the exponential of a traceless hermitian matrix, and therefore the topology of this is that of ("n"2 − 1)-dimensional Euclidean space. Since SU("n") is simply connected, we conclude that SL("n", C) is also simply connected, for all "n" greater than or equal to 2. The topology of SL("n", R) is the product of the topology of SO("n") and the topology of the group of symmetric matrices with positive eigenvalues and unit determinant. Since the latter matrices can be uniquely expressed as the exponential of symmetric traceless matrices, then this latter topology is that of ("n" + 2)("n" − 1)/2-dimensional Euclidean space. Thus, the group SL("n", R) has the same fundamental group as SO("n"), that is, Z for "n" = 2 and Z2 for "n" &gt; 2. In particular this means that SL("n", R), unlike SL("n", C), is not simply connected, for "n" greater than 1.
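The decomposition used in the topological argument above can be checked numerically. The following sketch (matrix chosen arbitrarily for the example; Python with NumPy and SciPy assumed) computes the polar decomposition of a matrix in SL(2, R) and confirms that both factors have determinant 1.

```python
import numpy as np
from scipy.linalg import polar

# Hedged sketch: polar decomposition A = U P of a matrix with det A = 1.
# U is then special orthogonal and P symmetric positive definite with
# determinant 1, as used in the topology section above.
A = np.array([[2.0, 1.0],
              [3.0, 2.0]])          # det = 2*2 - 1*3 = 1, so A is in SL(2, R)
U, P = polar(A)                     # A = U @ P
assert np.allclose(A, U @ P)
assert np.allclose(np.linalg.det(U), 1.0)   # U lies in SO(2)
assert np.allclose(np.linalg.det(P), 1.0)   # P is symmetric positive definite
print(np.linalg.eigvalsh(P))        # positive eigenvalues multiplying to 1
```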
Relations to other subgroups of GL("n", "A"). Two related subgroups, which in some cases coincide with SL, and in other cases are accidentally conflated with SL, are the commutator subgroup of GL, and the group generated by transvections. These are both subgroups of SL (transvections have determinant 1, and det is a map to an abelian group, so [GL, GL] ≤ SL), but in general they do not coincide with it.

The group generated by transvections is denoted E("n", "A") (for elementary matrices) or TV("n", "A"). By the second Steinberg relation, for "n" ≥ 3, transvections are commutators, so for "n" ≥ 3, E("n", "A") ≤ [GL("n", "A"), GL("n", "A")]. For "n" = 2, transvections need not be commutators (of 2 × 2 matrices), as seen for example when "A" is F2, the field of two elements; then formula_2 where Alt(3) and Sym(3) denote the alternating resp. symmetric group on 3 letters. However, if "A" is a field with more than 2 elements, then E(2, "A") = [GL(2, "A"), GL(2, "A")], and if "A" is a field with more than 3 elements, E(2, "A") = [SL(2, "A"), SL(2, "A")]. In some circumstances these coincide: the special linear group over a field or a Euclidean domain is generated by transvections, and the "stable" special linear group over a Dedekind domain is generated by transvections. For more general rings the stable difference is measured by the special Whitehead group SK1("A") := SL("A")/E("A"), where SL("A") and E("A") are the stable groups of the special linear group and elementary matrices.

Generators and relations. If working over a ring where SL is generated by transvections (such as a field or a Euclidean domain), one can give a presentation of SL using transvections with some relations. Transvections satisfy the Steinberg relations, but these are not sufficient: the resulting group is the Steinberg group, which is not the special linear group, but rather the universal central extension of the commutator subgroup of GL. A sufficient set of relations for SL("n", Z) for "n" ≥ 3 is given by two of the Steinberg relations, plus a third relation. Let "Tij" := "eij"(1) be the elementary matrix with 1's on the diagonal and in the "ij" position, and 0's elsewhere (and "i" ≠ "j"). Then formula_3 are a complete set of relations for SL("n", Z), "n" ≥ 3.

SL±("n","F"). In characteristic other than 2, the set of matrices with determinant ±1 forms another subgroup of GL, with SL as an index 2 subgroup (necessarily normal); in characteristic 2 this is the same as SL. This forms a short exact sequence of groups: formula_4 This sequence splits by taking any matrix with determinant −1, for example the diagonal matrix formula_5 If formula_6 is odd, the negative identity matrix formula_7 is in SL±("n","F") but not in SL("n","F"), and thus the group splits as an internal direct product formula_8. However, if formula_9 is even, formula_7 is already in SL("n","F"), so SL± does not split in this way, and in general it is a non-trivial group extension. Over the real numbers, SL±("n", "R") has two connected components, corresponding to SL("n", "R") and another component, which are isomorphic with identification depending on a choice of point (matrix with determinant −1). In odd dimension these are naturally identified by formula_7, but in even dimension there is no one natural identification.

Structure of GL("n","F"). The group GL("n", "F") splits over its determinant (we use "F"× ≅ GL(1, "F") → GL("n", "F") as the monomorphism from "F"× to GL("n", "F"), see semidirect product), and therefore GL("n", "F") can be written as a semidirect product of SL("n", "F") by "F"×: GL("n", "F") = SL("n", "F") ⋊ "F"×.

References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\det\\colon \\operatorname{GL}(n, R) \\to R^\\times." }, { "math_id": 1, "text": "\\mathfrak{sl}(n, F)" }, { "math_id": 2, "text": "\\operatorname{Alt}(3) \\cong [\\operatorname{GL}(2, \\mathbf{F}_2),\\operatorname{GL}(2, \\mathbf{F}_2)] < \\operatorname{E}(2, \\mathbf{F}_2) = \\operatorname{SL}(2, \\mathbf{F}_2) = \\operatorname{GL}(2, \\mathbf{F}_2) \\cong \\operatorname{Sym}(3)," }, { "math_id": 3, "text": "\\begin{align}\n \\left[ T_{ij},T_{jk} \\right] &= T_{ik} && \\text{for } i \\neq k \\\\[4pt]\n \\left[ T_{ij},T_{k\\ell} \\right] &= \\mathbf{1} && \\text{for } i \\neq \\ell, j \\neq k \\\\[4pt]\n \\left(T_{12}T_{21}^{-1}T_{12}\\right)^4 &= \\mathbf{1}\n\\end{align}" }, { "math_id": 4, "text": "1\\to\\operatorname{SL}(n, F) \\to \\operatorname{SL}^{\\pm}(n, F) \\to \\{\\pm 1\\}\\to1." }, { "math_id": 5, "text": "(-1, 1, \\dots, 1)." }, { "math_id": 6, "text": "n = 2k + 1" }, { "math_id": 7, "text": "-I" }, { "math_id": 8, "text": "\\operatorname{SL}^\\pm(2k + 1, F) \\cong \\operatorname{SL}(2k + 1, F) \\times \\{\\pm I\\}" }, { "math_id": 9, "text": "n = 2k" } ]
https://en.wikipedia.org/wiki?curid=144823
14483315
Bonnor–Ebert mass
In astrophysics, the Bonnor–Ebert mass is the largest mass that an isothermal gas sphere embedded in a pressurized medium can have while still remaining in hydrostatic equilibrium. Clouds of gas with masses greater than the Bonnor–Ebert mass must inevitably undergo gravitational collapse to form much smaller and denser objects. As the gravitational collapse of an interstellar gas cloud is the first stage in the formation of a protostar, the Bonnor–Ebert mass is an important quantity in the study of star formation. For a gas cloud embedded in a medium with a gas pressure formula_0, the Bonnor–Ebert mass is given by formula_1 where G is the gravitational constant and formula_2 is the isothermal sound speed (i.e., the sound speed with formula_3), with formula_4 the mean molecular weight. formula_5 is a dimensionless constant that depends on the density distribution of the cloud: for a uniform mass density formula_6, while for a centrally peaked density formula_7. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
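For a sense of scale, the formula can be evaluated numerically. In the following sketch (Python with NumPy assumed; the physical constants are standard CGS values, while the temperature, mean molecular weight and external pressure are example inputs chosen here, not values from the article), typical cold molecular-cloud conditions give a Bonnor–Ebert mass of a few solar masses.

```python
import numpy as np

# Hedged sketch: M_BE = (225 / (32 sqrt(5 pi))) * c_s^4 / (a G)^{3/2} / sqrt(p0)
G     = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
k_B   = 1.381e-16         # Boltzmann constant [erg K^-1]
m_H   = 1.673e-24         # hydrogen mass [g]
M_sun = 1.989e33          # solar mass [g]

T, mu, a = 10.0, 2.33, 1.67        # example: 10 K molecular gas, peaked density
p0 = 1.0e-12                       # example external pressure [dyn cm^-2]

c_s = np.sqrt(k_B * T / (mu * m_H))                      # isothermal sound speed
M_BE = 225.0 / (32.0 * np.sqrt(5.0 * np.pi)) \
       * c_s**4 / (a * G)**1.5 / np.sqrt(p0)
print(M_BE / M_sun, "solar masses")   # on the order of a few solar masses
```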
[ { "math_id": 0, "text": "p_{0}" }, { "math_id": 1, "text": "M_{BE} (p_0)={225\\over {32 \\sqrt{5 \\pi}}}{c_s^4\\over {(aG)}^{3 / 2}} {1\\over \\sqrt{p_0}} " }, { "math_id": 2, "text": "c_s \\equiv \\sqrt{kT/{\\mu m_H}}" }, { "math_id": 3, "text": "\\gamma = 1" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "a=1" }, { "math_id": 7, "text": "a\\approx 1.67" } ]
https://en.wikipedia.org/wiki?curid=14483315
14484071
Logarithmic norm
In mathematics, the logarithmic norm is a real-valued functional on operators, and is derived from either an inner product, a vector norm, or its induced operator norm. The logarithmic norm was independently introduced by Germund Dahlquist and Sergei Lozinskiĭ in 1958, for square matrices. It has since been extended to nonlinear operators and unbounded operators as well. The logarithmic norm has a wide range of applications, in particular in matrix theory, differential equations and numerical analysis. In the finite-dimensional setting, it is also referred to as the matrix measure or the Lozinskiĭ measure.

Original definition. Let formula_0 be a square matrix and formula_1 be an induced matrix norm. The associated logarithmic norm formula_2 of formula_0 is defined as formula_3 Here formula_4 is the identity matrix of the same dimension as formula_0, and formula_5 is a real, positive number. The limit as formula_6 equals formula_7, and is in general different from the logarithmic norm formula_8, as formula_9 for all matrices. The matrix norm formula_10 is always positive if formula_11, but the logarithmic norm formula_8 may also take negative values, e.g. when formula_0 is negative definite. Therefore, the logarithmic norm does not satisfy the axioms of a norm. The name "logarithmic norm," which does not appear in the original reference, seems to originate from estimating the logarithm of the norm of solutions to the differential equation formula_12 The maximal growth rate of formula_13 is formula_8. This is expressed by the differential inequality formula_14 where formula_15 is the upper right Dini derivative. Using logarithmic differentiation, the differential inequality can also be written formula_16 showing its direct relation to Grönwall's lemma. In fact, it can be shown that the norm of the state transition matrix formula_17 associated to the differential equation formula_18 is bounded by formula_19 for all formula_20.

Alternative definitions. If the vector norm is an inner product norm, as in a Hilbert space, then the logarithmic norm is the smallest number formula_8 such that for all formula_21 formula_22 Unlike the original definition, the latter expression also allows formula_0 to be unbounded. Thus differential operators too can have logarithmic norms, allowing the use of the logarithmic norm both in algebra and in analysis. The modern, extended theory therefore prefers a definition based on inner products or duality. Both the operator norm and the logarithmic norm are then associated with extremal values of quadratic forms as follows: formula_23

Properties. Basic properties of the logarithmic norm of a matrix include: (1) formula_24; (2) formula_25; (3) formula_26 for formula_27; (4) formula_28; (5) formula_29; (6) formula_30, where formula_31 is the spectral abscissa (the maximal real part of the eigenvalues) of formula_0; (7) formula_32 for formula_33; (8) formula_34.

Example logarithmic norms. The logarithmic norm of a matrix can be calculated as follows for the three most common norms. In these formulas, formula_35 represents the element on the formula_36th row and formula_37th column of a matrix formula_0. For the matrix norm induced by the 1-norm, formula_38; for the norm induced by the Euclidean 2-norm, formula_39; and for the norm induced by the ∞-norm, formula_40.
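The three formulas are straightforward to evaluate. The following sketch (example matrix chosen here; Python with NumPy assumed) computes formula_38, formula_39 and formula_40 for a small real matrix and checks properties (2) and (6) numerically.

```python
import numpy as np

# Hedged sketch: the logarithmic norms induced by the 1-, 2- and inf-norms,
# plus the check alpha(A) <= mu_2(A) <= ||A||_2.  For a real matrix,
# Re(a_jj) is just a_jj.
A = np.array([[-3.0, 1.0],
              [ 2.0, -4.0]])
n = A.shape[0]

mu_1   = max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j)
             for j in range(n))
mu_2   = np.linalg.eigvalsh((A + A.T) / 2.0).max()
mu_inf = max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i)
             for i in range(n))

alpha = np.linalg.eigvals(A).real.max()       # spectral abscissa
assert alpha <= mu_2 <= np.linalg.norm(A, 2) + 1e-12
print(mu_1, mu_2, mu_inf)   # all negative here: x' = Ax is contractive
```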
Applications in matrix theory and spectral theory. The logarithmic norm is related to the extreme values of the Rayleigh quotient. It holds that formula_41 and both extreme values are taken for some vectors formula_42. This also means that every eigenvalue formula_43 of formula_0 satisfies formula_44. More generally, the logarithmic norm is related to the numerical range of a matrix. A matrix with formula_45 is positive definite, and one with formula_46 is negative definite. Such matrices have inverses. The inverse of a negative definite matrix is bounded by formula_47 Both the bounds on the inverse and on the eigenvalues hold irrespective of the choice of vector (matrix) norm. Some results only hold for inner product norms, however. For example, if formula_48 is a rational function with the property formula_49 then, for inner product norms, formula_50 Thus the matrix norm and logarithmic norms may be viewed as generalizing the modulus and real part, respectively, from complex numbers to matrices.

Applications in stability theory and numerical analysis. The logarithmic norm plays an important role in the stability analysis of a continuous dynamical system formula_51. Its role is analogous to that of the matrix norm for a discrete dynamical system formula_52. In the simplest case, when formula_0 is a scalar complex constant formula_53, the discrete dynamical system has stable solutions when formula_54, while the differential equation has stable solutions when formula_55. When formula_0 is a matrix, the discrete system has stable solutions if formula_56. In the continuous system, the solutions are of the form formula_57. They are stable if formula_58 for all formula_59, which follows from property (7) above if formula_60. In the latter case, formula_61 is a Lyapunov function for the system. Runge–Kutta methods for the numerical solution of formula_51 replace the differential equation by a discrete equation formula_62, where the rational function formula_48 is characteristic of the method, and formula_5 is the time step size. If formula_63 whenever formula_64, then a stable differential equation, having formula_60, will always result in a stable (contractive) numerical method, as formula_65. Runge–Kutta methods having this property are called A-stable. Retaining the same form, the results can, under additional assumptions, be extended to nonlinear systems as well as to semigroup theory, where the crucial advantage of the logarithmic norm is that it discriminates between forward and reverse time evolution and can establish whether the problem is well posed. Similar results also apply in the stability analysis in control theory, where there is a need to discriminate between positive and negative feedback.

Applications to elliptic differential operators. In connection with differential operators it is common to use inner products and integration by parts. In the simplest case we consider functions satisfying formula_66 with inner product formula_67 Then it holds that formula_68 where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality. In the latter, equality is attained for the function formula_69, implying that the constant formula_70 is the best possible. Thus formula_71 for the differential operator formula_72, which implies that formula_73 As an operator satisfying formula_74 is called elliptic, the logarithmic norm quantifies the (strong) ellipticity of formula_75. Thus, if formula_0 is strongly elliptic, then formula_76, and the operator is invertible given proper data. If a finite difference method is used to solve formula_77, the problem is replaced by an algebraic equation formula_78. The matrix formula_79 will typically inherit the ellipticity, i.e., formula_80, showing that formula_79 is positive definite and therefore invertible. These results carry over to the Poisson equation as well as to other numerical methods such as the finite element method.
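The ellipticity argument carries over to the discrete problem, as the following sketch illustrates (grid size chosen for the example; Python with NumPy assumed): the standard finite-difference matrix for -"u"'' on (0,1) with homogeneous boundary conditions is positive definite, and its smallest eigenvalue approaches π², consistent with formula_73.

```python
import numpy as np

# Hedged sketch: second-order finite-difference matrix T for -u'' = f on
# (0,1) with u(0) = u(1) = 0.  For symmetric T, -mu_2(-T) equals the
# smallest eigenvalue of T, which is positive (T is elliptic/invertible)
# and tends to pi^2 as the grid is refined.
n = 200
h = 1.0 / (n + 1)
T = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

lam_min = np.linalg.eigvalsh(T).min()     # = -mu_2(-T) for symmetric T
print(lam_min, np.pi**2)                  # approximately equal
```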
Extensions to nonlinear maps. For nonlinear operators the operator norm and logarithmic norm are defined in terms of the inequalities formula_81 where formula_82 is the least upper bound Lipschitz constant of formula_83, and formula_84 is the greatest lower bound Lipschitz constant; and formula_85 where formula_86 and formula_87 are in the domain formula_88 of formula_83. Here formula_89 is the least upper bound logarithmic Lipschitz constant of formula_83, and "m"("f") is the greatest lower bound logarithmic Lipschitz constant. It holds that formula_90 (compare above) and, analogously, formula_91, where formula_92 is defined on the image of formula_83. For nonlinear operators that are Lipschitz continuous, it further holds that formula_93 If formula_83 is differentiable and its domain formula_88 is convex, then formula_94 and formula_95 Here formula_96 is the Jacobian matrix of formula_83, linking the nonlinear extension to the matrix norm and logarithmic norm. An operator having either formula_97 or formula_98 is called uniformly monotone. An operator satisfying formula_99 is called contractive. This extension offers many connections to fixed point theory and critical point theory. The theory becomes analogous to that of the logarithmic norm for matrices, but is more complicated as the domains of the operators need to be given close attention, as in the case of unbounded operators. Property (8) of the logarithmic norm above carries over, independently of the choice of vector norm, and it holds that formula_100 which quantifies the Uniform Monotonicity Theorem due to Browder &amp; Minty (1963).
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\| \\cdot \\|" }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "\\mu(A) = \\lim \\limits_{h \\rightarrow 0^+} \\frac{\\| I + hA \\| - 1}{h}" }, { "math_id": 4, "text": "I" }, { "math_id": 5, "text": "h" }, { "math_id": 6, "text": "h\\rightarrow 0^-" }, { "math_id": 7, "text": "-\\mu(-A)" }, { "math_id": 8, "text": "\\mu(A)" }, { "math_id": 9, "text": "-\\mu(-A) \\leq \\mu(A)" }, { "math_id": 10, "text": "\\|A\\|" }, { "math_id": 11, "text": "A\\neq 0" }, { "math_id": 12, "text": "\\dot x = Ax." }, { "math_id": 13, "text": "\\log \\|x\\|" }, { "math_id": 14, "text": "\\frac{\\mathrm d}{\\mathrm d t^+} \\log \\|x\\| \\leq \\mu(A)," }, { "math_id": 15, "text": "\\mathrm d/\\mathrm dt^+" }, { "math_id": 16, "text": "\\frac{\\mathrm d\\|x\\|}{\\mathrm d t^+} \\leq \\mu(A)\\cdot \\|x\\|," }, { "math_id": 17, "text": "\\Phi(t, t_0)" }, { "math_id": 18, "text": " \\dot x = A(t)x " }, { "math_id": 19, "text": " \\exp\\left(-\\int_{t_0}^{t} \\mu(-A(s)) ds \\right) \\le \\|\\Phi(t,t_0)\\| \\le \\exp\\left(\\int_{t_0}^{t} \\mu(A(s)) ds \\right) " }, { "math_id": 20, "text": " t \\ge t_0 " }, { "math_id": 21, "text": "x" }, { "math_id": 22, "text": " \\real\\langle x, Ax\\rangle \\leq \\mu(A)\\cdot \\|x\\|^2" }, { "math_id": 23, "text": " \\|A\\|^2 = \\sup_{x\\neq 0}{\\frac { \\langle Ax, Ax\\rangle }{ \\langle x,x\\rangle }}\\,; \\qquad \\mu(A) = \\sup_{x\\neq 0} {\\frac {\\real\\langle x, Ax\\rangle }{ \\langle x,x \\rangle }} " }, { "math_id": 24, "text": " \\mu(zI) = \\real\\,(z) " }, { "math_id": 25, "text": " \\mu(A) \\leq \\|A\\| " }, { "math_id": 26, "text": " \\mu(\\gamma A) = \\gamma \\mu(A)\\," }, { "math_id": 27, "text": "\\gamma > 0 " }, { "math_id": 28, "text": " \\mu(A+zI) = \\mu(A) + \\real\\,(z)" }, { "math_id": 29, "text": " \\mu(A + B) \\leq \\mu(A) + \\mu(B) " }, { "math_id": 30, "text": " \\alpha(A) \\leq \\mu(A)\\," }, { "math_id": 31, "text": "\\alpha(A) " }, { "math_id": 32, "text": " \\|\\mathrm e^{tA}\\| \\leq \\mathrm e^{t\\mu(A)}\\, " }, { "math_id": 33, "text": "t \\geq 0" }, { "math_id": 34, "text": " \\mu(A) < 0 \\, \\Rightarrow \\, \\|A^{-1}\\| \\leq -1/\\mu(A) " }, { "math_id": 35, "text": "a_{ij}" }, { "math_id": 36, "text": "i" }, { "math_id": 37, "text": "j" }, { "math_id": 38, "text": " \\mu_1(A) = \\sup \\limits_j \\left( \\real (a_{jj}) + \\sum \\limits_{ i \\neq j} |a_{ij}| \\right) " }, { "math_id": 39, "text": " \\displaystyle \\mu_{2}(A) = \\lambda_{max}\\left(\\frac{A+A^{\\mathrm T}}{2}\\right) " }, { "math_id": 40, "text": " \\mu_{\\infty}(A) = \\sup \\limits_i \\left( \\real (a_{ii}) + \\sum \\limits_{ j \\neq i} |a_{ij}| \\right) " }, { "math_id": 41, "text": "-\\mu(-A) \\leq {\\frac {x^{\\mathrm T}Ax}{x^{\\mathrm T}x}} \\leq \\mu(A)," }, { "math_id": 42, "text": "x\\neq 0" }, { "math_id": 43, "text": "\\lambda_k" }, { "math_id": 44, "text": "-\\mu(-A) \\leq \\real\\, \\lambda_k \\leq \\mu(A)" }, { "math_id": 45, "text": "-\\mu(-A)>0" }, { "math_id": 46, "text": "\\mu(A)<0" }, { "math_id": 47, "text": "\\|A^{-1}\\|\\leq - {\\frac {1}{\\mu(A)}}." }, { "math_id": 48, "text": "R" }, { "math_id": 49, "text": "\\real \\, (z)\\leq 0 \\, \\Rightarrow \\, |R(z)|\\leq 1" }, { "math_id": 50, "text": "\\mu(A)\\leq 0 \\, \\Rightarrow \\, \\|R(A)\\|\\leq 1." 
}, { "math_id": 51, "text": "\\dot x = Ax" }, { "math_id": 52, "text": "x_{n+1} = Ax_n" }, { "math_id": 53, "text": "\\lambda" }, { "math_id": 54, "text": "|\\lambda|\\leq 1" }, { "math_id": 55, "text": "\\real\\,\\lambda\\leq 0" }, { "math_id": 56, "text": "\\|A\\|\\leq 1" }, { "math_id": 57, "text": "\\mathrm e^{tA}x(0)" }, { "math_id": 58, "text": "\\|\\mathrm e^{tA}\\|\\leq 1" }, { "math_id": 59, "text": "t\\geq 0" }, { "math_id": 60, "text": "\\mu(A)\\leq 0" }, { "math_id": 61, "text": "\\|x\\|" }, { "math_id": 62, "text": "x_{n+1} = R(hA)\\cdot x_n" }, { "math_id": 63, "text": "|R(z)|\\leq 1" }, { "math_id": 64, "text": "\\real\\,(z)\\leq 0" }, { "math_id": 65, "text": "\\|R(hA)\\|\\leq 1" }, { "math_id": 66, "text": "u(0)=u(1)=0" }, { "math_id": 67, "text": "\\langle u,v\\rangle = \\int_0^1 uv\\, \\mathrm dx." }, { "math_id": 68, "text": "\\langle u,u''\\rangle = -\\langle u',u'\\rangle \\leq -\\pi^2\\|u\\|^2," }, { "math_id": 69, "text": "\\sin\\, \\pi x" }, { "math_id": 70, "text": "-\\pi^2" }, { "math_id": 71, "text": "\\langle u, Au\\rangle \\leq -\\pi^2 \\|u\\|^2" }, { "math_id": 72, "text": "A=\\mathrm d^2/\\mathrm dx^2" }, { "math_id": 73, "text": "\\mu({\\frac {\\mathrm d^2}{\\mathrm dx^2}}) = -\\pi^2." }, { "math_id": 74, "text": "\\langle u,Au \\rangle > 0" }, { "math_id": 75, "text": "-\\mathrm d^2/\\mathrm dx^2" }, { "math_id": 76, "text": "\\mu(-A)<0" }, { "math_id": 77, "text": "-u''=f" }, { "math_id": 78, "text": "Tu=f" }, { "math_id": 79, "text": "T" }, { "math_id": 80, "text": "-\\mu(-T)>0" }, { "math_id": 81, "text": "l(f)\\cdot \\|u-v\\| \\leq \\|f(u)-f(v)\\| \\leq L(f)\\cdot \\|u-v\\|," }, { "math_id": 82, "text": "L(f)" }, { "math_id": 83, "text": "f" }, { "math_id": 84, "text": "l(f)" }, { "math_id": 85, "text": "m(f)\\cdot \\|u-v\\|^2 \\leq \\langle u-v, f(u)-f(v)\\rangle \\leq M(f)\\cdot \\|u-v\\|^2," }, { "math_id": 86, "text": "u" }, { "math_id": 87, "text": "v" }, { "math_id": 88, "text": "D" }, { "math_id": 89, "text": "M(f)" }, { "math_id": 90, "text": "m(f)=-M(-f)" }, { "math_id": 91, "text": "l(f)=L(f^{-1})^{-1}" }, { "math_id": 92, "text": "L(f^{-1})" }, { "math_id": 93, "text": "M(f) = \\lim_{h\\rightarrow 0^+}{\\frac {L(I+hf)-1}{h}}." }, { "math_id": 94, "text": "L(f) = \\sup_{x\\in D} \\|f'(x)\\| " }, { "math_id": 95, "text": " \\displaystyle M(f) = \\sup_{x\\in D} \\mu(f'(x))." }, { "math_id": 96, "text": "f'(x)" }, { "math_id": 97, "text": "m(f) > 0" }, { "math_id": 98, "text": "M(f) < 0" }, { "math_id": 99, "text": "L(f) < 1" }, { "math_id": 100, "text": "M(f)<0\\,\\Rightarrow\\,L(f^{-1})\\leq -{\\frac {1}{M(f)}}," } ]
https://en.wikipedia.org/wiki?curid=14484071
1448472
Laplacian matrix
Matrix representation of a graph In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method. The Laplacian matrix relates to many useful properties of a graph. Together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector — the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian — as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows constructing low dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform that extends the traditional discrete Fourier transform by substituting, for the standard basis of complex sinusoids, the eigenvectors of the Laplacian matrix of a graph corresponding to the signal. The Laplacian matrix is the easiest to define for a simple graph but is more commonly used in applications for an edge-weighted graph, i.e., with weights on its edges — the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization — a column/row scaling of the matrix entries — resulting in normalized adjacency and Laplacian matrices.

Definitions for "simple graphs". Laplacian matrix. Given a simple graph formula_0 with formula_1 vertices formula_2, its Laplacian matrix formula_3 is defined element-wise as formula_4 or equivalently by the matrix formula_5 where "D" is the degree matrix and "A" is the adjacency matrix of the graph. Since formula_6 is a simple graph, formula_7 only contains 1s or 0s and its diagonal elements are all 0s. Here is a simple example of a labelled, undirected graph and its Laplacian matrix. We observe for the undirected graph that both the adjacency matrix and the Laplacian matrix are symmetric, and that row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular). For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example: In the directed graph, both the adjacency matrix and the Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or outdegree has been used.

Laplacian matrix for an undirected graph via the oriented incidence matrix. The formula_8 oriented incidence matrix "B" with element "B""ve" for the vertex "v" and the edge "e" (connecting vertices formula_9 and formula_10, with "i" ≠ "j") is defined by formula_11 Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian formula_12 matrix "L" defined as formula_13 where formula_14 is the matrix transpose of "B".
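The two constructions can be compared directly. The following sketch (a 4-vertex path graph invented for the example; Python with NumPy assumed) computes the Laplacian both as formula_5 and as formula_13, with arbitrarily chosen edge orientations.

```python
import numpy as np

# Hedged sketch: Laplacian of the path graph 0-1-2-3 via L = D - A and,
# equivalently, via the oriented incidence matrix, L = B B^T.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

B = np.array([[ 1,  0,  0],      # edge 1: 0 -> 1 (orientation is arbitrary)
              [-1,  1,  0],      # edge 2: 1 -> 2
              [ 0, -1,  1],      # edge 3: 2 -> 3
              [ 0,  0, -1]])
assert np.array_equal(L, B @ B.T)
print(L)
```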
An alternative product formula_15 defines the so-called formula_16 "edge-based Laplacian," as opposed to the original commonly used "vertex-based Laplacian" matrix "L".

Symmetric Laplacian for a directed graph. The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques requiring the symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter. In matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix formula_17 of the original directed graph and its matrix transpose formula_18, where the zero and one entries of formula_17 are treated as logical, rather than numerical, values, as in the following example:

Laplacian matrix normalization. A vertex with a large degree, also called a "heavy node," results in a large diagonal entry in the Laplacian matrix dominating the matrix properties. Normalization aims to make the influence of such vertices more equal to that of other vertices, by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the process of the normalization.

Symmetrically normalized Laplacian. The symmetrically normalized Laplacian matrix is defined as: formula_19 where formula_20 is the Moore–Penrose inverse. The elements of formula_21 are thus given by formula_22 The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric. For a non-symmetric adjacency matrix of a directed graph, either the indegree or the outdegree can be used for normalization:

Left (random-walk) and right normalized Laplacians. The left (random-walk) normalized Laplacian matrix is defined as: formula_23 where formula_20 is the Moore–Penrose inverse. The elements of formula_24 are given by formula_25 Similarly, the right normalized Laplacian matrix is defined as formula_26. The left or right normalized Laplacian matrix is not symmetric if the adjacency matrix is symmetric, except for the trivial case of all isolated vertices. For example, if formula_0 has no isolated vertices, then formula_27 is right stochastic and hence is the matrix of a random walk, so that the left normalized Laplacian formula_28 has each row summing to zero. Thus we sometimes alternatively call formula_29 the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian formula_26 each column sums to zero since formula_30 is left stochastic. For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization: The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic formula_31, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic formula_32.

Definitions for graphs with weighted edges. Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones.
In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative, with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.

Laplacian matrix. The Laplacian matrix is defined by formula_5 where "D" is the degree matrix and "A" is the adjacency matrix of the graph. For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example: Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values.

Symmetric Laplacian via the incidence matrix. For graphs with weighted edges one can define a weighted incidence matrix "B" and use it to construct the corresponding symmetric Laplacian as formula_13. An alternative, cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix just holding the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges.

We thus reuse the definition of the weightless formula_8 incidence matrix "B" with element "B""ve" for the vertex "v" and the edge "e" (connecting vertices formula_9 and formula_10, with "i" &gt; "j") defined by formula_11 We now also define a diagonal formula_16 matrix "W" containing the edge weights. Even though the edges in the definition of "B" are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian formula_12 matrix "L" defined as formula_33 where formula_14 is the matrix transpose of "B". The construction is illustrated in the following example, where every edge formula_34 is assigned the weight value "i", with formula_35

Symmetric Laplacian for a directed graph. Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph first, before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix formula_17 of the original directed graph and its matrix transpose formula_18 as in the following example: where the zero and one entries of formula_17 are treated as numerical, rather than logical as for simple graphs, values, explaining the difference in the results - for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values, e.g., the logical sum is 1 v 1 = 1, while the numeric sum is 1 + 1 = 2. Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the indegree and outdegree, as in the following example: The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix.
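The weighted construction can be illustrated as follows (weights and graph invented for the example; Python with NumPy assumed): the Laplacian is assembled as formula_33 and checked against formula_5 computed from the weighted adjacency matrix.

```python
import numpy as np

# Hedged sketch: weighted Laplacian of a 4-vertex path graph via
# L = B W B^T, where W holds the (made-up) edge weights, checked against
# the direct definition L = D - A with a weighted adjacency matrix.
B = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)
W = np.diag([1.0, 2.0, 3.0])            # weights of the three edges

L = B @ W @ B.T
A = np.array([[0, 1, 0, 0],
              [1, 0, 2, 0],
              [0, 2, 0, 3],
              [0, 0, 3, 0]], dtype=float)
assert np.allclose(L, np.diag(A.sum(axis=1)) - A)
print(L)
```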
Laplacian matrix normalization. The goal of normalization is, like for simple graphs, to make the diagonal entries of the Laplacian matrix all unit, also scaling off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges with large weights, just as well as due to a large number of connected edges with unit weights. Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted for calculation of the normalization factors.

Symmetrically normalized Laplacian. The symmetrically normalized Laplacian is defined as formula_19 where "L" is the unnormalized Laplacian, "A" is the adjacency matrix, "D" is the degree matrix, and formula_20 is the Moore–Penrose inverse. Since the degree matrix "D" is diagonal, its reciprocal square root formula_36 is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of "D". If all the edge weights are nonnegative, then all the degree values are automatically also nonnegative, and so every degree value has a unique positive square root. To avoid the division by zero, vertices with zero degrees are excluded from the process of the normalization, as in the following example:

The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix "A" is symmetric and the diagonal entries of "D" are nonnegative, in which case we can use the term the symmetric normalized Laplacian. The symmetric normalized Laplacian matrix can also be written as formula_37 using the weightless formula_8 incidence matrix "B" and the diagonal formula_16 matrix "W" containing the edge weights and defining the new formula_8 weighted incidence matrix formula_38 whose rows are indexed by the vertices and whose columns are indexed by the edges of G such that each column corresponding to an edge "e = {u, v}" has an entry formula_39 in the row corresponding to "u", an entry formula_40 in the row corresponding to "v", and has 0 entries elsewhere.

Random walk normalized Laplacian. The random walk normalized Laplacian is defined as formula_41 where "D" is the degree matrix. Since the degree matrix "D" is diagonal, its inverse formula_42 is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding diagonal entries of "D". For the isolated vertices (those with degree 0), a common choice is to set the corresponding element formula_43 to 0. The matrix elements of formula_24 are given by formula_44 The name of the random-walk normalized Laplacian comes from the fact that this matrix is formula_45, where formula_46 is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let formula_47 denote the i-th standard basis vector. Then formula_48 is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex formula_49; i.e., formula_50. More generally, if the vector formula_51 is a probability distribution of the location of a random walker on the vertices of the graph, then formula_52 is the probability distribution of the walker after formula_53 steps. The random walk normalized Laplacian can also be called the left normalized Laplacian formula_54 since the normalization is performed by multiplying the Laplacian by the normalization matrix formula_20 on the left.
It has each row summing to zero since formula_55 is right stochastic, assuming all the weights are non-negative. In the less commonly used right normalized Laplacian formula_26 each column sums to zero since formula_30 is left stochastic. For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization: The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic formula_31, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic formula_32.

Negative weights. Negative weights present several challenges for normalization:

Properties. For an (undirected) graph "G" and its Laplacian matrix "L" with eigenvalues formula_56: formula_70. Because formula_69 can be written as the inner product of the vector formula_71 with itself, this shows that formula_57, and so the eigenvalues of formula_67 are all non-negative. formula_72, i.e., formula_24 is similar to the normalized Laplacian formula_21. For this reason, even if formula_24 is in general not symmetric, it has real eigenvalues — exactly the same as the eigenvalues of the normalized symmetric Laplacian formula_21.

Interpretation as the discrete Laplace operator approximating the continuous Laplacian. The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method. (See Discrete Poisson equation) In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary. Such an interpretation allows one, e.g., to generalize the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.
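For illustration, the normalized variants can be computed directly. The following sketch (same invented path graph as above; Python with NumPy assumed) forms the symmetrically normalized and random-walk normalized Laplacians, checks the zero row sums of formula_24, and confirms that its eigenvalues are real and coincide with those of formula_21, per the similarity noted above.

```python
import numpy as np

# Hedged sketch: L_rw = I - D^{-1} A and L_sym = I - D^{-1/2} A D^{-1/2}
# for a path graph with no isolated vertices (so D is invertible).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
D_inv = np.diag(1.0 / d)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

L_rw  = np.eye(4) - D_inv @ A
L_sym = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

assert np.allclose(L_rw.sum(axis=1), 0.0)   # rows sum to zero
# L_rw is similar to L_sym, so both share the same real eigenvalues:
assert np.allclose(np.sort(np.linalg.eigvals(L_rw).real),
                   np.sort(np.linalg.eigvalsh(L_sym)))
print(L_sym)
```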
In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph, which is subject to the action of a magnetic field and the parameter formula_78 is called electric charge. In the following example formula_79: Deformed Laplacian. The deformed Laplacian is commonly defined as formula_80 where "I" is the identity matrix, "A" is the adjacency matrix, "D" is the degree matrix, and "s" is a (complex-valued) number. The standard Laplacian is just formula_81 and formula_82 is the signless Laplacian. Signless Laplacian. The signless Laplacian is defined as formula_83 where formula_84 is the degree matrix, and formula_17 is the adjacency matrix. Like the signed Laplacian formula_85, the signless Laplacian formula_73 also is positive semi-definite as it can be factored as formula_86 where formula_87 is the incidence matrix. formula_73 has a 0-eigenvector if and only if it has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as formula_88 This has a solution where formula_89 if and only if the graph has a bipartite connected component. Directed multigraphs. An analogue of the Laplacian matrix can be defined for directed multigraphs. In this case the Laplacian matrix "L" is defined as formula_90 where "D" is a diagonal matrix with "D""i","i" equal to the outdegree of vertex "i" and "A" is a matrix with "A""i","j" equal to the number of edges from "i" to "j" (including loops). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
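As a code illustration of the definitions above, the following minimal sketch builds the unnormalized, symmetrically normalized, and random-walk normalized Laplacians of a small weighted graph with NumPy. The graph and its weights are arbitrary example values; the pseudo-inverse convention (zero degrees map to zero) handles isolated vertices as described in the normalization section.

import numpy as np

# Adjacency matrix of a small undirected weighted graph (illustrative values)
A = np.array([[0., 2., 0.],
              [2., 0., 1.],
              [0., 1., 0.]])

deg = A.sum(axis=1)                  # weighted degrees
D = np.diag(deg)
L = D - A                            # unnormalized Laplacian L = D - A

# Pseudo-inverse square root of D: zero degrees stay zero (isolated vertices)
d_inv_sqrt = np.array([1.0 / np.sqrt(d) if d > 0 else 0.0 for d in deg])
D_inv_sqrt = np.diag(d_inv_sqrt)

L_sym = D_inv_sqrt @ L @ D_inv_sqrt  # symmetrically normalized Laplacian
d_inv = np.array([1.0 / d if d > 0 else 0.0 for d in deg])
L_rw = np.diag(d_inv) @ L            # random-walk normalized Laplacian

assert np.allclose(L.sum(axis=1), 0)     # rows of L sum to zero
assert np.allclose(L_rw.sum(axis=1), 0)  # rows of L_rw sum to zero as well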
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "v_1, \\ldots, v_n" }, { "math_id": 3, "text": "L_{n \\times n}" }, { "math_id": 4, "text": "L_{i,j} := \\begin{cases}\n \\deg(v_i) & \\mbox{if}\\ i = j \\\\\n -1 & \\mbox{if}\\ i \\neq j\\ \\mbox{and}\\ v_i \\mbox{ is adjacent to } v_j \\\\\n 0 & \\mbox{otherwise},\n\\end{cases}" }, { "math_id": 5, "text": "L = D - A, " }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "|v| \\times |e|" }, { "math_id": 9, "text": "v_i" }, { "math_id": 10, "text": "v_j" }, { "math_id": 11, "text": "B_{ve} = \\left\\{\\begin{array}{rl}\n 1, & \\text{if } v = v_i\\\\\n -1, & \\text{if } v = v_j\\\\\n 0, & \\text{otherwise}.\n\\end{array}\\right." }, { "math_id": 12, "text": "|v| \\times |v|" }, { "math_id": 13, "text": "L = B B^\\textsf{T}" }, { "math_id": 14, "text": "B^\\textsf{T}" }, { "math_id": 15, "text": "B^\\textsf{T}B" }, { "math_id": 16, "text": "|e| \\times |e|" }, { "math_id": 17, "text": "A" }, { "math_id": 18, "text": "A^T" }, { "math_id": 19, "text": "L^\\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = I - (D^+)^{1/2} A (D^+)^{1/2}," }, { "math_id": 20, "text": "D^+" }, { "math_id": 21, "text": "L^\\text{sym}" }, { "math_id": 22, "text": "L^\\text{sym}_{i,j} := \\begin{cases}\n 1 & \\mbox{if } i = j \\mbox{ and } \\deg(v_i) \\neq 0\\\\\n -\\frac{1}{\\sqrt{\\deg(v_i)\\deg(v_j)}} & \\mbox{if } i \\neq j \\mbox{ and } v_i \\mbox{ is adjacent to } v_j \\\\\n 0 & \\mbox{otherwise}.\n\\end{cases}" }, { "math_id": 23, "text": "L^\\text{rw} := D^+L = I - D^+A," }, { "math_id": 24, "text": "L^\\text{rw}" }, { "math_id": 25, "text": "L^\\text{rw}_{i,j} := \\begin{cases}\n 1 & \\mbox{if } i = j \\mbox{ and } \\deg(v_i) \\neq 0\\\\\n -\\frac{1}{\\deg(v_i)} & \\mbox{if } i \\neq j \\mbox{ and } v_i \\mbox{ is adjacent to } v_j \\\\\n 0 & \\mbox{otherwise}.\n\\end{cases}" }, { "math_id": 26, "text": "L D^+ = I - A D^+" }, { "math_id": 27, "text": "D^+A" }, { "math_id": 28, "text": "L^\\text{rw} := D^+L = I - D^+A" }, { "math_id": 29, "text": "L^\\text{rw}" }, { "math_id": 30, "text": "A D^+" }, { "math_id": 31, "text": "D_{\\text{out}}^+A" }, { "math_id": 32, "text": "AD_{\\text{in}}^+" }, { "math_id": 33, "text": "L = B W B^\\textsf{T}" }, { "math_id": 34, "text": "e_i" }, { "math_id": 35, "text": "i=1, 2, 3, 4." 
}, { "math_id": 36, "text": "(D^+)^{1/2}" }, { "math_id": 37, "text": "L^\\text{sym} := (D^+)^{1/2} L (D^+)^{1/2} = (D^+)^{1/2}B W B^\\textsf{T} (D^+)^{1/2} = S S^T" }, { "math_id": 38, "text": "S=(D^+)^{1/2}B W^{{1}/{2}}" }, { "math_id": 39, "text": "\\frac{1}{\\sqrt{d_u}}" }, { "math_id": 40, "text": "-\\frac{1}{\\sqrt{d_v}}" }, { "math_id": 41, "text": "L^\\text{rw} := D^+ L = I - D^+ A" }, { "math_id": 42, "text": "D^+" }, { "math_id": 43, "text": "L^\\text{rw}_{i,i}" }, { "math_id": 44, "text": "L^{\\text{rw}}_{i,j} := \\begin{cases}\n 1 & \\mbox{if}\\ i = j\\ \\mbox{and}\\ \\deg(v_i) \\neq 0\\\\\n -\\frac{1}{\\deg(v_i)} & \\mbox{if}\\ i \\neq j\\ \\mbox{and}\\ v_i \\mbox{ is adjacent to } v_j \\\\\n 0 & \\mbox{otherwise}.\n\\end{cases}" }, { "math_id": 45, "text": "L^\\text{rw} = I - P" }, { "math_id": 46, "text": "P = D^+A" }, { "math_id": 47, "text": " e_i " }, { "math_id": 48, "text": "x = e_i P " }, { "math_id": 49, "text": "i" }, { "math_id": 50, "text": "x_j = \\mathbb{P}\\left(v_i \\to v_j\\right)" }, { "math_id": 51, "text": " x " }, { "math_id": 52, "text": "x' = x P^t" }, { "math_id": 53, "text": "t" }, { "math_id": 54, "text": "L^\\text{rw} := D^+L" }, { "math_id": 55, "text": "P = D^+A" }, { "math_id": 56, "text": "\\lambda_0 \\le \\lambda_1 \\le \\cdots \\le \\lambda_{n-1}" }, { "math_id": 57, "text": "\\lambda_i \\ge 0" }, { "math_id": 58, "text": "\\lambda_0 = 0" }, { "math_id": 59, "text": "\\mathbf{v}_0 = (1, 1, \\dots, 1)" }, { "math_id": 60, "text": "L \\mathbf{v}_0 = \\mathbf{0} ." }, { "math_id": 61, "text": "f : V \\to \\mathbb{R}" }, { "math_id": 62, "text": "V" }, { "math_id": 63, "text": "n = |V|" }, { "math_id": 64, "text": "\\mathcal{L} = \\tfrac{1}{k} L = I - \\tfrac{1}{k} A" }, { "math_id": 65, "text": "2m" }, { "math_id": 66, "text": "m" }, { "math_id": 67, "text": "L" }, { "math_id": 68, "text": "\\mathbf{v}_i" }, { "math_id": 69, "text": "\\lambda_i" }, { "math_id": 70, "text": "\\begin{align}\n \\lambda_i & = \\mathbf{v}_i^\\textsf{T} L \\mathbf{v}_i \\\\\n & = \\mathbf{v}_i^\\textsf{T} M^\\textsf{T} M \\mathbf{v}_i \\\\\n & = \\left(M \\mathbf{v}_i\\right)^\\textsf{T} \\left(M \\mathbf{v}_i\\right). \\\\\n\\end{align}" }, { "math_id": 71, "text": "M \\mathbf{v}_i" }, { "math_id": 72, "text": "L^\\text{rw} = I-D^{-\\frac{1}{2}}\\left(I - L^\\text{sym}\\right) D^\\frac{1}{2}" }, { "math_id": 73, "text": "Q" }, { "math_id": 74, "text": "\\begin{cases}\n Q_{i,j} < 0 & \\mbox{if } i \\neq j \\mbox{ and } v_i \\mbox{ is adjacent to } v_j\\\\\n Q_{i,j} = 0 & \\mbox{if } i \\neq j \\mbox{ and } v_i \\mbox{ is not adjacent to } v_j \\\\\n \\mbox{any number} & \\mbox{otherwise}.\n\\end{cases}" }, { "math_id": 75, "text": "Y" }, { "math_id": 76, "text": "w_{ij}" }, { "math_id": 77, "text": "\\gamma_q(i, j) = e^{i2 \\pi q(w_{ij}-w_{ji})}" }, { "math_id": 78, "text": "q" }, { "math_id": 79, "text": "q=1/4" }, { "math_id": 80, "text": "\\Delta(s) = I - sA + s^2(D - I)" }, { "math_id": 81, "text": "\\Delta(1)" }, { "math_id": 82, "text": "\\Delta(-1) = D + A" }, { "math_id": 83, "text": "Q = D + A" }, { "math_id": 84, "text": "D" }, { "math_id": 85, "text": "L" }, { "math_id": 86, "text": "Q = RR^\\textsf{T}" }, { "math_id": 87, "text": "R" }, { "math_id": 88, "text": "\\mathbf{x}^\\textsf{T} Q \\mathbf{x} = \\mathbf{x}^\\textsf{T} R R^\\textsf{T} \\mathbf{x} \\implies R^\\textsf{T} \\mathbf{x} = \\mathbf{0}." }, { "math_id": 89, "text": "\\mathbf{x} \\neq \\mathbf{0}" }, { "math_id": 90, "text": "L = D - A" } ]
https://en.wikipedia.org/wiki?curid=1448472
1448500
Laser beam welding
Welding technique Laser beam welding (LBW) is a welding technique used to join pieces of metal or thermoplastics through the use of a laser. The beam provides a concentrated heat source, allowing for narrow, deep welds and high welding rates. The process is frequently used in high-volume, precision-demanding applications using automation, as in the automotive and aeronautics industries. It is based on keyhole or penetration mode welding. Operation. Like electron-beam welding (EBW), laser beam welding has high power density (on the order of 1 MW/cm2) resulting in small heat-affected zones and high heating and cooling rates. The spot size of the laser can vary between 0.2 mm and 13 mm, though only smaller sizes are used for welding. The depth of penetration is proportional to the amount of power supplied, but is also dependent on the location of the focal point: penetration is maximized when the focal point is slightly below the surface of the workpiece. A continuous or pulsed laser beam may be used depending upon the application. Millisecond-long pulses are used to weld thin materials such as razor blades, while continuous laser systems are employed for deep welds. LBW is a versatile process, capable of welding carbon steels, HSLA steels, stainless steel, aluminum, and titanium. Due to high cooling rates, cracking is a concern when welding high-carbon steels. The weld quality is high, similar to that of electron beam welding. The speed of welding is proportional to the amount of power supplied but also depends on the type and thickness of the workpieces. The high power capability of gas lasers makes them especially suitable for high-volume applications. LBW is particularly dominant in the automotive industry. Some of the advantages of LBW in comparison to EBW are that the laser beam can be transmitted through air rather than requiring a vacuum, the process is easily automated with robotic machinery, x-rays are not generated, and LBW results in higher-quality welds. A derivative of LBW, laser-hybrid welding, combines the laser of LBW with an arc welding method such as gas metal arc welding (GMAW). This combination allows for greater positioning flexibility, since GMAW supplies molten metal to fill the joint, and, due to the use of a laser, increases the welding speed over what is normally possible with GMAW. Weld quality tends to be higher as well, since the potential for undercutting is reduced. Equipment. Automation and CAM. Although laser beam welding can be accomplished by hand, most systems are automated and use a system of computer-aided manufacturing based on computer-aided designs. Laser welding can also be coupled with milling to form a finished part. In 2016 the RepRap project, which historically worked on fused filament fabrication, expanded to development of open-source laser welding systems. Such systems have been fully characterized and can be used in a wide range of applications while reducing conventional manufacturing costs. Lasers. Solid state. Solid-state lasers operate at wavelengths on the order of 1 micrometer, much shorter than those of gas lasers used for welding, and as a result require that operators wear special eyewear or use special screens to prevent retina damage. Nd:YAG lasers can operate in both pulsed and continuous mode, but the other types are limited to pulsed mode. The original and still popular solid-state design is a single crystal shaped as a rod approximately 20 mm in diameter and 200 mm long, with the ends ground flat. This rod is surrounded by a flash tube containing xenon or krypton. When flashed, a pulse of light lasting about two milliseconds is emitted by the laser.
Disk shaped crystals are growing in popularity in the industry, and flashlamps are giving way to diodes due to their high efficiency. Typical power output for ruby lasers is 10–20 W, while the Nd:YAG laser outputs between 0.04–6,000 W. To deliver the laser beam to the weld area, fiber optics are usually employed. Gas. Gas lasers use high-voltage, low-current power sources to supply the energy needed to excite the gas mixture used as a lasing medium. These lasers can operate in both continuous and pulsed mode, and the wavelength of the CO2 gas laser beam is 10.6 μm, deep infrared, i.e. 'heat'. Fiber optic cable absorbs and is destroyed by this wavelength, so a rigid lens and mirror delivery system is used. Power outputs for gas lasers can be much higher than solid-state lasers, reaching 25 kW. Fiber. In fiber lasers, the main medium is the optical fiber itself. They are capable of power up to 50 kW and are increasingly being used for robotic industrial welding. Laser beam delivery. Modern laser beam welding machines can be grouped into two types. In the traditional type, the laser output is moved to follow the seam. This is usually achieved with a robot. In many modern applications, remote laser beam welding is used. In this method, the laser beam is moved along the seam with the help of a laser scanner, so that the robotic arm does not need to follow the seam any more. The advantages of remote laser welding are the higher speed and the higher precision of the welding process. Thermal modeling of pulsed-laser welding. Pulsed-laser welding has advantages over continuous wave (CW) laser welding. Some of these advantages are lower porosity and less spatter. Pulsed-laser welding also has some disadvantages such as causing hot cracking in aluminum alloys. Thermal analysis of the pulsed-laser welding process can assist in prediction of welding parameters such as depth of fusion, cooling rates, and residual stresses. Due to the complexity of the pulsed laser process, it is necessary to employ a procedure that involves a development cycle. The cycle involves constructing a mathematical model, calculating a thermal cycle using numerical modeling techniques like either finite elemental modeling (FEM) or finite difference method (FDM) or analytical models with simplifying assumptions, and validating the model by experimental measurements. A methodology combining some of the published models involves: Step 1. Not all radiant energy is absorbed and turned into heat for welding. Some of the radiant energy is absorbed in the plasma created by vaporizing and then subsequently ionizing the gas. In addition, the absorptivity is affected by the wavelength of the beam, the surface composition of the material being welded, the angle of incidence, and the temperature of the material. Rosenthal point source assumption leaves an infinitely high temperature discontinuity which is addressed by assuming a Gaussian distribution instead. Radiant energy is also not uniformly distributed within the beam. Some devices produce Gaussian energy distributions, whereas others can be bimodal. A Gaussian energy distribution can be applied by multiplying the power density by a function like this:formula_0, where r is the radial distance from the center of the beam, formula_1=beam radius or spot size. Using a temperature distribution instead of a point source assumption allows for easier calculation of temperature-dependent material properties such as absorptivity. 
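A short sketch of the Gaussian weighting described in Step 1: the nominal power density is scaled by exp(−r²/a₀²), so the heat input falls off smoothly with radial distance from the beam axis. The peak power density and beam radius below are illustrative values, not data from any particular welding setup.

import math

def gaussian_power_density(q_peak, r, a0):
    """Power density at radial distance r from the beam center,
    for peak density q_peak and beam radius a0 (same length units)."""
    return q_peak * math.exp(-(r / a0) ** 2)

# Illustrative numbers: 1 MW/cm^2 peak density, 0.3 mm beam radius
q_peak = 1.0e6   # W/cm^2
a0 = 0.03        # cm
for r in (0.0, 0.015, 0.03, 0.06):
    print(f"r = {r:.3f} cm -> q = {gaussian_power_density(q_peak, r, a0):.3e} W/cm^2")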
On the irradiated surface, when a keyhole is formed, Fresnel reflection (the almost complete absorption of the beam energy due to multiple reflection within the keyhole cavity) occurs and can be modeled by formula_2, where ε is a function of dielectric constant, electric conductivity, and laser frequency. θ is the angle of incidence. Understanding the absorption efficiency is key to calculating thermal effects. Step 2. Lasers can weld in one of two modes: conduction and keyhole. Which mode is in operation depends on whether the power density is sufficiently high enough to cause evaporation. Conduction mode occurs below the vaporization point while keyhole mode occurs above the vaporization point. The keyhole is analogous to an air pocket. The air pocket is in a state of flux. Forces such as the recoil pressure of the evaporated metal open the keyhole while gravity (aka hydrostatic forces) and metal surface tension tend to collapse it. At even higher power densities, the vapor can be ionized to form a plasma. The recoil pressure is determined by using the Clausius-Clapeyron equation.formula_3, where P is the equilibrium vapor pressure, T is the liquid surface temperature, HLV is the latent heat of vaporization, TLV is the equilibrium temperature at the liquid-vapor interface. Using the assumption that the vapor flow is limited to sonic velocities, one gets that formula_4, where Po is atmospheric pressure and Pr is recoil pressure. Step 3. This pertains to keyhole profiles. Fluid flow velocities are determined by formula_5 formula_6 formula_7 where formula_8 is the velocity vector, P=pressure, ρ= mass density, formula_9=viscosity, β=thermal expansion coefficient, g=gravity, and F is the volume fraction of fluid in a simulation grid cell. Step 4. In order to determine the boundary temperature at the laser impingement surface, you would apply an equation like this. formula_10, where kn=the thermal conductivity normal to the surface impinged on by the laser, h=convective heat transfer coefficient for air, σ is the Stefan–Boltzmann constant for radiation, and ε is the emissivity of the material being welded on, q is laser beam heat flux. Unlike CW (Continuous Wave) laser welding which involves one moving thermal cycle, pulsed laser involves repetitively impinging on the same spot, thus creating multiple overlapping thermal cycles. A method of addressing this is to add a step function that multiplies the heat flux by one when the beam is on but multiplies the heat flux by zero when the beam is off. One way to achieve this is by using a Kronecker delta which modifies q as follows: formula_11, where δ= the Kronecker delta, qe=experimentally determined heat flux. The problem with this method, is it does not allow you to see the effect of pulse duration. One way of solving this is to a use a modifier that is time-dependent function such as: formula_12 where v= pulse frequency, n=0,1, 2...,v-1), τ= pulse duration. Next, you would apply this boundary condition and solve for Fourier's 2nd Law to obtain the internal temperature distribution. Assuming no internal heat generation, the solution is formula_13, where k=thermal conductivity, ρ=density, Cp=specific heat capacity, formula_8=fluid velocity vector. Step 5. Incrementing is done by discretizing the governing equations presented in the previous steps and applying the next time and length steps. Step 6. Results can be validated by specific experimental observations or trends from generic experiments. 
These experiments have involved metallographic verification of the depth of fusion. Consequences of simplifying assumptions. The physics of pulsed-laser welding can be very complex, and therefore some simplifying assumptions need to be made either to speed up calculation or to compensate for a lack of material-property data. For example, the temperature dependence of material properties such as specific heat is often ignored to minimize computing time. The liquid temperature can be overestimated if the heat loss due to mass loss from vapor leaving the liquid-metal interface is not accounted for. References. <templatestyles src="Reflist/styles.css" />
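The time-dependent pulse modifier of Step 4 can be sketched directly in code: the beam heat flux is multiplied by 1 while a pulse is on and by 0 between pulses, given a pulse frequency v and pulse duration τ. The function and the numbers below are illustrative, not taken from a particular experiment.

def pulse_on(t, freq, tau):
    """Return 1.0 if the laser is on at time t, else 0.0.
    freq: pulse frequency (Hz); tau: pulse duration (s).
    Pulse n occupies the interval [n/freq, n/freq + tau]."""
    phase = t % (1.0 / freq)   # time elapsed since the start of the current pulse
    return 1.0 if phase <= tau else 0.0

def heat_flux(t, q_cw, freq, tau):
    """Effective surface heat flux: the continuous flux q_cw gated by the pulses."""
    return q_cw * pulse_on(t, freq, tau)

# Illustrative: 100 Hz pulse frequency, 2 ms on-time
print(heat_flux(0.001, 1.0e6, 100.0, 0.002))   # during a pulse  -> 1e6
print(heat_flux(0.005, 1.0e6, 100.0, 0.002))   # between pulses  -> 0.0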
[ { "math_id": 0, "text": "f(r)=\\exp(-r^2/a_o^2)" }, { "math_id": 1, "text": "a_o" }, { "math_id": 2, "text": "\\alpha_{\\theta}=1-R_{\\theta}=1-0.5{{1+(1-\\epsilon \\cos \\theta)^2 \\over {1+{1+\\epsilon \\cos \\theta)^2}}}+ {{{\\epsilon^2}-2\\epsilon \\cos \\theta+2 \\cos^2 \\theta} \\over {\\epsilon^2}+2\\epsilon \\cos \\theta+2 \\cos^2 \\theta}}" }, { "math_id": 3, "text": "{dP \\over dT}={d\\Delta H_{LV} \\over dT\\Delta V_{LV}}\\thickapprox {d\\Delta H_{LV}\\over T_{LV} V_{LV}}" }, { "math_id": 4, "text": "P_r\\approxeq0.54P_oexp(\\Delta H_{LV}{{T-T_{LV}\\over RTT_{LV}}})" }, { "math_id": 5, "text": "\\bigtriangledown*\\overrightarrow{v}=0" }, { "math_id": 6, "text": "{\\partial \\overrightarrow{v}\\over\\partial t}+ (\\overrightarrow{v}*\\bigtriangledown)\\overrightarrow{v} =-{1 \\over \\rho} \\bigtriangledown P +v\\bigtriangledown\\overrightarrow{v}+\\beta\\overrightarrow{g}\\Delta T" }, { "math_id": 7, "text": "{\\partial F \\over\\partial t}+(\\overrightarrow{v}* \\bigtriangledown) F = 0" }, { "math_id": 8, "text": "\\overrightarrow{v}" }, { "math_id": 9, "text": "v" }, { "math_id": 10, "text": "k_n{\\partial T\\over \\partial n}-q+h(T-T_o)+\\sigma \\epsilon (T^4-T^2_o)=0" }, { "math_id": 11, "text": "q=\\delta*qe" }, { "math_id": 12, "text": "f(n) = \\begin{cases} 1, & \\text{if }n/v\\leq t \\leq n/v+\\tau \\\\ 0, & \\text{if }n/v+\\tau\\leq t \\leq (n+1)/v \\end{cases}" }, { "math_id": 13, "text": "\\rho C_p ({\\partial T \\over \\partial t}+\\overrightarrow{v} \\bigtriangledown T)=k \\bigtriangledown T" } ]
https://en.wikipedia.org/wiki?curid=1448500
14485857
Taft equation
The Taft equation is a linear free energy relationship (LFER) used in physical organic chemistry in the study of reaction mechanisms and in the development of quantitative structure–activity relationships for organic compounds. It was developed by Robert W. Taft in 1952 as a modification to the Hammett equation. While the Hammett equation accounts for how field, inductive, and resonance effects influence reaction rates, the Taft equation also describes the steric effects of a substituent. The Taft equation is written as: formula_0 where formula_1 is the ratio of the rate of the substituted reaction compared to the reference reaction, ρ* is the sensitivity factor for the reaction to polar effects, σ* is the polar substituent constant that describes the field and inductive effects of the substituent, δ is the sensitivity factor for the reaction to steric effects, and Es is the steric substituent constant. Polar substituent constants, σ*. Polar substituent constants describe the way a substituent will influence a reaction through polar (inductive, field, and resonance) effects. To determine σ* Taft studied the hydrolysis of methyl esters (RCOOMe). The use of ester hydrolysis rates to study polar effects was first suggested by Ingold in 1930. The hydrolysis of esters can occur through either acid and base catalyzed mechanisms, both of which proceed through a tetrahedral intermediate. In the base catalyzed mechanism the reactant goes from a neutral species to negatively charged intermediate in the rate determining (slow) step, while in the acid catalyzed mechanism a positively charged reactant goes to a positively charged intermediate. Due to the similar tetrahedral intermediates, Taft proposed that under identical conditions any steric factors should be nearly the same for the two mechanisms and therefore would not influence the ratio of the rates. However, because of the difference in charge buildup in the rate determining steps it was proposed that polar effects would only influence the reaction rate of the base catalyzed reaction since a new charge was formed. He defined the polar substituent constant σ* as: formula_2 where log(ks/kCH3)B is the ratio of the rate of the base catalyzed reaction compared to the reference reaction, log(ks/kCH3)A is ratio of a rate of the acid catalyzed reaction compared to the reference reaction, and ρ* is a reaction constant that describes the sensitivity of the reaction series. For the definition reaction series, ρ* was set to 1 and R = methyl was defined as the reference reaction (σ* = zero). The factor of 1/2.48 is included to make σ* similar in magnitude to the Hammett σ values. Steric substituent constants, Es. Although the acid catalyzed and base catalyzed hydrolysis of esters gives transition states for the rate determining steps that have differing charge densities, their structures differ only by two hydrogen atoms. Taft thus assumed that steric effects would influence both reaction mechanisms equally. Due to this, the steric substituent constant Es was determined from solely the acid catalyzed reaction, as this would not include polar effects. Es was defined as: formula_3 where "ks" is the rate of the studied reaction and &lt;chem&gt;\mathit k_{CH3}&lt;/chem&gt; is the rate of the reference reaction (R = methyl). δ is a reaction constant that describes the susceptibility of a reaction series to steric effects. For the definition reaction series δ was set to 1 and "Es" for the reference reaction was set to zero. 
This equation is combined with the equation for σ* to give the full Taft equation. From comparing the "Es" values for methyl, ethyl, isopropyl, and tert-butyl, it is seen that the value increases with increasing steric bulk. However, because context will have an effect on steric interactions some "Es" values can be larger or smaller than expected. For example, the value for phenyl is much larger than that for "tert"-butyl. When comparing these groups using another measure of steric bulk, axial strain values, the "tert"-butyl group is larger. Other steric parameters for LFERs. In addition to Taft's steric parameter "Es", other steric parameters that are independent of kinetic data have been defined. Charton has defined values "v" that are derived from van der Waals radii. Using molecular mechanics, Meyers has defined "V"a values that are derived from the volume of the portion of the substituent that is within 0.3 nm of the reaction center. Sensitivity factors. Polar sensitivity factor, ρ*. Similar to ρ values for Hammett plots, the polar sensitivity factor ρ* for Taft plots will describe the susceptibility of a reaction series to polar effects. When the steric effects of substituents do not significantly influence the reaction rate the Taft equation simplifies to a form of the Hammett equation: formula_4 The polar sensitivity factor ρ* can be obtained by plotting the ratio of the measured reaction rates ("ks") compared to the reference reaction (&lt;chem&gt;\mathit k_{CH3}&lt;/chem&gt;) versus the σ* values for the substituents. This plot will give a straight line with a slope equal to ρ*. Similar to the Hammett ρ value: Steric sensitivity factor, δ. Similar to the polar sensitivity factor, the steric sensitivity factor δ for a new reaction series will describe to what magnitude the reaction rate is influenced by steric effects. When a reaction series is not significantly influenced by polar effects, the Taft equation reduces to: formula_5 A plot of the ratio of the rates versus the "Es" value for the substituent will give a straight line with a slope equal to δ. Similarly to the Hammett ρ value, the magnitude of δ will reflect to what extent a reaction is influenced by steric effects: Since "Es" values are large and "negative" for bulkier substituents, it follows that: Reactions influenced by polar and steric effects. When both steric and polar effects influence the reaction rate the Taft equation can be solved for both ρ* and δ through the use of standard least squares methods for determining a bivariant regression plane. Taft outlined the application of this method to solving the Taft equation in a 1957 paper. Taft plots in QSAR. The Taft equation is often employed in biological chemistry and medicinal chemistry for the development of quantitative structure–activity relationships (QSARs). In a recent example, Sandri and co-workers have used Taft plots in studies of polar effects in the aminolysis of β-lactams. They have looked at the binding of β-lactams to a poly(ethyleneimine) polymer, which functions as a simple mimic for human serum albumin (HSA). The formation of a covalent bond between penicillins and HSA as a result of aminolysis with lysine residues is believed to be involved in penicillin allergies. As a part of their mechanistic studies Sandri and co-workers plotted the rate of aminolysis versus calculated σ* values for 6 penicillins and found no correlation, suggesting that the rate is influenced by other effects in addition to polar and steric effects. References. 
<templatestyles src="Reflist/styles.css" />
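When both polar and steric effects operate, the bivariate least-squares fit Taft outlined can be sketched as an ordinary least-squares regression of log(ks/kCH3) on σ* and Es. The substituent constants and rate ratios below are placeholder values chosen only to show the mechanics of the fit; they are not measured data for any real reaction series.

import numpy as np

# Placeholder data: (sigma*, Es, log10(ks/kCH3)) for a hypothetical series
sigma_star = np.array([0.00, -0.10, -0.19, -0.30])
Es         = np.array([0.00, -0.07, -0.47, -1.54])
log_k      = np.array([0.00, -0.15, -0.55, -1.30])

# Design matrix for the model log(ks/kCH3) = rho* sigma* + delta Es
X = np.column_stack([sigma_star, Es])
(rho_star, delta), *_ = np.linalg.lstsq(X, log_k, rcond=None)
print(f"rho* = {rho_star:.3f}, delta = {delta:.3f}")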
[ { "math_id": 0, "text": "\\log\\left ( \\frac{k_s}{k_{\\ce{CH3}}} \\right )= \\rho^*\\sigma^* + \\delta E_s" }, { "math_id": 1, "text": "\\log\\frac{k_s}{k_\\ce{CH3}}" }, { "math_id": 2, "text": "\\sigma^* = \\left( \\frac{1}{2.48\\rho^*} \\right )\\Bigg[\\log\\left( \\frac{k_s}{k_{\\ce{CH3}}} \\right )_B - \\log\\left( \\frac{k_s}{k_{\\ce{CH3}}} \\right )_A \\Bigg]" }, { "math_id": 3, "text": "E_s = \\frac {1}{\\delta}\\log\\left ( \\frac{k_s}{k_{\\ce{CH3}}} \\right )" }, { "math_id": 4, "text": "\\log \\left (\\frac {k_s}{k_{\\ce{CH3}}}\\right ) = \\rho^*\\sigma^*" }, { "math_id": 5, "text": "\\log \\left (\\frac {k_s}{k_{\\ce{CH3}}}\\right ) = \\delta E_s" } ]
https://en.wikipedia.org/wiki?curid=14485857
1448612
Speedup
Process for increasing the performance between two systems solving the same problem In computer architecture, speedup is a number that measures the relative performance of two systems processing the same problem. More technically, it is the improvement in speed of execution of a task executed on two similar architectures with different resources. The notion of speedup was established by Amdahl's law, which was particularly focused on parallel processing. However, speedup can be used more generally to show the effect on performance after any resource enhancement. Definitions. Speedup can be defined for two different types of quantities: "latency" and "throughput". "Latency" of an architecture is the reciprocal of the execution speed of a task: formula_0 where "Throughput" of an architecture is the execution rate of a task: formula_1 where Latency is often measured in seconds per unit of execution workload. Throughput is often measured in units of execution workload per second. Another unit of throughput is instructions per cycle (IPC) and its reciprocal, cycles per instruction (CPI), is another unit of latency. Speedup is dimensionless and defined differently for each type of quantity so that it is a consistent metric. Speedup in latency. Speedup in "latency" is defined by the following formula: formula_2 where Speedup in latency can be predicted from Amdahl's law or Gustafson's law. Speedup in throughput. Speedup in "throughput" is defined by the formula: formula_3 where Examples. Using execution times. We are testing the effectiveness of a branch predictor on the execution of a program. First, we execute the program with the standard branch predictor on the processor, which yields an execution time of 2.25 seconds. Next, we execute the program with our modified (and hopefully improved) branch predictor on the same processor, which produces an execution time of 1.50 seconds. In both cases the execution workload is the same. Using our speedup formula, we know formula_4 Our new branch predictor has provided a 1.5x speedup over the original. Using cycles per instruction and instructions per cycle. We can also measure speedup in cycles per instruction (CPI) which is a latency. First, we execute the program with the standard branch predictor, which yields a CPI of 3. Next, we execute the program with our modified branch predictor, which yields a CPI of 2. In both cases the execution workload is the same and both architectures are not pipelined nor parallel. Using the speedup formula gives formula_5 We can also measure speedup in instructions per cycle (IPC), which is a throughput and the inverse of CPI. Using the speedup formula gives formula_6 We achieve the same 1.5x speedup, though we measured different quantities. Additional details. Let "S" be the speedup of execution of a task and "s" the speedup of execution of the part of the task that benefits from the improvement of the resources of an architecture. "Linear speedup" or "ideal speedup" is obtained when "S" = "s". When running a task with linear speedup, doubling the local speedup doubles the overall speedup. As this is ideal, it is considered very good scalability. "Efficiency" is a metric of the utilization of the resources of the improved system defined as formula_7 Its value is typically between 0 and 1. 
Programs with linear speedup and programs running on a single processor have an efficiency of 1, while many difficult-to-parallelize programs have efficiency such as 1/ln("s") that approaches 0 as the number of processors "A" = "s" increases. In engineering contexts, efficiency curves are more often used for graphs than speedup curves, since all of the area in the graph is useful (whereas in speedup curves half of the space is wasted), it is easy to see how well the improvement of the system is working, and there is no need to plot a "perfect speedup" curve. In marketing contexts, speedup curves are more often used, largely because they go up and to the right and thus appear better to the less-informed. Super-linear speedup. Sometimes a speedup of more than "A" when using "A" processors is observed in parallel computing, which is called "super-linear speedup". Super-linear speedup rarely happens and often confuses beginners, who believe the theoretical maximum speedup should be "A" when "A" processors are used. One possible reason for super-linear speedup in low-level computations is the cache effect resulting from the different memory hierarchies of a modern computer: in parallel computing, not only do the numbers of processors change, but so does the size of accumulated caches from different processors. With the larger accumulated cache size, more or even all of the working set can fit into caches and the memory access time reduces dramatically, which causes the extra speedup in addition to that from the actual computation. An analogous situation occurs when searching large datasets, such as the genomic data searched by BLAST implementations. There the accumulated RAM from each of the nodes in a cluster enables the dataset to move from disk into RAM, thereby drastically reducing the time required by, e.g., mpiBLAST to search it. Super-linear speedups can also occur when performing backtracking in parallel: an exception in one thread can cause several other threads to backtrack early, before they reach the exception themselves. Super-linear speedups can also occur in parallel implementations of branch-and-bound for optimization: the processing of one node by one processor may affect the work other processors need to do for the other nodes. References. <templatestyles src="Reflist/styles.css" />
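The definitions above reduce to a few lines of code. This sketch computes speedup in latency from two execution times on the same workload, and the corresponding parallel efficiency; the timing numbers reuse the branch-predictor example from this article, while the efficiency figures are arbitrary illustrations.

def speedup_latency(t_old, t_new):
    """Speedup in latency for the same workload run on both systems."""
    return t_old / t_new

def efficiency(speedup, resources):
    """Efficiency of the improved system: S divided by the resource count s."""
    return speedup / resources

S = speedup_latency(2.25, 1.50)   # the branch-predictor example: 1.5x
print(S)                          # 1.5
print(efficiency(6.0, 8))         # e.g. 6x speedup on 8 processors -> 0.75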
[ { "math_id": 0, "text": "L = \\frac{1}{v} = \\frac{T}{W}," }, { "math_id": 1, "text": "Q = \\rho vA = \\frac{\\rho AW}{T} = \\frac{\\rho A}{L}," }, { "math_id": 2, "text": "S_\\text{latency} = \\frac{L_1}{L_2} = \\frac{T_1W_2}{T_2W_1}," }, { "math_id": 3, "text": "S_\\text{throughput} = \\frac{Q_2}{Q_1} = \\frac{\\rho_2A_2T_1W_2}{\\rho_1A_1T_2W_1} = \\frac{\\rho_2A_2}{\\rho_1A_1}S_\\text{latency}," }, { "math_id": 4, "text": "S_\\text{latency} = \\frac{L_\\text{old}}{L_\\text{new}} = \\frac{2.25~\\mathrm{s}}{1.50~\\mathrm{s}} = 1.5." }, { "math_id": 5, "text": "S_\\text{latency} = \\frac{L_\\text{old}}{L_\\text{new}} = \\frac{3~\\text{CPI}}{2~\\text{CPI}} = 1.5." }, { "math_id": 6, "text": "S_\\text{throughput} = \\frac{Q_\\text{new}}{Q_\\text{old}} = \\frac{0.5~\\text{IPC}}{0.33~\\text{IPC}} = 1.5." }, { "math_id": 7, "text": "\\eta = \\frac{S}{s}." } ]
https://en.wikipedia.org/wiki?curid=1448612
14486776
Contact analysis
In cryptanalysis, contact analysis is the study of the frequency with which certain symbols precede or follow other symbols. The method is used as an aid to breaking classical ciphers. Contact analysis is based on the fact that, in any sample of any written language, certain symbols appear adjacent to other symbols with varying frequencies. Moreover, these frequencies are roughly the same for almost all samples of that language, even when the distribution of the symbols themselves differs significantly from normal. This is true regardless of whether the symbols being used are words or letters. In some ciphers, these properties of the natural language plaintext are preserved in the ciphertext, and have the potential to be exploited in a ciphertext-only attack. Although in a sense contact analysis can be considered a type of frequency analysis, most discussions of frequency analysis concern themselves with the simple probabilities of the symbols in the text: formula_0 or formula_1 Contact analysis is based on the conditional probability that certain letters will precede or succeed other letters: formula_2, or formula_3, or even formula_4, where formula_5 and formula_6 are subsets of the alphabet being used. Where frequency analysis is based on first-order statistics, contact analysis is based on second or third-order statistics.
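A minimal sketch of gathering contact statistics in Python: count how often each letter follows each other letter in a text, and convert the counts into the conditional frequencies P(X_i = b | X_{i-1} = a). The sample string is arbitrary; real cryptanalysis would use a large corpus of the target language.

from collections import Counter

def contact_table(text):
    """Conditional successor frequencies P(next = b | current = a)."""
    letters = [c for c in text.lower() if c.isalpha()]
    pairs = Counter(zip(letters, letters[1:]))   # adjacent-pair counts
    first = Counter(letters[:-1])                # counts of each predecessor letter
    return {(a, b): n / first[a] for (a, b), n in pairs.items()}

table = contact_table("the quick brown fox jumps over the lazy dog")
print(table[('t', 'h')])   # frequency with which 'h' follows 't' in the sample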
[ { "math_id": 0, "text": "P(X_i=a)" }, { "math_id": 1, "text": "P(X_i=a \\cap X_{i+1}=b)" }, { "math_id": 2, "text": "P(X_i=b \\mid X_{i-1}=a)" }, { "math_id": 3, "text": "P(X_i=c \\mid X_{i-2}=a \\cap X_{i-1}=b)" }, { "math_id": 4, "text": "P(X_i \\sub S \\mid X_{i-1}\\sub T \\cap X_{i+1} \\sub T)" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=14486776
1448702
Iterated function
Result of repeatedly applying a mathematical function In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. For example, on the image on the right: formula_0 Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics. Definition. The formal definition of an iterated function on a set "X" follows. Let "X" be a set and "f": "X" → "X" be a function. Defining "f" "n" as the "n"-th iterate of "f", where "n" is a non-negative integer, by: formula_1 and formula_2 where id"X" is the identity function on "X" and ("f" "g")("x") "f" ("g"("x")) denotes function composition. This notation has been traced to and John Frederick William Herschel in 1813. Herschel credited Hans Heinrich Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered. Because the notation "f" "n" may refer to both iteration (composition) of the function "f" or exponentiation of the function "f" (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing "f"∘"n"("x") for the n-th iterate of the function "f"("x"), as in, for example, "f"∘3("x") meaning "f"("f"("f"("x"))). For the same purpose, "f" ["n"]("x") was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested instead. Abelian property and iteration sequences. In general, the following identity holds for all non-negative integers m and n, formula_3 This is structurally identical to the property of exponentiation that "a""m""a""n" = "a""m" + "n". In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, "T""m"("T""n"("x")) = "T""m n"("x"), since "T""n"("x") = cos("n" arccos("x")). The relation ("f" "m")"n"("x") = ("f" "n")"m"("x") = "f" "mn"("x") also holds, analogous to the property of exponentiation that ("a""m")"n" = ("a""n")"m" = "a""mn". The sequence of functions "f" "n" is called a Picard sequence, named after Charles Émile Picard. For a given x in X, the sequence of values "f""n"("x") is called the orbit of x. If "f" "n" ("x") = "f" "n"+"m" ("x") for some integer m &gt; 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit. Fixed points. If " "x" = f"("x") for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix("f"). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem. There are several techniques for convergence acceleration of the sequences produced by fixed point iteration. 
For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence. Limiting behaviour. Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point. When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set. The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions. Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started. Invariant measure. If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states. In general, because repeated iteration corresponds to a shift, the transfer operator, and its adjoint, the Koopman operator can both be interpreted as shift operators action on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos. Fractional iterates and flows, and negative iterates. The notion "f"1/"n" must be used with care when the equation "g""n"("x") = "f"("x") has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for "n" = 2 and "f"("x") = 4"x" − 6, both "g"("x") = 6 − 2"x" and "g"("x") = 2"x" − 2 are solutions; so the expression "f" 1/2("x") does not denote a unique function, just as numbers have multiple algebraic roots. A trivial root of "f" can always be obtained if "f"'s domain can be extended sufficiently, cf. picture. The roots chosen are normally the ones belonging to the orbit under study. Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that "g"("g"("x")) = "f"("x"). This function "g"("x") can be written using the index notation as "f" 1/2("x") . Similarly, "f" 1/3("x") is the function defined such that "f"1/3("f"1/3("f"1/3("x"))) = "f"("x"), while "f"2/3("x") may be defined as equal to "f"1/3("f"1/3("x")), and so forth, all based on the principle, mentioned earlier, that "f" "m" ○ "f" "n" = "f" "m" + "n". This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit. In such cases, one refers to the system as a flow (cf. section on conjugacy below.) If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, "f" −1("x") is the normal inverse of f, while "f" −2("x") is the inverse composed with itself, i.e. "f" −2("x") = "f" −1("f" −1("x")). 
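In code, non-negative integer iterates are simply repeated composition. The sketch below iterates an arbitrary function n times, checks that both functional square roots g(x) = 2x − 2 and g(x) = 6 − 2x from the example above satisfy g(g(x)) = f(x) for f(x) = 4x − 6, and confirms the abelian property f^m ∘ f^n = f^(m+n) at a few sample points. It is a plain illustration of integer iteration, not an implementation of fractional iteration.

def iterate(f, n):
    """Return the n-th iterate of f, for a non-negative integer n."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f = lambda x: 4 * x - 6
g = lambda x: 2 * x - 2   # one functional square root of f
h = lambda x: 6 - 2 * x   # the other functional square root of f

for x in (0.0, 1.0, 2.5):
    assert iterate(g, 2)(x) == f(x)   # g(g(x)) = f(x)
    assert iterate(h, 2)(x) == f(x)   # h(h(x)) = f(x)
    assert iterate(f, 3)(x) == iterate(f, 1)(iterate(f, 2)(x))   # f^3 = f^1 o f^2
print("all checks passed")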
Fractional negative iterates are defined analogously to fractional positive ones; for example, "f" −1/2("x") is defined such that "f" −1/2("f" −1/2("x")) = "f" −1("x"), or, equivalently, such that "f" −1/2("f" 1/2("x")) = "f" 0("x") = "x". Some formulas for fractional iteration. One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows. This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy. Example 1. For example, setting "f"("x") "Cx" + "D" gives the fixed point "a" "D"/(1 − "C"), so the above formula terminates to just formula_9 which is trivial to check. Example 2. Find the value of formula_10 where this is done "n" times (and possibly the interpolated values when "n" is not an integer). We have "f"("x") = √2"x". A fixed point is "a" = "f"(2) = 2. So set "x" = 1 and "f" "n" (1) expanded around the fixed point value of 2 is then an infinite series, formula_11 which, taking just the first three terms, is correct to the first decimal place when "n" is positive. Also see Tetration: "f" "n"(1) = "n"√2. Using the other fixed point "a" = "f"(4) 4 causes the series to diverge. For "n" = −1, the series computes the inverse function . Example 3. With the function "f"("x") "x""b", expand around the fixed point 1 to get the series formula_12 which is simply the Taylor series of "x"("b""n" ) expanded around 1. Conjugacy. If f and g are two iterated functions, and there exists a homeomorphism h such that "g" "h"−1 ○ "f" ○ "h" , then f and g are said to be topologically conjugate. Clearly, topological conjugacy is preserved under iteration, as "g""n"   "h"−1  ○ "f" "n" ○ "h". Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking "f"("x") "x" + 1, one has the iteration of "g"("x") "h"−1("h"("x") + 1) as "g""n"("x") "h"−1("h"("x") + "n"),   for any function h. Making the substitution "x" "h"−1("y") "ϕ"("y") yields "g"("ϕ"("y")) "ϕ"("y"+1),   a form known as the Abel equation. Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve Schröder's equation for a function Ψ, which makes "f"("x") locally conjugate to a mere dilation, "g"("x") "f" '(0) "x", that is "f"("x") Ψ−1("f" '(0) Ψ("x")). Thus, its iteration orbit, or flow, under suitable provisions (e.g., "f" '(0) ≠ 1), amounts to the conjugate of the orbit of the monomial, Ψ−1("f" '(0)"n" Ψ("x")), where n in this expression serves as a plain exponent: "functional iteration has been reduced to multiplication!" Here, however, the exponent n no longer needs be integer or positive, and is a continuous "time" of evolution for the full orbit: the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group. This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic. Markov chains. If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain. Examples. There are many chaotic maps. 
Well-known iterated functions include the Mandelbrot set and iterated function systems. Ernst Schröder, in 1870, worked out special cases of the logistic map, such as the chaotic case "f"("x") = 4"x"(1 − "x"), so that Ψ("x") = arcsin(√"x")2, hence "f" "n"("x") = sin(2"n" arcsin(√"x"))2. A nonchaotic case Schröder also illustrated with his method, "f"("x") = 2"x"(1 − "x"), yielded Ψ("x") = − ln(1 − 2"x"), and hence "f""n"("x") = −((1 − 2"x")2"n" − 1). If "f" is the action of a group element on a set, then the iterated function corresponds to a free group. Most functions do not have explicit general closed-form expressions for the "n"-th iterate. The table below lists some that do. Note that all these expressions are valid even for non-integer and negative "n", as well as non-negative integer "n". Note: these two special cases of "ax"2 + "bx" + "c" are the only cases that have a closed-form solution. Choosing "b" = 2 = –"a" and "b" = 4 = –"a", respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table. Some of these examples are related among themselves by simple conjugacies. Means of study. Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators. In computer science. In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs. Definitions in terms of iterated functions. Two important functionals can be defined in terms of iterated functions. These are summation: formula_13 and the equivalent product: formula_14 Functional derivative. The functional derivative of an iterated function is given by the recursive formula: formula_15 Lie's data transport equation. Iterated functions crop up in the series expansion of combined functions, such as "g"("f"("x")). Given the iteration velocity, or beta function (physics), formula_16 for the nth iterate of the function f, we have formula_17 For example, for rigid advection, if "f"("x") "x" + "t", then "v"("x") "t". Consequently, "g"("x" + "t") exp("t" ∂/∂"x") "g"("x"), action by a plain shift operator. Conversely, one may specify "f"("x") given an arbitrary "v"("x"), through the generic Abel equation discussed above, formula_18 where formula_19 This is evident by noting that formula_20 For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group, formula_21 The initial flow velocity v suffices to determine the entire flow, given this exponential realization which automatically provides the general solution to the "translation functional equation", formula_22 See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
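The closed form quoted above for the chaotic logistic case can be checked numerically: the snippet below compares direct iteration of f(x) = 4x(1 − x) with Schröder's formula sin(2^n arcsin(√x))² for a small n. (For large n the comparison degrades, since the map is chaotic and floating-point error is amplified by each iteration.)

import math

def f(x):
    """The chaotic logistic case f(x) = 4x(1 - x)."""
    return 4 * x * (1 - x)

def f_n_closed(x, n):
    """Schroeder's closed form for the n-th iterate: sin(2^n arcsin(sqrt(x)))^2."""
    return math.sin(2 ** n * math.asin(math.sqrt(x))) ** 2

x, n = 0.3, 5
direct = x
for _ in range(n):
    direct = f(direct)
print(direct, f_n_closed(x, n))   # the two values agree up to floating-point error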
[ { "math_id": 0, "text": "L = F(K), \\ M = F \\circ F (K) = F^2(K)." }, { "math_id": 1, "text": "f^0 ~ \\stackrel{\\mathrm{def}}{=} ~ \\operatorname{id}_X" }, { "math_id": 2, "text": "f^{n+1} ~ \\stackrel{\\mathrm{def}}{=} ~ f \\circ f^{n}," }, { "math_id": 3, "text": "f^m \\circ f^n = f^n \\circ f^m = f^{m+n}~." }, { "math_id": 4, "text": "\nf^n(x) = f^n(a) + (x-a)\\left.\\frac{d}{dx}f^n(x)\\right|_{x=a} + \\frac{(x-a)^2}2\\left.\\frac{d^2}{dx^2}f^n(x)\\right|_{x=a} +\\cdots\n" }, { "math_id": 5, "text": "\nf^n(x) = f^n(a) + (x-a) f'(a)f'(f(a))f'(f^2(a))\\cdots f'(f^{n-1}(a)) + \\cdots\n" }, { "math_id": 6, "text": "\nf^n(x) = a + (x-a) f'(a)^n + \\frac{(x-a)^2}2(f''(a)f'(a)^{n-1})\\left(1+f'(a)+\\cdots+f'(a)^{n-1} \\right)+\\cdots\n" }, { "math_id": 7, "text": "\nf^n(x) = a + (x-a) f'(a)^n + \\frac{(x-a)^2}2(f''(a)f'(a)^{n-1})\\frac{f'(a)^n-1}{f'(a)-1}+\\cdots\n" }, { "math_id": 8, "text": "\nf^n(x) = x + \\frac{(x-a)^2}2(n f''(a))+ \\frac{(x-a)^3}6\\left(\\frac{3}{2}n(n-1) f''(a)^2 + n f'''(a)\\right)+\\cdots\n" }, { "math_id": 9, "text": "\nf^n(x)=\\frac{D}{1-C} + \\left(x-\\frac{D}{1-C}\\right)C^n=C^nx+\\frac{1-C^n}{1-C}D ~,\n" }, { "math_id": 10, "text": "\\sqrt{2}^{ \\sqrt{2}^{\\sqrt{2}^{\\cdots}} }" }, { "math_id": 11, "text": "\n\\sqrt{2}^{ \\sqrt{2}^{\\sqrt{2}^{\\cdots}} } = f^n(1) = 2 - (\\ln 2)^n + \\frac{(\\ln 2)^{n+1}((\\ln 2)^n-1)}{4(\\ln 2-1)} - \\cdots\n" }, { "math_id": 12, "text": "\nf^n(x) = 1 + b^n(x-1) + \\frac{1}2b^{n}(b^n-1)(x-1)^2 + \\frac{1}{3!}b^n (b^n-1)(b^n-2)(x-1)^3 + \\cdots ~,\n" }, { "math_id": 13, "text": "\n\\left\\{b+1,\\sum_{i=a}^b g(i)\\right\\} \\equiv \\left( \\{i,x\\} \\rightarrow \\{ i+1 ,x+g(i) \\}\\right)^{b-a+1} \\{a,0\\}\n" }, { "math_id": 14, "text": "\n\\left\\{b+1,\\prod_{i=a}^b g(i)\\right\\} \\equiv \\left( \\{i,x\\} \\rightarrow \\{ i+1 ,x g(i) \\}\\right)^{b-a+1} \\{a,1\\}\n" }, { "math_id": 15, "text": "\\frac{ \\delta f^N(x)}{\\delta f(y)} = f'( f^{N-1}(x) ) \\frac{ \\delta f^{N-1}(x)}{\\delta f(y)} + \\delta( f^{N-1}(x) - y ) " }, { "math_id": 16, "text": "v(x) = \\left. \\frac{\\partial f^n(x)}{\\partial n} \\right|_{n=0}" }, { "math_id": 17, "text": "\ng(f(x)) = \\exp\\left[ v(x) \\frac{\\partial}{\\partial x} \\right] g(x).\n" }, { "math_id": 18, "text": "\nf(x) = h^{-1}(h(x)+1) ,\n" }, { "math_id": 19, "text": "\nh(x) = \\int \\frac{1}{v(x)} \\, dx .\n" }, { "math_id": 20, "text": "f^n(x)=h^{-1}(h(x)+n)~." }, { "math_id": 21, "text": "e^{t~\\frac{\\partial ~~}{\\partial h(x)}} g(x)= g(h^{-1}(h(x )+t))= g(f_t(x))." }, { "math_id": 22, "text": "f_t(f_\\tau (x))=f_{t+\\tau} (x) ~." } ]
https://en.wikipedia.org/wiki?curid=1448702
1448784
Ordinal utility
Preference ranking In economics, an ordinal utility function is a function representing the preferences of an agent on an ordinal scale. Ordinal utility theory claims that it is only meaningful to ask which option is better than the other, but it is meaningless to ask "how much" better it is or how good it is. All of the theory of consumer decision-making under conditions of certainty can be, and typically is, expressed in terms of ordinal utility. For example, suppose George tells us that "I prefer A to B and B to C". George's preferences can be represented by a function "u" such that: formula_0 But critics of cardinal utility claim the only meaningful message of this function is the order formula_1; the actual numbers are meaningless. Hence, George's preferences can also be represented by the following function "v": formula_2 The functions "u" and "v" are ordinally equivalent – they represent George's preferences equally well. Ordinal utility contrasts with cardinal utility theory: the latter assumes that the differences between preferences are also important. In "u" the difference between A and B is much smaller than between B and C, while in "v" the opposite is true. Hence, "u" and "v" are "not" cardinally equivalent. The ordinal utility concept was first introduced by Pareto in 1906. Notation. Suppose the set of all states of the world is formula_3 and an agent has a preference relation on formula_3. It is common to mark the weak preference relation by formula_4, so that formula_5 reads "the agent wants B at least as much as A". The symbol formula_6 is used as a shorthand to the indifference relation: formula_7, which reads "The agent is indifferent between B and A". The symbol formula_8 is used as a shorthand to the strong preference relation: formula_9 if: formula_10 Related concepts. Indifference curve mappings. Instead of defining a numeric function, an agent's preference relation can be represented graphically by indifference curves. This is especially useful when there are two kinds of goods, "x" and "y". Then, each indifference curve shows a set of points formula_11 such that, if formula_12 and formula_13 are on the same curve, then formula_14. An example indifference curve is shown below: Each indifference curve is a set of points, each representing a combination of quantities of two goods or services, all of which combinations the consumer is equally satisfied with. The further a curve is from the origin, the greater is the level of utility. The slope of the curve (the negative of the marginal rate of substitution of X for Y) at any point shows the rate at which the individual is willing to trade off good X against good Y maintaining the same level of utility. The curve is convex to the origin as shown assuming the consumer has a diminishing marginal rate of substitution. It can be shown that consumer analysis with indifference curves (an ordinal approach) gives the same results as that based on cardinal utility theory — i.e., consumers will consume at the point where the marginal rate of substitution between any two goods equals the ratio of the prices of those goods (the equi-marginal principle). Revealed preference. Revealed preference theory addresses the problem of how to observe ordinal preference relations in the real world. The challenge of revealed preference theory lies in part in determining what goods bundles were foregone, on the basis of them being less liked, when individuals are observed choosing particular bundles of goods. 
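As a small code illustration of ordinal equivalence, the sketch below checks that two utility functions related by an increasing monotone transformation rank a set of bundles identically. The numeric values are arbitrary, chosen only to be consistent with the A over B over C ordering of George's example in the introduction.

import math

def ranking(options, u):
    """Sort options from most to least preferred under utility function u."""
    return sorted(options, key=u, reverse=True)

options = ['A', 'B', 'C']
u = {'A': 9, 'B': 8, 'C': 1}.get    # one ordinal utility function
v = lambda x: math.exp(u(x))        # an increasing monotone transform of u

assert ranking(options, u) == ranking(options, v)   # the same preference order
print(ranking(options, u))                          # ['A', 'B', 'C']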
Necessary conditions for existence of ordinal utility function. Some conditions on formula_4 are necessary to guarantee the existence of a representing function: When these conditions are met and the set formula_3 is finite, it is easy to create a function formula_22 which represents formula_8 by just assigning an appropriate number to each element of formula_3, as exemplified in the opening paragraph. The same is true when X is countably infinite. Moreover, it is possible to inductively construct a representing utility function whose values are in the range formula_23. When formula_3 is infinite, these conditions are insufficient. For example, lexicographic preferences are transitive and complete, but they cannot be represented by any utility function. The additional condition required is continuity. Continuity. A preference relation is called "continuous" if, whenever B is preferred to A, small deviations from B or A will not reverse the ordering between them. Formally, a preference relation on a set X is called continuous if it satisfies one of the following equivalent conditions: If a preference relation is represented by a continuous utility function, then it is clearly continuous. By the theorems of Debreu (1954), the opposite is also true: Every continuous complete preference relation can be represented by a continuous ordinal utility function. Note that the lexicographic preferences are not continuous. For example, formula_36, but in every ball around (5,1) there are points with formula_37 and these points are inferior to formula_38. This is in accordance with the fact, stated above, that these preferences cannot be represented by a utility function. Uniqueness. For every utility function "v", there is a unique preference relation represented by "v". However, the opposite is not true: a preference relation may be represented by many different utility functions. The same preferences could be expressed as "any" utility function that is a monotonically increasing transformation of "v". E.g., if formula_39 where formula_40 is "any" monotonically increasing function, then the functions "v" and "v" give rise to identical indifference curve mappings. This equivalence is succinctly described in the following way: An ordinal utility function is "unique up to increasing monotone transformation". In contrast, a cardinal utility function is unique up to increasing affine transformation. Every affine transformation is monotone; hence, if two functions are cardinally equivalent they are also ordinally equivalent, but not vice versa. Monotonicity. Suppose, from now on, that the set formula_3 is the set of all non-negative real two-dimensional vectors. So an element of formula_3 is a pair formula_11 that represents the amounts consumed from two products, e.g., apples and bananas. Then under certain circumstances a preference relation formula_4 is represented by a utility function formula_41. Suppose the preference relation is "monotonically increasing", which means that "more is always better": formula_42 formula_43 Then, both partial derivatives, if they exist, of "v" are positive. In short: "If a utility function represents a monotonically increasing preference relation, then the utility function is monotonically increasing." Marginal rate of substitution. Suppose a person has a bundle formula_44 and claims that he is indifferent between this bundle and the bundle formula_45. This means that he is willing to give formula_46 units of x to get formula_47 units of y. 
If this ratio holds in the limit formula_48, we say that formula_49 is the "marginal rate of substitution" (MRS) between "x" and "y" at the point formula_44. This definition of the MRS is based only on the ordinal preference relation – it does not depend on a numeric utility function. If the preference relation is represented by a utility function and the function is differentiable, then the MRS can be calculated from the derivatives of that function: formula_50 For example, if the preference relation is represented by formula_51 then formula_52. The MRS is the same for the function formula_53. This is not a coincidence, as these two functions represent the same preference relation – each one is an increasing monotone transformation of the other. In general, the MRS may be different at different points formula_44. For example, it is possible that at formula_54 the MRS is low because the person has a lot of "x" and only one "y", but at formula_55 or formula_56 the MRS is higher. Some special cases are described below. Linearity. When the MRS of a certain preference relation does not depend on the bundle, i.e., the MRS is the same for all formula_44, the indifference curves are linear and of the form: formula_57 and the preference relation can be represented by a linear function: formula_58 (Of course, the same relation can be represented by many other non-linear functions, such as formula_59 or formula_60, but the linear function is the simplest.) Quasilinearity. When the MRS depends on formula_61 but not on formula_62, the preference relation can be represented by a quasilinear utility function, of the form formula_63 where formula_64 is a certain monotonically increasing function. Because the MRS is a function formula_65, a possible function formula_64 can be calculated as an integral of formula_65: formula_66 In this case, all the indifference curves are parallel – they are horizontal translations of each other. Additivity with two goods. A more general type of utility function is an additive function: formula_67 There are several ways to check whether given preferences are representable by an additive utility function. Double cancellation property. If the preferences are additive, then a simple arithmetic calculation shows that formula_68 and formula_69 imply formula_70 so this "double-cancellation" property is a necessary condition for additivity. Debreu (1960) showed that this property is also sufficient: i.e., if a preference relation satisfies the double-cancellation property then it can be represented by an additive utility function. Corresponding tradeoffs property. If the preferences are represented by an additive function, then a simple arithmetic calculation shows that formula_71 so this "corresponding tradeoffs" property is a necessary condition for additivity. This condition is also sufficient. Additivity with three or more goods. When there are three or more commodities, the condition for the additivity of the utility function is surprisingly "simpler" than for two commodities. This is an outcome of Theorem 3 of Debreu (1960). The condition required for additivity is preferential independence. A subset A of commodities is said to be "preferentially independent" of a subset B of commodities, if the preference relation in subset A, given constant values for subset B, is independent of these constant values. For example, suppose there are three commodities: "x", "y" and "z". The subset {"x","y"} is preferentially-independent of the subset {"z"}, if for all formula_72: formula_73. In this case, we can simply say that: formula_74 for constant "z". Preferential independence makes sense in the case of independent goods. For example, the preferences between bundles of apples and bananas are probably independent of the number of shoes and socks that an agent has, and vice versa.
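A minimal sketch in Julia can illustrate preferential independence (the component functions below are invented for the example): for an additive utility, the comparison between two ("x","y") bundles never depends on the level of "z".

# Sketch: for an additive utility v(x,y,z) = vx(x) + vy(y) + vz(z),
# the subset {x,y} is preferentially independent of {z}: the z terms
# cancel out of any comparison between two (x,y) bundles.
vx(x) = log(1 + x); vy(y) = sqrt(y); vz(z) = 2z        # illustrative components
v(x, y, z) = vx(x) + vy(y) + vz(z)

prefers(b1, b2, z) = v(b1..., z) >= v(b2..., z)
baseline = prefers((1.0, 4.0), (2.0, 1.0), 0.0)
for z in (0.0, 1.0, 10.0)
    # The comparison gives the same answer no matter which z is held fixed.
    @assert prefers((1.0, 4.0), (2.0, 1.0), z) == baseline
end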
The mathematical foundations of the most common types of utility functions (quadratic and additive), laid down by Gérard Debreu, enabled Andranik Tangian to develop methods for their construction from purely ordinal data. In particular, additive and quadratic utility functions in formula_102 variables can be constructed from interviews of decision makers, where questions are aimed at tracing a total of formula_102 two-dimensional indifference curves in formula_103 coordinate planes, without referring to cardinal utility estimates. Comparison between ordinal and cardinal utility functions. The following table compares the two types of utility functions common in economics: See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "u(A)=9, u(B)=8, u(C)=1" }, { "math_id": 1, "text": "u(A)>u(B)>u(C)" }, { "math_id": 2, "text": "v(A)=9, v(B)=2, v(C)=1" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "\\preceq" }, { "math_id": 5, "text": "A \\preceq B" }, { "math_id": 6, "text": "\\sim" }, { "math_id": 7, "text": "A\\sim B \\iff (A\\preceq B \\land B\\preceq A)" }, { "math_id": 8, "text": "\\prec" }, { "math_id": 9, "text": "A\\prec B \\iff (A\\preceq B \\land B\\not\\preceq A)" }, { "math_id": 10, "text": "A \\preceq B \\iff u(A) \\leq u(B)" }, { "math_id": 11, "text": "(x,y)" }, { "math_id": 12, "text": "(x_1,y_1)" }, { "math_id": 13, "text": "(x_2,y_2)" }, { "math_id": 14, "text": "(x_1,y_1) \\sim (x_2,y_2)" }, { "math_id": 15, "text": "B \\preceq C" }, { "math_id": 16, "text": "A \\preceq C" }, { "math_id": 17, "text": "A,B\\in X" }, { "math_id": 18, "text": "A\\preceq B" }, { "math_id": 19, "text": "B\\preceq A" }, { "math_id": 20, "text": "A\\in X" }, { "math_id": 21, "text": "A \\preceq A" }, { "math_id": 22, "text": "u" }, { "math_id": 23, "text": "(-1,1)" }, { "math_id": 24, "text": "\\{(A,B)|A\\preceq B\\}" }, { "math_id": 25, "text": "X\\times X" }, { "math_id": 26, "text": "(A_i,B_i)" }, { "math_id": 27, "text": "A_i\\preceq B_i" }, { "math_id": 28, "text": "A_i \\to A" }, { "math_id": 29, "text": "B_i \\to B" }, { "math_id": 30, "text": "A\\prec B" }, { "math_id": 31, "text": "A" }, { "math_id": 32, "text": "B" }, { "math_id": 33, "text": "a" }, { "math_id": 34, "text": "b" }, { "math_id": 35, "text": "a\\prec b" }, { "math_id": 36, "text": "(5,0)\\prec (5,1)" }, { "math_id": 37, "text": "x<5" }, { "math_id": 38, "text": "(5,0)" }, { "math_id": 39, "text": "v(A) \\equiv f(v(A))" }, { "math_id": 40, "text": "f: \\mathbb{R}\\to \\mathbb{R}" }, { "math_id": 41, "text": "v(x,y)" }, { "math_id": 42, "text": "x<x' \\implies (x,y)\\prec(x',y)" }, { "math_id": 43, "text": "y<y' \\implies (x,y')\\prec(x,y')" }, { "math_id": 44, "text": "(x_0,y_0)" }, { "math_id": 45, "text": "(x_0-\\lambda\\cdot\\delta,y_0+\\delta)" }, { "math_id": 46, "text": "\\lambda\\cdot\\delta" }, { "math_id": 47, "text": "\\delta" }, { "math_id": 48, "text": "\\delta\\to 0" }, { "math_id": 49, "text": "\\lambda" }, { "math_id": 50, "text": "MRS = \\frac{v'_x}{v'_y}." }, { "math_id": 51, "text": "v(x,y)=x^a\\cdot y^b" }, { "math_id": 52, "text": "MRS = \\frac{a\\cdot x^{a-1}\\cdot y^b}{b\\cdot y^{b-1}\\cdot x^a}=\\frac{ay}{bx}" }, { "math_id": 53, "text": "v(x,y)=a\\cdot \\log{x} + b\\cdot \\log{y}" }, { "math_id": 54, "text": "(9,1)" }, { "math_id": 55, "text": "(9,9)" }, { "math_id": 56, "text": "(1,1)" }, { "math_id": 57, "text": "x+\\lambda y = \\text{const}," }, { "math_id": 58, "text": "v(x,y)=x+\\lambda y." 
}, { "math_id": 59, "text": "\\sqrt{x+\\lambda y}" }, { "math_id": 60, "text": "(x+\\lambda y)^2" }, { "math_id": 61, "text": "y_0" }, { "math_id": 62, "text": "x_0" }, { "math_id": 63, "text": "v(x,y)=x+\\gamma v_Y(y)" }, { "math_id": 64, "text": "v_Y" }, { "math_id": 65, "text": "\\lambda(y)" }, { "math_id": 66, "text": "v_Y(y)=\\int_{0}^{y}{\\lambda(y') dy'}" }, { "math_id": 67, "text": "v(x,y)=v_X(x)+v_Y(y)" }, { "math_id": 68, "text": "(x_1,y_1)\\succeq (x_2,y_2)" }, { "math_id": 69, "text": "(x_2,y_3)\\succeq(x_3,y_1) " }, { "math_id": 70, "text": "(x_1,y_3)\\succeq(x_3,y_2)" }, { "math_id": 71, "text": "MRS(x_2,y_2)=\\frac{MRS(x_1,y_2)\\cdot MRS(x_2,y_1)}{MRS(x_1,y_1)}" }, { "math_id": 72, "text": "x_i,y_i,z,z'" }, { "math_id": 73, "text": "(x_1,y_1, z)\\preceq (x_2,y_2, z) \\iff (x_1,y_1, z')\\preceq (x_2,y_2, z')" }, { "math_id": 74, "text": "(x_1,y_1)\\preceq (x_2,y_2)" }, { "math_id": 75, "text": "v_x, v_y, v_z" }, { "math_id": 76, "text": "x_0,y_0,z_0" }, { "math_id": 77, "text": "v_x(x_0)=v_y(y_0)=v_z(z_0)=0" }, { "math_id": 78, "text": "x_1>x_0" }, { "math_id": 79, "text": "(x_1,y_0,z_0)\\succ(x_0,y_0,z_0)" }, { "math_id": 80, "text": "v_x(x_1)=1" }, { "math_id": 81, "text": "y_1" }, { "math_id": 82, "text": "z_1" }, { "math_id": 83, "text": "(x_1,y_0,z_0)\\sim(x_0,y_1,z_0)\\sim(x_0,y_0,z_1)" }, { "math_id": 84, "text": "v_y(y_1)=v_z(z_1)=1" }, { "math_id": 85, "text": "(x_1,y_0)" }, { "math_id": 86, "text": "(x_0,y_1)" }, { "math_id": 87, "text": "(y_1,z_0)" }, { "math_id": 88, "text": "(y_0,z_1)" }, { "math_id": 89, "text": "(z_1,x_0)" }, { "math_id": 90, "text": "(z_0,x_1)" }, { "math_id": 91, "text": "(x_1,y_0,z_1)\\sim(x_0,y_1,z_1)\\sim(x_1,y_1,z_0)." }, { "math_id": 92, "text": "x_2, y_2, z_2" }, { "math_id": 93, "text": "(x_2,y_0,z_0)\\sim(x_0,y_2,z_0)\\sim(x_0,y_0,z_2)\\sim(x_1,y_1,z_0)" }, { "math_id": 94, "text": "v_x(x_2)=v_y(y_2)=v_z(z_2)=2." }, { "math_id": 95, "text": "(x_2,y_0)" }, { "math_id": 96, "text": "(x_2,y_0,z_1)\\sim(x_1,y_1,z_1)" }, { "math_id": 97, "text": "m" }, { "math_id": 98, "text": "j=1,...,m" }, { "math_id": 99, "text": "j=1,...,m-1" }, { "math_id": 100, "text": "\\{x_j,x_{j+1}\\}" }, { "math_id": 101, "text": "m-2" }, { "math_id": 102, "text": "n" }, { "math_id": 103, "text": "n - 1" } ]
https://en.wikipedia.org/wiki?curid=1448784
1448821
Conjugate gradient method
Mathematical optimization algorithm In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems. The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4 and extensively researched it. The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems. Description of the problem addressed by conjugate gradients. Suppose we want to solve the system of linear equations formula_0 for the vector formula_1, where the known formula_2 matrix formula_3 is symmetric (i.e., Aᵀ = A), positive-definite (i.e. xᵀAx > 0 for all non-zero vectors formula_1 in Rⁿ), and real, and formula_4 is known as well. We denote the unique solution of this system by formula_5. Derivation as a direct method. The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of the Arnoldi/Lanczos iteration for eigenvalue problems. Despite differences in their approaches, these derivations share a common topic: proving the orthogonality of the residuals and conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method. We say that two non-zero vectors u and v are conjugate (with respect to formula_3) if formula_6 Since formula_3 is symmetric and positive-definite, the left-hand side defines an inner product formula_7 Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if formula_8 is conjugate to formula_9, then formula_9 is conjugate to formula_8. Suppose that formula_10 is a set of formula_11 mutually conjugate vectors with respect to formula_3, i.e. formula_12 for all formula_13. Then formula_14 forms a basis for formula_15, and we may express the solution formula_5 of formula_16 in this basis: formula_17 Left-multiplying the problem formula_16 with the vector formula_18 yields formula_19 and so formula_20 This gives the following method for solving the equation Ax = b: find a sequence of formula_11 conjugate directions, and then compute the coefficients formula_21. As an iterative method. If we choose the conjugate vectors formula_22 carefully, then we may not need all of them to obtain a good approximation to the solution formula_5. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems where "n" is so large that the direct method would take too much time. We denote the initial guess for x∗ by x0 (we can assume without loss of generality that x0 = 0; otherwise, consider the system Az = b − Ax0 instead). Starting with x0 we search for the solution, and in each iteration we need a metric to tell us whether we are closer to the solution x∗ (that is unknown to us).
This metric comes from the fact that the solution x∗ is also the unique minimizer of the following quadratic function formula_23 The existence of a unique minimizer is apparent as its Hessian matrix of second derivatives is symmetric positive-definite formula_24 and that the minimizer (set D"f"(x) = 0) solves the initial problem follows from its first derivative formula_25 This suggests taking the first basis vector p0 to be the negative of the gradient of "f" at x = x0. The gradient of "f" equals Ax − b. Starting with an initial guess x0, this means we take p0 = b − Ax0. The other vectors in the basis will be conjugate to the gradient, hence the name "conjugate gradient method". Note that p0 is also the residual provided by this initial step of the algorithm. Let r"k" be the residual at the "k"th step: formula_26 As observed above, formula_27 is the negative gradient of formula_28 at formula_29, so the gradient descent method would require moving in the direction r"k". Here, however, we insist that the directions formula_22 must be conjugate to each other. A practical way to enforce this is by requiring that the next search direction be built out of the current residual and all previous search directions. The conjugation constraint is an orthonormal-type constraint and hence the algorithm can be viewed as an example of Gram–Schmidt orthonormalization. This gives the following expression: formula_30 Following this direction, the next optimal location is given by formula_31 with formula_32 where the last equality follows from the definition of formula_27. The expression for formula_33 can be derived if one substitutes the expression for x"k"+1 into "f" and minimizes it with respect to formula_33: formula_34 The resulting algorithm. The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous search directions and residual vectors, as well as many matrix–vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows that formula_35 is orthogonal to formula_36, i.e. formula_37, for i ≠ j. And formula_38 is formula_3-orthogonal to formula_39, i.e. formula_40, for formula_13. This can be regarded as follows: as the algorithm progresses, formula_38 and formula_35 span the same Krylov subspace, where formula_35 form an orthogonal basis with respect to the standard inner product, and formula_38 form an orthogonal basis with respect to the inner product induced by formula_3. Therefore, formula_29 can be regarded as the projection of formula_1 on the Krylov subspace. That is, if the CG method starts with formula_41, then formula_42 The algorithm is detailed below for solving formula_43 where formula_3 is a real, symmetric, positive-definite matrix. The input vector formula_44 can be an approximate initial solution or 0. It is a different formulation of the exact procedure described above. formula_45 This is the most commonly used algorithm. The same formula for βk is also used in the Fletcher–Reeves nonlinear conjugate gradient method. Restarts. We note that formula_46 is computed by the gradient descent method applied to formula_47. Setting formula_48 would similarly make formula_49 computed by the gradient descent method from formula_50; i.e., it can be used as a simple implementation of a restart of the conjugate gradient iterations.
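A minimal sketch of such a restarted iteration, written in Julia to match the example code later in the article (the restart period `restart_every` is an illustrative tuning knob, not part of the classical method):

# Sketch: conjugate gradient with a periodic restart. Setting beta = 0
# discards the conjugacy history, so the next step is a plain gradient
# descent step; `restart_every` is an illustrative parameter.
function cg_restarted(A, b, x; restart_every=50, tol=1e-10, maxiter=1000)
    r = b - A * x
    p = copy(r)
    rs_old = r' * r
    for k in 1:maxiter
        Ap = A * p
        alpha = rs_old / (p' * Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r' * r
        sqrt(rs_new) < tol && break
        beta = k % restart_every == 0 ? 0.0 : rs_new / rs_old   # restart here
        p = r + beta * p
        rs_old = rs_new
    end
    return x
end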
Restarts could slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due to round-off error. Explicit residual calculation. The formulas formula_51 and formula_52, which both hold in exact arithmetic, make the formulas formula_53 and formula_54 mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication by formula_3 since the vector formula_55 is already computed to evaluate formula_21. The latter may be more accurate, substituting the explicit calculation formula_54 for the implicit one by the recursion subject to round-off error accumulation, and is thus recommended for an occasional evaluation. A norm of the residual is typically used as a stopping criterion. The norm of the explicit residual formula_54 provides a guaranteed level of accuracy both in exact arithmetic and in the presence of rounding errors, where convergence naturally stagnates. In contrast, the implicit residual formula_53 is known to keep getting smaller in amplitude well below the level of rounding errors and thus cannot be used to determine the stagnation of convergence. Computation of alpha and beta. In the algorithm, αk is chosen such that formula_56 is orthogonal to formula_57. The denominator is simplified from formula_58 since formula_59. The βk is chosen such that formula_60 is conjugate to formula_61. Initially, βk is formula_62 Using formula_63 and, equivalently, formula_64 the numerator of βk is rewritten as formula_65 because formula_56 and formula_57 are orthogonal by design. The denominator is rewritten as formula_66 using the fact that the search directions p"k" are conjugate and again that the residuals are orthogonal. This gives the β in the algorithm after cancelling αk. Example code in Julia (programming language).
"""
    conjugate_gradient!(A, b, x)

Return the solution to `A * x = b` using the conjugate gradient method.
"""
function conjugate_gradient!(
    A::AbstractMatrix, b::AbstractVector, x::AbstractVector; tol=eps(eltype(b))
)
    # Initialize residual vector
    residual = b - A * x
    # Initialize search direction vector (copied so the in-place updates
    # below do not alias the residual vector)
    search_direction = copy(residual)
    # Compute initial residual norm
    norm(x) = sqrt(sum(x.^2))
    old_resid_norm = norm(residual)

    # Iterate until convergence
    while old_resid_norm > tol
        A_search_direction = A * search_direction
        step_size = old_resid_norm^2 / (search_direction' * A_search_direction)
        # Update solution
        @. x = x + step_size * search_direction
        # Update residual
        @. residual = residual - step_size * A_search_direction
        new_resid_norm = norm(residual)
        # Update search direction vector
        @. search_direction = residual + (new_resid_norm / old_resid_norm)^2 * search_direction
        # Update residual norm for next iteration
        old_resid_norm = new_resid_norm
    end
    return x
end
Numerical example. Consider the linear system Ax = b given by formula_67 we will perform two steps of the conjugate gradient method beginning with the initial guess formula_68 in order to find an approximate solution to the system. Solution. For reference, the exact solution is formula_69 Our first step is to calculate the residual vector r0 associated with x0. This residual is computed from the formula r0 = b − Ax0, and in our case is equal to formula_70 Since this is the first iteration, we will use the residual vector r0 as our initial search direction p0; the method of selecting p"k" will change in further iterations.
We now compute the scalar "α"0 using the relationship formula_71 We can now compute x1 using the formula formula_72 This result completes the first iteration, the result being an "improved" approximate solution to the system, x1. We may now move on and compute the next residual vector r1 using the formula formula_73 Our next step in the process is to compute the scalar "β"0 that will eventually be used to determine the next search direction p1. formula_74 Now, using this scalar "β"0, we can compute the next search direction p1 using the relationship formula_75 We now compute the scalar "α"1 using our newly acquired p1 using the same method as that used for "α"0. formula_76 Finally, we find x2 using the same method as that used to find x1. formula_77 The result, x2, is a "better" approximation to the system's solution than x1 and x0. If exact arithmetic were to be used in this example instead of limited-precision arithmetic, then the exact solution would theoretically have been reached after "n" = 2 iterations ("n" being the order of the system). Convergence properties. The conjugate gradient method can theoretically be viewed as a direct method, as in the absence of round-off error it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. In practice, the exact solution is never obtained since the conjugate gradient method is unstable with respect to even small perturbations; e.g., most directions are not in practice conjugate, due to the degenerative nature of generating the Krylov subspaces. As an iterative method, the conjugate gradient method monotonically (in the energy norm) improves approximations formula_50 to the exact solution and may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by the condition number formula_78 of the system matrix formula_79: the larger formula_78 is, the slower the improvement. If formula_78 is large, preconditioning is commonly used to replace the original system formula_80 with formula_81 such that formula_82 is smaller than formula_83; see below. Convergence theorem. Define a subset of polynomials as formula_84 where formula_85 is the set of polynomials of maximal degree formula_86. Let formula_87 be the iterative approximations of the exact solution formula_88, and define the errors as formula_89. Now, the rate of convergence can be approximated as formula_90 where formula_91 denotes the spectrum, and formula_92 denotes the condition number. This shows that formula_93 iterations suffice to reduce the error to formula_94 for any formula_95. Note the important limit when formula_92 tends to formula_96: formula_97 This limit shows a faster convergence rate compared to the iterative methods of Jacobi or Gauss–Seidel, which scale as formula_98. No round-off error is assumed in the convergence theorem, but the convergence bound is commonly valid in practice, as theoretically explained by Anne Greenbaum. Practical convergence. If initialized randomly, the first stage of iterations is often the fastest, as the error is eliminated within the Krylov subspace that initially reflects a smaller effective condition number. The second stage of convergence is typically well defined by the theoretical convergence bound with formula_99, but may be super-linear, depending on a distribution of the spectrum of the matrix formula_79 and the spectral distribution of the error. In the last stage, the smallest attainable accuracy is reached and the convergence stalls or the method may even start diverging. In typical scientific computing applications in double-precision floating-point format for matrices of large sizes, the conjugate gradient method uses a stopping criterion with a tolerance that terminates the iterations during the first or second stage.
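To get a feel for the convergence theorem above, a short Julia sketch (the condition numbers are chosen purely for illustration, and the initial error norm is taken to be 1):

# Sketch: evaluate the bound 2*((sqrt(k)-1)/(sqrt(k)+1))^i for a condition
# number k, together with the iteration count (1/2)*sqrt(k)*log(1/eps)
# that the theorem says suffices for error 2*eps.
bound(kappa, i) = 2 * ((sqrt(kappa) - 1) / (sqrt(kappa) + 1))^i
for kappa in (1e2, 1e4, 1e6)
    iters = ceil(Int, 0.5 * sqrt(kappa) * log(1 / 1e-6))
    # The iteration count grows only like sqrt(kappa).
    println("kappa = $kappa: $iters iterations give bound ", bound(kappa, iters))
end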
The preconditioned conjugate gradient method. In most cases, preconditioning is necessary to ensure fast convergence of the conjugate gradient method. If formula_100 is symmetric positive-definite and formula_101 has a better condition number than formula_3, a preconditioned conjugate gradient method can be used. It takes the following form:
formula_102
formula_103
formula_104
formula_105
repeat
    formula_106
    formula_51
    formula_53
    if r"k"+1 is sufficiently small then exit loop end if
    formula_107
    formula_108
    formula_109
    formula_110
end repeat
The result is x"k"+1
The above formulation is equivalent to applying the regular conjugate gradient method to the preconditioned system formula_111 where formula_112 The Cholesky decomposition of the preconditioner must be used to keep the symmetry (and positive definiteness) of the system. However, this decomposition does not need to be computed, and it is sufficient to know formula_100. It can be shown that formula_113 has the same spectrum as formula_101. The preconditioner matrix M has to be symmetric positive-definite and fixed, i.e., it cannot change from iteration to iteration. If any of these assumptions on the preconditioner is violated, the behavior of the preconditioned conjugate gradient method may become unpredictable. An example of a commonly used preconditioner is the incomplete Cholesky factorization. Using the preconditioner in practice. It is important to keep in mind that we do not want to invert the matrix formula_114 explicitly in order to get formula_100 for use in the process, since inverting formula_114 would take more time/computational resources than solving the conjugate gradient algorithm itself. As an example, let's say that we are using a preconditioner coming from incomplete Cholesky factorization. The resulting matrix is the lower triangular matrix formula_115, and the preconditioner matrix is: formula_116 Then we have to solve: formula_117 formula_118 But: formula_119 Then: formula_120 Let's take an intermediary vector formula_121: formula_122 formula_123 Since formula_124 and formula_115 are known, and formula_115 is lower triangular, solving for formula_121 is easy and computationally cheap by using forward substitution. Then, we substitute formula_121 in the original equation: formula_125 formula_126 Since formula_121 and formula_127 are known, and formula_127 is upper triangular, solving for formula_128 is easy and computationally cheap by using backward substitution. Using this method, there is no need to invert formula_114 or formula_115 explicitly at all, and we still obtain formula_128.
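A small Julia sketch of this two-triangular-solve pattern (the factor below is an arbitrary illustrative lower-triangular matrix, standing in for an incomplete Cholesky factor):

using LinearAlgebra

# Sketch: apply the preconditioner M = L*L' without forming any inverse.
# Solving M*z = r splits into forward substitution (L*a = r) followed by
# backward substitution (L'*z = a).
L = LowerTriangular([2.0 0.0 0.0; 1.0 3.0 0.0; 0.0 1.0 2.0])
r = [1.0, 2.0, 3.0]

a = L \ r                  # forward substitution
z = L' \ a                 # backward substitution

@assert L * (L' * z) ≈ r   # z solves M*z = r without ever inverting M or L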
The flexible preconditioned conjugate gradient method. In numerically challenging applications, sophisticated preconditioners are used, which may lead to variable preconditioning, changing between iterations. Even if the preconditioner is symmetric positive-definite on every iteration, the fact that it may change makes the arguments above invalid, and in practical tests leads to a significant slow-down of the convergence of the algorithm presented above. Using the Polak–Ribière formula formula_129 instead of the Fletcher–Reeves formula formula_108 may dramatically improve the convergence in this case. This version of the preconditioned conjugate gradient method can be called flexible, as it allows for variable preconditioning. The flexible version is also shown to be robust even if the preconditioner is not symmetric positive definite (SPD). The implementation of the flexible version requires storing an extra vector. For a fixed SPD preconditioner, formula_130 so both formulas for βk are equivalent in exact arithmetic, i.e., without the round-off error. The mathematical explanation of the better convergence behavior of the method with the Polak–Ribière formula is that the method is locally optimal in this case; in particular, it does not converge slower than the locally optimal steepest descent method. Vs. the locally optimal steepest descent method. In both the original and the preconditioned conjugate gradient methods one only needs to set formula_131 in order to make them locally optimal, using the line search, steepest descent methods. With this substitution, vectors p are always the same as vectors z, so there is no need to store vectors p. Thus, every iteration of these steepest descent methods is a bit cheaper compared to that for the conjugate gradient methods. However, the latter converge faster, unless a (highly) variable and/or non-SPD preconditioner is used, see above. Conjugate gradient method as optimal feedback controller for double integrator. The conjugate gradient method can also be derived using optimal control theory. In this approach, the conjugate gradient method falls out as an optimal feedback controller, formula_132 for the double integrator system, formula_133 The quantities formula_134 and formula_135 are variable feedback gains. Conjugate gradient on the normal equations. The conjugate gradient method can be applied to an arbitrary "n"-by-"m" matrix by applying it to the normal equations AᵀA and right-hand side vector Aᵀb, since AᵀA is a symmetric positive-semidefinite matrix for any A. The result is conjugate gradient on the normal equations (CGN or CGNR). AᵀAx = Aᵀb As an iterative method, it is not necessary to form AᵀA explicitly in memory but only to perform the matrix–vector and transpose matrix–vector multiplications. Therefore, CGNR is particularly useful when "A" is a sparse matrix since these operations are usually extremely efficient. However, the downside of forming the normal equations is that the condition number κ(AᵀA) is equal to κ²(A) and so the rate of convergence of CGNR may be slow and the quality of the approximate solution may be sensitive to roundoff errors. Finding a good preconditioner is often an important part of using the CGNR method. Several algorithms have been proposed (e.g., CGLS, LSQR). The LSQR algorithm purportedly has the best numerical stability when A is ill-conditioned, i.e., A has a large condition number. Conjugate gradient method for complex Hermitian matrices. The conjugate gradient method with a trivial modification is extendable to solving, given a complex-valued matrix A and vector b, the system of linear equations formula_136 for the complex-valued vector x, where A is Hermitian (i.e., A' = A) and positive-definite, and the symbol ' denotes the conjugate transpose. The trivial modification is simply substituting the conjugate transpose for the real transpose everywhere. Advantages and disadvantages.
The advantages and disadvantages of the conjugate gradient methods are summarized in the lecture notes by Nemirovsky and Ben-Tal (Sec. 7.3). A pathological example. Let formula_137, and define formula_138 Since formula_139 is invertible, there exists a unique solution to formula_140. Solving it by conjugate gradient descent gives us rather bad convergence: formula_141 In words, during the CG process, the error grows exponentially, until it suddenly becomes zero as the unique solution is found. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbf{A}\\mathbf{x} = \\mathbf{b}" }, { "math_id": 1, "text": "\\mathbf{x}" }, { "math_id": 2, "text": "n \\times n" }, { "math_id": 3, "text": "\\mathbf{A}" }, { "math_id": 4, "text": "\\mathbf{b}" }, { "math_id": 5, "text": "\\mathbf{x}_*" }, { "math_id": 6, "text": " \\mathbf{u}^\\mathsf{T} \\mathbf{A} \\mathbf{v} = 0. " }, { "math_id": 7, "text": "\n \\mathbf{u}^\\mathsf{T} \\mathbf{A} \\mathbf{v} =\n \\langle \\mathbf{u}, \\mathbf{v} \\rangle_\\mathbf{A} :=\n \\langle \\mathbf{A} \\mathbf{u}, \\mathbf{v}\\rangle =\n \\langle \\mathbf{u}, \\mathbf{A}^\\mathsf{T} \\mathbf{v}\\rangle =\n \\langle \\mathbf{u}, \\mathbf{A}\\mathbf{v}\\rangle.\n" }, { "math_id": 8, "text": "\\mathbf{u}" }, { "math_id": 9, "text": "\\mathbf{v}" }, { "math_id": 10, "text": "P = \\{ \\mathbf{p}_1, \\dots, \\mathbf{p}_n \\}" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "\\mathbf{p}_i^\\mathsf{T} \\mathbf{A} \\mathbf{p}_j = 0" }, { "math_id": 13, "text": "i \\neq j" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "\\mathbb{R}^n" }, { "math_id": 16, "text": "\\mathbf{Ax} = \\mathbf{b}" }, { "math_id": 17, "text": "\\mathbf{x}_* = \\sum^{n}_{i=1} \\alpha_i \\mathbf{p}_i \\Rightarrow \\mathbf{A} \\mathbf{x}_* = \\sum^{n}_{i=1} \\alpha_i \\mathbf{A} \\mathbf{p}_i." }, { "math_id": 18, "text": "\\mathbf{p}_k^\\mathsf{T}" }, { "math_id": 19, "text": "\n\\mathbf{p}_k^\\mathsf{T} \\mathbf{b} \n= \\mathbf{p}_k^\\mathsf{T} \\mathbf{A} \\mathbf{x}_* \n= \\sum^{n}_{i=1} \\alpha_i \\mathbf{p}_k^\\mathsf{T} \\mathbf{A} \\mathbf{p}_i \n= \\sum^{n}_{i=1} \\alpha_i \\left \\langle \\mathbf{p}_k, \\mathbf{p}_i \\right \\rangle_{\\mathbf{A}} \n= \\alpha_k \\left \\langle \\mathbf{p}_k, \\mathbf{p}_k \\right \\rangle_{\\mathbf{A}} " }, { "math_id": 20, "text": "\\alpha_k = \\frac{\\langle \\mathbf{p}_k, \\mathbf{b} \\rangle}{\\langle \\mathbf{p}_k, \\mathbf{p}_k \\rangle_\\mathbf{A}}." }, { "math_id": 21, "text": "\\alpha_k" }, { "math_id": 22, "text": "\\mathbf{p}_k" }, { "math_id": 23, "text": " \n f(\\mathbf{x}) = \\tfrac12 \\mathbf{x}^\\mathsf{T} \\mathbf{A}\\mathbf{x} - \\mathbf{x}^\\mathsf{T} \\mathbf{b}, \\qquad \\mathbf{x}\\in\\mathbf{R}^n \\,. \n" }, { "math_id": 24, "text": "\n \\mathbf{H}(f(\\mathbf{x})) = \\mathbf{A} \\,,\n" }, { "math_id": 25, "text": "\n \\nabla f(\\mathbf{x}) = \\mathbf{A} \\mathbf{x} - \\mathbf{b} \\,.\n" }, { "math_id": 26, "text": " \\mathbf{r}_k = \\mathbf{b} - \\mathbf{Ax}_k." 
}, { "math_id": 27, "text": "\\mathbf{r}_k" }, { "math_id": 28, "text": "f" }, { "math_id": 29, "text": "\\mathbf{x}_k" }, { "math_id": 30, "text": "\\mathbf{p}_{k} = \\mathbf{r}_{k} - \\sum_{i < k}\\frac{\\mathbf{p}_i^\\mathsf{T} \\mathbf{A} \\mathbf{r}_{k}}{\\mathbf{p}_i^\\mathsf{T}\\mathbf{A} \\mathbf{p}_i} \\mathbf{p}_i" }, { "math_id": 31, "text": " \\mathbf{x}_{k+1} = \\mathbf{x}_k + \\alpha_k \\mathbf{p}_k " }, { "math_id": 32, "text": " \\alpha_{k} = \\frac{\\mathbf{p}_k^\\mathsf{T} (\\mathbf{b} - \\mathbf{Ax}_k )}{\\mathbf{p}_k^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k} = \\frac{\\mathbf{p}_{k}^\\mathsf{T} \\mathbf{r}_{k}}{\\mathbf{p}_{k}^\\mathsf{T} \\mathbf{A} \\mathbf{p}_{k}}, " }, { "math_id": 33, "text": " \\alpha_k " }, { "math_id": 34, "text": "\n\\begin{align}\nf(\\mathbf{x}_{k+1}) &= f(\\mathbf{x}_k + \\alpha_k \\mathbf{p}_k) =: g(\\alpha_k)\n \\\\\ng'(\\alpha_k) &\\overset{!}{=} 0\n \\quad \\Rightarrow \\quad\n \\alpha_{k} = \\frac{\\mathbf{p}_k^\\mathsf{T} (\\mathbf{b} - \\mathbf{Ax}_k)}{\\mathbf{p}_k^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k} \\,.\n\\end{align}\n" }, { "math_id": 35, "text": "\\mathbf{r}_i" }, { "math_id": 36, "text": "\\mathbf{r}_j" }, { "math_id": 37, "text": "\\mathbf{r}_i^\\mathsf{T} \\mathbf{r}_j=0 " }, { "math_id": 38, "text": "\\mathbf{p}_i" }, { "math_id": 39, "text": "\\mathbf{p}_j" }, { "math_id": 40, "text": "\\mathbf{p}_i^\\mathsf{T} \\mathbf{A} \\mathbf{p}_j=0 " }, { "math_id": 41, "text": "\\mathbf{x}_0 = 0" }, { "math_id": 42, "text": "x_k =\n\\mathrm{argmin}_{y \\in \\mathbb{R}^n}\n{\\left\\{(x-y)^{\\top} A(x-y): y \\in \\operatorname{span}\\left\\{b, A b, \\ldots, A^{k-1} b\\right\\}\\right\\}}" }, { "math_id": 43, "text": "\\mathbf{A} \\mathbf{x}= \\mathbf{b}" }, { "math_id": 44, "text": "\\mathbf{x}_0" }, { "math_id": 45, "text": "\\begin{align}\n& \\mathbf{r}_0 := \\mathbf{b} - \\mathbf{A x}_0 \\\\\n& \\hbox{if } \\mathbf{r}_{0} \\text{ is sufficiently small, then return } \\mathbf{x}_{0} \\text{ as the result}\\\\\n& \\mathbf{p}_0 := \\mathbf{r}_0 \\\\\n& k := 0 \\\\\n& \\text{repeat} \\\\\n& \\qquad \\alpha_k := \\frac{\\mathbf{r}_k^\\mathsf{T} \\mathbf{r}_k}{\\mathbf{p}_k^\\mathsf{T} \\mathbf{A p}_k} \\\\\n& \\qquad \\mathbf{x}_{k+1} := \\mathbf{x}_k + \\alpha_k \\mathbf{p}_k \\\\\n& \\qquad \\mathbf{r}_{k+1} := \\mathbf{r}_k - \\alpha_k \\mathbf{A p}_k \\\\\n& \\qquad \\hbox{if } \\mathbf{r}_{k+1} \\text{ is sufficiently small, then exit loop} \\\\\n& \\qquad \\beta_k := \\frac{\\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{r}_{k+1}}{\\mathbf{r}_k^\\mathsf{T} \\mathbf{r}_k} \\\\\n& \\qquad \\mathbf{p}_{k+1} := \\mathbf{r}_{k+1} + \\beta_k \\mathbf{p}_k \\\\\n& \\qquad k := k + 1 \\\\\n& \\text{end repeat} \\\\\n& \\text{return } \\mathbf{x}_{k+1} \\text{ as the result}\n\\end{align}" }, { "math_id": 46, "text": "\\mathbf{x}_{1}" }, { "math_id": 47, "text": "\\mathbf{x}_{0}" }, { "math_id": 48, "text": "\\beta_{k}=0" }, { "math_id": 49, "text": "\\mathbf{x}_{k+1}" }, { "math_id": 50, "text": "\\mathbf{x}_{k}" }, { "math_id": 51, "text": "\\mathbf{x}_{k+1} := \\mathbf{x}_k + \\alpha_k \\mathbf{p}_k" }, { "math_id": 52, "text": "\\mathbf{r}_k := \\mathbf{b} - \\mathbf{A x}_k" }, { "math_id": 53, "text": "\\mathbf{r}_{k+1} := \\mathbf{r}_k - \\alpha_k \\mathbf{A p}_k" }, { "math_id": 54, "text": "\\mathbf{r}_{k+1} := \\mathbf{b} - \\mathbf{A x}_{k+1}" }, { "math_id": 55, "text": "\\mathbf{A p}_k" }, { "math_id": 56, "text": "\\mathbf{r}_{k+1}" }, { "math_id": 57, "text": "\\mathbf{r}_{k}" }, { "math_id": 58, "text": "\\alpha_k = 
\\frac{\\mathbf{r}_{k}^\\mathsf{T} \\mathbf{r}_{k}}{\\mathbf{r}_{k}^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k} = \\frac{\\mathbf{r}_k^\\mathsf{T} \\mathbf{r}_k}{\\mathbf{p}_k^\\mathsf{T} \\mathbf{A p}_k} " }, { "math_id": 59, "text": "\\mathbf{r}_{k+1} = \\mathbf{p}_{k+1}-\\mathbf{\\beta}_{k}\\mathbf{p}_{k}" }, { "math_id": 60, "text": "\\mathbf{p}_{k+1}" }, { "math_id": 61, "text": "\\mathbf{p}_{k}" }, { "math_id": 62, "text": "\\beta_k = - \\frac{\\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k}{\\mathbf{p}_k^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k}" }, { "math_id": 63, "text": "\\mathbf{r}_{k+1} = \\mathbf{r}_{k} - \\alpha_{k} \\mathbf{A} \\mathbf{p}_{k}" }, { "math_id": 64, "text": " \\mathbf{A} \\mathbf{p}_{k} = \\frac{1}{\\alpha_{k}} (\\mathbf{r}_{k} - \\mathbf{r}_{k+1}), " }, { "math_id": 65, "text": " \\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k = \\frac{1}{\\alpha_k} \\mathbf{r}_{k+1}^\\mathsf{T} (\\mathbf{r}_k - \\mathbf{r}_{k+1}) = - \\frac{1}{\\alpha_k} \\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{r}_{k+1} " }, { "math_id": 66, "text": " \\mathbf{p}_k^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k = (\\mathbf{r}_k + \\beta_{k-1} \\mathbf{p}_{k-1})^\\mathsf{T} \\mathbf{A} \\mathbf{p}_k = \\frac{1}{\\alpha_k} \\mathbf{r}_k^\\mathsf{T} (\\mathbf{r}_k - \\mathbf{r}_{k+1}) = \\frac{1}{\\alpha_k} \\mathbf{r}_k^\\mathsf{T} \\mathbf{r}_k " }, { "math_id": 67, "text": "\\mathbf{A} \\mathbf{x}= \\begin{bmatrix} 4 & 1 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} = \\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix}," }, { "math_id": 68, "text": "\\mathbf{x}_0 = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix}" }, { "math_id": 69, "text": " \\mathbf{x} = \\begin{bmatrix} \\frac{1}{11} \\\\\\\\ \\frac{7}{11} \\end{bmatrix} \\approx \\begin{bmatrix} 0.0909 \\\\\\\\ 0.6364 \\end{bmatrix}" }, { "math_id": 70, "text": "\\mathbf{r}_0 = \\begin{bmatrix} 1 \\\\ 2 \\end{bmatrix} - \n\\begin{bmatrix} 4 & 1 \\\\ 1 & 3 \\end{bmatrix}\n\\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix} = \n\\begin{bmatrix}-8 \\\\ -3 \\end{bmatrix} = \\mathbf{p}_0." }, { "math_id": 71, "text": " \\alpha_0 = \\frac{\\mathbf{r}_0^\\mathsf{T} \\mathbf{r}_0}{\\mathbf{p}_0^\\mathsf{T} \\mathbf{A p}_0} = \\frac{\\begin{bmatrix} -8 & -3 \\end{bmatrix} \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix}}{ \\begin{bmatrix} -8 & -3 \\end{bmatrix} \\begin{bmatrix} 4 & 1 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix} } =\\frac{73}{331}\\approx0.2205" }, { "math_id": 72, "text": "\\mathbf{x}_1 = \\mathbf{x}_0 + \\alpha_0\\mathbf{p}_0 = \\begin{bmatrix} 2 \\\\ 1 \\end{bmatrix} + \\frac{73}{331} \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix} \\approx \\begin{bmatrix} 0.2356 \\\\ 0.3384 \\end{bmatrix}." }, { "math_id": 73, "text": "\\mathbf{r}_1 = \\mathbf{r}_0 - \\alpha_0 \\mathbf{A} \\mathbf{p}_0 = \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix} - \\frac{73}{331} \\begin{bmatrix} 4 & 1 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix} \\approx \\begin{bmatrix} -0.2810 \\\\ 0.7492 \\end{bmatrix}." }, { "math_id": 74, "text": "\\beta_0 = \\frac{\\mathbf{r}_1^\\mathsf{T} \\mathbf{r}_1}{\\mathbf{r}_0^\\mathsf{T} \\mathbf{r}_0} \\approx \\frac{\\begin{bmatrix} -0.2810 & 0.7492 \\end{bmatrix} \\begin{bmatrix} -0.2810 \\\\ 0.7492 \\end{bmatrix}}{\\begin{bmatrix} -8 & -3 \\end{bmatrix} \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix}} = 0.0088." 
}, { "math_id": 75, "text": "\\mathbf{p}_1 = \\mathbf{r}_1 + \\beta_0 \\mathbf{p}_0 \\approx \\begin{bmatrix} -0.2810 \\\\ 0.7492 \\end{bmatrix} + 0.0088 \\begin{bmatrix} -8 \\\\ -3 \\end{bmatrix} = \\begin{bmatrix} -0.3511 \\\\ 0.7229 \\end{bmatrix}." }, { "math_id": 76, "text": " \\alpha_1 = \\frac{\\mathbf{r}_1^\\mathsf{T} \\mathbf{r}_1}{\\mathbf{p}_1^\\mathsf{T} \\mathbf{A p}_1} \\approx \\frac{\\begin{bmatrix} -0.2810 & 0.7492 \\end{bmatrix} \\begin{bmatrix} -0.2810 \\\\ 0.7492 \\end{bmatrix}}{ \\begin{bmatrix} -0.3511 & 0.7229 \\end{bmatrix} \\begin{bmatrix} 4 & 1 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} -0.3511 \\\\ 0.7229 \\end{bmatrix} } = 0.4122." }, { "math_id": 77, "text": "\\mathbf{x}_2 = \\mathbf{x}_1 + \\alpha_1 \\mathbf{p}_1 \\approx \\begin{bmatrix} 0.2356 \\\\ 0.3384 \\end{bmatrix} + 0.4122 \\begin{bmatrix} -0.3511 \\\\ 0.7229 \\end{bmatrix} = \\begin{bmatrix} 0.0909 \\\\ 0.6364 \\end{bmatrix}." }, { "math_id": 78, "text": "\\kappa(A)" }, { "math_id": 79, "text": "A" }, { "math_id": 80, "text": "\\mathbf{A x}-\\mathbf{b} = 0" }, { "math_id": 81, "text": "\\mathbf{M}^{-1}(\\mathbf{A x}-\\mathbf{b}) = 0" }, { "math_id": 82, "text": "\\kappa(\\mathbf{M}^{-1}\\mathbf{A})" }, { "math_id": 83, "text": "\\kappa(\\mathbf{A})" }, { "math_id": 84, "text": "\n \\Pi_k^* := \\left\\lbrace \\ p \\in \\Pi_k \\ : \\ p(0)=1 \\ \\right\\rbrace \\,,\n" }, { "math_id": 85, "text": " \\Pi_k " }, { "math_id": 86, "text": " k " }, { "math_id": 87, "text": " \\left( \\mathbf{x}_k \\right)_k " }, { "math_id": 88, "text": " \\mathbf{x}_* " }, { "math_id": 89, "text": " \\mathbf{e}_k := \\mathbf{x}_k - \\mathbf{x}_* " }, { "math_id": 90, "text": "\n\\begin{align}\n \\left\\| \\mathbf{e}_k \\right\\|_\\mathbf{A}\n &= \\min_{p \\in \\Pi_k^*} \\left\\| p(\\mathbf{A}) \\mathbf{e}_0 \\right\\|_\\mathbf{A}\n \\\\\n &\\leq \\min_{p \\in \\Pi_k^*} \\, \\max_{ \\lambda \\in \\sigma(\\mathbf{A})} | p(\\lambda) | \\ \\left\\| \\mathbf{e}_0 \\right\\|_\\mathbf{A}\n \\\\\n &\\leq 2 \\left( \\frac{ \\sqrt{\\kappa(\\mathbf{A})}-1 }{ \\sqrt{\\kappa(\\mathbf{A})}+1 } \\right)^k \\ \\left\\| \\mathbf{e}_0 \\right\\|_\\mathbf{A}\n \\\\\n &\\leq 2 \\exp\\left(\\frac{-2k}{\\sqrt{\\kappa(\\mathbf{A})}}\\right) \\ \\left\\| \\mathbf{e}_0 \\right\\|_\\mathbf{A}\n \\,,\n\\end{align}\n" }, { "math_id": 91, "text": " \\sigma(\\mathbf{A}) " }, { "math_id": 92, "text": " \\kappa(\\mathbf{A}) " }, { "math_id": 93, "text": "k = \\tfrac{1}{2}\\sqrt{\\kappa(\\mathbf{A})} \\log\\left(\\left\\| \\mathbf{e}_0 \\right\\|_\\mathbf{A} \\varepsilon^{-1}\\right)" }, { "math_id": 94, "text": "2\\varepsilon" }, { "math_id": 95, "text": "\\varepsilon>0" }, { "math_id": 96, "text": " \\infty " }, { "math_id": 97, "text": "\n \\frac{ \\sqrt{\\kappa(\\mathbf{A})}-1 }{ \\sqrt{\\kappa(\\mathbf{A})}+1 }\n \\approx 1 - \\frac{2}{\\sqrt{\\kappa(\\mathbf{A})}}\n \\quad \\text{for} \\quad\n \\kappa(\\mathbf{A}) \\gg 1\n \\,.\n" }, { "math_id": 98, "text": " \\approx 1 - \\frac{2}{\\kappa(\\mathbf{A})} " }, { "math_id": 99, "text": " \\sqrt{\\kappa(\\mathbf{A})}" }, { "math_id": 100, "text": "\\mathbf{M}^{-1}" }, { "math_id": 101, "text": "\\mathbf{M}^{-1}\\mathbf{A}" }, { "math_id": 102, "text": "\\mathbf{r}_0 := \\mathbf{b} - \\mathbf{A x}_0" }, { "math_id": 103, "text": " \\textrm{Solve:}\\mathbf{M}\\mathbf{z}_0 := \\mathbf{r}_0" }, { "math_id": 104, "text": "\\mathbf{p}_0 := \\mathbf{z}_0" }, { "math_id": 105, "text": "k := 0 \\, " }, { "math_id": 106, "text": "\\alpha_k := \\frac{\\mathbf{r}_k^\\mathsf{T} \\mathbf{z}_k}{\\mathbf{p}_k^\\mathsf{T} 
\\mathbf{A p}_k}" }, { "math_id": 107, "text": "\\mathrm{Solve}\\ \\mathbf{M}\\mathbf{z}_{k+1} := \\mathbf{r}_{k+1}" }, { "math_id": 108, "text": "\\beta_k := \\frac{\\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{z}_{k+1}}{\\mathbf{r}_k^\\mathsf{T} \\mathbf{z}_k}" }, { "math_id": 109, "text": "\\mathbf{p}_{k+1} := \\mathbf{z}_{k+1} + \\beta_k \\mathbf{p}_k" }, { "math_id": 110, "text": "k := k + 1 \\, " }, { "math_id": 111, "text": "\\mathbf{E}^{-1}\\mathbf{A}(\\mathbf{E}^{-1})^\\mathsf{T}\\mathbf{\\hat{x}}=\\mathbf{E}^{-1}\\mathbf{b}" }, { "math_id": 112, "text": "\\mathbf{EE}^\\mathsf{T}=\\mathbf{M}, \\qquad \\mathbf{\\hat{x}}=\\mathbf{E}^\\mathsf{T}\\mathbf{x}." }, { "math_id": 113, "text": "\\mathbf{E}^{-1}\\mathbf{A}(\\mathbf{E}^{-1})^\\mathsf{T}" }, { "math_id": 114, "text": "\\mathbf{M}" }, { "math_id": 115, "text": "\\mathbf{L}" }, { "math_id": 116, "text": "\\mathbf{M}=\\mathbf{LL}^\\mathsf{T}" }, { "math_id": 117, "text": "\\mathbf{Mz}=\\mathbf{r}" }, { "math_id": 118, "text": "\\mathbf{z}=\\mathbf{M}^{-1}\\mathbf{r}" }, { "math_id": 119, "text": "\\mathbf{M}^{-1}=(\\mathbf{L}^{-1})^\\mathsf{T}\\mathbf{L}^{-1}" }, { "math_id": 120, "text": "\\mathbf{z}=(\\mathbf{L}^{-1})^\\mathsf{T}\\mathbf{L}^{-1}\\mathbf{r}" }, { "math_id": 121, "text": "\\mathbf{a}" }, { "math_id": 122, "text": "\\mathbf{a}=\\mathbf{L}^{-1}\\mathbf{r}" }, { "math_id": 123, "text": "\\mathbf{r}=\\mathbf{L}\\mathbf{a}" }, { "math_id": 124, "text": "\\mathbf{r}" }, { "math_id": 125, "text": "\\mathbf{z}=(\\mathbf{L}^{-1})^\\mathsf{T}\\mathbf{a}" }, { "math_id": 126, "text": "\\mathbf{a}=\\mathbf{L}^\\mathsf{T}\\mathbf{z}" }, { "math_id": 127, "text": "\\mathbf{L}^\\mathsf{T}" }, { "math_id": 128, "text": "\\mathbf{z}" }, { "math_id": 129, "text": "\\beta_k := \\frac{\\mathbf{r}_{k+1}^\\mathsf{T} \\left(\\mathbf{z}_{k+1}-\\mathbf{z}_{k}\\right)}{\\mathbf{r}_k^\\mathsf{T} \\mathbf{z}_k}" }, { "math_id": 130, "text": "\\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{z}_{k}=0," }, { "math_id": 131, "text": "\\beta_k := 0" }, { "math_id": 132, "text": "u = k(x, v):= -\\gamma_a \\nabla f(x) - \\gamma_b v " }, { "math_id": 133, "text": "\\dot x = v, \\quad \\dot v = u " }, { "math_id": 134, "text": "\\gamma_a" }, { "math_id": 135, "text": "\\gamma_b" }, { "math_id": 136, "text": "\\mathbf {A} \\mathbf {x} =\\mathbf {b}" }, { "math_id": 137, "text": "t \\in (0, 1)" }, { "math_id": 138, "text": "W= \\begin{bmatrix}\n t & \\sqrt{t} & & & & \\\\\n \\sqrt{t} & 1+t & \\sqrt{t} & & & \\\\\n & \\sqrt{t} & 1+t & \\sqrt{t} & & \\\\\n & & \\sqrt{t} & \\ddots & \\ddots & \\\\\n & & & \\ddots & & \\\\\n & & & & & \\sqrt{t} \\\\\n & & & & \\sqrt{t} & 1+t\n \\end{bmatrix}, \\quad b=\\begin{bmatrix}\n 1 \\\\\n 0 \\\\\n \\vdots \\\\\n 0\n \\end{bmatrix}" }, { "math_id": 139, "text": "W" }, { "math_id": 140, "text": "W x = b " }, { "math_id": 141, "text": "\\|b- Wx_k\\|^2 = (1/t)^{k}, \\quad \\|b- Wx_n\\|^2 =0" } ]
https://en.wikipedia.org/wiki?curid=1448821
1448833
Cardinal utility
In contrast with ordinal utility, in economics In economics, a cardinal utility expresses not only which of two outcomes is preferred, but also the intensity of preferences, i.e. "how much" better or worse one outcome is compared to another. In consumer choice theory, economists originally attempted to replace cardinal utility with the apparently-weaker concept of ordinal utility. Cardinal utility appears to impose the assumption that levels of absolute satisfaction exist, so magnitudes of increments to satisfaction can be compared across different situations. However, economists in the 1940s proved that under mild conditions, ordinal utilities imply cardinal utilities. This result is now known as the von Neumann-Morgenstern utility theorem; many similar utility representation theorems exist in other contexts. History. In 1738, Daniel Bernoulli was the first to theorize about the marginal value of money. He assumed that the value of an additional amount is inversely proportional to the pecuniary possessions which a person already owns. Since Bernoulli tacitly assumed that an interpersonal measure for the utility reaction of different persons can be discovered, he was then inadvertently using an early conception of cardinality. Bernoulli's imaginary logarithmic utility function and Gabriel Cramer's "U" = "W"1/2 function were conceived at the time not for a theory of demand but to solve the St. Petersburg game. Bernoulli assumed that "a poor man generally obtains more utility than a rich man from an equal gain", an approach that is more profound than the simple mathematical expectation of money as it involves a law of "moral expectation". Early theorists of utility considered that it had physically quantifiable attributes. They thought that utility behaved like the magnitudes of distance or time, in which the simple use of a ruler or stopwatch resulted in a distinguishable measure. "Utils" was the name actually given to the units in a utility scale. In the Victorian era many aspects of life were succumbing to quantification. The theory of utility soon began to be applied to moral-philosophy discussions. The essential idea in utilitarianism is to judge people's decisions by looking at their change in utils and measure whether they are better off. The main forerunner of the utilitarian principles since the end of the 18th century was Jeremy Bentham, who believed utility could be measured by some complex introspective examination and that it should guide the design of social policies and laws. For Bentham a scale of pleasure has as a unit of intensity "the degree of intensity possessed by that pleasure which is the faintest of any that can be distinguished to be pleasure"; he also stated that, as these pleasures increase in intensity, higher and higher numbers could represent them. In the 18th and 19th centuries utility's measurability received plenty of attention from European schools of political economy, most notably through the work of marginalists (e.g., William Stanley Jevons, Léon Walras, Alfred Marshall). However, none of them offered solid arguments to support the assumption of measurability. In Jevons's case, he added to the later editions of his work a note on the difficulty of estimating utility with accuracy. Walras, too, struggled for many years before he could even attempt to formalize the assumption of measurability.
Marshall was ambiguous about the measurability of hedonism because he adhered to its psychological-hedonistic properties but he also argued that it was "unrealistical" to do so. Supporters of cardinal utility theory in the 19th century suggested that market prices reflect utility, although they did not say much about their compatibility (i.e., prices being objective while utility is subjective). Accurately measuring subjective pleasure (or pain) seemed awkward, as the thinkers of the time were surely aware. They renamed utility in imaginative ways such as "subjective wealth", "overall happiness", "moral worth", "psychic satisfaction", or "ophélimité". During the second half of the 19th century many studies related to this fictional magnitude, utility, were conducted, but the conclusion was always the same: it proved impossible to definitively say whether a good is worth 50, 75, or 125 utils to a person, or to two different people. Moreover, the mere dependence of utility on notions of hedonism led academic circles to be skeptical of this theory. Francis Edgeworth was also aware of the need to ground the theory of utility in the real world. He discussed the quantitative estimates that a person can make of his own pleasure or the pleasure of others, borrowing methods developed in psychology to study hedonic measurement: psychophysics. This field of psychology was built on work by Ernst H. Weber, but around the time of World War I, psychologists grew discouraged with it. In the late 19th century, Carl Menger and his followers from the Austrian school of economics undertook the first successful departure from measurable utility, in the clever form of a theory of ranked uses. Despite abandoning the thought of quantifiable utility (i.e. psychological satisfaction mapped into the set of real numbers) Menger managed to establish a body of hypotheses about decision-making, resting solely on a few axioms of ranked preferences over the possible uses of goods and services. His numerical examples are "illustrative of ordinal, not cardinal, relationships". Around the turn of the 20th century neoclassical economists started to embrace alternative ways to deal with the measurability issue. By 1900, Pareto was hesitant about accurately measuring pleasure or pain because he thought that such a self-reported subjective magnitude lacked scientific validity. He wanted to find an alternative way to treat utility that did not rely on erratic perceptions of the senses. Pareto's main contribution to ordinal utility was to assume that higher indifference curves have greater utility, but how much greater does not need to be specified to obtain the result of diminishing marginal rates of substitution. The works and manuals of Vilfredo Pareto, Francis Edgeworth, Irving Fisher, and Eugen Slutsky departed from cardinal utility and served as pivots for others to continue the trend toward ordinality. According to Viner, these economic thinkers came up with a theory that explained the negative slopes of demand curves. Their method avoided the measurability of utility by constructing some abstract indifference curve map. During the first three decades of the 20th century, economists from Italy and Russia became familiar with the Paretian idea that utility does not need to be cardinal. According to Schultz, by 1931 the idea of ordinal utility was not yet embraced by American economists. The breakthrough occurred when a theory of ordinal utility was put together by John Hicks and Roy Allen in 1934.
In fact, pages 54–55 of this paper contain the first recorded use of the term 'cardinal utility'. The first treatment of a class of utility functions preserved by affine transformations, though, was made in 1934 by Oskar Lange. In 1944 Frank Knight argued extensively for cardinal utility. In the 1960s Parducci studied human judgments of magnitudes and suggested a range-frequency theory. Since the late 20th century, economists have shown a renewed interest in the measurement issues of happiness. This field has been developing methods, surveys and indices to measure happiness. Several properties of cardinal utility functions can be derived using tools from measure theory and set theory. Measurability. A utility function is considered to be measurable if the strength of preference or intensity of liking of a good or service is determined with precision by the use of some objective criterion. For example, suppose that eating an apple gives a person exactly half the pleasure of eating an orange. This would be a measurable utility if and only if the test employed for its direct measurement is based on an objective criterion that could let any external observer repeat the results accurately. One hypothetical way to achieve this would be by the use of a hedonometer, which was the instrument suggested by Edgeworth to be capable of registering the height of pleasure experienced by people, diverging according to a law of errors. Before the 1930s, the measurability of utility functions was erroneously labeled as cardinality by economists. A different meaning of cardinality was used by economists who followed the formulation of Hicks-Allen, where two cardinal utility functions are considered the same if they preserve preference orderings uniquely up to positive affine transformations. Around the end of the 1940s, some economists even rushed to argue that the von Neumann-Morgenstern axiomatization of expected utility had resurrected measurability. The confusion between cardinality and measurability was not resolved until the works of Armen Alchian, William Baumol, and John Chipman. The title of Baumol's paper, "The cardinal utility which is ordinal", expressed well the semantic mess of the literature at the time. It is helpful to consider the same problem as it appears in the construction of scales of measurement in the natural sciences. In the case of temperature there are two "degrees of freedom" for its measurement: the choice of unit and the zero. Different temperature scales map its intensity in different ways. In the Celsius scale the zero is chosen to be the point where water freezes, and likewise, in cardinal utility theory one would be tempted to think that the choice of zero would correspond to a good or service that brings exactly 0 utils. However, this is not necessarily true. The mathematical index remains cardinal, even if the zero gets moved arbitrarily to another point, or if the choice of scale is changed, or if both the scale and the zero are changed. Every measurable entity maps into a cardinal function but not every cardinal function is the result of the mapping of a measurable entity. The point of this example is that (as with temperature) it is still possible to predict something about the combination of two values of some utility function, even if the utils get transformed into entirely different numbers, as long as the transformation is positive linear (affine). 
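The temperature analogy is easy to check numerically. The following minimal Python sketch, using arbitrary illustrative numbers, shows that moving the zero and rescaling the unit (a positive affine transformation) changes every utility value while leaving both the ordering and the ratios of utility differences intact:

    # Illustrative utility index and an affine rescaling of it (new zero, new unit).
    utils = [2.0, 5.0, 11.0]
    rescaled = [3.0 * u + 7.0 for u in utils]      # -> [13.0, 22.0, 40.0]

    # The ordering and the ratio of successive utility differences are preserved.
    ratio = lambda x: (x[2] - x[1]) / (x[1] - x[0])
    assert sorted(utils) == utils and sorted(rescaled) == rescaled
    assert abs(ratio(utils) - ratio(rescaled)) < 1e-12   # both equal 2.0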
Von Neumann and Morgenstern stated that the question of measurability of physical quantities was dynamic. For instance, temperature was originally a number only up to any monotone transformation, but the development of ideal-gas thermometry led to transformations in which the absolute zero and absolute unit were missing. Subsequent developments of thermodynamics even fixed the absolute zero so that the transformation system in thermodynamics consists only of multiplication by constants. According to Von Neumann and Morgenstern (1944, p. 23) "For utility the situation seems to be of a similar nature [to temperature]". The following quote from Alchian served to clarify once and for all the real nature of utility functions, emphasizing that they no longer need to be measurable: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Can we assign a set of numbers (measures) to the various entities and predict that the entity with the largest assigned number (measure) will be chosen? If so, we could christen this measure "utility" and then assert that choices are made so as to maximize utility. It is an easy step to the statement that "you are maximizing your utility", which says no more than that your choice is predictable according to the size of some assigned numbers. For analytical convenience it is customary to postulate that an individual seeks to maximize something subject to some constraints. The thing – or numerical measure of the "thing" – which he seeks to maximize is called "utility". Whether or not utility is some kind of glow or warmth, or happiness, is here irrelevant; all that counts is that we can assign numbers to entities or conditions which a person can strive to realize. Then we say the individual seeks to maximize some function of those numbers. Unfortunately, the term "utility" has by now acquired so many connotations, that it is difficult to realize that for present purposes utility has no more meaning than this. Order of preference. In 1955 Patrick Suppes and Muriel Winet solved the issue of the representability of preferences by a cardinal utility function, and derived the set of axioms and primitive characteristics required for this utility index to work. Suppose an agent is asked to rank his preferences of "A" relative to "B" and his preferences of "B" relative to "C". If he finds that he can state, for example, that his degree of preference of "A" to "B" exceeds his degree of preference of "B" to "C", we could summarize this information by any triplet of numbers satisfying the two inequalities: "UA" &gt; "UB" &gt; "UC" and "UA" − "UB" &gt; "UB" − "UC". If A and B were sums of money, the agent could vary the sum of money represented by B until he could tell us that he found his degree of preference of A over the revised amount "B′ " equal to his degree of preference of "B′ " over C. If he finds such a "B′", then the results of this last operation would be expressed by any triplet of numbers satisfying the relationships: (a) "UA" &gt; "UB′" &gt; "UC", and (b) "UA" − "UB′" = "UB′" − "UC". Any two triplets obeying these relationships must be related by a positive linear (affine) transformation; they represent utility indices differing only by scale and origin. In this case, "cardinality" means nothing more than being able to give consistent answers to these particular questions. This experiment does not require measurability of utility. 
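A quick numerical check of this claim, with made-up triplets, confirms that any two utility indices satisfying relationships (a) and (b) differ only by scale and origin:

    # Two triplets obeying U_A > U_B' > U_C and U_A - U_B' = U_B' - U_C.
    t1 = (9.0, 6.0, 3.0)
    t2 = (40.0, 25.0, 10.0)

    # Fit u' = a*u + b through the first two coordinates...
    a = (t2[0] - t2[1]) / (t1[0] - t1[1])   # a = 5.0 > 0
    b = t2[0] - a * t1[0]                   # b = -5.0

    # ...and the third coordinate then follows automatically.
    assert a > 0 and abs(a * t1[2] + b - t2[2]) < 1e-12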
Itzhak Gilboa gives a sound explanation of why measurability can never be attained solely by introspection: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It might have happened to you that you were carrying a pile of papers, or clothes, and didn't notice that you dropped a few. The decrease in the total weight you were carrying was probably not large enough for you to notice. Two objects may be too close in terms of weight for us to notice the difference between them. This problem is common to perception in all our senses. If I ask whether two rods are of the same length or not, there are differences that will be too small for you to notice. The same would apply to your perception of sound (volume, pitch), light, temperature, and so forth... According to this view, those situations where a person just cannot tell the difference between A and B will lead to indifference not because of a consistency of preferences, but because of a misperception of the senses. Moreover, human senses adapt to a given level of stimulation and then register changes from that baseline. Construction. Suppose a certain agent has a preference ordering over random outcomes (lotteries). If the agent can be queried about his preferences, it is possible to construct a cardinal utility function that represents these preferences. This is the core of the von Neumann–Morgenstern utility theorem. Applications. Welfare economics. Among welfare economists of the utilitarian school it has been the general tendency to take satisfaction (in some cases, pleasure) as the unit of welfare. If the function of welfare economics is to contribute data which will serve the social philosopher or the statesman in the making of welfare judgments, this tendency perhaps leads to a hedonistic ethics. Under this framework, actions (including production of goods and provision of services) are judged by their contributions to the subjective wealth of people. In other words, it provides a way of judging the "greatest good to the greatest number of persons". An act that reduces one person's utility by 75 utils while increasing two others' by 50 utils each has increased overall utility by 25 utils and is thus a positive contribution; one that costs the first person 125 utils while giving the same 50 each to two other people has resulted in a net loss of 25 utils. If a class of utility functions is cardinal, intrapersonal comparisons of utility differences are allowed. If, in addition, some comparisons of utility are meaningful interpersonally, the linear transformations used to produce the class of utility functions must be restricted across people. An example is cardinal unit comparability. In that information environment, admissible transformations are increasing affine functions and, in addition, the scaling factor must be the same for everyone. This information assumption allows for interpersonal comparisons of utility differences, but utility levels cannot be compared interpersonally because the intercept of the affine transformations may differ across people. Expected utility theory. This type of index involves choices under risk. In this case, "A", "B", and "C" are lotteries associated with outcomes. Unlike cardinal utility theory under certainty, in which the possibility of moving from preferences to quantified utility was almost trivial, here it is paramount to be able to map preferences into the set of real numbers, so that the operation of mathematical expectation can be executed. 
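A minimal sketch of such a mapping follows, with a simulated agent: the hidden utility function merely stands in for the person being queried, and the function names and numbers are illustrative inventions rather than a standard implementation. Normalizing u(worst) = 0 and u(best) = 1, the utility of each intermediate outcome is the probability p at which the agent reports indifference between the sure outcome and the lottery giving best with probability p:

    def calibrate(outcomes, hidden_u, tol=1e-9):
        """Map outcomes (sorted worst to best) into [0, 1] by lottery queries."""
        worst, best = outcomes[0], outcomes[-1]
        scale = lambda x: ((hidden_u(x) - hidden_u(worst))
                           / (hidden_u(best) - hidden_u(worst)))
        utility = {}
        for x in outcomes:
            lo, hi = 0.0, 1.0
            while hi - lo > tol:        # bisection on the indifference probability
                p = (lo + hi) / 2
                # "Do you prefer x for sure, or best with probability p?"
                if scale(x) > p:
                    lo = p
                else:
                    hi = p
            utility[x] = (lo + hi) / 2
        return utility

    # A risk-averse simulated agent: u(50) comes out near 0.707, not 0.5.
    print(calibrate([0, 50, 100], hidden_u=lambda x: x ** 0.5))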
Once the mapping is done, the introduction of additional assumptions would result in a consistent behavior of people regarding fair bets. But fair bets are, by definition, the result of comparing a gamble with an expected value of zero to some other gamble. Although it is impossible to model attitudes toward risk if one doesn't quantify utility, the theory should not be interpreted as measuring strength of preference under certainty. Construction of the utility function. Suppose that certain outcomes are associated with three states of nature, so that "x"3 is preferred over "x"2, which in turn is preferred over "x"1; this set of outcomes, "X", can be assumed to be a calculable money-prize in a controlled game of chance, unique up to one positive proportionality factor depending on the currency unit. Let "L"1 and "L"2 be two lotteries with probabilities "p"1, "p"2, and "p"3 of "x"1, "x"2, and "x"3 respectively being formula_0 formula_1 Assume that someone has the following preference structure under risk: formula_2 meaning that "L"1 is preferred over "L"2. By modifying the values of "p"1 and "p"3 in "L"1, eventually there will be some appropriate values ("L"1') for which she is found to be indifferent between it and "L"2, for example formula_3 Expected utility theory tells us that formula_4 and so formula_5 In this example from Majumdar, fixing the zero value of the utility index so that the utility of "x"1 is 0, and choosing the scale so that the utility of "x"2 equals 1, gives formula_6 formula_7 Intertemporal utility. Models of utility with several periods, in which people discount future values of utility, need to employ cardinalities in order to have well-behaved utility functions. According to Paul Samuelson, the maximization of the discounted sum of future utilities implies that a person can rank utility differences. Controversies. Some authors have commented on the misleading nature of the terms "cardinal utility" and "ordinal utility", as used in economic jargon: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;These terms, which seem to have been introduced by Hicks and Allen (1934), bear scant if any relation to the mathematicians' concept of ordinal and cardinal numbers; rather they are euphemisms for the concepts of order-homomorphism to the real numbers and group-homomorphism to the real numbers. There remain economists who believe that utility, if it cannot be measured, can at least be approximated to provide some form of measurement, much as prices, which have no uniform unit to provide an actual price level, can still be indexed to provide an "inflation rate" (which is actually a rate of change in the prices of weighted, indexed products). These measures are not perfect but can act as a proxy for utility. Lancaster's characteristics approach to consumer demand illustrates this point. Comparison between ordinal and cardinal utility functions. The key difference between the two types of utility functions common in economics is the class of transformations under which they are preserved: an ordinal utility function is unique up to increasing monotone transformations, while a cardinal utility function is unique up to positive affine transformations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L_1 =(0.6, 0, 0.4)," }, { "math_id": 1, "text": "L_2 =(0,1,0)\\ ." }, { "math_id": 2, "text": "L_{1} \\succ L_{2}," }, { "math_id": 3, "text": "L_{1}' =(0.5, 0, 0.5)." }, { "math_id": 4, "text": "EU(L_{1}') = EU(L_2)" }, { "math_id": 5, "text": "(0.5) \\times u(x_1)+(0.5) \\times u(x_{3}) = 1 \\times u(x_{2})." }, { "math_id": 6, "text": "(0.5) \\times u(x_{3})=1." }, { "math_id": 7, "text": "u(x_{3}) = 2." } ]
https://en.wikipedia.org/wiki?curid=1448833
1448859
Art gallery problem
Mathematical problem The art gallery problem or museum problem is a well-studied visibility problem in computational geometry. It originates from the following real-world problem: "In an art gallery, what is the minimum number of guards who together can observe the whole gallery?" In the geometric version of the problem, the layout of the art gallery is represented by a simple polygon and each guard is represented by a point in the polygon. A set formula_0 of points is said to guard a polygon if, for every point formula_1 in the polygon, there is some formula_2 such that the line segment between formula_1 and formula_3 does not leave the polygon. The art gallery problem can be applied in several domains, such as robotics, where artificial intelligences (AI) need to plan movements depending on their surroundings. Other domains where this problem is applied include image editing, the lighting of a stage, and the placement of infrastructure for warning of natural disasters. Two dimensions. There are numerous variations of the original problem that are also referred to as the art gallery problem. In some versions guards are restricted to the perimeter, or even to the vertices of the polygon. Some versions require only the perimeter or a subset of the perimeter to be guarded. Solving the version in which guards must be placed on vertices and only vertices need to be guarded is equivalent to solving the dominating set problem on the visibility graph of the polygon. Chvátal's art gallery theorem. Chvátal's art gallery theorem, named after Václav Chvátal, gives an upper bound on the minimal number of guards. It states: "To guard a simple polygon with formula_4 vertices, formula_5 guards are always sufficient and sometimes necessary." History. The question of how many vertices/watchmen/guards were needed was posed to Chvátal by Victor Klee in 1973. Chvátal proved it shortly thereafter. Chvátal's proof was later simplified by Steve Fisk, via a 3-coloring argument. Chvátal has a more geometrical approach, whereas Fisk uses well-known results from graph theory. Fisk's short proof. Steve Fisk's proof is so short and elegant that it was chosen for inclusion in "Proofs from THE BOOK". The proof goes as follows: First, the polygon is triangulated (without adding extra vertices), which is possible because every simple polygon admits such a triangulation. The vertices of the resulting triangulation graph may be 3-colored (the dual graph of the triangulation is a tree, so a valid 3-coloring can be built one triangle at a time). Clearly, under a 3-coloring, every triangle must have all three colors. The vertices with any one color form a valid guard set, because every triangle of the polygon is guarded by its vertex with that color. Since the three colors partition the "n" vertices of the polygon, the color with the fewest vertices defines a valid guard set with at most formula_6 guards. Illustration of the proof. To illustrate the proof, we consider the polygon below. The first step is to triangulate the polygon (see "Figure 1"). Then, one applies a proper formula_7-coloring ("Figure 2") and observes that there are formula_8 red, formula_8 blue and formula_9 green vertices. The color with the fewest vertices is red or blue, so the polygon can be covered by formula_8 guards ("Figure 3"). This agrees with the art gallery theorem, because the polygon has formula_10 vertices, and formula_11. 
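Fisk's argument is directly algorithmic. The sketch below assumes a triangulation is already given as triples of vertex indices (for example from an ear-clipping routine); the function name and input format are illustrative, not from any particular library. It walks the dual tree of the triangulation, 3-colors the vertices, and returns the smallest color class as the guard set:

    from collections import deque

    def fisk_guards(n, triangles):
        """Pick at most n//3 vertex guards from a polygon triangulation."""
        # Triangles sharing an edge are neighbours in the dual tree.
        edge_to_tri = {}
        for t, tri in enumerate(triangles):
            for i in range(3):
                e = frozenset((tri[i], tri[(i + 1) % 3]))
                edge_to_tri.setdefault(e, []).append(t)
        color = {}                          # vertex -> 0, 1 or 2
        for c, v in enumerate(triangles[0]):
            color[v] = c                    # color the first triangle arbitrarily
        seen, queue = {0}, deque([0])
        while queue:
            t = queue.popleft()
            for i in range(3):
                e = frozenset((triangles[t][i], triangles[t][(i + 1) % 3]))
                for u in edge_to_tri[e]:
                    if u not in seen:
                        seen.add(u)
                        # The one uncolored vertex of u takes the color
                        # missing from the shared, already-colored edge.
                        uncolored = [v for v in triangles[u] if v not in color]
                        if uncolored:
                            used = {color[v] for v in triangles[u] if v in color}
                            color[uncolored[0]] = ({0, 1, 2} - used).pop()
                        queue.append(u)
        classes = {0: [], 1: [], 2: []}
        for v, c in color.items():
            classes[c].append(v)
        return min(classes.values(), key=len)

For a square triangulated as (0, 1, 2) and (0, 2, 3), this returns a single guard vertex, matching the bound of one guard for four vertices.
Generalizations. Chvátal's upper bound remains valid if the restriction to guards at corners is loosened to guards at any point not exterior to the polygon. 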
There are a number of other generalizations and specializations of the original art-gallery theorem. For instance, for orthogonal polygons, those whose edges/walls meet at right angles, only formula_12 guards are needed. There are at least three distinct proofs of this result, none of them simple: by Kahn, Klawe, and Kleitman; by Lubiw; and by Sack and Toussaint. A related problem asks for the number of guards to cover the exterior of an arbitrary polygon (the "Fortress Problem"): formula_13 are sometimes necessary and always sufficient if guards are placed on the boundary of the polygon, while formula_14 are sometimes necessary and always sufficient if guards are placed anywhere in the exterior of the polygon. In other words, the infinite exterior is more challenging to cover than the finite interior. Computational complexity. In decision problem versions of the art gallery problem, one is given as input both a polygon and a number "k", and must determine whether the polygon can be guarded with "k" or fewer guards. This problem is formula_15-complete, as is the version where the guards are restricted to the edges of the polygon. Furthermore, most of the other standard variations (such as restricting the guard locations to vertices) are NP-hard. Regarding approximation algorithms for the minimum number of guards, the problem has been proved to be APX-hard, implying that it is unlikely that any approximation ratio better than some fixed constant can be achieved by a polynomial time approximation algorithm. A logarithmic approximation may be achieved for the minimum number of vertex guards by discretizing the input polygon into convex subregions and then reducing the problem to a set cover problem. Furthermore, the set system derived from an art gallery problem has been shown to have bounded VC dimension, allowing the application of set cover algorithms based on ε-nets whose approximation ratio is the logarithm of the optimal number of guards rather than of the number of polygon vertices. For unrestricted guards, the infinite number of potential guard positions makes the problem even more difficult. However, by restricting the guards to lie on a fine grid, a more complicated logarithmic approximation algorithm can be derived under some mild extra assumptions. However, efficient algorithms are known for finding a set of at most formula_5 vertex guards, matching Chvátal's upper bound. David Avis and Godfried Toussaint (1981) proved that a placement for these guards may be computed in O(n log n) time in the worst case, via a divide and conquer algorithm. A linear-time algorithm was later obtained by combining Fisk's short proof with Bernard Chazelle's linear-time plane triangulation algorithm. For simple polygons that do not contain holes, the existence of a constant factor approximation algorithm for vertex and edge guards was conjectured by Ghosh. Ghosh's conjecture was initially shown to be true for vertex guards in two special sub-classes of simple polygons, viz. monotone polygons and polygons weakly visible from an edge. For monotone polygons, an approximation algorithm was presented that computes in polynomial time a vertex guard set whose size is at most 30 times the optimal number of vertex guards. For simple polygons weakly visible from an edge, another approximation algorithm computes in O(n²) time a vertex guard set whose size is at most 6 times the optimal number of vertex guards. 
Subsequent work claimed to have settled the conjecture completely by presenting constant-factor approximation algorithms for guarding general simple polygons using vertex guards and edge guards. For vertex guarding the subclass of simple polygons that are weakly visible from an edge, a polynomial-time approximation scheme has also been proposed. An exact algorithm has likewise been proposed for vertex guards. Its authors conducted extensive computational experiments with several classes of polygons showing that optimal solutions can be found in relatively small computation times even for instances associated with thousands of vertices. The input data and the optimal solutions for these instances are available for download. Three dimensions. If a museum is represented in three dimensions as a polyhedron, then putting a guard at each vertex will not ensure that all of the museum is under observation. Although all of the surface of the polyhedron would be surveyed, for some polyhedra there are points in the interior that might not be under surveillance. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
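The set-cover reduction mentioned above is easy to sketch for the variant in which vertex guards must watch all vertices. Assuming a visibility predicate sees(i, j) is available (point-to-point visibility testing inside a polygon is a standard but separate routine; the names here are illustrative), the classic greedy set-cover heuristic already gives a logarithmic approximation in the number of vertices:

    def greedy_vertex_guards(n, sees):
        """Greedy set cover: vertex guards covering all n polygon vertices."""
        cover = {v: {u for u in range(n) if u == v or sees(v, u)} for v in range(n)}
        uncovered, guards = set(range(n)), []
        while uncovered:
            # Repeatedly pick the vertex that sees the most unwatched vertices.
            best = max(range(n), key=lambda v: len(cover[v] & uncovered))
            guards.append(best)
            uncovered -= cover[best]
        return guards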
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "q\\in S" }, { "math_id": 3, "text": "q" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "\\left\\lfloor n/3 \\right\\rfloor" }, { "math_id": 6, "text": "\\lfloor n/3\\rfloor" }, { "math_id": 7, "text": "3" }, { "math_id": 8, "text": "4" }, { "math_id": 9, "text": "6" }, { "math_id": 10, "text": "14" }, { "math_id": 11, "text": "\\left\\lfloor \\frac{14}{3} \\right\\rfloor = 4" }, { "math_id": 12, "text": "\\lfloor n/4 \\rfloor" }, { "math_id": 13, "text": "\\lceil n/2 \\rceil" }, { "math_id": 14, "text": "\\lceil n/3 \\rceil" }, { "math_id": 15, "text": "\\exists\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=1448859
14488591
1,2-α-L-fucosidase
Class of enzymes The enzyme 1,2-α-L-fucosidase (EC 3.2.1.63) catalyzes the following chemical reaction: methyl-2-α-L-fucopyranosyl-β-D-galactoside + H2O formula_0 L-fucose + methyl β-D-galactoside It belongs to the family of hydrolases, specifically those glycosidases that hydrolyse "O"- and "S"-glycosyl compounds. The systematic name is 2-α-L-fucopyranosyl-β-D-galactoside fucohydrolase. Other names in common use include almond emulsin fucosidase, and α-(1→2)-L-fucosidase. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 2EAB, 2EAC, 2EAD, and 2EAE. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14488591
14488752
1-methyladenosine nucleosidase
Class of enzymes In enzymology, a 1-methyladenosine nucleosidase (EC 3.2.2.13) is an enzyme that catalyzes the chemical reaction 1-methyladenosine + H2O formula_0 1-methyladenine + D-ribose Thus, the two substrates of this enzyme are 1-methyladenosine and H2O, whereas its two products are 1-methyladenine and D-ribose. This enzyme belongs to the family of hydrolases, specifically those glycosylases that hydrolyse N-glycosyl compounds. The systematic name of this enzyme class is 1-methyladenosine ribohydrolase. This enzyme is also called 1-methyladenosine hydrolase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14488752
1449031
Activity coefficient
Value accounting for thermodynamic non-ideality of mixtures In thermodynamics, an activity coefficient is a factor used to account for deviation of a mixture of chemical substances from ideal behaviour. In an ideal mixture, the microscopic interactions between each pair of chemical species are the same (or, macroscopically equivalent, the enthalpy change of solution and the volume variation in mixing are zero) and, as a result, properties of the mixtures can be expressed directly in terms of simple concentrations or partial pressures of the substances present, e.g. Raoult's law. Deviations from ideality are accommodated by modifying the concentration by an "activity coefficient". Analogously, expressions involving gases can be adjusted for non-ideality by scaling partial pressures by a fugacity coefficient. The concept of activity coefficient is closely linked to that of activity in chemistry. Thermodynamic definition. The chemical potential, formula_0, of a substance B in an ideal mixture of liquids or an ideal solution is given by formula_1, where "μ"B⊖ is the chemical potential of the pure substance formula_2, and formula_3 is the mole fraction of the substance in the mixture. This is generalised to include non-ideal behavior by writing formula_4 where formula_5 is the activity of the substance in the mixture, formula_6, where formula_7 is the activity coefficient, which may itself depend on formula_8. As formula_7 approaches 1, the substance behaves as if it were ideal. For instance, if formula_7 ≈ 1, then Raoult's law is accurate. For formula_7 &gt; 1 and formula_7 &lt; 1, substance B shows positive and negative deviation from Raoult's law, respectively. A positive deviation implies that substance B is more volatile. In many cases, as formula_8 goes to zero, the activity coefficient of substance B approaches a constant; this relationship is Henry's law for the solute. These relationships are related to each other through the Gibbs–Duhem equation. Note that in general activity coefficients are dimensionless. In detail: Raoult's law states that the partial pressure of component B is related to its vapor pressure (saturation pressure) and its mole fraction formula_8 in the liquid phase, formula_9 with the convention formula_10 In other words: Pure liquids represent the ideal case. At infinite dilution, the activity coefficient approaches its limiting value, formula_7∞. Comparison with Henry's law, formula_11 immediately gives formula_12 In other words: The compound shows nonideal behavior in the dilute case. The above definition of the activity coefficient is impractical if the compound does not exist as a pure liquid. This is often the case for electrolytes or biochemical compounds. In such cases, a different definition is used that considers infinite dilution as the ideal state: formula_13 with formula_14 and formula_15 The formula_16 symbol has been used here to distinguish between the two kinds of activity coefficients. Usually it is omitted, as it is clear from the context which kind is meant. But there are cases where both kinds of activity coefficients are needed and may even appear in the same equation, e.g., for solutions of salts in (water + alcohol) mixtures. This is sometimes a source of errors. Modifying mole fractions or concentrations by activity coefficients gives the "effective activities" of the components, and hence allows expressions such as Raoult's law and equilibrium constants to be applied to both ideal and non-ideal mixtures. 
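As a concrete illustration of the Raoult's-law convention, the short sketch below computes the activity coefficient from a measured partial pressure; the numbers are illustrative placeholders rather than data for any particular mixture:

    def activity_coefficient(p_partial, x, p_sat):
        """Raoult's-law activity coefficient: gamma = p_B / (x_B * p_B_saturation)."""
        return p_partial / (x * p_sat)

    # A component at mole fraction 0.25 with pure vapor pressure 20 kPa,
    # observed to exert a partial pressure of 6 kPa above the mixture:
    gamma = activity_coefficient(6.0, 0.25, 20.0)   # -> 1.2, a positive deviation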
Knowledge of activity coefficients is particularly important in the context of electrochemistry since the behaviour of electrolyte solutions is often far from ideal, due to the effects of the ionic atmosphere. Additionally, they are particularly important in the context of soil chemistry due to the low volumes of solvent and, consequently, the high concentration of electrolytes. Ionic solutions. For solutions of substances which ionize in solution, the activity coefficients of the cation and anion cannot be experimentally determined independently of each other because solution properties depend on both ions. Single ion activity coefficients must be linked to the activity coefficient of the dissolved electrolyte as if undissociated. In this case a mean stoichiometric activity coefficient of the dissolved electrolyte, "γ"±, is used. It is called stoichiometric because it expresses both the deviation from the ideality of the solution and the incomplete ionic dissociation of the ionic compound which occurs especially with the increase of its concentration. For a 1:1 electrolyte, such as NaCl, it is given by the following: formula_17 where formula_18 and formula_19 are the activity coefficients of the cation and anion respectively. More generally, the mean activity coefficient of a compound of formula formula_20 is given by formula_21 Single-ion activity coefficients can be calculated theoretically, for example by using the Debye–Hückel equation. The theoretical equation can be tested by combining the calculated single-ion activity coefficients to give mean values which can be compared to experimental values. The prevailing view that single ion activity coefficients are unmeasurable independently, or perhaps even physically meaningless, has its roots in the work of Guggenheim in the late 1920s. However, chemists have never been able to give up the idea of single ion activities, and by implication single ion activity coefficients. For example, pH is defined as the negative logarithm of the hydrogen ion activity. If the prevailing view on the physical meaning and measurability of single ion activities is correct, then defining pH as the negative logarithm of the hydrogen ion activity places the quantity squarely in the unmeasurable category. Recognizing this logical difficulty, the International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only. Despite the prevailing negative view on the measurability of single ion coefficients, the concept of single ion activities continues to be discussed in the literature, and at least one author presents a definition of single ion activity in terms of purely thermodynamic quantities and proposes a method of measuring single ion activity coefficients based on purely thermodynamic processes. Concentrated ionic solutions. For concentrated ionic solutions the hydration of ions must be taken into consideration, as was done by Stokes and Robinson in their 1948 hydration model. The activity coefficient of the electrolyte is split into electric and statistical components by E. Glueckauf, who modified the Robinson–Stokes model. The statistical part includes the hydration index number h, the number ν of ions from the dissociation, the ratio r between the apparent molar volume of the electrolyte and the molar volume of water, and the molality b. The statistical part of the activity coefficient in concentrated solution is: formula_22 The Stokes–Robinson model has been analyzed and improved by other investigators as well. 
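The mean-activity formula combines naturally with the Debye–Hückel limiting law, which is valid only at very low ionic strength. In the sketch below, A ≈ 0.509 is the usual Debye–Hückel constant for water at 25 °C (in molality units), and the 0.001 mol/kg molality is an arbitrary illustration:

    import math

    def debye_hueckel_limiting(z, ionic_strength, A=0.509):
        """log10 of a single-ion activity coefficient in the dilute limit."""
        return -A * z ** 2 * math.sqrt(ionic_strength)

    def mean_activity_coefficient(z_plus, z_minus, p, q, ionic_strength):
        """Mean activity coefficient for an electrolyte of formula A_p B_q."""
        g_plus = 10 ** debye_hueckel_limiting(z_plus, ionic_strength)
        g_minus = 10 ** debye_hueckel_limiting(z_minus, ionic_strength)
        return (g_plus ** p * g_minus ** q) ** (1 / (p + q))

    # 0.001 mol/kg NaCl (1:1 electrolyte, so I = 0.001): gamma_mean ~ 0.96
    print(mean_activity_coefficient(1, -1, 1, 1, 0.001))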
Experimental determination of activity coefficients. Activity coefficients may be determined experimentally by making measurements on non-ideal mixtures. Use may be made of Raoult's law or Henry's law to provide a value for an ideal mixture against which the experimental value may be compared to obtain the activity coefficient. Other colligative properties, such as osmotic pressure, may also be used. Radiochemical methods. Activity coefficients can be determined by radiochemical methods. At infinite dilution. Activity coefficients for binary mixtures are often reported at the infinite dilution of each component. Because activity coefficient models simplify at infinite dilution, such empirical values can be used to estimate interaction energies. Theoretical calculation of activity coefficients. Activity coefficients of electrolyte solutions may be calculated theoretically, using the Debye–Hückel equation or extensions such as the Davies equation, Pitzer equations or TCPC model. Specific ion interaction theory (SIT) may also be used. For non-electrolyte solutions, correlative methods such as UNIQUAC, NRTL, MOSCED or UNIFAC may be employed, provided fitted component-specific or model parameters are available. COSMO-RS is a theoretical method which is less dependent on model parameters, as the required information is obtained from quantum-mechanical calculations specific to each molecule (sigma profiles) combined with a statistical-thermodynamics treatment of surface segments. For uncharged species, the activity coefficient "γ"0 mostly follows a salting-out model: formula_23 This simple model predicts activities of many species (dissolved undissociated gases such as CO2, H2S, NH3, undissociated acids and bases) to high ionic strengths (up to 5 mol/kg). The value of the constant "b" for CO2 is 0.11 at 10 °C and 0.20 at 330 °C. For water as solvent, the activity "a"w can be calculated using: formula_24 where "ν" is the number of ions produced from the dissociation of one molecule of the dissolved salt, "b" is the molality of the salt dissolved in water, "φ" is the osmotic coefficient of water, and the constant 55.51 represents the molality of water. In the above equation, the logarithm of the solvent activity (here water) decreases in proportion to the ratio of salt particles to solvent particles. Link to ionic diameter. The ionic activity coefficient is connected to the ionic diameter by the formula obtained from Debye–Hückel theory of electrolytes: formula_25 where "A" and "B" are constants, "zi" is the valence number of the ion, "I" is the ionic strength, and "a" is the ionic diameter. Dependence on state parameters. The derivative of an activity coefficient with respect to temperature is related to excess molar enthalpy by formula_26 Similarly, the derivative of an activity coefficient with respect to pressure can be related to excess molar volume. formula_27 Application to chemical equilibrium. At equilibrium, the sum of the chemical potentials of the reactants is equal to the sum of the chemical potentials of the products. The Gibbs free energy change for the reactions, Δr"G", is equal to the difference between these sums and therefore, at equilibrium, is equal to zero. Thus, for an equilibrium such as formula_28 formula_29 Substitute in the expressions for the chemical potential of each reactant: formula_30 Upon rearrangement this expression becomes formula_31 The sum "σμ"S⊖ + "τμ"T⊖ − "αμ"A⊖ − "βμ"B⊖ is the standard free energy change for the reaction, formula_32. 
Therefore, formula_33 where K is the equilibrium constant. Note that activities and equilibrium constants are dimensionless numbers. This derivation serves two purposes. It shows the relationship between the standard free energy change and the equilibrium constant. It also shows that an equilibrium constant is defined as a quotient of activities. In practical terms this is inconvenient. When each activity is replaced by the product of a concentration and an activity coefficient, the equilibrium constant is defined as formula_34 where [S] denotes the concentration of S, etc. In practice equilibrium constants are determined in a medium such that the quotient of activity coefficients is constant and can be ignored, leading to the usual expression formula_35 which applies under the condition that the activity-coefficient quotient has a particular (constant) value. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
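As a small illustration, with made-up numbers, of how the activity-coefficient quotient links the thermodynamic constant to the concentration quotient, consider the equilibrium A + B ⇌ S:

    def quotient(values, nu):
        """Product of values[X] ** nu[X]; stoichiometric nu is negative for reactants."""
        q = 1.0
        for species, power in nu.items():
            q *= values[species] ** power
        return q

    conc  = {"A": 0.10, "B": 0.20, "S": 0.05}   # illustrative concentrations
    gamma = {"A": 0.80, "B": 0.75, "S": 0.90}   # illustrative activity coefficients
    nu    = {"A": -1, "B": -1, "S": 1}

    K_conc   = quotient(conc, nu)              # the concentration quotient
    K_thermo = K_conc * quotient(gamma, nu)    # corrected by the gamma quotient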
[ { "math_id": 0, "text": "\\mu_\\mathrm{B}" }, { "math_id": 1, "text": "\\mu_\\mathrm{B} = \\mu_\\mathrm{B}^{\\ominus} + RT \\ln x_\\mathrm{B} \\," }, { "math_id": 2, "text": "\\mathrm{B}" }, { "math_id": 3, "text": " x_\\mathrm{B} " }, { "math_id": 4, "text": "\\mu_\\mathrm{B} = \\mu_\\mathrm{B}^{\\ominus} + RT \\ln a_\\mathrm{B} \\," }, { "math_id": 5, "text": "a_\\mathrm{B}" }, { "math_id": 6, "text": "a_\\mathrm{B} = x_\\mathrm{B} \\gamma_\\mathrm{B}" }, { "math_id": 7, "text": "\\gamma_\\mathrm{B}" }, { "math_id": 8, "text": "x_\\mathrm{B}" }, { "math_id": 9, "text": " p_\\mathrm{B} = x_\\mathrm{B} \\gamma_\\mathrm{B} p^{\\sigma}_\\mathrm{B} \\;," }, { "math_id": 10, "text": " \\lim_{x_\\mathrm{B} \\to 1} \\gamma_\\mathrm{B} = 1 \\;." }, { "math_id": 11, "text": " p_\\mathrm{B} = K_{\\mathrm{H,B}} x_\\mathrm{B} \\quad \\text{for} \\quad x_\\mathrm{B} \\to 0 \\;," }, { "math_id": 12, "text": "K_{\\mathrm{H,B}} = p_\\mathrm{B}^\\sigma \\gamma_\\mathrm{B}^\\infty \\;." }, { "math_id": 13, "text": "\\gamma_\\mathrm{B}^\\dagger \\equiv \\gamma_\\mathrm{B} / \\gamma_\\mathrm{B}^\\infty" }, { "math_id": 14, "text": " \\lim_{x_\\mathrm{B} \\to 0} \\gamma_\\mathrm{B}^\\dagger = 1 \\;," }, { "math_id": 15, "text": " \\mu_\\mathrm{B} = \\underbrace{\\mu_\\mathrm{B}^\\ominus + RT \\ln \\gamma_\\mathrm{B}^\\infty}_{\\mu_\\mathrm{B}^{\\ominus\\dagger}} + RT \\ln \\left(x_\\mathrm{B} \\gamma_\\mathrm{B}^\\dagger\\right)" }, { "math_id": 16, "text": "^\\dagger" }, { "math_id": 17, "text": " \\gamma_\\pm=\\sqrt{\\gamma_+\\gamma_-}" }, { "math_id": 18, "text": "\\gamma_\\mathrm{+}" }, { "math_id": 19, "text": "\\gamma_\\mathrm{-}" }, { "math_id": 20, "text": "A_\\mathrm{p} B_\\mathrm{q}" }, { "math_id": 21, "text": " \\gamma_\\pm=\\sqrt[p+q]{\\gamma_\\mathrm{A}^p\\gamma_\\mathrm{B}^q}" }, { "math_id": 22, "text": "\\ln \\gamma_s = \\frac{h- \\nu}{\\nu} \\ln \\left (1 + \\frac{br}{55.5} \\right) - \\frac{h}{\\nu} \\ln \\left (1 - \\frac{br}{55.5} \\right) + \\frac{br(r + h -\\nu)}{55.5 \\left (1 + \\frac{br}{55.5} \\right)}" }, { "math_id": 23, "text": " \\log_{10}(\\gamma_{0}) = b I" }, { "math_id": 24, "text": " \\ln(a_\\mathrm{w}) = \\frac{-\\nu b}{55.51} \\varphi" }, { "math_id": 25, "text": "\\log (\\gamma_{i}) = - \\frac {A z_i^2 \\sqrt {I}}{1+ B a \\sqrt {I}}" }, { "math_id": 26, "text": "\\bar{H}^{\\mathsf{E}}_i= -RT^2 \\frac{\\partial}{\\partial T}\\ln(\\gamma_i)" }, { "math_id": 27, "text": "\\bar{V}^{\\mathsf{E}}_i= RT \\frac{\\partial}{\\partial P}\\ln(\\gamma_i)" }, { "math_id": 28, "text": " \\alpha_\\mathrm{A} + \\beta_\\mathrm{B} = \\sigma_\\mathrm{S} + \\tau_\\mathrm{T}," }, { "math_id": 29, "text": " \\Delta_\\mathrm{r} G = \\sigma \\mu_\\mathrm{S} + \\tau \\mu_\\mathrm{T} - (\\alpha \\mu_\\mathrm{A} + \\beta \\mu_\\mathrm{B}) = 0\\," }, { "math_id": 30, "text": " \\Delta_\\mathrm{r} G = \\sigma \\mu_S^\\ominus + \\sigma RT \\ln a_\\mathrm{S} + \\tau \\mu_\\mathrm{T}^\\ominus + \\tau RT \\ln a_\\mathrm{T} -(\\alpha \\mu_\\mathrm{A}^\\ominus + \\alpha RT \\ln a_\\mathrm{A} + \\beta \\mu_\\mathrm{B}^\\ominus + \\beta RT \\ln a_\\mathrm{B})=0" }, { "math_id": 31, "text": " \\Delta_\\mathrm{r} G =\\left(\\sigma \\mu_\\mathrm{S}^\\ominus+\\tau \\mu_\\mathrm{T}^\\ominus -\\alpha \\mu_\\mathrm{A}^\\ominus- \\beta \\mu_\\mathrm{B}^\\ominus \\right) + RT \\ln \\frac{a_\\mathrm{S}^\\sigma a_\\mathrm{T}^\\tau} {a_\\mathrm{A}^\\alpha a_\\mathrm{B}^\\beta} =0" }, { "math_id": 32, "text": "\\Delta_\\mathrm{r} G^\\ominus" }, { "math_id": 33, "text": " \\Delta_r G^\\ominus = -RT \\ln K " }, { 
"math_id": 34, "text": "K= \\frac{[\\mathrm{S}]^\\sigma[\\mathrm{T}]^\\tau}{[\\mathrm{A}]^\\alpha[\\mathrm{B}]^\\beta} \\times \\frac{\\gamma_\\mathrm{S}^\\sigma \\gamma_\\mathrm{T}^\\tau}{\\gamma_\\mathrm{A}^\\alpha \\gamma_\\mathrm{B}^\\beta}" }, { "math_id": 35, "text": "K= \\frac{[\\mathrm{S}]^\\sigma[\\mathrm{T}]^\\tau}{[\\mathrm{A}]^\\alpha[\\mathrm{B}]^\\beta}" } ]
https://en.wikipedia.org/wiki?curid=1449031
1449175
Degree matrix
Type of matrix in algebraic graph theory In the mathematical field of algebraic graph theory, the degree matrix of an undirected graph is a diagonal matrix which contains information about the degree of each vertex—that is, the number of edges attached to each vertex. It is used together with the adjacency matrix to construct the Laplacian matrix of a graph: the Laplacian matrix is the difference of the degree matrix and the adjacency matrix. Definition. Given a graph formula_0 with formula_1, the degree matrix formula_2 for formula_3 is a formula_4 diagonal matrix defined as formula_5 where the degree formula_6 of a vertex counts the number of times an edge terminates at that vertex. In an undirected graph, this means that each loop increases the degree of a vertex by two. In a directed graph, the term "degree" may refer either to indegree (the number of incoming edges at each vertex) or outdegree (the number of outgoing edges at each vertex). Example. As an example, the degree matrix of an undirected graph on six vertices is a 6×6 diagonal matrix whose diagonal entries are the six vertex degrees. Note that in the case of undirected graphs, an edge that starts and ends in the same node increases the corresponding degree value by 2 (i.e. it is counted twice). Properties. The degree matrix of a k-regular graph has a constant diagonal of formula_7. According to the degree sum formula, the trace of the degree matrix is twice the number of edges of the considered graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
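A minimal sketch of the definition follows, using the numpy library purely as a convenience (any matrix representation would do): the degree matrix is built from an adjacency matrix, and subtracting the adjacency matrix then yields the Laplacian mentioned above:

    import numpy as np

    def degree_matrix(adjacency):
        """Diagonal matrix of vertex degrees from a symmetric adjacency matrix.
        (For loops, place 2 on the adjacency diagonal so each loop counts twice.)"""
        return np.diag(adjacency.sum(axis=1))

    # Path graph on three vertices: 0 - 1 - 2
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
    D = degree_matrix(A)    # diag(1, 2, 1); trace 4 = twice the number of edges
    L = D - A               # the graph Laplacian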
[ { "math_id": 0, "text": "G=(V,E)" }, { "math_id": 1, "text": "|V|=n" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "n \\times n" }, { "math_id": 5, "text": "D_{i,j}:=\\left\\{\n\\begin{matrix} \n\\deg(v_i) & \\mbox{if}\\ i = j \\\\\n0 & \\mbox{otherwise}\n\\end{matrix}\n\\right.\n" }, { "math_id": 6, "text": "\\deg(v_i)" }, { "math_id": 7, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=1449175
144940
Longitudinal wave
Waves in which the direction of media displacement is parallel (along) to the direction of travel Longitudinal waves are waves in which the vibration of the medium is parallel to the direction the wave travels and displacement of the medium is in the same (or opposite) direction of the wave propagation. Mechanical longitudinal waves are also called "compressional" or compression waves, because they produce compression and rarefaction when travelling through a medium, and pressure waves, because they produce increases and decreases in pressure. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Real-world examples include sound waves (vibrations in pressure, particle displacement, and particle velocity propagated in an elastic medium) and seismic P-waves (created by earthquakes and explosions). The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse waves, for instance, describe "some" bulk sound waves in solid materials (but not in fluids); these are also called "shear waves" to differentiate them from the (longitudinal) pressure waves that these materials also support. Nomenclature. "Longitudinal waves" and "transverse waves" have been abbreviated by some authors as "L-waves" and "T-waves", respectively, for their own convenience. While these two abbreviations have specific meanings in seismology (L-wave for Love wave or long wave) and electrocardiography (see T wave), some authors chose to use "l-waves" (lowercase 'L') and "t-waves" instead, although they are not commonly found in physics writings except for some popular science books. Sound waves. For longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula formula_0 where "y" is the displacement of the point on the travelling sound wave, "x" is the distance from the point to the wave's source, "t" is the time elapsed, "y"0 is the amplitude of the oscillations, "c" is the speed of the wave, and "ω" is the angular frequency of the wave. The quantity "x"/"c" is the time that the wave takes to travel the distance "x". The ordinary frequency ("f") of the wave is given by formula_1 The wavelength can be calculated as the relation between a wave's speed and ordinary frequency. formula_2 For sound waves, the amplitude of the wave is the difference between the pressure of the undisturbed air and the maximum pressure caused by the wave. Sound's propagation speed depends on the type, temperature, and composition of the medium through which it propagates. Speed of longitudinal waves. Isotropic medium. For isotropic solids and liquids, the speed of a longitudinal wave can be described by formula_3 where formula_4 is the modulus of longitudinal elasticity, given by formula_5 with formula_6 the shear modulus and formula_7 the bulk modulus, and formula_8 is the density of the medium. Attenuation of longitudinal waves. The attenuation of a wave in a medium describes the loss of energy a wave carries as it propagates throughout the medium. This is caused by the scattering of the wave at interfaces, the loss of energy due to the friction between molecules, or geometric divergence. The study of attenuation of elastic waves in materials has increased in recent years, particularly within the study of polycrystalline materials, where researchers aim to "nondestructively evaluate the degree of damage of engineering components" and to "develop improved procedures for characterizing microstructures", according to a research team led by R. Bruce Thompson in a "Wave Motion" publication. Attenuation in viscoelastic materials. 
In viscoelastic materials, the attenuation coefficients per length, formula_9 for longitudinal waves and formula_10 for transverse waves, must satisfy the following ratio: formula_11 where formula_12 and formula_13 are the transverse and longitudinal wave speeds respectively. Attenuation in polycrystalline materials. Polycrystalline materials are made up of various crystal grains which form the bulk material. Due to the difference in crystal structure and properties of these grains, when a wave propagating through a polycrystal crosses a grain boundary, a scattering event occurs, causing scattering-based attenuation of the wave. Additionally, it has been shown that the ratio rule for viscoelastic materials, formula_11, applies equally successfully to polycrystalline materials. A current prediction for modeling attenuation of waves in polycrystalline materials with elongated grains is the second-order approximation (SOA) model, which accounts for the second order of inhomogeneity, allowing for the consideration of multiple scattering in the crystal system. This model predicts that the shape of the grains in a polycrystal has little effect on attenuation. Pressure waves. The equations for sound in a fluid given above also apply to acoustic waves in an elastic solid. Although solids also support transverse waves (known as S-waves in seismology), longitudinal sound waves in the solid exist with a velocity and wave impedance dependent on the material's density and its rigidity, the latter of which is described (as with sound in a gas) by the material's bulk modulus. In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster. Electromagnetics. Maxwell's equations lead to the prediction of electromagnetic waves in a vacuum, which are strictly transverse waves; because such waves have no medium of particles to vibrate upon, the electric and magnetic fields of which the wave consists are perpendicular to the direction of the wave's propagation. However, plasma waves are longitudinal, since these are not electromagnetic waves but density waves of charged particles, which can couple to the electromagnetic field. After Heaviside's attempts to generalize Maxwell's equations, Heaviside concluded that electromagnetic waves were not to be found as longitudinal waves in "free space" or homogeneous media. Maxwell's equations, as we now understand them, retain that conclusion: in free space or other uniform isotropic dielectrics, electromagnetic waves are strictly transverse. However, electromagnetic waves can display a longitudinal component in the electric and/or magnetic fields when traversing birefringent materials, or inhomogeneous materials, especially at interfaces (surface waves, for instance, such as Zenneck waves). In the development of modern physics, Alexandru Proca (1897–1955) was known for developing relativistic quantum field equations bearing his name (Proca's equations) which apply to the massive vector spin-1 mesons. In recent decades some other theorists, such as Jean-Pierre Vigier and Bo Lehnert of the Swedish Royal Society, have used the Proca equation in an attempt to demonstrate photon mass as a longitudinal electromagnetic component of Maxwell's equations, suggesting that longitudinal electromagnetic waves could exist in a Dirac polarized vacuum. However, photon rest mass is strongly doubted by almost all physicists and is incompatible with the Standard Model of physics. 
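The speed formula and the viscoelastic attenuation bound above combine into a short numerical sketch; the material constants are illustrative stand-ins roughly appropriate for steel, not measured values:

    import math

    K_b, G, rho = 160e9, 80e9, 7850            # bulk modulus, shear modulus, density (SI)
    c_L = math.sqrt((K_b + 4 * G / 3) / rho)   # longitudinal speed, ~5.8 km/s
    c_T = math.sqrt(G / rho)                   # transverse (shear) speed, ~3.2 km/s
    wavelength = c_L / 2e6                     # ~2.9 mm at 2 MHz, from lambda = c/f

    # Any admissible pair of attenuation coefficients must satisfy the ratio rule:
    lower_bound = 4 * c_T ** 3 / (3 * c_L ** 3)
    def admissible(alpha_L, alpha_T):
        return alpha_L / alpha_T >= lower_bound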
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y(x,t) = y_0 \\cos\\! \\bigg( \\omega\\! \\left(t-\\frac{x}{c}\\right)\\! \\bigg)" }, { "math_id": 1, "text": " f = \\frac{\\omega}{2 \\pi}." }, { "math_id": 2, "text": " \\lambda =\\frac{c}{f}." }, { "math_id": 3, "text": "v_l=\\sqrt{E_l/\\rho}" }, { "math_id": 4, "text": "E_l\n" }, { "math_id": 5, "text": "E_L = K_b + \\frac{4G}{3}" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "K_B\n" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "\\alpha_l\n" }, { "math_id": 10, "text": "\\alpha_T\n\n" }, { "math_id": 11, "text": "\\frac{\\alpha_L}{\\alpha_T}\\geq \\frac{4c_T^3}{3c_L^3}\n" }, { "math_id": 12, "text": "c_T\n\n" }, { "math_id": 13, "text": "c_L\n" } ]
https://en.wikipedia.org/wiki?curid=144940
144948
Universal joint
Mechanism with bendable rotation axis A universal joint (also called a universal coupling or U-joint) is a joint or coupling connecting rigid shafts whose axes are inclined to each other. It is commonly used in shafts that transmit rotary motion. It consists of a pair of hinges located close together, oriented at 90° to each other, connected by a cross shaft. The universal joint is not a constant-velocity joint. U-joints are also sometimes called by various eponymous names, such as "Cardan joint" (after Gerolamo Cardano), "Hooke's joint" (after Robert Hooke), and "Spicer joint" (after Clarence W. Spicer), as discussed in the history below. History. The main concept of the universal joint is based on the design of gimbals, which have been in use since antiquity. One anticipation of the universal joint was its use by the ancient Greeks on ballistae. In Europe the universal joint is often called the Cardano joint (and a drive shaft that uses the joints, a Cardan shaft), after the 16th century Italian mathematician, Gerolamo Cardano, who was an early writer on gimbals, although his writings mentioned only gimbal mountings, not universal joints. The mechanism was later described in "Technica curiosa sive mirabilia artis" (1664) by Gaspar Schott, who mistakenly claimed that it was a constant-velocity joint. Shortly afterward, between 1667 and 1675, Robert Hooke analysed the joint and found that its speed of rotation was nonuniform, but that this property could be used to track the motion of the shadow on the face of a sundial. In fact, the component of the equation of time which accounts for the tilt of the equatorial plane relative to the ecliptic is entirely analogous to the mathematical description of the universal joint. The first recorded use of the term 'universal joint' for this device was by Hooke in 1676, in his book "Helioscopes". He published a description in 1678, resulting in the use of the term "Hooke's joint" in the English-speaking world. In 1683, Hooke proposed a solution to the nonuniform rotary speed of the universal joint: a pair of Hooke's joints 90° out of phase at either end of an intermediate shaft, an arrangement that is now known as a type of constant-velocity joint. Christopher Polhem of Sweden later re-invented the universal joint, giving rise to the name "Polhemsknut" ("Polhem knot") in Swedish. In 1841, the English scientist Robert Willis analyzed the motion of the universal joint. By 1845, the French engineer and mathematician Jean-Victor Poncelet had analyzed the movement of the universal joint using spherical trigonometry. The term "universal joint" was used in the 18th century and was in common use in the 19th century. Edmund Morewood's 1844 patent for a metal coating machine called for a universal joint, by that name, to accommodate small alignment errors between the engine and rolling mill shafts. Ephriam Shay's locomotive patent of 1881, for example, used double universal joints in the locomotive's drive shaft. Charles Amidon used a much smaller universal joint in his bit-brace patented 1884. Beauchamp Tower's spherical, rotary, high speed steam engine used an adaptation of the universal joint c. 1885. The term 'Cardan joint' appears to be a latecomer to the English language. Many early uses in the 19th century appear in translations from French or are strongly influenced by French usage. Examples include an 1868 report on the "Exposition Universelle" of 1867 and an article on the dynamometer translated from French in 1881. In the 20th century, Clarence W. 
Spicer and the Spicer Manufacturing Company, as well as the Hardy Spicer successor brand, helped further popularize universal joints in the automotive, farm equipment, heavy equipment, and industrial machinery industries. Equation of motion. The Cardan joint suffers from one major problem: even when the input drive shaft axle rotates at a constant speed, the output drive shaft axle rotates at a variable speed, thus causing vibration and wear. The variation in the speed of the driven shaft depends on the configuration of the joint, which is specified by three variables: formula_0, the angle of rotation for axle 1; formula_1, the angle of rotation for axle 2; and formula_5, the bend angle of the joint, i.e. the angle between the two axles' axes of rotation. These variables are illustrated in the diagram on the right. Also shown are a set of fixed coordinate axes with unit vectors formula_6 and formula_7 and the planes of rotation of each axle. These planes of rotation are perpendicular to the axes of rotation and do not move as the axles rotate. The two axles are joined by a gimbal which is not shown. However, axle 1 attaches to the gimbal at the red points on the red plane of rotation in the diagram, and axle 2 attaches at the blue points on the blue plane. Coordinate systems fixed with respect to the rotating axles are defined as having their x-axis unit vectors (formula_8 and formula_9) pointing from the origin towards one of the connection points. As shown in the diagram, formula_8 is at angle formula_0 with respect to its beginning position along the "x" axis and formula_9 is at angle formula_1 with respect to its beginning position along the "y" axis. formula_8 is confined to the "red plane" in the diagram and is related to formula_0 by: formula_10 formula_9 is confined to the "blue plane" in the diagram and is the result of the unit vector on the "x" axis formula_11 being rotated through Euler angles formula_12]: formula_13 A constraint on the formula_8 and formula_9 vectors is that since they are fixed in the gimbal, they must remain at right angles to each other. This is so when their dot product equals zero: formula_14 Thus the equation of motion relating the two angular positions is given by: formula_15 with a formal solution for formula_1: formula_16 The solution for formula_1 is not unique, since the arctangent function is multivalued; however, it is required that the solution for formula_1 be continuous over the angles of interest. For example, the following explicit solution using the atan2(y, x) function will be valid for formula_17: formula_18 The angles formula_0 and formula_1 in a rotating joint will be functions of time. Differentiating the equation of motion with respect to time and using the equation of motion itself to eliminate a variable yields the relationship between the angular velocities formula_19 and formula_20: formula_21 As shown in the plots, the angular velocities are not linearly related, but rather are periodic with a period half that of the rotating shafts. The angular velocity equation can again be differentiated to get the relation between the angular accelerations formula_22 and formula_23: formula_24 
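A minimal numerical sketch of these relations (angles in radians; the 20° bend angle is an arbitrary illustration) confirms that over one revolution the velocity ratio oscillates between cos β and 1/cos β:

    import math

    def output_angle(gamma1, beta):
        """Continuous solution for gamma2, valid for -pi < gamma1 < pi."""
        return math.atan2(math.sin(gamma1), math.cos(beta) * math.cos(gamma1))

    def velocity_ratio(gamma1, beta):
        """omega2 / omega1 for a single Cardan joint."""
        return math.cos(beta) / (1 - math.sin(beta) ** 2 * math.cos(gamma1) ** 2)

    beta = math.radians(20)
    print(velocity_ratio(0.0, beta))           # 1/cos(beta) ~ 1.064 (fastest)
    print(velocity_ratio(math.pi / 2, beta))   # cos(beta)   ~ 0.940 (slowest)

Double Cardan shaft. A configuration known as a double Cardan joint drive shaft partially overcomes the problem of jerky rotation. This configuration uses two U-joints joined by an intermediate shaft, with the second U-joint phased in relation to the first U-joint to cancel the changing angular velocity. 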
In this configuration, the angular velocity of the driven shaft will match that of the driving shaft, provided that both the driving shaft and the driven shaft are at equal angles with respect to the intermediate shaft (but not necessarily in the same plane) and that the two universal joints are 90 degrees out of phase. This assembly is commonly employed in rear wheel drive vehicles, where it is known as a drive shaft or propeller (prop) shaft. Even when the driving and driven shafts are at equal angles with respect to the intermediate shaft, if these angles are greater than zero, oscillating moments are applied to the three shafts as they rotate. These tend to bend them in a direction perpendicular to the common plane of the shafts. This applies forces to the support bearings and can cause "launch shudder" in rear wheel drive vehicles. The intermediate shaft will also have a sinusoidal component to its angular velocity, which contributes to vibration and stresses. Mathematically, this can be shown as follows: If formula_2 and formula_4 are the angles for the input and output of the universal joint connecting the drive and the intermediate shafts respectively, and formula_25 and formula_26 are the angles for the input and output of the universal joint connecting the intermediate and the output shafts respectively, and each pair is at angle formula_3 with respect to each other, then: formula_27 If the second universal joint is rotated 90 degrees with respect to the first, then formula_28. Using the fact that formula_29 yields: formula_30 and it is seen that the output drive is just 90 degrees out of phase with the input shaft, yielding a constant-velocity drive. Note: the angles of the input and output shafts of each universal joint are measured from mutually perpendicular reference axes, so in an absolute sense the forks of the intermediate shaft are parallel to each other (since one fork acts as the input and the other as the output, and the 90-degree phase difference above is measured between the forks). Double Cardan joint. A double Cardan joint consists of two universal joints mounted back to back with a centre yoke; the centre yoke replaces the intermediate shaft. Provided that the angle between the input shaft and centre yoke is equal to the angle between the centre yoke and the output shaft, the second Cardan joint will cancel the velocity errors introduced by the first Cardan joint and the aligned double Cardan joint will act as a CV joint. Thompson coupling. A Thompson coupling is a refined version of the double Cardan joint. It offers slightly increased efficiency with the penalty of a great increase in complexity. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\gamma_1" }, { "math_id": 1, "text": "\\gamma_2" }, { "math_id": 2, "text": "\\gamma_1\\," }, { "math_id": 3, "text": "\\beta\\," }, { "math_id": 4, "text": "\\gamma_2\\," }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "\\hat{\\mathbf{x}}" }, { "math_id": 7, "text": "\\hat{\\mathbf{y}}" }, { "math_id": 8, "text": "\\hat{\\mathbf{x}}_1" }, { "math_id": 9, "text": "\\hat{\\mathbf{x}}_2" }, { "math_id": 10, "text": "\\hat{\\mathbf{x}}_1 = \\left[\\cos\\gamma_1\\,,\\, \\sin\\gamma_1\\,,\\,0\\right]" }, { "math_id": 11, "text": "\\hat{x} = [1, 0, 0]" }, { "math_id": 12, "text": "[\\pi\\!/2\\,,\\, \\beta\\,,\\, \\gamma_2" }, { "math_id": 13, "text": "\n\\hat{\\mathbf{x}}_2 = \\left[-\\cos\\beta\\sin\\gamma_2\\,,\\, \\cos\\gamma_2\\,,\\, \\sin\\beta\\sin\\gamma_2\\right]\n" }, { "math_id": 14, "text": "\n\\hat{\\mathbf{x}}_1 \\cdot \\hat{\\mathbf{x}}_2 = 0\n" }, { "math_id": 15, "text": "\n\\tan\\gamma_1 = \\cos\\beta\\tan\\gamma_2\\,\n" }, { "math_id": 16, "text": "\\gamma_2 = \\tan^{-1}\\left[\\tan\\gamma_1 \\sec\\beta\\right]\\," }, { "math_id": 17, "text": "-\\pi < \\gamma_1 < \\pi" }, { "math_id": 18, "text": "\\gamma_2 = \\operatorname{atan2}\\left(\\sin\\gamma_1, \\cos\\beta\\, \\cos\\gamma_1\\right)" }, { "math_id": 19, "text": "\\omega_1 = d\\gamma_1/dt" }, { "math_id": 20, "text": "\\omega_2 = d\\gamma_2/dt" }, { "math_id": 21, "text": "\n\\omega_2 = \\omega_1\\left(\\frac{\\cos\\beta}{1 - \\sin^2\\beta\\,\\cos^2\\gamma_1}\\right)\n" }, { "math_id": 22, "text": "a_1" }, { "math_id": 23, "text": "a_2" }, { "math_id": 24, "text": "\na_2 = \\frac{a_1\\cos\\beta}{1 - \\sin^2\\beta\\,\\cos^2\\gamma_1} - \\frac{\\omega_1^2\\cos\\beta\\,\\sin^2\\beta\\,\\sin 2\\gamma_1}{\\left(1 - \\sin^2\\beta\\,\\cos^2\\gamma_1\\right)^2}\n" }, { "math_id": 25, "text": "\\gamma_3\\," }, { "math_id": 26, "text": "\\gamma_4\\," }, { "math_id": 27, "text": "\\tan\\gamma_2 = \\cos\\beta\\,\\tan\\gamma_1\\qquad \\tan\\gamma_4 = \\cos\\beta\\,\\tan\\gamma_3" }, { "math_id": 28, "text": "\\gamma_3 = \\gamma_2 + \\pi/2" }, { "math_id": 29, "text": "\\tan(\\gamma + \\pi/2) = 1/\\tan\\gamma" }, { "math_id": 30, "text": "\\tan\\gamma_4 = \\frac{\\cos\\beta}{\\tan\\gamma_2} = \\frac{1}{\\tan\\gamma_1} = \\tan\\left(\\gamma_1 + \\frac{\\pi}{2}\\right)\\," } ]
https://en.wikipedia.org/wiki?curid=144948
1449523
Preload (cardiology)
Heart muscle stretch at rest In cardiac physiology, preload is the amount of sarcomere stretch experienced by cardiac muscle cells, called cardiomyocytes, at the end of ventricular filling during diastole. Preload is directly related to ventricular filling. As the relaxed ventricle fills during diastole, the walls are stretched and the length of the sarcomeres increases. Sarcomere length can be approximated by the volume of the ventricle because each shape has a conserved surface-area-to-volume ratio. This is useful clinically because measuring the sarcomere length is destructive to heart tissue: it requires cutting out a piece of cardiac muscle to look at the sarcomeres under a microscope. It is currently not possible to directly measure preload in the beating heart of a living animal. Preload is therefore estimated from end-diastolic ventricular pressure and is measured in millimeters of mercury (mmHg). Estimating preload. Though not exactly equivalent to the strict definition of "preload," end-diastolic volume is better suited to the clinic. It is relatively straightforward to estimate the volume of a healthy, filled left ventricle by visualizing the 2D cross-section with cardiac ultrasound. This technique is less helpful for estimating right ventricular preload because it is difficult to calculate the volume of an asymmetrical chamber. In cases of rapid heart rate, it can be difficult to capture the moment of maximum fill at the end of diastole, which means the volume may be difficult to measure in children or during tachycardia. An alternative to estimating the end-diastolic volume of the heart is to measure the end-diastolic pressure. This is possible because pressure and volume are related to one another according to Boyle's law, which can be simplified to formula_0. The end-diastolic pressure of the right ventricle can be measured directly with a Swan-Ganz catheter. For the left ventricle, end-diastolic pressure is most commonly estimated by taking the pulmonary wedge pressure, which is approximately equal to the pressure in the left atrium when the lungs are healthy. When the heart is healthy, the diastolic pressures in the left atrium and left ventricle are equal. When both the heart and lungs are healthy, pulmonary wedge pressure is therefore equal to left ventricular diastolic pressure and can be used as a surrogate for preload. Pulmonary wedge pressure will overestimate left ventricular pressure in people with mitral valve stenosis, pulmonary hypertension, and other heart and lung conditions. Estimation of preload may also be inaccurate in chronically dilated ventricles because additional new sarcomeres cause the relaxed ventricle to appear enlarged. Factors affecting preload. Preload is affected by venous blood pressure and the rate of venous return. These are affected by venous tone and the volume of circulating blood. Preload is related to the ventricular end-diastolic volume; a higher end-diastolic volume implies a higher preload. However, the relationship is not simple because of the restriction of the term preload to single myocytes. Preload can still be approximated by the inexpensive echocardiographic measurement of end-diastolic volume (EDV). Preload increases with exercise (slightly), increasing blood volume (as in edema, excessive blood transfusion (overtransfusion), or polycythemia) and neuroendocrine activity (sympathetic tone). An arteriovenous fistula can increase preload. Preload is also affected by two main body "pumps": the respiratory pump and the skeletal muscle pump. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "P \\propto \\frac{1}{V}" } ]
https://en.wikipedia.org/wiki?curid=1449523
14496121
Conductance (graph theory)
A mixing property of Markov chains and graphs In theoretical computer science, graph theory, and mathematics, the conductance is a parameter of a Markov chain that is closely tied to its mixing time, that is, how rapidly the chain converges to its stationary distribution, should it exist. Equivalently, the conductance can be viewed as a parameter of a directed graph, in which case it can be used to analyze how quickly random walks in the graph converge. The conductance of a graph is closely related to the Cheeger constant of the graph, which is also known as the edge expansion or the isoperimetric number. However, due to subtly different definitions, the conductance and the edge expansion do not generally coincide if the graphs are not regular. On the other hand, the notion of electrical conductance that appears in electrical networks is unrelated to the conductance of a graph. History. The conductance was first defined by Mark Jerrum and Alistair Sinclair in 1988 to prove that the permanent of a matrix with entries from {0,1} has a polynomial-time approximation scheme. In the proof, Jerrum and Sinclair studied the Markov chain that switches between perfect and near-perfect matchings in bipartite graphs by adding or removing individual edges. They defined and used the conductance to prove that this Markov chain is rapidly mixing. This means that, after running the Markov chain for a polynomial number of steps, the resulting distribution is guaranteed to be close to the stationary distribution, which in this case is the uniform distribution on the set of all perfect and near-perfect matchings. This rapidly mixing Markov chain makes it possible in polynomial time to draw approximately uniform random samples from the set of all perfect matchings in the bipartite graph, which in turn gives rise to the polynomial-time approximation scheme for computing the permanent. Definition. For undirected d-regular graphs formula_0 without edge weights, the conductance formula_1 is equal to the Cheeger constant formula_2 divided by d, that is, we have formula_3. More generally, let formula_0 be a directed graph with formula_4 vertices, vertex set formula_5, edge set formula_6, and real weights formula_7 on each edge formula_8. Let formula_9 be any vertex subset. The conductance formula_10 of the cut formula_11 is defined via formula_12 where formula_13 and so formula_14 is the total weight of all edges that cross the cut from formula_15 to formula_16, and formula_17 is the "volume" of formula_15, that is, the total weight of all edges that start at formula_15. If formula_18 equals formula_19, then formula_14 also equals formula_19 and formula_10 is defined as formula_20. The conductance formula_1 of the graph formula_0 is now defined as the minimum conductance over all possible cuts: formula_21 Equivalently, the conductance satisfies formula_22 Generalizations and applications. In practical applications, one often considers the conductance only over a cut. A common generalization of conductance is to handle the case of weights assigned to the edges: then the weights are added; if the weight is in the form of a resistance, then the reciprocal weights are added. The notion of conductance underpins the study of percolation in physics and other applied areas; thus, for example, the permeability of petroleum through porous rock can be modeled in terms of the conductance of a graph, with weights given by pore sizes. Conductance also helps measure the quality of a spectral clustering.
The maximum among the conductances of the clusters provides a bound which can be used, along with the inter-cluster edge weight, to define a measure of the quality of a clustering. Intuitively, the conductance of a cluster (which can be seen as a set of vertices in a graph) should be low. Apart from this, the conductance of the subgraph induced by a cluster (called the "internal conductance") can be used as well. Markov chains. For an ergodic reversible Markov chain with an underlying graph "G", the conductance is a way to measure how hard it is to leave a small set of nodes. Formally, the conductance of a graph is defined as the minimum over all sets formula_15 of the capacity of formula_15 divided by the ergodic flow out of formula_15. Alistair Sinclair showed that conductance is closely tied to mixing time in ergodic reversible Markov chains. We can also view conductance in a more probabilistic way, as the probability of leaving a set of nodes given that we started in that set. This may also be written as formula_23 where formula_24 is the stationary distribution of the chain. In some literature, this quantity is also called the bottleneck ratio of "G". Conductance is related to Markov chain mixing time in the reversible setting. Precisely, for any irreducible, reversible Markov chain with self-loop probabilities formula_25 for all states formula_26 and an initial state formula_27, formula_28. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
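To make the definition concrete, here is a minimal brute-force sketch (Python with NumPy; the example graph and all identifiers are invented for illustration) that enumerates every cut, evaluates formula_12 for it, and returns the minimum:

```python
from itertools import combinations
import numpy as np

def conductance(A):
    """phi(G) for a weighted directed graph with adjacency matrix A (a_ij >= 0),
    by exhaustive enumeration of cuts; exponential in n, so toy-sized only."""
    n = A.shape[0]
    vol = A.sum(axis=1)               # row sums: vol(S) = total out-weight of S
    best = 1.0                        # cuts with vol(S) = 0 have phi(S) = 1
    for k in range(1, n):
        for S in combinations(range(n), k):
            S = list(S)
            T = [v for v in range(n) if v not in S]
            cut = A[np.ix_(S, T)].sum()               # a(S, S-bar)
            denom = min(vol[S].sum(), vol[T].sum())   # min(vol(S), vol(S-bar))
            if denom > 0:
                best = min(best, cut / denom)
    return best

# Two triangles joined by one light edge: the sparsest cut separates them.
A = np.zeros((6, 6))
for i, j, w in [(0, 1, 1), (0, 2, 1), (1, 2, 1),
                (3, 4, 1), (3, 5, 1), (4, 5, 1), (2, 3, 0.1)]:
    A[i, j] = A[j, i] = w             # symmetric weights, an undirected example
print(conductance(A))                 # 0.1 / 6.1, roughly 0.0164
```

Applied to an unweighted d-regular graph, the same routine returns the Cheeger constant divided by d, matching the relation formula_3 quoted above.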
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\varphi(G)" }, { "math_id": 2, "text": "h(G)" }, { "math_id": 3, "text": "\\varphi(G) = h(G) / d" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "V" }, { "math_id": 6, "text": "E" }, { "math_id": 7, "text": "a_{ij} \\geq 0" }, { "math_id": 8, "text": "ij\\in E" }, { "math_id": 9, "text": "S\\subseteq V" }, { "math_id": 10, "text": "\\varphi(S)" }, { "math_id": 11, "text": "(S, \\bar S)" }, { "math_id": 12, "text": "\\varphi(S) = \\frac{\\displaystyle a(S,\\bar S)}{\\min(\\mathrm{vol}(S),\\mathrm{vol}(\\bar S))}\\,," }, { "math_id": 13, "text": "a(S,T) = \\sum_{i \\in S} \\sum_{j \\in T} a_{ij}\\,," }, { "math_id": 14, "text": "a(S,\\bar S)" }, { "math_id": 15, "text": "S" }, { "math_id": 16, "text": "\\bar S" }, { "math_id": 17, "text": "\\mathrm{vol}(S) = a(S,V)= \\sum_{i \\in S} \\sum_{j \\in V} a_{ij}" }, { "math_id": 18, "text": "\\mathrm{vol}(S)" }, { "math_id": 19, "text": "0" }, { "math_id": 20, "text": "1" }, { "math_id": 21, "text": "\\varphi(G) = \\min_{S \\subseteq V}\\varphi(S)." }, { "math_id": 22, "text": "\\varphi(G) = \\min\\left\\{\\frac{a(S,\\bar S)}{\\mathrm{vol}(S)}\\;\\colon\\; {\\mathrm{vol}(S)\\leq \\frac{\\mathrm{vol}(V)}{2}}\\right\\}\\,." }, { "math_id": 23, "text": "\\Phi = \\min_{S \\subseteq V, 0 < \\pi(S) \\leq \\frac{1}{2}}\\Phi_S = \\min_{S \\subseteq V, 0 < \\pi(S) \\leq \\frac{1}{2}}\\frac{\\sum_{x \\in S, y \\in \\bar S} \\pi(x) P(x,y)}{\\pi(S)}, " }, { "math_id": 24, "text": "\\pi" }, { "math_id": 25, "text": " P(y,y) \\geq 1/2" }, { "math_id": 26, "text": "y" }, { "math_id": 27, "text": "x \\in \\Omega" }, { "math_id": 28, "text": "\\frac{1}{4 \\Phi} \\leq \\tau_x(\\delta) \\leq \\frac{2}{\\Phi^2} \\big( \\ln \\pi(x)^{-1} + \\ln \\delta^{-1} \\big) " } ]
https://en.wikipedia.org/wiki?curid=14496121
14498167
Goldbach–Euler theorem
In mathematics, the Goldbach–Euler theorem (also known as Goldbach's theorem) states that the sum of 1/("p" − 1) over the set of perfect powers "p", excluding 1 and omitting repetitions, converges to 1: formula_0 This result was first published in Euler's 1737 paper "Variæ observationes circa series infinitas". Euler attributed the result to a letter (now lost) from Goldbach. Proof. Goldbach's original proof to Euler involved assigning a constant to the harmonic series: formula_1, which is divergent. Such a proof is not considered rigorous by modern standards. There is a strong resemblance between the method of sieving out powers employed in his proof and the method of factorization used to derive Euler's product formula for the Riemann zeta function. Let formula_2 be given by formula_3 Since the sum of the reciprocals of the powers of 2 satisfies formula_4, subtracting the terms with powers of 2 from formula_2 gives formula_5 Repeat the process with the terms involving the powers of 3: formula_6 formula_7 Absent from the above sum are now all terms with powers of 2 and 3. Continue by removing the terms with powers of 5, 6, and so on, until the right-hand side has been reduced to the value 1. Eventually, we obtain the equation formula_8 which we rearrange into formula_9 where the denominators consist of all positive integers that are one less than a non-power. By subtracting the previous equation from the definition of formula_2 given above, we obtain formula_10 where the denominators now consist only of perfect powers minus 1. While lacking mathematical rigor, Goldbach's proof provides a reasonably intuitive argument for the theorem's truth. Rigorous proofs require a proper and more careful treatment of the divergent terms of the harmonic series. Other proofs make use of the fact that the sum of 1/("p" − 1) over the set of perfect powers "p", excluding 1 but including repetitions, converges to 1 by demonstrating the equivalence: formula_11 References. <templatestyles src="Reflist/styles.css" />
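The convergence can also be checked numerically. Below is a minimal sketch (Python; the truncation bound N is an arbitrary choice of ours) that sums 1/(p − 1) over all distinct perfect powers p ≤ N; the tail of the series is dominated by the squares, so the truncated sum falls short of 1 by roughly 1/√N:

```python
N = 10 ** 8

# Collect all perfect powers m^k <= N (k >= 2) in a set, so that repeated
# representations such as 64 = 2^6 = 4^3 = 8^2 are counted only once.
powers = set()
m = 2
while m * m <= N:
    p = m * m
    while p <= N:
        powers.add(p)
        p *= m
    m += 1

total = sum(1.0 / (p - 1) for p in powers)
print(total)   # about 0.9999 for N = 10^8, approaching 1 as N grows
```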
[ { "math_id": 0, "text": "\\sum_{p}^{\\infty }\\frac{1}{p-1}= {\\frac{1}{3} + \\frac{1}{7} + \\frac{1}{8}+ \\frac{1}{15} + \\frac{1}{24} + \\frac{1}{26}+ \\frac{1}{31}}+ \\cdots = 1." }, { "math_id": 1, "text": " \\textstyle x = \\sum_{n=1}^\\infty \\frac{1}{n}" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "x = 1 + \\frac{1}{2} + \\frac{1}{3} + \\frac{1}{4} + \\frac{1}{5} + \\frac{1}{6} + \\frac{1}{7} + \\frac{1}{8} + \\cdots" }, { "math_id": 4, "text": " \\textstyle 1 = \\frac{1}{2} + \\frac{1}{4} + \\frac{1}{8} + \\frac{1}{16} + \\cdots" }, { "math_id": 5, "text": "x - 1 = 1 + \\frac{1}{3} + \\frac{1}{5} + \\frac{1}{6} + \\frac{1}{7} + \\frac{1}{9} + \\frac{1}{10} + \\frac{1}{11} + \\cdots" }, { "math_id": 6, "text": "\\textstyle \\frac{1}{2} = \\frac{1}{3} + \\frac{1}{9} + \\frac{1}{27} + \\frac{1}{81} + \\cdots" }, { "math_id": 7, "text": "x - 1 - \\frac{1}{2} = 1 + \\frac{1}{5} + \\frac{1}{6} + \\frac{1}{7} + \\frac{1}{10} + \\frac{1}{11} + \\frac{1}{12} + \\cdots" }, { "math_id": 8, "text": "x - 1 - \\frac{1}{2} - \\frac{1}{4} - \\frac{1}{5} - \\frac{1}{6} - \\frac{1}{9} - \\cdots = 1" }, { "math_id": 9, "text": "x - 1 = 1 + \\frac{1}{2} + \\frac{1}{4} + \\frac{1}{5} + \\frac{1}{6} + \\frac{1}{9} + \\cdots" }, { "math_id": 10, "text": "1 = \\frac{1}{3} + \\frac{1}{7} + \\frac{1}{8}+ \\frac{1}{15} + \\frac{1}{24} + \\frac{1}{26}+ \\frac{1}{31} + \\cdots" }, { "math_id": 11, "text": "\\sum_p^\\infty \\frac{1}{p - 1} = \\sum_{m=2}^\\infty \\sum_{n=2}^\\infty \\frac 1 {m^n} = 1." } ]
https://en.wikipedia.org/wiki?curid=14498167
14498641
Engin Arık
Turkish physicist Engin Arık (October 14, 1948 – November 30, 2007) was a Turkish particle physicist and professor at Boğaziçi University. She led the Turkish participation in a number of experiments at CERN. Arık was a prominent supporter of Turkey's membership in CERN and of the founding of a national particle accelerator center as a means to utilize thorium as an energy source. She also represented Turkey at the Comprehensive Nuclear Test Ban Treaty Organization for a number of years. She died in the Atlasjet Flight 4203 crash on November 30, 2007. Education. Arık graduated from Istanbul University in 1969 with a BSc in physics and mathematics. As a graduate student, Arık attended the University of Pittsburgh, where she earned a master's degree in 1971 and a PhD in 1976 in experimental high energy physics, working on the E583 experiment at Brookhaven National Laboratory. Arık's thesis was titled "Inclusive lambda production in sigma minus - proton collisions at 23 GeV/c." Following her PhD, Arık went to Westfield College, University of London, for postdoctoral work. There she worked on high energy physics research carried out at the Rutherford Laboratory and later at CERN. While working as a postdoctoral researcher, she contributed to the "measurement of observables in formula_0." Career. In 1979, Arık returned to Turkey and joined the Department of Physics at Boğaziçi University, first as a lecturer, then in 1981 as an associate professor. In 1983, Arık briefly left her position at the university to work in industry with Control Data Corporation. Arık returned to Boğaziçi University in 1985, and in 1988 she received a full professorship. While teaching at Boğaziçi University, Arık performed research in the field of high energy physics. Her work faced limitations due to a scarcity of resources available in Turkey for this area of research. At the beginning of the 1990s, she joined experiments at CERN as a collaborator. The experiments she was a part of include CHARM II, CHORUS, the Spin Muon Collaboration (SMC), ATLAS, and the CERN Axion Solar Telescope (CAST). During her career, Arık was a supporter of a movement for Turkey to become a full member of CERN as opposed to an associate member. A supporter of women in science, she was amongst the founders of the ATLAS Women's Network. From 1997 to 2000, Arık was appointed to represent Turkey at the Comprehensive Nuclear Test Ban Treaty Organization, whose meetings were held at the headquarters of the International Atomic Energy Agency (IAEA) in Vienna, Austria. During this time, Arık commuted between Geneva, Istanbul and Vienna. Arık spoke often about the use of thorium as an energy source in a new generation of nuclear power plants, calling it "the most strategic material of the 21st century." Throughout her career, Arık published more than 100 studies in the fields of experimental high energy physics (HEP), detectors, applications of nuclear physics, and mathematical physics. She was the vice president of the Turkish Physical Society between 2001 and 2003. After her death, she was described as a "bannerbearer" for HEP in her country, and "one of the engin(es)" of the HEP community. Death and legacy. Arık died in the Atlasjet Flight 4203 crash on November 30, 2007. She was traveling with two students and three colleagues to Isparta, Turkey, for the fourth workshop on a potential Turkish particle accelerator design.
Until 2015, the fellowship supported a total of 45 Turkish students so that they could attend CERN's Summer Student Program. Funding for the fellowship was provided by institutes, individuals, and private businesses. An international conference was held at Boğaziçi University in İstanbul on October 27–31, 2008, in memory of Arık and her colleagues. Another iteration was held three years later, organized jointly by Doğuş and Boğaziçi Universities, with support from CERN and the Turkish Academy of Sciences. In 2013, her name was given to the main conference room of the accelerator institute building she helped found. The building is now part of TARLA, the Turkish Accelerator Radiation Laboratory. A street has been named after Arık in the İlkyerleşim neighborhood of the Yenimahalle district in Ankara, Turkey. A monument at Süleyman Demirel University commemorating the six scientists who died in the plane crash includes a bust of Arık. Assassination allegations. Various allegations have been made that Engin Arık's death was an assassination; after the plane crash, some groups claimed that the accident was premeditated. An investigation into the matter was opened and is still ongoing.
[ { "math_id": 0, "text": "\\Pi^+p\\rightarrow\\Kappa^+\\Sigma^+" } ]
https://en.wikipedia.org/wiki?curid=14498641
14500225
2-deoxyglucosidase
Class of enzymes The enzyme 2-deoxyglucosidase (EC 3.2.1.112) catalyzes the following chemical reaction a 2-deoxy-α--glucoside + H2O formula_0 2-deoxy--glucose + an alcohol It belongs to the family of hydrolases, specifically those glycosidases that hydrolyse "O"- and "S"-glycosyl compounds. The systematic name is 2-deoxy-α--glucoside deoxyglucohydrolase. Other names in common use include 2-deoxy-α-glucosidase, and 2-deoxy-α--glucosidase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14500225
14500247
3-deoxyoctulosonase
Class of enzymes In enzymology, a 3-deoxyoctulosonase (EC 3.2.1.144) is an enzyme that catalyzes the chemical reaction 3-deoxyoctulosonyl-lipopolysaccharide + H2O formula_0 3-deoxyoctulosonic acid + lipopolysaccharide Thus, the two substrates of this enzyme are 3-deoxyoctulosonyl-lipopolysaccharide and H2O, whereas its two products are 3-deoxyoctulosonic acid and lipopolysaccharide. This enzyme belongs to the family of hydrolases, specifically those glycosidases that hydrolyse O- and S-glycosyl compounds. The systematic name of this enzyme class is 3-deoxyoctulosonyl-lipopolysaccharide hydrolase. This enzyme is also called alpha-Kdo-ase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14500247