id | title | text | formulas | url
---|---|---|---|---|
14045888 | Peptidylglycine monooxygenase | In enzymology, a peptidylglycine monooxygenase (EC 1.14.17.3) is an enzyme that catalyzes the chemical reaction
peptidylglycine + ascorbate + O2 formula_0 peptidyl(2-hydroxyglycine) + dehydroascorbate + H2O
The 3 substrates of this enzyme are peptidylglycine, ascorbate, and O2, whereas its 3 products are peptidyl(2-hydroxyglycine), dehydroascorbate, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with reduced ascorbate as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is peptidylglycine,ascorbate:oxygen oxidoreductase (2-hydroxylating). Other names in common use include 2-hydroxylase, alpha-amidating enzyme, peptide-alpha-amide synthetase, peptide alpha-amidating enzyme, peptide alpha-amide synthase, alpha-hydroxylase, alpha-amidating monooxygenase, PAM-A, PAM-B, and PAM. It employs one cofactor, copper.
Structural studies.
As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1OPM, 1PHM, 1SDW, 1YI9, 1YIP, 1YJK, 1YJL, and 3PHM.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045888 |
140459 | Base (chemistry) | Type of chemical substance
In chemistry, there are three definitions in common use of the word "base": "Arrhenius bases", "Brønsted bases", and "Lewis bases". All definitions agree that bases are substances that react with acids, as originally proposed by G.-F. Rouelle in the mid-18th century.
In 1884, Svante Arrhenius proposed that a base is a substance which dissociates in aqueous solution to form hydroxide ions OH−. These ions can react with hydrogen ions (H+ according to Arrhenius) from the dissociation of acids to form water in an acid–base reaction. A base was therefore a metal hydroxide such as NaOH or Ca(OH)2. Such aqueous hydroxide solutions were also described by certain characteristic properties. They are slippery to the touch, can taste bitter and change the color of pH indicators (e.g., turn red litmus paper blue).
In water, by altering the autoionization equilibrium, bases yield solutions in which the hydrogen ion activity is lower than it is in pure water, i.e., the water has a pH higher than 7.0 at standard conditions. A soluble base is called an alkali if it contains and releases OH− ions quantitatively. Metal oxides, hydroxides, and especially alkoxides are basic, and conjugate bases of weak acids are weak bases.
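To make this shift concrete, here is a minimal Python sketch (added for illustration; the function name is hypothetical, and Kw = 10−14 is the standard 25 °C value) that computes the pH once a base has raised the hydroxide concentration:

```python
import math

KW = 1e-14  # water autoionization constant Kw at 25 degrees C

def ph_for_hydroxide(oh_molar: float) -> float:
    """pH of an aqueous solution whose [OH-] is oh_molar (mol/L).

    The base shifts the autoionization equilibrium, so
    [H3O+] = Kw / [OH-], and pH = -log10([H3O+]).
    """
    h3o = KW / oh_molar
    return -math.log10(h3o)

print(ph_for_hydroxide(1e-7))  # 7.0  -- pure water
print(ph_for_hydroxide(1e-3))  # 11.0 -- e.g. 0.001 M of a fully dissociated base
```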
Bases and acids are seen as chemical opposites because the effect of an acid is to increase the hydronium (H3O+) concentration in water, whereas bases reduce this concentration. A reaction between aqueous solutions of an acid and a base is called neutralization, producing a solution of water and a salt in which the salt separates into its component ions. If the aqueous solution is saturated with a given salt solute, any additional such salt precipitates out of the solution.
In the more general Brønsted–Lowry acid–base theory (1923), a base is a substance that can accept hydrogen cations (H+)—otherwise known as protons. This does include aqueous hydroxides since OH− does react with H+ to form water, so that Arrhenius bases are a subset of Brønsted bases. However, there are also other Brønsted bases which accept protons, such as aqueous solutions of ammonia (NH3) or its organic derivatives (amines). These bases do not contain a hydroxide ion but nevertheless react with water, resulting in an increase in the concentration of hydroxide ion. Also, some non-aqueous solvents contain Brønsted bases which react with solvated protons. For example, in liquid ammonia, NH2− is the basic ion species which accepts protons from NH4+, the acidic species in this solvent.
G. N. Lewis realized that water, ammonia, and other bases can form a bond with a proton due to the unshared pair of electrons that the bases possess. In the Lewis theory, a base is an electron pair donor which can share a pair of electrons with an electron acceptor which is described as a Lewis acid. The Lewis theory is more general than the Brønsted model because the Lewis acid is not necessarily a proton, but can be another molecule (or ion) with a vacant low-lying orbital which can accept a pair of electrons. One notable example is boron trifluoride (BF3).
Some other definitions of both bases and acids have been proposed in the past, but are not commonly used today.
Properties.
General properties of bases include:
Reactions between bases and water.
The following reaction represents the general reaction between a base (B) and water to produce a conjugate acid (BH+) and a conjugate base (OH−):<chem display="block">{B}_{(aq)} + {H2O}_{(l)} <=> {BH+}_{(aq)} + {OH-}_{(aq)}</chem>The equilibrium constant, Kb, for this reaction can be found using the following general equation:
formula_0
In this equation, the base (B) and the extremely strong base (the conjugate base OH−) compete for the proton. As a result, bases that react with water have relatively small equilibrium constant values. The base is weaker when it has a lower equilibrium constant value.
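As a worked illustration of using Kb (a sketch added here, not from the original article; the function name and sample numbers are assumptions), the pH of a weak-base solution can be estimated with the common approximation [OH−] ≈ √(Kb·C), valid when the degree of ionization is small:

```python
import math

def weak_base_ph(kb: float, conc: float) -> float:
    """Approximate pH of a weak base of concentration conc (mol/L).

    Uses [OH-] ~ sqrt(Kb * C), valid when Kb << C, with
    pH + pOH = 14 at 25 degrees C.
    """
    oh = math.sqrt(kb * conc)   # approximate hydroxide concentration
    poh = -math.log10(oh)
    return 14.0 - poh

# Example: 0.10 M ammonia with Kb = 1.8e-5 (the value quoted later
# in this article for NH3 at 25 degrees C)
print(round(weak_base_ph(1.8e-5, 0.10), 2))  # ~11.13
```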
Neutralization of acids.
Bases react with acids to neutralize each other at a fast rate both in water and in alcohol. When dissolved in water, the strong base sodium hydroxide ionizes into hydroxide and sodium ions:
<chem>NaOH -> Na+ + OH-</chem>
and similarly, in water the acid hydrogen chloride forms hydronium and chloride ions:
<chem>HCl + H2O -> H3O+ + Cl-</chem>
When the two solutions are mixed, the H3O+ and OH- ions combine to form water molecules:
<chem>H3O+ + OH- -> 2H2O</chem>
If equal quantities of NaOH and HCl are dissolved, the base and the acid neutralize exactly, leaving only NaCl, effectively table salt, in solution.
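The arithmetic of such a neutralization is easy to check. The sketch below (illustrative only; the function name and quantities are assumptions) computes the pH after mixing a strong acid and a strong base from whichever ion remains in excess:

```python
import math

def ph_after_mixing(mol_hcl: float, mol_naoh: float, volume_l: float) -> float:
    """pH after mixing mol_hcl mol of HCl with mol_naoh mol of NaOH
    in a total volume of volume_l litres (25 degrees C)."""
    excess = mol_hcl - mol_naoh
    if excess > 0:    # leftover H3O+
        return -math.log10(excess / volume_l)
    if excess < 0:    # leftover OH-: pH = 14 - pOH
        return 14.0 + math.log10(-excess / volume_l)
    return 7.0        # exact neutralization: only the neutral salt NaCl remains

print(ph_after_mixing(0.010, 0.010, 0.100))            # 7.0
print(round(ph_after_mixing(0.012, 0.010, 0.100), 2))  # ~1.70 (acid in excess)
```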
Weak bases, such as baking soda or egg white, should be used to neutralize any acid spills. Neutralizing acid spills with strong bases, such as sodium hydroxide or potassium hydroxide, can cause a violent exothermic reaction, and the base itself can cause just as much damage as the original acid spill.
Alkalinity of non-hydroxides.
Bases are generally compounds that can neutralize an amount of acid. Both sodium carbonate and ammonia are bases, although neither of these substances contains OH- groups. Both compounds accept H+ when dissolved in protic solvents such as water:
<chem>Na2CO3 + H2O -> 2Na+ + HCO3- + OH-</chem>
<chem>NH3 + H2O -> NH4+ + OH-</chem>
From this, a pH, or acidity, can be calculated for aqueous solutions of bases.
A base is also defined as a molecule that can form a bond by donating an electron pair into another atom's valence shell. Only a limited number of elements provide atoms that can give a molecule basic properties: carbon can act as a base, as can nitrogen and oxygen, and fluorine and sometimes the noble gases possess this ability as well. This occurs typically in compounds such as butyllithium, alkoxides, and metal amides such as sodium amide. Bases of carbon, nitrogen, and oxygen without resonance stabilization are usually very strong bases, or superbases, which cannot exist in aqueous solution due to the acidity of water. Resonance stabilization, however, enables weaker bases such as carboxylates; for example, sodium acetate is a weak base.
Strong bases.
A strong base is a basic chemical compound that can remove a proton (H+) from (or "deprotonate") a molecule of even a very weak acid (such as water) in an acid–base reaction. Common examples of strong bases include hydroxides of alkali metals and alkaline earth metals, like NaOH and Ca(OH)2, respectively. Some strong bases, such as the alkaline earth hydroxides, have low solubility and can be used where that low solubility is not a limitation.
One advantage of this low solubility is that many antacids are suspensions of metal hydroxides such as aluminium hydroxide and magnesium hydroxide. These compounds' low solubility limits the rise in hydroxide ion concentration, preventing harm to the tissues of the mouth, oesophagus, and stomach. As the reaction continues and the salts dissolve, the stomach acid reacts with the hydroxide produced by the suspensions.
Strong bases hydrolyze in water almost completely, resulting in the leveling effect. In this process, the water molecule, owing to its amphoteric character, combines with the strong base, and a hydroxide ion is released. Very strong bases can even deprotonate very weakly acidic C–H groups in the absence of water. Here is a list of several strong bases:
The cations of these strong bases appear in the first and second groups of the periodic table (the alkali and alkaline earth metals). Tetraalkylated ammonium hydroxides are also strong bases since they dissociate completely in water. Guanidine is a special case of a species that is exceptionally stable when protonated, analogous to the reason that perchloric acid and sulfuric acid are very strong acids.
Acids with a pKa of more than about 13 are considered very weak, and their conjugate bases are strong bases.
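This conjugate relationship can be made quantitative: at 25 °C, Ka·Kb = Kw = 10−14, so pKa + pKb = 14. The short sketch below (a hypothetical helper added for illustration) shows why the conjugate base of a very weak acid is strong:

```python
def conjugate_kb(ka: float, kw: float = 1e-14) -> float:
    """Kb of an acid's conjugate base, from Ka * Kb = Kw (25 degrees C)."""
    return kw / ka

# A very weak acid with pKa ~ 16 (roughly ethanol): its conjugate
# base (ethoxide) has Kb = 100 -- a strong base.
print(conjugate_kb(1e-16))       # 100.0
# Acetic acid, pKa ~ 4.76: its conjugate base (acetate) is weak.
print(conjugate_kb(10**-4.76))   # ~5.8e-10
```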
Superbases.
Group 1 salts of carbanions, amide ions, and hydrides tend to be even stronger bases due to the extreme weakness of their conjugate acids, which are stable hydrocarbons, amines, and dihydrogen. Usually, these bases are created by adding pure alkali metals such as sodium to the conjugate acid. They are called "superbases", and it is impossible to keep them in aqueous solution because they are stronger bases than the hydroxide ion (see the leveling effect). For example, the ethoxide ion (the conjugate base of ethanol) undergoes this reaction quantitatively in the presence of water:
<chem>CH3CH2O- + H2O -> CH3CH2OH + OH-</chem>
Examples of common superbases are:
The strongest superbases have been synthesized only in the gas phase:
Weak bases.
A weak base is one which does not fully ionize in an aqueous solution, or in which protonation is incomplete. For example, ammonia transfers a proton to water according to the equation
NH3(aq) + H2O(l) ⇌ NH4+(aq) + OH-(aq)
The equilibrium constant for this reaction at 25 °C is 1.8 × 10−5, such that the extent of reaction or degree of ionization is quite small.
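That smallness can be verified by solving the equilibrium exactly. The following sketch (an illustrative calculation, not from the original text) solves Kb = x²/(C − x) for a 0.10 M ammonia solution:

```python
import math

def base_ionization(kb: float, conc: float) -> tuple[float, float]:
    """Solve Kb = x^2 / (C - x) exactly for x = [OH-].

    Returns ([OH-] in mol/L, degree of ionization x/C).
    """
    # Rearranged: x^2 + Kb*x - Kb*C = 0; take the positive root.
    x = (-kb + math.sqrt(kb * kb + 4.0 * kb * conc)) / 2.0
    return x, x / conc

oh, alpha = base_ionization(1.8e-5, 0.10)
print(f"[OH-] = {oh:.2e} M, fraction ionized = {alpha:.2%}")
# [OH-] = 1.33e-03 M, fraction ionized = 1.33% -- indeed quite small
```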
Lewis bases.
A Lewis base or "electron-pair donor" is a molecule with one or more high-energy lone pairs of electrons which can be shared with a low-energy vacant orbital in an acceptor molecule to form an adduct. In addition to H+, possible "electron-pair acceptors" (Lewis acids) include neutral molecules such as BF3 and high oxidation state metal ions such as Ag2+, Fe3+ and Mn7+. Adducts involving metal ions are usually described as coordination complexes.
According to the original formulation of Lewis, when a neutral base forms a bond with a neutral acid, a condition of electric stress occurs. The acid and the base share the electron pair that formerly belonged to the base. As a result, a high dipole moment is created, which can only be decreased to zero by rearranging the molecules.
Solid bases.
Examples of solid bases include:
The basic strength of a solid surface is determined by its ability to form a conjugate base by adsorbing an electrically neutral acid. The "number of basic sites per unit surface area of the solid" is used to express the basic strength of a solid base catalyst. Scientists have developed two methods to measure the number of basic sites: titration with benzoic acid using indicators, and gaseous acid adsorption. A solid with enough basic strength will adsorb an electrically neutral acidic indicator and change the indicator's color to that of its conjugate base. In the gaseous acid adsorption method, carbon dioxide is used; the number of basic sites is then determined from the amount of carbon dioxide adsorbed.
Bases as catalysts.
Basic substances can be used as insoluble heterogeneous catalysts for chemical reactions. Some examples are metal oxides such as magnesium oxide, calcium oxide, and barium oxide as well as potassium fluoride on alumina and some zeolites. Many transition metals make good catalysts, many of which form basic substances. Basic catalysts are used for hydrogenation, the migration of double bonds, in the Meerwein-Ponndorf-Verley reduction, the Michael reaction, and many others. Both CaO and BaO can be highly active catalysts if they are heated to high temperatures.
Monoprotic and polyprotic bases.
Bases with only one ionizable hydroxide (OH−) ion per formula unit are called monoprotic, since they can accept one proton (H+). Bases with more than one OH− per formula unit are polyprotic.
The number of ionizable hydroxide (OH−) ions present in one formula unit of a base is also called the acidity of the base. On the basis of acidity, bases can be classified into three types: monoacidic, diacidic, and triacidic.
Monoacidic bases.
When one molecule of a base produces one hydroxide ion on complete ionization, the base is said to be monoacidic or monoprotic. Examples of monoacidic bases are:
Sodium hydroxide, potassium hydroxide, silver hydroxide, ammonium hydroxide, etc.
Diacidic bases.
When one molecule of a base produces two hydroxide ions on complete ionization, the base is said to be diacidic or diprotic. Examples of diacidic bases are:
Barium hydroxide, magnesium hydroxide, calcium hydroxide, zinc hydroxide, iron(II) hydroxide, tin(II) hydroxide, lead(II) hydroxide, copper(II) hydroxide, etc.
Triacidic bases.
When one molecule of a base produces three hydroxide ions on complete ionization, the base is said to be triacidic or triprotic. Examples of triacidic bases are:
Aluminium hydroxide, iron(III) hydroxide, gold(III) hydroxide, etc.
Etymology of the term.
The concept of base stems from an older alchemical notion of "the matrix":
<templatestyles src="Template:Blockquote/styles.css" />The term "base" appears to have been first used in 1717 by the French chemist, Louis Lémery, as a synonym for the older Paracelsian term "matrix." In keeping with 16th-century animism, Paracelsus had postulated that naturally occurring salts grew within the earth as a result of a universal acid or seminal principle having impregnated an earthy matrix or womb. ... Its modern meaning and general introduction into the chemical vocabulary, however, is usually attributed to the French chemist, Guillaume-François Rouelle. ... In 1754 Rouelle explicitly defined a neutral salt as the product formed by the union of an acid with any substance, be it a water-soluble alkali, a volatile alkali, an absorbent earth, a metal, or an oil, capable of serving as "a base" for the salt "by giving it a concrete or solid form." Most acids known in the 18th century were volatile liquids or "spirits" capable of distillation, whereas salts, by their very nature, were crystalline solids. Hence it was the substance that neutralized the acid which supposedly destroyed the volatility or spirit of the acid and which imparted the property of solidity (i.e., gave a concrete base) to the resulting salt.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "K_b = \\frac{[BH^+][OH^-]}{[B]}"
}
]
| https://en.wikipedia.org/wiki?curid=140459 |
14045901 | Phenol 2-monooxygenase | Class of enzymes
In enzymology, a phenol 2-monooxygenase (EC 1.14.13.7) is an enzyme that catalyzes the chemical reaction
phenol + NADPH + H+ + O2 formula_0 catechol + NADP+ + H2O
The 4 substrates of this enzyme are phenol, NADPH, H+, and O2, whereas its 3 products are catechol, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is phenol,NADPH:oxygen oxidoreductase (2-hydroxylating). Other names in common use include phenol hydroxylase and phenol o-hydroxylase. This enzyme participates in 3 metabolic pathways: gamma-hexachlorocyclohexane degradation, toluene and xylene degradation, and naphthalene and anthracene degradation. It employs one cofactor, FAD.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1FOH, 1HQI, and 1PN0.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045901 |
14045915 | Phenylacetone monooxygenase | Class of enzymes
In enzymology, a phenylacetone monooxygenase (EC 1.14.13.92) is an enzyme that catalyzes the chemical reaction
phenylacetone + NADPH + H+ + O2 formula_0 benzyl acetate + NADP+ + H2O
The 4 substrates of this enzyme are phenylacetone, NADPH, H+, and O2, whereas its 3 products are benzyl acetate, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is phenylacetone,NADPH:oxygen oxidoreductase. This enzyme is also called PAMO.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045915 |
14045922 | Phosphatidylcholine 12-monooxygenase | Class of enzymes
In enzymology, a phosphatidylcholine 12-monooxygenase (EC 1.14.13.26) is an enzyme that catalyzes the chemical reaction
1-acyl-2-oleoyl-sn-glycero-3-phosphocholine + NADH + H+ + O2 formula_0 1-acyl-2-[(S)-12-hydroxyoleoyl]-sn-glycero-3-phosphocholine + NAD+ + H2O
The 4 substrates of this enzyme are 1-acyl-2-oleoyl-sn-glycero-3-phosphocholine, NADH, H+, and O2, whereas its 3 products are 1-acyl-2-[(S)-12-hydroxyoleoyl]-sn-glycero-3-phosphocholine, NAD+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 1-acyl-2-oleoyl-sn-glycero-3-phosphocholine,NADH:oxygen oxidoreductase (12-hydroxylating). Other names in common use include ricinoleic acid synthase, oleate Delta12-hydroxylase, and oleate Delta12-monooxygenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045922 |
14045933 | Phthalate 4,5-dioxygenase | In enzymology, a phthalate 4,5-dioxygenase (EC 1.14.12.7) is an enzyme that catalyzes the chemical reaction
phthalate + NADH + H+ + O2 formula_0 cis-4,5-dihydroxycyclohexa-1(6),2-diene-1,2-dicarboxylate + NAD+
The 4 substrates of this enzyme are phthalate, NADH, H+, and O2, whereas its two products are cis-4,5-dihydroxycyclohexa-1(6),2-diene-1,2-dicarboxylate and NAD+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of two atoms of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is phthalate,NADH:oxygen oxidoreductase (4,5-hydroxylating). Other names in common use include PDO and phthalate dioxygenase. This enzyme participates in 2,4-dichlorobenzoate degradation. It has 3 cofactors: iron, FMN, and iron-sulfur.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045933 |
14045946 | Phylloquinone monooxygenase (2,3-epoxidizing) | In enzymology, a phylloquinone monooxygenase (2,3-epoxidizing) (EC 1.14.99.20) is an enzyme that catalyzes the chemical reaction
phylloquinone + AH2 + O2 formula_0 2,3-epoxyphylloquinone + A + H2O
The three substrates of this enzyme are phylloquinone, a hydrogen donor AH2, and O2, whereas its three products are 2,3-epoxyphylloquinone, the oxidized donor A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen (miscellaneous sub-subclass); the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is phylloquinone,hydrogen-donor:oxygen oxidoreductase (2,3-epoxidizing). Other names in common use include phylloquinone epoxidase, vitamin K 2,3-epoxidase, vitamin K epoxidase, and vitamin K1 epoxidase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045946 |
14045957 | Phytanoyl-CoA dioxygenase | Class of enzymes
In enzymology, a phytanoyl-CoA dioxygenase (EC 1.14.11.18) is an enzyme that catalyzes the chemical reaction
phytanoyl-CoA + 2-oxoglutarate + O2 formula_0 2-hydroxyphytanoyl-CoA + succinate + CO2
The three substrates of this enzyme are phytanoyl-CoA, 2-oxoglutarate (2OG), and O2, whereas its three products are 2-hydroxyphytanoyl-CoA, succinate, and CO2.
This enzyme belongs to the family of iron(II)-dependent oxygenases, which typically incorporate one oxygen atom of O2 into the substrate and the other into the carboxylate group of the succinate co-product. The mechanism is complex, but is believed to involve ordered binding of 2-oxoglutarate to the iron(II)-containing enzyme followed by substrate. Binding of substrate causes displacement of a water molecule from the iron(II) cofactor, leaving a vacant coordination position to which dioxygen binds. A rearrangement occurs to form a high-energy iron–oxygen species (generally thought to be an iron(IV)=O species) that performs the actual oxidation reaction.
Nomenclature.
The systematic name of this enzyme class is phytanoyl-CoA, 2-oxoglutarate:oxygen oxidoreductase (2-hydroxylating). These enzymes are also called phytanoyl-CoA hydroxylases and phytanoyl-CoA alpha-hydroxylases.
Examples.
In humans, phytanoyl-CoA hydroxylase is encoded by the "PHYH" ("aka PAHX") gene and is required for the alpha-oxidation of branched chain fatty acids (e.g. phytanic acid) in peroxisomes. PHYH deficiency results in the accumulation of large tissue stores of phytanic acid and is the major cause of Refsum disease.
Related enzymes.
Iron(II) and 2OG-dependent oxygenases are common in microorganisms, plants, and animals; the human genome is predicted to contain about 80 examples, and the model plant "Arabidopsis thaliana" likely contains more. In plants and microorganisms this enzyme family is associated with a large diversity of oxidative reactions.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045957 |
14045975 | Plasmanylethanolamine desaturase | Class of enzymes
In enzymology, a plasmanylethanolamine desaturase (EC 1.14.99.19) is an enzyme that catalyzes the chemical reaction
O-1-alkyl-2-acyl-sn-glycero-3-phosphoethanolamine + AH2 + O2 formula_0 O-1-alk-1-enyl-2-acyl-sn-glycero-3-phosphoethanolamine + A + 2 H2O
The 3 substrates of this enzyme are O-1-alkyl-2-acyl-sn-glycero-3-phosphoethanolamine, a hydrogen donor AH2, and O2, whereas its 3 products are O-1-alk-1-enyl-2-acyl-sn-glycero-3-phosphoethanolamine, the oxidized donor A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen (miscellaneous sub-subclass); the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is O-1-alkyl-2-acyl-sn-glycero-3-phosphoethanolamine,hydrogen-donor:oxygen oxidoreductase. Other names in common use include alkylacylglycerophosphoethanolamine desaturase, alkylacylglycerophosphorylethanolamine dehydrogenase, 1-O-alkyl-2-acyl-sn-glycero-3-phosphorylethanolamine desaturase, and 1-O-alkyl 2-acyl-sn-glycero-3-phosphorylethanolamine desaturase. This enzyme participates in ether lipid metabolism. It requires NADPH.
Plasmanylethanolamine desaturase used to be described as an orphan enzyme, that is, one whose activity is known but whose identity (gene, protein sequence) is unknown. It has now been identified and corresponds to protein CarF in bacteria and TMEM189 in humans (and animals). It contains the pfam10520 lipid desaturase domain, which has 8 conserved histidines and which is also found in FAD4 plant desaturases. Mice lacking plasmanylethanolamine desaturase lack plasmalogens in their tissues and have reduced body weight. | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045975 |
14045989 | Precorrin-3B synthase | Class of enzymes
In enzymology, a precorrin-3B synthase (EC 1.14.13.83) is an enzyme that catalyzes the chemical reaction
precorrin-3A + NADH + H+ + O2 formula_0 precorrin-3B + NAD+ + H2O
The 4 substrates of this enzyme are precorrin 3A, NADH, H+, and O2, whereas its 3 products are precorrin 3B, NAD+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is precorrin-3A,NADH:oxygen oxidoreductase (20-hydroxylating). Other names in common use include precorrin-3X synthase and CobG. This enzyme is part of the biosynthetic pathway to cobalamin (vitamin B12) in aerobic bacteria.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14045989 |
14046001 | Procollagen-proline 3-dioxygenase | In enzymology, a procollagen-proline 3-dioxygenase (EC 1.14.11.7) is an enzyme that catalyzes the chemical reaction
procollagen L-proline + 2-oxoglutarate + O2 formula_0 procollagen trans-3-hydroxy-L-proline + succinate + CO2
The enzyme is a member of the alpha-ketoglutarate-dependent hydroxylases superfamily. The 3 substrates of this enzyme are procollagen L-proline, 2-oxoglutarate, and O2, whereas its 3 products are procollagen trans-3-hydroxy-L-proline, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with 2-oxoglutarate as one donor and incorporation of one atom of oxygen into each donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is procollagen-L-proline,2-oxoglutarate:oxygen oxidoreductase (3-hydroxylating). Other names in common use include proline,2-oxoglutarate 3-dioxygenase; prolyl 3-hydroxylase; protocollagen proline 3-hydroxylase; and prolyl-4-hydroxyprolyl-glycyl-peptide,2-oxoglutarate:oxygen oxidoreductase (3-hydroxylating). It has two cofactors: iron and ascorbate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046001 |
14046030 | Progesterone 11alpha-monooxygenase | In enzymology, a progesterone 11alpha-monooxygenase (EC 1.14.99.14) is an enzyme that catalyzes the chemical reaction
progesterone + AH2 + O2 formula_0 11alpha-hydroxyprogesterone + A + H2O
The 3 substrates of this enzyme are progesterone, AH2, and O2, whereas its 3 products are 11alpha-hydroxyprogesterone, A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen (miscellaneous sub-subclass); the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is progesterone,hydrogen-donor:oxygen oxidoreductase (11alpha-hydroxylating). This enzyme is also called progesterone 11alpha-hydroxylase. This enzyme participates in C21-steroid hormone metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046030 |
14046039 | Progesterone monooxygenase | In enzymology, a progesterone monooxygenase (EC 1.14.99.4) is an enzyme that catalyzes the chemical reaction
progesterone + AH2 + O2 formula_0 testosterone acetate + A + H2O
The 3 substrates of this enzyme are progesterone, AH2, and O2, whereas its 3 products are testosterone acetate, A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen (miscellaneous sub-subclass); the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is progesterone,hydrogen-donor:oxygen oxidoreductase (hydroxylating). This enzyme is also called progesterone hydroxylase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046039 |
14046046 | Proline 3-hydroxylase | In enzymology, a proline 3-hydroxylase (EC 1.14.11.28) is an enzyme that catalyzes the chemical reaction
L-proline + 2-oxoglutarate + O2 formula_0 cis-3-hydroxy-L-proline + succinate + CO2
The 3 substrates of this enzyme are L-proline, 2-oxoglutarate, and O2, whereas its 3 products are cis-3-hydroxy-L-proline, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with 2-oxoglutarate as one donor and incorporation of one atom of oxygen into each donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is L-proline,2-oxoglutarate:oxygen oxidoreductase (3-hydroxylating). This enzyme is also called P-3-H.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046046 |
14046057 | Protopine 6-monooxygenase | Class of enzymes
In enzymology, a protopine 6-monooxygenase (EC 1.14.13.55) is an enzyme that catalyzes the chemical reaction
protopine + NADPH + H+ + O2 formula_0 6-hydroxyprotopine + NADP+ + H2O
The 4 substrates of this enzyme are protopine, NADPH, H+, and O2, whereas its 3 products are 6-hydroxyprotopine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is protopine,NADPH:oxygen oxidoreductase (6-hydroxylating). This enzyme is also called protopine 6-hydroxylase. This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046057 |
14046065 | Psoralen synthase | Class of enzymes
In enzymology, a psoralen synthase (EC 1.14.13.102) is an enzyme that catalyzes the chemical reaction
(+)-marmesin + NADPH + H+ + O2 formula_0 psoralen + NADP+ + acetone + 2 H2O
The 4 substrates of this enzyme are (+)-marmesin, NADPH, H+, and O2, whereas its 4 products are psoralen, NADP+, acetone, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (?). This enzyme is also called CYP71AJ1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046065 |
14046074 | Pyrimidine-deoxynucleoside 1'-dioxygenase | In enzymology, a pyrimidine-deoxynucleoside 1'-dioxygenase (EC 1.14.11.10) is an enzyme that catalyzes the chemical reaction
2'-deoxyuridine + 2-oxoglutarate + O2 formula_0 uracil + 2-deoxyribonolactone + succinate + CO2
The 3 substrates of this enzyme are 2'-deoxyuridine, 2-oxoglutarate, and O2, whereas its 4 products are uracil, 2-deoxyribonolactone, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with 2-oxoglutarate as one donor and incorporation of one atom of oxygen into each donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 2'-deoxyuridine,2-oxoglutarate:oxygen oxidoreductase (1'-hydroxylating). This enzyme is also called deoxyuridine-uridine 1'-dioxygenase. It has two cofactors: iron and ascorbate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046074 |
14046085 | Pyrimidine-deoxynucleoside 2'-dioxygenase | In enzymology, a pyrimidine-deoxynucleoside 2'-dioxygenase (EC 1.14.11.3) is an enzyme that catalyzes the chemical reaction
2'-deoxyuridine + 2-oxoglutarate + O2 formula_0 uridine + succinate + CO2
The 3 substrates of this enzyme are 2'-deoxyuridine, 2-oxoglutarate, and O2, whereas its 3 products are uridine, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with 2-oxoglutarate as one donor and incorporation of one atom of oxygen into each donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is 2'-deoxyuridine,2-oxoglutarate:oxygen oxidoreductase (2'-hydroxylating). Other names in common use include deoxyuridine 2'-dioxygenase, deoxyuridine 2'-hydroxylase, pyrimidine deoxyribonucleoside 2'-hydroxylase, thymidine 2'-dioxygenase, thymidine 2'-hydroxylase, thymidine 2-oxoglutarate dioxygenase, and thymidine dioxygenase. It has two cofactors: iron and ascorbate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046085 |
14046098 | Questin monooxygenase | Class of enzymes
In enzymology, a questin monooxygenase (EC 1.14.13.43) is an enzyme that catalyzes the chemical reaction
questin + NADPH + H+ + O2 formula_0 demethylsulochrin + NADP+ + H2O
The 4 substrates of this enzyme are questin, NADPH, H+, and O2, whereas its 3 products are demethylsulochrin, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is questin,NADPH:oxygen oxidoreductase (hydroxylating, anthraquinone-ring-opening). This enzyme is also called questin oxygenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046098 |
14046111 | Quinine 3-monooxygenase | Class of enzymes
In enzymology, a quinine 3-monooxygenase (EC 1.14.13.67) is an enzyme that catalyzes the chemical reaction
quinine + NADPH + H+ + O2 formula_0 3-hydroxyquinine + NADP+ + H2O
The 4 substrates of this enzyme are quinine, NADPH, H+, and O2, whereas its 3 products are 3-hydroxyquinine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is quinine,NADPH:oxygen oxidoreductase. This enzyme is also called quinine 3-hydroxylase.
Structural studies.
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1W0E, 1W0F, 1W0G, 2J0D, and 2V0M.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046111 |
14046120 | (R)-limonene 6-monooxygenase | Class of enzymes
In enzymology, a (R)-limonene 6-monooxygenase (EC 1.14.13.80) is an enzyme that catalyzes the chemical reaction
(+)-(R)-limonene + NADPH + H+ + O2 formula_0 (+)-trans-carveol + NADP+ + H2O
The 4 substrates of this enzyme are (+)-(R)-limonene, NADPH, H+, and O2, whereas its 3 products are (+)-trans-carveol, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (R)-limonene,NADPH:oxygen oxidoreductase (6-hydroxylating). Other names in common use include (+)-limonene-6-hydroxylase and (+)-limonene 6-monooxygenase. This enzyme participates in monoterpenoid biosynthesis and limonene and pinene degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046120 |
14046128 | Salicylate 1-monooxygenase | Class of enzymes
In enzymology, a salicylate 1-monooxygenase (EC 1.14.13.1) is an enzyme that catalyzes the chemical reaction
salicylate + NADH + 2 H+ + O2 formula_0 catechol + NAD+ + H2O + CO2
The 4 substrates of this enzyme are salicylate, NADH, H+, and O2, whereas its 4 products are catechol, NAD+, H2O, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. This enzyme participates in 3 metabolic pathways: 1- and 2-methylnaphthalene degradation, naphthalene and anthracene degradation, and fluorene degradation. It employs one cofactor, FAD.
Nomenclature.
The systematic name of this enzyme class is salicylate,NADH:oxygen oxidoreductase (1-hydroxylating, decarboxylating).
Other names in common use include:
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046128 |
14046141 | Salutaridine synthase | Class of enzymes
In enzymology, a salutaridine synthase (EC 1.14.21.4) is an enzyme that catalyzes the chemical reaction
(R)-reticuline + NADPH + H+ + O2 formula_0 salutaridine + NADP+ + 2 H2O
The 4 substrates of this enzyme are (R)-reticuline, NADPH, H+, and O2, whereas its 3 products are salutaridine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and the other donor dehydrogenated; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (R)-reticuline,NADPH:oxygen oxidoreductase (C-C phenol-coupling). This enzyme is also called (R)-reticuline oxidase (C-C phenol-coupling). This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046141 |
14046147 | (S)-canadine synthase | Class of enzymes
In enzymology, a (S)-canadine synthase (EC 1.14.21.5) is an enzyme that catalyzes the chemical reaction
(S)-tetrahydrocolumbamine + NADPH + H+ + O2 formula_0 (S)-canadine + NADP+ + 2 H2O
The 4 substrates of this enzyme are (S)-tetrahydrocolumbamine, NADPH, H+, and O2, whereas its 3 products are (S)-canadine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and the other donor dehydrogenated; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-tetrahydrocolumbamine,NADPH:oxygen oxidoreductase (methylenedioxy-bridge-forming). Other names in common use include (S)-tetrahydroberberine synthase and (S)-tetrahydrocolumbamine oxidase (methylenedioxy-bridge-forming). This enzyme participates in alkaloid biosynthesis I. It employs one cofactor, heme-thiolate (P-450).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046147 |
14046152 | (S)-cheilanthifoline synthase | Class of enzymes
In enzymology, a (S)-cheilanthifoline synthase (EC 1.14.21.2) is an enzyme that catalyzes the chemical reaction
(S)-scoulerine + NADPH + H+ + O2 formula_0 (S)-cheilanthifoline + NADP+ + 2 H2O
The 4 substrates of this enzyme are (S)-scoulerine, NADPH, H+, and O2, whereas its 3 products are (S)-cheilanthifoline, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and the other donor dehydrogenated; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-scoulerine,NADPH:oxygen oxidoreductase (methylenedioxy-bridge-forming). This enzyme is also called (S)-scoulerine oxidase (methylenedioxy-bridge-forming). This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046152 |
14046164 | Senecionine N-oxygenase | Class of enzymes
In enzymology, a senecionine N-oxygenase (EC 1.14.13.101) is an enzyme that catalyzes the chemical reaction
senecionine + NADPH + H+ + O2 formula_0 senecionine N-oxide + NADP+ + H2O
The 4 substrates of this enzyme are senecionine, NADPH, H+, and O2, whereas its 3 products are senecionine N-oxide, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is senecionine,NADPH:oxygen oxidoreductase (N-oxide-forming). Other names in common use include senecionine monooxygenase (N-oxide-forming) and SNO.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046164 |
14046171 | (S)-limonene 3-monooxygenase | Class of enzymes
In enzymology, a (S)-limonene 3-monooxygenase (EC 1.14.13.47) is an enzyme that catalyzes the chemical reaction
(−)-(S)-limonene + NADPH + H+ + O2 formula_0 (−)-trans-isopiperitenol + NADP+ + H2O
The 4 substrates of this enzyme are (−)-(S)-limonene, NADPH, H+, and O2, whereas its 3 products are (−)-trans-isopiperitenol, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-limonene,NADPH:oxygen oxidoreductase (3-hydroxylating). Other names in common use include (−)-limonene 3-hydroxylase, (−)-limonene 3-monooxygenase, and (−)-limonene,NADPH:oxygen oxidoreductase (3-hydroxylating). This enzyme participates in monoterpenoid biosynthesis. It employs one cofactor, heme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046171 |
14046174 | (S)-limonene 6-monooxygenase | Class of enzymes
In enzymology, a (S)-limonene 6-monooxygenase (EC 1.14.13.48) is an enzyme that catalyzes the chemical reaction
(−)-(S)-limonene + NADPH + H+ + O2 formula_0 (−)-trans-carveol + NADP+ + H2O
The 4 substrates of this enzyme are (−)-(S)-limonene, NADPH, H+, and O2, whereas its 3 products are (−)-trans-carveol, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-limonene,NADPH:oxygen oxidoreductase (6-hydroxylating). Other names in common use include (−)-limonene 6-hydroxylase, (−)-limonene 6-monooxygenase, and (−)-limonene,NADPH:oxygen oxidoreductase (6-hydroxylating). This enzyme participates in monoterpenoid biosynthesis and limonene and pinene degradation. It employs one cofactor, heme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046174 |
14046180 | (S)-limonene 7-monooxygenase | Class of enzymes
In enzymology, a (S)-limonene 7-monooxygenase (EC 1.14.13.49) is an enzyme that catalyzes the chemical reaction
(−)-(S)-limonene + NADPH + H+ + O2 formula_0 (−)-perillyl alcohol + NADP+ + H2O
The 4 substrates of this enzyme are (−)-(S)-limonene, NADPH, H+, and O2, whereas its 3 products are (−)-perillyl alcohol, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-limonene,NADPH:oxygen oxidoreductase (7-hydroxylating). Other names in common use include (−)-limonene 7-monooxygenase, (−)-limonene hydroxylase, (−)-limonene monooxygenase, and (−)-limonene,NADPH:oxygen oxidoreductase (7-hydroxylating). This enzyme participates in monoterpenoid biosynthesis and limonene and pinene degradation. It employs one cofactor, heme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046180 |
14046185 | (S)-stylopine synthase | Class of enzymes
In enzymology, a (S)-stylopine synthase (EC 1.14.21.1) is an enzyme that catalyzes the chemical reaction
(S)-cheilanthifoline + NADPH + H+ + O2 formula_0 (S)-stylopine + NADP+ + 2 H2O
The 4 substrates of this enzyme are (S)-cheilanthifoline, NADPH, H+, and O2, whereas its 3 products are (S)-stylopine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with NADH or NADPH as one donor and the other donor dehydrogenated; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is (S)-cheilanthifoline,NADPH:oxygen oxidoreductase (methylenedioxy-bridge-forming). This enzyme is also called (S)-cheilanthifoline oxidase (methylenedioxy-bridge-forming). This enzyme participates in alkaloid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046185 |
14046204 | Stearoyl-CoA 9-desaturase | Class of enzymes
Stearoyl-CoA desaturase (Δ-9-desaturase or SCD-1) is an endoplasmic reticulum enzyme that catalyzes the rate-limiting step in the formation of monounsaturated fatty acids (MUFAs), specifically oleate and palmitoleate from stearoyl-CoA and palmitoyl-CoA. Oleate and palmitoleate are major components of membrane phospholipids, cholesterol esters and alkyl-diacylglycerol. In humans, the enzyme is present in two isoforms, encoded respectively by the "SCD1" and "SCD5" genes.
Stearoyl-CoA desaturase-1 is a key enzyme in fatty acid metabolism. It is responsible for forming a double bond in stearoyl-CoA. This is how the monounsaturated fatty acid oleic acid is produced from the saturated fatty acid, stearic acid.
A series of redox reactions, in which two electrons flow from NADH to the flavoprotein cytochrome b5 reductase, then to the electron acceptor cytochrome b5, and finally to molecular oxygen, introduces a single double bond into a range of methylene fatty acyl-CoA substrates. The complexed enzyme adds a single double bond between C9 and C10 of long-chain acyl-CoAs from de novo synthesis.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant, in which oxidation of the pair of donors results in the reduction of O2 to two molecules of water. The systematic name of this enzyme class is "stearoyl-CoA,ferrocytochrome-b5:oxygen oxidoreductase (9,10-dehydrogenating)". This enzyme participates in polyunsaturated fatty acid biosynthesis and the PPAR signaling pathway. It employs one cofactor, iron.
Function.
Stearoyl-CoA desaturase (SCD; EC 1.14.19.1) is an iron-containing enzyme that catalyzes a rate-limiting step in the synthesis of unsaturated fatty acids. The principal product of SCD is oleic acid, which is formed by desaturation of stearic acid. The ratio of stearic acid to oleic acid has been implicated in the regulation of cell growth and differentiation through effects on cell membrane fluidity and signal transduction.
Four SCD isoforms, Scd1 through Scd4, have been identified in mouse. In contrast, only 2 SCD isoforms, SCD1 and SCD5 (MIM 608370, Uniprot Q86SK9), have been identified in human. SCD1 shares about 85% amino acid identity with all 4 mouse SCD isoforms, as well as with rat Scd1 and Scd2. In contrast, SCD5 (also known as hSCD2) shares limited homology with the rodent SCDs and appears to be unique to primates.
SCD-1 is an important metabolic control point. Inhibition of its expression may enhance the treatment of a host of metabolic diseases. One of the unanswered questions is that SCD remains a highly regulated enzyme, even though oleate is readily available, as it is an abundant monounsaturated fatty acid in dietary fat.
It catalyzes the chemical reaction
stearoyl-CoA + 2 ferrocytochrome b5 + O2 + 2 H+ formula_0 oleoyl-CoA + 2 ferricytochrome b5 + 2 H2O
The 4 substrates of this enzyme are stearoyl-CoA, ferrocytochrome b5, O2, and H+, whereas its 3 products are oleoyl-CoA, ferricytochrome b5, and H2O.
Structure.
The enzyme's structure is key to its function. SCD-1 consists of four transmembrane domains. Both the amino and carboxyl termini and eight catalytically important histidines, which collectively bind iron within the catalytic center of the enzyme, lie in the cytosolic region. The five cysteines in SCD-1 are located within the lumen of the endoplasmic reticulum.
The substrate binding site is long, thin and hydrophobic and kinks the substrate tail at the location where the di-iron catalytic centre introduces the double bond.
The literature suggests that the enzyme accomplishes the desaturation reaction by removing the first hydrogen at C9 position and then the second hydrogen from the C-10 position. Because the C-9 and C-10 are positioned close to the iron-containing center of the enzyme, this mechanism is hypothesized to be specific for the position at which the double bond is formed.
Role in human disease.
Monounsaturated fatty acids, the products of SCD-1 catalyzed reactions, can serve as substrates for the synthesis of various kinds of lipids, including phospholipids, triglycerides, and can also be used as mediators in signal transduction and differentiation. Because MUFAs are heavily utilized in cellular processes, variation in SCD activity in mammals is expected to influence physiological variables, including cellular differentiation, insulin sensitivity, metabolic syndrome, atherosclerosis, cancer, and obesity. SCD-1 deficiency results in reduced adiposity, increased insulin sensitivity, and resistance to diet-induced obesity.
Under non-fasting conditions, SCD-1 mRNA is highly expressed in white adipose tissue, brown adipose tissue, and the Harderian gland. SCD-1 expression is significantly increased in liver and heart tissue in response to a high-carbohydrate diet, whereas SCD-2 expression is observed in brain tissue and is induced during neonatal myelination. Diets high in saturated as well as monounsaturated fat can also increase SCD-1 expression, although not to the extent of the lipogenic effect of a high-carbohydrate diet.
Elevated expression levels of SCD1 are found to be correlated with obesity and tumor malignancy. It is believed that tumor cells obtain most of their fatty acid requirement by de novo synthesis. This phenomenon depends on increased expression of fatty acid biosynthetic enzymes that produce the required fatty acids in large quantities. Mice fed a high-carbohydrate diet showed induced expression of the liver SCD-1 gene and other lipogenic genes through an insulin-mediated SREBP-1c-dependent mechanism. Activation of SREBP-1c results in upregulated synthesis of MUFAs and liver triglycerides. SCD-1 knockout mice did not increase de novo lipogenesis but created an abundance of cholesterol esters.
SCD1 function has also been shown to be involved in germ cell determination, adipose tissue specification, liver cell differentiation and cardiac development.
The human SCD-1 gene structure and regulation is very similar to that of mouse SCD-1. Overexpression of SCD-1 in humans may be involved in the development of hypertriglyceridemia, atherosclerosis, and diabetes. One study showed that SCD-1 activity was associated with inherited hyperlipidemia. SCD-1 deficiency has also been shown to reduce ceramide synthesis by downregulating serine palmitoyltransferase. This consequently increases the rate of beta-oxidation in skeletal muscle.
In carbohydrate metabolism studies, knockout SCD-1 mice show increased insulin sensitivity. Oleate is a major constituent of membrane phospholipids, and membrane fluidity is influenced by the ratio of saturated to monounsaturated fatty acids. One proposed mechanism is that an increase in the fluidity of the cell membrane, which consists largely of lipid, activates the insulin receptor. A decrease in the MUFA content of the membrane phospholipids in the SCD-1−/− mice is offset by an increase in polyunsaturated fatty acids, effectively increasing membrane fluidity due to the introduction of more double bonds in the fatty acyl chain.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046204 |
14046215 | Steroid 11beta-monooxygenase | Enzyme
In enzymology, a steroid 11beta-monooxygenase (EC 1.14.15.4) is an enzyme that catalyzes the chemical reaction
a steroid + reduced adrenal ferredoxin + O2 formula_0 an 11beta-hydroxysteroid + oxidized adrenal ferredoxin + H2O
The 3 substrates of this enzyme are steroid, reduced adrenal ferredoxin, and O2, whereas its 3 products are 11beta-hydroxysteroid, oxidized adrenal ferredoxin, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors with O2 as oxidant and incorporation or reduction of oxygen, with a reduced iron-sulfur protein as one donor and incorporation of one atom of oxygen into the other donor; the oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is steroid,reduced-adrenal-ferredoxin:oxygen oxidoreductase (11beta-hydroxylating). Other names in common use include steroid 11beta-hydroxylase and steroid 11beta/18-hydroxylase. This enzyme participates in C21-steroid hormone metabolism and androgen and estrogen metabolism. It employs one cofactor, heme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046215 |
14046224 | Steroid 17alpha-monooxygenase | In enzymology, a steroid 17alpha-monooxygenase (EC 1.14.99.9) is an enzyme that catalyzes the chemical reaction
a steroid + AH2 + O2 formula_0 a 17alpha-hydroxysteroid + A + H2O
The 3 substrates of this enzyme are steroid, an electron acceptor AH2, and O2, whereas its 3 products are 17alpha-hydroxysteroid, the reduction product A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 (miscellaneous). The systematic name of this enzyme class is steroid,hydrogen-donor:oxygen oxidoreductase (17alpha-hydroxylating). Other names in common use include steroid 17alpha-hydroxylase, cytochrome P-45017alpha, cytochrome P-450 (P-45017alpha,lyase), and 17alpha-hydroxylase-C17,20 lyase. This enzyme participates in C21-steroid hormone metabolism. It has 3 cofactors: NADH, NADPH, and heme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046224 |
14046236 | Steroid 9alpha-monooxygenase | In enzymology, a steroid 9alpha-monooxygenase (EC 1.14.99.24) is an enzyme that catalyzes the chemical reaction
pregna-4,9(11)-diene-3,20-dione + AH2 + O2 formula_0 9,11alpha-epoxypregn-4-ene-3,20-dione + A + H2O
The 3 substrates of this enzyme are pregna-4,9(11)-diene-3,20-dione, an electron acceptor AH2, and O2, whereas its 3 products are 9,11alpha-epoxypregn-4-ene-3,20-dione, the reduction product A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 (miscellaneous). The systematic name of this enzyme class is steroid,hydrogen-donor:oxygen oxidoreductase (9-epoxidizing). This enzyme is also called steroid 9alpha-hydroxylase. It has 2 cofactors: FMN and iron-sulfur.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046236 |
14046249 | Sterol 14-demethylase | Class of enzymes
In enzymology, a sterol 14-demethylase (EC 1.14.13.70) is an enzyme of the cytochrome P450 (CYP) superfamily. It is any member of the CYP51 family. It catalyzes a chemical reaction such as:
obtusifoliol + 3 O2 + 3 NADPH + 3 H+ formula_0 4alpha-methyl-5alpha-ergosta-8,14,24(28)-trien-3beta-ol + formate + 3 NADP+ + 4 H2O
The 4 substrates here are obtusifoliol, O2, NADPH, and H+, whereas its 4 products are 4alpha-methyl-5alpha-ergosta-8,14,24(28)-trien-3beta-ol, formate, NADP+, and H2O.
Although the lanosterol 14α-demethylase is present in a wide variety of organisms, the enzyme is studied primarily in the context of fungi, where it plays an essential role in mediating membrane permeability. In fungi, CYP51 catalyzes the demethylation of lanosterol to create an important precursor that is eventually converted into ergosterol. This steroid then makes its way throughout the cell, where it alters the permeability and rigidity of plasma membranes much as cholesterol does in animals. Because ergosterol constitutes a fundamental component of fungal membranes, many antifungal medications have been developed to inhibit 14α-demethylase activity and prevent the production of this key compound.
Nomenclature.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is sterol,NADPH:oxygen oxidoreductase (14-methyl cleaving). Other names in common use include obtusifoliol 14-demethylase, lanosterol 14-demethylase, lanosterol 14alpha-demethylase, and sterol 14alpha-demethylase. This enzyme participates in the biosynthesis of steroids.
These are not the typical CYP subfamilies; instead, a single subfamily is created for each major taxonomic group: CYP51A for animals, CYP51B for bacteria, CYP51C for Chromista, CYP51D for Dictyostelium, CYP51E for Euglenozoa, and CYP51F for fungi. Those groups with only one CYP51 per species are all called by one name: CYP51A1 stands for all animal CYP51s, since they are orthologous, and the same is true for CYP51B, C, D, E and F. CYP51G (green plants) and CYP51H (monocots only so far) have individual sequence numbers.
Function.
The biological role of this protein is also well understood. The demethylated products of the CYP51 reaction are vital intermediates in pathways leading to the formation of cholesterol in humans, ergosterol in fungi, and other types of sterols in plants. These sterols localize to the plasma membrane of cells, where they play an important structural role in the regulation of membrane fluidity and permeability and also influence the activity of enzymes, ion channels, and other cell components that are embedded within. With the proliferation of immuno-suppressive diseases such as HIV/AIDS and cancer, patients have become increasingly vulnerable to opportunistic fungal infections (Richardson et al.). Seeking new means to treat such infections, drug researchers have begun targeting the 14α-demethylase enzyme in fungi; destroying the fungal cell's ability to produce ergosterol causes a disruption of the plasma membrane, thereby resulting in cellular leakage and ultimately the death of the pathogen ("DrugBank").
Azoles are currently the most popular class of antifungals used in both agricultural and medical settings. These compounds bind as the sixth ligand to the heme group in CYP51, thereby altering the structure of the active site and acting as noncompetitive inhibitors. The effectiveness of imidazoles and triazoles (common azole subclasses) as inhibitors of 14α-demethylase have been confirmed through several experiments. Some studies test for changes in the production of important downstream ergosterol intermediates in the presence of these compounds. Other studies employ spectrophotometry to quantify azole-CYP51 interactions. Coordination of azoles to the prosthetic heme group in the enzyme's active site causes a characteristic shift in CYP51 absorbance, creating what is commonly referred to as a type II difference spectrum.
Prolonged use of azoles as antifungals has resulted in the emergence of drug resistance among certain fungal strains. Mutations in the coding region of CYP51 genes, overexpression of CYP51, and overexpression of membrane efflux transporters can all lead to resistance to these antifungals. Consequently, the focus of azole research is beginning to shift towards identifying new ways to circumvent this major obstacle.
Structure.
As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1H5Z, 1U13, 1X8V, 2BZ9, 2CI0, and 2CIB.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046249 |
14046263 | Tabersonine 16-hydroxylase | Class of enzymes
In enzymology, a tabersonine 16-hydroxylase (EC 1.14.13.73) is an enzyme that catalyzes the chemical reaction
tabersonine + NADPH + H+ + O2 formula_0 16-hydroxytabersonine + NADP+ + H2O
The 4 substrates of this enzyme are tabersonine, NADPH, H+, and O2, whereas its 3 products are 16-hydroxytabersonine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is tabersonine,NADPH:oxygen oxidoreductase (16-hydroxylating). Other names in common use include tabersonine 11-hydroxylase and T11H. This enzyme participates in terpene indole and ipecac alkaloid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046263 |
14046279 | Taurine dioxygenase | Enzyme that catalyzes the chemical reaction
In enzymology, a taurine dioxygenase (EC 1.14.11.17) is an enzyme that catalyzes the chemical reaction
taurine + 2-oxoglutarate + O2 formula_0 sulfite + aminoacetaldehyde + succinate + CO2
The 3 substrates of this enzyme are taurine, 2-oxoglutarate, and O2, whereas its 4 products are sulfite, aminoacetaldehyde, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with 2-oxoglutarate as one donor, and incorporation of one atom of oxygen into each donor. The systematic name of this enzyme class is taurine, 2-oxoglutarate:O2 oxidoreductase (sulfite-forming). Other names in common use include 2-aminoethanesulfonate dioxygenase and alpha-ketoglutarate-dependent taurine dioxygenase. This enzyme participates in taurine and hypotaurine metabolism. It has 3 cofactors: iron, ascorbate, and Fe2+.
Structural studies.
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1GQW, 1GY9, 1OS7, and 1OTJ.
Mechanism.
Initiating steps.
In the decomposition of taurine, molecular oxygen has been shown to be activated by iron(II), which lies in the coordination complex of taurine dioxygenase. The enzyme, in conjunction with iron(II) and 2-oxoglutarate, maintains non-covalent contacts through electrostatic interactions and coordinates a nucleophilic attack by dioxygen on carbon 2 of 2-oxoglutarate. This leads to two one-electron oxidations, one of 2-oxoglutarate and the other of taurine.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046279 |
14046289 | Taxadiene 5alpha-hydroxylase | In enzymology, a taxadiene 5alpha-hydroxylase (EC 1.14.99.37) is an enzyme that catalyzes the chemical reaction
taxa-4,11-diene + AH2 + O2 formula_0 taxa-4(20),11-dien-5alpha-ol + A + H2O
The 3 substrates of this enzyme are taxa-4,11-diene, an electron acceptor AH2, and O2, whereas its 3 products are taxa-4(20),11-dien-5alpha-ol, the reduction product A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 (miscellaneous). The systematic name of this enzyme class is taxa-4,11-diene,hydrogen-donor:oxygen oxidoreductase (5alpha-hydroxylating). This enzyme participates in diterpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046289 |
14046296 | Taxane 10beta-hydroxylase | Class of enzymes
In enzymology, a taxane 10beta-hydroxylase (EC 1.14.13.76) is an enzyme that catalyzes the chemical reaction
taxa-4(20),11-dien-5alpha-yl acetate + NADPH + H+ + O2 formula_0 10beta-hydroxytaxa-4(20),11-dien-5alpha-yl acetate + NADP+ + H2O
The 4 substrates of this enzyme are taxa-4(20),11-dien-5alpha-yl acetate, NADPH, H+, and O2, whereas its 3 products are 10beta-hydroxytaxa-4(20),11-dien-5alpha-yl acetate, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is taxa-4(20),11-dien-5alpha-yl acetate,NADPH:oxygen oxidoreductase (10beta-hydroxylating). This enzyme participates in diterpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046296 |
14046307 | Taxane 13alpha-hydroxylase | Class of enzymes
In enzymology, a taxane 13alpha-hydroxylase (EC 1.14.13.77) is an enzyme that catalyzes the chemical reaction
taxa-4(20),11-dien-5alpha-ol + NADPH + H+ + O2 formula_0 taxa-4(20),11-dien-5alpha,13alpha-diol + NADP+ + H2O
The 4 substrates of this enzyme are taxa-4(20),11-dien-5alpha-ol, NADPH, H+, and O2, whereas its 3 products are taxa-4(20),11-dien-5alpha,13alpha-diol, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is taxa-4(20),11-dien-5alpha-ol,NADPH:oxygen oxidoreductase (13alpha-hydroxylating). This enzyme participates in diterpenoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046307 |
14046319 | Taxifolin 8-monooxygenase | Class of enzymes
In enzymology, a taxifolin 8-monooxygenase (EC 1.14.13.19) is an enzyme that catalyzes the chemical reaction
taxifolin + NAD(P)H + H+ + O2 formula_0 2,3-dihydrogossypetin + NAD(P)+ + H2O
The 5 substrates of this enzyme are taxifolin, NADH, NADPH, H+, and O2, whereas its 4 products are 2,3-dihydrogossypetin, NAD+, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is taxifolin,NAD(P)H:oxygen oxidoreductase (8-hydroxylating). This enzyme is also called taxifolin hydroxylase. It has 2 cofactors: FAD and flavoprotein.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046319 |
14046335 | Terephthalate 1,2-dioxygenase | Class of enzymes
In enzymology, a terephthalate 1,2-dioxygenase (EC 1.14.12.15) is an enzyme that catalyzes the chemical reaction
terephthalate + NADH + H+ + O2 formula_0 (1R,6S)-dihydroxycyclohexa-2,4-diene-1,4-dicarboxylate + NAD+
The 4 substrates of this enzyme are terephthalate, NADH, H+, and O2, whereas its two products are (1R,6S)-dihydroxycyclohexa-2,4-diene-1,4-dicarboxylate and NAD+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of two atoms of oxygen into the other donor. The systematic name of this enzyme class is benzene-1,4-dicarboxylate,NADH:oxygen oxidoreductase (1,2-hydroxylating). Other names in common use include benzene-1,4-dicarboxylate 1,2-dioxygenase and 1,4-dicarboxybenzoate 1,2-dioxygenase. This enzyme participates in 2,4-dichlorobenzoate degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046335 |
14046352 | Thiophene-2-carbonyl-CoA monooxygenase | In enzymology, a thiophene-2-carbonyl-CoA monooxygenase (EC 1.14.99.35) is an enzyme that catalyzes the chemical reaction
thiophene-2-carbonyl-CoA + AH2 + O2 formula_0 5-hydroxythiophene-2-carbonyl-CoA + A + H2O
The three substrates of this enzyme are thiophene-2-carbonyl-CoA, an electron acceptor AH2, and O2. Its three products are 5-hydroxythiophene-2-carbonyl-CoA, the reduction product A, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 (miscellaneous). The systematic name of this enzyme class is thiophene-2-carbonyl-CoA, hydrogen-donor:oxygen oxidoreductase. Other names in common use include thiophene-2-carboxyl-CoA dehydrogenase, thiophene-2-carboxyl-CoA hydroxylase, and thiophene-2-carboxyl-CoA monooxygenase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046352 |
14046366 | Thymine dioxygenase | In enzymology, a thymine dioxygenase (EC 1.14.11.6) is an enzyme that catalyzes the chemical reaction
thymine + 2-oxoglutarate + O2 formula_0 5-hydroxymethyluracil + succinate + CO2
The 3 substrates of this enzyme are thymine, 2-oxoglutarate, and O2, whereas its 3 products are 5-hydroxymethyluracil, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with 2-oxoglutarate as one donor, and incorporation of one atom of oxygen into each donor. The systematic name of this enzyme class is thymine,2-oxoglutarate:oxygen oxidoreductase (7-hydroxylating). Other names in common use include thymine 7-hydroxylase, 5-hydroxy-methyluracil dioxygenase, and 5-hydroxymethyluracil oxygenase. It has 2 cofactors: iron and ascorbate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046366 |
14046382 | Toluene dioxygenase | Class of enzymes
In enzymology, a toluene dioxygenase (EC 1.14.12.11) is an enzyme that catalyzes the chemical reaction
toluene + NADH + H+ + O2 formula_0 (1S,2R)-3-methylcyclohexa-3,5-diene-1,2-diol + NAD+
The 4 substrates of this enzyme are toluene, NADH, H+, and O2, whereas its two products are (1S,2R)-3-methylcyclohexa-3,5-diene-1,2-diol and NAD+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of two atoms of oxygen into the other donor. The systematic name of this enzyme class is toluene,NADH:oxygen oxidoreductase (1,2-hydroxylating). This enzyme is also called toluene 2,3-dioxygenase. This enzyme participates in toluene and xylene degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046382 |
14046394 | Trans-cinnamate 2-monooxygenase | Class of enzymes
In enzymology, a trans-cinnamate 2-monooxygenase (EC 1.14.13.14) is an enzyme that catalyzes the chemical reaction
trans-cinnamate + NADPH + H+ + O2 formula_0 2-hydroxycinnamate + NADP+ + H2O
The 4 substrates of this enzyme are trans-cinnamate, NADPH, H+, and O2, whereas its 3 products are 2-hydroxycinnamate, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is trans-cinnamate,NADPH:oxygen oxidoreductase (2-hydroxylating). Other names in common use include cinnamic acid 2-hydroxylase, cinnamate 2-monooxygenase, cinnamic 2-hydroxylase, cinnamate 2-hydroxylase, and trans-cinnamic acid 2-hydroxylase. This enzyme participates in phenylalanine metabolism and phenylpropanoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046394 |
14046408 | Trans-cinnamate 4-monooxygenase | Class of enzymes
In enzymology, a trans-cinnamate 4-monooxygenase (EC 1.14.14.91) is an enzyme that catalyzes the chemical reaction
trans-cinnamate + NADPH + H+ + O2 formula_0 4-hydroxycinnamate + NADP+ + H2O
The 4 substrates of this enzyme are trans-cinnamate, NADPH, H+, and O2, whereas its 3 products are 4-hydroxycinnamate, NADP+, and H2O. This enzyme participates in phenylalanine metabolism and phenylpropanoid biosynthesis. It employs one cofactor, heme.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor.
Nomenclature.
The systematic name of this enzyme class is trans-cinnamate,NADPH:oxygen oxidoreductase (4-hydroxylating). Other names in common use include:
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046408 |
14046416 | Trimethyllysine dioxygenase | Class of enzymes
In enzymology, a trimethyllysine dioxygenase (TMLH; EC 1.14.11.8) is an enzyme that catalyzes the chemical reaction
N6,N6,N6-trimethyl-L-lysine + 2-oxoglutarate + O2 formula_0 3-hydroxy-N6,N6,N6-trimethyl-L-lysine + succinate + CO2
TMLH is a member of the alpha-ketoglutarate-dependent hydroxylases superfamily. The 3 substrates of this enzyme are N6,N6,N6-trimethyl-L-lysine, 2-oxoglutarate, and O2, whereas its 3 products are 3-hydroxy-N6,N6,N6-trimethyl-L-lysine, succinate, and CO2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with 2-oxoglutarate as one donor, and incorporation of one atom of oxygen into each donor. The systematic name of this enzyme class is N6,N6,N6-trimethyl-L-lysine,2-oxoglutarate:oxygen oxidoreductase (3-hydroxylating). Other names in common use include trimethyllysine alpha-ketoglutarate dioxygenase, TML-alpha-ketoglutarate dioxygenase, TML hydroxylase, and 6-N,6-N,6-N-trimethyl-L-lysine,2-oxoglutarate:oxygen oxidoreductase (3-hydroxylating). This enzyme participates in lysine degradation and L-carnitine biosynthesis, and requires the presence of iron and ascorbate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046416 |
14046442 | Unspecific monooxygenase | Class of enzymes
In enzymology, an unspecific monooxygenase (EC 1.14.14.1) is an enzyme that catalyzes the chemical reaction
RH + reduced flavoprotein + O2 formula_0 ROH + oxidized flavoprotein + H2O
The 3 substrates of this enzyme are RH (reduced substrate), reduced flavoprotein, and O2, whereas its 3 products are ROH (oxidized substrate), oxidized flavoprotein, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with reduced flavin or flavoprotein as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is substrate,reduced-flavoprotein:oxygen oxidoreductase (RH-hydroxylating or -epoxidizing). Other names in common use include microsomal monooxygenase, xenobiotic monooxygenase, aryl-4-monooxygenase, aryl hydrocarbon hydroxylase, microsomal P-450, flavoprotein-linked monooxygenase, and flavoprotein monooxygenase. This enzyme participates in 7 metabolic pathways: fatty acid metabolism, androgen and estrogen metabolism, gamma-hexachlorocyclohexane degradation, tryptophan metabolism, arachidonic acid metabolism, linoleic acid metabolism, and metabolism of xenobiotics by cytochrome p450. It employs one cofactor, heme.
Structural studies.
As of late 2007, 53 structures have been solved for this class of enzymes, with PDB accession codes 1BU7, 1BVY, 1DT6, 1FAG, 1FAH, 1JME, 1JPZ, 1N6B, 1NR6, 1OG2, 1OG5, 1P0V, 1P0W, 1P0X, 1PO5, 1PQ2, 1R9O, 1SMI, 1SMJ, 1SUO, 1TQN, 1W0E, 1W0F, 1W0G, 1YQO, 1YQP, 1Z10, 1Z11, 1ZO4, 1ZO9, 1ZOA, 2BDM, 2BMH, 2F9Q, 2FDU, 2FDV, 2FDW, 2FDY, 2HI4, 2HPD, 2IJ2, 2IJ3, 2IJ4, 2J0D, 2J1M, 2J4S, 2NNB, 2P85, 2PG5, 2PG6, 2PG7, 2UWH, and 2V0M.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046442 |
14046461 | Vanillate monooxygenase | Class of enzymes
In enzymology, a vanillate monooxygenase (EC 1.14.13.82) is an enzyme that catalyzes the chemical reaction
vanillate + O2 + NADH + H+ formula_0 3,4-dihydroxybenzoate + NAD+ + H2O + formaldehyde
The 4 substrates of this enzyme are vanillate, O2, NADH, and H+, whereas its 4 products are 3,4-dihydroxybenzoate, NAD+, H2O, and formaldehyde.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is vanillate:oxygen oxidoreductase (demethylating). Other names in common use include 4-hydroxy-3-methoxybenzoate demethylase and vanillate demethylase. This enzyme participates in 2,4-dichlorobenzoate degradation.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046461 |
14046477 | Vinorine hydroxylase | In enzymology, a vinorine hydroxylase (EC 1.14.14.104, Formerly EC 1.14.13.75) is an enzyme that catalyzes the chemical reaction
vinorine + NADPH + H+ + O2 formula_0 vomilenine + NADP+ + H2O
The 4 substrates of this enzyme are vinorine, NADPH, H+, and O2, whereas its 3 products are vomilenine, NADP+, and H2O.
This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with NADH or NADPH as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is vinorine,NADPH:oxygen oxidoreductase (21alpha-hydroxylating). This enzyme participates in indole and ipecac alkaloid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14046477 |
1404680 | Midpoint method | Numeric solution for differential equations
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation,
formula_3
The explicit midpoint method is given by the formula
$y_{n+1} = y_n + h\,f\!\left(t_n + \frac{h}{2},\ y_n + \frac{h}{2} f(t_n, y_n)\right),$     (1e)
the implicit midpoint method by
$y_{n+1} = y_n + h\,f\!\left(t_n + \frac{h}{2},\ \frac{1}{2}\left(y_n + y_{n+1}\right)\right),$     (1i)
for formula_4 Here, formula_5 is the "step size" (a small positive number), formula_6 and formula_0 is the computed approximate value of formula_1 The explicit midpoint method is sometimes also known as the modified Euler method; the implicit midpoint method is the simplest collocation method and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method; for further clarity see List of Runge–Kutta methods.
The name of the method comes from the fact that in the formula above, the function formula_7 giving the slope of the solution is evaluated at formula_8 the midpoint between formula_9 at which the value of formula_10 is known and formula_11 at which the value of formula_10 needs to be found.
A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at formula_12 is computed using formula_13. The next value formula_14 is found where the tangent intersects the vertical line formula_15. However, if the second derivative is only positive between formula_9 and formula_11, or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as formula_5 increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of formula_10 at the midpoint, then computing the slope of the tangent with formula_16. Finally, the improved tangent is used to calculate the value of formula_2 from formula_0. This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of formula_10 at the midpoint.
The local error at each step of the midpoint method is of order formula_17, giving a global error of order formula_18. Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as formula_19.
The methods are examples of a class of higher-order methods known as Runge–Kutta methods.
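As a complement to these formulas, the following minimal Python sketch implements the explicit update rule (1e); the function and parameter names are illustrative, not taken from any particular library. It is applied to the test problem $y' = y$, $y(0) = 1$, whose exact solution is $e^t$:

```python
import math

def explicit_midpoint(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) from (t0, y0) using the explicit midpoint method (1e)."""
    t, y = t0, y0
    for _ in range(n_steps):
        # Euler half-step to estimate y at the midpoint t + h/2 ...
        y_mid = y + (h / 2) * f(t, y)
        # ... then advance the full step using the slope at the midpoint.
        y = y + h * f(t + h / 2, y_mid)
        t = t + h
    return t, y

# Test problem: y' = y, y(0) = 1, exact solution y(t) = exp(t).
t, y = explicit_midpoint(lambda t, y: y, t0=0.0, y0=1.0, h=0.1, n_steps=10)
print(y, math.exp(t))  # approximation at t = 1 vs. the exact value e
```

Halving the step size should roughly quarter the error at a fixed final time, consistent with the formula_18 global error.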
Derivation of the midpoint method.
The midpoint method is a refinement of the Euler method
formula_20
and is derived in a similar manner.
The key to deriving Euler's method is the approximate equality
$y(t+h) \approx y(t) + h\,y'(t),$     (2)
which is obtained from the slope formula
$y'(t) \approx \frac{y(t+h) - y(t)}{h}$     (3)
and keeping in mind that formula_21
For the midpoint methods, one replaces (3) with the more accurate
formula_22
when instead of (2) we find
$y(t+h) \approx y(t) + h\,y'\!\left(t + \frac{h}{2}\right).$     (4)
One cannot use this equation to find formula_23 as one does not know formula_24 at formula_25. The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for formula_26:
formula_27
which, when plugged in (4), gives us
formula_28
and the explicit midpoint method (1e).
The implicit method (1i) is obtained by approximating the value at the half step formula_25 by the midpoint of the line segment from formula_10 to formula_29
formula_30
and thus
formula_31
Inserting the approximation formula_32 for formula_33
results in the implicit Runge-Kutta method
formula_34
which contains the implicit Euler method with step size formula_35 as its first part.
Because of the time symmetry of the implicit method, all terms of even degree in formula_5 in the local error cancel, so that the local error is automatically of order formula_36. Replacing the implicit with the explicit Euler method in the determination of formula_37 results again in the explicit midpoint method.
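Since the implicit method determines formula_37 only implicitly, each step requires solving the stage equation. A minimal Python sketch (names are illustrative) resolves it by plain fixed-point iteration, which converges for sufficiently small formula_5 when formula_7 is Lipschitz continuous in formula_24; for stiff problems a Newton iteration is typically used instead:

```python
def implicit_midpoint(f, t0, y0, h, n_steps, fp_iters=20):
    """Integrate y' = f(t, y) with the implicit midpoint rule (1i), solving
    the stage equation k = f(t + h/2, y + (h/2) * k) by fixed-point iteration."""
    t, y = t0, y0
    for _ in range(n_steps):
        k = f(t, y)  # initial guess: the explicit Euler slope
        for _ in range(fp_iters):
            k = f(t + h / 2, y + (h / 2) * k)
        y = y + h * k
        t = t + h
    return t, y

# Same test problem as above: y' = y, y(0) = 1; the result approximates e at t = 1.
t, y = implicit_midpoint(lambda t, y: y, t0=0.0, y0=1.0, h=0.1, n_steps=10)
print(y)
```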
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y_n"
},
{
"math_id": 1,
"text": "y(t_n)."
},
{
"math_id": 2,
"text": "y_{n+1}"
},
{
"math_id": 3,
"text": " y'(t) = f(t, y(t)), \\quad y(t_0) = y_0 ."
},
{
"math_id": 4,
"text": "n=0, 1, 2, \\dots"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "t_n=t_0 + n h,"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "t = t_n + h/2= \\tfrac{t_n+t_{n+1}}{2},"
},
{
"math_id": 9,
"text": "t_n"
},
{
"math_id": 10,
"text": "y(t)"
},
{
"math_id": 11,
"text": "t_{n+1}"
},
{
"math_id": 12,
"text": "(t_n, y_n)"
},
{
"math_id": 13,
"text": "f(t_n, y_n)"
},
{
"math_id": 14,
"text": " y_{n+1}"
},
{
"math_id": 15,
"text": "t=t_{n+1}"
},
{
"math_id": 16,
"text": "f()"
},
{
"math_id": 17,
"text": "O\\left(h^3\\right)"
},
{
"math_id": 18,
"text": "O\\left(h^2\\right)"
},
{
"math_id": 19,
"text": "h \\to 0"
},
{
"math_id": 20,
"text": " y_{n+1} = y_n + hf(t_n,y_n),\\, "
},
{
"math_id": 21,
"text": " y' = f(t, y)."
},
{
"math_id": 22,
"text": " y'\\left(t+\\frac{h}{2}\\right) \\approx \\frac{y(t+h) - y(t)}{h} "
},
{
"math_id": 23,
"text": " y(t+h)"
},
{
"math_id": 24,
"text": "y"
},
{
"math_id": 25,
"text": "t+h/2"
},
{
"math_id": 26,
"text": "y(t+h/2)"
},
{
"math_id": 27,
"text": "y\\left(t + \\frac{h}{2}\\right) \\approx y(t) + \\frac{h}{2}y'(t)=y(t) + \\frac{h}{2}f(t, y(t)),"
},
{
"math_id": 28,
"text": "y(t + h) \\approx y(t) + hf\\left(t + \\frac{h}{2}, y(t) + \\frac{h}{2}f(t, y(t))\\right)"
},
{
"math_id": 29,
"text": "y(t+h)"
},
{
"math_id": 30,
"text": "y\\left(t+\\frac h2\\right)\\approx \\frac12\\bigl(y(t)+y(t+h)\\bigr)"
},
{
"math_id": 31,
"text": "\\frac{y(t+h)-y(t)}{h}\\approx y'\\left(t+\\frac h2\\right)\\approx k=f\\left(t+\\frac h2,\\frac12\\bigl(y(t)+y(t+h)\\bigr)\\right)"
},
{
"math_id": 32,
"text": "y_n+h\\,k"
},
{
"math_id": 33,
"text": "y(t_n+h)"
},
{
"math_id": 34,
"text": "\\begin{align}\nk&=f\\left(t_n+\\frac h2,y_n+\\frac h2 k\\right)\\\\\ny_{n+1}&=y_n+h\\,k\n\\end{align}"
},
{
"math_id": 35,
"text": "h/2"
},
{
"math_id": 36,
"text": "\\mathcal O(h^3)"
},
{
"math_id": 37,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=1404680 |
14050287 | CEILIDH | Cryptosystem
CEILIDH is a public key cryptosystem based on the discrete logarithm problem in an algebraic torus. This idea was first introduced by Alice Silverberg and Karl Rubin in 2003; Silverberg named CEILIDH after her cat. The main advantage of the system is the reduced size of the keys for the same level of security, compared with basic schemes.
Algorithms.
Key agreement scheme.
This scheme is based on the Diffie-Hellman key agreement.
Since formula_19 is the identity, we have:
formula_20 which is the shared secret of Alice and Bob.
Encryption scheme.
This scheme is based on the ElGamal encryption.
Security.
The CEILIDH scheme is based on the ElGamal scheme and thus has similar security properties.
If the computational Diffie-Hellman assumption holds in the underlying cyclic group formula_30, then the encryption function is one-way. If the decisional Diffie-Hellman assumption (DDH) holds in formula_30, then CEILIDH achieves semantic security. Semantic security is not implied by the computational Diffie-Hellman assumption alone. See decisional Diffie-Hellman assumption for a discussion of groups where the assumption is believed to hold.
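Because CEILIDH's encryption mirrors ElGamal, its ciphertexts can be manipulated algebraically. The following minimal Python sketch demonstrates the ciphertext malleability described in the next paragraph using textbook ElGamal over the integers modulo a small prime; the prime, generator, and keys are toy illustrative assumptions (far too small to be secure), and real CEILIDH would additionally apply the torus compression maps:

```python
# Textbook ElGamal over Z_p^* to illustrate the malleability CEILIDH inherits.
# All parameters below are toy values for demonstration only.
p = 467           # small prime (insecure; illustration only)
g = 2             # assumed generator of a subgroup of Z_p^*
a = 153           # receiver's private key
h = pow(g, a, p)  # receiver's public key

def encrypt(m, k):
    """ElGamal encryption of m with ephemeral exponent k: returns (c1, c2)."""
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):
    """ElGamal decryption: m = c2 * c1^(-a) mod p."""
    return (c2 * pow(c1, p - 1 - a, p)) % p

m = 42
c1, c2 = encrypt(m, k=99)

# Malleability: without knowing m or any key, an attacker turns a valid
# encryption of m into a valid encryption of 2*m by scaling c2.
forged = (c1, (2 * c2) % p)
assert decrypt(*forged) == (2 * m) % p  # decrypts to 84 = 2 * 42
```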
CEILIDH encryption is unconditionally malleable, and therefore is not secure under chosen ciphertext attack. For example, given an encryption formula_31 of some (possibly unknown) message formula_32, one can easily construct a valid encryption formula_33 of the message formula_34. | [
{
"math_id": 0,
"text": "q"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "T_n"
},
{
"math_id": 3,
"text": "\\Phi_n(q)"
},
{
"math_id": 4,
"text": "l"
},
{
"math_id": 5,
"text": "\\Phi_n"
},
{
"math_id": 6,
"text": "n^{th}"
},
{
"math_id": 7,
"text": "m=\\phi(n)"
},
{
"math_id": 8,
"text": "\\phi"
},
{
"math_id": 9,
"text": "\\rho : T_n(\\mathbb{F}_q) \\rightarrow {\\mathbb{F}_q}^m"
},
{
"math_id": 10,
"text": "\\psi"
},
{
"math_id": 11,
"text": "\\alpha \\in T_n"
},
{
"math_id": 12,
"text": "g=\\rho(\\alpha)"
},
{
"math_id": 13,
"text": "a\\ \\pmod{\\Phi_n(q)}"
},
{
"math_id": 14,
"text": "P_A= \\rho(\\psi(g)^a) \\in \\mathbb{F}_q^m"
},
{
"math_id": 15,
"text": "b\\ \\pmod{\\Phi_n(q)}"
},
{
"math_id": 16,
"text": "P_B= \\rho(\\psi(g)^b) \\in \\mathbb{F}_q^m"
},
{
"math_id": 17,
"text": "\\rho(\\psi(P_B))^a) \\in \\mathbb{F}_q^m"
},
{
"math_id": 18,
"text": "\\rho(\\psi(P_A))^b) \\in \\mathbb{F}_q^m"
},
{
"math_id": 19,
"text": "\\psi \\circ \\rho"
},
{
"math_id": 20,
"text": "\\rho(\\psi(P_B))^a) = \\rho(\\psi(P_A))^b) = \\rho(\\psi(g)^{ab}) "
},
{
"math_id": 21,
"text": "a\\ \\pmod{ \\Phi_n(q)}"
},
{
"math_id": 22,
"text": "M"
},
{
"math_id": 23,
"text": "\\mathbb{F}_q^m"
},
{
"math_id": 24,
"text": "k"
},
{
"math_id": 25,
"text": "1\\leq k \\leq l-1"
},
{
"math_id": 26,
"text": "\\gamma = \\rho(\\psi(g)^k) \\in \\mathbb{F}_q^m"
},
{
"math_id": 27,
"text": "\\delta = \\rho(\\psi(M)\\psi(P_A)^k) \\in \\mathbb{F}_q^m"
},
{
"math_id": 28,
"text": "(\\gamma,\\delta)"
},
{
"math_id": 29,
"text": "M = \\rho(\\psi(\\delta)\\psi(\\gamma)^{-a})"
},
{
"math_id": 30,
"text": "G"
},
{
"math_id": 31,
"text": "(c_1, c_2)"
},
{
"math_id": 32,
"text": "m"
},
{
"math_id": 33,
"text": "(c_1, 2 c_2)"
},
{
"math_id": 34,
"text": "2m"
}
]
| https://en.wikipedia.org/wiki?curid=14050287 |
14052 | Hyperbola | Plane curve: conic section
In mathematics, a hyperbola is a type of smooth curve lying in a plane, defined by its geometric properties or by equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, that are mirror images of each other and resemble two infinite bows. The hyperbola is one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola.
Besides being a conic section, a hyperbola can arise as the locus of points whose difference of distances to two fixed foci is constant, as a curve for each point of which the rays to two fixed foci are reflections across the tangent line at that point, or as the solution of certain bivariate quadratic equations such as the reciprocal relationship formula_0 In practical applications, a hyperbola can arise as the path followed by the shadow of the tip of a sundial's gnomon, the shape of an open orbit such as that of a celestial object exceeding the escape velocity of the nearest gravitational body, or the scattering trajectory of a subatomic particle, among others.
Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called the asymptote of those two arms. So there are two asymptotes, whose intersection is at the center of symmetry of the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curve formula_1 the asymptotes are the two coordinate axes.
Hyperbolas share many of the ellipses' analytical properties such as eccentricity, focus, and directrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many other mathematical objects have their origin in the hyperbola, such as hyperbolic paraboloids (saddle surfaces), hyperboloids ("wastebaskets"), hyperbolic geometry (Lobachevsky's celebrated non-Euclidean geometry), hyperbolic functions (sinh, cosh, tanh, etc.), and gyrovector spaces (a geometry proposed for use in both relativity and quantum mechanics which is not Euclidean).
Etymology and history.
The word "hyperbola" derives from the Greek , meaning "over-thrown" or "excessive", from which the English term hyperbole also derives. Hyperbolae were discovered by Menaechmus in his investigations of the problem of doubling the cube, but were then called sections of obtuse cones. The term hyperbola is believed to have been coined by Apollonius of Perga (c. 262 – c. 190 BC) in his definitive work on the conic sections, the "Conics".
The names of the other two general conic sections, the ellipse and the parabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment. The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment.
Definitions.
As locus of points.
A hyperbola can be defined geometrically as a set of points (locus of points) in the Euclidean plane:
<templatestyles src="Block indent/styles.css"/>A hyperbola is a set of points, such that for any point formula_2 of the set, the absolute difference of the distances formula_3 to two fixed points formula_4 (the "foci") is constant, usually denoted by formula_5:
formula_6
The midpoint formula_7 of the line segment joining the foci is called the "center" of the hyperbola. The line through the foci is called the "major axis". It contains the "vertices" formula_8, which have distance formula_9 to the center. The distance formula_10 of the foci to the center is called the "focal distance" or "linear eccentricity". The quotient formula_11 is the "eccentricity" formula_12.
The equation formula_13 can be viewed in a different way (see diagram):
If formula_14 is the circle with midpoint formula_15 and radius formula_16, then the distance of a point formula_2 of the right branch to the circle formula_14 equals the distance to the focus formula_17:
formula_18
formula_14 is called the "circular directrix" (related to focus formula_15) of the hyperbola. In order to get the left branch of the hyperbola, one has to use the circular directrix related to formula_17. This property should not be confused with the definition of a hyperbola with help of a directrix (line) below.
Hyperbola with equation "y" = "A"/"x".
If the "xy"-coordinate system is rotated about the origin by the angle formula_19 and new coordinates formula_20 are assigned, then formula_21.
The rectangular hyperbola formula_22 (whose semi-axes are equal) has the new equation formula_23.
Solving for formula_24 yields formula_25
Thus, in an "xy"-coordinate system the graph of a function formula_26 with equation
formula_27 is a "rectangular hyperbola" entirely in the first and third quadrants with
the coordinate axes as "asymptotes",
the line $y = x$ as "major axis",
the center $(0,0)$ and the semi-axes $a = b = \sqrt{2A}$,
the "vertices" $(\sqrt{A}, \sqrt{A})$ and $(-\sqrt{A}, -\sqrt{A})$,
the "semi-latus rectum" and "radius of curvature" at the vertices $l = a = \sqrt{2A}$,
the "linear eccentricity" $c = 2\sqrt{A}$ and the eccentricity $e = \sqrt{2}$.
A rotation of the original hyperbola by formula_37 results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of formula_19 rotation, with equation
formula_38
Shifting the hyperbola with equation formula_41 so that the new center is formula_42, yields the new equation
formula_43
and the new asymptotes are formula_44 and formula_45. The shape parameters formula_46 remain unchanged.
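As a concrete instance of this shift rule: moving the hyperbola $y = \frac{1}{x}$ so that its new center is $(3, 2)$ yields
$y = \frac{1}{x - 3} + 2,$
with asymptotes $x = 3$ and $y = 2$, while the semi-axes and the eccentricity $e = \sqrt{2}$ remain unchanged.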
By the directrix property.
The two lines at distance formula_47 from the center and parallel to the minor axis are called directrices of the hyperbola (see diagram).
For an arbitrary point formula_2 of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity:
formula_48
The proof for the pair formula_49 follows from the fact that formula_50 and formula_51 satisfy the equation
formula_52
The second case is proven analogously.
The "inverse statement" is also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola):
For any point formula_53 (focus), any line formula_54 (directrix) not through formula_53, and any real number formula_12 with formula_55, the set of points (locus of points) for which the quotient of the distances to the point and to the line is formula_12
formula_56
is a hyperbola.
Proof.
Let formula_59 and assume formula_29 is a point on the curve.
The directrix formula_54 has equation formula_60. With formula_61, the relation formula_62 produces the equations
formula_63 and formula_64
The substitution formula_65 yields
formula_66
This is the equation of an "ellipse" (formula_67) or a "parabola" (formula_68) or a "hyperbola" (formula_69). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).
If formula_55, introduce new parameters formula_70 so that formula_71, and then the equation above becomes
formula_72
which is the equation of a hyperbola with center formula_73, the "x"-axis as major axis and the major/minor semi axis formula_70.
Construction of a directrix.
Because of formula_74 point formula_75 of directrix formula_76 (see diagram) and focus formula_17 are inverse with respect to the circle inversion at circle formula_77 (in diagram green). Hence point formula_78 can be constructed using the theorem of Thales (not shown in the diagram). The directrix formula_76 is the perpendicular to line formula_79 through point formula_78.
"Alternative construction of formula_78": Calculation shows, that point formula_78 is the intersection of the asymptote with its perpendicular through formula_17 (see diagram).
As plane section of a cone.
The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). In order to prove the defining property of a hyperbola (see above) one uses two Dandelin spheres formula_80, which are spheres that touch the cone along circles formula_81, formula_82 and the intersecting (hyperbola) plane at points formula_17 and formula_15. It turns out: formula_4 are the "foci" of the hyperbola.
Pin and string construction.
The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with help of pins, a string and a ruler.
Steiner generation of a hyperbola.
The following method to construct single points of a hyperbola relies on the Steiner generation of a non degenerate conic section:
<templatestyles src="Block indent/styles.css"/> Given two pencils formula_98 of lines at two points formula_99 (all lines containing formula_100 and formula_101, respectively) and a projective but not perspective mapping formula_102 of formula_103 onto formula_104, then the intersection points of corresponding lines form a non-degenerate projective conic section.
For the generation of points of the hyperbola formula_105 one uses the pencils at the vertices formula_95. Let formula_106 be a point of the hyperbola and formula_107. The line segment formula_108 is divided into n equally-spaced segments and this division is projected parallel with the diagonal formula_93 as direction onto the line segment formula_109 (see diagram). The parallel projection is part of the needed projective mapping between the pencils at formula_110 and formula_111. The intersection points of any two related lines formula_112 and formula_113 are points of the uniquely defined hyperbola.
"Remarks:"
Inscribed angles for hyperbolas "y" = "a"/("x" − "b") + "c" and the 3-point-form.
A hyperbola with equation formula_114 is uniquely determined by three points formula_115 with different "x"- and "y"-coordinates. A simple way to determine the shape parameters formula_116 uses the "inscribed angle theorem" for hyperbolas:
<templatestyles src="Block indent/styles.css"/>In order to measure an angle between two lines with equations formula_117 in this context one uses the quotient
formula_118
Analogous to the inscribed angle theorem for circles one gets the
<templatestyles src="Math_theorem/styles.css" />
Inscribed angle theorem for hyperbolas — For four points formula_119 (see diagram) the following statement is true:
The four points are on a hyperbola with equation formula_120 if and only if the angles at formula_121 and formula_122 are equal in the sense of the measurement above. That means if formula_123
The proof can be derived by straightforward calculation. If the points are on a hyperbola, one can assume the hyperbola's equation is formula_124.
A consequence of the inscribed angle theorem for hyperbolas is the
<templatestyles src="Math_theorem/styles.css" />
3-point-form of a hyperbola's equation — The equation of the hyperbola determined by 3 points formula_125 is the solution of the equation formula_126 for formula_127.
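As an illustration of the 3-point form, the following short Python sketch (using sympy; the three sample points are arbitrary choices, not from the source) recovers the shape parameters of $y = \frac{a}{x-b} + c$ directly from three points:

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
hyperbola = a / (x - b) + c  # hyperbola of the form y = a/(x - b) + c

# Three sample points with pairwise different x- and y-coordinates (arbitrary).
points = [(1, 5), (2, 3), (4, 2)]

# Solve the system y_i = a/(x_i - b) + c for the shape parameters a, b, c.
eqs = [sp.Eq(yi, hyperbola.subs(x, xi)) for xi, yi in points]
print(sp.solve(eqs, (a, b, c), dict=True))  # [{a: 4, b: 0, c: 1}], i.e. y = 4/x + 1
```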
As an affine image of the unit hyperbola "x"2 − "y"2 = 1.
Another definition of a hyperbola uses affine transformations:
<templatestyles src="Block indent/styles.css"/>Any "hyperbola" is the affine image of the unit hyperbola with equation formula_128.
Parametric representation.
An affine transformation of the Euclidean plane has the form formula_129, where formula_83 is a regular matrix (its determinant is not 0) and formula_130 is an arbitrary vector. If formula_131 are the column vectors of the matrix formula_83, the unit hyperbola formula_132 is mapped onto the hyperbola
formula_133
formula_130 is the center, formula_134 a point of the hyperbola and formula_135 a tangent vector at this point.
Vertices.
In general the vectors formula_131 are not perpendicular. That means, in general formula_136 are "not" the vertices of the hyperbola. But formula_137 point into the directions of the asymptotes. The tangent vector at point formula_138 is
formula_139
Because at a vertex the tangent is perpendicular to the major axis of the hyperbola one gets the parameter formula_140 of a vertex from the equation
formula_141
and hence from
formula_142
which yields
formula_143
The formulae formula_144, formula_145, and formula_146 were used.
The two "vertices" of the hyperbola are formula_147
Implicit representation.
Solving the parametric representation for formula_148 by Cramer's rule and using formula_149, one gets the implicit representation
formula_150
Hyperbola in space.
The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allows formula_151 to be vectors in space.
As an affine image of the hyperbola "y" = 1/"x".
Because the unit hyperbola formula_152 is affinely equivalent to the hyperbola formula_153, an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbola formula_154:
formula_155
formula_156 is the center of the hyperbola, the vectors formula_157 have the directions of the asymptotes and formula_158 is a point of the hyperbola. The tangent vector is
formula_159
At a vertex the tangent is perpendicular to the major axis. Hence
formula_160
and the parameter of a vertex is
formula_161
formula_162 is equivalent to formula_163 and formula_164 are the vertices of the hyperbola.
The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section.
Tangent construction.
The tangent vector can be rewritten by factorization:
formula_165
This means that
<templatestyles src="Block indent/styles.css"/>the diagonal formula_93 of the parallelogram formula_166 is parallel to the tangent at the hyperbola point formula_2 (see diagram).
This property provides a way to construct the tangent at a point on the hyperbola.
This property of a hyperbola is an affine version of the 3-point-degeneration of Pascal's theorem.
The area of the grey parallelogram formula_167 in the above diagram is
formula_168
and hence independent of point formula_2. The last equation follows from a calculation for the case where formula_2 is a vertex and the hyperbola is in its canonical form formula_169
Point construction.
For a hyperbola with parametric representation formula_170 (for simplicity the center is the origin) the following is true:
<templatestyles src="Block indent/styles.css"/>For any two points formula_171 the points
formula_172
are collinear with the center of the hyperbola (see diagram).
The simple proof is a consequence of the equation formula_173.
This property provides a possibility to construct points of a hyperbola if the asymptotes and one point are given.
This property of a hyperbola is an affine version of the 4-point-degeneration of Pascal's theorem.
Tangent–asymptotes triangle.
For simplicity the center of the hyperbola may be the origin and the vectors formula_174 have equal length. If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence formula_175 are the vertices, formula_176 span the minor axis and one gets formula_177 and formula_178.
For the intersection points of the tangent at point formula_179 with the asymptotes one gets the points
formula_180
The "area" of the triangle formula_181 can be calculated by a 2 × 2 determinant:
formula_182
(see rules for determinants).
formula_183 is the area of the rhombus generated by formula_174. The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axes formula_70 of the hyperbola. Hence:
<templatestyles src="Block indent/styles.css"/>The "area" of the triangle formula_184 is independent of the point of the hyperbola: formula_185
Reciprocation of a circle.
The reciprocation of a circle "B" in a circle "C" always yields a conic section such as a hyperbola. The process of "reciprocation in a circle "C"" consists of replacing every line and point in a geometrical figure with their corresponding pole and polar, respectively. The "pole" of a line is the inversion of its closest point to the circle "C", whereas the polar of a point is the converse, namely, a line whose closest point to "C" is the inversion of the point.
The eccentricity of the conic section obtained by reciprocation is the ratio of the distance between the two circles' centers to the radius "r" of the reciprocation circle "C". If B and C represent the points at the centers of the corresponding circles, then
formula_186
Since the eccentricity of a hyperbola is always greater than one, the center B must lie outside of the reciprocating circle "C".
This definition implies that the hyperbola is both the locus of the poles of the tangent lines to the circle "B", as well as the envelope of the polar lines of the points on "B". Conversely, the circle "B" is the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines to "B" have no (finite) poles because they pass through the center C of the reciprocation circle "C"; the polars of the corresponding tangent points on "B" are the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circle "B" that are separated by these tangent points.
Quadratic equation.
A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates formula_187 in the plane,
formula_188
provided that the constants formula_189, formula_190, formula_191, formula_192, formula_193, and formula_194 satisfy the determinant condition
formula_195
This determinant is conventionally called the discriminant of the conic section.
A special case of a hyperbola—the "degenerate hyperbola" consisting of two intersecting lines—occurs when another determinant is zero:
formula_196
This determinant formula_197 is sometimes called the discriminant of the conic section.
The general equation's coefficients can be obtained from the known semi-major axis formula_198, semi-minor axis formula_199, center coordinates formula_200, and rotation angle formula_201 (the angle from the positive horizontal axis to the hyperbola's major axis) using the formulae:
formula_202
These expressions can be derived from the canonical equation
formula_203
by a translation and rotation of the coordinates formula_187:
formula_204
Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula in Conic section#Eccentricity in terms of coefficients.
The center formula_205 of the hyperbola may be determined from the formulae
formula_206
In terms of the new coordinates formula_207 and formula_208, the defining equation of the hyperbola can be written
formula_209
The principal axes of the hyperbola make an angle formula_210 with the positive formula_211-axis that is given by
formula_212
Rotating the coordinate axes so that the formula_211-axis is aligned with the transverse axis brings the equation into its canonical form
formula_213
The major and minor semiaxes formula_9 and formula_214 are defined by the equations
formula_215
where formula_216 and formula_217 are the roots of the quadratic equation
formula_218
For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is
formula_219
The tangent line to a given point formula_220 on the hyperbola is defined by the equation
formula_221
where formula_222, formula_223, and formula_224 are defined by
formula_225
The normal line to the hyperbola at the same point is given by the equation
formula_226
The normal line is perpendicular to the tangent line, and both pass through the same point formula_227
From the equation
formula_228
the left focus is formula_229 and the right focus is formula_230, where formula_12 is the eccentricity. Denote the distances from a point formula_187 to the left and right foci as formula_231 and formula_232. For a point on the right branch,
formula_233
and for a point on the left branch,
formula_234
This can be proved as follows:
If formula_187 is a point on the hyperbola the distance to the left focal point is
formula_235
To the right focal point the distance is
formula_236
If formula_187 is a point on the right branch of the hyperbola then formula_237 and
formula_238
Subtracting these equations one gets
formula_239
If formula_187 is a point on the left branch of the hyperbola then formula_240 and
formula_241
Subtracting these equations one gets
formula_242
In Cartesian coordinates.
Equation.
If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and the "x"-axis is the major axis, then the hyperbola is called "east-west-opening" and
the "foci" are the points formula_243,
the "vertices" are formula_244.
For an arbitrary point formula_245 the distance to the focus formula_246 is formula_247 and to the second focus formula_248. Hence the point formula_245 is on the hyperbola if the following condition is fulfilled
formula_249
Remove the square roots by suitable squarings and use the relation formula_250 to obtain the equation of the hyperbola:
formula_251
This equation is called the canonical form of a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that is congruent to the original (see below).
The axes of symmetry or "principal axes" are the "transverse axis" (containing the segment of length 2"a" with endpoints at the vertices) and the "conjugate axis" (containing the segment of length 2"b" perpendicular to the transverse axis and with midpoint at the hyperbola's center). As opposed to an ellipse, a hyperbola has only two vertices: formula_252. The two points formula_253 on the conjugate axes are "not" on the hyperbola.
It follows from the equation that the hyperbola is "symmetric" with respect to both of the coordinate axes and hence symmetric with respect to the origin.
Eccentricity.
For a hyperbola in the above canonical form, the eccentricity is given by
formula_254
Two hyperbolas are geometrically similar to each other – meaning that they have the same shape, so that one can be transformed into the other by rigid motions (translations and rotations), taking a mirror image, and scaling (magnification) – if and only if they have the same eccentricity.
Asymptotes.
Solving the equation (above) of the hyperbola for formula_255 yields
formula_256
It follows from this that the hyperbola approaches the two lines
formula_257
for large values of formula_258. These two lines intersect at the center (origin) and are called "asymptotes" of the hyperbola formula_259
With the help of the second figure one can see that
formula_260 The "perpendicular distance from a focus to either asymptote" is formula_214 (the semi-minor axis).
From the Hesse normal form formula_261 of the asymptotes and the equation of the hyperbola one gets:
formula_262 The "product of the distances from a point on the hyperbola to both the asymptotes" is the constant formula_263 which can also be written in terms of the eccentricity "e" as formula_264
From the equation formula_265 of the hyperbola (above) one can derive:
formula_266 The "product of the slopes of lines from a point P to the two vertices" is the constant formula_267
In addition, from (2) above it can be shown that
formula_268 "The product of the distances from a point on the hyperbola to the asymptotes along lines parallel to the asymptotes" is the constant formula_269
Semi-latus rectum.
The length of the chord through one of the foci, perpendicular to the major axis of the hyperbola, is called the "latus rectum". One half of it is the "semi-latus rectum" formula_270. A calculation shows
formula_271
The semi-latus rectum formula_270 may also be viewed as the "radius of curvature" at the vertices.
Tangent.
The simplest way to determine the equation of the tangent at a point formula_272 is to implicitly differentiate the equation formula_273 of the hyperbola. Denoting "dy/dx" as "y′", this produces
formula_274
With respect to formula_275, the equation of the tangent at point formula_272 is
formula_276
A particular tangent line distinguishes the hyperbola from the other conic sections. Let "f" be the distance from the vertex "V" (on both the hyperbola and its axis through the two foci) to the nearer focus. Then the distance, along a line perpendicular to that axis, from that focus to a point P on the hyperbola is greater than 2"f". The tangent to the hyperbola at P intersects that axis at point Q at an angle ∠PQV of greater than 45°.
Rectangular hyperbola.
In the case formula_277 the hyperbola is called "rectangular" (or "equilateral"), because its asymptotes intersect at right angles. For this case, the linear eccentricity is formula_278, the eccentricity formula_279 and the semi-latus rectum formula_280. The graph of the equation formula_153 is a rectangular hyperbola.
Parametric representation with hyperbolic sine/cosine.
Using the hyperbolic sine and cosine functions formula_281, a parametric representation of the hyperbola formula_273 can be obtained, which is similar to the parametric representation of an ellipse:
formula_282
which satisfies the Cartesian equation because formula_283
Further parametric representations are given in the section Parametric equations below.
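A minimal numerical check of this parametrization in Python (the semi-axes are arbitrary examples):
import numpy as np

a, b = 2.0, 1.5
t = np.linspace(-2.0, 2.0, 5)
x, y = a * np.cosh(t), b * np.sinh(t)   # right branch; use -a*cosh(t) for the left branch
print(x**2 / a**2 - y**2 / b**2)        # every entry is ~1, as required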
Conjugate hyperbola.
Exchange formula_284 and formula_285 to obtain the equation of the conjugate hyperbola (see diagram):
formula_286 also written as
formula_287
A hyperbola and its conjugate may have diameters which are conjugate. In the theory of special relativity, such diameters may represent axes of time and space, where one hyperbola represents events at a given spatial distance from the center, and the other represents events at a corresponding temporal distance from the center.
In polar coordinates.
Origin at the focus.
The polar coordinates used most commonly for the hyperbola are defined relative to the Cartesian coordinate system that has its "origin in a focus" and its x-axis pointing towards the origin of the "canonical coordinate system" as illustrated in the first diagram.
In this case the angle formula_210 is called true anomaly.
Relative to this coordinate system one has that
formula_288
and
formula_289
Origin at the center.
With polar coordinates relative to the "canonical coordinate system" (see second diagram)
one has that
formula_290
For the right branch of the hyperbola the range of formula_291 is
formula_292
Eccentricity.
When using polar coordinates, the eccentricity of the hyperbola can be expressed as formula_293 where formula_294 is the limit of the angular coordinate. As formula_210 approaches this limit, "r" approaches infinity and the denominator in either of the equations noted above approaches zero, hence:
formula_295
formula_296
formula_297
Parametric equations.
A hyperbola with equation formula_298 can be described by several parametric equations:
Through hyperbolic trigonometric functions: formula_299
As a rational representation: formula_300
Through circular trigonometric functions: formula_301
With the tangent slope formula_302 as parameter: replacing formula_303 by formula_304 in the corresponding representation for the ellipse gives formula_305 Here formula_306 describes the upper and formula_307 the lower half of the hyperbola; the points with vertical tangents (the vertices formula_308) are not covered. The equation of the tangent at the point formula_309 is formula_310 This description of the tangents of a hyperbola is an essential tool for the determination of its orthoptic.
Hyperbolic functions.
Just as the trigonometric functions are defined in terms of the unit circle, so also the hyperbolic functions are defined in terms of the unit hyperbola, as shown in this diagram. In a unit circle, the angle (in radians) is equal to twice the area of the circular sector which that angle subtends. The analogous hyperbolic angle is likewise defined as twice the area of a hyperbolic sector.
Let formula_9 be twice the area between the formula_211 axis and a ray through the origin intersecting the unit hyperbola, and define formula_311 as the coordinates of the intersection point.
Then the area of the hyperbolic sector is the area of the triangle minus the curved region past the vertex at formula_312:
formula_313
which simplifies to the area hyperbolic cosine
formula_314
Solving for formula_211 yields the exponential form of the hyperbolic cosine:
formula_315
From formula_152 one gets
formula_316
and its inverse the area hyperbolic sine:
formula_317
Other hyperbolic functions are defined according to the hyperbolic cosine and hyperbolic sine, so for example
formula_318
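The defining relation between the hyperbolic angle and the sector area can be checked numerically; the following Python sketch evaluates the sector area by quadrature and recovers the angle (the value of the angle is an arbitrary example):
import numpy as np
from scipy.integrate import quad

ang = 1.3                                   # the hyperbolic angle a
x, y = np.cosh(ang), np.sinh(ang)           # intersection point on the unit hyperbola
tail, _ = quad(lambda t: np.sqrt(t**2 - 1), 1.0, x)   # curved region past the vertex (1,0)
sector = x * y / 2 - tail                   # area of the hyperbolic sector
print(2 * sector, ang)                      # both ~1.3: the angle is twice the sector area
print(np.arccosh(x), np.log(x + np.sqrt(x**2 - 1)))   # arcosh equals its logarithmic form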
Properties.
Reflection property.
The tangent at a point formula_2 bisects the angle between the lines formula_319 This is called the "optical property" or "reflection property" of a hyperbola.
Let formula_320 be the point on the line formula_88 with the distance formula_16 to the focus formula_15 (see diagram, formula_9 is the semi-major axis of the hyperbola). Line formula_321 is the bisector of the angle between the lines formula_322. In order to prove that formula_321 is the tangent line at point formula_2, one checks that any point formula_323 on line formula_321 which is different from formula_2 cannot be on the hyperbola. Hence formula_321 has only point formula_2 in common with the hyperbola and is, therefore, the tangent at point formula_2.
From the diagram and the triangle inequality one recognizes that formula_324 holds, which means: formula_325. But if formula_323 is a point of the hyperbola, the difference should be formula_16.
Midpoints of parallel chords.
The midpoints of parallel chords of a hyperbola lie on a line through the center (see diagram).
The points of any chord may lie on different branches of the hyperbola.
The proof of the property on midpoints is best done for the hyperbola formula_153. Because any hyperbola is an affine image of the hyperbola formula_153 (see section below) and an affine transformation preserves parallelism and midpoints of line segments, the property is true for all hyperbolas:
For two points formula_326 of the hyperbola formula_153
the midpoint of the chord is formula_327
the slope of the chord is formula_328
For parallel chords the slope is constant and the midpoints of the parallel chords lie on the line formula_329
Consequence: for any pair of points formula_330 of a chord there exists a "skew reflection" with an axis (set of fixed points) passing through the center of the hyperbola, which exchanges the points formula_330 and leaves the hyperbola (as a whole) fixed. A skew reflection is a generalization of an ordinary reflection across a line formula_302, where all point-image pairs are on a line perpendicular to formula_302.
Because a skew reflection leaves the hyperbola fixed, the pair of asymptotes is fixed, too. Hence the midpoint formula_7 of a chord formula_331 divides the related line segment formula_332 between the asymptotes into halves, too. This means that formula_333. This property can be used for the construction of further points formula_323 of the hyperbola if a point formula_2 and the asymptotes are given.
If the chord degenerates into a "tangent", then the touching point divides the line segment between the asymptotes in two halves.
Orthogonal tangents – orthoptic.
For a hyperbola formula_334 the intersection points of "orthogonal" tangents lie on the circle formula_335.
This circle is called the "orthoptic" of the given hyperbola.
The tangents may belong to points on different branches of the hyperbola.
In the case formula_336 there are no pairs of orthogonal tangents.
Pole-polar relation for a hyperbola.
Any hyperbola can be described in a suitable coordinate system by an equation formula_273. The equation of the tangent at a point formula_337 of the hyperbola is formula_338 If one allows point formula_337 to be an arbitrary point different from the origin, then
point formula_339 is mapped onto the line formula_340, not through the center of the hyperbola.
This relation between points and lines is a bijection.
The inverse function maps
line formula_341 onto the point formula_342 and
line formula_343 onto the point formula_344
Such a relation between points and lines generated by a conic is called pole-polar relation or just "polarity". The pole is the point, the polar the line. See Pole and polar.
By calculation one checks the following properties of the pole-polar relation of the hyperbola:
For a point (pole) "on" the hyperbola, the polar is the tangent at this point (see diagram: formula_345).
For a pole "outside" the hyperbola, the intersection points of its polar with the hyperbola are the tangency points of the two tangents passing through the pole (see diagram: formula_346).
For a point "within" the hyperbola, the polar has no point in common with the hyperbola (see diagram: formula_347).
"Remarks:"
The intersection point of two polars (for example, formula_348) is the pole of the line through their poles (here, formula_349).
The foci formula_350 and formula_351 and the directrices formula_352 and formula_353 respectively belong to pairs of pole and polar.
Pole-polar relations exist for ellipses and parabolas, too.
Arc length.
The arc length of a hyperbola does not have an elementary expression. The upper half of a hyperbola can be parameterized as
formula_354
Then the integral giving the arc length formula_355 from formula_356 to formula_357 can be computed as:
formula_358
After using the substitution formula_359, this can also be represented using the incomplete elliptic integral of the second kind formula_360 with parameter formula_361:
formula_362
Using only real numbers, this becomes
formula_363
where formula_53 is the incomplete elliptic integral of the first kind with parameter formula_361 and formula_364 is the Gudermannian function.
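The elliptic-integral expression can be checked against direct numerical quadrature. The following Python sketch (arbitrary semi-axes and endpoints) assumes SciPy's ellipkinc/ellipeinc, which use the parameter convention formula_361 and, in recent versions, accept negative parameters:
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipkinc, ellipeinc   # F(phi | m) and E(phi | m)

a, b = 2.0, 1.0
x1, x2 = 2.5, 6.0                                # arc on the upper right branch, x >= a
v1, v2 = np.arccosh(x1 / a), np.arccosh(x2 / a)

# direct quadrature of the arc-length integral in the variable v
s_quad, _ = quad(lambda v: b * np.sqrt(1 + (1 + a**2 / b**2) * np.sinh(v)**2), v1, v2)

m = -(a / b)**2                                  # negative parameter of the elliptic integrals
def antiderivative(v):
    phi = np.arctan(np.sinh(v))                  # the Gudermannian function gd(v)
    return b * (ellipkinc(phi, m) - ellipeinc(phi, m)
                + np.sqrt(1 + (a / b)**2 * np.tanh(v)**2) * np.sinh(v))

print(s_quad, antiderivative(v2) - antiderivative(v1))   # the two values agree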
Derived curves.
Several other curves can be derived from the hyperbola by inversion, the so-called inverse curves of the hyperbola. If the center of inversion is chosen as the hyperbola's own center, the inverse curve is the lemniscate of Bernoulli; the lemniscate is also the envelope of circles centered on a rectangular hyperbola and passing through the origin. If the center of inversion is chosen at a focus or a vertex of the hyperbola, the resulting inverse curves are a limaçon or a strophoid, respectively.
Elliptic coordinates.
A family of confocal hyperbolas is the basis of the system of elliptic coordinates in two dimensions. These hyperbolas are described by the equation
formula_365
where the foci are located at a distance "c" from the origin on the "x"-axis, and where θ is the angle of the asymptotes with the "x"-axis. Every hyperbola in this family is orthogonal to every ellipse that shares the same foci. This orthogonality may be shown by a conformal map of the Cartesian coordinate system "w" = "z" + 1/"z", where "z"= "x" + "iy" are the original Cartesian coordinates, and "w"="u" + "iv" are those after the transformation.
Other orthogonal two-dimensional coordinate systems involving hyperbolas may be obtained by other conformal mappings. For example, the mapping "w" = "z"2 transforms the Cartesian coordinate system into two families of orthogonal hyperbolas.
Conic section analysis of the hyperbolic appearance of circles.
Besides providing a uniform description of circles, ellipses, parabolas, and hyperbolas, conic sections can also be understood as a natural model of the geometry of perspective in the case where the scene being viewed consists of circles, or more generally an ellipse. The viewer is typically a camera or the human eye and the image of the scene a central projection onto an image plane, that is, all projection rays pass a fixed point "O", the center. The lens plane is a plane parallel to the image plane at the lens "O".
The image of a circle "c" is
a "circle", if the circle "c" is in a special position, for example, parallel to the image plane;
an "ellipse", if "c" has "no" point in common with the lens plane;
a "parabola", if "c" has exactly "one" point in common with the lens plane;
a "hyperbola", if "c" has "two" points in common with the lens plane.
These results can be understood if one recognizes that the projection process can be seen in two steps: first, circle "c" and point "O" generate a cone, which is then cut by the image plane to generate the image.
One sees a hyperbola whenever catching sight of a portion of a circle cut by one's lens plane. The inability to see very much of the arms of the visible branch, combined with the complete absence of the second branch, makes it virtually impossible for the human visual system to recognize the connection with hyperbolas.
Applications.
Sundials.
Hyperbolas may be seen in many sundials. On any given day, the sun revolves in a circle on the celestial sphere, and its rays striking the point of a sundial trace out a cone of light. The intersection of this cone with the horizontal plane of the ground forms a conic section. At most populated latitudes and at most times of the year, this conic section is a hyperbola. In practical terms, the shadow of the tip of a pole traces out a hyperbola on the ground over the course of a day (this path is called the "declination line"). The shape of this hyperbola varies with the geographical latitude and with the time of the year, since those factors affect the cone of the sun's rays relative to the horizon. The collection of such hyperbolas for a whole year at a given location was called a "pelekinon" by the Greeks, since it resembles a double-bladed axe.
Multilateration.
A hyperbola is the basis for solving multilateration problems, the task of locating a point from the differences in its distances to given points — or, equivalently, the difference in arrival times of synchronized signals between the point and the given points. Such problems are important in navigation, particularly on water; a ship can locate its position from the difference in arrival times of signals from LORAN or GPS transmitters. Conversely, a homing beacon or any transmitter can be located by comparing the arrival times of its signals at two separate receiving stations; such techniques may be used to track objects and people. In particular, the set of possible positions of a point that has a distance difference of 2"a" from two given points is a hyperbola of vertex separation 2"a" whose foci are the two given points.
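A minimal multilateration sketch in Python: each measured range difference constrains the emitter to one hyperbola with two stations as foci, and the intersection of the hyperbolas is found by least squares (all positions here are hypothetical):
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])   # known receiver positions
emitter = np.array([3.0, 5.0])                               # unknown point to recover

true_r = np.linalg.norm(stations - emitter, axis=1)
ddiff = true_r[1:] - true_r[0]      # range differences, e.g. from signal arrival times

def residuals(p):
    r = np.linalg.norm(stations - p, axis=1)
    return (r[1:] - r[0]) - ddiff   # each zero residual is one hyperbola through p

print(least_squares(residuals, x0=np.array([1.0, 1.0])).x)   # ~ [3, 5]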
Path followed by a particle.
The path followed by any particle in the classical Kepler problem is a conic section. In particular, if the total energy "E" of the particle is greater than zero (that is, if the particle is unbound), the path of such a particle is a hyperbola. This property is useful in studying atomic and sub-atomic forces by scattering high-energy particles; for example, the Rutherford experiment demonstrated the existence of an atomic nucleus by examining the scattering of alpha particles from gold atoms. If the short-range nuclear interactions are ignored, the atomic nucleus and the alpha particle interact only by a repulsive Coulomb force, which satisfies the inverse square law requirement for a Kepler problem.
Korteweg–de Vries equation.
The hyperbolic trig function formula_366 appears as one solution to the Korteweg–de Vries equation, which describes the motion of a soliton wave in a canal.
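The soliton solution can be verified symbolically. The following SymPy sketch checks that the sech-squared travelling wave satisfies the KdV equation in the common normalization "u""t" + 6"uu""x" + "u""xxx" = 0 (the wave speed "c" is a free parameter):
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2     # soliton built from sech

kdv_residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(kdv_residual.rewrite(sp.exp)))         # 0: the soliton solves KdV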
Angle trisection.
As shown first by Apollonius of Perga, a hyperbola can be used to trisect any angle, a well-studied problem of geometry. Given an angle, first draw a circle centered at its vertex O, which intersects the sides of the angle at points A and B. Next draw the line segment with endpoints A and B and its perpendicular bisector formula_367. Construct a hyperbola of eccentricity "e"=2 with formula_367 as directrix and B as a focus. Let P be the upper intersection of the hyperbola with the circle. Angle POB trisects angle AOB.
To prove this, reflect the line segment OP about the line formula_367 obtaining the point P' as the image of P. Segment AP' has the same length as segment BP due to the reflection, while segment PP' has the same length as segment BP due to the eccentricity of the hyperbola. As OA, OP', OP and OB are all radii of the same circle (and so, have the same length), the triangles OAP', OPP' and OPB are all congruent. Therefore, the angle has been trisected, since 3×POB = AOB.
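The construction can also be checked numerically. In the following Python sketch the focus–directrix property of the "e" = 2 hyperbola, |PB| = 2·dist(P, formula_367), is solved directly on the circle, and the recovered angle POB equals one third of AOB (the trisected angle is an arbitrary example):
import numpy as np
from scipy.optimize import brentq

r, beta = 1.0, 1.2                                # circle radius and the angle AOB
A = r * np.array([np.cos(beta), np.sin(beta)])
B = r * np.array([1.0, 0.0])
mid = (A + B) / 2
n = (A - B) / np.linalg.norm(A - B)               # unit normal of the bisector l

def g(theta):
    P = r * np.array([np.cos(theta), np.sin(theta)])
    # focus-directrix condition of the e = 2 hyperbola: |PB| - 2*dist(P, l)
    return np.linalg.norm(P - B) - 2 * abs(np.dot(P - mid, n))

theta = brentq(g, 1e-9, beta / 2 - 1e-9)          # the upper intersection P on the circle
print(theta, beta / 3)                            # both ~0.4: angle POB = AOB / 3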
Efficient portfolio frontier.
In portfolio theory, the locus of mean-variance efficient portfolios (called the efficient frontier) is the upper half of the east-opening branch of a hyperbola drawn with the portfolio return's standard deviation plotted horizontally and its expected value plotted vertically; according to this theory, all rational investors would choose a portfolio characterized by some point on this locus.
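A small numerical illustration in Python (hypothetical means and covariances): solving the minimum-variance problem for a grid of target returns shows that the frontier variance is an exact quadratic in the expected return, i.e. the frontier is a hyperbola branch in the (standard deviation, mean) plane:
import numpy as np

m = np.array([0.08, 0.12, 0.15])                  # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])            # hypothetical covariance matrix

def frontier_var(mu):
    # KKT system of: minimize w'Sigma w  subject to  w'1 = 1, w'm = mu
    n = len(m)
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2 * Sigma
    K[:n, n] = K[n, :n] = 1.0
    K[:n, n + 1] = K[n + 1, :n] = m
    w = np.linalg.solve(K, np.r_[np.zeros(n), 1.0, mu])[:n]
    return w @ Sigma @ w

mus = np.linspace(0.06, 0.20, 9)
vars_ = np.array([frontier_var(mu) for mu in mus])
coef = np.polyfit(mus, vars_, 2)                  # sigma^2 = A*mu^2 + B*mu + C
print(np.max(np.abs(np.polyval(coef, mus) - vars_)))   # ~0: exactly quadratic in mu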
Biochemistry.
In biochemistry and pharmacology, the Hill equation and Hill-Langmuir equation respectively describe biological responses and the formation of protein–ligand complexes as functions of ligand concentration. They are both rectangular hyperbolae.
Hyperbolas as plane sections of quadrics.
Hyperbolas appear as plane sections of the following quadrics:
the elliptic cone,
the hyperbolic cylinder,
the hyperbolic paraboloid,
the hyperboloid of one sheet, and
the hyperboloid of two sheets.
See also.
Other conic sections.
<templatestyles src="Div col/styles.css"/>
Other related topics.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "xy = 1."
},
{
"math_id": 1,
"text": "y(x) = 1/x"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "|PF_1|,\\, |PF_2|"
},
{
"math_id": 4,
"text": "F_1, F_2"
},
{
"math_id": 5,
"text": "2a,\\, a>0"
},
{
"math_id": 6,
"text": "H = \\left\\{P : \\left|\\left|PF_2\\right| - \\left|PF_1\\right|\\right| = 2a \\right\\} ."
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "V_1, V_2"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "c"
},
{
"math_id": 11,
"text": "\\tfrac c a"
},
{
"math_id": 12,
"text": "e"
},
{
"math_id": 13,
"text": "\\left|\\left|PF_2\\right| - \\left|PF_1\\right|\\right| = 2a"
},
{
"math_id": 14,
"text": "c_2"
},
{
"math_id": 15,
"text": "F_2"
},
{
"math_id": 16,
"text": "2a"
},
{
"math_id": 17,
"text": "F_1"
},
{
"math_id": 18,
"text": "|PF_1|=|Pc_2|."
},
{
"math_id": 19,
"text": "+45^\\circ"
},
{
"math_id": 20,
"text": "\\xi,\\eta"
},
{
"math_id": 21,
"text": "x = \\tfrac{\\xi+\\eta}{\\sqrt{2}},\\; y = \\tfrac{-\\xi+\\eta}{\\sqrt{2}} "
},
{
"math_id": 22,
"text": "\\tfrac{x^2-y^2}{a^2} = 1"
},
{
"math_id": 23,
"text": "\\tfrac{2\\xi\\eta}{a^2} = 1"
},
{
"math_id": 24,
"text": "\\eta"
},
{
"math_id": 25,
"text": "\\eta = \\tfrac{a^2/2}{\\xi} \\ . "
},
{
"math_id": 26,
"text": "f: x \\mapsto \\tfrac{A}{x},\\; A>0\\; , "
},
{
"math_id": 27,
"text": "y = \\frac{A}{x}\\;, A>0\\; ,"
},
{
"math_id": 28,
"text": "y = x"
},
{
"math_id": 29,
"text": "(0,0)"
},
{
"math_id": 30,
"text": " a = b = \\sqrt{2A} \\; ,"
},
{
"math_id": 31,
"text": "\\left(\\sqrt{A},\\sqrt{A}\\right), \\left(-\\sqrt{A},-\\sqrt{A}\\right) \\; ,"
},
{
"math_id": 32,
"text": " p=a=\\sqrt{2A} \\; ,"
},
{
"math_id": 33,
"text": "c=2\\sqrt{A}"
},
{
"math_id": 34,
"text": "e=\\sqrt{2} \\; ,"
},
{
"math_id": 35,
"text": "y=-\\tfrac{A}{x_0^2}x+2\\tfrac{A}{x_0}"
},
{
"math_id": 36,
"text": "(x_0,A/x_0)\\; ."
},
{
"math_id": 37,
"text": "-45^\\circ"
},
{
"math_id": 38,
"text": "y = -\\frac{A}{x} \\; , ~~ A>0\\; ,"
},
{
"math_id": 39,
"text": " y = -x"
},
{
"math_id": 40,
"text": "\\left(-\\sqrt{A},\\sqrt{A}\\right), \\left(\\sqrt{A},-\\sqrt{A}\\right) \\; ."
},
{
"math_id": 41,
"text": "y=\\frac{A}{x}, \\ A\\ne 0\\ ,"
},
{
"math_id": 42,
"text": "(c_0,d_0)"
},
{
"math_id": 43,
"text": "y=\\frac{A}{x-c_0}+d_0\\; ,"
},
{
"math_id": 44,
"text": "x=c_0 "
},
{
"math_id": 45,
"text": "y=d_0"
},
{
"math_id": 46,
"text": "a,b,p,c,e"
},
{
"math_id": 47,
"text": "d = \\frac{a^2}c"
},
{
"math_id": 48,
"text": "\\frac{|PF_1|}{|Pl_1|} = \\frac{|PF_2|}{|Pl_2|} = e= \\frac{c}{a} \\, ."
},
{
"math_id": 49,
"text": "F_1, l_1"
},
{
"math_id": 50,
"text": "|PF_1|^2 = (x-c)^2+y^2,\\ |Pl_1|^2 = \\left(x-\\tfrac{a^2}{c}\\right)^2"
},
{
"math_id": 51,
"text": "y^2 = \\tfrac{b^2}{a^2}x^2-b^2"
},
{
"math_id": 52,
"text": "|PF_1|^2-\\frac{c^2}{a^2}|Pl_1|^2 = 0\\ ."
},
{
"math_id": 53,
"text": "F"
},
{
"math_id": 54,
"text": "l"
},
{
"math_id": 55,
"text": "e > 1"
},
{
"math_id": 56,
"text": "H = \\left\\{P \\, \\Biggr| \\, \\frac{|PF|}{|Pl|} = e\\right\\} "
},
{
"math_id": 57,
"text": "e = 1"
},
{
"math_id": 58,
"text": "e < 1"
},
{
"math_id": 59,
"text": "F=(f,0) ,\\ e >0"
},
{
"math_id": 60,
"text": "x=-\\tfrac{f}{e}"
},
{
"math_id": 61,
"text": "P=(x,y)"
},
{
"math_id": 62,
"text": "|PF|^2 = e^2|Pl|^2"
},
{
"math_id": 63,
"text": "(x-f)^2+y^2 = e^2\\left(x+\\tfrac{f}{e}\\right)^2 = (e x+f)^2"
},
{
"math_id": 64,
"text": "x^2(e^2-1)+2xf(1+e)-y^2 = 0."
},
{
"math_id": 65,
"text": "p=f(1+e)"
},
{
"math_id": 66,
"text": "x^2(e^2-1)+2px-y^2 = 0."
},
{
"math_id": 67,
"text": "e<1"
},
{
"math_id": 68,
"text": "e=1"
},
{
"math_id": 69,
"text": "e>1"
},
{
"math_id": 70,
"text": "a,b"
},
{
"math_id": 71,
"text": "e^2-1 = \\tfrac{b^2}{a^2}, \\text { and }\\ p = \\tfrac{b^2}{a}"
},
{
"math_id": 72,
"text": "\\frac{(x+a)^2}{a^2} - \\frac{y^2}{b^2} = 1 \\, ,"
},
{
"math_id": 73,
"text": "(-a,0)"
},
{
"math_id": 74,
"text": "c \\cdot \\tfrac{a^2}{c}=a^2"
},
{
"math_id": 75,
"text": "L_1"
},
{
"math_id": 76,
"text": "l_1"
},
{
"math_id": 77,
"text": "x^2+y^2=a^2"
},
{
"math_id": 78,
"text": "E_1"
},
{
"math_id": 79,
"text": "\\overline{F_1F_2}"
},
{
"math_id": 80,
"text": "d_1, d_2"
},
{
"math_id": 81,
"text": "c_1"
},
{
"math_id": 82,
"text": "c_2 "
},
{
"math_id": 83,
"text": "A"
},
{
"math_id": 84,
"text": "B"
},
{
"math_id": 85,
"text": "\\overline{PF_1}"
},
{
"math_id": 86,
"text": "\\overline{PA}"
},
{
"math_id": 87,
"text": "d_1"
},
{
"math_id": 88,
"text": "\\overline{PF_2}"
},
{
"math_id": 89,
"text": "\\overline{PB}"
},
{
"math_id": 90,
"text": "d_2"
},
{
"math_id": 91,
"text": "|PF_1| - |PF_2| = |PA| - |PB| = |AB|"
},
{
"math_id": 92,
"text": "A, B"
},
{
"math_id": 93,
"text": "AB"
},
{
"math_id": 94,
"text": "\\overline{AB}"
},
{
"math_id": 95,
"text": "V_1,V_2"
},
{
"math_id": 96,
"text": " |AB|"
},
{
"math_id": 97,
"text": "|PF_1| = |PB|"
},
{
"math_id": 98,
"text": "B(U),B(V)"
},
{
"math_id": 99,
"text": "U,V"
},
{
"math_id": 100,
"text": "U"
},
{
"math_id": 101,
"text": "V"
},
{
"math_id": 102,
"text": "\\pi"
},
{
"math_id": 103,
"text": "B(U)"
},
{
"math_id": 104,
"text": "B(V)"
},
{
"math_id": 105,
"text": "\\tfrac{x^2}{a^2}-\\tfrac{y^2}{b^2} = 1"
},
{
"math_id": 106,
"text": "P = (x_0,y_0)"
},
{
"math_id": 107,
"text": "A = (a,y_0), B = (x_0,0)"
},
{
"math_id": 108,
"text": "\\overline{BP}"
},
{
"math_id": 109,
"text": "\\overline{AP}"
},
{
"math_id": 110,
"text": "V_1"
},
{
"math_id": 111,
"text": "V_2"
},
{
"math_id": 112,
"text": "S_1 A_i"
},
{
"math_id": 113,
"text": "S_2 B_i"
},
{
"math_id": 114,
"text": "y=\\tfrac{a}{x-b}+c,\\ a \\ne 0 "
},
{
"math_id": 115,
"text": "(x_1,y_1),\\;(x_2,y_2),\\; (x_3,y_3)"
},
{
"math_id": 116,
"text": "a,b,c"
},
{
"math_id": 117,
"text": "y=m_1x+d_1, \\ y=m_2x + d_2\\ ,m_1,m_2 \\ne 0"
},
{
"math_id": 118,
"text": "\\frac{m_1}{m_2}\\ ."
},
{
"math_id": 119,
"text": "P_i = (x_i,y_i),\\ i=1,2,3,4,\\ x_i\\ne x_k, y_i\\ne y_k, i\\ne k"
},
{
"math_id": 120,
"text": "y = \\tfrac{a}{x-b} + c"
},
{
"math_id": 121,
"text": "P_3"
},
{
"math_id": 122,
"text": "P_4"
},
{
"math_id": 123,
"text": "\\frac{(y_4-y_1)}{(x_4-x_1)}\\frac{(x_4-x_2)}{(y_4-y_2)}=\\frac{(y_3-y_1)}{(x_3-x_1)}\\frac{(x_3-x_2)}{(y_3-y_2)}"
},
{
"math_id": 124,
"text": "y = a/x"
},
{
"math_id": 125,
"text": "P_i=(x_i,y_i),\\ i=1,2,3,\\ x_i\\ne x_k, y_i\\ne y_k, i\\ne k"
},
{
"math_id": 126,
"text": "\\frac{({\\color{red}y}-y_1)}{({\\color{green}x}-x_1)}\\frac{({\\color{green}x}-x_2)}{({\\color{red}y}-y_2)}=\\frac{(y_3-y_1)}{(x_3-x_1)}\\frac{(x_3-x_2)}{(y_3-y_2)}"
},
{
"math_id": 127,
"text": "{\\color{red}y}"
},
{
"math_id": 128,
"text": "x^2 - y^2 = 1"
},
{
"math_id": 129,
"text": "\\vec x \\to \\vec f_0+A\\vec x"
},
{
"math_id": 130,
"text": "\\vec f_0"
},
{
"math_id": 131,
"text": "\\vec f_1, \\vec f_2"
},
{
"math_id": 132,
"text": "(\\pm\\cosh(t),\\sinh(t)), t \\in \\R,"
},
{
"math_id": 133,
"text": "\\vec x = \\vec p(t)=\\vec f_0 \\pm\\vec f_1 \\cosh t +\\vec f_2 \\sinh t \\ ."
},
{
"math_id": 134,
"text": "\\vec f_0+ \\vec f_1"
},
{
"math_id": 135,
"text": "\\vec f_2"
},
{
"math_id": 136,
"text": "\\vec f_0\\pm \\vec f_1"
},
{
"math_id": 137,
"text": "\\vec f_1\\pm \\vec f_2"
},
{
"math_id": 138,
"text": "\\vec p(t)"
},
{
"math_id": 139,
"text": "\\vec p'(t) = \\vec f_1\\sinh t + \\vec f_2\\cosh t \\ ."
},
{
"math_id": 140,
"text": "t_0"
},
{
"math_id": 141,
"text": "\\vec p'(t)\\cdot \\left(\\vec p(t) -\\vec f_0\\right) = \\left(\\vec f_1\\sinh t + \\vec f_2\\cosh t\\right) \\cdot \\left(\\vec f_1 \\cosh t +\\vec f_2 \\sinh t\\right) = 0"
},
{
"math_id": 142,
"text": "\\coth (2t_0)= -\\tfrac{\\vec f_1^{\\, 2}+\\vec f_2^{\\, 2}}{2\\vec f_1 \\cdot \\vec f_2} \\ ,"
},
{
"math_id": 143,
"text": "t_0=\\tfrac{1}{4}\\ln\\tfrac{\\left(\\vec f_1-\\vec f_2\\right)^2}{\\left(\\vec f_1+\\vec f_2\\right)^2}."
},
{
"math_id": 144,
"text": "\\cosh^2 x + \\sinh^2 x = \\cosh 2x"
},
{
"math_id": 145,
"text": "2\\sinh x \\cosh x = \\sinh 2x"
},
{
"math_id": 146,
"text": "\\operatorname{arcoth} x = \\tfrac{1}{2}\\ln\\tfrac{x+1}{x-1}"
},
{
"math_id": 147,
"text": "\\vec f_0\\pm\\left(\\vec f_1\\cosh t_0 +\\vec f_2 \\sinh t_0\\right)."
},
{
"math_id": 148,
"text": " \\cosh t, \\sinh t"
},
{
"math_id": 149,
"text": "\\;\\cosh^2t-\\sinh^2t -1 = 0\\; "
},
{
"math_id": 150,
"text": "\\det\\left(\\vec x\\!-\\!\\vec f\\!_0,\\vec f\\!_2\\right)^2 - \\det\\left(\\vec f\\!_1,\\vec x\\!-\\!\\vec f\\!_0\\right)^2 - \\det\\left(\\vec f\\!_1,\\vec f\\!_2\\right)^2 = 0 ."
},
{
"math_id": 151,
"text": "\\vec f\\!_0, \\vec f\\!_1, \\vec f\\!_2"
},
{
"math_id": 152,
"text": "x^2-y^2=1"
},
{
"math_id": 153,
"text": "y=1/x"
},
{
"math_id": 154,
"text": "y = 1/x \\, "
},
{
"math_id": 155,
"text": "\\vec x = \\vec p(t) = \\vec f_0 + \\vec f_1 t + \\vec f_2 \\tfrac{1}{t}, \\quad t\\ne 0\\, ."
},
{
"math_id": 156,
"text": "M: \\vec f_0 "
},
{
"math_id": 157,
"text": "\\vec f_1 , \\vec f_2 "
},
{
"math_id": 158,
"text": "\\vec f_1 + \\vec f_2 "
},
{
"math_id": 159,
"text": "\\vec p'(t)=\\vec f_1 - \\vec f_2 \\tfrac{1}{t^2}."
},
{
"math_id": 160,
"text": "\\vec p'(t)\\cdot \\left(\\vec p(t) -\\vec f_0\\right) = \\left(\\vec f_1 - \\vec f_2 \\tfrac{1}{t^2}\\right)\\cdot\\left(\\vec f_1 t+ \\vec f_2 \\tfrac{1}{t}\\right) = \\vec f_1^2t-\\vec f_2^2 \\tfrac{1}{t^3} = 0"
},
{
"math_id": 161,
"text": "t_0= \\pm \\sqrt[4]{\\frac{\\vec f_2^2}{\\vec f_1^2}}."
},
{
"math_id": 162,
"text": "\\left|\\vec f\\!_1\\right| = \\left|\\vec f\\!_2\\right|"
},
{
"math_id": 163,
"text": "t_0 = \\pm 1"
},
{
"math_id": 164,
"text": "\\vec f_0 \\pm (\\vec f_1+\\vec f_2)"
},
{
"math_id": 165,
"text": "\\vec p'(t)=\\tfrac{1}{t}\\left(\\vec f_1t - \\vec f_2 \\tfrac{1}{t}\\right) \\ ."
},
{
"math_id": 166,
"text": "M: \\ \\vec f_0, \\ A=\\vec f_0+\\vec f_1t,\\ B:\\ \\vec f_0+ \\vec f_2 \\tfrac{1}{t},\\ P:\\ \\vec f_0+\\vec f_1t+\\vec f_2 \\tfrac{1}{t}"
},
{
"math_id": 167,
"text": "MAPB"
},
{
"math_id": 168,
"text": "\\text{Area} = \\left|\\det\\left( t\\vec f_1, \\tfrac{1}{t}\\vec f_2\\right)\\right| = \\left|\\det\\left(\\vec f_1,\\vec f_2\\right)\\right| = \\cdots = \\frac{a^2+b^2}{4} "
},
{
"math_id": 169,
"text": "\\tfrac{x^2}{a^2}-\\tfrac{y^2}{b^2}=1 \\, ."
},
{
"math_id": 170,
"text": "\\vec x = \\vec p(t) = \\vec f_1 t + \\vec f_2 \\tfrac{1}{t}"
},
{
"math_id": 171,
"text": "P_1:\\ \\vec f_1 t_1+ \\vec f_2 \\tfrac{1}{t_1},\\ P_2:\\ \\vec f_1 t_2+ \\vec f_2 \\tfrac{1}{t_2}"
},
{
"math_id": 172,
"text": "A:\\ \\vec a =\\vec f_1 t_1+ \\vec f_2 \\tfrac{1}{t_2}, \\ B:\\ \\vec b=\\vec f_1 t_2+ \\vec f_2 \\tfrac{1}{t_1}"
},
{
"math_id": 173,
"text": "\\tfrac{1}{t_1}\\vec a = \\tfrac{1}{t_2}\\vec b"
},
{
"math_id": 174,
"text": "\\vec f_1,\\vec f_2"
},
{
"math_id": 175,
"text": "\\pm (\\vec f_1 + \\vec f_2)"
},
{
"math_id": 176,
"text": "\\pm(\\vec f_1-\\vec f_2)"
},
{
"math_id": 177,
"text": "|\\vec f_1 + \\vec f_2| = a"
},
{
"math_id": 178,
"text": "|\\vec f_1 - \\vec f_2| = b"
},
{
"math_id": 179,
"text": "\\vec p(t_0) = \\vec f_1 t_0 + \\vec f_2 \\tfrac{1}{t_0}"
},
{
"math_id": 180,
"text": "C = 2t_0\\vec f_1,\\ D = \\tfrac{2}{t_0}\\vec f_2."
},
{
"math_id": 181,
"text": "M,C,D"
},
{
"math_id": 182,
"text": "A = \\tfrac{1}{2}\\Big|\\det\\left( 2t_0\\vec f_1, \\tfrac{2}{t_0}\\vec f_2\\right)\\Big| = 2\\Big|\\det\\left(\\vec f_1,\\vec f_2\\right)\\Big|"
},
{
"math_id": 183,
"text": "\\left|\\det(\\vec f_1,\\vec f_2)\\right|"
},
{
"math_id": 184,
"text": "MCD"
},
{
"math_id": 185,
"text": "A = ab."
},
{
"math_id": 186,
"text": "e = \\frac{\\overline{BC}}{r}."
},
{
"math_id": 187,
"text": "(x, y)"
},
{
"math_id": 188,
"text": "\nA_{xx} x^2 + 2 A_{xy} xy + A_{yy} y^2 + 2 B_x x + 2 B_y y + C = 0,\n"
},
{
"math_id": 189,
"text": "A_{xx},"
},
{
"math_id": 190,
"text": "A_{xy},"
},
{
"math_id": 191,
"text": "A_{yy},"
},
{
"math_id": 192,
"text": "B_x,"
},
{
"math_id": 193,
"text": "B_y,"
},
{
"math_id": 194,
"text": "C"
},
{
"math_id": 195,
"text": "\nD := \\begin{vmatrix}\n A_{xx} & A_{xy} \\\\\n A_{xy} & A_{yy} \\end{vmatrix} < 0.\n"
},
{
"math_id": 196,
"text": "\n\\Delta := \\begin{vmatrix}\n A_{xx} & A_{xy} & B_x \\\\\n A_{xy} & A_{yy} & B_y \\\\\n B_x & B_y & C\n\\end{vmatrix} = 0.\n"
},
{
"math_id": 197,
"text": "\\Delta"
},
{
"math_id": 198,
"text": "a,"
},
{
"math_id": 199,
"text": "b,"
},
{
"math_id": 200,
"text": "(x_\\circ, y_\\circ)"
},
{
"math_id": 201,
"text": "\\theta"
},
{
"math_id": 202,
"text": "\\begin{align}\n A_{xx} &= -a^2 \\sin^2\\theta + b^2 \\cos^2\\theta, &\n B_{x} &= -A_{xx} x_\\circ - A_{xy} y_\\circ, \\\\[1ex]\n A_{yy} &= -a^2 \\cos^2\\theta + b^2 \\sin^2\\theta, &\n B_{y} &= - A_{xy} x_\\circ - A_{yy} y_\\circ, \\\\[1ex]\n A_{xy} &= \\left(a^2 + b^2\\right) \\sin\\theta \\cos\\theta, &\n C &= A_{xx} x_\\circ^2 + 2A_{xy} x_\\circ y_\\circ + A_{yy} y_\\circ^2 - a^2 b^2.\n\\end{align}"
},
{
"math_id": 203,
"text": "\\frac{X^2}{a^2} - \\frac{Y^2}{b^2} = 1"
},
{
"math_id": 204,
"text": "\\begin{alignat}{2}\nX &= \\phantom+\\left(x - x_\\circ\\right) \\cos\\theta &&+ \\left(y - y_\\circ\\right) \\sin\\theta, \\\\\nY &= -\\left(x - x_\\circ\\right) \\sin\\theta &&+ \\left(y - y_\\circ\\right) \\cos\\theta.\n\\end{alignat}"
},
{
"math_id": 205,
"text": "(x_c, y_c)"
},
{
"math_id": 206,
"text": "\\begin{align}\nx_c &= -\\frac{1}{D} \\, \\begin{vmatrix} B_x & A_{xy} \\\\ B_y & A_{yy} \\end{vmatrix} \\,, \\\\[1ex]\ny_c &= -\\frac{1}{D} \\, \\begin{vmatrix} A_{xx} & B_x \\\\ A_{xy} & B_y \\end{vmatrix} \\,.\n\\end{align}"
},
{
"math_id": 207,
"text": "\\xi = x - x_c"
},
{
"math_id": 208,
"text": "\\eta = y - y_c,"
},
{
"math_id": 209,
"text": "\nA_{xx} \\xi^2 + 2A_{xy} \\xi\\eta + A_{yy} \\eta^2 + \\frac \\Delta D = 0.\n"
},
{
"math_id": 210,
"text": "\\varphi"
},
{
"math_id": 211,
"text": "x"
},
{
"math_id": 212,
"text": "\\tan (2\\varphi) = \\frac{2A_{xy}}{A_{xx} - A_{yy}}."
},
{
"math_id": 213,
"text": "\\frac{x^2}{a^2} - \\frac{y^2}{b^2} = 1."
},
{
"math_id": 214,
"text": "b"
},
{
"math_id": 215,
"text": "\\begin{align}\na^2 &= -\\frac{\\Delta}{\\lambda_1 D} = -\\frac{\\Delta}{\\lambda_1^2 \\lambda_2}, \\\\[1ex]\nb^2 &= -\\frac{\\Delta}{\\lambda_2 D} = -\\frac{\\Delta}{\\lambda_1 \\lambda_2^2},\n\\end{align}"
},
{
"math_id": 216,
"text": "\\lambda_1"
},
{
"math_id": 217,
"text": "\\lambda_2"
},
{
"math_id": 218,
"text": "\\lambda^2 - \\left( A_{xx} + A_{yy} \\right)\\lambda + D = 0."
},
{
"math_id": 219,
"text": "\\frac{x^2}{a^2} - \\frac{y^2}{b^2} = 0."
},
{
"math_id": 220,
"text": "(x_0, y_0)"
},
{
"math_id": 221,
"text": "E x + F y + G = 0"
},
{
"math_id": 222,
"text": "E,"
},
{
"math_id": 223,
"text": "F,"
},
{
"math_id": 224,
"text": "G"
},
{
"math_id": 225,
"text": "\\begin{align}\nE &= A_{xx} x_0 + A_{xy} y_0 + B_x, \\\\[1ex]\nF &= A_{xy} x_0 + A_{yy} y_0 + B_y, \\\\[1ex]\nG &= B_x x_0 + B_y y_0 + C.\n\\end{align}"
},
{
"math_id": 226,
"text": "F(x - x_0) - E(y - y_0) = 0."
},
{
"math_id": 227,
"text": "(x_0, y_0)."
},
{
"math_id": 228,
"text": "\\frac{x^2}{a^2} - \\frac{y^2}{b^2} = 1, \\qquad 0 < b \\leq a,"
},
{
"math_id": 229,
"text": "(-ae,0)"
},
{
"math_id": 230,
"text": "(ae,0), "
},
{
"math_id": 231,
"text": "r_1"
},
{
"math_id": 232,
"text": "r_2."
},
{
"math_id": 233,
"text": " r_1 - r_2 = 2 a, "
},
{
"math_id": 234,
"text": " r_2 - r_1 = 2 a. "
},
{
"math_id": 235,
"text": "\nr_1^2\n= (x+a e)^2 + y^2\n= x^2 + 2 x a e + a^2 e^2 + \\left(x^2-a^2\\right) \\left(e^2-1\\right)\n= (e x + a)^2.\n"
},
{
"math_id": 236,
"text": "\nr_2^2\n= (x-a e)^2 + y^2\n= x^2 - 2 x a e + a^2 e^2 + \\left(x^2-a^2\\right) \\left(e^2-1\\right)\n= (e x - a)^2.\n"
},
{
"math_id": 237,
"text": "ex > a"
},
{
"math_id": 238,
"text": "\\begin{align}\nr_1 &= e x + a, \\\\\nr_2 &= e x - a.\n\\end{align}"
},
{
"math_id": 239,
"text": "r_1 - r_2 = 2a."
},
{
"math_id": 240,
"text": "ex < -a"
},
{
"math_id": 241,
"text": "\\begin{align}\nr_1 &= - e x - a, \\\\\nr_2 &= - e x + a.\n\\end{align}"
},
{
"math_id": 242,
"text": "r_2 - r_1 = 2a."
},
{
"math_id": 243,
"text": "F_1=(c,0),\\ F_2=(-c,0)"
},
{
"math_id": 244,
"text": "V_1=(a, 0),\\ V_2=(-a,0)"
},
{
"math_id": 245,
"text": "(x,y)"
},
{
"math_id": 246,
"text": "(c,0)"
},
{
"math_id": 247,
"text": "\\sqrt{ (x-c)^2 + y^2 }"
},
{
"math_id": 248,
"text": "\\sqrt{ (x+c)^2 + y^2 }"
},
{
"math_id": 249,
"text": "\\sqrt{(x-c)^2 + y^2} - \\sqrt{(x+c)^2 + y^2} = \\pm 2a \\ ."
},
{
"math_id": 250,
"text": "b^2 = c^2-a^2"
},
{
"math_id": 251,
"text": "\\frac{x^2}{a^2} - \\frac{y^2}{b^2} = 1 \\ ."
},
{
"math_id": 252,
"text": "(a,0),\\; (-a,0)"
},
{
"math_id": 253,
"text": "(0,b),\\; (0,-b)"
},
{
"math_id": 254,
"text": "e=\\sqrt{1+\\frac{b^2}{a^2}}."
},
{
"math_id": 255,
"text": "y"
},
{
"math_id": 256,
"text": "y=\\pm\\frac{b}{a} \\sqrt{x^2-a^2}."
},
{
"math_id": 257,
"text": "y=\\pm \\frac{b}{a}x "
},
{
"math_id": 258,
"text": "|x|"
},
{
"math_id": 259,
"text": "\\tfrac{x^2}{a^2}-\\tfrac{y^2}{b^2}= 1 \\ ."
},
{
"math_id": 260,
"text": "{\\color{blue}{(1)}}"
},
{
"math_id": 261,
"text": "\\tfrac{bx\\pm ay}{\\sqrt{a^2+b^2}}=0 "
},
{
"math_id": 262,
"text": "{\\color{magenta}{(2)}}"
},
{
"math_id": 263,
"text": "\\tfrac{a^2b^2}{a^2+b^2}\\ , "
},
{
"math_id": 264,
"text": "\\left( \\tfrac{b}{e}\\right) ^2."
},
{
"math_id": 265,
"text": "y=\\pm\\frac{b}{a}\\sqrt{x^2-a^2}"
},
{
"math_id": 266,
"text": "{\\color{green}{(3)}}"
},
{
"math_id": 267,
"text": "b^2/a^2\\ ."
},
{
"math_id": 268,
"text": "{\\color{red}{(4)}}"
},
{
"math_id": 269,
"text": "\\tfrac{a^2+b^2}{4}."
},
{
"math_id": 270,
"text": "p"
},
{
"math_id": 271,
"text": "p = \\frac{b^2}a."
},
{
"math_id": 272,
"text": "(x_0,y_0)"
},
{
"math_id": 273,
"text": "\\tfrac{x^2}{a^2}-\\tfrac{y^2}{b^2}= 1"
},
{
"math_id": 274,
"text": "\\frac{2x}{a^2}-\\frac{2yy'}{b^2}= 0 \\ \\Rightarrow \\ y'=\\frac{x}{y}\\frac{b^2}{a^2}\\ \\Rightarrow \\ y=\\frac{x_0}{y_0}\\frac{b^2}{a^2}(x-x_0) +y_0."
},
{
"math_id": 275,
"text": "\\tfrac{x_0^2}{a^2}-\\tfrac{y_0^2}{b^2}= 1"
},
{
"math_id": 276,
"text": "\\frac{x_0}{a^2}x-\\frac{y_0}{b^2}y = 1."
},
{
"math_id": 277,
"text": "a = b"
},
{
"math_id": 278,
"text": "c=\\sqrt{2}a"
},
{
"math_id": 279,
"text": "e=\\sqrt{2}"
},
{
"math_id": 280,
"text": "p=a"
},
{
"math_id": 281,
"text": "\\cosh,\\sinh"
},
{
"math_id": 282,
"text": "(\\pm a \\cosh t, b \\sinh t),\\, t \\in \\R \\ ,"
},
{
"math_id": 283,
"text": "\\cosh^2 t -\\sinh^2 t =1 ."
},
{
"math_id": 284,
"text": "\\frac{x^2}{a^2}"
},
{
"math_id": 285,
"text": "\\frac{y^2}{b^2}"
},
{
"math_id": 286,
"text": "\\frac{y^2}{b^2}-\\frac{x^2}{a^2}= 1 \\ ,"
},
{
"math_id": 287,
"text": "\\frac{x^2}{a^2}-\\frac{y^2}{b^2}= -1 \\ ."
},
{
"math_id": 288,
"text": "r = \\frac{p}{1 \\mp e \\cos \\varphi}, \\quad p = \\frac{b^2}{a}"
},
{
"math_id": 289,
"text": "-\\arccos \\left(-\\frac 1 e\\right) < \\varphi < \\arccos \\left(-\\frac 1 e\\right). "
},
{
"math_id": 290,
"text": "r =\\frac{b}{\\sqrt{e^2 \\cos^2 \\varphi -1}} .\\,"
},
{
"math_id": 291,
"text": " \\varphi "
},
{
"math_id": 292,
"text": "-\\arccos \\left(\\frac 1 e\\right) < \\varphi < \\arccos \\left(\\frac 1 e\\right)."
},
{
"math_id": 293,
"text": "\\sec\\varphi_\\text{max}"
},
{
"math_id": 294,
"text": "\\varphi_\\text{max}"
},
{
"math_id": 295,
"text": "e^2 \\cos^2 \\varphi_\\text{max} - 1 = 0"
},
{
"math_id": 296,
"text": "1 \\pm e \\cos \\varphi_\\text{max} = 0"
},
{
"math_id": 297,
"text": "\\implies e = \\sec\\varphi_\\text{max}"
},
{
"math_id": 298,
"text": "\\tfrac{x^2}{a^2} - \\tfrac{y^2}{b^2} = 1"
},
{
"math_id": 299,
"text": "\n \\begin{cases}\n x = \\pm a \\cosh t, \\\\\n y = b \\sinh t,\n \\end{cases} \\qquad t \\in \\R.\n"
},
{
"math_id": 300,
"text": "\n \\begin{cases}\n x = \\pm a \\dfrac{t^2 + 1}{2t}, \\\\[1ex]\n y = b \\dfrac{t^2 - 1}{2t},\n \\end{cases} \\qquad t > 0"
},
{
"math_id": 301,
"text": "\n \\begin{cases}\n x = \\frac{a}{\\cos t} = a \\sec t, \\\\\n y = \\pm b \\tan t,\n \\end{cases} \\qquad 0 \\le t < 2\\pi,\\ t \\ne \\frac{\\pi}{2},\\ t \\ne \\frac{3}{2} \\pi."
},
{
"math_id": 302,
"text": "m"
},
{
"math_id": 303,
"text": "b^2"
},
{
"math_id": 304,
"text": "-b^2"
},
{
"math_id": 305,
"text": "\\vec c_\\pm(m) = \\left(-\\frac{ma^2}{\\pm\\sqrt{m^2a^2 - b^2}}, \\frac{-b^2}{\\pm\\sqrt{m^2a^2 - b^2}}\\right),\\quad |m| > b/a."
},
{
"math_id": 306,
"text": "\\vec c_-"
},
{
"math_id": 307,
"text": "\\vec c_+"
},
{
"math_id": 308,
"text": "(\\pm a, 0)"
},
{
"math_id": 309,
"text": "\\vec c_\\pm(m)"
},
{
"math_id": 310,
"text": "y = m x \\pm\\sqrt{m^2a^2 - b^2}."
},
{
"math_id": 311,
"text": "(x,y) = (\\cosh a,\\sinh a) = (x, \\sqrt{x^2-1})"
},
{
"math_id": 312,
"text": "(1,0)"
},
{
"math_id": 313,
"text": "\\begin{align}\n\\frac{a}{2} &= \\frac{xy}{2} - \\int_1^x \\sqrt{t^{2}-1} \\, dt \\\\[1ex]\n &= \\frac{1}{2} \\left(x\\sqrt{x^2-1}\\right) - \\frac{1}{2} \\left(x\\sqrt{x^2-1} - \\ln \\left(x+\\sqrt{x^2-1}\\right)\\right),\n\\end{align}"
},
{
"math_id": 314,
"text": "a=\\operatorname{arcosh}x=\\ln \\left(x+\\sqrt{x^2-1}\\right)."
},
{
"math_id": 315,
"text": "x=\\cosh a=\\frac{e^a+e^{-a}}{2}."
},
{
"math_id": 316,
"text": "y=\\sinh a=\\sqrt{\\cosh^2 a - 1}=\\frac{e^a-e^{-a}}{2},"
},
{
"math_id": 317,
"text": "a=\\operatorname{arsinh}y=\\ln \\left(y+\\sqrt{y^2+1}\\right)."
},
{
"math_id": 318,
"text": "\\operatorname{tanh}a=\\frac{\\sinh a}{\\cosh a}=\\frac{e^{2a}-1}{e^{2a}+1}."
},
{
"math_id": 319,
"text": "\\overline{PF_1}, \\overline{PF_2}."
},
{
"math_id": 320,
"text": "L"
},
{
"math_id": 321,
"text": "w"
},
{
"math_id": 322,
"text": "\\overline{PF_1}, \\overline{PF_2}"
},
{
"math_id": 323,
"text": "Q"
},
{
"math_id": 324,
"text": "|QF_2|<|LF_2|+|QL|=2a+|QF_1|"
},
{
"math_id": 325,
"text": "|QF_2|-|QF_1|<2a"
},
{
"math_id": 326,
"text": "P=\\left(x_1,\\tfrac {1 }{x_1}\\right), \\ Q=\\left(x_2,\\tfrac {1 }{x_2}\\right)"
},
{
"math_id": 327,
"text": "M=\\left(\\tfrac{x_1+x_2}{2},\\cdots\\right)=\\cdots =\\tfrac{x_1+x_2}{2}\\; \\left(1,\\tfrac{1}{x_1x_2}\\right) \\ ;"
},
{
"math_id": 328,
"text": "\\frac{\\tfrac {1 }{x_2}-\\tfrac {1 }{x_1}}{x_2-x_1}=\\cdots =-\\tfrac{1}{x_1x_2} \\ ."
},
{
"math_id": 329,
"text": "y=\\tfrac{1}{x_1x_2} \\; x \\ ."
},
{
"math_id": 330,
"text": "P,Q"
},
{
"math_id": 331,
"text": "P Q"
},
{
"math_id": 332,
"text": "\\overline P \\, \\overline Q"
},
{
"math_id": 333,
"text": "|P\\overline P|=|Q\\overline Q|"
},
{
"math_id": 334,
"text": "\\frac{x^2}{a^2}-\\frac{y^2}{b^2}=1, \\, a>b"
},
{
"math_id": 335,
"text": "x^2+y^2=a^2-b^2"
},
{
"math_id": 336,
"text": "a\\le b"
},
{
"math_id": 337,
"text": "P_0=(x_0,y_0)"
},
{
"math_id": 338,
"text": "\\tfrac{x_0x}{a^2}-\\tfrac{y_0y}{b^2}=1."
},
{
"math_id": 339,
"text": "P_0=(x_0,y_0)\\ne(0,0)"
},
{
"math_id": 340,
"text": "\\frac{x_0x}{a^2}-\\frac{y_0y}{b^2}=1 "
},
{
"math_id": 341,
"text": "y=mx+d,\\ d\\ne 0"
},
{
"math_id": 342,
"text": "\\left(-\\frac{ma^2}{d},-\\frac{b^2}{d}\\right)"
},
{
"math_id": 343,
"text": "x=c,\\ c\\ne 0"
},
{
"math_id": 344,
"text": "\\left(\\frac{a^2}{c},0\\right)\\ ."
},
{
"math_id": 345,
"text": "P_1,\\ p_1"
},
{
"math_id": 346,
"text": "P_2,\\ p_2,\\ P_3,\\ p_3"
},
{
"math_id": 347,
"text": "P_4,\\ p_4"
},
{
"math_id": 348,
"text": "p_2,p_3"
},
{
"math_id": 349,
"text": "P_2,P_3"
},
{
"math_id": 350,
"text": "(c,0),"
},
{
"math_id": 351,
"text": " (-c,0)"
},
{
"math_id": 352,
"text": "x=\\tfrac{a^2}{c}"
},
{
"math_id": 353,
"text": "x=-\\tfrac{a^2}{c}"
},
{
"math_id": 354,
"text": "y = b\\sqrt{\\frac{x^{2}}{a^{2}}-1}."
},
{
"math_id": 355,
"text": "s"
},
{
"math_id": 356,
"text": "x_{1}"
},
{
"math_id": 357,
"text": "x_{2}"
},
{
"math_id": 358,
"text": "s = b\\int_{\\operatorname{arcosh}\\frac{x_{1}}{a}}^{\\operatorname{arcosh}\\frac{x_{2}}{a}} \\sqrt{1+\\left(1+\\frac{a^{2}}{b^{2}}\\right) \\sinh ^{2}v} \\, \\mathrm dv."
},
{
"math_id": 359,
"text": "z = iv"
},
{
"math_id": 360,
"text": "E"
},
{
"math_id": 361,
"text": "m = k^2"
},
{
"math_id": 362,
"text": "s = ib \\Biggr[E\\left(iv \\, \\Biggr| \\, 1 + \\frac{a^2}{b^2}\\right)\\Biggr]^{\\operatorname{arcosh}\\frac{x_1}{a}}_{\\operatorname{arcosh}\\frac{x_2}{a}}."
},
{
"math_id": 363,
"text": "s=b\\left[F\\left(\\operatorname{gd}v\\,\\Biggr|-\\frac{a^2}{b^2}\\right) - E\\left(\\operatorname{gd}v\\,\\Biggr|-\\frac{a^2}{b^2}\\right) + \\sqrt{1+\\frac{a^2}{b^2}\\tanh^2 v}\\,\\sinh v\\right]_{\\operatorname{arcosh}\\tfrac{x_1}{a}}^{\\operatorname{arcosh}\\tfrac{x_2}{a}}"
},
{
"math_id": 364,
"text": "\\operatorname{gd}v=\\arctan\\sinh v"
},
{
"math_id": 365,
"text": "\n\\left(\\frac x {c \\cos\\theta}\\right)^2 - \\left(\\frac y {c \\sin\\theta}\\right)^2 = 1\n"
},
{
"math_id": 366,
"text": "\\operatorname{sech}\\, x"
},
{
"math_id": 367,
"text": "\\ell"
}
]
| https://en.wikipedia.org/wiki?curid=14052 |
14052163 | Optical downconverter | An optical downconverter (ODC) is an example of a non-linear optical process in which two beams of light of different frequencies formula_0 and formula_1 interact, creating a microwave signal at the difference frequency formula_2. It can be viewed as a form of difference-frequency generation; in the degenerate case formula_3, both beams can be provided by a single light source. From a quantum mechanical perspective, ODC can be seen as the result of taking the difference of two photons to produce a microwave photon. Since the energy of a photon is given by
formula_4
the frequency relation formula_2 is simply a statement that energy is conserved.
In a common ODC application, light from a tunable infrared laser is combined with light from a fixed-frequency visible laser to produce a microwave signal through a wave-mixing process.
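A back-of-the-envelope Python calculation with hypothetical values: two optical frequencies spaced closely enough that their difference falls in the microwave range.
c = 299_792_458.0                    # speed of light, m/s
f_fixed = c / 1550.00e-9             # fixed laser near 1550 nm, ~193.4 THz
f_tuned = c / 1550.24e-9             # tunable laser offset by 0.24 nm
print((f_fixed - f_tuned) / 1e9, "GHz")   # ~30 GHz difference frequency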
A typical ODC uses a millimetric microwave cavity that includes a photonic crystal and is driven by the two light sources of different frequencies. The generated microwave signal is detected by the cavity antenna.
{
"math_id": 0,
"text": "\\omega_1"
},
{
"math_id": 1,
"text": "\\omega_2"
},
{
"math_id": 2,
"text": "\\omega_3 = \\omega_1 - \\omega_2 "
},
{
"math_id": 3,
"text": "\\omega_1 = \\omega_2"
},
{
"math_id": 4,
"text": " E_\\nu = \\hbar\\omega, "
}
]
| https://en.wikipedia.org/wiki?curid=14052163 |
14053488 | Non-random two-liquid model | The non-random two-liquid model (abbreviated NRTL model) is an activity coefficient model introduced by Renon
and Prausnitz in 1968 that correlates the activity coefficients formula_0 of a compound with its mole fractions formula_1 in the liquid phase concerned. It is frequently applied in the field of chemical engineering to calculate phase equilibria. The concept of NRTL is based on the hypothesis of Wilson, who stated that the local concentration around a molecule in most mixtures is different from the bulk concentration. This difference is due to a difference between the interaction energy of the central molecule with the molecules of its own kind formula_2 and that with the molecules of the other kind formula_3. The energy difference also introduces a non-randomness at the local molecular level. The NRTL model belongs to the so-called local-composition models. Other models of this type are the Wilson model, the UNIQUAC model, and the group contribution model UNIFAC. These local-composition models are not thermodynamically consistent for a one-fluid model for a real mixture due to the assumption that the local composition around molecule "i" is independent of the local composition around molecule "j". This assumption is not true, as was shown by Flemr in 1976. However, they are consistent if a hypothetical two-liquid model is used. Models that maintain consistency between the bulk concentration and the local molecular concentrations around different types of molecules are COSMO-RS and COSMOSPACE.
Derivation.
Like Wilson (1964), Renon & Prausnitz (1968) began with local composition theory, but instead of using the Flory–Huggins volumetric expression as Wilson did, they assumed local compositions followed
formula_4
with a new "non-randomness" parameter α. The excess Gibbs free energy was then determined to be
formula_5.
Unlike Wilson's equation, this can predict partially miscible mixtures. However, the cross term, like Wohl's expansion, is more suitable for formula_6 than formula_7, and experimental data is not always sufficiently plentiful to yield three meaningful values, so later attempts to extend Wilson's equation to partial miscibility (or to extend Guggenheim's quasichemical theory for nonrandom mixtures to Wilson's different-sized molecules) eventually yielded variants like UNIQUAC.
Equations for a binary mixture.
For a binary mixture the following functions are used:
formula_8
with
formula_9
Here, formula_10 and formula_11 are the dimensionless interaction parameters, which are related to the interaction energy parameters formula_12 and formula_13 by:
formula_14
Here "R" is the gas constant and "T" the absolute temperature, and "Uij" is the energy between molecular surfaces "i" and "j". "Uii" is the energy of evaporation. Here "Uij" has to be equal to "Uji", but formula_15 is not necessarily equal to formula_16.
The parameters formula_17 and formula_18 are the so-called non-randomness parameters, for which usually formula_17 is set equal to formula_18. For a liquid in which the local distribution around the center molecule is random, the parameter formula_19. In that case the equations reduce to the one-parameter Margules activity model:
formula_20
In practice, formula_17 is set to 0.2, 0.3 or 0.48. The latter value is frequently used for aqueous systems. The high value reflects the ordered structure caused by hydrogen bonds. However, in the description of liquid–liquid equilibria the non-randomness parameter is set to 0.2 to avoid a wrong liquid–liquid description. In some cases a better phase-equilibrium description is obtained by setting formula_21. However, this mathematical solution is impossible from a physical point of view, since no system can be more random than random (formula_17 = 0). In general NRTL offers more flexibility in the description of phase equilibria than other activity models due to the extra non-randomness parameters. However, in practice this flexibility is reduced in order to avoid a wrong equilibrium description outside the range of the regressed data.
The limiting activity coefficients, also known as the activity coefficients at infinite dilution, are calculated by:
formula_22
The expressions show that at formula_19 the limiting activity coefficients are equal. This is a situation that occurs for molecules of equal size but of different polarities. It also shows, since three parameters are available, that multiple sets of solutions are possible.
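A minimal Python implementation of the binary equations above, including the limiting activity coefficients (the parameter values are hypothetical and serve only to illustrate the call):
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    # activity coefficients (gamma1, gamma2) from the binary NRTL equations
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

tau12, tau21, alpha = 0.8, 1.2, 0.3
print(nrtl_binary(0.4, tau12, tau21, alpha))

# limiting (infinite-dilution) activity coefficients
print(np.exp(tau21 + tau12 * np.exp(-alpha * tau12)),
      np.exp(tau12 + tau21 * np.exp(-alpha * tau21)))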
General equations.
The general equation for formula_23 for species formula_24 in a mixture of formula_25 components is:
formula_26
with
formula_27
formula_28
formula_29
There are several different equation forms for formula_30 and formula_31, the most general of which are shown above.
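A vectorized Python sketch of the general equation; tau and alpha are n×n parameter matrices with zero diagonals, and the two-component call reproduces the binary formulas above (parameter values are again hypothetical):
import numpy as np

def nrtl_gamma(x, tau, alpha):
    # activity coefficients gamma_i from the general NRTL equation
    G = np.exp(-alpha * tau)
    S = x @ G                        # S[j] = sum_k x_k G_kj
    C = x @ (tau * G)                # C[j] = sum_m x_m tau_mj G_mj
    ln_gamma = C / S + (G * (tau - C / S)) @ (x / S)
    return np.exp(ln_gamma)

tau = np.array([[0.0, 0.8],
                [1.2, 0.0]])
alpha = 0.3 * (1.0 - np.eye(2))      # alpha_ij = 0.3 off the diagonal
print(nrtl_gamma(np.array([0.4, 0.6]), tau, alpha))   # matches the binary sketch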
Temperature dependent parameters.
To describe phase equilibria over a large temperature regime, i.e. larger than 50 K, the interaction parameter has to be made temperature dependent.
Two formats are frequently used. The extended Antoine equation format:
formula_32
Here the logarithmic and linear terms are mainly used in the description of liquid-liquid equilibria (miscibility gap).
The other format is a second-order polynomial format:
formula_33
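A small helper for the extended Antoine format, with hypothetical coefficients for one pair ("i", "j"):
import numpy as np

def tau_extended_antoine(T, a, b, c, d):
    # tau_ij(T) = a + b/T + c*ln(T) + d*T
    return a + b / T + c * np.log(T) + d * T

print(tau_extended_antoine(np.array([300.0, 350.0]), a=0.5, b=200.0, c=0.0, d=0.0))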
Parameter determination.
The NRTL parameters are fitted to activity coefficients that have been derived from experimentally determined phase equilibrium data (vapor–liquid, liquid–liquid, solid–liquid) as well as from heats of mixing. The source of the experimental data are often factual data banks like the Dortmund Data Bank. Other options are direct experimental work and predicted activity coefficients with UNIFAC and similar models.
Noteworthy is that for the same liquid mixture several NRTL parameter sets might exist. The NRTL parameter set to use depends on the kind of phase equilibrium (i.e. solid–liquid (SL), liquid–liquid (LL), vapor–liquid (VL)). In the case of the description of vapor–liquid equilibria it is necessary to know which saturated vapor pressure of the pure components was used and whether the gas phase was treated as an ideal or a real gas. Accurate saturated vapor pressure values are important in the determination or the description of an azeotrope. The gas fugacity coefficients are mostly set to unity (ideal gas assumption), but for vapor–liquid equilibria at high pressures (i.e. > 10 bar) an equation of state is needed to calculate the gas fugacity coefficient for a real gas description.
Determination of NRTL parameters from LLE data is more complicated than parameter regression from VLE data, as it involves solving isoactivity equations, which are highly non-linear. In addition, parameters obtained from LLE may not always represent the real activity of the components due to a lack of knowledge of the components' activity values in the data regression. For this reason it is necessary to confirm the consistency of the obtained parameters in the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, Hessian matrix, etc.).
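The regression itself can be sketched in a few lines; here binary parameters are fitted at fixed formula_17 = 0.3 to made-up "experimental" activity coefficients, reusing the nrtl_binary function from the binary section above:
import numpy as np
from scipy.optimize import least_squares

x1_data = np.array([0.1, 0.3, 0.5, 0.7, 0.9])        # made-up data points
g1_data = np.array([2.10, 1.55, 1.25, 1.08, 1.01])
g2_data = np.array([1.02, 1.10, 1.28, 1.60, 2.20])

def residuals(p, alpha=0.3):
    g1, g2 = nrtl_binary(x1_data, p[0], p[1], alpha)  # from the binary sketch above
    return np.concatenate([np.log(g1 / g1_data), np.log(g2 / g2_data)])

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)                                          # regressed tau12, tau21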
Parameters for NRTL model.
NRTL binary interaction parameters have been published in the Dechema data series and are provided by NIST and DDBST. There also exist machine-learning approaches that are able to predict NRTL parameters by using the SMILES notation for molecules as input.
Literature.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma_i"
},
{
"math_id": 1,
"text": "x_i"
},
{
"math_id": 2,
"text": "U_{ii}"
},
{
"math_id": 3,
"text": "U_{ij}"
},
{
"math_id": 4,
"text": "\\frac{x_{21}}{x_{11}} = \\frac{x_2}{x_1} \\frac{\\exp(-\\alpha_{21} g_{21}/RT)}{\\exp(-\\alpha_{11} g_{11}/RT)}"
},
{
"math_id": 5,
"text": "\\frac{G^{ex}}{RT} = \\sum_i^N x_i \\frac{\\sum_j^N \\tau_{ji} G_{ji} x_j}{\\sum_k^N G_{ki} x_k}"
},
{
"math_id": 6,
"text": "H^\\text{ex}"
},
{
"math_id": 7,
"text": "G^\\text{ex}"
},
{
"math_id": 8,
"text": "\n\\left\\{\\begin{matrix} \\ln\\ \\gamma_1=x^2_2\\left[\\tau_{21}\\left(\\frac{G_{21}}{x_1+x_2 G_{21}}\\right)^2 +\\frac{\\tau_{12} G_{12}} {(x_2+x_1 G_{12})^2 }\\right] \\\\\n\\\\ \\ln\\ \\gamma_2=x^2_1\\left[\\tau_{12}\\left(\\frac{G_{12}}{x_2+x_1 G_{12}}\\right)^2 +\\frac{\\tau_{21} G_{21}} {(x_1+x_2 G_{21})^2 }\\right]\n\\end{matrix}\\right."
},
{
"math_id": 9,
"text": "\n\\left\\{\\begin{matrix} \\ln\\ G_{12}=-\\alpha_{12}\\ \\tau_{12}\n\\\\ \\ln\\ G_{21}=-\\alpha_{21}\\ \\tau_{21}\n\\end{matrix}\\right."
},
{
"math_id": 10,
"text": "\\tau_{12}"
},
{
"math_id": 11,
"text": "\\tau_{21}"
},
{
"math_id": 12,
"text": "\\Delta g_{12}"
},
{
"math_id": 13,
"text": "\\Delta g_{21}"
},
{
"math_id": 14,
"text": "\n\\left\\{\\begin{matrix} \\tau_{12}=\\frac{\\Delta g_{12}}{RT}=\\frac{U_{12}-U_{22}}{RT}\n\\\\ \\tau_{21}=\\frac{\\Delta g_{21}}{RT}=\\frac{U_{21}-U_{11}}{RT}\n\\end{matrix}\\right."
},
{
"math_id": 15,
"text": " \\Delta g_{ij} "
},
{
"math_id": 16,
"text": " \\Delta g_{ji} "
},
{
"math_id": 17,
"text": "\\alpha_{12}"
},
{
"math_id": 18,
"text": "\\alpha_{21}"
},
{
"math_id": 19,
"text": "\\alpha_{12}=0"
},
{
"math_id": 20,
"text": "\n\\left\\{\\begin{matrix} \\ln\\ \\gamma_1=x^2_2\\left[\\tau_{21} +\\tau_{12} \\right]=Ax^2_2\n\\\\ \\ln\\ \\gamma_2=x^2_1\\left[\\tau_{12}+\\tau_{21} \\right]=Ax^2_1\n\\end{matrix}\\right."
},
{
"math_id": 21,
"text": "\\alpha_{12}=-1"
},
{
"math_id": 22,
"text": "\n\\left\\{\\begin{matrix} \\ln\\ \\gamma_1^\\infty=\\left[\\tau_{21} +\\tau_{12} \\exp{(-\\alpha_{12}\\ \\tau_{12})} \\right]\n\\\\ \\ln\\ \\gamma_2^\\infty=\\left[\\tau_{12} +\\tau_{21}\\exp{(-\\alpha_{12}\\ \\tau_{21})}\\right]\n\\end{matrix}\\right."
},
{
"math_id": 23,
"text": "\\ln(\\gamma_i)"
},
{
"math_id": 24,
"text": "i"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "\n\\ln(\\gamma_i)=\\frac{\\displaystyle\\sum_{j=1}^{n}{x_{j}\\tau_{ji}G_{ji}}}{\\displaystyle\\sum_{k=1}^{n}{x_{k}G_{ki}}}+\\sum_{j=1}^{n}{\\frac{x_{j}G_{ij}}{\\displaystyle\\sum_{k=1}^{n}{x_{k}G_{kj}}}}{\\left ({\\tau_{ij}-\\frac{\\displaystyle\\sum_{m=1}^{n}{x_{m}\\tau_{mj}G_{mj}}}{\\displaystyle\\sum_{k=1}^{n}{x_{k}G_{kj}}}}\\right )}\n"
},
{
"math_id": 27,
"text": "\nG_{ij}=\\exp\\left ({-\\alpha_{ij}\\tau_{ij}}\\right )"
},
{
"math_id": 28,
"text": "\n\\alpha_{ij}=\\alpha_{ij_{0}}+\\alpha_{ij_{1}}T"
},
{
"math_id": 29,
"text": "\n\\tau_{i,j}=A_{ij}+\\frac{B_{ij}}{T}+\\frac{C_{ij}}{T^{2}}+D_{ij}\\ln{\\left ({T}\\right )}+E_{ij}T^{F_{ij}}"
},
{
"math_id": 30,
"text": "\\alpha_{ij}"
},
{
"math_id": 31,
"text": "\\tau_{ij}"
},
{
"math_id": 32,
"text": "\\tau_{ij}=f(T)=a_{ij}+\\frac{b_{ij}}{T}+c_{ij}\\ \\ln\\ T+d_{ij}T"
},
{
"math_id": 33,
"text": "\\Delta g_{ij}=f(T)=a_{ij}+b_{ij}\\cdot T +c_{ij}T^{2}"
}
]
| https://en.wikipedia.org/wiki?curid=14053488 |
140583 | Vienna Development Method | Formal method for the development of computer-based systems
The Vienna Development Method (VDM) is one of the longest-established formal methods for the development of computer-based systems. Originating in work done at the IBM Laboratory Vienna in the 1970s, it has grown to include a group of techniques and tools based on a formal specification language—the VDM Specification Language (VDM-SL). It has an extended form, VDM++, which supports the modeling of object-oriented and concurrent systems. Support for VDM includes commercial and academic tools for analyzing models, including support for testing and proving properties of models and generating program code from validated VDM models. There is a history of industrial usage of VDM and its tools and a growing body of research in the formalism has led to notable contributions to the engineering of critical systems, compilers, concurrent systems and in logic for computer science.
Philosophy.
Computing systems may be modeled in VDM-SL at a higher level of abstraction than is achievable using programming languages, allowing the analysis of designs and identification of key features, including defects, at an early stage of system development. Models that have been validated can be transformed into detailed system designs through a refinement process. The language has a formal semantics, enabling proof of the properties of models to a high level of assurance. It also has an executable subset, so that models may be analyzed by testing and can be executed through graphical user interfaces, so that models can be evaluated by experts who are not necessarily familiar with the modeling language itself.
History.
The origins of VDM-SL lie in the IBM Laboratory in Vienna, where the first version of the language was called the Vienna Definition Language (VDL). The VDL was essentially used for giving operational semantics descriptions, in contrast to the VDM's Meta-IV, which provided denotational semantics:
«Towards the end of 1972 the Vienna group again turned their attention to the problem of systematically developing a compiler from a language definition. The overall approach adopted has been termed the "Vienna Development Method"... The meta-language actually adopted ("Meta-IV") is used to define major portions of PL/1 (as given in ECMA 74 – interestingly a "formal standards document written as an abstract interpreter") in BEKIČ 74.»
There is no connection between Meta-IV, and Schorre's META II language, or its successor Tree Meta; these were compiler-compiler systems rather than being suitable for formal problem descriptions.
So Meta-IV was "used to define major portions of" the PL/I programming language. Other programming languages retrospectively described, or partially described, using Meta-IV and VDM-SL include the BASIC programming language, FORTRAN, the APL programming language, ALGOL 60, the Ada programming language and the Pascal programming language. Meta-IV evolved into several variants, generally described as the Danish, English and Irish Schools.
The "English School" derived from work by Cliff Jones on the aspects of VDM not specifically related to language definition and compiler design (Jones 1980, 1990). It stresses modelling persistent state through the use of data types constructed from a rich collection of base types. Functionality is typically described through operations which may have side-effects on the state and which are mostly specified implicitly using a precondition and postcondition. The "Danish School" (Bjørner "et al." 1982) has tended to stress a constructive approach with explicit operational specification used to a greater extent. Work in the Danish school led to the first European validated Ada compiler.
An ISO Standard for the language was released in 1996 (ISO, 1996).
VDM features.
The VDM-SL and VDM++ syntax and semantics are described at length in the VDMTools language manuals and in the available texts. The ISO Standard contains a formal definition of the language's semantics. In the remainder of this article, the ISO-defined interchange (ASCII) syntax is used. Some texts prefer a more concise mathematical syntax.
A VDM-SL model is a system description given in terms of the functionality performed on data. It consists of a series of definitions of data types and functions or operations performed upon them.
Basic types: numeric, character, token and quote types.
VDM-SL includes basic types modelling numbers and characters as follows: bool (the Boolean values true and false), nat (the natural numbers, including zero), nat1 (the natural numbers, excluding zero), int (the integers), rat (the rational numbers), real (the real numbers) and char (single characters).
Data types are defined to represent the main data of the modelled system. Each type definition introduces a new type name and gives a representation in terms of the basic types or in terms of types already introduced. For example, a type modelling user identifiers for a log-in management system might be defined as follows:
types
UserId = nat
For manipulating values belonging to data types, operators are defined on the values. Thus, natural number addition, subtraction etc. are provided, as are Boolean operators such as equality and inequality. The language does not fix a maximum or minimum representable number or a precision for real numbers. Such constraints are defined where they are required in each model by means of data type invariants—Boolean expressions denoting conditions that must be respected by all elements of the defined type. For example, a requirement that user identifiers must be no greater than 9999 would be expressed as follows (where codice_0 is the "less than or equal to" Boolean operator on natural numbers):
UserId = nat
inv uid == uid <= 9999
Since invariants can be arbitrarily complex logical expressions, and membership of a defined type is limited to only those values satisfying the invariant, type correctness in VDM-SL is not automatically decidable in all situations.
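As an illustration outside VDM itself, a dynamic check of such an invariant can be sketched in Python; this is roughly what a VDM interpreter does when it evaluates invariants at run time, and the constructor name here is hypothetical:

def mk_UserId(uid):
    # invariant from the VDM-SL definition: uid is a natural number and uid <= 9999
    if not (isinstance(uid, int) and 0 <= uid <= 9999):
        raise ValueError("invariant violation: UserId must be a natural number <= 9999")
    return uid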
The other basic types include char for characters. In some cases, the representation of a type is not relevant to the model's purpose and would only add complexity. In such cases, the members of the type may be represented as structureless tokens. Values of token types can only be compared for equality – no other operators are defined on them. Where specific named values are required, these are introduced as quote types. Each quote type consists of one named value of the same name as the type itself. Values of quote types (known as quote literals) may only be compared for equality.
For example, in modelling a traffic signal controller, it may be convenient to define values to represent the colours of the traffic signal as quote types:
<Red>, <Amber>, <FlashingAmber>, <Green>
Type constructors: union, product and composite types.
The basic types alone are of limited value. New, more structured data types are built using type constructors.
The most basic type constructor forms the union of two predefined types. The type codice_1 contains all elements of the type A and all of the type codice_2. In the traffic signal controller example, the type modelling the colour of a traffic signal could be defined as follows:
SignalColour = <Red> | <Amber> | <FlashingAmber> | <Green>
Enumerated types in VDM-SL are defined as shown above as unions on quote types.
Cartesian product types may also be defined in VDM-SL. The type codice_3 is the type composed of all tuples of values, the first element of which is from the type codice_4 and the second from the type codice_5 and so on. The composite or record type is a Cartesian product with labels for the fields. The type
T :: f1:A1
f2:A2
fn:An
is the Cartesian product with fields labelled codice_6. An element of type codice_7 can be composed from its constituent parts by a constructor, written codice_8. Conversely, given an element of type codice_7, the field names can be used to select the named component. For example, the type
Date :: day:nat1
month:nat1
year:nat
inv mk_Date(d,m,y) == d<=31 and m<=12
models a simple date type. The value codice_10 corresponds to 1 April 2001. Given a date codice_11, the expression codice_12 is a natural number representing the month. Restrictions on days per month and leap years could be incorporated into the invariant if desired. Combining these:
mk_Date(1,4,2001).month = 4
Collections.
Collection types model groups of values. Sets are finite unordered collections in which duplication between values is suppressed. Sequences are finite ordered collections (lists) in which duplication may occur and mappings represent finite correspondences between two sets of values.
Sets.
The set type constructor (written codice_13 where codice_7 is a predefined type) constructs the type composed of all finite sets of values drawn from the type codice_7. For example, the type definition
UGroup = set of UserId
defines a type codice_16 composed of all finite sets of codice_17 values. Various operators are defined on sets for constructing their union and intersection, and for determining proper and non-strict subset relationships, etc.
Sequences.
The finite sequence type constructor (written codice_18 where codice_7 is a predefined type) constructs the type composed of all finite lists of values drawn from the type codice_7. For example, the type definition
String = seq of char
defines a type codice_21 composed of all finite strings of characters. Various operators are defined on sequences for concatenation, selection of elements and subsequences, etc. Many of these operators are partial in the sense that they are not defined for certain applications. For example, selecting the 5th element of a sequence that contains only three elements is undefined.
The order and repetition of items in a sequence is significant, so codice_22 is not equal to codice_23, and codice_24 is not equal to codice_25.
Maps.
A finite mapping is a correspondence between two sets, the domain and range, with the domain indexing elements of the range. It is therefore similar to a finite function. The mapping type constructor in VDM-SL (written codice_26 where codice_27 and codice_28 are predefined types) constructs the type composed of all finite mappings from sets of codice_27 values to sets of codice_28 values. For example, the type definition
Birthdays = map String to Date
defines a type codice_31 which maps character strings to codice_32. Again, operators are defined on mappings for indexing into the mapping, merging mappings, overwriting, and extracting sub-mappings.
Structuring.
The main difference between the VDM-SL and VDM++ notations is the way in which structuring is dealt with. In VDM-SL there is a conventional modular extension, whereas VDM++ has a traditional object-oriented structuring mechanism with classes and inheritance.
Structuring in VDM-SL.
In the ISO standard for VDM-SL there is an informative annex that contains different structuring principles. These all follow traditional information-hiding principles with modules, covering how modules are named, how definitions are exported from and imported into modules, and how modules may be parameterised and instantiated.
Structuring in VDM++.
In VDM++, structuring is done using classes and multiple inheritance. The key concepts are the class, which groups instance variables (the state) together with the operations and functions acting on them, and inheritance, which allows a class to specialise one or more parent classes.
Modelling functionality.
Functional modelling.
In VDM-SL, functions are defined over the data types defined in a model. Support for abstraction requires that it should be possible to characterize the result that a function should compute without having to say how it should be computed. The main mechanism for doing this is the "implicit function definition" in which, instead of a formula computing a result, a logical predicate over the input and result variables, termed a "postcondition", gives the result's properties. For example, a function codice_48 for calculating a square root of a natural number might be defined as follows:
SQRT(x:nat)r:real
post r*r = x
Here the postcondition does not define a method for calculating the result codice_49 but states what properties can be assumed to hold of it. Note that this defines a function that returns a valid square root; there is no requirement that it should be the positive or negative root. The specification above would be satisfied, for example, by a function that returned the negative root of 4 but the positive root of all other valid inputs. Note that functions in VDM-SL are required to be "deterministic" so that a function satisfying the example specification above must always return the same result for the same input.
A more constrained function specification is arrived at by strengthening the postcondition. For example, the following definition constrains the function to return the positive root.
SQRT(x:nat)r:real
post r*r = x and r>=0
All function specifications may be restricted by "preconditions" which are logical predicates over the input variables only and which describe constraints that are assumed to be satisfied when the function is executed. For example, a square root calculating function that works only on positive real numbers might be specified as follows:
SQRTP(x:real)r:real
pre x >=0
post r*r = x and r>=0
The precondition and postcondition together form a "contract" that is to be satisfied by any program claiming to implement the function. The precondition records the assumptions under which the function guarantees to return a result satisfying the postcondition. If a function is called on inputs that do not satisfy its precondition, the outcome is undefined (indeed, termination is not even guaranteed).
VDM-SL also supports the definition of executable functions in the manner of a functional programming language. In an "explicit" function definition, the result is defined by means of an expression over the inputs. For example, a function that produces a list of the squares of a list of numbers might be defined as follows:
SqList: seq of nat -> seq of nat
SqList(s) == if s = [] then [] else [(hd s)**2] ^ SqList(tl s)
This recursive definition consists of a function signature giving the types of the input and result and a function body. An implicit definition of the same function might take the following form:
SqListImp(s:seq of nat)r:seq of nat
post len r = len s and
forall i in set inds s & r(i) = s(i)**2
The explicit definition is in a simple sense an implementation of the implicitly specified function. The correctness of an explicit function definition with respect to an implicit specification may be defined as follows.
Given an implicit specification:
f(p:T_p)r:T_r
pre pre-f(p)
post post-f(p, r)
and an explicit function:
f:T_p -> T_r
we say it satisfies the specification iff:
forall p in set T_p & pre-f(p) => f(p):T_r and post-f(p, f(p))
So, "codice_50 is a correct implementation" should be interpreted as "codice_50 satisfies the specification".
State-based modelling.
In VDM-SL, functions do not have side-effects, such as changing the state of a persistent global variable. Since changing state is a useful ability in many programming languages, a similar concept exists in VDM-SL: instead of functions, "operations" are used to change state variables (also known as "globals").
For example, if we have a state consisting of a single variable codice_52, we could define this in VDM-SL as:
state Register of
someStateRegister : nat
end
In VDM++ this would instead be defined as:
instance variables
someStateRegister : nat
An operation to load a value into this variable might be specified as:
LOAD(i:nat)
ext wr someStateRegister:nat
post someStateRegister = i
The "externals" clause (codice_53) specifies which parts of the state can be accessed by the operation; codice_54 indicating read-only access and codice_55 being read/write access.
Sometimes it is important to refer to the value of a state before it was modified; for example, an operation to add a value to the variable may be specified as:
ADD(i:nat)
ext wr someStateRegister : nat
post someStateRegister = someStateRegister~ + i
where the codice_56 symbol on the state variable in the postcondition indicates the value of the state variable before execution of the operation.
Examples.
The "max" function.
This is an example of an implicit function definition. The function returns the largest element from a set of positive integers:
max(s:set of nat)r:nat
pre card s > 0
post r in set s and
forall r' in set s & r' <= r
The postcondition characterizes the result rather than defining an algorithm for obtaining it. The precondition is needed because no function could return an r in set s when the set is empty.
Natural number multiplication.
multp(i,j:nat)r:nat
pre true
post r = i*j
Applying the proof obligation codice_57 to an explicit definition of codice_58:
multp(i,j) ==
if i=0
then 0
else if is-even(i)
then 2*multp(i/2,j)
else j+multp(i-1,j)
Then the proof obligation becomes:
forall i, j : nat & multp(i,j):nat and multp(i, j) = i*j
This can be shown correct by proving that the recursion terminates (the first argument strictly decreases at each recursive call) and then by mathematical induction on the first argument.
Queue abstract data type.
This is a classical example illustrating the use of implicit operation specification in a state-based model of a well-known data structure. The queue is modelled as a sequence composed of elements of a type codice_59. The representation of codice_59 is immaterial and so it is defined as a token type.
types
Qelt = token;
Queue = seq of Qelt;
state TheQueue of
q : Queue
end
operations
ENQUEUE(e:Qelt)
ext wr q:Queue
post q = q~ ^ [e];
DEQUEUE()e:Qelt
ext wr q:Queue
pre q <> []
post q~ = [e]^q;
IS-EMPTY()r:bool
ext rd q:Queue
post r <=> (len q = 0)
Bank system example.
As a very simple example of a VDM-SL model, consider a system for maintaining details of customer bank account. Customers are modelled by customer numbers ("CustNum"), accounts are modelled by account numbers ("AccNum"). The representations of customer numbers are held to be immaterial and so are modelled by a token type. Balances and overdrafts are modelled by numeric types.
AccNum = token;
CustNum = token;
Balance = int;
Overdraft = nat;
AccData :: owner : CustNum
balance : Balance
state Bank of
accountMap : map AccNum to AccData
overdraftMap : map CustNum to Overdraft
inv mk_Bank(accountMap,overdraftMap) == forall a in set rng accountMap & a.owner in set dom overdraftMap and
a.balance >= -overdraftMap(a.owner)
With operations:
"NEWC" allocates a new customer number:
operations
NEWC(od : Overdraft)r : CustNum
ext wr overdraftMap : map CustNum to Overdraft
post r not in set dom overdraftMap~ and overdraftMap = overdraftMap~ ++ { r |-> od};
"NEWAC" allocates a new account number and sets the balance to zero:
NEWAC(cu : CustNum)r : AccNum
ext wr accountMap : map AccNum to AccData
rd overdraftMap : map CustNum to Overdraft
pre cu in set dom overdraftMap
post r not in set dom accountMap~ and accountMap = accountMap~ ++ {r |-> mk_AccData(cu,0)};
"ACINF" returns all the balances of all the accounts of a customer, as a map of account number to balance:
ACINF(cu : CustNum)r : map AccNum to Balance
ext rd accountMap : map AccNum to AccData
post r = {a |-> accountMap(a).balance | a in set dom accountMap & accountMap(a).owner = cu}
Tool support.
A number of different tools support VDM, including the commercial VDMTools and the open-source Overture platform.
Industrial experience.
VDM has been applied widely in a variety of industrial application domains.
Refinement.
Use of VDM starts with a very abstract model and develops this into an implementation. Each step involves "data reification", then "operation decomposition".
Data reification develops the abstract data types into more concrete data structures, while operation decomposition develops the (abstract) implicit specifications of operations and functions into algorithms that can be directly implemented in a computer language of choice.
formula_0
Data reification.
Data reification (stepwise refinement) involves finding a more concrete representation of the abstract data types used in a specification. There may be several steps before an implementation is reached. Each reification step for an abstract data representation codice_61 involves proposing a new representation codice_62. In order to show that the new representation is accurate, a "retrieve function" is defined that relates codice_62 to codice_61, i.e. codice_65. The correctness of a data reification depends on proving "adequacy", i.e.
forall a:ABS_REP & exists r:NEW_REP & a = retr(r)
Since the data representation has changed, it is necessary to update the operations and functions so that they operate on codice_62. The new operations and functions should be shown to preserve any data type invariants on the new representation. In order to prove that the new operations and functions model those found in the original specification, it is necessary to discharge two proof obligations:
forall r: NEW_REP & pre-OPA(retr(r)) => pre-OPR(r)
forall ~r,r:NEW_REP & pre-OPA(retr(~r)) and post-OPR(~r,r) => post-OPA(retr(~r), retr(r))
Example data reification.
In a business security system, workers are given ID cards; these are fed into card readers on entry to and exit from the factory.
The operations required are: initialising the system with nobody present, recording that a worker has entered or exited the factory, and checking whether a given worker is currently present.
Formally, this would be:
types
Person = token;
Workers = set of Person;
state AWCCS of
pres: Workers
end
operations
INIT()
ext wr pres: Workers
post pres = {};
ENTER(p : Person)
ext wr pres : Workers
pre p not in set pres
post pres = pres~ union {p};
EXIT(p : Person)
ext wr pres : Workers
pre p in set pres
post pres = pres~\{p};
IS-PRESENT(p : Person) r : bool
ext rd pres : Workers
post r <=> p in set pres
As most programming languages have a concept comparable to a set (often in the form of an array), the first step from the specification is to represent the data in terms of a sequence. These sequences must not allow repetition, as we do not want the same worker to appear twice, so we must add an invariant to the new data type. In this case, ordering is not important, so codice_71 is the same as codice_72.
The Vienna Development Method is valuable for model-based systems. It is not appropriate if the system is time-based. For such cases, the calculus of communicating systems (CCS) is more useful.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{|rcl|}\n\\textbf{Specification}\n& &\n\\textbf{Implementation}\n\\\\\n\\hline\n\\text{Abstract data type} &\n\\xrightarrow\\text{Data reification} &\n\\text{Data structure}\n\\\\\n\\text{Operations} &\n\\xrightarrow[\\text{Operation decomposition}]{} &\n\n\\text{Algorithms}\n\\end{array}"
}
]
| https://en.wikipedia.org/wiki?curid=140583 |
14058565 | Surface gradient | In vector calculus, the surface gradient is a vector differential operator that is similar to the conventional gradient. The distinction is that the surface gradient takes effect along a surface.
For a surface formula_0 in a scalar field formula_1, the surface gradient is defined and notated as
formula_2
where formula_3 is a unit normal to the surface. Examining the definition shows that the surface gradient is the (conventional) gradient with the component normal to the surface removed (subtracted), hence this gradient is tangent to the surface. In other words, the surface gradient is the orthographic projection of the gradient onto the surface.
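As a numerical sketch of the definition (the field gradient and normal below are assumed example values, not taken from the text), the projection can be computed directly with NumPy:

import numpy as np

grad_u = np.array([3.0, 1.0, 2.0])    # example value of the gradient of u at a surface point
n = np.array([0.0, 0.0, 1.0])         # example unit normal at that point

surface_grad = grad_u - n * np.dot(n, grad_u)
print(surface_grad)                   # [3. 1. 0.] -- the normal component has been removed
print(np.dot(surface_grad, n))        # 0.0: the result is tangent to the surface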
The surface gradient arises whenever the gradient of a quantity over a surface is important. In the study of capillary surfaces, for example, the gradient of a spatially varying surface tension does not make much sense; the surface gradient, however, does, and it serves certain purposes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "\\nabla_S u = \\nabla u - \\mathbf{\\hat n} (\\mathbf{\\hat n} \\cdot \\nabla u)"
},
{
"math_id": 3,
"text": "\\mathbf{\\hat n}"
}
]
| https://en.wikipedia.org/wiki?curid=14058565 |
1405861 | Representation theory of the Galilean group | Representation theory of the symmetries of non-relativistic quantum space
In nonrelativistic quantum mechanics, an account can be given of the existence of mass and spin (normally explained in Wigner's classification of relativistic mechanics) in terms of the representation theory of the Galilean group, which is the spacetime symmetry group of nonrelativistic quantum mechanics.
Background.
In 3 + 1 dimensions, this is the subgroup of the affine group on ("t, x, y, z"), whose linear part leaves invariant both the metric ("gμν" = diag(1, 0, 0, 0)) and the (independent) dual metric ("gμν" = diag(0, 1, 1, 1)). A similar definition applies for "n" + 1 dimensions.
We are interested in projective representations of this group, which are equivalent to unitary representations of the nontrivial central extension of the universal covering group of the Galilean group by the one-dimensional Lie group R, cf. the article Galilean group for the central extension of its Lie algebra. The method of induced representations will be used to survey these.
Lie algebra.
We focus on the (centrally extended, Bargmann) Lie algebra here, because it is simpler to analyze and we can always extend the results to the full Lie group through the Frobenius theorem.
formula_0
formula_1
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
formula_8
E is the generator of time translations (Hamiltonian), "Pi" is the generator of translations (momentum operator), "Ci" is the generator of Galilean boosts, and "Lij" stands for a generator of rotations (angular momentum operator).
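A representative commutation relation can be checked in the standard one-dimensional Schrödinger realization, where the momentum generator acts as P = −iħ d/dx and the boost at t = 0 acts as multiplication by mx (this concrete realization is an assumption of the sketch, not part of the abstract algebra above). The following SymPy fragment recovers [C, P] = iħm, the one-dimensional form of the last bracket:

import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
f = sp.Function('f')(x)

P = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum generator
C = lambda g: m * x * g                      # Galilean boost generator at t = 0

print(sp.simplify(C(P(f)) - P(C(f))))        # I*hbar*m*f(x), i.e. [C, P] = i*hbar*m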
Casimir invariants.
The central charge M is a Casimir invariant.
The mass-shell invariant
formula_9
is an additional Casimir invariant.
In 3 + 1 dimensions, a third Casimir invariant is "W"2, where
formula_10
somewhat analogous to the Pauli–Lubanski pseudovector of relativistic mechanics.
More generally, in "n" + 1 dimensions, invariants will be a function of
formula_11
and
formula_12
as well as of the above mass-shell invariant and central charge.
Schur's lemma.
Using Schur's lemma, in an irreducible unitary representation, all these Casimir invariants are multiples of the identity. Call these coefficients "m", "mE"0 and (in the case of 3 + 1 dimensions) "w", respectively. Recalling that we are considering unitary representations here, we see that these eigenvalues have to be real numbers.
Thus, "m" > 0, "m" = 0 and "m" < 0. (The last case is similar to the first.) In 3 + 1 dimensions, when In "m" > 0, we can write, "w" = "ms" for the third invariant, where s represents the spin, or intrinsic angular momentum. More generally, in "n" + 1 dimensions, the generators L and C will be related, respectively, to the total angular momentum and center-of-mass moment by
formula_13
formula_14
formula_15
From a purely representation-theoretic point of view, one would have to study all of the representations; but, here, we are only interested in applications to quantum mechanics. There, E represents the energy, which has to be bounded below, if thermodynamic stability is required. Consider first the case where m is nonzero.
Considering the (E, "P") space with the constraint
formula_16
we see that the Galilean boosts act transitively on this hypersurface. In fact, treating the energy E as the Hamiltonian, differentiating with respect to P, and applying Hamilton's equations, we obtain the mass-velocity relation "m" "v" = "P".
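That differentiation can be reproduced symbolically; here the mass-shell constraint is rearranged (for m ≠ 0) into E = E0 + P²/(2m) and treated as the Hamiltonian:

import sympy as sp

P, m, E0 = sp.symbols('P m E_0', positive=True)
E = E0 + P**2 / (2 * m)      # energy on the mass shell
v = sp.diff(E, P)            # Hamilton's equation: v = dE/dP
print(v)                     # P/m, i.e. the mass-velocity relation P = m*v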
The hypersurface is parametrized by the velocity "v". Consider the stabilizer of a point on the orbit, ("E"0, 0), where the velocity is 0. Because of transitivity, we know the unitary irrep contains a nontrivial linear subspace with these energy-momentum eigenvalues. (This subspace only exists in a rigged Hilbert space, because the momentum spectrum is continuous.)
The little group.
The subspace is spanned by E, "P", M and "L""ij". We already know how the subspace of the irrep transforms under all operators but the angular momentum. Note that the rotation subgroup is SO(3). We have to look at its double cover, Spin(3), because we are considering projective representations. This is called the little group, a name given by Eugene Wigner. His method of induced representations specifies that the irrep is given by the direct sum of all the fibers in a vector bundle over the "mE" = "mE"0 + "P"2/2 hypersurface, whose fibers are a unitary irrep of Spin(3).
Spin(3) is none other than SU(2). (See representation theory of SU(2), where it is shown that the unitary irreps of SU(2) are labeled by s, a non-negative integer multiple of one half. This is called spin, for historical reasons.)
Consider now the case "m" = 0. By unitarity, formula_17 is nonpositive. Suppose it is zero. Here, it is also the boosts as well as the rotations that constitute the little group. Any unitary irrep of this little group also gives rise to a projective irrep of the Galilean group. As far as we can tell, only the case which transforms trivially under the little group has any physical interpretation, and it corresponds to the no-particle state, the vacuum.
is nonpositive. Suppose it is zero. Here, it is also the boosts as well as the rotations that constitute the little group. Any unitary irrep of this little group also gives rise to a projective irrep of the Galilean group. As far as we can tell, only the case which transforms trivially under the little group has any physical interpretation, and it corresponds to the no-particle state, the vacuum.
The case where the invariant is negative requires additional comment. This corresponds to the representation class for m = 0 and non-zero "P". Extending the bradyon, luxon, tachyon classification from the representation theory of the Poincaré group to an analogous classification, here, one may term these states "synchrons". They represent an instantaneous transfer of non-zero momentum across a (possibly large) distance. Associated with them, by the above, is a "time" operator
formula_18
which may be identified with the time of transfer. These states are naturally interpreted as the carriers of instantaneous action-at-a-distance forces.
N.B. In the 3 + 1-dimensional Galilei group, the boost generator may be decomposed into
formula_19
with "W" playing a role analogous to helicity. | [
{
"math_id": 0,
"text": "[E,P_i]=0"
},
{
"math_id": 1,
"text": "[P_i,P_j]=0"
},
{
"math_id": 2,
"text": "[L_{ij},E]=0"
},
{
"math_id": 3,
"text": "[C_i,C_j]=0"
},
{
"math_id": 4,
"text": "[L_{ij},L_{kl}]=i\\hbar [\\delta_{ik}L_{jl}-\\delta_{il}L_{jk}-\\delta_{jk}L_{il}+\\delta_{jl}L_{ik}]"
},
{
"math_id": 5,
"text": "[L_{ij},P_k]=i\\hbar[\\delta_{ik}P_j-\\delta_{jk}P_i]"
},
{
"math_id": 6,
"text": "[L_{ij},C_k]=i\\hbar[\\delta_{ik}C_j-\\delta_{jk}C_i]"
},
{
"math_id": 7,
"text": "[C_i,E]=i\\hbar P_i"
},
{
"math_id": 8,
"text": "[C_i,P_j]=i\\hbar M\\delta_{ij} ~. "
},
{
"math_id": 9,
"text": "ME-{P^2\\over 2}"
},
{
"math_id": 10,
"text": "\\vec{W} \\equiv M \\vec{L} + \\vec{P}\\times\\vec{C} ~,"
},
{
"math_id": 11,
"text": "W_{ij} = M L_{ij} + P_i C_j - P_j C_i"
},
{
"math_id": 12,
"text": "W_{ijk} = P_i L_{jk} + P_j L_{ki} + P_k L_{ij}~,"
},
{
"math_id": 13,
"text": "W_{ij} = M S_{ij}"
},
{
"math_id": 14,
"text": "L_{ij} = S_{ij} + X_i P_j - X_j P_i"
},
{
"math_id": 15,
"text": "C_i = M X_i - P_i t ~."
},
{
"math_id": 16,
"text": "mE = mE_0 + {P^2 \\over 2}~,"
},
{
"math_id": 17,
"text": "mE - {P^2 \\over 2} = {-P^2 \\over 2}"
},
{
"math_id": 18,
"text": "t=-{\\vec{P}\\cdot \\vec{C} \\over P^2} ~,"
},
{
"math_id": 19,
"text": "\\vec{C} = {\\vec{W}\\times\\vec{P} \\over P^2} - \\vec{P}t~,"
}
]
| https://en.wikipedia.org/wiki?curid=1405861 |
140592 | Assignment problem | Combinatorial optimization problem
The assignment problem is a fundamental combinatorial optimization problem. In its most general form, the problem is as follows:
The problem instance has a number of "agents" and a number of "tasks". Any agent can be assigned to perform any task, incurring some "cost" that may vary depending on the agent-task assignment. It is required to perform as many tasks as possible by assigning at most one agent to each task and at most one task to each agent, in such a way that the "total cost" of the assignment is minimized.
Alternatively, describing the problem using graph theory:
The assignment problem consists of finding, in a weighted bipartite graph, a matching of a given size, in which the sum of weights of the edges is minimum.
If the numbers of agents and tasks are equal, then the problem is called "balanced assignment". Otherwise, it is called "unbalanced assignment". If the total cost of the assignment for all tasks is equal to the sum of the costs for each agent (or the sum of the costs for each task, which is the same thing in this case), then the problem is called "linear assignment". Commonly, when speaking of the "assignment problem" without any additional qualification, then the "linear balanced assignment problem" is meant.
Examples.
Suppose that a taxi firm has three taxis (the agents) available, and three customers (the tasks) wishing to be picked up as soon as possible. The firm prides itself on speedy pickups, so for each taxi the "cost" of picking up a particular customer will depend on the time taken for the taxi to reach the pickup point. This is a "balanced assignment" problem. Its solution is whichever combination of taxis and customers results in the least total cost.
Now, suppose that there are "four" taxis available, but still only three customers. This is an "unbalanced assignment" problem. One way to solve it is to invent a fourth dummy task, perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it. This reduces the problem to a balanced assignment problem, which can then be solved in the usual way and still give the best solution to the problem.
Similar adjustments can be done in order to allow more tasks than agents, tasks to which multiple agents must be assigned (for instance, a group of more customers than will fit in one taxi), or maximizing profit rather than minimizing cost.
Formal definition.
The formal definition of the assignment problem (or linear assignment problem) is
Given two sets, "A" and "T", together with a weight function "C" : "A" × "T" → R. Find a bijection "f" : "A" → "T" such that the cost function:
formula_0
is minimized.
Usually the weight function is viewed as a square real-valued matrix "C", so that the cost function is written down as:
formula_1
The problem is "linear" because the cost function to be optimized as well as all the constraints contain only linear terms.
Algorithms.
A naive solution for the assignment problem is to check all the assignments and calculate the cost of each one. This may be very inefficient since, with "n" agents and "n" tasks, there are "n"! (factorial of "n") different assignments.
Another naive solution is to greedily assign the pair with the smallest cost first, and remove the vertices; then, among the remaining vertices, assign the pair with the smallest cost; and so on. This algorithm may yield a non-optimal solution. For example, suppose there are two tasks and two agents, Alice and George, each with a cost for performing each task.
The greedy algorithm would assign Task 1 to Alice and Task 2 to George, for a total cost of 9; but the reverse assignment has a total cost of 7.
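For concreteness, one cost matrix consistent with the totals above (these particular numbers are an assumption) gives Alice costs 1 and 2 and George costs 5 and 8 for Tasks 1 and 2 respectively; a short brute-force comparison in Python shows the gap:

from itertools import permutations

cost = [[1, 2],   # Alice's costs for Task 1, Task 2 (assumed values)
        [5, 8]]   # George's costs for Task 1, Task 2 (assumed values)

def greedy(cost):
    # repeatedly pick the cheapest remaining agent-task pair
    n, total = len(cost), 0
    free_agents, free_tasks = set(range(n)), set(range(n))
    while free_agents:
        i, j = min(((a, t) for a in free_agents for t in free_tasks),
                   key=lambda at: cost[at[0]][at[1]])
        total += cost[i][j]
        free_agents.remove(i)
        free_tasks.remove(j)
    return total

optimal = min(sum(cost[i][p[i]] for i in range(len(cost)))
              for p in permutations(range(len(cost))))
print(greedy(cost), optimal)   # 9 7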
Fortunately, there are many algorithms for finding the optimal assignment in time polynomial in "n". The assignment problem is a special case of the transportation problem, which is a special case of the minimum cost flow problem, which in turn is a special case of a linear program. While it is possible to solve any of these problems using the simplex algorithm, or in worst-case polynomial time using the ellipsoid method, each specialization has a smaller solution space and thus more efficient algorithms designed to take advantage of its special structure.
Balanced assignment.
In the balanced assignment problem, both parts of the bipartite graph have the same number of vertices, denoted by "n".
One of the first polynomial-time algorithms for balanced assignment was the Hungarian algorithm. It is a "global" algorithm – it is based on improving a matching along augmenting paths (alternating paths between unmatched vertices). Its run-time complexity, when using Fibonacci heaps, is formula_2, where "m" is the number of edges. This is currently the fastest run-time of a strongly polynomial algorithm for this problem. If all weights are integers, then the run-time can be improved to formula_3, but the resulting algorithm is only weakly-polynomial. If the weights are integers, and all weights are at most "C" (where "C">1 is some integer), then the problem can be solved in formula_4 weakly-polynomial time in a method called "weight scaling".
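In practice, library routines are often used rather than hand-written implementations; for instance, SciPy provides a linear sum assignment solver. A minimal sketch with an arbitrary cost matrix:

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

row_ind, col_ind = linear_sum_assignment(cost)   # minimum-cost perfect matching
print(list(zip(row_ind, col_ind)))               # optimal agent-task pairs
print(cost[row_ind, col_ind].sum())              # total cost: 5 for this matrix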
In addition to the global methods, there are "local methods" which are based on finding local updates (rather than full augmenting paths). These methods have worse asymptotic runtime guarantees, but they often work better in practice. These algorithms are called auction algorithms, push-relabel algorithms, or preflow-push algorithms. Some of these algorithms were shown to be equivalent.
Some of the local methods assume that the graph admits a "perfect matching"; if this is not the case, then some of these methods might run forever. A simple technical way to solve this problem is to extend the input graph to a "complete bipartite graph," by adding artificial edges with very large weights. These weights should exceed the weights of all existing matchings, to prevent the appearance of artificial edges in the possible solution.
As shown by Mulmuley, Vazirani and Vazirani, the problem of minimum weight perfect matching is converted to finding minors in the adjacency matrix of a graph. Using the isolation lemma, a minimum weight perfect matching in a graph can be found with probability at least <templatestyles src="Fraction/styles.css" />1⁄2. For a graph with "n" vertices, it requires formula_5 time.
Unbalanced assignment.
In the unbalanced assignment problem, the larger part of the bipartite graph has "n" vertices and the smaller part has "r"<"n" vertices. There is also a constant "s" which is at most the cardinality of a maximum matching in the graph. The goal is to find a minimum-cost matching of size exactly "s". The most common case is the case in which the graph admits a one-sided-perfect matching (i.e., a matching of size "r"), and "s"="r".
Unbalanced assignment can be reduced to a balanced assignment. The naive reduction is to add formula_6 new vertices to the smaller part and connect them to the larger part using edges of cost 0. However, this requires formula_7 new edges. A more efficient reduction is called the "doubling technique". Here, a new graph "G'" is built from two copies of the original graph "G": a forward copy "Gf" and a backward copy "Gb." The backward copy is "flipped", so that, in each side of "G"', there are now "n"+"r" vertices. Between the copies, two kinds of linking edges need to be added.
All in all, at most formula_8 new edges are required. The resulting graph always has a perfect matching of size formula_8. A minimum-cost perfect matching in this graph must consist of minimum-cost maximum-cardinality matchings in "Gf" and "Gb." The main problem with this doubling technique is that there is no speed gain when formula_9.
Instead of using reduction, the unbalanced assignment problem can be solved by directly generalizing existing algorithms for balanced assignment. The Hungarian algorithm can be generalized to solve the problem in formula_10 strongly-polynomial time. In particular, if "s"="r" then the runtime is formula_11. If the weights are integers, then Thorup's method can be used to get a runtime of formula_12.
Solution by linear programming.
The assignment problem can be solved by presenting it as a linear program. For convenience we will present the maximization problem. Each edge ("i","j"), where "i" is in A and "j" is in T, has a weight formula_13. For each edge ("i","j") we have a variable formula_14. The variable is 1 if the edge is contained in the matching and 0 otherwise, so we set the domain constraints:
formula_15 formula_16
The total weight of the matching is: formula_17. The goal is to find a maximum-weight perfect matching.
To guarantee that the variables indeed represent a perfect matching, we add constraints saying that each vertex is adjacent to exactly one edge in the matching, i.e.,
formula_18.
All in all we have the following LP:
formula_19
formula_20
formula_15
formula_16
This is an integer linear program. However, we can solve it without the integrality constraints (i.e., drop the last constraint), using standard methods for solving continuous linear programs. While this formulation also allows fractional variable values, in this special case, the LP always has an optimal solution where the variables take integer values. This is because the constraint matrix of the fractional LP is totally unimodular – it satisfies the four conditions of Hoffman and Gale.
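The integrality claim can be observed empirically by handing the continuous relaxation to an LP solver; in the sketch below (maximization is turned into minimization by negating the weights, and the weight matrix is an arbitrary example) the returned variables come out 0/1 without any integrality constraint:

import numpy as np
from scipy.optimize import linprog

w = np.array([[4, 1, 3],
              [2, 0, 5],
              [3, 2, 2]], dtype=float)
n = w.shape[0]

A_eq = np.zeros((2 * n, n * n))       # variables x_ij flattened row-major
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1    # each agent matched exactly once
    A_eq[n + i, i::n] = 1             # each task matched exactly once
b_eq = np.ones(2 * n)

res = linprog(-w.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n, n).round(3))   # an integral (0/1) optimal matching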
Other methods and approximation algorithms.
Other approaches for the assignment problem exist and are reviewed by Duan and Pettie (see Table II). Their work proposes an approximation algorithm for the assignment problem (and the more general maximum weight matching problem), which runs in linear time for any fixed error bound.
Generalization.
When phrased as a graph theory problem, the assignment problem can be extended from bipartite graphs to arbitrary graphs. The corresponding problem, of finding a matching in a weighted graph where the sum of weights is maximized, is called the maximum weight matching problem.
Another generalization of the assignment problem is extending the number of sets to be matched from two to many. Thus, rather than matching agents to tasks, the problem is extended to matching agents to tasks to time intervals to locations. This results in the multidimensional assignment problem (MAP).
References and further reading.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{a\\in A}C(a,f(a))"
},
{
"math_id": 1,
"text": "\\sum_{a\\in A}C_{a,f(a)}"
},
{
"math_id": 2,
"text": "O(mn + n^2\\log n)"
},
{
"math_id": 3,
"text": "O(mn + n^2\\log \\log n)"
},
{
"math_id": 4,
"text": "O(m\\sqrt{n} \\log(n\\cdot C))"
},
{
"math_id": 5,
"text": " O(\\log^2(n)) "
},
{
"math_id": 6,
"text": "n-r"
},
{
"math_id": 7,
"text": "n(n-r)"
},
{
"math_id": 8,
"text": "n+r"
},
{
"math_id": 9,
"text": "r\\ll n"
},
{
"math_id": 10,
"text": "O(ms + s^2\\log r)"
},
{
"math_id": 11,
"text": "O(mr + r^2\\log r)"
},
{
"math_id": 12,
"text": "O(ms + s^2\\log \\log r)"
},
{
"math_id": 13,
"text": "w_{ij}"
},
{
"math_id": 14,
"text": "x_{ij}"
},
{
"math_id": 15,
"text": "0\\le x_{ij}\\le 1\\text{ for }i,j\\in A,T, \\, "
},
{
"math_id": 16,
"text": "x_{ij}\\in \\mathbb{Z}\\text{ for }i,j\\in A,T. "
},
{
"math_id": 17,
"text": "\\sum_{(i,j)\\in A\\times T} w_{ij}x_{ij}"
},
{
"math_id": 18,
"text": "\\sum_{j\\in T}x_{ij}=1\\text{ for }i\\in A, \\,\n~~~\n\\sum_{i\\in A}x_{ij}=1\\text{ for }j\\in T, \\, "
},
{
"math_id": 19,
"text": "\\text{maximize}~~\\sum_{(i,j)\\in A\\times T} w_{ij}x_{ij}\n"
},
{
"math_id": 20,
"text": "\\text{subject to}~~\\sum_{j\\in T}x_{ij}=1\\text{ for }i\\in A, \\,\n~~~\n\\sum_{i\\in A}x_{ij}=1\\text{ for }j\\in T "
}
]
| https://en.wikipedia.org/wiki?curid=140592 |
14060661 | Capillary surface | Surface representing the interface between two different fluids
In fluid mechanics and mathematics, a capillary surface is a surface that represents the interface between two different fluids. As a consequence of being a surface, a capillary surface has no thickness in slight contrast with most real fluid interfaces.
Capillary surfaces are of interest in mathematics because the problems involved are very nonlinear and have interesting properties, such as discontinuous dependence on boundary data at isolated points. In particular, static capillary surfaces with gravity absent have constant mean curvature, so that a minimal surface is a special case of static capillary surface.
They are also of practical interest for fluid management in space (or other environments free of body forces), where both flow and static configuration are often dominated by capillary effects.
The stress balance equation.
The defining equation for a capillary surface is called the stress balance equation, which can be derived by considering the forces and stresses acting on a small volume that is partly bounded by a capillary surface. For a fluid meeting another fluid (the "other" fluid notated with bars) at a surface formula_0, the equation reads
formula_1
where formula_2 is the unit normal pointing toward the "other" fluid (the one whose quantities are notated with bars), formula_3 is the stress tensor (note that on the left is a tensor-vector product), formula_4 is the surface tension associated with the interface, and formula_5 is the surface gradient. Note that the quantity formula_6 is twice the mean curvature of the surface.
In fluid mechanics, this equation serves as a boundary condition for interfacial flows, typically complementing the Navier–Stokes equations. It describes the discontinuity in stress that is balanced by forces at the surface. As a boundary condition, it is somewhat unusual in that it introduces a new variable: the surface formula_0 that defines the interface. It is not too surprising, then, that the stress balance equation normally mandates its own boundary conditions.
For best use, this vector equation is normally turned into 3 scalar equations via dot product with the unit normal and two selected unit tangents:
formula_7
formula_8
formula_9
Note that the products lacking dots are tensor products of tensors with vectors (resulting in vectors similar to a matrix-vector product), while those with dots are dot products. The first equation is called the normal stress equation, or the normal stress boundary condition. The other two equations are called tangential stress equations.
The stress tensor.
The stress tensor is related to velocity and pressure. Its actual form will depend on the specific fluid being dealt with, for the common case of incompressible Newtonian flow the stress tensor is given by
formula_10
where formula_11 is the pressure in the fluid, formula_12 is the velocity, and formula_13 is the viscosity.
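As a small numerical sketch of this formula (all values below are arbitrary assumptions for illustration, not data from the text), the tensor can be assembled from a velocity-gradient matrix with NumPy:

import numpy as np

p  = 101325.0                        # pressure [Pa] (assumed value)
mu = 1.0e-3                          # viscosity [Pa s], roughly that of water (assumed)
grad_v = np.array([[0.0, 2.0, 0.0],  # assumed velocity-gradient tensor [1/s]
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])

sigma = -p * np.eye(3) + mu * (grad_v + grad_v.T)   # -p*I + mu*(grad v + (grad v)^T)
print(sigma)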
Static interfaces.
In the absence of motion, the stress tensors yield only hydrostatic pressure so that formula_14, regardless of fluid type or compressibility. Considering the normal and tangential equations,
formula_15
formula_16
The first equation establishes that curvature forces are balanced by pressure forces. The second equation implies that a static interface cannot exist in the presence of a nonzero surface tension gradient.
If gravity is the only body force present, the Navier–Stokes equations simplify significantly:
formula_17
If coordinates are chosen so that gravity is nonzero only in the formula_18 direction, this equation reduces to a particularly simple form:
formula_19
where formula_20 is an integration constant that represents some reference pressure at formula_21. Substituting this into the normal stress equation yields what is known as the Young-Laplace equation:
formula_22
where formula_23 is the (constant) pressure difference across the interface, and formula_24 is the difference in density. Note that, since this equation defines a surface, formula_18 is the formula_18 coordinate of the capillary surface. This nonlinear partial differential equation when supplied with the right boundary conditions will define the static interface.
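For a spherical interface of radius R, the divergence of the unit normal is 2/R (twice the mean curvature), so with gravity neglected the Young-Laplace equation reduces to Δp = 2γ/R. A quick numerical check for a millimetre-scale water droplet (the surface tension value is an assumed, typical figure):

gamma = 0.072    # surface tension of water near room temperature [N/m] (assumed)
R = 1.0e-3       # droplet radius [m]

delta_p = 2 * gamma / R
print(delta_p)   # 144.0 Pa: the pressure is higher on the concave side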
The pressure difference above is a constant, but its value will change if the formula_18 coordinate is shifted. The linear solution to pressure implies that, unless the gravity term is absent, it is always possible to define the formula_18 coordinate so that formula_25. Nondimensionalized, the Young-Laplace equation is usually studied in the form
formula_26
where (if gravity is in the negative formula_18 direction) formula_27 is positive if the denser fluid is "inside" the interface, negative if it is "outside", and zero if there is no gravity or if there is no difference in density between the fluids.
This nonlinear equation has some rich properties, especially in terms of the existence of unique solutions. For example, the nonexistence of a solution to some boundary value problem implies that, physically, the problem cannot be static. If a solution does exist, normally it will exist for very specific values of formula_28, which is representative of the pressure jump across the interface. This is interesting because there is no other physical equation from which to determine the pressure difference. In a capillary tube, for example, implementing the contact angle boundary condition will yield a unique solution for exactly one value of formula_28. Solutions are often not unique; this implies that multiple static interfaces are possible. While they may all solve the same boundary value problem, the minimization of energy will normally favor one. Different solutions are called "configurations" of the interface.
Energy consideration.
A deep property of capillary surfaces is the surface energy that is imparted by surface tension:
formula_29
where formula_30 is the area of the surface being considered, and the total energy is the summation of all energies. Note that "every" interface imparts energy. For example, if there are two different fluids (say liquid and gas) inside a solid container with gravity and other energy potentials absent, the energy of the system is
formula_31
where the subscripts formula_32, formula_33, and formula_34 respectively indicate the liquid–gas, solid–gas, and solid–liquid interfaces. Note that inclusion of gravity would require consideration of the volume enclosed by the capillary surface and the solid walls.
Typically the surface tension values between the solid–gas and solid–liquid interfaces are not known. This does not pose a problem, since only changes in energy are of primary interest. If the net solid area formula_35 is a constant, and the contact angle is known, it may be shown that (again, for two different fluids in a solid container)
formula_36
so that
formula_37
where formula_38 is the contact angle and the capital delta indicates the change from one configuration to another. To obtain this result, it is necessary to sum (distributed) forces at the contact line (where solid, gas, and liquid meet) in a direction tangent to the solid interface and perpendicular to the contact line:
formula_39
where the sum is zero because of the static state. When solutions to the Young-Laplace equation are not unique, the most physically favorable solution is the one of minimum energy, though experiments (especially in low gravity) show that metastable surfaces can be surprisingly persistent, and that the most stable configuration can become metastable through mechanical jarring without too much difficulty. On the other hand, a metastable surface can sometimes spontaneously achieve lower energy without any input (seemingly, at least) given enough time.
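Rearranging the contact-line force balance above gives Young's equation, cos(θ) = (γSG − γSL)/γLG, from which the contact angle follows whenever the three tensions are known; the tension values below are assumptions for illustration only:

import math

gamma_LG = 0.072   # liquid-gas surface tension [N/m] (assumed)
gamma_SG = 0.060   # solid-gas surface tension [N/m] (assumed)
gamma_SL = 0.020   # solid-liquid surface tension [N/m] (assumed)

cos_theta = (gamma_SG - gamma_SL) / gamma_LG
theta = math.degrees(math.acos(cos_theta))
print(round(theta, 1))   # contact angle in degrees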
Boundary conditions.
Boundary conditions for stress balance describe the capillary surface at the contact line: the line where a solid meets the capillary interface; also, volume constraints can serve as boundary conditions (a suspended drop, for example, has no contact line but clearly must admit a unique solution).
For static surfaces, the most common contact line boundary condition is the implementation of the contact angle, which specifies the angle that one of the fluids meets the solid wall. The contact angle condition on the surface formula_0 is normally written as:
formula_40
where formula_38 is the contact angle. This condition is imposed on the boundary (or boundaries) formula_41 of the surface. formula_42 is the unit outward normal to the solid surface, and formula_43 is a unit normal to formula_0. Choice of formula_43 depends on which fluid the contact angle is specified for.
For dynamic interfaces, the boundary condition shown above works well if the contact line velocity is low. If the velocity is high, the contact angle will change (the "dynamic contact angle"), and as of 2007 the mechanics of the moving contact line (or even the validity of the contact angle as a parameter) is not known and is an area of active research.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\begin{align}\n& (\\sigma_{ij} - \\bar{\\sigma}_{ij}) \\mathbf{\\hat{n}} = - \\gamma \\mathbf{\\hat{n}} (\\nabla_{\\!S} \\cdot \\mathbf{\\hat{n}}) + \\nabla_{\\!S} \\gamma \\\\\n& \\qquad \\nabla_{\\!S} \\gamma = \\nabla \\gamma - \\mathbf{\\hat{n}} (\\mathbf{\\hat{n}} \\cdot \\nabla \\gamma)\n\\end{align}"
},
{
"math_id": 2,
"text": "\\scriptstyle \\mathbf{\\hat{n}}"
},
{
"math_id": 3,
"text": "\\scriptstyle \\sigma_{ij}"
},
{
"math_id": 4,
"text": "\\scriptstyle \\gamma"
},
{
"math_id": 5,
"text": "\\scriptstyle \\nabla_S"
},
{
"math_id": 6,
"text": "\\scriptstyle -\\nabla_{\\!S} \\cdot \\mathbf{\\hat{n}}"
},
{
"math_id": 7,
"text": "((\\sigma_{ij} - \\bar{\\sigma}_{ij}) \\mathbf{\\hat{n}}) \\cdot \\mathbf{\\hat{n}} = -\\gamma \\nabla_{\\!S} \\cdot \\mathbf{\\hat{n}}"
},
{
"math_id": 8,
"text": "((\\sigma_{ij} - \\bar{\\sigma}_{ij}) \\mathbf{\\hat{n}}) \\cdot \\mathbf{\\hat{t}_1} = \\nabla_{\\!S} \\gamma \\cdot \\mathbf{\\hat{t}_1}"
},
{
"math_id": 9,
"text": "((\\sigma_{ij} - \\bar{\\sigma}_{ij}) \\mathbf{\\hat{n}}) \\cdot \\mathbf{\\hat{t}_2} = \\nabla_{\\!S} \\gamma \\cdot \\mathbf{\\hat{t}_2}"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n\n\\sigma_{ij} &= \n-\\begin{pmatrix}\np&0&0\\\\\n0&p&0\\\\\n0&0&p\n\\end{pmatrix} +\n\n\\mu \\begin{pmatrix}\n2 \\frac{\\partial u}{\\partial x} & \\frac{\\partial u}{\\partial y} + \\frac{\\partial v}{\\partial x} & \\frac{\\partial u}{\\partial z} + \\frac{\\partial w}{\\partial x} \\\\\n\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y} & 2 \\frac{\\partial v}{\\partial y} & \\frac{\\partial v}{\\partial z} + \\frac{\\partial w}{\\partial y} \\\\\n\\frac{\\partial w}{\\partial x} + \\frac{\\partial u}{\\partial z} & \\frac{\\partial w}{\\partial y} + \\frac{\\partial v}{\\partial z} & 2\\frac{\\partial w}{\\partial z}\n\\end{pmatrix} \\\\\n\n&= -p I + \\mu (\\nabla \\mathbf{v} + (\\nabla \\mathbf{v})^T)\n\n\\end{align}\n"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "\\scriptstyle \\mathbf{v}"
},
{
"math_id": 13,
"text": "\\mu"
},
{
"math_id": 14,
"text": "\\scriptstyle \\sigma_{ij} = -pI"
},
{
"math_id": 15,
"text": "\\bar p - p = \\gamma \\nabla \\cdot \\mathbf{\\hat{n}}"
},
{
"math_id": 16,
"text": "0 = \\nabla \\gamma \\cdot \\mathbf{\\hat{t}}"
},
{
"math_id": 17,
"text": "0 = -\\nabla p + \\rho \\mathbf{g}"
},
{
"math_id": 18,
"text": "z"
},
{
"math_id": 19,
"text": "\\frac{d p}{d z} = \\rho g \\quad \\Rightarrow \\quad p = \\rho g z + p_0"
},
{
"math_id": 20,
"text": "p_0"
},
{
"math_id": 21,
"text": "z = 0"
},
{
"math_id": 22,
"text": "\\bar \\rho g z + \\bar p_0 - (\\rho g z + p_0) = \\gamma \\nabla \\cdot \\mathbf{\\hat{n}} \\quad \\Rightarrow \\quad \\Delta \\rho g z + \\Delta p = \\gamma \\nabla \\cdot \\mathbf{\\hat{n}}"
},
{
"math_id": 23,
"text": "\\Delta p"
},
{
"math_id": 24,
"text": "\\Delta \\rho"
},
{
"math_id": 25,
"text": "\\Delta p = 0"
},
{
"math_id": 26,
"text": "\\kappa z + \\lambda = \\nabla \\cdot \\mathbf{\\hat{n}}"
},
{
"math_id": 27,
"text": "\\kappa"
},
{
"math_id": 28,
"text": "\\lambda"
},
{
"math_id": 29,
"text": "E_S = \\gamma_S A_S\\,"
},
{
"math_id": 30,
"text": "A"
},
{
"math_id": 31,
"text": "E = \\sum \\gamma_S A_S = \\gamma_{LG} A_{LG} + \\gamma_{SG} A_{SG} + \\gamma_{SL} A_{SL}\\,"
},
{
"math_id": 32,
"text": "LG"
},
{
"math_id": 33,
"text": "SG"
},
{
"math_id": 34,
"text": "SL"
},
{
"math_id": 35,
"text": "A_{SG} + A_{SL}"
},
{
"math_id": 36,
"text": "E = \\gamma_{SL}(A_{SL} + A_{SG}) + \\gamma_{LG} A_{LG} + \\gamma_{LG} A_{SG} \\cos(\\theta)\\,"
},
{
"math_id": 37,
"text": "\\frac{\\Delta E}{\\gamma_{LG}} = \\Delta A_{LG} + \\Delta A_{SG} \\cos(\\theta) = \\Delta A_{LG} - \\Delta A_{SL} \\cos(\\theta)\\,"
},
{
"math_id": 38,
"text": "\\theta"
},
{
"math_id": 39,
"text": "\n\\begin{align}\n0 &= \\sum F_{\\mathrm{Contact \\ line}} \\\\\n &= \\gamma_{LG} \\cos(\\theta) + \\gamma_{SL} - \\gamma_{SG}\n\\end{align}\n"
},
{
"math_id": 40,
"text": "\\mathbf{\\hat{n}} \\cdot \\mathbf{\\hat{v}} = \\cos(\\theta)\\,"
},
{
"math_id": 41,
"text": "\\scriptstyle \\partial S"
},
{
"math_id": 42,
"text": "\\scriptstyle \\hat v"
},
{
"math_id": 43,
"text": "\\scriptstyle \\hat n"
}
]
| https://en.wikipedia.org/wiki?curid=14060661 |
14063193 | Bargaining power | Relative ability of two parties in disagreement to influence each other
Bargaining power is the relative ability of parties in an argumentative situation (such as bargaining, contract writing, or making an agreement) to exert influence over each other in order to achieve favourable terms in an agreement. This power is derived from various factors such as each party’s alternatives to the current deal, the value of what is being negotiated, and the urgency of reaching an agreement. A party's bargaining power can significantly shift the outcome of negotiations, leading to more advantageous positions for those who possess greater leverage.
If both parties are on an equal footing in a debate, then they will have equal bargaining power, such as in a perfectly competitive market, or between an evenly matched monopoly and monopsony. In many cases, bargaining power is not static and can be enhanced through strategic actions such as improving one's alternatives, increasing the perceived value of one's offer, or altering the negotiation timeline.
The dynamics of bargaining power extend beyond individual negotiations to affect industries, economies, and international relations. In the realm of international trade negotiations, countries with larger economies or unique resources may wield greater bargaining power, affecting the terms of trade agreements and economic policies. Similarly, in labour economics, for example, the bargaining power of workers versus employers can influence wage levels, working conditions, and job security. Understanding the factors that influence bargaining power and how it can be balanced or leveraged is crucial for negotiators, policymakers, and analysts striving to achieve favorable outcomes in various contexts.
There are a number of fields where the concept of bargaining power has proven crucial to coherent analysis, including game theory, labour economics, collective bargaining arrangements, diplomatic negotiations, settlement of litigation, the price of insurance, and any negotiation in general.
Theories of distribution.
The distribution of bargaining power among negotiating parties is a central theme in various theoretical frameworks, spanning economics, game theory, and sociology. These theories provide insights into how power dynamics are established, negotiated, and shifted in bargaining situations.
Social Exchange Theory.
Blau (1964) and Emerson (1976) were the key theorists who developed the original theories of social exchange. Social exchange theory approaches bargaining power from a sociological perspective, suggesting that power dynamics in negotiations are influenced by the value of the resources each party brings to the exchange (a cost-benefit analysis), as well as the level of dependency between the parties. According to this theory, bargaining power increases when a party possesses resources that are highly valued and scarce, and when there are few alternatives to these resources. This theory underscores the relational aspect of bargaining power, where power is not inherent to the parties but emerges from the context of their relationship and exchange.
Principal-Agent Theory.
Jensen and Meckling (1976), Mirrlees (1976), Ross (1973), and Stiglitz (1975) were the key theorists who initiated the original theories of principal-agent theory. The principal-agent theory, often discussed in the context of corporate governance and contract theory, examines how bargaining power is distributed between principals (e.g., shareholders) and agents (e.g., managers). This theory highlights issues of information asymmetry, where agents might have more information than principals, potentially skewing bargaining power in favour of the agents. Mechanisms such as incentive schemes and performance monitoring are discussed as ways to align the interests of the principal and agent, thereby rebalancing bargaining power.
Economic Theories of Bargaining.
Economic theories of bargaining often focus on how the allocation of resources, market conditions, and alternative options influence bargaining power. The concept of BATNA (Best Alternative to a Negotiated Agreement) plays a crucial role in this context, positing that a party's bargaining power is significantly determined by the attractiveness of their options outside the negotiation. According to this perspective, the more advantageous the BATNA, the greater the party's bargaining power, as they have less to lose by walking away from the negotiation table.
Game Theory and Bargaining.
Game theory provides a mathematical framework to analyze bargaining situations, offering insights into the strategies that parties may employ to maximise their outcomes. The Nash Equilibrium, for instance, describes a situation where no party can benefit by changing their strategy while the other parties keep theirs unchanged, highlighting the balance of power in strategic interactions. The Ultimatum Game is another game theory model that illustrates how the power to propose how a resource is divided can drastically affect the distribution outcomes, even when such proposals are not equitable.
Calculation.
Several formulations of bargaining power have been devised. A popular one, due to American economist Neil W. Chamberlain in 1951, is:
We may define bargaining power (of A, let us say) as being the cost to B of "disagreeing" on A's terms relative to the costs of "agreeing" on A's terms ... Stated in another way, a (relatively) high cost to B of disagreement with A means that A's bargaining power is strong. A (relatively) high cost of agreement means that A's bargaining power is weak. Such statements in themselves, however, reveal nothing of the strength or weakness of A "relative" to B, since B might similarly possess a strong or weak bargaining power. But if the cost to B of disagreeing on A's terms is greater than the cost of agreeing on A's terms, while the cost to A of disagreeing on B's terms is less than the cost of agreeing on B's terms, then A's bargaining power is greater than that of B. More generally, only if the difference to B between the costs of disagreement and agreement on A's terms is proportionately greater than the difference to A between the costs of disagreement and agreement on B's terms can it be said that A's bargaining power is greater than that of B.
In another formulation, bargaining power is expressed as a ratio of a party's ability to influence the other participant, to the costs of not reaching an agreement to that party:
formula_0
formula_1
If formula_2 is greater than formula_3, then A has greater bargaining power than B, and the resulting agreement will tend to favour A. The reverse is expected if B has greater bargaining power instead.
These formulations and more complex models with more precisely defined variables are used to predict the probability of observing a certain outcome from a range of outcomes based on the parties' characteristics and behavior before and after the negotiation.
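As an illustration, the ratio formulation above can be evaluated directly. The following sketch is not taken from the literature; the function name and the numeric inputs (each party's capacity to impose benefits and costs on the other, and each party's own cost of failing to agree) are hypothetical, chosen only to show the comparison.

```python
# Minimal sketch of the ratio formulation above; all numbers are hypothetical.

def bargaining_power(pressure_on_other: float, own_cost_of_no_deal: float) -> float:
    """Ability to influence the other party, divided by one's own cost of no deal."""
    return pressure_on_other / own_cost_of_no_deal

bp_a = bargaining_power(pressure_on_other=8.0, own_cost_of_no_deal=2.0)  # BP_A = 4.0
bp_b = bargaining_power(pressure_on_other=5.0, own_cost_of_no_deal=4.0)  # BP_B = 1.25

if bp_a > bp_b:
    print("The agreement is expected to favour A.")
elif bp_a < bp_b:
    print("The agreement is expected to favour B.")
```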
Buying power.
Buying power is a specific type of bargaining power relating to a purchaser and a supplier. For example, a retailer may be able to dictate price to a small supplier if it has a large market share and/or can buy in bulk.
Economic theory.
In modern economic theory, the bargaining outcome between two parties is often modeled by the Nash Bargaining solution. An example is if party A and party B can collaborate in order to generate a surplus of formula_4. If the parties fail to reach an agreement, party A gets a payoff formula_5 and party B gets a payoff formula_6. If formula_7, reaching an agreement yields a larger total surplus. According to the generalized Nash bargaining solution, party A gets formula_8 and party B gets formula_9, where formula_10. There are different ways to derive formula_11. For example, Rubinstein (1982) has shown that in a bargaining game with alternating offers, formula_11 is close to formula_12 when party A is much more patient than party B, while formula_11 is equal to formula_13 if both parties are equally patient. In this case, party A's payoff is increasing in formula_11 as well as in formula_5, and so both parameters reflect different aspects of party A's power. To clearly distinguish between the two parameters, some authors such as Schmitz refer to formula_11 as party A's "bargaining power" and to formula_5 as party A's "bargaining position". A prominent application is the property rights approach to the theory of the firm. In this application, formula_11 is often exogenously fixed to formula_13, while formula_5 and formula_6 are determined by investments of the two parties.
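To make the payoff expressions concrete, the sketch below evaluates the generalized Nash bargaining split for the article's example surplus of 100. The disagreement payoffs and the bargaining-power parameter are illustrative values, not data from any source.

```python
# Generalized Nash bargaining split: A gets X + pi*(S - X - Y), B gets the rest.
# X, Y and pi below are illustrative.

def nash_split(x: float, y: float, pi: float, surplus: float = 100.0):
    assert x + y < surplus and 0 < pi < 1
    gain = surplus - x - y                    # extra surplus created by agreeing
    return x + pi * gain, y + (1 - pi) * gain

print(nash_split(x=30.0, y=20.0, pi=0.5))     # (55.0, 45.0): equally patient parties
print(nash_split(x=30.0, y=20.0, pi=0.9))     # (75.0, 25.0): A far more patient
```

Raising X improves party A's payoff for any fixed pi (a better bargaining position), while raising pi improves it for any fixed X (greater bargaining power), matching the distinction drawn above.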
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "BP_A \\text{ (Bargaining Power of A)} = \\frac{\\text{Benefits and Costs that can be inflicted upon B}}{\\text{A's cost of not agreeing}}"
},
{
"math_id": 1,
"text": "BP_B \\text{ (Bargaining Power of B)} = \\frac{\\text{Benefits and Costs that can be inflicted upon A}}{\\text{B's cost of not agreeing}}"
},
{
"math_id": 2,
"text": "BP_A"
},
{
"math_id": 3,
"text": "BP_B"
},
{
"math_id": 4,
"text": "100"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "Y"
},
{
"math_id": 7,
"text": "X+Y<100"
},
{
"math_id": 8,
"text": "X+\\pi(100-X-Y)"
},
{
"math_id": 9,
"text": "Y+(1-\\pi)(100-X-Y)"
},
{
"math_id": 10,
"text": "0 < \\pi < 1"
},
{
"math_id": 11,
"text": "\\pi"
},
{
"math_id": 12,
"text": "1"
},
{
"math_id": 13,
"text": "\\frac{1}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=14063193 |
1406385 | Radon's theorem | Says d+2 points in d dimensions can be partitioned into two subsets whose convex hulls intersect
In geometry, Radon's theorem on convex sets, published by Johann Radon in 1921, states that: Any set of d + 2 points in Rd can be partitioned into two sets whose convex hulls intersect. A point in the intersection of these convex hulls is called a Radon point of the set. For example, in the case "d" = 2, any set of four points in the Euclidean plane can be partitioned in one of two ways. It may form a triple and a singleton, where the convex hull of the triple (a triangle) contains the singleton; alternatively, it may form two pairs of points that form the endpoints of two intersecting line segments.
Proof and construction.
Consider any set formula_0 of "d" + 2 points in "d"-dimensional space. Then there exists a set of multipliers "a"1, ..., "a""d" + 2, not all of which are zero, solving the system of linear equations
formula_1
because there are "d" + 2 unknowns (the multipliers) but only "d" + 1 equations that they must satisfy (one for each coordinate of the points, together with a final equation requiring the sum of the multipliers to be zero). Fix some particular nonzero solution "a"1, ..., "a""d" + 2. Let formula_2 be the set of points with positive multipliers, and let formula_3 be the set of points with multipliers that are negative or zero. Then formula_4 and formula_5 form the required partition of the points into two subsets with intersecting convex hulls.
The convex hulls of formula_4 and formula_5 must intersect, because they both contain the point
formula_6
where
formula_7
The left hand side of the formula for formula_8 expresses this point as a convex combination of the points in formula_4, and the right hand side expresses it as a convex combination of the points in formula_5. Therefore, formula_8 belongs to both convex hulls, completing the proof.
This proof method allows for the efficient construction of a Radon point, in an amount of time that is polynomial in the dimension, by using Gaussian elimination or other efficient algorithms to solve the system of equations for the multipliers.
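The construction can be carried out numerically. The sketch below is one possible implementation, using a singular-value decomposition to find a nonzero solution of the linear system from the proof; the function name and the sample points are illustrative.

```python
# Radon point via the constructive proof: solve sum a_i x_i = 0, sum a_i = 0
# for a nonzero vector a, then average the positively weighted points.
import numpy as np

def radon_point(points: np.ndarray) -> np.ndarray:
    """points: array of shape (d+2, d) holding d+2 points in R^d."""
    n, d = points.shape
    assert n == d + 2
    M = np.vstack([points.T, np.ones(n)])   # (d+1) equations in (d+2) unknowns
    a = np.linalg.svd(M)[2][-1]             # null-space vector (smallest singular value)
    if a[a > 0].sum() == 0:                 # ensure the positive part is nonempty
        a = -a
    pos = a > 0
    return (a[pos] @ points[pos]) / a[pos].sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(radon_point(square))                  # [0.5, 0.5]: the crossing of the diagonals
```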
Topological Radon theorem.
An equivalent formulation of Radon's theorem is: If ƒ is any affine function from a ("d" + 1)-dimensional simplex Δd+1 to Rd, then there are two disjoint faces of Δd+1 whose images under ƒ intersect. They are equivalent because any affine function on a simplex is uniquely determined by the images of its vertices. Formally, let ƒ be an affine function from Δd+1 to Rd. Let formula_9 be the vertices of Δd+1, and let formula_10 be their images under "ƒ". By the original formulation, the formula_10 can be partitioned into two disjoint subsets, e.g. ("xi")i in I and ("xj")j in J, with overlapping convex hulls. Because "f" is affine, the convex hull of ("xi")i in I is the image of the face spanned by the vertices ("vi")i in I, and similarly the convex hull of ("xj")j in J is the image of the face spanned by the vertices ("vj")j in J. These two faces are disjoint, and their images under "f" intersect, as claimed by the new formulation.
The topological Radon theorem generalizes this formulation. It allows "f" to be any continuous function, not necessarily affine: If ƒ is any continuous function from a ("d" + 1)-dimensional simplex Δd+1 to Rd, then there are two disjoint faces of Δd+1 whose images under ƒ intersect. More generally, if "K" is any ("d" + 1)-dimensional compact convex set, and ƒ is any continuous function from "K" to "d"-dimensional space, then there exists a linear function "g" such that some point where "g" achieves its maximum value and some other point where "g" achieves its minimum value are mapped by ƒ to the same point. In the case where "K" is a simplex, the two simplex faces formed by the maximum and minimum points of "g" must then be two disjoint faces whose images have a nonempty intersection. This same general statement, when applied to a hypersphere instead of a simplex, gives the Borsuk–Ulam theorem, that ƒ must map two opposite points of the sphere to the same point.
Proofs.
The topological Radon theorem was originally proved by Bajmoczy and Barany. Another proof was given by Lovasz and Schrijver, and a third by Matousek.
Applications.
The Radon point of any four points in the plane is their geometric median, the point that minimizes the sum of distances to the points.
Radon's theorem forms a key step of a standard proof of Helly's theorem on intersections of convex sets; this proof was the motivation for Radon's original discovery of Radon's theorem.
Radon's theorem can also be used to calculate the VC dimension of "d"-dimensional points with respect to linear separations. There exist sets of "d" + 1 points (for instance, the points of a regular simplex) such that every two nonempty subsets can be separated from each other by a hyperplane. However, no matter which set of "d" + 2 points is given, the two subsets of a Radon partition cannot be linearly separated. Therefore, the VC dimension of this system is exactly "d" + 1.
A randomized algorithm that repeatedly replaces sets of "d" + 2 points by their Radon point can be used to compute an approximation to a centerpoint of any point set, in an amount of time that is polynomial in both the number of points and the dimension.
Related concepts.
Geometric median. The Radon point of three points in a one-dimensional space is just their median. The geometric median of a set of points is the point minimizing the sum of distances to the points in the set; it generalizes the one-dimensional median and has been studied both from the point of view of facility location and robust statistics. For sets of four points in the plane, the geometric median coincides with the Radon point.
Tverberg's theorem. A generalization for partition into "r" sets was given by Helge Tverberg (1966) and is now known as Tverberg's theorem. It states that for any set of formula_13 points in Euclidean "d"-space, there is a partition into "r" subsets whose convex hulls intersect in at least one common point.
Carathéodory's theorem states that any point in the convex hull of some set of points is also within the convex hull of a subset of at most "d" + 1 of the points; that is, that the given point is part of a Radon partition in which it is a singleton. One proof of Carathéodory's theorem uses a technique of examining solutions to systems of linear equations, similar to the proof of Radon's theorem, to eliminate one point at a time until at most "d" + 1 remain.
Convex geometries. Concepts related to Radon's theorem have also been considered for convex geometries, families of finite sets with the properties that the intersection of any two sets in the family remains in the family, and that the empty set and the union of all the sets belongs to the family. In this more general context, the convex hull of a set "S" is the intersection of the family members that contain "S", and the Radon number of a space is the smallest "r" such that any "r" points have two subsets whose convex hulls intersect. Similarly, one can define the Helly number "h" and the Carathéodory number "c" by analogy to their definitions for convex sets in Euclidean spaces, and it can be shown that these numbers satisfy the inequalities "h" < "r" ≤ "ch" + 1.
Radon theorem for graphs. In an arbitrary undirected graph, one may define a convex set to be a set of vertices that includes every induced path connecting a pair of vertices in the set. With this definition, every set of ω + 1 vertices in the graph can be partitioned into two subsets whose convex hulls intersect, and ω + 1 is the minimum number for which this is possible, where ω is the clique number of the given graph. For related results involving shortest paths instead of induced paths see and .
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X=\\{x_1,x_2,\\dots,x_{d+2}\\}\\subset \\mathbf{R}^d"
},
{
"math_id": 1,
"text": " \\sum_{i=1}^{d+2} a_i x_i=0,\\quad \\sum_{i=1}^{d+2} a_i=0,"
},
{
"math_id": 2,
"text": "I\\subseteq X"
},
{
"math_id": 3,
"text": "J=X\\setminus I"
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "J"
},
{
"math_id": 6,
"text": "p= \\sum_{x_i\\in I}\\frac{a_i}{A} x_i=\\sum_{x_j\\in J}\\frac{-a_j}{A}x_j,"
},
{
"math_id": 7,
"text": "A=\\sum_{x_i\\in I} a_i=-\\sum_{x_j\\in J} a_j."
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "v_1,v_2,\\dots,v_{d+2}"
},
{
"math_id": 10,
"text": "x_1,x_2,\\dots,x_{d+2}"
},
{
"math_id": 11,
"text": "f\\circ g"
},
{
"math_id": 12,
"text": "K^{*2}_{\\Delta}"
},
{
"math_id": 13,
"text": "(d + 1)(r - 1) + 1\\ "
}
]
| https://en.wikipedia.org/wiki?curid=1406385 |
14065259 | Differential nonlinearity | Differential nonlinearity (acronym DNL) is a commonly used measure of performance in digital-to-analog (DAC) and analog-to-digital (ADC) converters. It is a term describing the deviation between two analog values corresponding to adjacent input digital values. It is an important specification for measuring error in a digital-to-analog converter (DAC); the accuracy of a DAC is mainly determined by this specification. Ideally, any two adjacent digital codes correspond to output analog voltages that are exactly one Least Significant Bit (LSB) apart. Differential non-linearity is a measure of the worst-case deviation from the ideal 1 LSB step. For example, a DAC with a 1.5 LSB output change for a 1 LSB digital code change exhibits 1⁄2 LSB differential non-linearity. Differential non-linearity may be expressed in fractional bits or as a percentage of full scale. A differential non-linearity greater than 1 LSB may lead to a non-monotonic transfer function in a DAC. It is also known as a "missing code".
Differential linearity refers to a constant relation between the change in the output and input. For transducers, if a change in the input produces a uniform step change in the output, the transducer possesses differential linearity. Differential linearity is desirable and is inherent to a system such as a single-slope analog-to-digital converter used in nuclear instrumentation.
Formula.
formula_0
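As an illustration of the formula, the sketch below computes per-code DNL from a list of measured output levels. The 3-bit voltage data are made up, and the ideal LSB step is taken as the full-scale range divided by the number of steps.

```python
# DNL(i) = (V(i+1) - V(i)) / ideal_LSB - 1, evaluated for each adjacent code pair.
# The measured levels below are illustrative.

def dnl(levels):
    ideal_lsb = (levels[-1] - levels[0]) / (len(levels) - 1)
    return [(b - a) / ideal_lsb - 1 for a, b in zip(levels, levels[1:])]

measured = [0.00, 0.13, 0.24, 0.38, 0.50, 0.61, 0.76, 0.88]  # volts, 3-bit DAC
print([round(x, 2) for x in dnl(measured)])  # the worst-case entry is the quoted DNL
```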
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{DNL(i)} = {{V_\\text{out}(i+1) - V_\\text{out}(i)}\\over \\text{ideal LSB step width}} - 1 "
}
]
| https://en.wikipedia.org/wiki?curid=14065259 |
1406584 | CCR and CAR algebras | Canonical commutation or anticommutation relations
In mathematics and physics CCR algebras (after canonical commutation relations) and CAR algebras (after canonical anticommutation relations) arise from the quantum mechanical study of bosons and fermions, respectively. They play a prominent role in quantum statistical mechanics and quantum field theory.
CCR and CAR as *-algebras.
Let formula_0 be a real vector space equipped with a nonsingular real antisymmetric bilinear form formula_1 (i.e. a symplectic vector space). The unital *-algebra generated by elements of formula_0 subject to the relations
formula_2
formula_3
for any formula_4 in formula_0 is called the canonical commutation relations (CCR) algebra. The uniqueness of the representations of this algebra when formula_0 is finite dimensional is discussed in the Stone–von Neumann theorem.
If formula_0 is equipped with a nonsingular real symmetric bilinear form formula_1 instead, the unital *-algebra generated by the elements of formula_0 subject to the relations
formula_5
formula_6
for any formula_4 in formula_0 is called the canonical anticommutation relations (CAR) algebra.
The C*-algebra of CCR.
There is a distinct, but closely related meaning of CCR algebra, called the CCR C*-algebra. Let formula_7 be a real symplectic vector space with nonsingular symplectic form formula_1. In the theory of operator algebras, the CCR algebra over formula_7 is the unital C*-algebra generated by elements formula_8 subject to
formula_9
formula_10
These are called the Weyl form of the canonical commutation relations and, in particular, they imply that each formula_11 is unitary and formula_12. It is well known that the CCR algebra is a simple (unless the symplectic form is degenerate) non-separable algebra and is unique up to isomorphism.
When formula_7 is a complex Hilbert space and formula_1 is given by the imaginary part of the inner-product, the CCR algebra is faithfully represented on the symmetric Fock space over formula_7 by setting
formula_13
for any formula_14. The field operators formula_15 are defined for each formula_16 as the generator of the one-parameter unitary group formula_17 on the symmetric Fock space. These are self-adjoint unbounded operators, however they formally satisfy
formula_18
As the assignment formula_19 is real-linear, so the operators formula_15 define a CCR algebra over formula_20 in the sense of Section 1.
The C*-algebra of CAR.
Let formula_7 be a Hilbert space. In the theory of operator algebras the CAR algebra is the unique C*-completion of the complex unital *-algebra generated by elements formula_21 subject to the relations
formula_22
formula_23
formula_24
formula_25
for any formula_26, formula_27.
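These relations can be checked numerically in the simplest case dim H = 1, where the Fock space is C^2 and b acts as a 2×2 matrix; the sketch below is purely illustrative and is not part of the standard exposition.

```python
# One fermionic mode: b annihilates, b* creates, and the CAR relations reduce to
# b b* + b* b = 1 and b^2 = 0.
import numpy as np

b = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation operator b(f), |f| = 1
bs = b.conj().T                                  # creation operator b*(f) = b(f)^*

print(np.allclose(b @ bs + bs @ b, np.eye(2)))   # b(f)b*(f) + b*(f)b(f) = <f,f> = 1
print(np.allclose(b @ b + b @ b, 0))             # b(f)b(g) + b(g)b(f) = 0, here f = g
```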
When formula_7 is separable the CAR algebra is an AF algebra and in the special case formula_7 is infinite dimensional it is often written as formula_28.
Let formula_29 be the antisymmetric Fock space over formula_7 and let formula_30 be the orthogonal projection onto antisymmetric vectors:
formula_31
The CAR algebra is faithfully represented on formula_29 by setting
formula_32
for all formula_33 and formula_34. The fact that these form a C*-algebra follows from the fact that creation and annihilation operators on antisymmetric Fock space are bona fide bounded operators. Moreover, the field operators formula_35 satisfy
formula_36
giving the relationship with Section 1.
Superalgebra generalization.
Let formula_0 be a real formula_37-graded vector space equipped with a nonsingular antisymmetric bilinear superform formula_1 (i.e. formula_38) such that formula_39 is real if either formula_40 or formula_41 is an even element and imaginary if both of them are odd. The unital *-algebra generated by the elements of formula_0 subject to the relations
formula_42
formula_43
for any two pure elements formula_4 in formula_0 is the obvious superalgebra generalization which unifies CCRs with CARs: if all pure elements are even, one obtains a CCR, while if all pure elements are odd, one obtains a CAR.
In mathematics, the abstract structure of the CCR and CAR algebras, over any field, not just the complex numbers, is studied by the name of Weyl and Clifford algebras, where many significant results have accrued. One of these is that the graded generalizations of Weyl and Clifford algebras allow the basis-free formulation of the canonical commutation and anticommutation relations in terms of a symplectic and a symmetric non-degenerate bilinear form. In addition, the binary elements in this graded Weyl algebra give a basis-free version of the commutation relations of the symplectic and indefinite orthogonal Lie algebras. | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "(\\cdot,\\cdot)"
},
{
"math_id": 2,
"text": "fg-gf=i(f,g) \\, "
},
{
"math_id": 3,
"text": " f^*=f, \\, "
},
{
"math_id": 4,
"text": "f,~g"
},
{
"math_id": 5,
"text": "fg+gf=(f,g), \\,"
},
{
"math_id": 6,
"text": " f^*=f, \\,"
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": " \\{W(f):~f\\in H\\}"
},
{
"math_id": 9,
"text": " W(f)W(g)=e^{-i(f,g)}W(f+g), \\,"
},
{
"math_id": 10,
"text": " W(f)^*=W(-f). \\,"
},
{
"math_id": 11,
"text": "W(f)"
},
{
"math_id": 12,
"text": "W(0)=1"
},
{
"math_id": 13,
"text": " W(f)\\left(1,g,\\frac{g^{\\otimes 2}}{2!},\\frac{g^{\\otimes 3}}{3!},\\ldots\\right)= e^{-\\frac{1}{2}\\|f\\|^2-\\langle f,g\\rangle }\\left(1,f+g,\\frac{(f+g)^{\\otimes 2}}{2!}, \\frac{(f+g)^{\\otimes 3}}{3!}, \\ldots\\right), "
},
{
"math_id": 14,
"text": "f,g \\in H"
},
{
"math_id": 15,
"text": "B(f)"
},
{
"math_id": 16,
"text": " f\\in H"
},
{
"math_id": 17,
"text": "(W(tf))_{t\\in\\mathbb{R}}"
},
{
"math_id": 18,
"text": " B(f)B(g)-B(g)B(f) = 2i\\operatorname{Im}\\langle f,g\\rangle. "
},
{
"math_id": 19,
"text": "f\\mapsto B(f)"
},
{
"math_id": 20,
"text": "(H,2\\operatorname{Im}\\langle\\cdot,\\cdot\\rangle)"
},
{
"math_id": 21,
"text": "\\{b(f),b^*(f):~f\\in H\\}"
},
{
"math_id": 22,
"text": "b(f)b^*(g)+b^*(g)b(f)=\\langle f,g\\rangle, \\,"
},
{
"math_id": 23,
"text": "b(f)b(g)+b(g)b(f)=0, \\,"
},
{
"math_id": 24,
"text": "\\lambda b^*(f)=b^*(\\lambda f), \\,"
},
{
"math_id": 25,
"text": "b(f)^*=b^*(f), \\, "
},
{
"math_id": 26,
"text": "f,g\\in H"
},
{
"math_id": 27,
"text": "\\lambda\\in\\mathbb{C}"
},
{
"math_id": 28,
"text": "{M_{2^\\infty}(\\mathbb{C})}"
},
{
"math_id": 29,
"text": "F_a(H)"
},
{
"math_id": 30,
"text": "P_a"
},
{
"math_id": 31,
"text": "P_a: \\bigoplus_{n=0}^\\infty H^{\\otimes n} \\to F_a(H). \\, "
},
{
"math_id": 32,
"text": " b^*(f)P_a(g_1\\otimes g_2\\otimes\\cdots\\otimes g_n)=P_a(f\\otimes g_1\\otimes g_2\\otimes\\cdots\\otimes g_n) \\, "
},
{
"math_id": 33,
"text": " f,g_1,\\ldots,g_n\\in H"
},
{
"math_id": 34,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 35,
"text": "B(f):=b^*(f)+b(f)"
},
{
"math_id": 36,
"text": " B(f)B(g)+B(g)B(f)=2\\mathrm{Re}\\langle f,g\\rangle, \\, "
},
{
"math_id": 37,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 38,
"text": "(g,f)=-(-1)^{|f||g|}(f,g) "
},
{
"math_id": 39,
"text": "(f,g)"
},
{
"math_id": 40,
"text": "f"
},
{
"math_id": 41,
"text": "g"
},
{
"math_id": 42,
"text": "fg-(-1)^{|f||g|}gf=i(f,g) \\,"
},
{
"math_id": 43,
"text": "f^*=f,~g^*=g\\,"
}
]
| https://en.wikipedia.org/wiki?curid=1406584 |
14066275 | Inversion temperature | The inversion temperature in thermodynamics and cryogenics is the critical temperature below which a non-ideal gas (all gases in reality) that is expanding at constant enthalpy will experience a temperature decrease, and above which will experience a temperature increase. This temperature change is known as the Joule–Thomson effect, and is exploited in the liquefaction of gases. Inversion temperature depends on the nature of the gas.
For a van der Waals gas we can calculate the enthalpy formula_0 using statistical mechanics as
formula_1
where formula_2 is the number of molecules, formula_3 is volume, formula_4 is temperature (in the Kelvin scale), formula_5 is the Boltzmann constant, and formula_6 and formula_7 are constants depending on intermolecular forces and molecular volume, respectively.
From this equation, we note that if we keep enthalpy constant and increase volume, temperature must change depending on the sign of formula_8. Therefore, our inversion temperature is given where the sign flips at zero, or
formula_9,
where formula_10 is the critical temperature of the substance. So for formula_11, an expansion at constant enthalpy increases temperature as the work done by the repulsive interactions of the gas is dominant, and so the change in kinetic energy is positive. But for formula_12, expansion causes temperature to decrease because the work of attractive intermolecular forces dominates, giving a negative change in average molecular speed, and therefore kinetic energy.
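For a concrete check, the sketch below evaluates the molar form of the same expression, T_inv = 2a/(bR), using approximate textbook van der Waals constants for nitrogen. The numbers are illustrative, and the van der Waals model is known to overestimate the measured inversion temperature of nitrogen (roughly 620 K).

```python
# Inversion temperature from van der Waals constants (molar form T_inv = 2a/(bR)).
# Constants for N2 are approximate textbook values.

R = 8.314        # J/(mol*K)
a = 0.1370       # Pa*m^6/mol^2
b = 3.87e-5      # m^3/mol

T_inv = 2 * a / (b * R)
print(f"van der Waals inversion temperature of N2: {T_inv:.0f} K")  # ~850 K
```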
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "H = \\frac{5}{2} N k_\\mathrm B T + \\frac{N^2}{V} (b k_\\mathrm B T - 2a)"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "k_\\mathrm B"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": "b k_\\mathrm B T - 2a"
},
{
"math_id": 9,
"text": " T_\\text{inv} = \\frac{2a}{b k_\\mathrm B} = \\frac{27}{4} T_\\mathrm c "
},
{
"math_id": 10,
"text": "T_\\mathrm c"
},
{
"math_id": 11,
"text": "T > T_\\text{inv}"
},
{
"math_id": 12,
"text": "T < T_\\text{inv}"
}
]
| https://en.wikipedia.org/wiki?curid=14066275 |
1406812 | Energy harvesting | Collecting energy from external sources
Energy harvesting (EH) – also known as power harvesting, energy scavenging, or ambient power – is the process by which energy is derived from external sources (e.g., solar power, thermal energy, wind energy, salinity gradients, and kinetic energy, also known as ambient energy), then stored for use by small, wireless autonomous devices, like those used in wearable electronics, condition monitoring, and wireless sensor networks.
Energy harvesters usually provide a very small amount of power for low-energy electronics. While the input fuel to some large-scale energy generation costs resources (oil, coal, etc.), the energy source for energy harvesters is present as ambient background. For example, temperature gradients exist from the operation of a combustion engine and in urban areas, there is a large amount of electromagnetic energy in the environment due to radio and television broadcasting.
One of the first examples of ambient energy being used to produce electricity was the successful use of electromagnetic radiation (EMR) to power the crystal radio.
The principles of energy harvesting from ambient EMR can be demonstrated with basic components.
Operation.
Energy harvesting devices converting ambient energy into electrical energy have attracted much interest in both the military and commercial sectors. Some systems convert motion, such as that of ocean waves, into electricity to be used by oceanographic monitoring sensors for autonomous operation. Future applications may include high-power output devices (or arrays of such devices) deployed at remote locations to serve as reliable power stations for large systems. Another application is in wearable electronics, where energy-harvesting devices can power or recharge cell phones, mobile computers, and radio communication equipment. All of these devices must be sufficiently robust to endure long-term exposure to hostile environments and have a broad range of dynamic sensitivity to exploit the entire spectrum of wave motions. In addition, one of the latest techniques to generate electric power from vibration waves is the utilization of Auxetic Boosters. This method falls under the category of piezoelectric-based vibration energy harvesting (PVEH), where the harvested electric energy can be directly used to power wireless sensors, monitoring cameras, and other Internet of Things (IoT) devices.
Accumulating energy.
Energy can also be harvested to power small autonomous sensors such as those developed using MEMS technology. These systems are often very small and require little power, but their applications are limited by the reliance on battery power. Scavenging energy from ambient vibrations, wind, heat, or light could enable smart sensors to function indefinitely.
Typical power densities available from energy harvesting devices are highly dependent upon the specific application (which affects the generator's size) and the design of the harvesting generator itself. In general, for motion-powered devices, typical values are a few μW/cm3 for human body-powered applications and hundreds of μW/cm3 for generators powered by machinery. Most energy-scavenging devices for wearable electronics generate very little power.
Storage of power.
In general, energy can be stored in a capacitor, supercapacitor, or battery. Capacitors are used when the application needs to provide large energy spikes. Batteries leak less energy and are therefore used when the device needs to provide a steady flow of energy. These aspects of the battery depend on the type that is used. A common type of battery that is used for this purpose is the lead acid or lithium-ion battery, although older types such as nickel metal hydride are still widely used today. Compared to batteries, supercapacitors have virtually unlimited charge-discharge cycles and can therefore operate forever, enabling maintenance-free operation in IoT and wireless sensor devices.
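A back-of-the-envelope sizing calculation illustrates the trade-off. The sketch below uses the standard capacitor energy relation E = ½CV²; the harvester output, node consumption, and capacitor value are all hypothetical.

```python
# Supercapacitor sizing sketch: usable energy between two voltage limits,
# runtime with no input, and recharge time from the harvester. Numbers are made up.

C = 0.5                       # F, supercapacitor
v_max, v_min = 5.0, 3.0       # usable voltage window, V

usable = 0.5 * C * (v_max**2 - v_min**2)   # 4.0 J between 5 V and 3 V
node_draw = 100e-6                          # W, average sensor-node consumption
harvest = 250e-6                            # W, average harvester output

print(f"usable energy: {usable:.1f} J")
print(f"runtime without input: {usable / node_draw / 3600:.1f} h")       # ~11 h
print(f"recharge time: {usable / (harvest - node_draw) / 3600:.1f} h")   # ~7 h
```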
Use of the power.
Current interest in low-power energy harvesting is for independent sensor networks. In these applications, an energy harvesting scheme stores power in a capacitor, then boosts/regulates it to a second storage capacitor or battery for use by the microprocessor or for data transmission. The power is usually used in a sensor application and the data is stored or transmitted, possibly through a wireless method.
Motivation.
One of the main driving forces behind the search for new energy harvesting devices is the desire to power sensor networks and mobile devices without batteries that need external charging or service. Batteries have several limitations, such as limited lifespan, environmental impact, size, weight, and cost. Energy harvesting devices can provide an alternative or complementary source of power for applications that require low power consumption, such as remote sensing, wearable electronics, condition monitoring, and wireless sensor networks. Energy harvesting devices can also extend the battery life or enable batteryless operation of some applications.
Another motivation for energy harvesting is the potential to address the issue of climate change by reducing greenhouse gas emissions and fossil fuel consumption. Energy harvesting devices can utilize renewable and clean sources of energy that are abundant and ubiquitous in the environment, such as solar, thermal, wind, and kinetic energy. Energy harvesting devices can also reduce the need for power transmission and distribution systems that cause energy losses and environmental impacts. Energy harvesting devices can therefore contribute to the development of a more sustainable and resilient energy system.
Recent research in energy harvesting has led to the innovation of devices capable of powering themselves through user interactions. Notable examples include battery-free game boys and other toys, which showcase the potential of devices powered by the energy generated from user actions, such as pressing buttons or turning knobs. These studies highlight how energy harvested from interactions can not only power the devices themselves but also extend their operational autonomy, promoting the use of renewable energy sources and reducing reliance on traditional batteries.
Energy sources.
There are many small-scale energy sources that generally cannot be scaled up to industrial size in terms of comparable output to industrial size solar, wind or wave power:
Ambient-radiation sources.
A possible source of energy comes from ubiquitous radio transmitters. Historically, either a large collection area or close proximity to the radiating wireless energy source is needed to get useful power levels from this source. The nantenna is one proposed development which would overcome this limitation by making use of the abundant natural radiation (such as solar radiation).
One idea is to deliberately broadcast RF energy to power and collect information from remote devices. This is now commonplace in passive radio-frequency identification (RFID) systems, but for safety reasons the US Federal Communications Commission (and equivalent bodies worldwide) limits the maximum power that can be transmitted this way for civilian use. This method has been used to power individual nodes in a wireless sensor network.
Fluid flow.
Various turbine and non-turbine generator technologies can harvest airflow. Towered wind turbines and airborne wind energy systems (AWES) harness the flow of air. Multiple companies are developing these technologies, which can operate in low-light environments, such as HVAC ducts, and can be scaled and optimized for the energy requirements of specific applications.
The flow of blood can also be utilized to power devices. For example, a pacemaker developed at the University of Bern uses blood flow to wind up a spring, which then drives an electrical micro-generator.
Water energy harvesting has seen advancements in design, such as generators with transistor-like architecture, achieving high energy conversion efficiency and power density.
Photovoltaic.
Photovoltaic (PV) energy harvesting wireless technology offers significant advantages over wired or solely battery-powered sensor solutions: virtually inexhaustible sources of power with little or no adverse environmental effects. Indoor PV harvesting solutions have to date been powered by specially tuned amorphous silicon (aSi), a technology most used in solar calculators. In recent years new PV technologies have come to the forefront in energy harvesting, such as dye-sensitized solar cells (DSSC). The dyes absorb light much like chlorophyll does in plants. Electrons released on impact escape to the layer of TiO2 and from there diffuse through the electrolyte; because the dye can be tuned to the visible spectrum, much higher power can be produced. At 200 lux a DSSC can provide over 10 μW per cm2.
Piezoelectric.
The piezoelectric effect converts mechanical strain into electric current or voltage. This strain can come from many different sources. Human motion, low-frequency seismic vibrations, and acoustic noise are everyday examples. Except in rare instances, the piezoelectric effect operates in AC, requiring time-varying inputs at mechanical resonance to be efficient.
Most piezoelectric electricity sources produce power on the order of milliwatts, too small for system application, but enough for hand-held devices such as some commercially available self-winding wristwatches. One proposal is that they are used for micro-scale devices, such as in a device harvesting micro-hydraulic energy. In this device, the flow of pressurized hydraulic fluid drives a reciprocating piston supported by three piezoelectric elements which convert the pressure fluctuations into an alternating current.
As piezo energy harvesting has been investigated only since the late 1990s, it remains an emerging technology. Nevertheless, some interesting improvements were made with the self-powered electronic switch at INSA school of engineering, implemented by the spin-off Arveni. In 2006, the proof of concept of a battery-less wireless doorbell push button was created, and recently, a product showed that a classical wireless wall switch can be powered by a piezo harvester. Other industrial applications appeared between 2000 and 2005, for example to harvest energy from vibration and supply sensors, or to harvest energy from shock.
Piezoelectric systems can convert motion from the human body into electrical power. DARPA has funded efforts to harness energy from leg and arm motion, shoe impacts, and blood pressure for low level power to implantable or wearable sensors. The nanobrushes are another example of a piezoelectric energy harvester. They can be integrated into clothing. Multiple other nanostructures have been exploited to build an energy-harvesting device, for example, a single crystal PMN-PT nanobelt was fabricated and assembled into a piezoelectric energy harvester in 2016. Careful design is needed to minimise user discomfort, since these energy harvesting sources, by their attachment, affect the body. The Vibration Energy Scavenging Project is another project that is set up to try to scavenge electrical energy from environmental vibrations and movements. A microbelt can be used to gather electricity from respiration. In addition, since human motion produces vibration in three directions, a single piezoelectric cantilever-based omni-directional energy harvester has been created by using 1:2 internal resonance. Finally, a millimeter-scale piezoelectric energy harvester has also already been created.
Piezo elements are being embedded in walkways to recover the "people energy" of footsteps. They can also be embedded in shoes to recover "walking energy". Researchers at MIT developed the first micro-scale piezoelectric energy harvester using thin film PZT in 2005. Arman Hajati and Sang-Gook Kim invented the Ultra Wide-Bandwidth micro-scale piezoelectric energy harvesting device by exploiting the nonlinear stiffness of a doubly clamped microelectromechanical systems (MEMS) resonator. The stretching strain in a doubly clamped beam shows a nonlinear stiffness, which provides a passive feedback and results in amplitude-stiffened Duffing mode resonance. Typically, piezoelectric cantilevers are adopted for the above-mentioned energy harvesting system. One drawback is that the piezoelectric cantilever has a gradient strain distribution, i.e., the piezoelectric transducer is not fully utilized. To address this issue, triangle-shaped and L-shaped cantilevers have been proposed for uniform strain distribution.
In 2018, Soochow University researchers reported hybridizing a triboelectric nanogenerator and a silicon solar cell by sharing a mutual electrode. This device can collect solar energy "or" convert the mechanical energy of falling raindrops into electricity.
UK telecom company Orange UK created an energy harvesting T-shirt and boots. Other companies have also done the same.
Energy from smart roads and piezoelectricity.
Brothers Pierre Curie and Jacques Curie introduced the concept of the piezoelectric effect in 1880. The piezoelectric effect converts mechanical strain into voltage or electric current and generates electric energy from motion, weight, vibration, and temperature changes.
Considering the piezoelectric effect in thin-film lead zirconate titanate formula_0 (PZT), a microelectromechanical systems (MEMS) power-generating device has been developed. During recent improvements in piezoelectric technology, Aqsa Abbasi differentiated two modes, called formula_1 and formula_2, in vibration converters and re-designed them to resonate at specific frequencies from an external vibration energy source, thereby creating electrical energy via the piezoelectric effect using an electromechanically damped mass.
However, beam-structured electrostatic devices are more difficult to fabricate than comparable PZT MEMS devices, because general silicon processing involves many more mask steps that PZT film does not require. Piezoelectric formula_1-type sensors and actuators have a cantilever beam structure that consists of a membrane, bottom electrode, piezoelectric film, and top electrode; more than (3~5) mask steps are required for patterning each layer, while the induced voltage is very low. Pyroelectric crystals have a unique polar axis along which spontaneous polarization exists. These are the crystals of classes "6mm", "4mm", "mm2", "6", "4", "3m", "3", "2", "m". The special polar axis (crystallophysical axis "X3") coincides with the axes "L6", "L4", "L3", and "L2" of the crystals or lies in the unique straight plane "P (class "m")". Consequently, the electric centers of positive and negative charges are displaced from their equilibrium positions within an elementary cell, i.e., the spontaneous polarization of the crystal changes. Therefore, all considered crystals have spontaneous polarization formula_3. The piezoelectric effect in pyroelectric crystals arises as a result of changes in their spontaneous polarization under external effects (electric fields, mechanical stresses). As a result of the displacement, the components formula_4 change along all three axes formula_5. Suppose that formula_5 is proportional to the mechanical stresses causing it; to a first approximation this gives formula_6, where "Tkl" represents the mechanical stress and "dikl" represents the piezoelectric modules.
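The linear relation above can be evaluated directly by contracting a rank-3 piezoelectric tensor with a stress tensor. In the sketch below the tensor entries and the applied stress are illustrative, order-of-magnitude values in the style of PZT, not measured data.

```python
# Delta P_i = d_ikl * T_kl: polarization change from stress via the piezoelectric
# moduli. Entries are illustrative PZT-like magnitudes.
import numpy as np

d = np.zeros((3, 3, 3))
d[2, 2, 2] = 400e-12                    # d_333 ("d33"-like entry), C/N
d[2, 0, 0] = d[2, 1, 1] = -170e-12      # d_311, d_322 ("d31"-like entries), C/N

T = np.zeros((3, 3))
T[2, 2] = 1e6                           # 1 MPa uniaxial stress along the polar axis

delta_P = np.einsum('ikl,kl->i', d, T)  # C/m^2
print(delta_P)                          # [0, 0, 4e-4]: change along the polar axis
```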
PZT thin films have attracted attention for applications such as force sensors, accelerometers, gyroscopes, actuators, tunable optics, micro pumps, ferroelectric RAM, display systems, and smart roads. When energy sources are limited, energy harvesting plays an important role in the environment. Smart roads have the potential to play an important role in power generation. Embedding piezoelectric material in the road can convert pressure exerted by moving vehicles into voltage and current.
Smart transportation intelligent system.
Piezoelectric sensors are most useful in smart-road technologies that can be used to create systems that are intelligent and improve productivity in the long run. Imagine highways that alert motorists of a traffic jam before it forms, bridges that report when they are at risk of collapse, or an electric grid that fixes itself when blackouts hit. For many decades, scientists and experts have argued that the best way to fight congestion is intelligent transportation systems, such as roadside sensors to measure traffic and synchronized traffic lights to control the flow of vehicles. But the spread of these technologies has been limited by cost. There are also some other smart-technology shovel-ready projects which could be deployed fairly quickly, but most of the technologies are still at the development stage and might not be practically available for five years or more.
Pyroelectric.
The pyroelectric effect converts a temperature change into electric current or voltage. It is analogous to the piezoelectric effect, which is another type of ferroelectric behavior. Pyroelectricity requires time-varying inputs and suffers from small power outputs in energy harvesting applications due to its low operating frequencies. However, one key advantage of pyroelectrics over thermoelectrics is that many pyroelectric materials are stable up to 1200 °C or higher, enabling energy harvesting from high temperature sources and thus increasing thermodynamic efficiency.
One way to directly convert waste heat into electricity is by executing the Olsen cycle on pyroelectric materials. The Olsen cycle consists of two isothermal and two isoelectric field processes in the electric displacement-electric field (D-E) diagram. The principle of the Olsen cycle is to charge a capacitor via cooling under low electric field and to discharge it under heating at higher electric field. Several pyroelectric converters have been developed to implement the Olsen cycle using conduction, convection, or radiation. It has also been established theoretically that pyroelectric conversion based on heat regeneration using an oscillating working fluid and the Olsen cycle can reach Carnot efficiency between a hot and a cold thermal reservoir. Moreover, recent studies have established polyvinylidene fluoride trifluoroethylene [P(VDF-TrFE)] polymers and lead lanthanum zirconate titanate (PLZT) ceramics as promising pyroelectric materials to use in energy converters due to their large energy densities generated at low temperatures. Additionally, a pyroelectric scavenging device that does not require time-varying inputs was recently introduced. The energy-harvesting device uses the edge-depolarizing electric field of a heated pyroelectric to convert heat energy into mechanical energy instead of drawing electric current off two plates attached to the crystal-faces.
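An order-of-magnitude estimate shows why time-varying inputs matter: a pyroelectric element sources a current proportional to the rate of temperature change, i = pA(dT/dt). The coefficient and geometry in the sketch below are illustrative, PZT-like values.

```python
# Pyroelectric short-circuit current i = p * A * dT/dt; the output vanishes when
# the temperature is steady. All values are illustrative.

p = 300e-6      # C/(m^2*K), pyroelectric coefficient (PZT-like, approximate)
A = 1e-4        # m^2, electrode area (1 cm^2)
dT_dt = 2.0     # K/s, rate of temperature cycling

i = p * A * dT_dt
print(f"pyroelectric current: {i*1e9:.0f} nA")   # 60 nA at this cycling rate
```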
Thermoelectrics.
In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two dissimilar conductors produces a voltage. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. The heat absorbed or produced is proportional to the current, and the proportionality constant is known as the Peltier coefficient. Today, due to knowledge of the Seebeck and Peltier effects, thermoelectric materials can be used as heaters, coolers and generators (TEGs).
Ideal thermoelectric materials have a high Seebeck coefficient, high electrical conductivity, and low thermal conductivity. Low thermal conductivity is necessary to maintain a high thermal gradient at the junction. Standard thermoelectric modules manufactured today consist of P- and N-doped bismuth-telluride semiconductors sandwiched between two metallized ceramic plates. The ceramic plates add rigidity and electrical insulation to the system. The semiconductors are connected electrically in series and thermally in parallel.
Miniature thermocouples have been developed that convert body heat into electricity and generate 40 μW at 3 V with a 5-degree temperature gradient, while on the other end of the scale, large thermocouples are used in nuclear RTG batteries.
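The scale of such outputs can be estimated from the Seebeck relation V = N·S·ΔT for N couples in series, with matched-load power P = V²/(4R). The parameter values below are hypothetical, chosen to be in the range of small bismuth-telluride modules.

```python
# Thermoelectric generator estimate: open-circuit voltage and matched-load power.
# All parameters are hypothetical.

N = 100          # thermocouple pairs in series
S = 400e-6       # V/K, effective Seebeck coefficient per couple
dT = 5.0         # K, temperature difference across the module
R = 5.0          # ohm, internal electrical resistance

V_oc = N * S * dT               # 0.2 V open circuit
P_load = V_oc**2 / (4 * R)      # 2 mW into a matched load
print(f"{V_oc*1e3:.0f} mV open circuit, {P_load*1e3:.1f} mW matched load")
```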
Practical examples are the finger-heartratemeter by the Holst Centre and the thermogenerators by the Fraunhofer-Gesellschaft.
Advantages to thermoelectrics:
One downside to thermoelectric energy conversion is low efficiency (currently less than 10%). The development of materials that are able to operate in higher temperature gradients, and that can conduct electricity well without also conducting heat (something that was until recently thought impossible), will result in increased efficiency.
Future work in thermoelectrics could be to convert wasted heat, such as in automobile engine combustion, into electricity.
Electrostatic (capacitive).
This type of harvesting is based on the changing capacitance of vibration-dependent capacitors. Vibrations separate the plates of a charged variable capacitor, and mechanical energy is converted into electrical energy.
Electrostatic energy harvesters need a polarization source to work and to convert mechanical energy from vibrations into electricity. The polarization source should be in the order of some hundreds of volts; this greatly complicates the power management circuit. Another solution consists in using electrets, that are electrically charged dielectrics able to keep the polarization on the capacitor for years.
It is possible to adapt structures from classical electrostatic induction generators, which also extract energy from variable capacitances, for this purpose. The resulting devices are self-biasing, and can directly charge batteries, or can produce exponentially growing voltages on storage capacitors, from which energy can be periodically extracted by DC/DC converters.
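The energy gained per vibration cycle can be estimated for the charge-constrained case: charge is injected at maximum capacitance, the plates are pulled apart at fixed charge, and the stored energy rises by ½Q²(1/C_min − 1/C_max). The capacitances, bias voltage, and frequency below are hypothetical.

```python
# Charge-constrained electrostatic harvesting cycle; all values are hypothetical.

C_max = 200e-12   # F, plates close together
C_min = 20e-12    # F, plates pulled apart by vibration
V_bias = 10.0     # V, polarization source (electret or external bias)
f = 50.0          # Hz, vibration frequency

Q = C_max * V_bias                               # charge injected at C_max
E_cycle = 0.5 * Q**2 * (1 / C_min - 1 / C_max)   # mechanical work converted per cycle

print(f"{E_cycle*1e9:.0f} nJ per cycle, about {E_cycle*f*1e6:.1f} uW at {f:.0f} Hz")
```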
Magnetic induction.
Magnetic induction refers to the production of an electromotive force (i.e., voltage) in a changing magnetic field. This changing magnetic field can be created by motion, either rotation (i.e. Wiegand effect and Wiegand sensors) or linear movement (i.e. vibration).
Magnets wobbling on a cantilever are sensitive to even small vibrations and generate microcurrents by moving relative to conductors due to Faraday's law of induction. By developing a miniature device of this kind in 2007, a team from the University of Southampton made possible the planting of such a device in environments that preclude having any electrical connection to the outside world. Sensors in inaccessible places can now generate their own power and transmit data to outside receivers.
One of the major limitations of the magnetic vibration energy harvester developed at the University of Southampton is the size of the generator, in this case approximately one cubic centimeter, which is much too large to integrate into today's mobile technologies. The complete generator including circuitry is a massive 4 cm by 4 cm by 1 cm, nearly the same size as some mobile devices such as the iPod nano. Further reductions in the dimensions are possible through the integration of new and more flexible materials as the cantilever beam component. In 2012, a group at Northwestern University developed a vibration-powered generator out of polymer in the form of a spring. This device was able to target the same frequencies as the University of Southampton group's silicon-based device but with one third the size of the beam component.
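A crude Faraday's-law estimate gives a feel for such generators: for sinusoidal motion of amplitude X at frequency f, the peak velocity is 2πfX and the peak emf is roughly N·B·l·v. Every number in the sketch below is hypothetical.

```python
# Peak emf of a vibrating magnet/coil harvester, emf ~ N * B * l * v.
# All parameter values are made up for illustration.
import math

N = 1000         # coil turns
B = 0.3          # T, flux density seen by the windings (assumed uniform)
l = 0.01         # m, effective conductor length per turn in the field
f = 60.0         # Hz, vibration frequency
X = 0.5e-3       # m, vibration amplitude

v_peak = 2 * math.pi * f * X          # ~0.19 m/s
emf_peak = N * B * l * v_peak
print(f"peak emf: {emf_peak*1e3:.0f} mV")   # roughly 570 mV for these values
```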
A new approach to magnetic induction based energy harvesting has also been proposed by using ferrofluids. The journal article, "Electromagnetic ferrofluid-based energy harvester", discusses the use of ferrofluids to harvest low frequency vibrational energy at 2.2 Hz with a power output of ~80 mW per g.
Quite recently, the change in domain wall pattern with the application of stress has been proposed as a method to harvest energy using magnetic induction. In this study, the authors have shown that the applied stress can change the domain pattern in microwires. Ambient vibrations can cause stress in microwires, which can induce a change in domain pattern and hence change the induction. Power of the order of μW/cm2 has been reported.
Commercially successful vibration energy harvesters based on magnetic induction are still relatively few in number. Examples include products developed by Swedish company ReVibe Energy, a technology spin-out from Saab Group. Another example is the products developed from the early University of Southampton prototypes by Perpetuum. These have to be sufficiently large to generate the power required by wireless sensor nodes (WSN) but in M2M applications this is not normally an issue. These harvesters are now being supplied in large volumes to power WSNs made by companies such as GE and Emerson and also for train bearing monitoring systems made by Perpetuum.
Overhead powerline sensors can use magnetic induction to harvest energy directly from the conductor they are monitoring.
Blood sugar.
Another way of energy harvesting is through the oxidation of blood sugars. These energy harvesters are called biobatteries. They could be used to power implanted electronic devices (e.g., pacemakers, implanted biosensors for diabetics, implanted active RFID devices, etc.). The Minteer Group of Saint Louis University has created enzymes that could be used to generate power from blood sugars. However, the enzymes would still need to be replaced after a few years. In 2012, a pacemaker was powered by implantable biofuel cells at Clarkson University under the leadership of Dr. Evgeny Katz.
Tree-based.
Tree metabolic energy harvesting is a type of bio-energy harvesting. Voltree has developed a method for harvesting energy from trees. These energy harvesters are being used to power remote sensors and mesh networks as the basis for a long term deployment system to monitor forest fires and weather in the forest. According to Voltree's website, the useful life of such a device should be limited only by the lifetime of the tree to which it is attached. A small test network was recently deployed in a US National Park forest.
Other sources of energy from trees include capturing the physical movement of the tree in a generator. Theoretical analysis of this source of energy shows some promise in powering small electronic devices. A practical device based on this theory has been built and successfully powered a sensor node for a year.
Metamaterial.
A metamaterial-based device wirelessly converts a 900 MHz microwave signal to 7.3 volts of direct current (greater than that of a USB device). The device can be tuned to harvest other signals including Wi-Fi signals, satellite signals, or even sound signals. The experimental device used a series of five fiberglass and copper conductors. Conversion efficiency reached 37 percent. When traditional antennas are close to each other in space they interfere with each other. But since received RF power falls off with the square of the distance, the amount of power available is very small. While the claim of 7.3 volts is grand, the measurement is for an open circuit. Since the power is so low, there can be almost no current when any load is attached.
Atmospheric pressure changes.
The pressure of the atmosphere changes naturally over time from temperature changes and weather patterns. Devices with a sealed chamber can use these pressure differences to extract energy. This has been used to provide power for mechanical clocks such as the Atmos clock.
Ocean energy.
A relatively new concept of generating energy is to generate energy from oceans. Large masses of water are present on the planet, which carry with them great amounts of energy. The energy in this case can be generated by tidal streams, ocean waves, differences in salinity and also differences in temperature. As of 2018, efforts are underway to harvest energy this way. The United States Navy was recently able to generate electricity using differences in temperature present in the ocean.
One method to use the temperature difference across different levels of the thermocline in the ocean is by using a thermal energy harvester that is equipped with a material that changes phase in different temperature regions. This is typically a polymer-based material that can handle reversible heat treatments. When the material is changing phase, the energy differential is converted into mechanical energy. The materials used will need to be able to alter phases, from liquid to solid, depending on the position of the thermocline underwater. These phase change materials within thermal energy harvesting units would be an ideal way to recharge or power an unmanned underwater vehicle (UUV), since such a vehicle relies on the warm and cold water already present in large bodies of water, minimizing the need for standard battery recharging. Capturing this energy would allow for longer-term missions, since the need to collect the vehicle or have it return for charging can be eliminated. This is also a very environmentally friendly method of powering underwater vehicles. There are no emissions that come from utilizing a phase change fluid, and it will likely have a longer lifespan than that of a standard battery.
Future directions.
Electroactive polymers (EAPs) have been proposed for harvesting energy. These polymers have a large strain, elastic energy density, and high energy conversion efficiency. The total weight of systems based on EAPs (electroactive polymers) is proposed to be significantly lower than those based on piezoelectric materials.
Nanogenerators, such as the one made by Georgia Tech, could provide a new way for powering devices without batteries. As of 2008, such a device generated only a few dozen nanowatts, which is too low for any practical application.
Noise is the subject of a proposal by NiPS Laboratory in Italy to harvest wide-spectrum, low-amplitude vibrations via a nonlinear dynamical mechanism that can improve harvester efficiency by up to a factor of 4 compared to traditional linear harvesters.
Combinations of different types of energy harvesters can further reduce dependence on batteries, particularly in environments where the available ambient energy types change periodically. This type of complementary balanced energy harvesting has the potential to increase the reliability of wireless sensor systems for structural health monitoring.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " Pb(Zr,Ti)O_3 "
},
{
"math_id": 1,
"text": "d_{31}"
},
{
"math_id": 2,
"text": "d_{33}"
},
{
"math_id": 3,
"text": "Ps = P3"
},
{
"math_id": 4,
"text": "\\Delta P_s"
},
{
"math_id": 5,
"text": "\\Delta P_s = (\\Delta P_1, \\Delta P_2, \\Delta P_3) "
},
{
"math_id": 6,
"text": " \\Delta P_i = dikl Tkl "
}
]
| https://en.wikipedia.org/wiki?curid=1406812 |
14069478 | Classification of manifolds | In mathematics, specifically geometry and topology, the classification of manifolds is a basic question, about which much is known, and many open questions remain.
"Low dimensions" means dimensions up to 4; "high dimensions" means 5 or more dimensions. The case of dimension 4 is somehow a boundary case, as it manifests "low dimensional" behaviour smoothly (but not topologically); see discussion of "low" versus "high" dimension.
Main themes.
Different categories and additional structure.
Formally, classifying manifolds is classifying objects up to isomorphism.
There are many different notions of "manifold", and corresponding notions of
"map between manifolds", each of which yields a different category and a different classification question.
These categories are related by forgetful functors: for instance, a differentiable manifold is also a topological manifold, and a differentiable map is also continuous, so there is a functor formula_0.
These functors are in general neither one-to-one nor onto on objects; these failures are generally referred to in terms of "structure", as follows. A topological manifold that is in the image of formula_0 is said to "admit a differentiable structure", and the fiber over a given topological manifold is "the different differentiable structures on the given topological manifold".
Thus, given two categories, the two natural questions are: which manifolds of a given type admit an additional structure, and if a manifold admits an additional structure, how many does it admit?
More precisely, what is the structure of the set of additional structures?
In more general categories, this "structure set" has more structure: in Diff it is simply a set, but in Top it is a group, and functorially so.
Many of these structures are G-structures, and the question is reduction of the structure group. The most familiar example is orientability: some manifolds are orientable, some are not, and orientable manifolds admit 2 orientations.
Enumeration versus invariants.
There are two usual ways to give a classification: explicitly, by an enumeration, or implicitly, in terms of invariants.
For instance, for orientable surfaces,
the classification of surfaces enumerates them as the connected sum of formula_1 tori, and an invariant that classifies them is the genus or Euler characteristic.
Manifolds have a rich set of invariants, including:
Modern algebraic topology (beyond cobordism theory), such as
extraordinary (co)homology, is little used
in the classification of manifolds, because these invariants are homotopy-invariant, and hence don't help with the finer classifications above homotopy type.
Cobordism groups (the bordism groups of a point) are computed, but the bordism groups of a space (such as formula_2) are generally not.
Point-set.
The point-set classification is basic—one generally fixes point-set assumptions and then studies that class of manifold.
The most frequently classified class of manifolds is closed, connected manifolds.
Being homogeneous (away from any boundary), manifolds have no local point-set invariants, other than their dimension and boundary versus interior, and the most used global point-set properties are compactness and connectedness. Conventional names for combinations of these are:
For instance, formula_3 is a compact manifold, formula_4 is a closed manifold, and formula_5 is an open manifold, while formula_6 is none of these.
Computability.
The Euler characteristic is a homological invariant, and thus can be effectively computed given a CW structure, so 2-manifolds are classified homologically.
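As a minimal illustration of this effective computability (the torus cell counts below are a standard example, not taken from the text), the Euler characteristic of a finite CW complex is the alternating sum of its cell counts, and for a closed orientable surface it determines the genus via χ = 2 − 2"g":

```python
def euler_characteristic(cells):
    """Euler characteristic of a finite CW complex: alternating sum of cell counts.
    cells[k] is the number of k-cells."""
    return sum((-1) ** k * n for k, n in enumerate(cells))

def genus_of_orientable_surface(chi):
    """For a closed orientable surface, chi = 2 - 2g, so g = (2 - chi) / 2."""
    return (2 - chi) // 2

# Standard CW structure on the torus: one 0-cell, two 1-cells, one 2-cell.
chi = euler_characteristic([1, 2, 1])
print(chi, genus_of_orientable_surface(chi))  # 0 1
```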
Characteristic classes and characteristic numbers are the corresponding generalized homological invariants, but they do not classify manifolds in higher dimension (they are not a complete set of invariants): for instance, orientable 3-manifolds are parallelizable (Steenrod's theorem in low-dimensional topology), so all characteristic classes vanish. In higher dimensions, characteristic classes do not in general vanish, and provide useful but not complete data.
Manifolds in dimension 4 and above cannot be effectively classified: given two "n"-manifolds (formula_7) presented as CW complexes or handlebodies, there is no algorithm for determining if they are isomorphic (homeomorphic, diffeomorphic). This is due to the unsolvability of the word problem for groups, or more precisely, the triviality problem (given a finite presentation for a group, is it the trivial group?). Any finite presentation of a group can be realized as a 2-complex, and can be realized as the 2-skeleton of a 4-manifold (or higher). Thus one cannot even compute the fundamental group of a given high-dimensional manifold, much less a classification.
This ineffectiveness is a fundamental reason why surgery theory does not classify manifolds up to homeomorphism. Instead, for any fixed manifold "M" it classifies pairs formula_8 with "N" a manifold and formula_9 a "homotopy equivalence", two such pairs, formula_8 and formula_10, being regarded as equivalent if there exist a homeomorphism formula_11 and a homotopy formula_12.
Positive curvature is constrained, negative curvature is generic.
Many classical theorems in Riemannian geometry show that manifolds with positive curvature are constrained, most dramatically the 1/4-pinched sphere theorem. Conversely, negative curvature is generic: for instance, any manifold of dimension formula_13 admits a metric with negative Ricci curvature.
This phenomenon is evident already for surfaces: there is a single orientable (and a single non-orientable) closed surface with positive curvature (the sphere and projective plane),
and likewise for zero curvature (the torus and the Klein bottle), while all surfaces of higher genus admit only negative-curvature metrics.
Similarly for 3-manifolds: of the 8 geometries,
all but hyperbolic are quite constrained.
Overview by dimension.
Thus dimension 4 differentiable manifolds are the most complicated:
they are neither geometrizable (as in lower dimension),
nor are they classified by surgery (as in higher dimension or topologically),
and they exhibit unusual phenomena, most strikingly the uncountably infinitely many exotic differentiable structures on R4. Notably, the differentiable case in dimension 4 is the only remaining open case of the generalized Poincaré conjecture.
One can take a low-dimensional point of view on high-dimensional manifolds
and ask "Which high-dimensional manifolds are geometrizable?",
for various notions of geometrizable (cut into geometrizable pieces as in 3 dimensions, into symplectic manifolds, and so forth). In dimension 4 and above not all manifolds
are geometrizable, but they are an interesting class.
Conversely, one can take a high-dimensional point of view on low-dimensional manifolds
and ask "What does surgery "predict" for low-dimensional manifolds?",
meaning "If surgery worked in low dimensions, what would low-dimensional manifolds look like?"
One can then compare the actual theory of low-dimensional manifolds
to the low-dimensional analog of high-dimensional manifolds,
and see if low-dimensional manifolds behave "as you would expect":
in what ways do they behave like high-dimensional manifolds (but for different reasons,
or via different proofs)
and in what ways are they unusual?
Dimensions 0 and 1.
There is a unique connected 0-dimensional manifold, namely the point, and disconnected 0-dimensional manifolds are just discrete sets, classified by cardinality. They have no geometry, and their study is combinatorics.
A connected compact 1-dimensional manifold without boundary is homeomorphic (or diffeomorphic if it is smooth) to the circle. A second countable, non-compact 1-dimensional manifold is homeomorphic or diffeomorphic to the real line. Dropping the assumption of second countability one gets two additional manifolds: the long line, and a space formed from a ray of the real line and a ray of the long line meeting at a point.
The study of maps of 1-dimensional manifolds is a non-trivial area.
Dimensions 2 and 3: geometrizable.
Every connected closed 2-dimensional manifold (surface) admits a constant curvature metric, by the uniformization theorem. There are 3 such curvatures (positive, zero, and negative).
This is a classical result, and as stated, easy (the full uniformization theorem is subtler). The study of surfaces is deeply connected with complex analysis and algebraic geometry, as every orientable surface can be considered a Riemann surface or complex algebraic curve. While the classification of surfaces is classical, maps of surfaces is an active area; see below.
Every closed 3-dimensional manifold can be cut into pieces that are geometrizable, by the geometrization conjecture, and there are 8 such geometries.
This is a recent result, and quite difficult. The proof (the solution of the Poincaré conjecture) is analytic, not topological.
Dimension 4: exotic.
Four-dimensional manifolds are the most unusual: they are not geometrizable (as in lower dimensions), and surgery works topologically, but not differentiably.
Since "topologically", 4-manifolds are classified by surgery, the differentiable classification question is phrased in terms of "differentiable structures": "which (topological) 4-manifolds admit a differentiable structure, and on those that do, how many differentiable structures are there?"
Four-manifolds often admit many unusual differentiable structures, most strikingly the uncountably infinitely many exotic differentiable structures on R4.
Similarly, the differentiable case in dimension 4 is the only remaining open case of the generalized Poincaré conjecture.
Dimension 5 and more: surgery.
In dimension 5 and above (and 4 dimensions topologically), manifolds are classified by surgery theory.
The reason for dimension 5 is that the Whitney trick works in the middle dimension in dimension 5 and more: two Whitney disks generically don't intersect in dimension 5 and above, by general position (formula_14).
In dimension 4, one can resolve intersections of two Whitney disks via Casson handles, which works topologically but not differentiably; see Geometric topology: Dimension for details on dimension.
More subtly, dimension 5 is the cut-off because the middle dimension has codimension more than 2: when the codimension is 2, one encounters knot theory, but when the codimension is more than 2, embedding theory is tractable, via the calculus of functors. This is discussed further below.
Maps between manifolds.
From the point of view of category theory, the classification of manifolds is one piece of understanding the category: it's classifying the "objects". The other question is classifying "maps" of manifolds up to various equivalences, and there are many results and open questions in this area.
For maps, the appropriate notion of "low dimension" is for some purposes "self maps of low-dimensional manifolds", and for other purposes "low codimension".
Low codimension.
Analogously to the classification of manifolds, in high "co"dimension (meaning more than 2), embeddings are classified by surgery, while in low codimension or in relative dimension, they are rigid and geometric, and in the middle (codimension 2), one has a difficult exotic theory (knot theory).
High dimensions.
Particularly topologically interesting classes of maps include embeddings, immersions, and submersions.
Geometrically interesting are isometries and isometric immersions.
Fundamental results in embeddings and immersions include:
Key tools in studying these maps are:
One may classify maps up to various equivalences:
Diffeomorphisms up to cobordism have been classified by Matthias Kreck.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mbox{Diff} \\to \\mbox{Top}"
},
{
"math_id": 1,
"text": "n \\geq 0"
},
{
"math_id": 2,
"text": "MO_*(M)"
},
{
"math_id": 3,
"text": "[0,1]"
},
{
"math_id": 4,
"text": "S^1"
},
{
"math_id": 5,
"text": "(0,1)"
},
{
"math_id": 6,
"text": "[0,1)"
},
{
"math_id": 7,
"text": "n \\geq 4"
},
{
"math_id": 8,
"text": "(N,f)"
},
{
"math_id": 9,
"text": "f\\colon N\\to M"
},
{
"math_id": 10,
"text": "(N',f')"
},
{
"math_id": 11,
"text": "h\\colon N\\to N'"
},
{
"math_id": 12,
"text": "f'h \\sim f\\colon N\\to M"
},
{
"math_id": 13,
"text": "n\\geq 3"
},
{
"math_id": 14,
"text": "2+2 < 5"
}
]
| https://en.wikipedia.org/wiki?curid=14069478 |
140710 | Electrical reactance | Opposition to current by inductance or capacitance
In electrical circuits, reactance is the opposition presented to alternating current by inductance and capacitance. Along with resistance, it is one of two elements of impedance; however, while both elements involve transfer of electrical energy, no dissipation of electrical energy as heat occurs in reactance; instead, the reactance stores energy until a quarter-cycle later when the energy is returned to the circuit. Greater reactance gives smaller current for the same applied voltage.
Reactance is used to compute amplitude and phase changes of sinusoidal alternating current going through a circuit element. Like resistance, reactance is measured in ohms, with positive values indicating "inductive" reactance and negative indicating "capacitive" reactance. It is denoted by the symbol formula_0. An ideal resistor has zero reactance, whereas ideal reactors have no shunt conductance and no series resistance. As frequency increases, inductive reactance increases and capacitive reactance decreases.
Comparison to resistance.
Reactance is similar to resistance in that larger reactance leads to smaller currents for the same applied voltage. Further, a circuit made entirely of elements that have only reactance (and no resistance) can be treated the same way as a circuit made entirely of resistances. These same techniques can also be used to combine elements with reactance with elements with resistance but complex numbers are typically needed. This is treated below in the section on impedance.
There are several important differences between reactance and resistance, though. First, reactance changes the phase so that the current through the element is shifted by a quarter of a cycle relative to the phase of the voltage applied across the element. Second, power is not dissipated in a purely reactive element but is stored instead. Third, reactances can be negative so that they can 'cancel' each other out. Finally, the main circuit elements that have reactance (capacitors and inductors) have a frequency dependent reactance, unlike resistors which have the same resistance for all frequencies, at least in the ideal case.
The term "reactance" was first suggested by French engineer M. Hospitalier in "L'Industrie Electrique" on 10 May 1893. It was officially adopted by the American Institute of Electrical Engineers in May 1894.
Capacitive reactance.
A capacitor consists of two conductors separated by an insulator, also known as a dielectric.
"Capacitive reactance" is an opposition to the change of voltage across an element. Capacitive reactance formula_1 is inversely proportional to the signal frequency formula_2 (or angular frequency formula_3) and the capacitance formula_4.
There are two choices in the literature for defining reactance for a capacitor. One is to use a uniform notion of reactance as the imaginary part of impedance, in which case the reactance of a capacitor is the negative number,
formula_5.
Another choice is to define capacitive reactance as a positive number,
formula_6.
In this case however one needs to remember to add a negative sign for the impedance of a capacitor, i.e. formula_7.
At formula_8, the magnitude of the capacitor's reactance is infinite, behaving like an open circuit (preventing any current from flowing through the dielectric). As frequency increases, the magnitude of reactance decreases, allowing more current to flow. As formula_2 approaches formula_9, the capacitor's reactance approaches formula_10, behaving like a short circuit.
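A numerical illustration of this behavior (the 100 nF capacitance is an arbitrary assumed value), showing the reactance magnitude falling as frequency rises:

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """Magnitude of capacitive reactance: |X_C| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

C = 100e-9  # assumed 100 nF capacitor
for f in (50.0, 1e3, 1e6):
    print(f"{f:>10,.0f} Hz -> {capacitive_reactance(f, C):>10,.2f} ohms")
```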
The application of a DC voltage across a capacitor causes positive charge to accumulate on one side and negative charge to accumulate on the other side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero.
Driven by an AC supply (ideal AC current source), a capacitor will only accumulate a limited amount of charge before the potential difference changes polarity and the charge is returned to the source. The higher the frequency, the less charge will accumulate and the smaller the opposition to the current.
Inductive reactance.
Inductive reactance is a property exhibited by an inductor, and inductive reactance exists based on the fact that an electric current produces a magnetic field around it. In the context of an AC circuit (although this concept applies any time current is changing), this magnetic field is constantly changing as a result of current that oscillates back and forth. It is this change in magnetic field that induces another electric current to flow in the same wire (counter-EMF), in a direction such as to oppose the flow of the current originally responsible for producing the magnetic field (known as Lenz's Law). Hence, "inductive reactance" is an opposition to the change of current through an element.
For an ideal inductor in an AC circuit, the inhibitive effect on change in current flow results in a delay, or a phase shift, of the alternating current with respect to alternating voltage. Specifically, an ideal inductor (with no resistance) will cause the current to lag the voltage by a quarter cycle, or 90°.
In electric power systems, inductive reactance (and capacitive reactance, though inductive reactance is more common) can limit the power capacity of an AC transmission line, because power is not completely transferred when voltage and current are out of phase (as detailed above). That is, current will flow in an out-of-phase system, but real power at certain times will not be transferred, because there will be points during which instantaneous current is positive while instantaneous voltage is negative, or vice versa, implying negative power transfer. Hence, real work is not performed when power transfer is "negative". However, current still flows even when a system is out of phase, which causes transmission lines to heat up. Transmission lines can only heat up so much before they physically sag, as heat expands the metal conductors, so operators place a ceiling on the amount of current that can flow through a given line, and excessive inductive reactance can thus limit the power capacity of a line. Power providers use capacitors to shift the phase and minimize the losses, based on usage patterns.
Inductive reactance formula_11 is proportional to the sinusoidal signal frequency formula_2 and the inductance formula_12, which depends on the physical shape of the inductor:
formula_13.
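A numerical sketch of this relation (the 10 mH inductance is an arbitrary assumed value), showing reactance rising with frequency:

```python
import math

def inductive_reactance(f_hz, l_henries):
    """Inductive reactance: X_L = 2*pi*f*L."""
    return 2 * math.pi * f_hz * l_henries

L = 10e-3  # assumed 10 mH inductor
for f in (50.0, 1e3, 1e6):
    print(f"{f:>10,.0f} Hz -> {inductive_reactance(f, L):>12,.2f} ohms")
```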
The average current flowing through an inductance formula_12 in series with a sinusoidal AC voltage source of RMS amplitude formula_14 and frequency formula_2 is equal to:
formula_15
Because a square wave is a superposition of sinusoidal harmonics with varying amplitudes, the average current flowing through an inductance formula_12 in series with a square-wave AC voltage source of RMS amplitude formula_14 and frequency formula_2 is equal to:
formula_16
making it appear as if the inductive reactance to a square wave was about 19% smaller formula_17 than the reactance to the AC sine wave.
Any conductor of finite dimensions has inductance; the inductance is made larger by the multiple turns in an electromagnetic coil. Faraday's law of electromagnetic induction gives the counter-emf formula_18 (voltage opposing current) due to a rate-of-change of magnetic flux density formula_19 through a current loop.
formula_20
For an inductor consisting of a coil with formula_21 loops this gives:
formula_22.
The counter-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency, this causes the increase in inductive reactance with frequency.
Impedance.
Both reactance formula_23 and resistance formula_24 are components of impedance formula_25.
formula_26
where:
When both a capacitor and an inductor are placed in series in a circuit, their contributions to the total circuit impedance are opposite. Capacitive reactance formula_1 and inductive reactance formula_11 contribute to the total reactance formula_0 as follows:
formula_33
where:
Hence:
Note however that if formula_11 and formula_1 are assumed both positive by definition, then the intermediary formula changes to a difference:
formula_38
but the ultimate value is the same.
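These relations can be checked numerically with complex arithmetic. The sketch below uses arbitrary assumed component values and a simple series RLC circuit, whose inductive and capacitive reactances cancel at the resonant frequency:

```python
import math

def series_rlc_impedance(r, l, c, f):
    """Series RLC impedance Z = R + j(X_L + X_C), with X_C taken as negative."""
    w = 2 * math.pi * f
    x_total = w * l - 1.0 / (w * c)   # X_L + X_C
    return complex(r, x_total)

R, L, C = 10.0, 1e-3, 1e-6                   # assumed: 10 ohm, 1 mH, 1 uF
f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))  # reactances cancel here
z = series_rlc_impedance(R, L, C, f0)
print(f"resonance ~ {f0:.0f} Hz; Z = {z.real:.1f} + {z.imag:.2e}j ohms")
```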
Phase relationship.
The phase of the voltage across a purely reactive device (i.e. with zero parasitic resistance) "lags" the current by formula_39 radians for a capacitive reactance and "leads" the current by formula_39 radians for an inductive reactance. Without knowledge of both the resistance and reactance the relationship between voltage and current cannot be determined.
The origin of the different signs for capacitive and inductive reactance is the phase factor formula_40 in the impedance.
formula_41
For a reactive component the sinusoidal voltage across the component is in quadrature (a formula_39 phase difference) with the sinusoidal current through the component. The component alternately absorbs energy from the circuit and then returns energy to the circuit, thus a pure reactance does not dissipate power.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X_C"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "X_C = -\\frac {1} {\\omega C} = -\\frac {1} {2\\pi f C}"
},
{
"math_id": 6,
"text": "X_C = \\frac {1} {\\omega C} = \\frac {1} {2\\pi f C}"
},
{
"math_id": 7,
"text": "Z_c=-jX_c"
},
{
"math_id": 8,
"text": "f=0"
},
{
"math_id": 9,
"text": "\\infty"
},
{
"math_id": 10,
"text": "0"
},
{
"math_id": 11,
"text": "X_L"
},
{
"math_id": 12,
"text": "L"
},
{
"math_id": 13,
"text": "X_L = \\omega L = 2\\pi f L"
},
{
"math_id": 14,
"text": "A"
},
{
"math_id": 15,
"text": "I_L = {A \\over \\omega L} = {A \\over 2\\pi f L}."
},
{
"math_id": 16,
"text": "I_L = {A \\pi^2 \\over 8 \\omega L} = {A\\pi \\over 16 f L}"
},
{
"math_id": 17,
"text": "X_L = {16 \\over \\pi} f L"
},
{
"math_id": 18,
"text": "\\mathcal{E}"
},
{
"math_id": 19,
"text": "\\scriptstyle{B}"
},
{
"math_id": 20,
"text": "\\mathcal{E} = -{{d\\Phi_B} \\over dt}"
},
{
"math_id": 21,
"text": "N"
},
{
"math_id": 22,
"text": "\\mathcal{E} = -N{d\\Phi_B \\over dt}"
},
{
"math_id": 23,
"text": "{X}"
},
{
"math_id": 24,
"text": "{R}"
},
{
"math_id": 25,
"text": "{\\mathbf{Z}}"
},
{
"math_id": 26,
"text": "\\mathbf{Z} = R + \\mathbf{j}X"
},
{
"math_id": 27,
"text": "\\mathbf{Z}"
},
{
"math_id": 28,
"text": "R"
},
{
"math_id": 29,
"text": "{R=\\text{Re}{(\\mathbf{Z})}}"
},
{
"math_id": 30,
"text": "{X=\\text{Im}{(\\mathbf{Z})}}"
},
{
"math_id": 31,
"text": "\\mathbf{j}"
},
{
"math_id": 32,
"text": "\\mathbf{i}"
},
{
"math_id": 33,
"text": "{X = X_L + X_C = \\omega L -\\frac {1} {\\omega C}}"
},
{
"math_id": 34,
"text": "2\\pi"
},
{
"math_id": 35,
"text": "\\scriptstyle X > 0"
},
{
"math_id": 36,
"text": "\\scriptstyle X = 0"
},
{
"math_id": 37,
"text": "\\scriptstyle X < 0"
},
{
"math_id": 38,
"text": "{X = X_L - X_C = \\omega L -\\frac {1} {\\omega C}}"
},
{
"math_id": 39,
"text": "\\tfrac{\\pi}{2}"
},
{
"math_id": 40,
"text": "e^{\\pm \\mathbf{j}{\\frac{\\pi}{2}}}"
},
{
"math_id": 41,
"text": "\\begin{align}\n \\mathbf{Z}_C &= {1 \\over \\omega C}e^{-\\mathbf{j}{\\pi \\over 2}} = \\mathbf{j}\\left({ -\\frac{1}{\\omega C}}\\right) = \\mathbf{j}X_C \\\\\n \\mathbf{Z}_L &= \\omega Le^{\\mathbf{j}{\\pi \\over 2}} = \\mathbf{j}\\omega L = \\mathbf{j}X_L\\quad\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=140710 |
140711 | Capacitance | Ability of a body to store an electrical charge
Capacitance is the ability of a material object or device to store electric charge. It is measured by the charge in response to a difference in electric potential, expressed as the ratio of those quantities. Commonly recognized are two closely related notions of capacitance: "self capacitance" and "mutual capacitance". An object that can be electrically charged exhibits self capacitance, for which the electric potential is measured between the object and ground. Mutual capacitance is measured between two components, and is particularly important in the operation of the capacitor, an elementary linear electronic component designed to add capacitance to an electric circuit.
The capacitance between two conductors depends only on the geometry (the opposing surface area of the conductors and the distance between them) and on the permittivity of any dielectric material between them. For many dielectric materials, the permittivity, and thus the capacitance, is independent of the potential difference between the conductors and the total charge on them.
The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday. A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates. The reciprocal of capacitance is called elastance.
Historically, the farad was regarded as an inconveniently large unit, and capacitors encountered in practice ranged from a few picofarads to a few thousand microfarads. More recent developments in dielectric materials have permitted the manufacture of many types of capacitor of up to (as of 2024) a few thousand farads in reasonable physical sizes. These are usually described as 'supercapacitors'.
Self capacitance.
In discussing electrical circuits, the term "capacitance" is usually a shorthand for the mutual capacitance between two adjacent conductors, such as the two plates of a capacitor. However, every isolated conductor also exhibits capacitance, here called "self capacitance". It is measured by the amount of electric charge that must be added to an isolated conductor to raise its electric potential by one unit of measurement, e.g., one volt. The reference point for this potential is a theoretical hollow conducting sphere, of infinite radius, with the conductor centered inside this sphere.
Self capacitance of a conductor is defined by the ratio of charge and electric potential:
formula_0
where
Using this method, the self capacitance of a conducting sphere of radius formula_7 in free space (i.e. far away from any other charge distributions) is:
formula_8
Example values of self capacitance are about 710 μF for the planet Earth and about 1.1 pF for an isolated conducting sphere of 1 cm radius.
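A quick numerical check of the sphere formula above, using the two radii just mentioned:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sphere_self_capacitance(radius_m):
    """Self capacitance of an isolated conducting sphere: C = 4*pi*eps0*R."""
    return 4 * math.pi * EPS0 * radius_m

print(f"Earth (R = 6.371e6 m): {sphere_self_capacitance(6.371e6) * 1e6:.0f} uF")
print(f"1 cm sphere:           {sphere_self_capacitance(0.01) * 1e12:.2f} pF")
```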
The inter-winding capacitance of a coil is sometimes called self capacitance, but this is a different phenomenon. It is actually mutual capacitance between the individual turns of the coil and is a form of stray or parasitic capacitance. This self capacitance is an important consideration at high frequencies: it changes the impedance of the coil and gives rise to parallel resonance. In many applications this is an undesirable effect and sets an upper frequency limit for the correct operation of the circuit.
Mutual capacitance.
A common form is a parallel-plate capacitor, which consists of two conductive plates insulated from each other, usually sandwiching a dielectric material. In a parallel plate capacitor, capacitance is very nearly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates.
If the charges on the plates are formula_9 and formula_10, and formula_11 gives the voltage between the plates, then the capacitance formula_12 is given by formula_0
which gives the voltage/current relationship
formula_13
where formula_14 is the instantaneous rate of change of voltage, and formula_15 is the instantaneous rate of change of the capacitance. For most applications, the change in capacitance over time is negligible, so the formula reduces to:
formula_16
The energy stored in a capacitor is found by integrating the work formula_17:
formula_18
Capacitance matrix.
The discussion above is limited to the case of two conducting plates, although of arbitrary size and shape. The definition formula_19 does not apply when there are more than two charged plates, or when the net charge on the two plates is non-zero. To handle this case, James Clerk Maxwell introduced his "coefficients of potential". If three (nearly ideal) conductors are given charges formula_20, then the voltage at conductor 1 is given by
formula_21
and similarly for the other voltages. Hermann von Helmholtz and Sir William Thomson showed that the coefficients of potential are symmetric, so that formula_22, etc. Thus the system can be described by a collection of coefficients known as the "elastance matrix" or "reciprocal capacitance matrix", which is defined as:
formula_23
From this, the mutual capacitance formula_24 between two objects can be defined by solving for the total charge formula_25 and using formula_26.
formula_27
Since no actual device holds perfectly equal and opposite charges on each of the two "plates", it is the mutual capacitance that is reported on capacitors.
The collection of coefficients formula_28 is known as the "capacitance matrix", and is the inverse of the elastance matrix.
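A small numerical sketch of these definitions; the elastance values below are arbitrary assumptions, chosen only so the matrix is symmetric and invertible:

```python
import numpy as np

# Assumed elastance (reciprocal capacitance) matrix for two conductors, in 1/F.
P = np.array([[2.0e9, 0.5e9],
              [0.5e9, 3.0e9]])

C_matrix = np.linalg.inv(P)   # capacitance matrix = inverse of elastance matrix

# Mutual capacitance between the two conductors, per the formula above.
C_m = 1.0 / (P[0, 0] + P[1, 1] - P[0, 1] - P[1, 0])
print(C_matrix)
print(f"C_m = {C_m * 1e12:.0f} pF")  # 250 pF for these assumed values
```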
Capacitors.
The capacitance of the majority of capacitors used in electronic circuits is generally several orders of magnitude smaller than the farad. The most common units of capacitance are the microfarad (μF), nanofarad (nF), picofarad (pF), and, in microcircuits, femtofarad (fF). Some applications also use supercapacitors that can be much larger, as much as hundreds of farads, and parasitic capacitive elements can be less than a femtofarad. Historical texts use other, obsolete submultiples of the farad, such as "mf" and "mfd" for microfarad (μF); "mmf", "mmfd", "pfd", "μμF" for picofarad (pF).
The capacitance can be calculated if the geometry of the conductors and the dielectric properties of the insulator between the conductors are known. Capacitance is proportional to the area of overlap and inversely proportional to the separation between conducting sheets. The closer the sheets are to each other, the greater the capacitance.
An example is the capacitance of a capacitor constructed of two parallel plates both of area formula_29 separated by a distance formula_30. If formula_30 is sufficiently small with respect to the smallest chord of formula_29, there holds, to a high level of accuracy:
formula_31
formula_32
where
The equation is a good approximation if "d" is small compared to the other dimensions of the plates so that the electric field in the capacitor area is uniform, and the so-called "fringing field" around the periphery provides only a small contribution to the capacitance.
Combining the equation for capacitance with the above equation for the energy stored in a capacitor, for a flat-plate capacitor the energy stored is:
formula_37
where formula_17 is the energy, in joules; formula_12 is the capacitance, in farads; and formula_11 is the voltage, in volts.
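A minimal numerical sketch of these two formulas, with assumed plate dimensions, gap, and voltage:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance, C = eps0*eps_r*A/d (fringing neglected)."""
    return EPS0 * eps_r * area_m2 / gap_m

def stored_energy(c_farads, volts):
    """Energy stored in a capacitor, W = C*V^2 / 2."""
    return 0.5 * c_farads * volts ** 2

# Assumed example: 1 cm x 1 cm plates, 0.1 mm apart, air dielectric, 5 V.
C = parallel_plate(1e-4, 1e-4)
print(f"C = {C * 1e12:.2f} pF, W = {stored_energy(C, 5.0) * 1e12:.1f} pJ")
```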
Stray capacitance.
Any two adjacent conductors can function as a capacitor, though the capacitance is small unless the conductors are close together for long distances or over a large area. This (often unwanted) capacitance is called parasitic or stray capacitance. Stray capacitance can allow signals to leak between otherwise isolated circuits (an effect called crosstalk), and it can be a limiting factor for proper functioning of circuits at high frequency.
Stray capacitance between the input and output in amplifier circuits can be troublesome because it can form a path for feedback, which can cause instability and parasitic oscillation in the amplifier. It is often convenient for analytical purposes to replace this capacitance with a combination of one input-to-ground capacitance and one output-to-ground capacitance; the original configuration – including the input-to-output capacitance – is often referred to as a pi-configuration. Miller's theorem can be used to effect this replacement: it states that, if the gain ratio of two nodes is 1/"K", then an impedance of "Z" connecting the two nodes can be replaced with a "Z"/(1 − "K") impedance between the first node and ground and a "KZ"/("K" − 1) impedance between the second node and ground. Since impedance varies inversely with capacitance, the internode capacitance, "C", is replaced by a capacitance of "KC" from input to ground and a capacitance of ("K" − 1)"C"/"K" from output to ground. When the input-to-output gain is very large, the equivalent input-to-ground impedance is very small while the output-to-ground impedance is essentially equal to the original (input-to-output) impedance.
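A minimal sketch of this Miller multiplication of input capacitance, assuming an inverting amplifier; the gain and feedback capacitance are arbitrary illustrative values:

```python
def miller_input_capacitance(c_feedback, voltage_gain):
    """Effective input capacitance from the Miller effect: C_in = C * (1 - A_v).
    For an inverting amplifier, A_v < 0, so C_in = C * (1 + |A_v|)."""
    return c_feedback * (1.0 - voltage_gain)

# Assumed: 2 pF input-to-output capacitance, voltage gain of -100.
print(miller_input_capacitance(2e-12, -100.0))  # 2.02e-10 F, i.e. 202 pF
```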
Capacitance of conductors with simple shapes.
Calculating the capacitance of a system amounts to solving the Laplace equation formula_38 with a constant potential formula_39 on the 2-dimensional surface of the conductors embedded in 3-space. This is simplified by symmetries. There is no solution in terms of elementary functions in more complicated cases.
For plane situations, analytic functions may be used to map different geometries to each other. See also Schwarz–Christoffel mapping.
Energy storage.
The energy (measured in joules) stored in a capacitor is equal to the "work" required to push the charges into the capacitor, i.e. to charge it. Consider a capacitor of capacitance "C", holding a charge +"q" on one plate and −"q" on the other. Moving a small element of charge d"q" from one plate to the other against the potential difference "V" = "q"/"C" requires the work d"W":
formula_40
where "W" is the work measured in joules, "q" is the charge measured in coulombs and "C" is the capacitance, measured in farads.
The energy stored in a capacitor is found by integrating this equation. Starting with an uncharged capacitance ("q" = 0) and moving charge from one plate to the other until the plates have charge +"Q" and −"Q" requires the work "W":
formula_41
Nanoscale systems.
The capacitance of nanoscale dielectric capacitors such as quantum dots may differ from conventional formulations of larger capacitors. In particular, the electrostatic potential difference experienced by electrons in conventional capacitors is spatially well-defined and fixed by the shape and size of metallic electrodes in addition to the statistically large number of electrons present in conventional capacitors. In nanoscale capacitors, however, the electrostatic potentials experienced by electrons are determined by the number and locations of all electrons that contribute to the electronic properties of the device. In such devices, the number of electrons may be very small, so the resulting spatial distribution of equipotential surfaces within the device is exceedingly complex.
Single-electron devices.
The capacitance of a connected, or "closed", single-electron device is twice the capacitance of an unconnected, or "open", single-electron device. This fact may be traced more fundamentally to the energy stored in the single-electron device whose "direct polarization" interaction energy may be equally divided into the interaction of the electron with the polarized charge on the device itself due to the presence of the electron and the amount of potential energy required to form the polarized charge on the device (the interaction of charges in the device's dielectric material with the potential due to the electron).
Few-electron devices.
The derivation of a "quantum capacitance" of a few-electron device involves the thermodynamic chemical potential of an "N"-particle system given by
formula_42
whose energy terms may be obtained as solutions of the Schrödinger equation. The definition of capacitance,
formula_43
with the potential difference
formula_44
may be applied to the device with the addition or removal of individual electrons,
formula_45 and formula_46
The "quantum capacitance" of the device is then
formula_47
This expression of "quantum capacitance" may be written as
formula_48
which differs from the conventional expression described in the introduction where formula_49, the stored electrostatic potential energy,
formula_50
by a factor of 1/2 with formula_51.
However, within the framework of purely classical electrostatic interactions, the appearance of the factor of 1/2 is the result of integration in the conventional formulation involving the work done when charging a capacitor,
formula_52
which is appropriate since formula_53 for systems involving either many electrons or metallic electrodes, but in few-electron systems, formula_54. The integral generally becomes a summation. One may trivially combine the expressions of capacitance
formula_55
and electrostatic interaction energy,
formula_56
to obtain
formula_57
which is similar to the quantum capacitance. A more rigorous derivation is reported in the literature. In particular, to circumvent the mathematical challenges of spatially complex equipotential surfaces within the device, an "average" electrostatic potential experienced by each electron is utilized in the derivation.
Apparent mathematical differences may be understood more fundamentally. The potential energy, formula_58, of an isolated device (self-capacitance) is twice that stored in a "connected" device in the lower limit formula_59. As formula_60 grows large, formula_61. Thus, the general expression of capacitance is
formula_62
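As a toy numerical illustration of the few-electron formulas above (the 10 meV addition energy is an assumed value loosely typical of small quantum dots, not a figure from the text):

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def quantum_capacitance(addition_energy_j):
    """C_Q(N) = e^2 / (mu(N+1) - mu(N)); the denominator is the addition energy."""
    return E_CHARGE ** 2 / addition_energy_j

# Assumed addition-energy spacing of 10 meV for a small quantum dot.
c_q = quantum_capacitance(10e-3 * E_CHARGE)
print(f"C_Q = {c_q * 1e18:.1f} aF")  # about 16 attofarads
```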
In nanoscale devices such as quantum dots, the "capacitor" is often an isolated or partially isolated component within the device. The primary differences between nanoscale capacitors and macroscopic (conventional) capacitors are the number of excess electrons (charge carriers, or electrons, that contribute to the device's electronic behavior) and the shape and size of metallic electrodes. In nanoscale devices, nanowires consisting of metal atoms typically do not exhibit the same conductive properties as their macroscopic, or bulk material, counterparts.
Capacitance in electronic and semiconductor devices.
In electronic and semiconductor devices, transient or frequency-dependent current between terminals contains both conduction and displacement components. Conduction current is related to moving charge carriers (electrons, holes, ions, etc.), while displacement current is caused by a time-varying electric field. Carrier transport is affected by electric fields and by a number of physical phenomena - such as carrier drift and diffusion, trapping, injection, contact-related effects, impact ionization, etc. As a result, device admittance is frequency-dependent, and a simple electrostatic formula for capacitance formula_63 is not applicable. A more general definition of capacitance, encompassing electrostatic formula, is:
formula_64
where formula_65 is the device admittance, and formula_66 is the angular frequency.
In general, capacitance is a function of frequency. At high frequencies, capacitance approaches a constant value, equal to "geometric" capacitance, determined by the terminals' geometry and dielectric content in the device.
A paper by Steven Laux presents a review of numerical techniques for capacitance calculation. In particular, capacitance can be calculated by a Fourier transform of a transient current in response to a step-like voltage excitation:
formula_67
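A minimal numerical sketch of this formula for an ideal series RC step response, where the low-frequency result should recover the true capacitance; the resistance, capacitance, and step amplitude are assumed values:

```python
import numpy as np

R, C_true, dV = 1e3, 1e-9, 1.0          # assumed: 1 kOhm, 1 nF, 1 V step
tau = R * C_true
t = np.linspace(0.0, 20 * tau, 200_001)
dt = t[1] - t[0]
i = (dV / R) * np.exp(-t / tau)          # ideal RC transient; i(inf) = 0

def capacitance(omega):
    """Numerical C(w) = (1/dV) * integral of [i(t) - i(inf)] cos(w t) dt."""
    return np.sum(i * np.cos(omega * t)) * dt / dV

print(capacitance(2 * np.pi * 1e3))      # ~1e-9 F, recovering C at low frequency
```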
Negative capacitance in semiconductor devices.
Usually, capacitance in semiconductor devices is positive. However, in some devices and under certain conditions (temperature, applied voltages, frequency, etc.), capacitance can become negative. Non-monotonic behavior of the transient current in response to a step-like excitation has been proposed as the mechanism of negative capacitance. Negative capacitance has been demonstrated and explored in many different types of semiconductor devices.
Measuring capacitance.
A capacitance meter is a piece of electronic test equipment used to measure capacitance, mainly of discrete capacitors. For most purposes and in most cases the capacitor must be disconnected from circuit.
Many DVMs (digital volt meters) have a capacitance-measuring function. These usually operate by charging and discharging the capacitor under test with a known current and measuring the rate of rise of the resulting voltage; the slower the rate of rise, the larger the capacitance. DVMs can usually measure capacitance from nanofarads to a few hundred microfarads, but wider ranges are not unusual. It is also possible to measure capacitance by passing a known high-frequency alternating current through the device under test and measuring the resulting voltage across it (does not work for polarised capacitors).
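A sketch of the charge-ramp principle such meters rely on; the test current and measured ramp below are assumed illustrative values:

```python
def capacitance_from_ramp(current_a, dv_volts, dt_seconds):
    """Estimate C from a constant-current charge ramp: i = C dv/dt => C = i*dt/dv."""
    return current_a * dt_seconds / dv_volts

# Assumed meter behavior: a 10 uA test current raises the voltage 1 V in 4.7 ms.
print(capacitance_from_ramp(10e-6, 1.0, 4.7e-3))  # ~4.7e-8 F, i.e. 47 nF
```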
More sophisticated instruments use other techniques such as inserting the capacitor-under-test into a bridge circuit. By varying the values of the other legs in the bridge (so as to bring the bridge into balance), the value of the unknown capacitor is determined. This method of "indirect" use of measuring capacitance ensures greater precision. Through the use of Kelvin connections and other careful design techniques, these instruments can usually measure capacitors over a range from picofarads to farads.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "C = \\frac{q}{V},"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "V = \\frac{1}{4\\pi\\varepsilon_0}\\int \\frac{\\sigma}{r}\\,dS"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "dS"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "\\varepsilon_0"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "C = 4 \\pi \\varepsilon_0 R."
},
{
"math_id": 9,
"text": "+q"
},
{
"math_id": 10,
"text": "-q"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "C"
},
{
"math_id": 13,
"text": "i(t) = C \\frac{dv(t)}{dt} + V\\frac{dC}{dt},"
},
{
"math_id": 14,
"text": "\\frac{dv(t)}{dt}"
},
{
"math_id": 15,
"text": "\\frac{dC}{dt}"
},
{
"math_id": 16,
"text": "i(t) = C \\frac{dv(t)}{dt},"
},
{
"math_id": 17,
"text": "W"
},
{
"math_id": 18,
"text": " W_\\text{charging} = \\frac{1}{2}CV^2."
},
{
"math_id": 19,
"text": "C = Q/V"
},
{
"math_id": 20,
"text": "Q_1, Q_2, Q_3"
},
{
"math_id": 21,
"text": "V_1 = P_{11}Q_1 + P_{12} Q_2 + P_{13}Q_3, "
},
{
"math_id": 22,
"text": "P_{12} = P_{21}"
},
{
"math_id": 23,
"text": "P_{ij} = \\frac{\\partial V_{i}}{\\partial Q_{j}}."
},
{
"math_id": 24,
"text": "C_{m}"
},
{
"math_id": 25,
"text": "Q"
},
{
"math_id": 26,
"text": "C_{m}=Q/V"
},
{
"math_id": 27,
"text": "C_m = \\frac{1}{(P_{11} + P_{22})-(P_{12} + P_{21})}."
},
{
"math_id": 28,
"text": "C_{ij} = \\frac{\\partial Q_{i}}{\\partial V_{j}}"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "d"
},
{
"math_id": 31,
"text": "\\ C=\\varepsilon\\frac{A}{d};"
},
{
"math_id": 32,
"text": "\\varepsilon=\\varepsilon_0 \\varepsilon_r,"
},
{
"math_id": 33,
"text": "\\varepsilon_0"
},
{
"math_id": 34,
"text": "\\varepsilon_0 \\approx 8.854\\times 10^{-12} ~ \\mathrm{F{\\cdot}m^{-1}}"
},
{
"math_id": 35,
"text": "\\varepsilon_r"
},
{
"math_id": 36,
"text": "\\varepsilon_r \\approx 1"
},
{
"math_id": 37,
"text": " W_\\text{stored} = \\frac{1}{2} C V^2 = \\frac{1}{2} \\varepsilon \\frac{A}{d} V^2."
},
{
"math_id": 38,
"text": "\\nabla^2\\varphi=0"
},
{
"math_id": 39,
"text": "\\varphi"
},
{
"math_id": 40,
"text": " \\mathrm{d}W = \\frac{q}{C}\\,\\mathrm{d}q,"
},
{
"math_id": 41,
"text": " W_\\text{charging} = \\int_0^Q \\frac{q}{C} \\, \\mathrm{d}q = \\frac{1}{2}\\frac{Q^2}{C} = \\frac{1}{2}QV = \\frac{1}{2}CV^2 = W_\\text{stored}."
},
{
"math_id": 42,
"text": "\\mu(N) = U(N) - U(N-1),"
},
{
"math_id": 43,
"text": "{1\\over C} \\equiv {\\Delta V\\over\\Delta Q},"
},
{
"math_id": 44,
"text": "\\Delta V = {\\Delta \\mu \\,\\over e} = {\\mu(N + \\Delta N) -\\mu(N) \\over e}"
},
{
"math_id": 45,
"text": "\\Delta N = 1"
},
{
"math_id": 46,
"text": "\\Delta Q = e."
},
{
"math_id": 47,
"text": "C_Q(N) = \\frac{e^2}{\\mu(N+1)-\\mu(N)} = \\frac{e^2}{E(N)}."
},
{
"math_id": 48,
"text": "C_Q(N) = {e^2\\over U(N)},"
},
{
"math_id": 49,
"text": "W_\\text{stored} = U"
},
{
"math_id": 50,
"text": "C = {Q^2\\over 2U},"
},
{
"math_id": 51,
"text": "Q = Ne"
},
{
"math_id": 52,
"text": " W_\\text{charging} = U = \\int_0^Q \\frac{q}{C} \\, \\mathrm{d}q,"
},
{
"math_id": 53,
"text": "\\mathrm{d}q = 0"
},
{
"math_id": 54,
"text": "\\mathrm{d}q \\to \\Delta \\,Q= e"
},
{
"math_id": 55,
"text": "Q=CV"
},
{
"math_id": 56,
"text": "U = Q V ,"
},
{
"math_id": 57,
"text": "C = Q{1\\over V} = Q {Q \\over U} = {Q^2 \\over U},"
},
{
"math_id": 58,
"text": "U(N)"
},
{
"math_id": 59,
"text": "N = 1"
},
{
"math_id": 60,
"text": "N"
},
{
"math_id": 61,
"text": "U(N)\\to U"
},
{
"math_id": 62,
"text": "C(N) = {(Ne)^2 \\over U(N)}."
},
{
"math_id": 63,
"text": "C = q/V,"
},
{
"math_id": 64,
"text": "C = \\frac{\\operatorname{Im}(Y(\\omega))}{\\omega} ,"
},
{
"math_id": 65,
"text": "Y(\\omega)"
},
{
"math_id": 66,
"text": "\\omega"
},
{
"math_id": 67,
"text": "C(\\omega) = \\frac{1}{\\Delta V} \\int_0^\\infty [i(t)-i(\\infty)] \\cos (\\omega t) dt."
}
]
| https://en.wikipedia.org/wiki?curid=140711 |
14071480 | Formation flying | Flight of multiple objects in a coordinated shape or pattern
Formation flying is the flight of multiple objects in coordination. Formation flying occurs in nature among flying and gliding animals, and is also conducted in human aviation, often in military aviation and air shows.
A multitude of studies have been performed on the performance benefits of aircraft flying in formation.
History.
Birds have been known to receive performance benefits from formation flight for over a century, through aerodynamic theory of Wieselsberger in 1914.
Formation flight in human aviation originated in World War I, when fighter aircraft were assigned to escort reconnaissance aircraft. It was found that pairs of aircraft were more combat effective than single aircraft, and therefore, military aircraft would always fly in formations of at least two. By World War II, pilots had discovered other strategic advantages to formation flight such as enhanced stability and optimal visibility.
Mechanism of drag reduction.
It is a common misunderstanding to relate the reduction of drag in organized flight to the reduction of drag in drafting. However, they are quite different mechanistically.
The drag reduction that occurs in drafting is due to a reduction in flow speed in the wake of a leading vehicle, which reduces the amount the flow must accelerate to move around the trailing body, lowering the pressure in front of the trailing vehicle. This leads to a smaller pressure differential between the frontal and rear projected surfaces of the body, and hence less drag. This can also be understood, somewhat tautologically, through the common drag equation for a body, formula_0, where formula_1 is the experimentally obtained dimensionless drag coefficient, formula_2 is the density of the fluid medium through which the object travels, formula_3 is the cross-sectional area normal to the direction of mean flow, and formula_4 is the speed of the mean flow. It can be seen by inspection that a decrease in mean velocity generates less drag force, as is the case with drafting.
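A numerical sketch of this inspection, with assumed values for a trailing vehicle in the free stream versus the slowed wake:

```python
def drag_force(c_d, rho, area_m2, speed_ms):
    """Aerodynamic drag: F_D = 0.5 * C_D * rho * A * v^2."""
    return 0.5 * c_d * rho * area_m2 * speed_ms ** 2

rho_air = 1.225  # kg/m^3 at sea level
# Assumed: C_D = 0.30, frontal area 2.2 m^2; wake flow slowed from 30 to 25 m/s.
print(drag_force(0.30, rho_air, 2.2, 30.0))  # ~364 N in the free stream
print(drag_force(0.30, rho_air, 2.2, 25.0))  # ~253 N in the slowed wake
```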
In juxtaposition, the drag reduction felt by trailing agents in formation flight may be thought of more as the trailing agents "surfing" the vortices shed by the wings of leading agents, reducing the force needed to stay in the air. This force is known as lift and acts perpendicular to the freestream flow direction and to drag. These vortices, known as wingtip vortices, are formed by fluid flowing around the wingtips from the high-pressure region on the bottom of the wing to the low-pressure region on top. The flow separates from the airfoil and rotates about a low-pressure wake that forms the core of the vortex. This vortex changes the direction of the flow seen by trailing aircraft, increasing the lift over a segment of the wing and allowing for a reduction in induced drag by lowering the angle of attack.
This can also be shown by the drag equation and the analogous lift equation, formula_5. The difference now is that formula_1 and formula_6 vary linearly with the angle of attack formula_7, which is the angle formed by the neutral axis of the aircraft and the freestream flow. Since the local flow arrives at a higher angle of attack due to the vortex, both the lift and drag forces are rotated such that the lift force vector generates a forward thrust and the drag force vector generates an increase in lift. With this increase in lift force, the angle of attack may be reduced to maintain the target lift needed to hold altitude while cruising, which causes a reduction in induced drag, since drag and lift are functions of formula_7 through the coefficients formula_1 and formula_6.
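As a toy illustration of this mechanism (not a model from the text), the sketch below uses the classical finite-wing induced-drag formula and treats the vortex upwash as a small tilt of the local flow, so the lift vector contributes a thrust-like term of roughly the lift coefficient times the upwash angle; all numbers are arbitrary assumptions.

```python
import math

def induced_drag_coeff(c_l, aspect_ratio, e=0.9):
    """Classical finite-wing induced drag: C_Di = C_L^2 / (pi * e * AR)."""
    return c_l ** 2 / (math.pi * e * aspect_ratio)

# Assumed toy values: cruise lift coefficient, wing aspect ratio, 1 deg upwash.
c_l, ar, eps = 0.8, 8.0, math.radians(1.0)
c_di = induced_drag_coeff(c_l, ar)
thrust_like = c_l * eps  # forward tilt of the lift vector offsets induced drag
print(f"C_Di = {c_di:.4f}, offset ~ {thrust_like:.4f} "
      f"({100 * thrust_like / c_di:.0f}% of induced drag)")
```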
Nature.
Migrating birds.
Birds are typically observed to fly in V-shaped formations or J-shaped formations, the latter commonly known as echelon. The first study to attempt to quantify the energy saving of a large flock of birds was Lissaman & Schollenberger, who provided the first, albeit notably flawed, estimate for a 25-member flock of birds. An impressive 71% range extension relative to single-bird flight was reported. Such large reported extensions are typically a consequence of using a fixed-wing approximation. Haffner (1977) experimented with birds flying in wind tunnels and calculated a more conservative range extension of 22%.
Studies have been performed on the phase of flapping, finding that birds flying in V-shaped formations coordinate their flapping, while those in echelon do not. Willis et al. (2007) found that optimal phasing of flaps accounts for 20% of power saving, suggesting that positioning is more important than perfectly catching the oncoming vortex.
Studies of birds have shown that the V formation can greatly enhance the overall aerodynamic efficiency by reducing the drag and thereby increasing the flight range.
Insects.
Insect swarms are a collective animal behavior that is an area of active research for the application of drones. The unique feature of insect swarms is their leaderless, yet organized, flight. In a particle image velocimetry study of 10 midges by Kelley and Ouellette (2013), the boundaries of the swarm were statistically consistent even though the flight of the individual insects within the swarm was virtually asynchronous. There is also some suggestion of clustering, implying there may be some self-organizing behavior.
Aviation.
Terminology and examples.
The smallest unit of a formation is called a section or element, consisting of two aircraft; these pilots are a leader and wingman. A division or flight consists of two sections or elements. Multiple divisions or flights are assembled into a formation. A standard fighter formation includes aircraft whose positions are maintained by the wingmen to within one mile laterally or longitudinally and 100 feet vertically of the flight leader's aircraft. A nonstandard formation results when the flight leader has requested, and air traffic control has approved, dimensions that do not conform with the stated boundaries; when operating within an authorized altitude reservation or under the provisions of a Letter of Agreement; or when flight operations are being conducted in specially-designated airspace.
The "fingertip four" (or "finger-four") is the basic four-ship formation that resembles the position of the fingertips with the hand outstretched. The flight leader (#1) is piloting the foremost aircraft (middle fingertip), with the lead's wingman (#2) to the side and trailing (index fingertip); the section lead (#3) is opposite the lead's wingman on the opposite side (ring fingertip) while the section leader's wingman (#4) is trailing the section lead towards the same side (little fingertip). The fingertip formation is designated "strong right" or "strong left", depending on the side being flown by the section (#3 and #4) aircraft. For example, viewed from overhead, the "fingertip four strong right" formation from left to right consists of the #2 (lead's wingman), #1 (flight leader), #3 (section lead), and #4 (section lead's wingman) aircraft.
The flight leader should decide and communicate which orientation, "fingertip right" or "fingertip left", should be used as the basic formation prior to flight operations. Formations should transition to and from the basic formation to facilitate the use of hand and plane signals.
Military.
In military aviation, tactical formation flying is the disciplined flight of two or more aircraft under the command of a flight leader. Military pilots use tactical formations for mutual defense and concentration of firepower.
Unmanned aerial vehicles.
The challenge of achieving safe formation flight by unmanned aerial vehicles has been extensively investigated in the 21st century with aircraft and spacecraft systems. For aerial vehicles the advantages of performing formation flight include fuel saving, improved efficiency in air traffic control and cooperative task allocation. For space vehicles precise control of formation flight may enable future large aperture space telescopes, variable baseline space interferometers, autonomous rendezvous and docking and robotic assembly of space structures. One of the simplest formations used is where autonomous aircraft maintain formation with a lead aircraft which may itself be autonomous.
Civil aviation.
In civil aviation, formation flying is performed at air shows or for recreation. It is used to improve flying technique, and is also a prestigious activity of old aviation organizations, representing the more challenging skill of flying near another aircraft. Formation flying has also been proposed as a way to reduce fuel use by minimizing drag.
In the early 2000s, NASA's Autonomous Formation Flight program used a pair of F/A-18s.
In 2013, the Air Force Research Laboratory's Surfing Aircraft Vortices for Energy project showed 10–15% in fuel savings, installed on two Boeing C-17 Globemaster IIIs.
In 2017, NASA measured 8–10% lower fuel flow with two Gulfstream III aircraft on wake surfing test flights.
In 2018, the ecoDemonstrator, a Boeing 777F freighter from FedEx Express, had its fuel consumption reduced by 5–10% with the autopilot maintaining the separation based on ADS-B and TCAS information.
By taking advantage of wake updraft like migrating birds (biomimicry), Airbus believes an aircraft can save 5–10% of fuel by flying behind the preceding one.
After Airbus A380s tests showing 12% savings, it launched its 'fello'fly' project in November 2019 for test flights in 2020 with two A350s, before transatlantic flight trials with airlines in 2021.
Certification for shorter separation is enabled by ADS-B in oceanic airspace, and the only modification required would be flight control systems software.
Comfort would not be affected and trials are limited to two aircraft to reduce complexity but the concept could be expanded to include more.
Commercial operations could begin in 2025 with airline schedule adjustments, and other manufacturers' aircraft could be included.
On 9 November 2021, Airbus performed a 7 h 40 min Toulouse–Montreal demonstration with an A350-900 trailing an A350-1000 at close separation, saving a substantial amount of carbon dioxide: a potential of more than 5% fuel savings.
Partly funded by the EU SESAR air traffic management research, Airbus' "Geese initiative" will include Air France and French Bee A350s for flight trials in 2025 to 2026, and will include Boeing for interoperability.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_\\text{D} = \\frac{1}{2} C_\\text{D} A \\rho v^2"
},
{
"math_id": 1,
"text": "C_\\text{D}"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "F_\\text{L} = \\frac{1}{2} C_\\text{L} A \\rho v^2"
},
{
"math_id": 6,
"text": "C_\\text{L}"
},
{
"math_id": 7,
"text": "\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=14071480 |
14073 | Hydropower | Power generation via movement of water
Hydropower (from Ancient Greek ὑδρο- "hudro-", 'water'), also known as water power, is the use of falling or fast-running water to produce electricity or to power machines. This is achieved by converting the gravitational potential or kinetic energy of a water source to produce power. Hydropower is a method of sustainable energy production. Hydropower is now used principally for hydroelectric power generation, and is also applied as one half of an energy storage system known as pumped-storage hydroelectricity.
Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power. Nonetheless, it has economic, sociological, and environmental downsides and requires a sufficiently energetic source of water, such as a river or elevated lake. International institutions such as the World Bank view hydropower as a low-carbon means for economic development.
Since ancient times, hydropower from watermills has been used as a renewable energy source for irrigation and the operation of mechanical devices, such as gristmills, sawmills, textile mills, trip hammers, dock cranes, domestic lifts, and ore mills. A trompe, which produces compressed air from falling water, is sometimes used to power other machinery at a distance.
<templatestyles src="Template:TOC limit/styles.css" />
Calculating the amount of available power.
A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head.
The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity:
formula_0
where
* formula_1 (work flow rate out) is the useful power output (SI unit: watts)
* formula_2 ("eta") is the efficiency of the turbine (dimensionless)
* formula_3 is the mass flow rate (SI unit: kilograms per second)
* formula_4 ("rho") is the density of water (SI unit: kilograms per cubic metre)
* formula_5 is the volumetric flow rate (SI unit: cubic metres per second)
* formula_6 is the acceleration due to gravity (SI unit: metres per second per second)
* formula_7 ("Delta h") is the difference in height between the outlet and inlet (SI unit: metres)
To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2800 cubic feet per second) and a head of 145 metres, is 97 megawatts:
formula_8
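The arithmetic of this example can be reproduced in a few lines of Python (a sketch added for illustration; the variable names mirror the symbols defined above and are not from the original article):

```python
# Hydropower output: W_out = eta * rho * V_dot * g * delta_h
eta = 0.85        # turbine efficiency (dimensionless)
rho = 1000.0      # density of water (kg/m^3)
V_dot = 80.0      # volumetric flow rate (m^3/s)
g = 9.81          # acceleration due to gravity (m/s^2)
delta_h = 145.0   # head: height difference between inlet and outlet (m)

power = eta * rho * V_dot * g * delta_h   # in watts
print(f"{power / 1e6:.0f} MW")            # prints "97 MW", matching the example
```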
Operators of hydroelectric stations compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's efficiency guarantee. Detailed calculation of the efficiency of a hydropower turbine accounts for the head lost due to flow friction in the power canal or penstock, rise in tailwater level due to flow, the location of the station and effect of varying gravity, the air temperature and barometric pressure, the density of the water at ambient temperature, and the relative altitudes of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered.
Some hydropower systems such as water wheels can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Over-shot water wheels can efficiently capture both types of energy. The flow in a stream can vary widely from season to season. The development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However, reservoirs have a significant environmental impact, as does alteration of naturally occurring streamflow. Dam design must account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to route flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood.
Disadvantages and limitations.
Some disadvantages of hydropower have been identified. Dam failures can have catastrophic effects, including loss of life, property and pollution of land.
Dams and reservoirs can have major negative impacts on river ecosystems, such as preventing some animals from traveling upstream, cooling and de-oxygenating water released downstream, and loss of nutrients due to settling of particulates. River sediment builds river deltas, and dams prevent that sediment from restoring what is lost to erosion. Furthermore, studies have found that the construction of dams and reservoirs can result in habitat loss for some aquatic species. Large and deep dam and reservoir plants cover large areas of land, which causes greenhouse gas emissions from underwater rotting vegetation. Furthermore, although at lower levels than other renewable energy sources, it was found that hydropower produces methane with a greenhouse effect equivalent to almost a billion tonnes of CO2 a year. This occurs when organic matter accumulates at the bottom of the reservoir because of the deoxygenation of the water, which triggers anaerobic digestion.
People who live near a hydro plant site may be displaced during construction or when reservoir banks become unstable. Another potential disadvantage is that cultural or religious sites may block construction.
Applications.
Mechanical power.
Compressed air.
A plentiful head of water can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is deliberately mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. This allows it to fall down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario, in 1910 and supplied 5,000 horsepower to nearby mines.
Electricity.
Hydroelectricity is the biggest hydropower application. Hydroelectricity generates about 15% of global electricity and provides at least 50% of the total electricity supply for more than 35 countries. In 2021, global installed hydropower electrical capacity reached almost 1400 GW, the highest among all renewable energy technologies.
Hydroelectricity generation starts with converting either the potential energy of water that is present due to the site's elevation or the kinetic energy of moving water into electrical energy.
Hydroelectric power plants vary in terms of the way they harvest energy. One type involves a dam and a reservoir. The water in the reservoir is available on demand to be used to generate electricity by passing through channels that connect the dam to the reservoir. The water spins a turbine, which is connected to the generator that produces electricity.
The other type is called a run-of-river plant. In this case, a barrage is built to control the flow of water, absent a reservoir. The run-of-river power plant needs continuous water flow and therefore has less ability to provide power on demand. The kinetic energy of flowing water is the main source of energy.
Both designs have limitations. For example, dam construction can result in discomfort to nearby residents. The dam and reservoirs occupy a relatively large amount of space that may be opposed by nearby communities. Moreover, reservoirs can potentially have major environmental consequences such as harming downstream habitats. On the other hand, the limitation of the run-of-river project is the decreased efficiency of electricity generation because the process depends on the speed of the seasonal river flow. This means that the rainy season increases electricity generation compared to the dry season.
The size of hydroelectric plants can vary from small plants called micro hydro, to large plants that supply power to a whole country. As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams.
Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped-storage. Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low.
Other forms of electricity generation with hydropower include tidal stream generators using energy from tidal power generated from oceans, rivers, and human-made canal systems to generating electricity.
Rain power.
Rain has been referred to as "one of the last unexploited energy sources in nature. When it rains, billions of litres of water can fall, which have enormous electric potential if used in the right way." Research is being done into the different methods of generating power from rain, such as by using the energy in the impact of raindrops. This is in its very early stages with new and emerging technologies being tested, prototyped and created. Such power has been called rain power. One method in which this has been attempted is by using hybrid solar panels called "all-weather solar panels" that can generate electricity from both the sun and the rain.
According to zoologist and science and technology educator, Luis Villazon, "A 2008 French study estimated that you could use piezoelectric devices, which generate power when they move, to extract 12 milliwatts from a raindrop. Over a year, this would amount to less than 0.001kWh per square metre – enough to power a remote sensor." Villazon suggested a better application would be to collect the water from fallen rain and use it to drive a turbine, with an estimated energy generation of 3 kWh of energy per year for a 185 m2 roof. A microturbine-based system created by three students from the Technological University of Mexico has been used to generate electricity. The Pluvia system "uses the stream of rainwater runoff from houses' rooftop rain gutters to spin a microturbine in a cylindrical housing. Electricity generated by that turbine is used to charge 12-volt batteries."
The term rain power has also been applied to hydropower systems which include the process of capturing the rain.
History.
Ancient history.
Evidence suggests that the fundamentals of hydropower date to ancient Greek civilization. Other evidence indicates that the waterwheel independently emerged in China around the same period. Evidence of water wheels and watermills dates to the ancient Near East in the 4th century BC. Moreover, evidence indicates the use of hydropower in irrigation machines by ancient civilizations such as Sumer and Babylonia. Studies suggest that the water wheel was the initial form of water power, driven by either humans or animals.
In the Roman Empire, water-powered mills were described by Vitruvius by the first century BC. The Barbegal mill, located in modern-day France, had 16 water wheels processing up to 28 tons of grain per day. Roman waterwheels were also used for sawing marble such as the Hierapolis sawmill of the late 3rd century AD. Such sawmills had a waterwheel that drove two crank-and-connecting rods to power two saws. It also appears in two 6th century Eastern Roman sawmills excavated at Ephesus and Gerasa respectively. The crank and connecting rod mechanism of these Roman watermills converted the rotary motion of the waterwheel into the linear movement of the saw blades.
Water-powered trip hammers and bellows in China, during the Han dynasty (202 BC – 220 AD), were initially thought to be powered by water scoops. However, some historians suggested that they were powered by waterwheels, because it was theorized that water scoops would not have had the motive force to operate their blast furnace bellows. Many texts describe the Hun waterwheel; some of the earliest ones are the "Jijiupian" dictionary of 40 BC, Yang Xiong's text known as the "Fangyan" of 15 BC, as well as "Xin Lun," written by Huan Tan about 20 AD. It was also during this time that the engineer Du Shi (c. AD 31) applied the power of waterwheels to piston-bellows in forging cast iron.
Ancient Indian texts dating back to the 4th century BC refer to the term "cakkavattaka" (turning wheel), which commentaries explain as "arahatta-ghati-yanta" (machine with wheel-pots attached); however, whether this was water-powered or hand-powered is disputed by scholars. According to Greek sources, India received Roman water mills and baths in the early 4th century AD. Dams, spillways, reservoirs, channels, and water-balance techniques would develop in India during the Mauryan, Gupta and Chola empires.
Another example of the early use of hydropower is seen in hushing, a historic method of mining that uses flood or torrent of water to reveal mineral veins. The method was first used at the Dolaucothi Gold Mines in Wales from 75 AD onwards. This method was further developed in Spain in mines such as Las Médulas. Hushing was also widely used in Britain in the Medieval and later periods to extract lead and tin ores. It later evolved into hydraulic mining when used during the California Gold Rush in the 19th century.
The Islamic Empire spanned a large region, mainly in Asia and Africa, along with other surrounding areas. During the Islamic Golden Age and the Arab Agricultural Revolution (8th–13th centuries), hydropower was widely used and developed. Early uses of tidal power emerged along with large hydraulic factory complexes. A wide range of water-powered industrial mills were used in the region, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic Empire had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines while employing gears in watermills and water-raising machines. They also pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines. Islamic irrigation techniques, including Persian wheels, would be introduced to India and combined with local methods during the Delhi Sultanate and the Mughal Empire.
Furthermore, in his book, "The Book of Knowledge of Ingenious Mechanical Devices", the Muslim mechanical engineer, Al-Jazari (1136–1206) described designs for 50 devices. Many of these devices were water-powered, including clocks, a device to serve wine, and five devices to lift water from rivers or pools, where three of them are animal-powered and one can be powered by animal or water. Moreover, they included an endless belt with jugs attached, a cow-powered shadoof (a crane-like irrigation tool), and a reciprocating device with hinged valves.
19th century.
In the 19th century, French engineer Benoît Fourneyron developed the first hydropower turbine. This device was implemented in the commercial plant of Niagara Falls in 1895 and it is still operating. In 1878, English engineer William Armstrong built and operated the first private electrical power station, located at his house at Cragside in Northumberland, England. Earlier, in 1753, the French engineer Bernard Forest de Bélidor had published his book, "Architecture Hydraulique", which described vertical-axis and horizontal-axis hydraulic machines.
The growing demands of the Industrial Revolution would drive development as well. At the beginning of the Industrial Revolution in Britain, water was the main power source for new inventions such as Richard Arkwright's water frame. Although water power gave way to steam power in many of the larger mills and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which use the drop in the Mississippi River.
Technological advances moved the open water wheel into an enclosed turbine or water motor. In 1848, the British-American engineer James B. Francis, head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in use. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high-efficiency Pelton wheel impulse turbine, which used hydropower from the high head streams characteristic of the Sierra Nevada.
20th century.
The modern history of hydropower begins in the 1900s, with large dams built not simply to power neighboring mills or factories but to provide extensive electricity for increasingly distant groups of people. Competition drove much of the global hydroelectric craze: Europe competed amongst itself to electrify first, and the United States' hydroelectric plants at Niagara Falls and in the Sierra Nevada inspired bigger and bolder creations across the globe. American and Soviet financiers and hydropower experts also spread the gospel of dams and hydroelectricity across the globe during the Cold War, contributing to projects such as the Three Gorges Dam and the Aswan High Dam.
Feeding desire for large scale electrification with water inherently required large dams across powerful rivers, which impacted public and private interests downstream and in flood zones. Inevitably smaller communities and marginalized groups suffered. They were unable to successfully resist companies flooding them out of their homes or blocking traditional salmon passages. The stagnant water created by hydroelectric dams provides breeding ground for pests and pathogens, leading to local epidemics. However, in some cases, a mutual need for hydropower could lead to cooperation between otherwise adversarial nations.
Hydropower technology and attitude began to shift in the second half of the 20th century. While countries had largely abandoned their small hydropower systems by the 1930s, the smaller hydropower plants began to make a comeback in the 1970s, boosted by government subsidies and a push for more independent energy producers. Some politicians who once advocated for large hydropower projects in the first half of the 20th century began to speak out against them, and citizen groups organizing against dam projects increased.
In the 1980s and 90s the international anti-dam movement had made finding government or private investors for new large hydropower projects incredibly difficult, and given rise to NGOs devoted to fighting dams. Additionally, while the cost of other energy sources fell, the cost of building new hydroelectric dams increased 4% annually between 1965 and 1990, due both to the increasing costs of construction and to the decrease in high quality building sites. In the 1990s, only 18% of the world's electricity came from hydropower. Tidal power production also emerged in the 1960s as a burgeoning alternative hydropower system, though still has not taken hold as a strong energy contender.
United States.
Especially at the start of the American hydropower experiment, engineers and politicians began major hydroelectricity projects to solve a problem of 'wasted potential' rather than to power a population that needed the electricity. When the Niagara Falls Power Company began looking into damming Niagara, the first major hydroelectric project in the United States, in the 1890s they struggled to transport electricity from the falls far enough away to actually reach enough people and justify installation. The project succeeded in large part due to Nikola Tesla's invention of the alternating current motor. On the other side of the country, San Francisco engineers, the Sierra Club, and the federal government fought over acceptable use of the Hetch Hetchy Valley. Despite ostensible protection within a national park, city engineers successfully won the rights to both water and power in the Hetch Hetchy Valley in 1913. After their victory they delivered Hetch Hetchy hydropower and water to San Francisco a decade later and at twice the promised cost, selling power to PG&E which resold to San Francisco residents at a profit.
The American West, with its mountain rivers and lack of coal, turned to hydropower early and often, especially along the Columbia River and its tributaries. The Bureau of Reclamation began building the Hoover Dam in 1931, a project that came to symbolize the job creation and economic growth priorities of the New Deal. The federal government quickly followed Hoover with the Shasta Dam and Grand Coulee Dam. Power demand in Oregon did not justify damming the Columbia until WWI revealed the weaknesses of a coal-based energy economy. The federal government then began prioritizing interconnected power, and lots of it. Electricity from all three dams poured into war production during WWII.
After the war, the Grand Coulee Dam and accompanying hydroelectric projects electrified almost all of the rural Columbia Basin, but failed to improve the lives of those living and farming there the way its boosters had promised, and also damaged the river ecosystem and migrating salmon populations. In the 1940s as well, the federal government took advantage of the sheer amount of unused power and flowing water from the Grand Coulee to build a nuclear site on the banks of the Columbia. The nuclear site leaked radioactive matter into the river, contaminating the entire area.
Post-WWII Americans, especially engineers from the Tennessee Valley Authority, refocused from simply building domestic dams to promoting hydropower abroad. While domestic dam building continued well into the 1970s, with the Reclamation Bureau and Army Corps of Engineers building more than 150 new dams across the American West, organized opposition to hydroelectric dams sparked up in the 1950s and 60s based on environmental concerns. Environmental movements successfully shut down proposed hydropower dams in Dinosaur National Monument and the Grand Canyon, and gained more hydropower-fighting tools with 1970s environmental legislation. As nuclear and fossil fuels grew in the 70s and 80s and environmental activists pushed for river restoration, hydropower gradually faded in American importance.
Africa.
Foreign powers and IGOs have frequently used hydropower projects in Africa as a tool to interfere in the economic development of African countries, such as the World Bank with the Kariba and Akosombo Dams, and the Soviet Union with the Aswan Dam. The Nile River especially has borne the consequences of countries both along the Nile and distant foreign actors using the river to expand their economic power or national force. After the British occupation of Egypt in 1882, the British worked with Egypt to construct the first Aswan Dam, which they heightened in 1912 and 1934 to try to hold back the Nile floods. Egyptian engineer Adriano Daninos developed a plan for the Aswan High Dam, inspired by the Tennessee Valley Authority's multipurpose dam.
When Gamal Abdel Nasser took power in the 1950s, his government decided to undertake the High Dam project, publicizing it as an economic development project. After American refusal to help fund the dam, and anti-British sentiment in Egypt and British interests in neighboring Sudan combined to make the United Kingdom pull out as well, the Soviet Union funded the Aswan High Dam. Between 1977 and 1990 the dam's turbines generated one third of Egypt's electricity. The building of the Aswan Dam triggered a dispute between Sudan and Egypt over the sharing of the Nile, especially since the dam flooded part of Sudan and decreased the volume of water available to them. Ethiopia, also located on the Nile, took advantage of the Cold War tensions to request assistance from the United States for their own irrigation and hydropower investments in the 1960s. While progress stalled due to the coup d'état of 1974 and the following 17-year-long Ethiopian Civil War, Ethiopia eventually began construction on the Grand Ethiopian Renaissance Dam in 2011.
Beyond the Nile, hydroelectric projects cover the rivers and lakes of Africa. The Inga powerplant on the Congo River had been discussed since Belgian colonization in the late 19th century, and was successfully built after independence. Mobutu's government failed to regularly maintain the plants and their capacity declined until the 1995 formation of the Southern African Power Pool created a multi-national power grid and plant maintenance program. States with an abundance of hydropower, such as the Democratic Republic of the Congo and Ghana, frequently sell excess power to neighboring countries. Foreign actors such as Chinese hydropower companies have proposed a significant amount of new hydropower projects in Africa, and already funded and consulted on many others in countries like Mozambique and Ghana.
Small hydropower also played an important role in early 20th century electrification across Africa. In South Africa, small turbines powered gold mines and the first electric railway in the 1890s, and Zimbabwean farmers installed small hydropower stations in the 1930s. While interest faded as national grids improved in the second half of the century, 21st century national governments in countries including South Africa and Mozambique, as well as NGOs serving countries like Zimbabwe, have begun re-exploring small-scale hydropower to diversify power sources and improve rural electrification.
Europe.
In the early 20th century, two major factors motivated the expansion of hydropower in Europe: in the northern countries of Norway and Sweden high rainfall and mountains proved exceptional resources for abundant hydropower, and in the south coal shortages pushed governments and utility companies to seek alternative power sources.
Early on, Switzerland dammed the Alpine rivers and the Swiss Rhine, creating, along with Italy and Scandinavia, a Southern Europe hydropower race. In Italy's Po Valley, the main 20th century transition was not the creation of hydropower but the transition from mechanical to electrical hydropower. 12,000 watermills churned in the Po watershed in the 1890s, but the first commercial hydroelectric plant, completed in 1898, signaled the end of the mechanical reign. These new large plants moved power away from rural mountainous areas to urban centers in the lower plain. Italy prioritized early near-nationwide electrification, almost entirely from hydropower, which powered their rise as a dominant European and imperial force. However, they failed to reach any conclusive standard for determining water rights before WWI.
Modern German hydropower dam construction built off a history of small dams powering mines and mills going back to the 15th century. Some parts of German industry even relied more on waterwheels than on steam until the 1870s. The German government did not set out building large dams such as the prewar Urft, Mohne, and Eder dams to expand hydropower: they mostly wanted to reduce flooding and improve navigation. However, hydropower quickly emerged as an added bonus for all these dams, especially in the coal-poor south. Bavaria even achieved a statewide power grid by damming the Walchensee in 1924, inspired in part by loss of coal reserves after WWI.
Hydropower became a symbol of regional pride and distaste for northern 'coal barons', although the north also held strong enthusiasm for hydropower. Dam building rapidly increased after WWII, this time with the express purpose of increasing hydropower. However, conflict accompanied the dam building and spread of hydropower: agrarian interests suffered from decreased irrigation, small mills lost water flow, and different interest groups fought over where dams should be located, controlling who benefited and whose homes they drowned.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dot{W}_\\text{out} =-\\eta \\ \\dot{m} g \\ \\Delta h =-\\eta \\ \\rho \\dot{V} \\ g \\ \\Delta h"
},
{
"math_id": 1,
"text": "\\dot{W}_\\text{out}"
},
{
"math_id": 2,
"text": "\\eta"
},
{
"math_id": 3,
"text": "\\dot{m}"
},
{
"math_id": 4,
"text": "\\rho"
},
{
"math_id": 5,
"text": "\\dot{V}"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "\\Delta h"
},
{
"math_id": 8,
"text": "\\dot{W}_\\text{out} = 0.85\\times 1000 \\ (\\text{kg}/\\text{m}^3) \\times 80 \\ (\\text{m}^3/\\text{s}) \\times 9.81 \\ (\\text{m}/\\text{s}^2) \\times 145 \\ \\text{m} = 97 \\times 10^6 \\ (\\text{kg}\\ \\text{m}^2/\\text{s}^3) = 97 \\ \\text{MW}"
}
]
| https://en.wikipedia.org/wiki?curid=14073 |
14073967 | Enthalpy of mixing | Change in enthalpy during the mixture of substances
In thermodynamics, the enthalpy of mixing (also heat of mixing and excess enthalpy) is the enthalpy liberated or absorbed from a substance upon mixing. When a substance or compound is combined with any other substance or compound, the enthalpy of mixing is the consequence of the new interactions between the two substances or compounds. This enthalpy, if released exothermically, can in an extreme case cause an explosion.
Enthalpy of mixing can often be ignored in calculations for mixtures where other heat terms exist, or in cases where the mixture is ideal. The sign convention is the same as for enthalpy of reaction: when the enthalpy of mixing is positive, mixing is endothermic, while negative enthalpy of mixing signifies exothermic mixing. In ideal mixtures, the enthalpy of mixing is null. In non-ideal mixtures, the thermodynamic activity of each component is different from its concentration by multiplying with the activity coefficient.
One approximation for calculating the heat of mixing is Flory–Huggins solution theory for polymer solutions.
Formal definition.
For a liquid, enthalpy of mixing can be defined as follows
formula_0
Where:
* "H"(mixture) is the total enthalpy of the mixture
* Δ"H"mix is the enthalpy of mixing
* "x"i is the mole fraction of component "i"
* "H"i is the enthalpy of pure component "i"
Enthalpy of mixing can also be defined using Gibbs free energy of mixing
formula_1
However, Gibbs free energy of mixing and entropy of mixing tend to be more difficult to determine experimentally. As such, enthalpy of mixing tends to be determined experimentally in order to calculate entropy of mixing, rather than the reverse.
Enthalpy of mixing is defined exclusively for the continuum regime, which excludes molecular-scale effects (however, first-principles calculations have been made for some metal-alloy systems, such as Al–Co–Cr or β-Ti).
When two substances are mixed the resulting enthalpy is not an addition of the pure component enthalpies, unless the substances form an ideal mixture. The interactions between each set of molecules determines the final change in enthalpy. For example, when compound “x” has a strong attractive interaction with compound “y” the resulting enthalpy is exothermic. In the case of alcohol and its interactions with a hydrocarbon, the alcohol molecule participates in hydrogen bonding with other alcohol molecules, and these hydrogen bonding interactions are much stronger than alcohol-hydrocarbon interactions, which results in an endothermic heat of mixing.
Calculations.
Enthalpy of mixing is often calculated experimentally using calorimetry methods. A bomb calorimeter is created to be an isolated system. With an insulated frame and a reaction chamber, a bomb calorimeter is used to transfer heat of a reaction or mixing into surrounding water which is then calculated for temperature. A typical solution would use the equation formula_2 (derived from the definition above) in conjunction experimentally determined total-mixture enthalpies and tabulated pure species enthalpies, the difference being equal to enthalpy of mixing.
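As a minimal illustration of that rearranged definition (a sketch with hypothetical numbers, not part of the original text), the enthalpy of mixing can be back-calculated from a measured mixture enthalpy and tabulated pure-species enthalpies:

```python
def enthalpy_of_mixing(h_mixture, mole_fractions, pure_enthalpies):
    """Return dH_mix = H_mixture - sum(x_i * H_i); all values in J/mol."""
    ideal_part = sum(x * h for x, h in zip(mole_fractions, pure_enthalpies))
    return h_mixture - ideal_part

# Hypothetical binary system: pure-species enthalpies referenced to zero,
# measured mixture enthalpy of -250 J/mol at x = (0.4, 0.6).
dh_mix = enthalpy_of_mixing(-250.0, [0.4, 0.6], [0.0, 0.0])
print(dh_mix)  # -250.0: negative, i.e. exothermic mixing
```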
More complex models, such as the Flory-Huggins and UNIFAC models, allow prediction of enthalpies of mixing. Flory-Huggins is useful in calculating enthalpies of mixing for polymeric mixtures and considers a system from a multiplicity perspective.
Calculations of organic enthalpies of mixing can be made by modifying UNIFAC using the equations
formula_3
formula_4
formula_5
Where:
* formula_6 is the mole fraction of component "i"
* formula_7 is the partial molar excess enthalpy of component "i"
* formula_8 is the number of groups of type "k" in molecule "i"
* formula_9 is the excess enthalpy of group "k" in the mixture
* formula_10 is the excess enthalpy of group "k" in pure component "i"
* formula_11 is the area parameter of group "k"
* formula_12 is the area fraction of group "m"
* formula_13 is the mole fraction of group "m" in the mixture
* formula_14 is the group-interaction term, in which formula_16 is a temperature-dependent coordination number
* formula_15 is the temperature derivative of the group-interaction term
It can be seen that prediction of enthalpy of mixing is incredibly complex and requires a plethora of system variables to be known. This explains why enthalpy of mixing is typically experimentally determined.
Relation to the Gibbs free energy of mixing.
The excess Gibbs free energy of mixing can be related to the enthalpy of mixing by the use of the Gibbs–Helmholtz equation:
formula_17
or equivalently
formula_18
In these equations, the excess and total enthalpies of mixing are equal because the ideal enthalpy of mixing is zero. This is not true for the corresponding Gibbs free energies however.
Ideal and regular mixtures.
An ideal mixture is any in which the arithmetic mean (with respect to mole fraction) of the enthalpies of the two pure substances is the same as the enthalpy of the final mixture. Among other important thermodynamic simplifications, this means that enthalpy of mixing is zero: formula_19. Any gas that follows the ideal gas law can be assumed to mix ideally, as can hydrocarbons and liquids with similar molecular interactions and properties.
A regular solution or mixture has a non-zero enthalpy of mixing with an ideal entropy of mixing. Under this assumption, formula_20 scales linearly with formula_21, and is equivalent to the excess internal energy.
Mixing binary mixtures to form ternary mixtures.
The enthalpy of mixing for a ternary mixture can be expressed in terms of the enthalpies of mixing of the corresponding binary mixtures:
formula_22
Where:
This method requires that the interactions between two species are unaffected by the addition of the third species. formula_23 is then evaluated for a binary concentration ratio equal to the concentration ratio of species "i" to "j" in the ternary mixture (formula_24).
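A short sketch of this combination rule follows (illustrative only; the binary enthalpy functions and parameter values are hypothetical regular-solution forms, not data from the article):

```python
def ternary_heat_of_mixing(x1, x2, x3, dh12, dh13, dh23):
    """dH_123 from binary heats of mixing; each dh_ij(x) takes the binary
    mole fraction of species i, evaluated at the ternary ratio x_i : x_j."""
    h23 = dh23(x2 / (x2 + x3))
    h13 = dh13(x1 / (x1 + x3))
    h12 = dh12(x1 / (x1 + x2))
    return (1 - x1) ** 2 * h23 + (1 - x2) ** 2 * h13 + (1 - x3) ** 2 * h12

# Regular-solution binaries dH_ij(x) = Omega_ij * x * (1 - x), in J/mol
omega = {"12": 1500.0, "13": -800.0, "23": 400.0}
dh = ternary_heat_of_mixing(
    0.2, 0.3, 0.5,
    dh12=lambda x: omega["12"] * x * (1 - x),
    dh13=lambda x: omega["13"] * x * (1 - x),
    dh23=lambda x: omega["23"] * x * (1 - x),
)
print(dh)  # J/mol for the ternary composition (0.2, 0.3, 0.5)
```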
Intermolecular forces.
Intermolecular forces are the main constituent of changes in the enthalpy of a mixture. Stronger attractive forces between the mixed molecules, such as hydrogen-bonding, induced-dipole, and dipole-dipole interactions result in a lower enthalpy of the mixture and a release of heat. If strong interactions only exist between like-molecules, such as H-bonds between water in a water-hexane solution, the mixture will have a higher total enthalpy and absorb heat.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_{(mixture)}=\\Delta H_{mix}+\\sum x_iH_{i}"
},
{
"math_id": 1,
"text": "\\Delta G_{mix}=\\Delta H_{mix}-T\\Delta S_{mix}"
},
{
"math_id": 2,
"text": "H_{mixture}=\\Delta H_{mix}+\\sum x_iH_{i}"
},
{
"math_id": 3,
"text": "\\Delta H_{mix}=\\sum x_i \\overline{\\Delta H_i}"
},
{
"math_id": 4,
"text": "\\overline{\\Delta H_i}=\\sum_k N_{ki}(H_k-H^*_{ki})"
},
{
"math_id": 5,
"text": "{H_k\\over{RT^2}}=Q_k\\biggl({\\sum_m{\\theta \\psi '_{mk}}\\over{\\sum_m{\\theta \\psi_{mk}}}}-\\biggl(\\sum_m {{\\theta_m \\psi '_km}\\over{\\sum_n \\theta_n \\psi_{nm}}}-{{{\\theta_m \\psi_{km} (\\sum_n \\theta_n \\psi '_{nm} )\\over{(\\sum_n \\theta_n \\psi_{nm})^2}}}}\\biggr)\\biggr)"
},
{
"math_id": 6,
"text": "x_i"
},
{
"math_id": 7,
"text": "\\overline{\\Delta H_i}"
},
{
"math_id": 8,
"text": "N_{ki}"
},
{
"math_id": 9,
"text": "H_k"
},
{
"math_id": 10,
"text": "H^* _{ki}"
},
{
"math_id": 11,
"text": "Q_k"
},
{
"math_id": 12,
"text": "\\theta_m = {Q_m X_m \\over \\sum_n Q_n X_n}"
},
{
"math_id": 13,
"text": "X_m = {\\sum_i x_i N_{mi} \\over \\sum_i x_i \\sum_k N_{ki}}"
},
{
"math_id": 14,
"text": "\\psi_{mn} = exp \\biggl( - {Z a_{mn} \\over 2T} \\biggr)"
},
{
"math_id": 15,
"text": "\\psi ^* _{mn} = {\\delta \\over \\delta T} ( {\\psi_mn} )"
},
{
"math_id": 16,
"text": "Z=35.2-0.1272T+0.00014T^2"
},
{
"math_id": 17,
"text": "\\left( \\frac{\\partial ( \\Delta G^E/T ) } {\\partial T} \\right)_p = - \\frac {\\Delta H^E} {T^2} = - \\frac {\\Delta H_{mix}} {T^2}"
},
{
"math_id": 18,
"text": "\\left( \\frac{\\partial ( \\Delta G^E/T ) } {\\partial (1/T)} \\right)_p = \\Delta H^E = \\Delta H_{mix}"
},
{
"math_id": 19,
"text": "\\Delta H_{mix,ideal}=0"
},
{
"math_id": 20,
"text": "\\Delta H_{mix}"
},
{
"math_id": 21,
"text": "X_1X_2"
},
{
"math_id": 22,
"text": " \\Delta H_{123} = (1 - x_1)^2 \\Delta H_{23} + (1 - x_2)^2 \\Delta H_{13} + (1 - x_3)^2 \\Delta H_{12}"
},
{
"math_id": 23,
"text": "\\Delta H_{ij}"
},
{
"math_id": 24,
"text": "x_i/x_j"
}
]
| https://en.wikipedia.org/wiki?curid=14073967 |
14075419 | Volume hologram | Volume holograms are holograms where the thickness of the recording material is much larger than the light wavelength used for recording. In this case diffraction of light from the hologram is possible only as Bragg diffraction, i.e., the light has to have the right wavelength (color) and the wave must have the right shape (beam direction, wavefront profile). Volume holograms are also called "thick holograms" or "Bragg holograms".
Theory.
Volume holograms were first treated by H. Kogelnik in 1969 by the so-called "coupled-wave theory". For volume "phase" holograms it is possible to diffract 100% of the incoming reference light into the signal wave, i.e., full diffraction of light can be achieved. Volume "absorption" holograms show much lower efficiencies. H. Kogelnik provides analytical solutions for transmission as well as for reflection conditions. A good text-book description of the theory of volume holograms can be found in a book from J. Goodman.
Manufacturing.
A volume hologram is usually made by exposing a photo-thermo-refractive glass to an interference pattern from an ultraviolet laser. It is also possible to make volume holograms in nonphotosensitive glass by exposing it to femtosecond laser pulses.
Bragg selectivity.
In the case of a simple Bragg reflector the wavelength selectivity formula_0 can be estimated by formula_1, where formula_2 is the vacuum wavelength of the reading light, formula_3 is the period length of the grating, and formula_4 is the thickness of the grating. The assumption is just that the grating is not too strong, i.e., that the full length of the grating is used for light diffraction. Considering that because of the Bragg condition the simple relation formula_5 holds, where formula_6 is the modulated refractive index in the material (not the base index) at this wavelength, one sees that for typical values (formula_7) one gets formula_8, showing the extraordinary wavelength selectivity of such volume holograms.
In the case of a simple grating in the transmission geometry, the angular selectivity formula_9 can be estimated as well: formula_10, where formula_11 is the thickness of the holographic grating. Here formula_3 is given by formula_12.
Using again typical numbers (formula_13), one ends up with formula_14, showing the impressive angular selectivity of volume holograms.
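Both estimates can be checked numerically with the stated typical values (a sketch for illustration, not part of the original article):

```python
import math

lam = 500e-9                       # vacuum wavelength (m)

# Reflection geometry: Lambda = lam / (2*dn), then d_lam ~ lam * Lambda / L
dn, L = 0.01, 1e-3                 # index modulation; grating thickness (m)
Lambda_refl = lam / (2 * dn)
print(lam * Lambda_refl / L)       # ~1.25e-08 m, i.e. ~12.5 nm

# Transmission geometry: Lambda = lam / (2*sin(theta)), then d_theta ~ Lambda / d
theta, d = math.radians(45), 1e-2  # Bragg angle; grating thickness (m)
Lambda_trans = lam / (2 * math.sin(theta))
print(Lambda_trans / d)            # ~3.5e-05 rad, i.e. ~0.002 degrees
```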
Applications of volume holograms.
The Bragg selectivity makes volume holograms very important; prominent examples include wavelength-selective filters and mirrors (e.g. for stabilizing laser diodes), holographic data storage, where many holograms can be multiplexed in the same volume, and notch filters for Raman spectroscopy.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta\\lambda"
},
{
"math_id": 1,
"text": "\\Delta\\lambda/\\lambda \\approx \\Lambda/L"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "\\Lambda"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "\\Lambda = \\lambda/(2\\Delta n)"
},
{
"math_id": 6,
"text": "\\Delta n"
},
{
"math_id": 7,
"text": "\\lambda = 500\\text{ nm},\\ L = 1\\text{ mm},\\ \\Delta n = 0.01"
},
{
"math_id": 8,
"text": "\\Delta\\lambda \\approx 12.5\\text{ nm}"
},
{
"math_id": 9,
"text": "\\Delta\\Theta"
},
{
"math_id": 10,
"text": "\\Delta\\Theta \\approx \\Lambda/d"
},
{
"math_id": 11,
"text": "d"
},
{
"math_id": 12,
"text": "\\Lambda = (\\lambda/2\\sin\\Theta"
},
{
"math_id": 13,
"text": "\\lambda = 500\\text{ nm},\\ d = 1\\text{ cm},\\ \\Theta = 45^\\circ"
},
{
"math_id": 14,
"text": "\\Delta\\Theta \\approx 4 \\times 10^{-5}\\text{ rad} \\approx 0.002^\\circ"
}
]
| https://en.wikipedia.org/wiki?curid=14075419 |
14076693 | Relative dimension | Difference between two dimensions
In mathematics, specifically linear algebra and geometry, relative dimension is the dual notion to codimension.
In linear algebra, given a quotient map formula_0, the difference dim "V" − dim "Q" is the relative dimension; this equals the dimension of the kernel.
In fiber bundles, the relative dimension of the map is the dimension of the fiber.
More abstractly, the codimension of a map is the dimension of the cokernel, while the relative dimension of a map is the dimension of the kernel.
These are dual in that the inclusion of a subspace formula_1 of codimension "k" dualizes to yield a quotient map formula_2 of relative dimension "k", and conversely.
The additivity of codimension under intersection corresponds to the additivity of relative dimension in a fiber product. Just as codimension is mostly used for injective maps, relative dimension is mostly used for surjective maps.
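As a concrete illustration (a numerical sketch, not from the original article), the relative dimension of a surjective linear map equals its nullity, dim "V" − rank, which is easy to compute:

```python
import numpy as np

# A surjective quotient map R^5 -> R^2 represented by a full-row-rank matrix.
A = np.array([[1.0, 0.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 3.0, 0.0]])

rank = np.linalg.matrix_rank(A)     # dim Q = 2 (map is surjective)
relative_dim = A.shape[1] - rank    # dim V - dim Q = dim(kernel) = 3
print(relative_dim)
```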
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V \\to Q"
},
{
"math_id": 1,
"text": "V \\to W"
},
{
"math_id": 2,
"text": "W^* \\to V^*"
}
]
| https://en.wikipedia.org/wiki?curid=14076693 |
1407993 | Code-excited linear prediction | Speech coding algorithm
Code-excited linear prediction (CELP) is a linear predictive speech coding algorithm originally proposed by Manfred R. Schroeder and Bishnu S. Atal in 1985. At the time, it provided significantly better quality than existing low bit-rate algorithms, such as residual-excited linear prediction (RELP) and linear predictive coding (LPC) vocoders (e.g., FS-1015). Along with its variants, such as algebraic CELP, relaxed CELP, low-delay CELP and vector sum excited linear prediction, it is currently the most widely used speech coding algorithm. It is also used in MPEG-4 Audio speech coding. CELP is commonly used as a generic term for a class of algorithms and not for a particular codec.
Background.
The CELP algorithm is based on four main ideas:
* using the source-filter model of speech production through linear prediction (LP)
* using an adaptive and a fixed codebook as the input (excitation) of the LP model
* performing a search in closed loop in a "perceptually weighted domain"
* applying vector quantization (VQ)
The original algorithm as simulated in 1983 by Schroeder and Atal required 150 seconds to encode 1 second of speech when run on a Cray-1 supercomputer. Since then, more efficient ways of implementing the codebooks and improvements in computing capabilities have made it possible to run the algorithm in embedded devices, such as mobile phones.
CELP decoder.
Before exploring the complex encoding process of CELP, we introduce the decoder here. In a generic CELP decoder, the excitation is produced by summing the contributions from fixed (a.k.a. stochastic or innovation) and adaptive (a.k.a. pitch) codebooks:
formula_0
where formula_1 is the fixed (a.k.a. stochastic or innovation) codebook contribution and formula_2 is the adaptive (pitch) codebook contribution. The fixed codebook is a vector quantization dictionary that is (implicitly or explicitly) hard-coded into the codec. This codebook can be algebraic (ACELP) or be stored explicitly (e.g. Speex). The entries in the adaptive codebook consist of delayed versions of the excitation. This makes it possible to efficiently code periodic signals, such as voiced sounds.
The filter that shapes the excitation has an all-pole model of the form formula_3, where formula_4 is called the prediction filter and is obtained using linear prediction (Levinson–Durbin algorithm). An all-pole filter is used because it is a good representation of the human vocal tract and because it is easy to compute.
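A minimal numerical sketch of this decoding step is shown below (illustrative only, and not any specific codec: real CELP decoders add gain quantization, subframe interpolation and wrap-around handling for pitch lags shorter than the subframe; the codebook contents here are placeholders):

```python
import numpy as np
from scipy.signal import lfilter

def celp_decode_subframe(fixed_vec, past_excitation, pitch_lag, pitch_gain, lpc):
    """e[n] = e_f[n] + e_a[n], then all-pole synthesis through 1/A(z)."""
    n = len(fixed_vec)
    start = len(past_excitation) - pitch_lag
    adaptive_vec = past_excitation[start:start + n]  # delayed excitation (lag >= n here)
    excitation = fixed_vec + pitch_gain * adaptive_vec
    speech = lfilter([1.0], lpc, excitation)         # lpc holds the A(z) coefficients
    return speech, excitation

rng = np.random.default_rng(0)
past = rng.standard_normal(200)      # previously decoded excitation (placeholder)
fixed = np.zeros(40)
fixed[[3, 17, 29]] = 1.0             # sparse innovation pulses (placeholder)
out, exc = celp_decode_subframe(fixed, past, pitch_lag=50, pitch_gain=0.8,
                                lpc=[1.0, -0.9])
```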
CELP encoder.
The main principle behind CELP is called analysis-by-synthesis (AbS) and means that the encoding (analysis) is performed by perceptually optimizing the decoded (synthesis) signal in a closed loop. In theory, the best CELP stream would be produced by trying all possible bit combinations and selecting the one that produces the best-sounding decoded signal. This is obviously not possible in practice for two reasons: the required complexity is beyond any currently available hardware and the “best sounding” selection criterion implies a human listener.
In order to achieve real-time encoding using limited computing resources, the CELP search is broken down into smaller, more manageable, sequential searches using a simple perceptual weighting function: typically, the linear prediction coefficients are determined and quantized first, then the adaptive (pitch) codebook is searched, and finally the fixed (innovation) codebook.
Noise weighting.
Most (if not all) modern audio codecs attempt to shape the coding noise so that it appears mostly in the frequency regions where the ear cannot detect it. For example, the ear is more tolerant to noise in parts of the spectrum that are louder and vice versa. That's why instead of minimizing the simple quadratic error, CELP minimizes the error for the "perceptually weighted" domain. The weighting filter W(z) is typically derived from the LPC filter by the use of bandwidth expansion:
formula_5
where formula_6. | [
{
"math_id": 0,
"text": "e[n]=e_f[n]+e_a[n]\\,"
},
{
"math_id": 1,
"text": "e_{f}[n]"
},
{
"math_id": 2,
"text": "e_{a}[n]"
},
{
"math_id": 3,
"text": "1/A(z)"
},
{
"math_id": 4,
"text": "A(z)"
},
{
"math_id": 5,
"text": "W(z) = \\frac{A(z/\\gamma_1)}{A(z/\\gamma_2)}"
},
{
"math_id": 6,
"text": "\\gamma_1 > \\gamma_2"
}
]
| https://en.wikipedia.org/wiki?curid=1407993 |
1408000 | Smoothness | Number of derivatives of a function (mathematics)
In mathematical analysis, the smoothness of a function is a property measured by the number, called "differentiability class", of continuous derivatives it has over its domain.
A function of class formula_0 is a function of smoothness at least k; that is, a function of class formula_0 is a function that has a kth derivative that is continuous in its domain.
A function of class formula_1 or formula_1-function (pronounced C-infinity function) is an infinitely differentiable function, that is, a function that has derivatives of all orders (this implies that all these derivatives are continuous).
Generally, the term smooth function refers to a formula_2-function. However, it may also mean "sufficiently differentiable" for the problem under consideration.
Differentiability classes.
Differentiability class is a classification of functions according to the properties of their derivatives. It is a measure of the highest order of derivative that exists and is continuous for a function.
Consider an open set formula_3 on the real line and a function formula_4 defined on formula_3 with real values. Let "k" be a non-negative integer. The function formula_4 is said to be of differentiability class "formula_0" if the derivatives formula_5 exist and are continuous on formula_6 If formula_4 is formula_7-differentiable on formula_8 then it is at least in the class formula_9 since formula_10 are continuous on formula_6 The function formula_4 is said to be infinitely differentiable, smooth, or of class formula_11 if it has derivatives of all orders on formula_6 (So all these derivatives are continuous functions over formula_6) The function formula_4 is said to be of class formula_12 or "analytic", if formula_4 is smooth (i.e., formula_4 is in the class formula_1) and its Taylor series expansion around any point in its domain converges to the function in some neighborhood of the point. There exist functions that are smooth but not analytic; formula_13 is thus strictly contained in formula_14 Bump functions are examples of functions with this property.
To put it differently, the class formula_15 consists of all continuous functions. The class formula_16 consists of all differentiable functions whose derivative is continuous; such functions are called "continuously differentiable". Thus, a formula_16 function is exactly a function whose derivative exists and is of class formula_17 In general, the classes formula_0 can be defined recursively by declaring formula_15 to be the set of all continuous functions, and declaring formula_0 for any positive integer formula_7 to be the set of all differentiable functions whose derivative is in formula_18 In particular, formula_0 is contained in formula_9 for every formula_19 and there are examples to show that this containment is strict (formula_20). The class formula_1 of infinitely differentiable functions, is the intersection of the classes formula_0 as formula_7 varies over the non-negative integers.
Examples.
Example: Continuous ("C"0) But Not Differentiable.
The function
formula_21
is continuous, but not differentiable at x = 0, so it is of class "C"0, but not of class "C"1.
Example: Finitely-times Differentiable ("C"k).
For each even integer k, the function
formula_22
is continuous and k times differentiable at all x. At x = 0, however, formula_4 is not (k + 1) times differentiable, so formula_4 is of class "C"k, but not of class "C"j where j > k.
Example: Differentiable But Not Continuously Differentiable (not "C"1).
The function
formula_23
is differentiable, with derivative
formula_24
Because formula_25 oscillates as x → 0, formula_26 is not continuous at zero. Therefore, formula_27 is differentiable but not of class "C"1.
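Assuming the standard form of this example, f(x) = x² sin(1/x) with f(0) = 0, a quick numerical check (a sketch added for illustration) shows the derivative taking values near ±1 arbitrarily close to zero, so it cannot be continuous there:

```python
import numpy as np

def f_prime(x):
    """Derivative of x^2 * sin(1/x) for x != 0: 2x*sin(1/x) - cos(1/x)."""
    return 2 * x * np.sin(1 / x) - np.cos(1 / x)

# At x = 1/(k*pi) the sine term vanishes and cos(1/x) = (-1)^k,
# so f'(x) alternates between about -1 and +1 as x -> 0.
k = np.arange(1000, 1006)
print(f_prime(1.0 / (k * np.pi)))   # alternating values near +/-1, not -> f'(0) = 0
```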
Example: Differentiable But Not Lipschitz Continuous.
The function
formula_28
is differentiable but its derivative is unbounded on a compact set. Therefore, formula_29 is an example of a function that is differentiable but not locally Lipschitz continuous.
Example: Analytic ("C"ω).
The exponential function formula_30 is analytic, and hence falls into the class "C"ω. The trigonometric functions are also analytic wherever they are defined, because they are linear combinations of complex exponential functions formula_31 and formula_32.
Example: Smooth ("C"∞) but not Analytic ("C"ω).
The bump function
formula_33
is smooth, so of class "C"∞, but it is not analytic at x = ±1, and hence is not of class "C"ω. The function f is an example of a smooth function with compact support.
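Assuming the standard bump function f(x) = exp(−1/(1 − x²)) on |x| < 1 and 0 elsewhere (a sketch for illustration, not from the original), the flat tails and smooth interior are easy to see numerically:

```python
import numpy as np

def bump(x):
    """exp(-1/(1 - x^2)) for |x| < 1, and exactly 0 elsewhere."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 9)
print(bump(x))   # identically zero for |x| >= 1, smooth and positive inside
```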
Multivariate differentiability classes.
A function formula_34 defined on an open set formula_3 of formula_35 is said to be of class formula_0 on formula_3, for a positive integer formula_7, if all partial derivatives
formula_36
exist and are continuous, for every formula_37 non-negative integers, such that formula_38, and every formula_39. Equivalently, formula_4 is of class formula_0 on formula_3 if the formula_7-th order Fréchet derivative of formula_4 exists and is continuous at every point of formula_3. The function formula_4 is said to be of class formula_40 or formula_15 if it is continuous on formula_3. Functions of class formula_16 are also said to be "continuously differentiable".
A function formula_41, defined on an open set formula_3 of formula_35, is said to be of class formula_0 on formula_3, for a positive integer formula_7, if all of its components
formula_42
are of class formula_0, where formula_43 are the natural projections formula_44 defined by formula_45. It is said to be of class formula_40 or formula_15 if it is continuous, or equivalently, if all components formula_46 are continuous, on formula_3.
The space of "C""k" functions.
Let formula_47 be an open subset of the real line. The set of all formula_0 real-valued functions defined on formula_47 is a Fréchet vector space, with the countable family of seminorms
formula_48
where formula_49 varies over an increasing sequence of compact sets whose union is formula_47, and formula_50.
The set of formula_1 functions over formula_47 also forms a Fréchet space. One uses the same seminorms as above, except that formula_51 is allowed to range over all non-negative integer values.
The above spaces occur naturally in applications where functions having derivatives of certain orders are necessary; however, particularly in the study of partial differential equations, it can sometimes be more fruitful to work instead with the Sobolev spaces.
Continuity.
The terms "parametric continuity" ("C""k") and "geometric continuity" ("G""n") were introduced by Brian Barsky to show that the smoothness of a curve could be measured by removing restrictions on the speed with which the parameter traces out the curve.
Parametric continuity.
Parametric continuity (Ck) is a concept applied to parametric curves, which describes the smoothness of the parameter's value with distance along the curve. A (parametric) curve formula_52 is said to be of class "C""k" if formula_53 exists and is continuous on formula_54, where derivatives at the end-points formula_55 and formula_56 are taken to be one-sided derivatives (from the right at formula_55 and from the left at formula_56).
As a practical application of this concept, a curve describing the motion of an object with a parameter of time must have "C"1 continuity, and its first derivative must be differentiable, for the object to have finite acceleration. For smoother motion, such as that of a camera's path while making a film, higher orders of parametric continuity are required.
Order of parametric continuity.
The various orders of parametric continuity can be described as follows:
* "C"0: the curves are joined at the point
* "C"1: the first derivatives are also equal at the join
* "C"2: the first and second derivatives are equal at the join
* "C"n: the first through "n"th derivatives are equal at the join
Geometric continuity.
A curve or surface can be described as having formula_60 continuity, with formula_59 being the increasing measure of smoothness. Consider the segments either side of a point on a curve:
* "G"0: the curves touch at the join point
* "G"1: the curves also share a common tangent direction at the join point
* "G"2: the curves also share a common center of curvature at the join point
In general, formula_60 continuity exists if the curves can be reparameterized to have formula_58 (parametric) continuity. A reparametrization of the curve is geometrically identical to the original; only the parameter is affected.
Equivalently, two vector functions formula_64 and formula_65 such that formula_66 have formula_60 continuity at the point where they meet if
they satisfy equations known as Beta-constraints. For example, the Beta-constraints for formula_67 continuity are:
formula_68
where formula_69, formula_70, and formula_71 are arbitrary, but formula_72 is constrained to be positive.
In the case formula_73, this reduces to
formula_74 and formula_75, for a scalar formula_76 (i.e., the direction, but not necessarily the magnitude, of the two vectors is equal).
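The reduced condition just stated, equal join points with parallel, same-direction tangents, can be checked numerically; below is a sketch (not from the original) for two 2-D parametric segments:

```python
import numpy as np

def is_g1(p_end, t_end, q_start, t_start, tol=1e-9):
    """G1 at a join: curves touch (G0) and tangents satisfy t_end = k*t_start, k > 0."""
    if not np.allclose(p_end, q_start, atol=tol):
        return False                                        # not even G0
    cross = t_end[0] * t_start[1] - t_end[1] * t_start[0]   # zero iff parallel
    return abs(cross) < tol and np.dot(t_end, t_start) > 0  # same direction

# Segments meeting at (1, 1); tangents (1, 2) and (2, 4) are positive multiples.
print(is_g1(np.array([1.0, 1.0]), np.array([1.0, 2.0]),
            np.array([1.0, 1.0]), np.array([2.0, 4.0])))    # True
```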
While it may be obvious that a curve would require formula_62 continuity to appear smooth, for good aesthetics, such as those aspired to in architecture and sports car design, higher levels of geometric continuity are required. For example, reflections in a car body will not appear smooth unless the body has formula_63 continuity.
A rounded rectangle (with ninety degree circular arcs at the four corners) has formula_62 continuity, but does not have formula_63 continuity. The same is true for a rounded cube, with octants of a sphere at its corners and quarter-cylinders along its edges. If an editable curve with formula_63 continuity is required, then cubic splines are typically chosen; these curves are frequently used in industrial design.
Other concepts.
Relation to analyticity.
While all analytic functions are "smooth" (i.e. have all derivatives continuous) on the set on which they are analytic, examples such as bump functions (mentioned above) show that the converse is not true for functions on the reals: there exist smooth real functions that are not analytic. Simple examples of functions that are smooth but not analytic at any point can be made by means of Fourier series; another example is the Fabius function. Although it might seem that such functions are the exception rather than the rule, it turns out that the analytic functions are scattered very thinly among the smooth ones; more rigorously, the analytic functions form a meagre subset of the smooth functions. Furthermore, for every open subset "A" of the real line, there exist smooth functions that are analytic on "A" and nowhere else .
It is useful to compare the situation to that of the ubiquity of transcendental numbers on the real line. Both on the real line and the set of smooth functions, the examples we come up with at first thought (algebraic/rational numbers and analytic functions) are far better behaved than the majority of cases: the transcendental numbers and nowhere analytic functions have full measure (their complements are meagre).
The situation thus described is in marked contrast to complex differentiable functions. If a complex function is differentiable just once on an open set, it is both infinitely differentiable and analytic on that set .
Smooth partitions of unity.
Smooth functions with given closed support are used in the construction of smooth partitions of unity (see "partition of unity" and topology glossary); these are essential in the study of smooth manifolds, for example to show that Riemannian metrics can be defined globally starting from their local existence. A simple case is that of a "bump function" on the real line, that is, a smooth function "f" that takes the value 0 outside an interval ["a","b"] and such that
formula_77
Given a number of overlapping intervals on the line, bump functions can be constructed on each of them, and on semi-infinite intervals formula_78 and formula_79 to cover the whole line, such that the sum of the functions is always 1.
From what has just been said, partitions of unity do not apply to holomorphic functions; their different behavior relative to existence and analytic continuation is one of the roots of sheaf theory. In contrast, sheaves of smooth functions tend not to carry much topological information.
Smooth functions on and between manifolds.
Given a smooth manifold formula_80, of dimension formula_81 and an atlas formula_82 then a map formula_83 is smooth on formula_80 if for all formula_84 there exists a chart formula_85 such that formula_86 and formula_87 is a smooth function from a neighborhood of formula_88 in formula_89 to formula_90 (all partial derivatives up to a given order are continuous). Smoothness can be checked with respect to any chart of the atlas that contains formula_91 since the smoothness requirements on the transition functions between charts ensure that if formula_4 is smooth near formula_92 in one chart it will be smooth near formula_92 in any other chart.
If formula_93 is a map from formula_80 to an formula_59-dimensional manifold formula_94, then formula_95 is smooth if, for every formula_96 there is a chart formula_97 containing formula_91 and a chart formula_98 containing formula_99 such that formula_100 and formula_101 is a smooth function from formula_102
Smooth maps between manifolds induce linear maps between tangent spaces: for formula_93, at each point the pushforward (or differential) maps tangent vectors at formula_92 to tangent vectors at formula_99: formula_103 and on the level of the tangent bundle, the pushforward is a vector bundle homomorphism: formula_104 The dual to the pushforward is the pullback, which "pulls" covectors on formula_94 back to covectors on formula_105 and formula_7-forms to formula_7-forms: formula_106 In this way smooth functions between manifolds can transport local data, like vector fields and differential forms, from one manifold to another, or down to Euclidean space where computations like integration are well understood.
Preimages and pushforwards along smooth functions are, in general, not manifolds without additional assumptions. Preimages of regular points (that is, if the differential does not vanish on the preimage) are manifolds; this is the preimage theorem. Similarly, pushforwards along embeddings are manifolds.
Smooth functions between subsets of manifolds.
There is a corresponding notion of smooth map for arbitrary subsets of manifolds. If formula_107 is a function whose domain and range are subsets of manifolds formula_108 and formula_109 respectively. formula_4 is said to be smooth if for all formula_110 there is an open set formula_111 with formula_112 and a smooth function formula_113 such that formula_114 for all formula_115
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C^k"
},
{
"math_id": 1,
"text": "C^\\infty"
},
{
"math_id": 2,
"text": "C^{\\infty}"
},
{
"math_id": 3,
"text": "U"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "f',f'',\\dots,f^{(k)}"
},
{
"math_id": 6,
"text": "U."
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "U,"
},
{
"math_id": 9,
"text": "C^{k-1}"
},
{
"math_id": 10,
"text": "f',f'',\\dots,f^{(k-1)}"
},
{
"math_id": 11,
"text": "C^\\infty,"
},
{
"math_id": 12,
"text": "C^\\omega,"
},
{
"math_id": 13,
"text": "C^\\omega"
},
{
"math_id": 14,
"text": "C^\\infty."
},
{
"math_id": 15,
"text": "C^0"
},
{
"math_id": 16,
"text": "C^1"
},
{
"math_id": 17,
"text": "C^0."
},
{
"math_id": 18,
"text": "C^{k-1}."
},
{
"math_id": 19,
"text": "k>0,"
},
{
"math_id": 20,
"text": "C^k \\subsetneq C^{k-1}"
},
{
"math_id": 21,
"text": "f(x) = \\begin{cases}x & \\mbox{if } x \\geq 0, \\\\ 0 &\\text{if } x < 0\\end{cases}"
},
{
"math_id": 22,
"text": "f(x)=|x|^{k+1}"
},
{
"math_id": 23,
"text": "g(x) = \\begin{cases}x^2\\sin{\\left(\\tfrac{1}{x}\\right)} & \\text{if }x \\neq 0, \\\\ 0 &\\text{if }x = 0\\end{cases}"
},
{
"math_id": 24,
"text": "g'(x) = \\begin{cases}-\\mathord{\\cos\\left(\\tfrac{1}{x}\\right)} + 2x\\sin\\left(\\tfrac{1}{x}\\right) & \\text{if }x \\neq 0, \\\\ 0 &\\text{if }x = 0.\\end{cases}"
},
{
"math_id": 25,
"text": "\\cos(1/x)"
},
{
"math_id": 26,
"text": "g'(x)"
},
{
"math_id": 27,
"text": "g(x)"
},
{
"math_id": 28,
"text": "h(x) = \\begin{cases}x^{4/3}\\sin{\\left(\\tfrac{1}{x}\\right)} & \\text{if }x \\neq 0, \\\\ 0 &\\text{if }x = 0\\end{cases}"
},
{
"math_id": 29,
"text": "h"
},
{
"math_id": 30,
"text": "e^{x}"
},
{
"math_id": 31,
"text": "e^{ix}"
},
{
"math_id": 32,
"text": "e^{-ix}"
},
{
"math_id": 33,
"text": "f(x) = \\begin{cases}e^{-\\frac{1}{1-x^2}} & \\text{ if } |x| < 1, \\\\ 0 &\\text{ otherwise }\\end{cases}"
},
{
"math_id": 34,
"text": "f:U\\subset\\mathbb{R}^n\\to\\mathbb{R}"
},
{
"math_id": 35,
"text": "\\mathbb{R}^n"
},
{
"math_id": 36,
"text": "\\frac{\\partial^\\alpha f}{\\partial x_1^{\\alpha_1} \\, \\partial x_2^{\\alpha_2}\\,\\cdots\\,\\partial x_n^{\\alpha_n}}(y_1,y_2,\\ldots,y_n)"
},
{
"math_id": 37,
"text": "\\alpha_1,\\alpha_2,\\ldots,\\alpha_n"
},
{
"math_id": 38,
"text": "\\alpha=\\alpha_1+\\alpha_2+\\cdots+\\alpha_n\\leq k"
},
{
"math_id": 39,
"text": "(y_1,y_2,\\ldots,y_n)\\in U"
},
{
"math_id": 40,
"text": "C"
},
{
"math_id": 41,
"text": "f:U\\subset\\mathbb{R}^n\\to\\mathbb{R}^m"
},
{
"math_id": 42,
"text": "f_i(x_1,x_2,\\ldots,x_n)=(\\pi_i\\circ f)(x_1,x_2,\\ldots,x_n)=\\pi_i(f(x_1,x_2,\\ldots,x_n)) \\text{ for } i=1,2,3,\\ldots,m"
},
{
"math_id": 43,
"text": "\\pi_i"
},
{
"math_id": 44,
"text": "\\pi_i:\\mathbb{R}^m\\to\\mathbb{R}"
},
{
"math_id": 45,
"text": "\\pi_i(x_1,x_2,\\ldots,x_m)=x_i"
},
{
"math_id": 46,
"text": "f_i"
},
{
"math_id": 47,
"text": "D"
},
{
"math_id": 48,
"text": "p_{K, m}=\\sup_{x\\in K}\\left|f^{(m)}(x)\\right|"
},
{
"math_id": 49,
"text": "K"
},
{
"math_id": 50,
"text": "m=0,1,\\dots,k"
},
{
"math_id": 51,
"text": "m"
},
{
"math_id": 52,
"text": "s:[0,1]\\to\\mathbb{R}^n"
},
{
"math_id": 53,
"text": "\\textstyle \\frac{d^ks}{dt^k}"
},
{
"math_id": 54,
"text": "[0,1]"
},
{
"math_id": 55,
"text": "0"
},
{
"math_id": 56,
"text": "1"
},
{
"math_id": 57,
"text": "C^2"
},
{
"math_id": 58,
"text": "C^n"
},
{
"math_id": 59,
"text": "n"
},
{
"math_id": 60,
"text": "G^n"
},
{
"math_id": 61,
"text": "G^0"
},
{
"math_id": 62,
"text": "G^1"
},
{
"math_id": 63,
"text": "G^2"
},
{
"math_id": 64,
"text": "f(t)"
},
{
"math_id": 65,
"text": "g(t)"
},
{
"math_id": 66,
"text": "f(1)=g(0)"
},
{
"math_id": 67,
"text": "G^4"
},
{
"math_id": 68,
"text": "\n\\begin{align}\ng^{(1)}(0) & = \\beta_1 f^{(1)}(1) \\\\\ng^{(2)}(0) & = \\beta_1^2 f^{(2)}(1) + \\beta_2 f^{(1)}(1) \\\\\ng^{(3)}(0) & = \\beta_1^3 f^{(3)}(1) + 3\\beta_1\\beta_2 f^{(2)}(1) +\\beta_3 f^{(1)}(1) \\\\\ng^{(4)}(0) & = \\beta_1^4 f^{(4)}(1) + 6\\beta_1^2\\beta_2 f^{(3)}(1) +(4\\beta_1\\beta_3+3\\beta_2^2) f^{(2)}(1) +\\beta_4 f^{(1)}(1) \\\\\n\\end{align}\n"
},
{
"math_id": 69,
"text": "\\beta_2"
},
{
"math_id": 70,
"text": "\\beta_3"
},
{
"math_id": 71,
"text": "\\beta_4"
},
{
"math_id": 72,
"text": "\\beta_1"
},
{
"math_id": 73,
"text": "n=1"
},
{
"math_id": 74,
"text": "f'(1)\\neq0"
},
{
"math_id": 75,
"text": "f'(1) = kg'(0)"
},
{
"math_id": 76,
"text": "k>0"
},
{
"math_id": 77,
"text": "f(x) > 0 \\quad \\text{ for } \\quad a < x < b.\\,"
},
{
"math_id": 78,
"text": "(-\\infty, c]"
},
{
"math_id": 79,
"text": "[d, +\\infty)"
},
{
"math_id": 80,
"text": "M"
},
{
"math_id": 81,
"text": "m,"
},
{
"math_id": 82,
"text": "\\mathfrak{U} = \\{(U_\\alpha,\\phi_\\alpha)\\}_\\alpha,"
},
{
"math_id": 83,
"text": "f:M\\to \\R"
},
{
"math_id": 84,
"text": "p \\in M"
},
{
"math_id": 85,
"text": "(U, \\phi) \\in \\mathfrak{U},"
},
{
"math_id": 86,
"text": "p \\in U,"
},
{
"math_id": 87,
"text": "f \\circ \\phi^{-1} : \\phi(U) \\to \\R"
},
{
"math_id": 88,
"text": "\\phi(p)"
},
{
"math_id": 89,
"text": "\\R^m"
},
{
"math_id": 90,
"text": "\\R"
},
{
"math_id": 91,
"text": "p,"
},
{
"math_id": 92,
"text": "p"
},
{
"math_id": 93,
"text": "F : M \\to N"
},
{
"math_id": 94,
"text": "N"
},
{
"math_id": 95,
"text": "F"
},
{
"math_id": 96,
"text": "p \\in M,"
},
{
"math_id": 97,
"text": "(U,\\phi)"
},
{
"math_id": 98,
"text": "(V, \\psi)"
},
{
"math_id": 99,
"text": "F(p)"
},
{
"math_id": 100,
"text": "F(U) \\subset V,"
},
{
"math_id": 101,
"text": "\\psi \\circ F \\circ \\phi^{-1} : \\phi(U) \\to \\psi(V)"
},
{
"math_id": 102,
"text": "\\R^n."
},
{
"math_id": 103,
"text": "F_{*,p} : T_p M \\to T_{F(p)}N,"
},
{
"math_id": 104,
"text": "F_* : TM \\to TN."
},
{
"math_id": 105,
"text": "M,"
},
{
"math_id": 106,
"text": "F^* : \\Omega^k(N) \\to \\Omega^k(M)."
},
{
"math_id": 107,
"text": "f : X \\to Y"
},
{
"math_id": 108,
"text": "X \\subseteq M"
},
{
"math_id": 109,
"text": "Y \\subseteq N"
},
{
"math_id": 110,
"text": "x \\in X"
},
{
"math_id": 111,
"text": "U \\subseteq M"
},
{
"math_id": 112,
"text": "x \\in U"
},
{
"math_id": 113,
"text": "F : U \\to N"
},
{
"math_id": 114,
"text": "F(p) = f(p)"
},
{
"math_id": 115,
"text": "p \\in U \\cap X."
}
]
| https://en.wikipedia.org/wiki?curid=1408000 |
140806 | Maximum likelihood estimation | Method of estimating the parameters of a statistical model, given observations
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.
From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with a prior distribution that is uniform in the region of interest. In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
Principles.
We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector formula_0 so that this distribution falls within a parametric family formula_1 where formula_2 is called the "parameter space", a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample formula_3 gives a real-valued function,
formula_4
which is called the likelihood function. For independent and identically distributed random variables, formula_5 will be the product of univariate density functions:
formula_6
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is
formula_7
Intuitively, this selects the parameter values that make the observed data most probable. The specific value formula_8 that maximizes the likelihood function formula_9 is called the maximum likelihood estimate. Further, if the function formula_10 so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space formula_2 that is compact. For an open formula_2 the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:
formula_11
Since the logarithm is a monotonic function, the maximum of formula_12 occurs at the same value of formula_13 as does the maximum of formula_14 If formula_15 is differentiable in formula_16 sufficient conditions for the occurrence of a maximum (or a minimum) are
formula_17
known as the likelihood equations. For some models, these equations can be explicitly solved for formula_18 but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical optimization. Another problem is that in finite samples, there may exist multiple roots for the likelihood equations. Whether the identified root formula_19 of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-called Hessian matrix
formula_20
is negative semi-definite at formula_21, as this indicates local concavity. Conveniently, most common probability distributions – in particular the exponential family – are logarithmically concave.
Restricted parameter space.
While the domain of the likelihood function—the parameter space—is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as
formula_22
where formula_23 is a vector-valued function mapping formula_24 into formula_25 Estimating the true parameter formula_13 belonging to formula_26 then, as a practical matter, means to find the maximum of the likelihood function subject to the constraint formula_27
Theoretically, the most natural approach to this constrained optimization problem is the method of substitution, that is "filling out" the restrictions formula_28 to a set formula_29 in such a way that formula_30 is a one-to-one function from formula_31 to itself, and reparameterize the likelihood function by setting formula_32 Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also. For instance, in a multivariate normal distribution the covariance matrix formula_33 must be positive-definite; this restriction can be imposed by replacing formula_34 where formula_35 is a real upper triangular matrix and formula_36 is its transpose.
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the "restricted likelihood equations"
formula_37 and formula_38
where formula_39 is a column-vector of Lagrange multipliers and formula_40 is the k × r Jacobian matrix of partial derivatives. Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero. This in turn allows for a statistical test of the "validity" of the constraint, known as the Lagrange multiplier test.
Nonparametric maximum likelihood estimation.
Nonparametric maximum likelihood estimation can be performed using the empirical likelihood.
Properties.
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of "θ", the objective function formula_41. If the data are independent and identically distributed, then we have
formula_42
this being the sample analogue of the expected log-likelihood formula_43, where this expectation is taken with respect to the true density.
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value. However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:
Consistency.
Under the conditions outlined below, the maximum likelihood estimator is consistent. The consistency means that if the data were generated by formula_50 and we have a sufficiently large number of observations "n", then it is possible to find the value of "θ"0 with arbitrary precision. In mathematical terms this means that as "n" goes to infinity the estimator formula_21 converges in probability to its true value:
formula_51
Under slightly stronger conditions, the estimator converges almost surely (or "strongly"):
formula_52
In practical applications, data is never generated by formula_50. Rather, formula_50 is a model, often in idealized form, of the process generated by the data. It is a common aphorism in statistics that "all models are wrong". Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.
To establish consistency, the following conditions are sufficient.
The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequence formula_53 is stochastically equicontinuous.
If one wants to demonstrate that the ML estimator formula_21 converges to "θ"0 almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:
formula_54
Additionally, if (as assumed above) the data were generated by formula_50, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. Specifically,
formula_55
where "I" is the Fisher information matrix.
Functional invariance.
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, if formula_21 is the MLE for formula_13, and if formula_56 is any transformation of formula_13, then the MLE for formula_57 is by definition
formula_58
It maximizes the so-called profile likelihood:
formula_59
The MLE is also equivariant with respect to certain transformations of the data. If formula_60 where formula_49 is one to one and does not depend on the parameters to be estimated, then the density functions satisfy
formula_61
and hence the likelihood functions for formula_62 and formula_63 differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data.
Efficiency.
As assumed above, if the data were generated by formula_64 then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. It is √"n" -consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound. Specifically,
formula_65
where formula_66 is the Fisher information matrix:
formula_67
In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order .
Second-order efficiency after correction for bias.
However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that "θ"mle has bias of order <templatestyles src="Fraction/styles.css" />1⁄1. This bias is equal to (componentwise)
formula_68
where formula_69 (with superscripts) denotes the ("j,k")-th component of the "inverse" Fisher information matrix formula_70, and
formula_71
Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, and "correct" for that bias by subtracting it:
formula_72
This estimator is unbiased up to the terms of order , and is called the bias-corrected maximum likelihood estimator.
This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order . It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator is "not" third-order efficient.
Relation to Bayesian inference.
A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter θ that maximizes the probability of θ given the data, given by Bayes' theorem:
formula_73
where formula_74 is the prior distribution for the parameter θ and where formula_75 is the probability of the data averaged over all parameters. Since the denominator is independent of θ, the Bayesian estimator is obtained by maximizing formula_76 with respect to θ. If we further assume that the prior formula_74 is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood function formula_77. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distribution formula_74.
Application of maximum-likelihood estimation in Bayes decision theory.
In many practical applications in machine learning, maximum-likelihood estimation is used as the model for parameter estimation.
The Bayesian Decision theory is about designing a classifier that minimizes total expected risk, especially, when the costs (the loss function) associated with different decisions are equal, the classifier is minimizing the error over the whole distribution.
Thus, the Bayes Decision Rule is stated as
"decide formula_78 if formula_79 otherwise decide formula_80"
where formula_81 are predictions of different classes. From a perspective of minimizing error, it can also be stated as
formula_82
where
formula_83
if we decide formula_80 and formula_84 if we decide formula_85
By applying Bayes' theorem
formula_86,
and if we further assume the zero-or-one loss function, which is a same loss for all errors, the Bayes Decision rule can be reformulated as:
formula_87
where formula_88 is the prediction and formula_89 is the prior probability.
Relation to minimizing Kullback–Leibler divergence and cross entropy.
Finding formula_90 that maximizes the likelihood is asymptotically equivalent to finding the formula_90 that defines a probability distribution (formula_91) that has a minimal distance, in terms of Kullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated by formula_92). In an ideal world, P and Q are the same (and the only thing unknown is formula_13 that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends on formula_90) to the real distribution formula_92.
Examples.
Discrete uniform distribution.
Consider a case where "n" tickets numbered from 1 to "n" are placed in a box and one is selected at random ("see uniform distribution"); thus, the sample size is 1. If "n" is unknown, then the maximum likelihood estimator formula_93 of "n" is the number "m" on the drawn ticket. (The likelihood is 0 for "n" < "m", <templatestyles src="Fraction/styles.css" />1⁄"n" for "n" ≥ "m", and this is greatest when "n" = "m". Note that the maximum likelihood estimate of "n" occurs at the lower extreme of possible values {"m", "m" + 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) The expected value of the number "m" on the drawn ticket, and therefore the expected value of formula_93, is ("n" + 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator for "n" will systematically underestimate "n" by ("n" − 1)/2.
Discrete distribution, finite parameter space.
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a 'head' "p". The goal then becomes to determine "p".
Suppose the coin is tossed 80 times: i.e. the sample might be something like "x"1 = H, "x"2 = T, ..., "x"80 = T, and the count of the number of heads "H" is observed.
The probability of tossing tails is 1 − "p" (so here "p" is "θ" above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability "p" = <templatestyles src="Fraction/styles.css" />1⁄3, one which gives heads with probability "p" = <templatestyles src="Fraction/styles.css" />1⁄2 and another which gives heads with probability "p" = <templatestyles src="Fraction/styles.css" />2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80, number successes equal to 49 but for different values of "p" (the "probability of success"), the likelihood function (defined below) takes one of three values:
formula_94
The likelihood is maximized when p = <templatestyles src="Fraction/styles.css" />2⁄3, and so this is the "maximum likelihood estimate" for p.
Discrete distribution, continuous parameter space.
Now suppose that there was only one coin but its p could have been any value The likelihood function to be maximised is
formula_95
and the maximisation is over all possible values
One way to maximize this function is by differentiating with respect to p and setting to zero:
formula_96
This is a product of three terms. The first term is 0 when p = 0. The second is 0 when p = 1. The third is zero when p = <templatestyles src="Fraction/styles.css" />49⁄80. The solution that maximizes the likelihood is clearly p = <templatestyles src="Fraction/styles.css" />49⁄80 (since p = 0 and p = 1 result in a likelihood of 0). Thus the "maximum likelihood estimator" for p is <templatestyles src="Fraction/styles.css" />49⁄80.
This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields <templatestyles src="Fraction/styles.css" />⁄ which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'.
Continuous distribution, continuous parameter space.
For the normal distribution formula_97 which has probability density function
formula_98
the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is
formula_99
This family of distributions has two parameters: "θ"
("μ", "σ"); so we maximize the likelihood, formula_100, over both parameters simultaneously, or if possible, individually.
Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows:
formula_101
We now compute the derivatives of this log-likelihood as follows.
formula_102
where formula_103 is the sample mean. This is solved by
formula_104
This is indeed the maximum of the function, since it is the only turning point in μ and the second derivative is strictly less than zero. Its expected value is equal to the parameter μ of the given distribution,
formula_105
which means that the maximum likelihood estimator formula_106 is unbiased.
Similarly we differentiate the log-likelihood with respect to σ and equate to zero:
formula_107
which is solved by
formula_108
Inserting the estimate formula_109 we obtain
formula_110
To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error) formula_111. Expressing the estimate in these variables yields
formula_112
Simplifying the expression above, utilizing the facts that formula_113 and formula_114, allows us to obtain
formula_115
This means that the estimator formula_116 is biased for formula_117. It can also be shown that formula_118 is biased for formula_119, but that both formula_116 and formula_118 are consistent.
Formally we say that the "maximum likelihood estimator" for formula_120 is
formula_121
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log-likelihood at its maximum takes a particularly simple form:
formula_122
This maximum log-likelihood can be shown to be the same for more general least squares, even for non-linear least squares. This is often used in determining likelihood-based approximate confidence intervals and confidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
Non-independent variables.
It may be the case that variables are correlated, that is, not independent. Two random variables formula_123 and formula_124 are independent only if their joint probability density function is the product of the individual probability density functions, i.e.
formula_125
Suppose one constructs an order-"n" Gaussian vector out of random variables formula_126, where each variable has means given by formula_127. Furthermore, let the covariance matrix be denoted by formula_128. The joint probability density function of these "n" random variables then follows a multivariate normal distribution given by:
formula_129
In the bivariate case, the joint probability density function is given by:
formula_130
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density.
Example.
formula_131 are counts in cells / boxes 1 up to m; each box has a different probability (think of the boxes being bigger or smaller) and we fix the number of balls that fall to be formula_132:formula_133. The probability of each box is formula_134, with a constraint: formula_135. This is a case in which the formula_136 "s" are not independent, the joint probability of a vector formula_137 is called the multinomial and has the form:
formula_138
Each box taken separately against all the other boxes is a binomial and this is an extension thereof.
The log-likelihood of this is:
formula_139
The constraint has to be taken into account and use the Lagrange multipliers:
formula_140
By posing all the derivatives to be 0, the most natural estimate is derived
formula_141
Maximizing log likelihood, with and without constraints, can be an unsolvable problem in closed form, then we have to use iterative procedures.
Iterative procedures.
Except for special cases, the likelihood equations
formula_142
cannot be solved explicitly for an estimator formula_143. Instead, they need to be solved iteratively: starting from an initial guess of formula_13 (say formula_144), one seeks to obtain a convergent sequence formula_145. Many methods for this kind of optimization problem are available, but the most commonly used ones are algorithms based on an updating formula of the form
formula_146
where the vector formula_147 indicates the descent direction of the rth "step," and the scalar formula_148 captures the "step length," also known as the learning rate.
formula_149 that is small enough for convergence and formula_150
Gradient descent method.
Gradient descent method requires to calculate the gradient at the rth iteration, but no need to calculate the inverse of second-order derivative, i.e., the Hessian matrix. Therefore, it is computationally faster than Newton-Raphson method.
formula_151 and formula_152
Newton–Raphson method.
where formula_153 is the score and formula_154 is the inverse of the Hessian matrix of the log-likelihood function, both evaluated the rth iteration. But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that
formula_155
Quasi-Newton methods.
Other quasi-Newton methods use more elaborate secant updates to give approximation of Hessian matrix.
Davidon–Fletcher–Powell formula.
DFP formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of second-order derivative:
formula_156
where
formula_157
formula_158
formula_159
Broyden–Fletcher–Goldfarb–Shanno algorithm.
BFGS also gives a solution that is symmetric and positive-definite:
formula_160
where
formula_157
formula_159
BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances
Fisher's scoring.
Another popular method is to replace the Hessian with the Fisher information matrix, formula_161, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such as generalized linear models.
Although popular, quasi-Newton methods may converge to a stationary point that is not necessarily a local or global maximum, but rather a local minimum or a saddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is both negative definite and well-conditioned.
History.
Early users of maximum likelihood include Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth. It was Ronald Fisher however, between 1912 and 1922, who singlehandedly created the modern version of the method.
Maximum-likelihood estimation finally transcended heuristic justification in a proof published by Samuel S. Wilks in 1938, now called Wilks' theorem. The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptotically "χ" 2-distributed, which enables convenient determination of a confidence region around any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of the Fisher information matrix, which is provided by a theorem proven by Fisher. Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\; \\theta = \\left[ \\theta_{1},\\, \\theta_2,\\, \\ldots,\\, \\theta_k \\right]^{\\mathsf{T}} \\;"
},
{
"math_id": 1,
"text": "\\; \\{ f(\\cdot\\,;\\theta) \\mid \\theta \\in \\Theta \\} \\;,"
},
{
"math_id": 2,
"text": "\\, \\Theta \\,"
},
{
"math_id": 3,
"text": "\\; \\mathbf{y} = (y_1, y_2, \\ldots, y_n) \\;"
},
{
"math_id": 4,
"text": "\\mathcal{L}_{n}(\\theta) = \\mathcal{L}_{n}(\\theta; \\mathbf{y}) = f_{n}(\\mathbf{y}; \\theta) \\;,"
},
{
"math_id": 5,
"text": "f_{n}(\\mathbf{y}; \\theta)"
},
{
"math_id": 6,
"text": "f_{n}(\\mathbf{y}; \\theta) = \\prod_{k=1}^n \\, f_k^\\mathsf{univar}(y_k; \\theta) ~."
},
{
"math_id": 7,
"text": "\\hat{\\theta} = \\underset{\\theta\\in\\Theta}{\\operatorname{arg\\;max}}\\,\\mathcal{L}_{n}(\\theta\\,;\\mathbf{y}) ~."
},
{
"math_id": 8,
"text": "~ \\hat{\\theta} = \\hat{\\theta}_{n}(\\mathbf{y}) \\in \\Theta ~"
},
{
"math_id": 9,
"text": "\\, \\mathcal{L}_{n} \\,"
},
{
"math_id": 10,
"text": "\\; \\hat{\\theta}_{n} : \\mathbb{R}^{n} \\to \\Theta \\;"
},
{
"math_id": 11,
"text": "\n \\ell(\\theta\\,;\\mathbf{y}) = \\ln \\mathcal{L}_{n}(\\theta\\,;\\mathbf{y}) ~.\n "
},
{
"math_id": 12,
"text": "\\; \\ell(\\theta\\,;\\mathbf{y}) \\;"
},
{
"math_id": 13,
"text": "\\theta"
},
{
"math_id": 14,
"text": "\\, \\mathcal{L}_{n} ~."
},
{
"math_id": 15,
"text": "\\ell(\\theta\\,;\\mathbf{y})"
},
{
"math_id": 16,
"text": "\\, \\Theta \\,,"
},
{
"math_id": 17,
"text": "\\frac{\\partial \\ell}{\\partial \\theta_{1}} = 0, \\quad \\frac{\\partial \\ell}{\\partial \\theta_{2}} = 0, \\quad \\ldots, \\quad \\frac{\\partial \\ell}{\\partial \\theta_{k}} = 0 ~,"
},
{
"math_id": 18,
"text": "\\, \\widehat{\\theta\\,} \\,,"
},
{
"math_id": 19,
"text": "\\, \\widehat{\\theta\\,} \\,"
},
{
"math_id": 20,
"text": "\\mathbf{H}\\left(\\widehat{\\theta\\,}\\right) = \\begin{bmatrix} \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_1^2} \\right|_{\\theta=\\widehat{\\theta\\,}} & \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_1 \\, \\partial \\theta_2} \\right|_{\\theta=\\widehat{\\theta\\,}} & \\dots & \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_1 \\, \\partial \\theta_k} \\right|_{\\theta=\\widehat{\\theta\\,}} \\\\ \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_2 \\, \\partial \\theta_1} \\right|_{\\theta=\\widehat{\\theta\\,}} & \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_2^2} \\right|_{\\theta=\\widehat{\\theta\\,}} & \\dots & \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_2 \\, \\partial \\theta_k} \\right|_{\\theta=\\widehat{\\theta\\,}} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_k \\, \\partial \\theta_1} \\right|_{\\theta=\\widehat{\\theta\\,}} & \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_k \\, \\partial \\theta_2} \\right|_{\\theta=\\widehat{\\theta\\,}} & \\dots & \\left. \\frac{\\partial^2 \\ell}{\\partial \\theta_k^2} \\right|_{\\theta=\\widehat{\\theta\\,}} \\end{bmatrix} ~,"
},
{
"math_id": 21,
"text": "\\widehat{\\theta\\,}"
},
{
"math_id": 22,
"text": "\\Theta = \\left\\{ \\theta : \\theta \\in \\mathbb{R}^{k},\\; h(\\theta) = 0 \\right\\} ~,"
},
{
"math_id": 23,
"text": "\\; h(\\theta) = \\left[ h_{1}(\\theta), h_{2}(\\theta), \\ldots, h_{r}(\\theta) \\right] \\;"
},
{
"math_id": 24,
"text": "\\, \\mathbb{R}^{k} \\,"
},
{
"math_id": 25,
"text": "\\; \\mathbb{R}^{r} ~."
},
{
"math_id": 26,
"text": "\\Theta"
},
{
"math_id": 27,
"text": "~h(\\theta) = 0 ~."
},
{
"math_id": 28,
"text": "\\; h_{1}, h_{2}, \\ldots, h_{r} \\;"
},
{
"math_id": 29,
"text": "\\; h_{1}, h_{2}, \\ldots, h_{r}, h_{r+1}, \\ldots, h_{k} \\;"
},
{
"math_id": 30,
"text": "\\; h^{\\ast} = \\left[ h_{1}, h_{2}, \\ldots, h_{k} \\right] \\;"
},
{
"math_id": 31,
"text": "\\mathbb{R}^{k}"
},
{
"math_id": 32,
"text": "\\; \\phi_{i} = h_{i}(\\theta_{1}, \\theta_{2}, \\ldots, \\theta_{k}) ~."
},
{
"math_id": 33,
"text": "\\, \\Sigma \\,"
},
{
"math_id": 34,
"text": "\\; \\Sigma = \\Gamma^{\\mathsf{T}} \\Gamma \\;,"
},
{
"math_id": 35,
"text": "\\Gamma"
},
{
"math_id": 36,
"text": "\\Gamma^{\\mathsf{T}}"
},
{
"math_id": 37,
"text": "\\frac{\\partial \\ell}{\\partial \\theta} - \\frac{\\partial h(\\theta)^\\mathsf{T}}{\\partial \\theta} \\lambda = 0"
},
{
"math_id": 38,
"text": "h(\\theta) = 0 \\;,"
},
{
"math_id": 39,
"text": "~ \\lambda = \\left[ \\lambda_{1}, \\lambda_{2}, \\ldots, \\lambda_{r}\\right]^\\mathsf{T} ~"
},
{
"math_id": 40,
"text": "\\; \\frac{\\partial h(\\theta)^\\mathsf{T}}{\\partial \\theta} \\;"
},
{
"math_id": 41,
"text": "\\widehat{\\ell\\,}(\\theta\\,;x)"
},
{
"math_id": 42,
"text": "\n \\widehat{\\ell\\,}(\\theta\\,;x)=\\frac1n \\sum_{i=1}^n \\ln f(x_i\\mid\\theta),\n "
},
{
"math_id": 43,
"text": "\\ell(\\theta) = \\operatorname{\\mathbb E}[\\, \\ln f(x_i\\mid\\theta) \\,]"
},
{
"math_id": 44,
"text": "\n \\hat{\\theta}\n "
},
{
"math_id": 45,
"text": "\n \\theta\n "
},
{
"math_id": 46,
"text": "\n g(\\theta)\n "
},
{
"math_id": 47,
"text": "\n \\alpha = g(\\theta )\n "
},
{
"math_id": 48,
"text": "\n \\hat{\\alpha} = g(\\hat{\\theta} )\n "
},
{
"math_id": 49,
"text": "g"
},
{
"math_id": 50,
"text": "f(\\cdot\\,;\\theta_0)"
},
{
"math_id": 51,
"text": "\n \\widehat{\\theta\\,}_\\mathrm{mle}\\ \\xrightarrow{\\text{p}}\\ \\theta_0.\n"
},
{
"math_id": 52,
"text": "\n \\widehat{\\theta\\,}_\\mathrm{mle}\\ \\xrightarrow{\\text{a.s.}}\\ \\theta_0.\n"
},
{
"math_id": 53,
"text": "\\widehat{\\ell\\,}(\\theta\\mid x)"
},
{
"math_id": 54,
"text": "\n \\sup_{\\theta\\in\\Theta} \\left\\|\\;\\widehat{\\ell\\,}(\\theta\\mid x) - \\ell(\\theta)\\;\\right\\| \\ \\xrightarrow{\\text{a.s.}}\\ 0.\n "
},
{
"math_id": 55,
"text": "\n \\sqrt{n}\\left(\\widehat{\\theta\\,}_\\mathrm{mle} - \\theta_0\\right)\\ \\xrightarrow{d}\\ \\mathcal{N}\\left(0,\\, I^{-1}\\right)\n "
},
{
"math_id": 56,
"text": "g(\\theta)"
},
{
"math_id": 57,
"text": "\\alpha=g(\\theta)"
},
{
"math_id": 58,
"text": "\\widehat{\\alpha} = g(\\,\\widehat{\\theta\\,}\\,). \\,"
},
{
"math_id": 59,
"text": "\\bar{L}(\\alpha) = \\sup_{\\theta: \\alpha = g(\\theta)} L(\\theta). \\, "
},
{
"math_id": 60,
"text": "y=g(x)"
},
{
"math_id": 61,
"text": "f_Y(y) = \\frac{f_X(x)}{|g'(x)|} "
},
{
"math_id": 62,
"text": "X"
},
{
"math_id": 63,
"text": "Y"
},
{
"math_id": 64,
"text": "~f(\\cdot\\,;\\theta_0)~,"
},
{
"math_id": 65,
"text": "\n \\sqrt{n\\,} \\, \\left( \\widehat{\\theta\\,}_\\text{mle} - \\theta_0 \\right)\\ \\ \\xrightarrow{d}\\ \\ \\mathcal{N} \\left( 0,\\ \\mathcal{I}^{-1} \\right) ~,\n "
},
{
"math_id": 66,
"text": "~\\mathcal{I}~"
},
{
"math_id": 67,
"text": "\n \\mathcal{I}_{jk} = \\operatorname{\\mathbb E} \\, \\biggl[ \\; -{ \\frac{\\partial^2\\ln f_{\\theta_0}(X_t)}{\\partial\\theta_j\\,\\partial\\theta_k } } \n \\; \\biggr] ~.\n "
},
{
"math_id": 68,
"text": "\n b_h \\; \\equiv \\; \\operatorname{\\mathbb E} \\biggl[ \\; \\left( \\widehat\\theta_\\mathrm{mle} - \\theta_0 \\right)_h \\; \\biggr]\n \\; = \\; \\frac{1}{\\,n\\,} \\, \\sum_{i, j, k = 1}^m \\; \\mathcal{I}^{h i} \\; \\mathcal{I}^{j k} \\left( \\frac{1}{\\,2\\,} \\, K_{i j k} \\; + \\; J_{j,i k} \\right)\n "
},
{
"math_id": 69,
"text": "\\mathcal{I}^{j k}"
},
{
"math_id": 70,
"text": "\\mathcal{I}^{-1}"
},
{
"math_id": 71,
"text": "\n \\frac{1}{\\,2\\,} \\, K_{i j k} \\; + \\; J_{j,i k} \\; = \\; \\operatorname{\\mathbb E}\\,\\biggl[\\;\n \\frac12 \\frac{\\partial^3 \\ln f_{\\theta_0}(X_t)}{\\partial\\theta_i\\;\\partial\\theta_j\\;\\partial\\theta_k} +\n \\frac{\\;\\partial\\ln f_{\\theta_0}(X_t)\\;}{\\partial\\theta_j}\\,\\frac{\\;\\partial^2\\ln f_{\\theta_0}(X_t)\\;}{\\partial\\theta_i \\, \\partial\\theta_k}\n \\; \\biggr] ~ .\n "
},
{
"math_id": 72,
"text": "\n \\widehat{\\theta\\,}^*_\\text{mle} = \\widehat{\\theta\\,}_\\text{mle} - \\widehat{b\\,} ~ .\n "
},
{
"math_id": 73,
"text": "\n \\operatorname{\\mathbb P}(\\theta\\mid x_1,x_2,\\ldots,x_n) = \\frac{f(x_1,x_2,\\ldots,x_n\\mid\\theta)\\operatorname{\\mathbb P}(\\theta)}{\\operatorname{\\mathbb P}(x_1,x_2,\\ldots,x_n)}\n "
},
{
"math_id": 74,
"text": "\\operatorname{\\mathbb P}(\\theta)"
},
{
"math_id": 75,
"text": "\\operatorname{\\mathbb P}(x_1,x_2,\\ldots,x_n)"
},
{
"math_id": 76,
"text": "f(x_1,x_2,\\ldots,x_n\\mid\\theta)\\operatorname{\\mathbb P}(\\theta)"
},
{
"math_id": 77,
"text": "f(x_1,x_2,\\ldots,x_n\\mid\\theta)"
},
{
"math_id": 78,
"text": "\\;w_1\\;"
},
{
"math_id": 79,
"text": "~\\operatorname{\\mathbb P}(w_1|x) \\; > \\; \\operatorname{\\mathbb P}(w_2|x)~;~"
},
{
"math_id": 80,
"text": "\\;w_2\\;"
},
{
"math_id": 81,
"text": "\\;w_1\\,, w_2\\;"
},
{
"math_id": 82,
"text": "w = \\underset{ w }{\\operatorname{arg\\;max}} \\; \\int_{-\\infty}^\\infty \\operatorname{\\mathbb P}(\\text{ error}\\mid x)\\operatorname{\\mathbb P}(x)\\,\\operatorname{d}x~"
},
{
"math_id": 83,
"text": "\\operatorname{\\mathbb P}(\\text{ error}\\mid x) = \\operatorname{\\mathbb P}(w_1\\mid x)~"
},
{
"math_id": 84,
"text": "\\;\\operatorname{\\mathbb P}(\\text{ error}\\mid x) = \\operatorname{\\mathbb P}(w_2\\mid x)\\;"
},
{
"math_id": 85,
"text": "\\;w_1\\;."
},
{
"math_id": 86,
"text": "\\operatorname{\\mathbb P}(w_i \\mid x) = \\frac{\\operatorname{\\mathbb P}(x \\mid w_i) \\operatorname{\\mathbb P}(w_i)}{\\operatorname{\\mathbb P}(x)}"
},
{
"math_id": 87,
"text": "h_\\text{Bayes} = \\underset{ w }{\\operatorname{arg\\;max}} \\, \\bigl[\\, \\operatorname{\\mathbb P}(x\\mid w)\\,\\operatorname{\\mathbb P}(w) \\,\\bigr]\\;,"
},
{
"math_id": 88,
"text": "h_\\text{Bayes}"
},
{
"math_id": 89,
"text": "\\;\\operatorname{\\mathbb P}(w)\\;"
},
{
"math_id": 90,
"text": "\\hat \\theta"
},
{
"math_id": 91,
"text": "Q_{\\hat \\theta}"
},
{
"math_id": 92,
"text": "P_{\\theta_0}"
},
{
"math_id": 93,
"text": "\\widehat{n}"
},
{
"math_id": 94,
"text": "\n\\begin{align}\n\\operatorname{\\mathbb P}\\bigl[\\;\\mathrm{H} = 49 \\mid p=\\tfrac{1}{3}\\;\\bigr] & = \\binom{80}{49}(\\tfrac{1}{3})^{49}(1-\\tfrac{1}{3})^{31} \\approx 0.000, \\\\[6pt]\n\\operatorname{\\mathbb P}\\bigl[\\;\\mathrm{H} = 49 \\mid p=\\tfrac{1}{2}\\;\\bigr] & = \\binom{80}{49}(\\tfrac{1}{2})^{49}(1-\\tfrac{1}{2})^{31} \\approx 0.012, \\\\[6pt]\n\\operatorname{\\mathbb P}\\bigl[\\;\\mathrm{H} = 49 \\mid p=\\tfrac{2}{3}\\;\\bigr] & = \\binom{80}{49}(\\tfrac{2}{3})^{49}(1-\\tfrac{2}{3})^{31} \\approx 0.054~.\n\\end{align}\n"
},
{
"math_id": 95,
"text": "\nL(p) = f_D(\\mathrm{H} = 49 \\mid p) = \\binom{80}{49} p^{49}(1 - p)^{31}~,\n"
},
{
"math_id": 96,
"text": "\n\\begin{align}\n0 & = \\frac{\\partial}{\\partial p} \\left( \\binom{80}{49} p^{49}(1-p)^{31} \\right)~, \\\\[8pt]\n0 & = 49 p^{48}(1-p)^{31} - 31 p^{49}(1-p)^{30} \\\\[8pt]\n & = p^{48}(1-p)^{30}\\left[ 49 (1-p) - 31 p \\right] \\\\[8pt]\n & = p^{48}(1-p)^{30}\\left[ 49 - 80 p \\right]~.\n\\end{align}\n"
},
{
"math_id": 97,
"text": "\\mathcal{N}(\\mu, \\sigma^2)"
},
{
"math_id": 98,
"text": "f(x\\mid \\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}\\ }\n \\exp\\left(-\\frac {(x-\\mu)^2}{2\\sigma^2} \\right), "
},
{
"math_id": 99,
"text": "f(x_1,\\ldots,x_n \\mid \\mu,\\sigma^2) = \\prod_{i=1}^n f( x_i\\mid \\mu, \\sigma^2) = \\left( \\frac{1}{2\\pi\\sigma^2} \\right)^{n/2} \\exp\\left( -\\frac{ \\sum_{i=1}^n (x_i-\\mu)^2}{2\\sigma^2}\\right)."
},
{
"math_id": 100,
"text": "\\mathcal{L} (\\mu,\\sigma^2) = f(x_1,\\ldots,x_n \\mid \\mu, \\sigma^2)"
},
{
"math_id": 101,
"text": "\n \\log\\Bigl( \\mathcal{L} (\\mu,\\sigma^2)\\Bigr) = -\\frac{\\,n\\,}{2} \\log(2\\pi\\sigma^2)\n - \\frac{1}{2\\sigma^2} \\sum_{i=1}^n (\\,x_i-\\mu\\,)^2\n"
},
{
"math_id": 102,
"text": "\n\\begin{align}\n0 & = \\frac{\\partial}{\\partial \\mu} \\log\\Bigl( \\mathcal{L} (\\mu,\\sigma^2)\\Bigr) =\n 0 - \\frac{\\;-2 n(\\bar{x}-\\mu)\\;}{2\\sigma^2}.\n\\end{align}\n"
},
{
"math_id": 103,
"text": " \\bar{x} "
},
{
"math_id": 104,
"text": "\\widehat\\mu = \\bar{x} = \\sum^n_{i=1} \\frac{\\,x_i\\,}{n}. "
},
{
"math_id": 105,
"text": "\\operatorname{\\mathbb E}\\bigl[\\;\\widehat\\mu\\;\\bigr] = \\mu, \\, "
},
{
"math_id": 106,
"text": "\\widehat\\mu"
},
{
"math_id": 107,
"text": "\n\\begin{align}\n0 & = \\frac{\\partial}{\\partial \\sigma} \\log\\Bigl( \\mathcal{L} (\\mu,\\sigma^2)\\Bigr) = -\\frac{\\,n\\,}{\\sigma} \n + \\frac{1}{\\sigma^3} \\sum_{i=1}^{n} (\\,x_i-\\mu\\,)^2.\n\\end{align}\n"
},
{
"math_id": 108,
"text": "\\widehat\\sigma^2 = \\frac{1}{n} \\sum_{i=1}^n(x_i-\\mu)^2."
},
{
"math_id": 109,
"text": "\\mu = \\widehat\\mu"
},
{
"math_id": 110,
"text": "\\widehat\\sigma^2 = \\frac{1}{n} \\sum_{i=1}^n (x_i - \\bar{x})^2 = \\frac{1}{n}\\sum_{i=1}^n x_i^2 -\\frac{1}{n^2}\\sum_{i=1}^n\\sum_{j=1}^n x_i x_j."
},
{
"math_id": 111,
"text": "\\delta_i \\equiv \\mu - x_i"
},
{
"math_id": 112,
"text": "\\widehat\\sigma^2 = \\frac{1}{n} \\sum_{i=1}^n (\\mu - \\delta_i)^2 -\\frac{1}{n^2}\\sum_{i=1}^n\\sum_{j=1}^n (\\mu - \\delta_i)(\\mu - \\delta_j)."
},
{
"math_id": 113,
"text": "\\operatorname{\\mathbb E}\\bigl[\\;\\delta_i\\;\\bigr] = 0 "
},
{
"math_id": 114,
"text": "\\operatorname{E}\\bigl[\\;\\delta_i^2\\;\\bigr] = \\sigma^2 "
},
{
"math_id": 115,
"text": "\\operatorname{\\mathbb E}\\bigl[\\;\\widehat\\sigma^2\\;\\bigr]= \\frac{\\,n-1\\,}{n}\\sigma^2."
},
{
"math_id": 116,
"text": "\\widehat\\sigma^2"
},
{
"math_id": 117,
"text": "\\sigma^2"
},
{
"math_id": 118,
"text": "\\widehat\\sigma"
},
{
"math_id": 119,
"text": "\\sigma"
},
{
"math_id": 120,
"text": "\\theta=(\\mu,\\sigma^2)"
},
{
"math_id": 121,
"text": "\\widehat{\\theta\\,} = \\left(\\widehat{\\mu},\\widehat{\\sigma}^2\\right)."
},
{
"math_id": 122,
"text": "\n \\log\\Bigl( \\mathcal{L}(\\widehat\\mu,\\widehat\\sigma)\\Bigr) = \\frac{\\,-n\\;\\;}{2} \\bigl(\\,\\log(2\\pi\\widehat\\sigma^2) +1\\,\\bigr)\n"
},
{
"math_id": 123,
"text": "y_1"
},
{
"math_id": 124,
"text": "y_2"
},
{
"math_id": 125,
"text": "f(y_1,y_2)=f(y_1)f(y_2)\\,"
},
{
"math_id": 126,
"text": "(y_1,\\ldots,y_n)"
},
{
"math_id": 127,
"text": "(\\mu_1, \\ldots, \\mu_n)"
},
{
"math_id": 128,
"text": "\\mathit\\Sigma"
},
{
"math_id": 129,
"text": "f(y_1,\\ldots,y_n)=\\frac{1}{(2\\pi)^{n/2}\\sqrt{\\det(\\mathit\\Sigma)}} \\exp\\left( -\\frac{1}{2} \\left[y_1-\\mu_1,\\ldots,y_n-\\mu_n\\right]\\mathit\\Sigma^{-1} \\left[y_1-\\mu_1,\\ldots,y_n-\\mu_n\\right]^\\mathrm{T} \\right)"
},
{
"math_id": 130,
"text": " f(y_1,y_2) = \\frac{1}{2\\pi \\sigma_{1} \\sigma_2 \\sqrt{1-\\rho^2}} \\exp\\left[ -\\frac{1}{2(1-\\rho^2)} \\left(\\frac{(y_1-\\mu_1)^2}{\\sigma_1^2} - \\frac{2\\rho(y_1-\\mu_1)(y_2-\\mu_2)}{\\sigma_1\\sigma_2} + \\frac{(y_2-\\mu_2)^2}{\\sigma_2^2}\\right) \\right] "
},
{
"math_id": 131,
"text": "X_1,\\ X_2,\\ldots,\\ X_m"
},
{
"math_id": 132,
"text": "n"
},
{
"math_id": 133,
"text": "x_1+x_2+\\cdots+x_m=n"
},
{
"math_id": 134,
"text": "p_i"
},
{
"math_id": 135,
"text": "p_1+p_2+\\cdots+p_m=1"
},
{
"math_id": 136,
"text": "X_i"
},
{
"math_id": 137,
"text": "x_1,\\ x_2,\\ldots,x_m"
},
{
"math_id": 138,
"text": "f(x_1,x_2,\\ldots,x_m\\mid p_1,p_2,\\ldots,p_m)=\\frac{n!}{\\prod x_i!}\\prod p_i^{x_i}= \\binom{n}{x_1,x_2,\\ldots,x_m} p_1^{x_1} p_2^{x_2} \\cdots p_m^{x_m}"
},
{
"math_id": 139,
"text": "\\ell(p_1,p_2,\\ldots,p_m)=\\log n!-\\sum_{i=1}^m \\log x_i!+\\sum_{i=1}^m x_i\\log p_i"
},
{
"math_id": 140,
"text": "L(p_1,p_2,\\ldots,p_m,\\lambda)=\\ell(p_1,p_2,\\ldots,p_m)+\\lambda\\left(1-\\sum_{i=1}^m p_i\\right)"
},
{
"math_id": 141,
"text": "\\hat{p}_i=\\frac{x_i}{n}"
},
{
"math_id": 142,
"text": "\\frac{\\partial \\ell(\\theta;\\mathbf{y})}{\\partial \\theta} = 0"
},
{
"math_id": 143,
"text": "\\widehat{\\theta} = \\widehat{\\theta}(\\mathbf{y})"
},
{
"math_id": 144,
"text": "\\widehat{\\theta}_{1}"
},
{
"math_id": 145,
"text": "\\left\\{ \\widehat{\\theta}_{r} \\right\\}"
},
{
"math_id": 146,
"text": "\\widehat{\\theta}_{r+1} = \\widehat{\\theta}_{r} + \\eta_{r} \\mathbf{d}_r\\left(\\widehat{\\theta}\\right)"
},
{
"math_id": 147,
"text": "\\mathbf{d}_{r}\\left(\\widehat{\\theta}\\right)"
},
{
"math_id": 148,
"text": "\\eta_{r}"
},
{
"math_id": 149,
"text": "\\eta_r\\in \\R^+"
},
{
"math_id": 150,
"text": "\\mathbf{d}_r\\left(\\widehat{\\theta}\\right) = \\nabla\\ell\\left(\\widehat{\\theta}_r;\\mathbf{y}\\right)"
},
{
"math_id": 151,
"text": "\\eta_r = 1"
},
{
"math_id": 152,
"text": "\\mathbf{d}_r\\left(\\widehat{\\theta}\\right) = -\\mathbf{H}^{-1}_r\\left(\\widehat{\\theta}\\right) \\mathbf{s}_r\\left(\\widehat{\\theta}\\right)"
},
{
"math_id": 153,
"text": "\\mathbf{s}_{r}(\\widehat{\\theta})"
},
{
"math_id": 154,
"text": "\\mathbf{H}^{-1}_r \\left(\\widehat{\\theta}\\right)"
},
{
"math_id": 155,
"text": "\\mathbf{d}_r\\left(\\widehat{\\theta}\\right) = - \\left[ \\frac{1}{n} \\sum_{t=1}^n \\frac{\\partial \\ell(\\theta;\\mathbf{y})}{\\partial \\theta} \\left( \\frac{\\partial \\ell(\\theta;\\mathbf{y})}{\\partial \\theta} \\right)^{\\mathsf{T}} \\right]^{-1} \\mathbf{s}_r \\left(\\widehat{\\theta}\\right)"
},
{
"math_id": 156,
"text": "\\mathbf{H}_{k+1} =\n \\left(I - \\gamma_k y_k s_k^\\mathsf{T}\\right) \\mathbf{H}_k \\left(I - \\gamma_k s_k y_k^\\mathsf{T}\\right) + \\gamma_k y_k y_k^\\mathsf{T},\n"
},
{
"math_id": 157,
"text": "y_k = \\nabla\\ell(x_k + s_k) - \\nabla\\ell(x_k),"
},
{
"math_id": 158,
"text": "\\gamma_k = \\frac{1}{y_k^T s_k},"
},
{
"math_id": 159,
"text": "s_k = x_{k+1} - x_k."
},
{
"math_id": 160,
"text": "B_{k+1} = B_k + \\frac{y_k y_k^\\mathsf{T}}{y_k^\\mathsf{T} s_k} - \\frac{B_k s_k s_k^\\mathsf{T} B_k^\\mathsf{T}}{s_k^\\mathsf{T} B_k s_k}\\ ,"
},
{
"math_id": 161,
"text": "\\mathcal{I}(\\theta) = \\operatorname{\\mathbb E}\\left[\\mathbf{H}_r \\left(\\widehat{\\theta}\\right)\\right]"
}
]
| https://en.wikipedia.org/wiki?curid=140806 |
14080780 | Neves (video game) | Neves, known in Japan as , is a puzzle video game developed by Yuke's Media Creations for the Nintendo DS, based on the Japanese Lucky Puzzle, a tangram-like dissection puzzle. In the game, players use the stylus to move, rotate and flip pieces on the DS's touch screen to clear puzzles. It features over 500 different puzzles from which to choose.
A sequel, Neves Plus ( in Japan), was released for WiiWare in Japan on 26 May 2009, in North America on 22 June 2009 and in Europe on 11 June 2010.
Gameplay.
In each puzzle the player is given an image which they then must try to recreate using only the following seven pieces:
The seven pieces form a rectangle of length 1 by 5/4.
Game modes.
With the Bragging Rights mode, multiple "Neves" players can compete with others who do not have a copy of the game via DS Download Play.
WiiWare version.
Players of the WiiWare version of the game called NEVES Plus, (known as NEVES Plus: Pantheon of Tangrams in Europe) use the Wii Remote to grab and rotate puzzle pieces, with up to four players being able to solve the puzzle simultaneously or compete against each other in teams of two. The game also features new multiplayer modes and an Ancient Egypt-themed setting. A sequel of the game called "NEVES Plus Returns" ("Hamekomi Lucky Puzzle Wii Returns") was released for WiiWare in Japan.
Reception.
"Neves" received above-average reviews, while "Neves Plus" received "generally favourable reviews", according to the review aggregation website Metacritic. In Japan, "Famitsu" gave the former a score of one six and three sevens for a total of 27 out of 40.
Craig Harris of "IGN" said that the DS version was fun and appealing but too expensive for the amount of stuff it offers. "Den of Geek" criticised its very frustrating controls. "Hexus" didn't gave "Neves" a score but praised the amount of levels. The site also stated that the game lacks Wi-Fi and multiple profile feature. Todd Melick of "GamePro" called the same DS version "an addictive puzzle solving experience. Other than the drawback of the $30 price, my only real gripe with the game is that the jazz inspired soundtrack gets repetitive. The gameplay may grow old to some, but if you're looking for a mind-bending experience, "Neves" is a game that will keep you thinking." However, Andrew Ramsey later gave "Neves Plus" four out of five, saying, "While "Neves Plus" seems like its ["sic"] barebones there is actually a lot going for this package. The only letdown is a lack of an online mode. But with all the puzzles you get and the many modes to play this is certainly another winner to add on Nintendo's list, especially since its sporting a 600 point price tag. So grab your friends and get ready to solve some puzzles, especially some of the odd ones featuring Japanese caricatures."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle{1/2}"
},
{
"math_id": 1,
"text": "\\scriptstyle{1/\\sqrt{2}}"
},
{
"math_id": 2,
"text": "\\scriptstyle{1/4}"
},
{
"math_id": 3,
"text": "\\scriptstyle{1/2\\sqrt{2}}"
},
{
"math_id": 4,
"text": "\\scriptstyle{3/2\\sqrt{2}}"
}
]
| https://en.wikipedia.org/wiki?curid=14080780 |
140841 | Sufficient statistic | Statistical principle
In statistics, sufficiency is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset. A sufficient statistic contains all of the information that the dataset provides about the model parameters. It is closely related to the concepts of an ancillary statistic which contains no information about the model parameters, and of a complete statistic which only contains information about the parameters and no ancillary information.
A related concept is that of linear sufficiency, which is weaker than "sufficiency" but can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators. The Kolmogorov structure function deals with individual finite data; the related notion there is the algorithmic sufficient statistic.
The concept is due to Sir Ronald Fisher in 1920. Stephen Stigler noted in 1973 that the concept of sufficiency had fallen out of favor in descriptive statistics because of the strong dependence on an assumption of the distributional form (see Pitman–Koopman–Darmois theorem below), but remained very important in theoretical work.
Background.
Roughly, given a set formula_0 of independent identically distributed data conditioned on an unknown parameter formula_1, a sufficient statistic is a function formula_2 whose value contains all the information needed to compute any estimate of the parameter (e.g. a maximum likelihood estimate). Due to the factorization theorem (see below), for a sufficient statistic formula_2, the probability density can be written as formula_3. From this factorization, it can easily be seen that the maximum likelihood estimate of formula_1 will interact with formula_4 only through formula_2. Typically, the sufficient statistic is a simple function of the data, e.g. the sum of all the data points.
More generally, the "unknown parameter" may represent a vector of unknown quantities or may represent everything about the model that is unknown or not fully specified. In such a case, the sufficient statistic may be a set of functions, called a "jointly sufficient statistic". Typically, there are as many functions as there are parameters. For example, for a Gaussian distribution with unknown mean and variance, the jointly sufficient statistic, from which maximum likelihood estimates of both parameters can be estimated, consists of two functions, the sum of all data points and the sum of all squared data points (or equivalently, the sample mean and sample variance).
In other words, the joint probability distribution of the data is conditionally independent of the parameter given the value of the sufficient statistic for the parameter. Both the statistic and the underlying parameter can be vectors.
Mathematical definition.
A statistic "t" = "T"("X") is sufficient for underlying parameter "θ" precisely if the conditional probability distribution of the data "X", given the statistic "t" = "T"("X"), does not depend on the parameter "θ".
Alternatively, one can say the statistic "T"("X") is sufficient for "θ" if, for all prior distributions on "θ", the mutual information between "θ" and "T(X)" equals the mutual information between "θ" and "X". In other words, the data processing inequality becomes an equality:
formula_5
Example.
As an example, the sample mean is sufficient for the mean ("μ") of a normal distribution with known variance. Once the sample mean is known, no further information about "μ" can be obtained from the sample itself. On the other hand, for an arbitrary distribution the median is not sufficient for the mean: even if the median of the sample is known, knowing the sample itself would provide further information about the population mean. For example, if the observations that are less than the median are only slightly less, but observations exceeding the median exceed it by a large amount, then this would have a bearing on one's inference about the population mean.
Fisher–Neyman factorization theorem.
"Fisher's factorization theorem" or "factorization criterion" provides a convenient characterization of a sufficient statistic. If the probability density function is ƒ"θ"("x"), then "T" is sufficient for "θ" if and only if nonnegative functions "g" and "h" can be found such that
formula_6
i.e. the density ƒ can be factored into a product such that one factor, "h", does not depend on "θ" and the other factor, which does depend on "θ", depends on "x" only through "T"("x"). A general proof of this was given by Halmos and Savage and the theorem is sometimes referred to as the Halmos–Savage factorization theorem. The proofs below handle special cases, but an alternative general proof along the same lines can be given. In many simple cases the probability density function is fully specified by formula_1 and formula_7, and formula_8 (see Examples).
It is easy to see that if "F"("t") is a one-to-one function and "T" is a sufficient
statistic, then "F"("T") is a sufficient statistic. In particular we can multiply a
sufficient statistic by a nonzero constant and get another sufficient statistic.
Likelihood principle interpretation.
An implication of the theorem is that when using likelihood-based inference, two sets of data yielding the same value for the sufficient statistic "T"("X") will always yield the same inferences about "θ". By the factorization criterion, the likelihood's dependence on "θ" is only in conjunction with "T"("X"). As this is the same in both cases, the dependence on "θ" will be the same as well, leading to identical inferences.
Proof.
Due to Hogg and Craig. Let formula_9, denote a random sample from a distribution having the pdf "f"("x", "θ") for "ι" < "θ" < "δ". Let "Y"1 = "u"1("X"1, "X"2, ..., "X""n") be a statistic whose pdf is "g"1("y"1; "θ"). What we want to prove is that "Y"1 = "u"1("X"1, "X"2, ..., "X""n") is a sufficient statistic for "θ" if and only if, for some function "H",
formula_10
First, suppose that
formula_10
We shall make the transformation "y""i" = "u"i("x"1, "x"2, ..., "x""n"), for "i" = 1, ..., "n", having inverse functions "x""i" = "w""i"("y"1, "y"2, ..., "y""n"), for "i" = 1, ..., "n", and Jacobian formula_11. Thus,
formula_12
The left-hand member is the joint pdf "g"("y"1, "y"2, ..., "y""n"; θ) of "Y"1 = "u"1("X"1, ..., "X""n"), ..., "Y""n" = "u""n"("X"1, ..., "X""n"). In the right-hand member, formula_13 is the pdf of formula_14, so that formula_15 is the quotient of formula_16 and formula_13; that is, it is the conditional pdf formula_17 of formula_18 given formula_19.
But formula_20, and thus formula_21, was given not to depend upon formula_1. Since formula_1 was not introduced in the transformation and accordingly not in the Jacobian formula_22, it follows that formula_23 does not depend upon formula_1 and that formula_14 is a sufficient statistic for formula_1.
The converse is proven by taking:
formula_24
where formula_25 does not depend upon formula_1 because formula_26 depend only upon formula_27, which are independent of formula_28 when conditioned on formula_14, a sufficient statistic by hypothesis. Now divide both members by the absolute value of the non-vanishing Jacobian formula_22, and replace formula_29 by the functions formula_30 in formula_31. This yields
formula_32
where formula_33 is the Jacobian with formula_34 replaced by their value in terms formula_35. The left-hand member is necessarily the joint pdf formula_36 of formula_37. Since formula_38, and thus formula_39, does not depend upon formula_1, then
formula_40
is a function that does not depend upon formula_1.
Another proof.
A simpler, more illustrative proof is as follows, although it applies only in the discrete case.
We use the shorthand notation to denote the joint probability density of formula_41 by formula_42. Since formula_43 is a function of formula_44, we have formula_45, as long as formula_46 and zero otherwise. Therefore:
formula_47
with the last equality being true by the definition of sufficient statistics. Thus formula_48 with formula_49 and formula_50.
Conversely, if formula_48, we have
formula_51
With the first equality by the definition of pdf for multiple variables, the second by the remark above, the third by hypothesis, and the fourth because the summation is not over formula_52.
Let formula_53 denote the conditional probability density of formula_44 given formula_54. Then we can derive an explicit expression for this:
formula_55
With the first equality by definition of conditional probability density, the second by the remark above, the third by the equality proven above, and the fourth by simplification. This expression does not depend on formula_1 and thus formula_43 is a sufficient statistic.
Minimal sufficiency.
A sufficient statistic is minimal sufficient if it can be represented as a function of any other sufficient statistic. In other words, "S"("X") is minimal sufficient if and only if "S"("X") is sufficient, and if "T"("X") is sufficient, then there exists a function "f" such that "S"("X") = "f"("T"("X")).
Intuitively, a minimal sufficient statistic "most efficiently" captures all possible information about the parameter "θ".
A useful characterization of minimal sufficiency is that when the density "f""θ" exists, "S"("X") is minimal sufficient if and only if
formula_56 is independent of "θ" formula_57 "S"("x") = "S"("y").
This follows as a consequence from Fisher's factorization theorem stated above.
A case in which there is no minimal sufficient statistic was shown by Bahadur, 1954. However, under mild conditions, a minimal sufficient statistic does always exist. In particular, in Euclidean space, these conditions always hold if the random variables (associated with formula_58 ) are all discrete or are all continuous.
If there exists a minimal sufficient statistic, and this is usually the case, then every complete sufficient statistic is necessarily minimal sufficient (note that this statement does not exclude a pathological case in which a complete sufficient statistic exists while there is no minimal sufficient statistic). While it is hard to find cases in which a minimal sufficient statistic does not exist, it is not so hard to find cases in which there is no complete statistic.
The collection of likelihood ratios formula_59 for formula_60 is a minimal sufficient statistic if the parameter space is discrete formula_61.
Examples.
Bernoulli distribution.
If "X"1, ..., "X""n" are independent Bernoulli-distributed random variables with expected value "p", then the sum "T"("X") = "X"1 + ... + "X""n" is a sufficient statistic for "p" (here 'success' corresponds to "X""i" = 1 and 'failure' to "X""i" = 0; so "T" is the total number of successes)
This is seen by considering the joint probability distribution:
formula_62
Because the observations are independent, this can be written as
formula_63
and, collecting powers of "p" and 1 − "p", gives
formula_64
which satisfies the factorization criterion, with "h"("x") = 1 being just a constant.
Note the crucial feature: the unknown parameter "p" interacts with the data "x" only via the statistic "T"("x") = Σ "x""i".
As a concrete application, this gives a procedure for distinguishing a fair coin from a biased coin.
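The factorization can also be checked numerically. The following Python sketch (the sample size, arrangement and parameter values are illustrative) enumerates a small Bernoulli sample space and verifies that the conditional probability of a particular arrangement, given the sum "T"("x") = "t", equals 1/C("n", "t") no matter what "p" is:

```python
from itertools import product
from math import comb

def pmf(x, p):
    """Joint pmf of an i.i.d. Bernoulli(p) sample x."""
    t = sum(x)
    return p**t * (1 - p)**(len(x) - t)

n = 5
x = (1, 1, 0, 0, 0)          # one particular arrangement with T(x) = 2
t = sum(x)

for p in (0.2, 0.5, 0.9):
    p_t = sum(pmf(y, p) for y in product((0, 1), repeat=n) if sum(y) == t)
    print(f"p={p}: P(X=x | T={t}) = {pmf(x, p)/p_t:.4f}"
          f" (1/C({n},{t}) = {1/comb(n, t):.4f})")
```

The printed conditional probability is 0.1 for every "p", as sufficiency requires.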
Uniform distribution.
If "X"1, ..., "X""n" are independent and uniformly distributed on the interval [0,"θ"], then "T"("X") = max("X"1, ..., "X""n") is sufficient for θ — the sample maximum is a sufficient statistic for the population maximum.
To see this, consider the joint probability density function of "X" ("X"1...,"X""n"). Because the observations are independent, the pdf can be written as a product of individual densities
formula_65
where 1{"..."} is the indicator function. Thus the density takes form required by the Fisher–Neyman factorization theorem, where "h"("x") = 1{min{"xi"}≥0}, and the rest of the expression is a function of only "θ" and "T"("x") = max{"xi"}.
In fact, the minimum-variance unbiased estimator (MVUE) for "θ" is
formula_66
This is the sample maximum, scaled to correct for the bias, and is MVUE by the Lehmann–Scheffé theorem. The unscaled sample maximum "T"("X") is the maximum likelihood estimator for "θ".
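A short simulation illustrates the bias correction (the values of "θ", "n" and the replication count below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 100_000

x = rng.uniform(0, theta, size=(reps, n))
t = x.max(axis=1)              # sample maximum: sufficient for theta
mle = t                        # maximum likelihood estimator (biased low)
mvue = (n + 1) / n * t         # bias-corrected (MVUE) estimator

print("E[MLE]  ~", mle.mean())   # about n/(n+1) * theta = 2.727
print("E[MVUE] ~", mvue.mean())  # about theta = 3.0
```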
Uniform distribution (with two parameters).
If formula_67 are independent and uniformly distributed on the interval formula_68 (where formula_69 and formula_70 are unknown parameters), then formula_71 is a two-dimensional sufficient statistic for formula_72.
To see this, consider the joint probability density function of formula_73. Because the observations are independent, the pdf can be written as a product of individual densities, i.e.
formula_74
The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting
formula_75
Since formula_76 does not depend on the parameter formula_77 and formula_78 depends only on formula_79 through the function formula_80
the Fisher–Neyman factorization theorem implies formula_81 is a sufficient statistic for formula_72.
Poisson distribution.
If "X"1, ..., "X""n" are independent and have a Poisson distribution with parameter "λ", then the sum "T"("X") = "X"1 + ... + "X""n" is a sufficient statistic for "λ".
To see this, consider the joint probability distribution:
formula_82
Because the observations are independent, this can be written as
formula_83
which may be written as
formula_84
which shows that the factorization criterion is satisfied, where "h"("x") is the reciprocal of the product of the factorials. Note the parameter λ interacts with the data only through its sum "T"("X").
Normal distribution.
If formula_85 are independent and normally distributed with expected value formula_1 (a parameter) and known finite variance formula_86 then
formula_87
is a sufficient statistic for formula_88
To see this, consider the joint probability density function of formula_89. Because the observations are independent, the pdf can be written as a product of individual densities, i.e.
formula_90
The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting
formula_91
Since formula_76 does not depend on the parameter formula_1 and formula_92 depends only on formula_79 through the function
formula_93
the Fisher–Neyman factorization theorem implies formula_94 is a sufficient statistic for formula_1.
If formula_95 is unknown and since formula_96, the above likelihood can be rewritten as
formula_97
The Fisher–Neyman factorization theorem still holds and implies that formula_98 is a joint sufficient statistic for formula_99.
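As an illustration of this joint sufficiency, the following Python sketch constructs two different five-point datasets sharing the same sample mean and sample variance, and checks that their normal log-likelihoods differ by exactly zero (up to rounding) at every tested formula_99, so the two datasets support identical likelihood-based inferences (the particular datasets are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = rng.normal(size=5)
# rescale y to share x's sample mean and (ddof=1) sample variance
y = x.mean() + (y - y.mean()) / y.std(ddof=1) * x.std(ddof=1)

def loglik(data, mu, s2):
    """Normal log-likelihood of an i.i.d. sample."""
    return -0.5 * np.sum((data - mu)**2) / s2 - len(data) / 2 * np.log(2 * np.pi * s2)

for mu, s2 in [(0.0, 1.0), (2.0, 2.5), (5.0, 0.5)]:
    print(mu, s2, loglik(x, mu, s2) - loglik(y, mu, s2))  # 0.0 up to rounding
```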
Exponential distribution.
If formula_37 are independent and exponentially distributed with expected value "θ" (an unknown real-valued positive parameter), then formula_100 is a sufficient statistic for θ.
To see this, consider the joint probability density function of formula_89. Because the observations are independent, the pdf can be written as a product of individual densities, i.e.
formula_101
The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting
formula_102
Since formula_76 does not depend on the parameter formula_1 and formula_92 depends only on formula_79 through the function formula_100
the Fisher–Neyman factorization theorem implies formula_100 is a sufficient statistic for formula_1.
Gamma distribution.
If formula_37 are independent and distributed as a formula_103, where formula_69 and formula_70 are unknown parameters of a Gamma distribution, then formula_104 is a two-dimensional sufficient statistic for formula_77.
To see this, consider the joint probability density function of formula_89. Because the observations are independent, the pdf can be written as a product of individual densities, i.e.
formula_105
The joint density of the sample takes the form required by the Fisher–Neyman factorization theorem, by letting
formula_106
Since formula_76 does not depend on the parameter formula_72 and formula_78 depends only on formula_79 through the function formula_107
the Fisher–Neyman factorization theorem implies formula_108 is a sufficient statistic for formula_109
Rao–Blackwell theorem.
Sufficiency finds a useful application in the Rao–Blackwell theorem, which states that if "g"("X") is any kind of estimator of "θ", then typically the conditional expectation of "g"("X") given sufficient statistic "T"("X") is a better (in the sense of having lower variance) estimator of "θ", and is never worse. Sometimes one can very easily construct a very crude estimator "g"("X"), and then evaluate that conditional expected value to get an estimator that is in various senses optimal.
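A standard worked example: to estimate "e"−"λ" = Pr("X" = 0) from a Poisson sample, start with the crude unbiased estimator "g"("X") = 1{"X"1 = 0} and condition on the sufficient statistic "T" = Σ "X""i"; since "X"1 given "T" = "t" is binomial("t", 1/"n"), the conditional expectation is (("n" − 1)/"n")"T". The Python sketch below (parameter values are illustrative) shows the variance reduction:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 2.0, 10, 50_000

x = rng.poisson(lam, size=(reps, n))
t = x.sum(axis=1)                     # sufficient statistic for lambda

crude = (x[:, 0] == 0).astype(float)  # g(X) = 1{X_1 = 0}, unbiased for exp(-lam)
rb = ((n - 1) / n) ** t               # E[g(X) | T] = ((n-1)/n)^T

print("target        :", np.exp(-lam))
print("crude         : mean %.4f, var %.5f" % (crude.mean(), crude.var()))
print("Rao-Blackwell : mean %.4f, var %.5f" % (rb.mean(), rb.var()))
```

Both estimators are unbiased, but the Rao–Blackwellized one has a far smaller variance.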
Exponential family.
According to the Pitman–Koopman–Darmois theorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a sufficient statistic whose dimension remains bounded as sample size increases. Intuitively, this states that nonexponential families of distributions on the real line require nonparametric statistics to fully capture the information in the data.
Less tersely, suppose formula_110 are independent identically distributed real random variables whose distribution is known to be in some family of probability distributions, parametrized by formula_1, satisfying certain technical regularity conditions, then that family is an "exponential" family if and only if there is a formula_111-valued sufficient statistic formula_112 whose number of scalar components formula_113 does not increase as the sample size "n" increases.
This theorem shows that the existence of a finite-dimensional, real-vector-valued sufficient statistics sharply restricts the possible forms of a family of distributions on the real line.
When the parameters or the random variables are no longer real-valued, the situation is more complex.
Other types of sufficiency.
Bayesian sufficiency.
An alternative formulation of the condition that a statistic be sufficient, set in a Bayesian context, involves the posterior distributions obtained by using the full data-set and by using only a statistic. Thus the requirement is that, for almost every "x",
formula_114
More generally, without assuming a parametric model, we can say that the statistics "T" is "predictive sufficient" if
formula_115
It turns out that this "Bayesian sufficiency" is a consequence of the formulation above, however they are not directly equivalent in the infinite-dimensional case. A range of theoretical results for sufficiency in a Bayesian context is available.
Linear sufficiency.
A concept called "linear sufficiency" can be formulated in a Bayesian context, and more generally. First define the best linear predictor of a vector "Y" based on "X" as formula_116. Then a linear statistic "T"("x") is linear sufficient if
formula_117
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{X}"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "T(\\mathbf{X})"
},
{
"math_id": 3,
"text": "f_{\\mathbf{X}}(x) = h(x) \\, g(\\theta, T(x))"
},
{
"math_id": 4,
"text": "\\mathbf{X}"
},
{
"math_id": 5,
"text": "I\\bigl(\\theta ; T(X)\\bigr) = I(\\theta ; X)"
},
{
"math_id": 6,
"text": " f_\\theta(x)=h(x) \\, g_\\theta(T(x)), "
},
{
"math_id": 7,
"text": "T(x)"
},
{
"math_id": 8,
"text": "h(x)=1"
},
{
"math_id": 9,
"text": "X_1, X_2, \\ldots, X_n"
},
{
"math_id": 10,
"text": " \\prod_{i=1}^n f(x_i; \\theta) = g_1 \\left[u_1 (x_1, x_2, \\dots, x_n); \\theta \\right] H(x_1, x_2, \\dots, x_n). "
},
{
"math_id": 11,
"text": " J = \\left[w_i/y_j \\right] "
},
{
"math_id": 12,
"text": "\n \\prod_{i=1}^n f \\left[ w_i(y_1, y_2, \\dots, y_n); \\theta \\right] = \n |J| g_1 (y_1; \\theta) H \\left[ w_1(y_1, y_2, \\dots, y_n), \\dots, w_n(y_1, y_2, \\dots, y_n) \\right].\n"
},
{
"math_id": 13,
"text": "g_1(y_1;\\theta)"
},
{
"math_id": 14,
"text": "Y_1"
},
{
"math_id": 15,
"text": "H[ w_1, \\dots , w_n] |J|"
},
{
"math_id": 16,
"text": "g(y_1,\\dots,y_n;\\theta)"
},
{
"math_id": 17,
"text": "h(y_2, \\dots, y_n \\mid y_1; \\theta)"
},
{
"math_id": 18,
"text": "Y_2,\\dots,Y_n"
},
{
"math_id": 19,
"text": "Y_1=y_1"
},
{
"math_id": 20,
"text": "H(x_1,x_2,\\dots,x_n)"
},
{
"math_id": 21,
"text": "H\\left[w_1(y_1,\\dots,y_n), \\dots, w_n(y_1, \\dots, y_n))\\right]"
},
{
"math_id": 22,
"text": "J"
},
{
"math_id": 23,
"text": "h(y_2, \\dots, y_n \\mid y_1; \\theta)"
},
{
"math_id": 24,
"text": "g(y_1,\\dots,y_n;\\theta)=g_1(y_1; \\theta) h(y_2, \\dots, y_n \\mid y_1),"
},
{
"math_id": 25,
"text": "h(y_2, \\dots, y_n \\mid y_1)"
},
{
"math_id": 26,
"text": "Y_2 ... Y_n"
},
{
"math_id": 27,
"text": "X_1 ... X_n"
},
{
"math_id": 28,
"text": "\\Theta"
},
{
"math_id": 29,
"text": "y_1, \\dots, y_n"
},
{
"math_id": 30,
"text": "u_1(x_1, \\dots, x_n), \\dots, u_n(x_1,\\dots, x_n)"
},
{
"math_id": 31,
"text": "x_1,\\dots, x_n"
},
{
"math_id": 32,
"text": "\\frac{g\\left[ u_1(x_1, \\dots, x_n), \\dots, u_n(x_1, \\dots, x_n); \\theta \\right]}{|J^*|}=g_1\\left[u_1(x_1,\\dots,x_n); \\theta\\right] \\frac{h(u_2, \\dots, u_n \\mid u_1)}{|J^*|}"
},
{
"math_id": 33,
"text": "J^*"
},
{
"math_id": 34,
"text": "y_1,\\dots,y_n"
},
{
"math_id": 35,
"text": "x_1, \\dots, x_n"
},
{
"math_id": 36,
"text": "f(x_1;\\theta)\\cdots f(x_n;\\theta)"
},
{
"math_id": 37,
"text": "X_1,\\dots,X_n"
},
{
"math_id": 38,
"text": "h(y_2,\\dots,y_n\\mid y_1)"
},
{
"math_id": 39,
"text": "h(u_2,\\dots,u_n\\mid u_1)"
},
{
"math_id": 40,
"text": "H(x_1,\\dots,x_n)=\\frac{h(u_2,\\dots,u_n\\mid u_1)}{|J^*|}"
},
{
"math_id": 41,
"text": "(X, T(X))"
},
{
"math_id": 42,
"text": "f_\\theta(x,t)"
},
{
"math_id": 43,
"text": "T"
},
{
"math_id": 44,
"text": "X"
},
{
"math_id": 45,
"text": "f_\\theta(x,t) = f_\\theta(x)"
},
{
"math_id": 46,
"text": "t = T(x)"
},
{
"math_id": 47,
"text": "\n\\begin{align}\nf_\\theta(x) & = f_\\theta(x,t) \\\\[5pt]\n& = f_\\theta (x\\mid t) f_\\theta(t) \\\\[5pt]\n& = f(x\\mid t) f_\\theta(t)\n\\end{align}\n"
},
{
"math_id": 48,
"text": "f_\\theta(x)=a(x) b_\\theta(t)"
},
{
"math_id": 49,
"text": "a(x) = f_{X \\mid t}(x)"
},
{
"math_id": 50,
"text": "b_\\theta(t) = f_\\theta(t)"
},
{
"math_id": 51,
"text": "\n\\begin{align}\nf_\\theta(t) & = \\sum _{x : T(x) = t} f_\\theta(x, t) \\\\[5pt]\n& = \\sum _{x : T(x) = t} f_\\theta(x) \\\\[5pt]\n& = \\sum _{x : T(x) = t} a(x) b_\\theta(t) \\\\[5pt]\n& = \\left( \\sum _{x : T(x) = t} a(x) \\right) b_\\theta(t).\n\\end{align}"
},
{
"math_id": 52,
"text": "t"
},
{
"math_id": 53,
"text": "f_{X\\mid t}(x)"
},
{
"math_id": 54,
"text": "T(X)"
},
{
"math_id": 55,
"text": "\n\\begin{align}\nf_{X\\mid t}(x)\n& = \\frac{f_\\theta(x, t)}{f_\\theta(t)} \\\\[5pt]\n& = \\frac{f_\\theta(x)}{f_\\theta(t)} \\\\[5pt]\n& = \\frac{a(x) b_\\theta(t)}{\\left( \\sum _{x : T(x) = t} a(x) \\right) b_\\theta(t)} \\\\[5pt]\n& = \\frac{a(x)}{\\sum _{x : T(x) = t} a(x)}.\n\\end{align}"
},
{
"math_id": 56,
"text": "\\frac{f_\\theta(x)}{f_\\theta(y)}"
},
{
"math_id": 57,
"text": "\\Longleftrightarrow"
},
{
"math_id": 58,
"text": "P_\\theta"
},
{
"math_id": 59,
"text": "\\left\\{\\frac{L(X \\mid \\theta_i)}{L(X \\mid \\theta_0)}\\right\\}"
},
{
"math_id": 60,
"text": "i = 1, ..., k"
},
{
"math_id": 61,
"text": "\\left\\{\\theta_0, ..., \\theta_k\\right\\}"
},
{
"math_id": 62,
"text": " \\Pr\\{X=x\\}=\\Pr\\{X_1=x_1,X_2=x_2,\\ldots,X_n=x_n\\}."
},
{
"math_id": 63,
"text": "\np^{x_1}(1-p)^{1-x_1} p^{x_2}(1-p)^{1-x_2}\\cdots p^{x_n}(1-p)^{1-x_n} "
},
{
"math_id": 64,
"text": "\np^{\\sum x_i}(1-p)^{n-\\sum x_i}=p^{T(x)}(1-p)^{n-T(x)}\n"
},
{
"math_id": 65,
"text": "\\begin{align}\nf_{\\theta}(x_1,\\ldots,x_n)\n &= \\frac{1}{\\theta}\\mathbf{1}_{\\{0\\leq x_1\\leq\\theta\\}} \\cdots\n \\frac{1}{\\theta}\\mathbf{1}_{\\{0\\leq x_n\\leq\\theta\\}} \\\\[5pt]\n &= \\frac{1}{\\theta^n} \\mathbf{1}_{\\{0\\leq\\min\\{x_i\\}\\}}\\mathbf{1}_{\\{\\max\\{x_i\\}\\leq\\theta\\}}\n\\end{align}"
},
{
"math_id": 66,
"text": " \\frac{n+1}{n}T(X). "
},
{
"math_id": 67,
"text": "X_1,...,X_n"
},
{
"math_id": 68,
"text": "[\\alpha, \\beta]"
},
{
"math_id": 69,
"text": "\\alpha"
},
{
"math_id": 70,
"text": "\\beta"
},
{
"math_id": 71,
"text": "T(X_1^n)=\\left(\\min_{1 \\leq i \\leq n}X_i,\\max_{1 \\leq i \\leq n}X_i\\right)"
},
{
"math_id": 72,
"text": "(\\alpha\\, , \\, \\beta)"
},
{
"math_id": 73,
"text": "X_1^n=(X_1,\\ldots,X_n)"
},
{
"math_id": 74,
"text": "\\begin{align}\nf_{X_1^n}(x_1^n)\n &= \\prod_{i=1}^n \\left({1 \\over \\beta-\\alpha}\\right) \\mathbf{1}_{ \\{ \\alpha \\leq x_i \\leq \\beta \\} }\n = \\left({1 \\over \\beta-\\alpha}\\right)^n \\mathbf{1}_{ \\{ \\alpha \\leq x_i \\leq \\beta, \\, \\forall \\, i = 1,\\ldots,n\\}} \\\\\n &= \\left({1 \\over \\beta-\\alpha}\\right)^n \\mathbf{1}_{ \\{ \\alpha \\, \\leq \\, \\min_{1 \\leq i \\leq n}X_i \\} } \\mathbf{1}_{ \\{ \\max_{1 \\leq i \\leq n}X_i \\, \\leq \\, \\beta \\} }.\n\\end{align}"
},
{
"math_id": 75,
"text": "\\begin{align}\nh(x_1^n)= 1, \\quad\ng_{(\\alpha, \\beta)}(x_1^n)= \\left({1 \\over \\beta-\\alpha}\\right)^n \\mathbf{1}_{ \\{ \\alpha \\, \\leq \\, \\min_{1 \\leq i \\leq n}X_i \\} } \\mathbf{1}_{ \\{ \\max_{1 \\leq i \\leq n}X_i \\, \\leq \\, \\beta \\} }.\n\\end{align}"
},
{
"math_id": 76,
"text": "h(x_1^n)"
},
{
"math_id": 77,
"text": "(\\alpha, \\beta)"
},
{
"math_id": 78,
"text": "g_{(\\alpha \\, , \\, \\beta)}(x_1^n)"
},
{
"math_id": 79,
"text": "x_1^n"
},
{
"math_id": 80,
"text": "T(X_1^n)= \\left(\\min_{1 \\leq i \\leq n}X_i,\\max_{1 \\leq i \\leq n}X_i\\right),"
},
{
"math_id": 81,
"text": "T(X_1^n) = \\left(\\min_{1 \\leq i \\leq n}X_i,\\max_{1 \\leq i \\leq n}X_i\\right)"
},
{
"math_id": 82,
"text": "\n\\Pr(X=x)=P(X_1=x_1,X_2=x_2,\\ldots,X_n=x_n).\n"
},
{
"math_id": 83,
"text": "\n{e^{-\\lambda} \\lambda^{x_1} \\over x_1 !} \\cdot\n{e^{-\\lambda} \\lambda^{x_2} \\over x_2 !} \\cdots\n{e^{-\\lambda} \\lambda^{x_n} \\over x_n !}\n"
},
{
"math_id": 84,
"text": "\ne^{-n\\lambda} \\lambda^{(x_1+x_2+\\cdots+x_n)} \\cdot\n{1 \\over x_1 ! x_2 !\\cdots x_n ! }\n"
},
{
"math_id": 85,
"text": "X_1,\\ldots,X_n"
},
{
"math_id": 86,
"text": "\\sigma^2,"
},
{
"math_id": 87,
"text": "T(X_1^n)=\\overline{x}=\\frac1n\\sum_{i=1}^nX_i"
},
{
"math_id": 88,
"text": "\\theta."
},
{
"math_id": 89,
"text": "X_1^n=(X_1,\\dots,X_n)"
},
{
"math_id": 90,
"text": "\\begin{align}\nf_{X_1^n}(x_1^n)\n & = \\prod_{i=1}^n \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp \\left (-\\frac{(x_i-\\theta)^2}{2\\sigma^2} \\right ) \\\\ [6pt]\n &= (2\\pi\\sigma^2)^{-\\frac{n}{2}} \\exp \\left ( -\\sum_{i=1}^n \\frac{(x_i-\\theta)^2}{2\\sigma^2} \\right ) \\\\ [6pt]\n & = (2\\pi\\sigma^2)^{-\\frac{n}{2}} \\exp \\left (-\\sum_{i=1}^n \\frac{ \\left ( \\left (x_i-\\overline{x} \\right ) - \\left (\\theta-\\overline{x} \\right ) \\right )^2}{2\\sigma^2} \\right ) \\\\ [6pt]\n & = (2\\pi\\sigma^2)^{-\\frac{n}{2}} \\exp \\left( -{1\\over2\\sigma^2} \\left(\\sum_{i=1}^n(x_i-\\overline{x})^2 + \\sum_{i=1}^n(\\theta-\\overline{x})^2 -2\\sum_{i=1}^n(x_i-\\overline{x})(\\theta-\\overline{x})\\right) \\right) \\\\ [6pt] \n &= (2\\pi\\sigma^2)^{-\\frac{n}{2}} \\exp \\left( -{1\\over2\\sigma^2} \\left (\\sum_{i=1}^n(x_i-\\overline{x})^2 + n(\\theta-\\overline{x})^2 \\right ) \\right ) && \\sum_{i=1}^n(x_i-\\overline{x})(\\theta-\\overline{x})=0 \\\\ [6pt]\n &= (2\\pi\\sigma^2)^{-\\frac{n}{2}} \\exp \\left( -{1\\over2\\sigma^2} \\sum_{i=1}^n (x_i-\\overline{x})^2 \\right ) \\exp \\left (-\\frac{n}{2\\sigma^2} (\\theta-\\overline{x})^2 \\right )\n\n\\end{align}"
},
{
"math_id": 91,
"text": "\\begin{align}\nh(x_1^n) &= (2\\pi\\sigma^2)^{-\\frac{n}{2}} \\exp \\left( -{1\\over2\\sigma^2} \\sum_{i=1}^n (x_i-\\overline{x})^2 \\right ) \\\\[6pt]\ng_\\theta(x_1^n) &= \\exp \\left (-\\frac{n}{2\\sigma^2} (\\theta-\\overline{x})^2 \\right )\n\\end{align}"
},
{
"math_id": 92,
"text": "g_{\\theta}(x_1^n)"
},
{
"math_id": 93,
"text": "T(X_1^n)=\\overline{x}=\\frac1n\\sum_{i=1}^nX_i,"
},
{
"math_id": 94,
"text": "T(X_1^n)"
},
{
"math_id": 95,
"text": " \\sigma^2 "
},
{
"math_id": 96,
"text": "s^2 = \\frac{1}{n-1} \\sum_{i=1}^n \\left(x_i - \\overline{x} \\right)^2 "
},
{
"math_id": 97,
"text": "\\begin{align}\nf_{X_1^n}(x_1^n)= (2\\pi\\sigma^2)^{-n/2} \\exp \\left( -\\frac{n-1}{2\\sigma^2}s^2 \\right) \\exp \\left (-\\frac{n}{2\\sigma^2} (\\theta-\\overline{x})^2 \\right ) .\n\\end{align}"
},
{
"math_id": 98,
"text": "(\\overline{x},s^2)"
},
{
"math_id": 99,
"text": " ( \\theta , \\sigma^2) "
},
{
"math_id": 100,
"text": "T(X_1^n)=\\sum_{i=1}^nX_i"
},
{
"math_id": 101,
"text": "\\begin{align}\nf_{X_1^n}(x_1^n)\n &= \\prod_{i=1}^n {1 \\over \\theta} \\, e^{ {-1 \\over \\theta}x_i }\n = {1 \\over \\theta^n}\\, e^{ {-1 \\over \\theta} \\sum_{i=1}^nx_i }.\n\\end{align}"
},
{
"math_id": 102,
"text": "\\begin{align}\nh(x_1^n)= 1,\\,\\,\\,\ng_{\\theta}(x_1^n)= {1 \\over \\theta^n}\\, e^{ {-1 \\over \\theta} \\sum_{i=1}^nx_i }.\n\\end{align}"
},
{
"math_id": 103,
"text": "\\Gamma(\\alpha \\, , \\, \\beta) "
},
{
"math_id": 104,
"text": "T(X_1^n) = \\left( \\prod_{i=1}^n{X_i} , \\sum_{i=1}^n X_i \\right)"
},
{
"math_id": 105,
"text": "\\begin{align}\nf_{X_1^n}(x_1^n)\n &= \\prod_{i=1}^n \\left({1 \\over \\Gamma(\\alpha) \\beta^\\alpha}\\right) x_i^{\\alpha -1} e^{(-1/\\beta)x_i} \\\\[5pt]\n &= \\left({1 \\over \\Gamma(\\alpha) \\beta^\\alpha}\\right)^n \\left(\\prod_{i=1}^n x_i\\right)^{\\alpha-1} e^{{-1 \\over \\beta} \\sum_{i=1}^n x_i}.\n\\end{align}"
},
{
"math_id": 106,
"text": "\\begin{align}\nh(x_1^n)= 1,\\,\\,\\,\ng_{(\\alpha \\, , \\, \\beta)}(x_1^n)= \\left({1 \\over \\Gamma(\\alpha) \\beta^{\\alpha}}\\right)^n \\left(\\prod_{i=1}^n x_i\\right)^{\\alpha-1} e^{{-1 \\over \\beta} \\sum_{i=1}^n x_i}.\n\\end{align}"
},
{
"math_id": 107,
"text": "T(x_1^n)= \\left( \\prod_{i=1}^n x_i, \\sum_{i=1}^n x_i \\right),"
},
{
"math_id": 108,
"text": "T(X_1^n)= \\left( \\prod_{i=1}^n X_i, \\sum_{i=1}^n X_i \\right)"
},
{
"math_id": 109,
"text": "(\\alpha\\, , \\, \\beta)."
},
{
"math_id": 110,
"text": "X_n, n = 1, 2, 3, \\dots"
},
{
"math_id": 111,
"text": "\\R^m"
},
{
"math_id": 112,
"text": "T(X_1, \\dots, X_n)"
},
{
"math_id": 113,
"text": "m"
},
{
"math_id": 114,
"text": "\\Pr(\\theta\\mid X=x) = \\Pr(\\theta\\mid T(X)=t(x)). "
},
{
"math_id": 115,
"text": "\\Pr(X'=x'\\mid X=x) = \\Pr(X'=x'\\mid T(X)=t(x))."
},
{
"math_id": 116,
"text": "\\hat E[Y\\mid X]"
},
{
"math_id": 117,
"text": "\\hat E[\\theta\\mid X]= \\hat E[\\theta\\mid T(X)] . "
}
]
| https://en.wikipedia.org/wiki?curid=140841 |
14085112 | Adolfo Bartoli | Italian physicist
Adolfo Bartoli (19 March 1851 – 18 July 1896) was an Italian physicist, who is best known for introducing the concept of radiation pressure from thermodynamical considerations.
Born in Florence, Bartoli studied physics and mathematics at the University of Pisa until 1874. He was professor of physics at the Technical Institute of Arezzo from 1876, at the University of Sassari from 1878, at the Technical Institute of Firenze from 1879, at the University of Catania from 1886 to 1893, and at the University of Pavia from 1893.
In 1874 James Clerk Maxwell showed that the existence of tensions in the ether, in other words radiation pressure, follows from his electromagnetic theory.
In 1876 Bartoli derived the existence of radiation pressure from thermodynamics, and his derivation influenced Boltzmann.
Just as the thermodynamics of a gas is independent of the microscopic details of what the gas is, his derivation does not depend on a theory of what light is, such as Maxwell's electromagnetism. Maxwell, in contrast, predicted radiation pressure directly from his theory of electromagnetism. Radiation pressure was therefore also called "Maxwell-Bartoli pressure".
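For a sense of scale, the Maxwell-Bartoli pressure exerted by a beam of intensity "I" is "I"/"c" on a perfectly absorbing surface and 2"I"/"c" on a perfect mirror. A minimal Python sketch, using the solar irradiance at Earth (about 1361 W/m2) as an illustrative intensity:

```python
c = 299_792_458.0            # speed of light, m/s
I = 1361.0                   # solar irradiance at Earth, W/m^2 (illustrative)

print(f"absorbing surface : {I / c:.2e} Pa")      # ~4.5e-6 Pa
print(f"perfect mirror    : {2 * I / c:.2e} Pa")  # ~9.1e-6 Pa
```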
Later the radiation pressure played an important role in the work of Albert Einstein in connection with mass–energy equivalence and the photoelectric effect. Einstein lived in Pavia at that time (1895), when Bartoli held the Physics chair at the local University. However, it is unknown whether Einstein was directly influenced by Bartoli.
Bartoli died in Pavia in 1896.
Bartoli's argument for radiation pressure.
Consider a cylinder with a perfectly reflective piston carrying a mirror in the middle, and two black bodies connected to two heat baths at its two ends. One body is hot at temperature formula_0, and the other is cold at temperature formula_1. Now consider the following heat-engine cycle: the mirror is moved toward the cold body, sweeping up the radiation that the cold body has emitted and compressing it ahead of the mirror, until the compressed radiation is absorbed by the hot body; the mirror is then returned to its starting position.
With one cycle, we have pumped energy away from the cold body. Since the energy has nowhere else to go, it must have ended up in the hot body. If light exerted no pressure, moving the mirror would require no work, and the cycle would transport energy from a colder to a hotter body with no compensating change. To avoid this violation of the second law of thermodynamics, it is necessary that light impart a pressure to the mirror.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_1"
},
{
"math_id": 1,
"text": "T_0"
}
]
| https://en.wikipedia.org/wiki?curid=14085112 |
140858 | Pair production | Interaction of a photon with matter resulting into creation of electron-positron pair
Pair production is the creation of a subatomic particle and its antiparticle from a neutral boson. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus. As energy must be conserved, for pair production to occur the incoming energy of the photon must be at least the total rest mass energy of the two particles created. (As the electron is the lightest, hence lowest mass/energy, elementary particle, it requires the least energetic photons of all possible pair-production processes.) Conservation of energy and momentum are the principal constraints on the process.
All other conserved quantum numbers (angular momentum, electric charge, lepton number) of the produced particles must sum to zero – thus the created particles shall have opposite values of each other. For instance, if one particle has electric charge of +1 the other must have electric charge of −1, or if one particle has strangeness of +1 then another one must have strangeness of −1.
The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of the atomic number of (hence, the number of protons in) the nearby atom.
Photon to electron and positron.
For photons with high photon energy (MeV scale and higher), pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber, leading to the 1948 Nobel Prize in Physics. If the photon is near an atomic nucleus, the energy of a photon can be converted into an electron–positron pair:
γ (Z+) → e− + e+
The photon's energy is converted to particle mass in accordance with Einstein's equation, "E" = "m" ⋅ "c"2, where "E" is energy, "m" is mass and "c" is the speed of light. The photon must have higher energy than the sum of the rest mass energies of an electron and positron (2 ⋅ 511 keV = 1.022 MeV, resulting in a photon wavelength of 1.2132 picometers) for the production to occur. (Thus, pair production does not occur in medical X-ray imaging because these X-rays only contain ~150 keV.)
The photon must be near a nucleus in order to satisfy conservation of momentum, as an electron–positron pair produced in free space cannot satisfy conservation of both energy and momentum. Because of this, when pair production occurs, the atomic nucleus receives some recoil. The reverse of this process is electron–positron annihilation.
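A quick numerical check of this threshold, as a Python sketch using standard CODATA constant values:

```python
h = 6.62607015e-34           # Planck constant, J*s
c = 299_792_458.0            # speed of light, m/s
m_e = 9.1093837015e-31       # electron mass, kg
eV = 1.602176634e-19         # joules per electronvolt

E_thr = 2 * m_e * c**2       # threshold photon energy
print(f"threshold energy    : {E_thr / eV / 1e6:.4f} MeV")    # 1.0220 MeV
print(f"threshold wavelength: {h * c / E_thr * 1e12:.4f} pm")  # 1.2132 pm
```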
Basic kinematics.
These properties can be derived through the kinematics of the interaction. Using four vector notation, the conservation of energy-momentum before and after the interaction gives:
formula_0
where formula_1 is the recoil of the nucleus. Note the modulus of the four vector
formula_2
is:
formula_3
which implies that formula_4 for all cases and formula_5. We can square the conservation equation:
formula_6
However, in most cases the recoil of the nucleus is small compared to the energy of the photon and can be neglected. Taking this approximation of formula_7 and expanding the remaining relation:
formula_8
formula_9
formula_10
Therefore, this approximation can only be satisfied if the electron and positron are emitted in very nearly the same direction, that is, formula_11.
This derivation is a semi-classical approximation. An exact derivation of the kinematics can be done taking into account the full quantum mechanical scattering of photon and nucleus.
Energy transfer.
The energy transfer to electron and positron in pair production interactions is given by:
formula_12
where formula_13 is Planck's constant, formula_14 is the frequency of the photon and the formula_15 is the combined rest mass of the electron–positron. In general the electron and positron can be emitted with different kinetic energies, but the average transferred to each (ignoring the recoil of the nucleus) is:
formula_16
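A minimal sketch of this average (the combined rest-mass energy 1.022 MeV is hard-coded for brevity, and nuclear recoil is neglected as in the formula above):

```python
def mean_kinetic_energy_mev(photon_energy_mev, pair_rest_energy_mev=1.022):
    """Average kinetic energy (MeV) of the electron (or positron),
    ignoring nuclear recoil: (h*nu - 2*m_e*c^2) / 2."""
    return (photon_energy_mev - pair_rest_energy_mev) / 2

print(mean_kinetic_energy_mev(10.0))  # 4.489 MeV per particle
```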
Cross section.
The exact analytic form for the cross section of pair production must be calculated through quantum electrodynamics in the form of Feynman diagrams and results in a complicated function. To simplify, the cross section can be written as:
formula_17
where formula_18 is the fine-structure constant, formula_19 is the classical electron radius, formula_20 is the atomic number of the material, and formula_21 is some complex-valued function that depends on the energy and atomic number. Cross sections are tabulated for different materials and energies.
In 2008 the Titan laser, aimed at a 1 millimeter-thick gold target, was used to generate positron–electron pairs in large numbers.
Astronomy.
Pair production is invoked in the heuristic explanation of hypothetical Hawking radiation. According to quantum mechanics, particle pairs are constantly appearing and disappearing as a quantum foam. In a region of strong gravitational tidal forces, the two particles in a pair may sometimes be wrenched apart before they have a chance to mutually annihilate. When this happens in the region around a black hole, one particle may escape while its antiparticle partner is captured by the black hole.
Pair production is also the mechanism behind the hypothesized pair-instability supernova type of stellar explosion, where pair production suddenly lowers the pressure inside a supergiant star, leading to a partial implosion, and then explosive thermonuclear burning. Supernova SN 2006gy is hypothesized to have been a pair production type supernova.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_\\gamma = p_{\\text{e}^-} + p_{\\text{e}^+} + p_{\\text{ʀ}}"
},
{
"math_id": 1,
"text": "p_\\text{ʀ}"
},
{
"math_id": 2,
"text": "A \\equiv (A^0,\\mathbf{A}) "
},
{
"math_id": 3,
"text": "A^2 = A^{\\mu} A_{\\mu} = - (A^0)^2 + \\mathbf{A} \\cdot \\mathbf{A} "
},
{
"math_id": 4,
"text": "(p_\\gamma)^2 = 0 "
},
{
"math_id": 5,
"text": "(p_{\\text{e}^-})^2 = -m_\\text{e}^2 c^2 "
},
{
"math_id": 6,
"text": "(p_\\gamma)^2 = (p_{\\text{e}^-} + p_{\\text{e}^+} + p_\\text{ʀ})^2 "
},
{
"math_id": 7,
"text": "p_{R} \\approx 0"
},
{
"math_id": 8,
"text": "(p_\\gamma)^2 \\approx (p_{\\text{e}^-})^2 + 2 p_{\\text{e}^-} p_{\\text{e}^+} + (p_{\\text{e}^+})^2 "
},
{
"math_id": 9,
"text": "-2\\, m_\\text{e}^2 c^2 + 2 \\left( -\\frac{E^2}{c^2} + \\mathbf{p}_{\\text{e}^-} \\cdot \\mathbf{p}_{\\text{e}^+} \\right) \\approx 0 "
},
{
"math_id": 10,
"text": "2\\,(\\gamma^2 - 1)\\,m_\\text{e}^2\\,c^2\\,(\\cos \\theta_\\text{e} - 1) \\approx 0 "
},
{
"math_id": 11,
"text": "\\theta_\\text{e} \\approx 0 "
},
{
"math_id": 12,
"text": "(E_k^{pp})_\\text{tr} = h \\nu - 2\\, m_\\text{e} c^2"
},
{
"math_id": 13,
"text": "h"
},
{
"math_id": 14,
"text": "\\nu "
},
{
"math_id": 15,
"text": "2\\, m_\\text{e} c^2"
},
{
"math_id": 16,
"text": "(\\bar E_k^{pp})_\\text{tr} = \\frac{1}{2} (h \\nu - 2\\, m_\\text{e} c^2)"
},
{
"math_id": 17,
"text": "\\sigma = \\alpha \\, r_\\text{e}^2 \\, Z^2 \\, P(E,Z)"
},
{
"math_id": 18,
"text": "\\alpha"
},
{
"math_id": 19,
"text": "r_\\text{e}"
},
{
"math_id": 20,
"text": "Z"
},
{
"math_id": 21,
"text": "P(E,Z)"
}
]
| https://en.wikipedia.org/wiki?curid=140858 |
1409006 | Common knowledge (logic) | Statement that players know and also know that other players know (ad infinitum)
Common knowledge is a special kind of knowledge for a group of agents. There is "common knowledge" of "p" in a group of agents "G" when all the agents in "G" know "p", they all know that they know "p", they all know that they all know that they know "p", and so on "ad infinitum". It can be denoted as formula_0.
The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study "Convention" (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper. It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists grew an interest in the subject of epistemic logic in general – and of common knowledge in particular – starting in the 1980s. There are numerous puzzles based upon the concept which have been extensively investigated by mathematicians such as John Conway.
The philosopher Stephen Schiffer, in his 1972 book "Meaning", independently developed a notion he called "mutual knowledge" (formula_1) which functions quite similarly to Lewis's and Friedell's 1969 "common knowledge". If a trustworthy announcement is made in public, then it becomes common knowledge; however, if it is transmitted to each agent in private, it becomes mutual knowledge but not common knowledge. Even if the fact that "every agent in the group knows "p"" (formula_1) is transmitted to each agent in private, it is still not common knowledge: formula_2. But, if any agent formula_3 publicly announces their knowledge of "p", then it becomes common knowledge that they know "p" (viz. formula_4). If every agent publicly announces their knowledge of "p", "p" becomes common knowledge formula_5.
Example.
Puzzle.
The idea of common knowledge is often introduced by some variant of induction puzzles (e.g. the Muddy children puzzle):
On an island, there are "k" people who have blue eyes, and the rest of the people have green eyes. At the start of the puzzle, no one on the island ever knows their own eye color. By rule, if a person on the island ever discovers they have blue eyes, that person must leave the island at dawn; anyone not making such a discovery always sleeps until after dawn. On the island, each person knows every other person's eye color, there are no reflective surfaces, and there is no communication of eye color.
At some point, an outsider comes to the island, calls together all the people on the island, and makes the following public announcement: "At least one of you has blue eyes". The outsider, furthermore, is known by all to be truthful, and all know that all know this, and so on: it is common knowledge that he is truthful, and thus it becomes common knowledge that there is at least one islander who has blue eyes (formula_6). The problem: finding the eventual outcome, assuming all persons on the island are completely logical (every participant's knowledge obeys the axiom schemata for epistemic logic) and that this too is common knowledge.
Solution.
The answer is that, on the "k"th dawn after the announcement, all the blue-eyed people will leave the island.
Proof.
The solution can be seen with an inductive argument. If "k" = 1 (that is, there is exactly one blue-eyed person), the person will recognize that they alone have blue eyes (by seeing only green eyes in the others) and leave at the first dawn. If "k" = 2, no one will leave at the first dawn, and the inaction (and the implied lack of knowledge for every agent) is observed by everyone, which then becomes "common knowledge" as well (formula_7). The two blue-eyed people, seeing only one person with blue eyes, "and" that no one left on the first dawn (and thus that "k" > 1; and also that the other blue-eyed person does not think that everyone except themself is not blue-eyed formula_8, so there must be "another" blue-eyed person, formula_9), will leave on the second dawn. Inductively, it can be reasoned that no one will leave at the first "k" − 1 dawns if and only if there are at least "k" blue-eyed people. Those with blue eyes, seeing "k" − 1 blue-eyed people among the others and knowing there must be at least "k", will reason that they must have blue eyes and leave.
For "k" > 1, the outsider is only telling the island citizens what they already know: that there are blue-eyed people among them. However, before this fact is announced, the fact is not "common knowledge", but instead mutual knowledge.
For "k" = 2, it is merely "first-order" knowledge (formula_10). Each blue-eyed person knows that there is someone with blue eyes, but each blue eyed person does "not" know that the other blue-eyed person has this same knowledge.
For "k" = 3, it is "second order" knowledge (formula_11). Each blue-eyed person knows that a second blue-eyed person knows that a third person has blue eyes, but no one knows that there is a "third" blue-eyed person with that knowledge, until the outsider makes their statement.
In general: For "k" > 1, it is "("k" − 1)th order" knowledge (formula_12). Each blue-eyed person knows that a second blue-eyed person knows that a third blue-eyed person knows that... (repeat for a total of "k" − 1 levels) a "k"th person has blue eyes, but no one knows that there is a ""k"th" blue-eyed person with that knowledge, until the outsider makes his statement. The notion of "common knowledge" therefore has a palpable effect. Knowing that everyone knows does make a difference. When the outsider's public announcement (a fact already known to all, unless k=1 then the one person with blue eyes would not know until the announcement) becomes common knowledge, the blue-eyed people on this island eventually deduce their status, and leave.
In particular:
Formalization.
Modal logic (syntactic characterization).
Common knowledge can be given a logical definition in multi-modal logic systems in which the modal operators are interpreted epistemically. At the propositional level, such systems are extensions of propositional logic. The extension consists of the introduction of a group "G" of "agents", and of "n" modal operators "Ki" (with "i" = 1, ..., "n") with the intended meaning that "agent "i" knows." Thus "Ki formula_19" (where formula_19 is a formula of the logical calculus) is read "agent "i" knows formula_19." We can define an operator "EG" with the intended meaning of "everyone in group "G" knows" by defining it with the axiom
formula_20
By abbreviating the expression formula_21 with formula_22 and defining formula_23, common knowledge could then be defined with the axiom
formula_24
There is, however, a complication. The languages of epistemic logic are usually "finitary", whereas the axiom above defines common knowledge as an infinite conjunction of formulas, hence not a well-formed formula of the language. To overcome this difficulty, a "fixed-point" definition of common knowledge can be given. Intuitively, common knowledge is thought of as the fixed point of the "equation" formula_25. Here, formula_26 is aleph-naught (the cardinality of the natural numbers). In this way, it is possible to find a formula formula_27 implying formula_28 from which, in the limit, we can infer common knowledge of formula_19.
From this definition it can be seen that if formula_29 is common knowledge, then formula_19 is also common knowledge (formula_30).
This "syntactic" characterization is given semantic content through so-called "Kripke structures". A Kripke structure is given by a set of states (or possible worlds) "S", "n" "accessibility relations" formula_31, defined on formula_32, intuitively representing what states agent "i" considers possible from any given state, and a valuation function formula_33 assigning a truth value, in each state, to each primitive proposition in the language. The Kripke semantics for the knowledge operator is given by stipulating that formula_34 is true at state "s" iff formula_19 is true at "all" states "t" such that formula_35. The semantics for the common knowledge operator, then, is given by taking, for each group of agents "G", the reflexive (modal axiom T) and transitive closure (modal axiom 4) of the formula_36, for all agents "i" in "G", call such a relation formula_37, and stipulating that formula_38 is true at state "s" iff formula_19 is true at "all" states "t" such that formula_39.
Set theoretic (semantic characterization).
Alternatively (yet equivalently) common knowledge can be formalized using set theory (this was the path taken by the Nobel laureate Robert Aumann in his seminal 1976 paper). We start with a set of states "S". An event "E" can then be defined as a subset of the set of states "S". For each agent "i", define a partition on "S", "Pi". This partition represents the state of knowledge of an agent in a state. Intuitively, if two states "s""1" and "s""2" are elements of the same part of partition of an agent, it means that "s""1" and "s""2" are indistinguishable to that agent. In general, in state "s", agent "i" knows that one of the states in "P""i"("s") obtains, but not which one. (Here "P""i"("s") denotes the unique element of "Pi" containing "s". This model excludes cases in which agents know things that are not true.)
A knowledge function "K" can now be defined in the following way:
formula_40
That is, "K""i"("e") is the set of states where the agent will know that event "e" obtains. It is a subset of "e".
Similar to the modal logic formulation above, an operator for the idea that "everyone knows "e"" can be defined.
formula_41
As with the modal operator, we will iterate the "E" function, formula_42 and formula_43. Using this we can then define a common knowledge function,
formula_44
The equivalence with the syntactic approach sketched above can easily be seen: consider an Aumann structure as the one just defined. We can define a correspondent Kripke structure by taking the same space "S", accessibility relations formula_36 that define the equivalence classes corresponding to the partitions formula_45, and a valuation function such that it yields value "true" to the primitive proposition "p" in all and only the states "s" such that formula_46, where formula_47 is the event of the Aumann structure corresponding to the primitive proposition "p". It is not difficult to see that the common knowledge accessibility function formula_37 defined in the previous section corresponds to the finest common coarsening of the partitions formula_45 for all formula_48, which is the finitary characterization of common knowledge also given by Aumann in the 1976 article.
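A minimal Python sketch of this partition machinery for two agents (the state space, partitions and event below are illustrative). Knowledge, "everyone knows" and common knowledge are computed exactly as defined above, with "C" obtained as the fixed point of iterating "E":

```python
S = range(8)                          # states 0..7 (think: three binary features)
P1 = [{0, 1, 2, 3}, {4, 5, 6, 7}]     # agent 1 can observe only the first bit
P2 = [{0, 1, 4, 5}, {2, 3, 6, 7}]     # agent 2 can observe only the second bit

def K(partition, e):
    """K_i(e): the states in which agent i knows that event e obtains."""
    return {s for s in S for part in partition if s in part and part <= e}

def E(e):
    """E(e): the states in which everyone knows e."""
    return K(P1, e) & K(P2, e)

def C(e):
    """C(e): common knowledge of e, as the fixed point of iterating E."""
    prev, cur = None, e
    while cur != prev:
        prev, cur = cur, E(cur)
    return cur

event = {0, 1, 2, 3, 4, 5}
print("K1(e):", K(P1, event))         # {0, 1, 2, 3}
print("K2(e):", K(P2, event))         # {0, 1, 4, 5}
print("E(e): ", E(event))             # {0, 1}
print("C(e): ", C(event))             # set(): e never becomes common knowledge here
```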
Applications.
Common knowledge was used by David Lewis in his pioneering game-theoretical account of convention. In this sense, common knowledge is a concept still central for linguists and philosophers of language (see Clark 1996) maintaining a Lewisian, conventionalist account of language.
Robert Aumann introduced a set theoretical formulation of common knowledge (theoretically equivalent to the one given above) and proved the so-called agreement theorem: if two agents have a common prior probability over a certain event, and their posterior probabilities are common knowledge, then the posterior probabilities are equal. A result based on the agreement theorem and proven by Milgrom shows that, given certain conditions on market efficiency and information, speculative trade is impossible.
The concept of common knowledge is central in game theory. For several years it has been thought that the assumption of common knowledge of rationality for the players in the game was fundamental. It turns out (Aumann and Brandenburger 1995) that, in two-player games, common knowledge of rationality is not needed as an epistemic condition for Nash equilibrium strategies.
Computer scientists use languages incorporating epistemic logics (and common knowledge) to reason about distributed systems. Such systems can be based on logics more complicated than simple propositional epistemic logic, see Wooldridge "Reasoning about Artificial Agents", 2000 (in which he uses a first-order logic incorporating epistemic and temporal operators) or van der Hoek et al. "Alternating Time Epistemic Logic".
In his 2007 book, "The Stuff of Thought: Language as a Window into Human Nature," Steven Pinker uses the notion of common knowledge to analyze the kind of indirect speech involved in innuendoes.
In popular culture.
The comedy movie Hot Lead and Cold Feet has an example of a chain of logic that is collapsed by common knowledge.
The Denver Kid tells his allies that Rattlesnake is in town, but that he [the Kid] has “the edge”: “He's here and I know he's here, and he knows I know he's here, but he "doesn't" know I know he knows I know he's here.”
So both protagonists know the main fact (Rattlesnake is here), but it is "not" “common knowledge”. Note that this is true even if the Kid is wrong: maybe Rattlesnake "does" know that the Kid knows that he knows that he knows; the chain still breaks because the Kid doesn't know that.
Moments later, Rattlesnake confronts the Kid. We see the Kid realizing that his carefully constructed “edge” has collapsed into common knowledge.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_G p"
},
{
"math_id": 1,
"text": "E_G p"
},
{
"math_id": 2,
"text": "E_G E_G p \\not \\Rightarrow C_G p"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "C_G K_a p"
},
{
"math_id": 5,
"text": "C_G E_G p \\Rightarrow C_G p"
},
{
"math_id": 6,
"text": "C_G[\\exists x\\! \\in\\! G ( Bl_{x})]"
},
{
"math_id": 7,
"text": "C_G[\\forall x\\! \\in\\! G ( \\neg K_{x}Bl_{x})]"
},
{
"math_id": 8,
"text": "\\neg K_{a}[\\forall x\\! \\in\\! (G-a) (\\neg Bl_{x})]"
},
{
"math_id": 9,
"text": "\\exists x\\! \\in\\! (G-a) (Bl_{x})"
},
{
"math_id": 10,
"text": "E_G[\\exists x\\! \\in\\! G ( Bl_{x})]"
},
{
"math_id": 11,
"text": "E_GE_G[\\exists x\\! \\in\\! G ( Bl_{x})]=E_G^2[\\exists x\\! \\in\\! G ( Bl_{x})]"
},
{
"math_id": 12,
"text": "E_G^{k-1} [\\exists x\\! \\in\\! G ( Bl_{x})]"
},
{
"math_id": 13,
"text": "E_G^{i} [|G ( Bl_{x})| \\ge j]"
},
{
"math_id": 14,
"text": "i + j \\le k"
},
{
"math_id": 15,
"text": "E_G^{i - 1} [|G ( Bl_{x})| \\ge j + 1]"
},
{
"math_id": 16,
"text": "j \\ge k"
},
{
"math_id": 17,
"text": "i + j > k"
},
{
"math_id": 18,
"text": "i = \\infty, j = 1"
},
{
"math_id": 19,
"text": "\\varphi"
},
{
"math_id": 20,
"text": "E_G \\varphi \\Leftrightarrow \\bigwedge_{i \\in G} K_i \\varphi,"
},
{
"math_id": 21,
"text": "E_GE_G^{n-1} \\varphi"
},
{
"math_id": 22,
"text": "E_G^n \\varphi"
},
{
"math_id": 23,
"text": "E_G^0 \\varphi = \\varphi"
},
{
"math_id": 24,
"text": "C \\varphi \\Leftrightarrow \\bigwedge_{i = 0}^\\infty E^i \\varphi"
},
{
"math_id": 25,
"text": "C_G \\varphi=[\\varphi\\wedge E_G (C_G \\varphi)]=E_G^{\\aleph_0} \\varphi"
},
{
"math_id": 26,
"text": "\\aleph_0"
},
{
"math_id": 27,
"text": "\\psi"
},
{
"math_id": 28,
"text": "E_G (\\varphi \\wedge C_G \\varphi)"
},
{
"math_id": 29,
"text": "E_G \\varphi"
},
{
"math_id": 30,
"text": "C_G E_G \\varphi \\Rightarrow C_G \\varphi"
},
{
"math_id": 31,
"text": "R_1,\\dots,R_n"
},
{
"math_id": 32,
"text": "S \\times S"
},
{
"math_id": 33,
"text": "\\pi"
},
{
"math_id": 34,
"text": "K_i \\varphi"
},
{
"math_id": 35,
"text": "(s,t) \\in R_i"
},
{
"math_id": 36,
"text": "R_i"
},
{
"math_id": 37,
"text": "R_G"
},
{
"math_id": 38,
"text": "C_G \\varphi"
},
{
"math_id": 39,
"text": "(s,t) \\in R_G"
},
{
"math_id": 40,
"text": "K_i(e) = \\{ s \\in S \\mid P_i(s) \\subset e\\}"
},
{
"math_id": 41,
"text": "E(e) = \\bigcap_i K_i(e)"
},
{
"math_id": 42,
"text": "E^1(e) = E(e)"
},
{
"math_id": 43,
"text": "E^{n+1}(e) = E(E^{n}(e))"
},
{
"math_id": 44,
"text": "C(e) = \\bigcap_{n=1}^{\\infty} E^n(e)."
},
{
"math_id": 45,
"text": "P_i"
},
{
"math_id": 46,
"text": "s \\in E^p"
},
{
"math_id": 47,
"text": "E^p"
},
{
"math_id": 48,
"text": "i \\in G"
}
]
| https://en.wikipedia.org/wiki?curid=1409006 |
14092544 | Youden's J statistic | Index that describes the performance of a dichotomous diagnostic test
Youden's J statistic (also called Youden's index) is a single statistic that captures the performance of a dichotomous diagnostic test. (Bookmaker) Informedness is its generalization to the multiclass case and estimates the probability of an informed decision.
Definition.
Youden's "J" statistic is
formula_0
with the two right-hand quantities being sensitivity and specificity. Thus the expanded formula is:
formula_1
The index was suggested by W. J. Youden in 1950 as a way of summarising the performance of a diagnostic test; however, the formula was earlier published in "Science" by C. S. Peirce in 1884. Its value ranges from -1 through 1 (inclusive), and has a zero value when a diagnostic test gives the same proportion of positive results for groups with and without the disease, i.e. the test is useless. A value of 1 indicates that there are no false positives or false negatives, i.e. the test is perfect. The index gives equal weight to false positive and false negative values, so all tests with the same value of the index give the same proportion of total misclassified results. While it is possible to obtain a value of less than zero from this equation, e.g. when classification yields only false positives and false negatives, a value of less than zero just indicates that the positive and negative labels have been switched. After correcting the labels the result will then be in the 0 through 1 range.
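Computing the statistic from a 2×2 confusion matrix is straightforward. Below is a minimal Python sketch; the counts are made-up illustrative values, not data from any particular study:

```python
def youdens_j(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity + specificity - 1

# Illustrative counts for a hypothetical diagnostic test.
tp, fn, tn, fp = 90, 10, 80, 20
print(youdens_j(tp, fn, tn, fp))   # 0.9 + 0.8 - 1 = 0.7
```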
Youden's index is often used in conjunction with receiver operating characteristic (ROC) analysis. The index is defined for all points of an ROC curve, and the maximum value of the index may be used as a criterion for selecting the optimum cut-off point when a diagnostic test gives a numeric rather than a dichotomous result. The index is represented graphically as the height above the chance line, and it is also equivalent to the area under the curve subtended by a single operating point.
Youden's index is also known as deltaP' and generalizes from the dichotomous to the multiclass case as informedness.
The use of a single index is "not generally to be recommended", but informedness or Youden's index is the probability of an informed decision (as opposed to a random guess) and takes into account all predictions.
An unrelated but commonly used combination of basic statistics from information retrieval is the F-score, being a (possibly weighted) harmonic mean of recall and precision where recall = sensitivity = true positive rate. But specificity and precision are totally different measures. F-score, like recall and precision, only considers the so-called positive predictions, with recall being the probability of predicting just the positive class, precision being the probability of a positive prediction being correct, and F-score equating these probabilities under the effective assumption that the positive labels and the positive predictions should have the same distribution and prevalence, similar to the assumption underlying Fleiss' kappa. Youden's J, Informedness, Recall, Precision and F-score are intrinsically unidirectional, aiming to assess the deductive effectiveness of predictions in the direction proposed by a rule, theory or classifier. DeltaP is Youden's J used to assess the reverse or abductive direction (and generalizes to the multiclass case as Markedness); it matches well human learning of associations, rules and superstitions as we model possible causation, while correlation and kappa evaluate bidirectionally.
Matthews correlation coefficient is the geometric mean of the regression coefficient of the dichotomous problem and its dual, where the component regression coefficients of the Matthews correlation coefficient are deltaP and deltaP' (that is Youden's J or Peirce's I). The main article on Matthews correlation coefficient discusses two different generalizations to the multiclass case, one being the analogous geometric mean of Informedness and Markedness. Kappa statistics such as Fleiss' kappa and Cohen's kappa are methods for calculating inter-rater reliability based on different assumptions about the marginal or prior distributions, and are increasingly used as "chance corrected" alternatives to accuracy in other contexts (including the multiclass case). Fleiss' kappa, like F-score, assumes that both variables are drawn from the same distribution and thus have the same expected prevalence, while Cohen's kappa assumes that the variables are drawn from distinct distributions and referenced to a model of expectation that assumes prevalences are independent.
When the true prevalences for the two positive variables are equal, as assumed in Fleiss' kappa and F-score (that is, when the number of positive predictions matches the number of positive classes in the dichotomous, two-class, case), the different kappa and correlation measures collapse to identity with Youden's J, and recall, precision and F-score are similarly identical with accuracy.
{
"math_id": 0,
"text": " J = \\text{sensitivity} + \\text{specificity} -1=\\text{recall}_1 + \\text{recall}_0 -1 "
},
{
"math_id": 1,
"text": "J = \\frac{\\text{true positives}}{\\text{true positives}+\\text{false negatives}}+\\frac{\\text{true negatives}}{\\text{true negatives}+\\text{false positives}}-1"
}
]
| https://en.wikipedia.org/wiki?curid=14092544 |
14093130 | DEVS | Concept within modeling and systems analysis
DEVS, abbreviating Discrete Event System Specification, is a modular and hierarchical formalism for modeling and analyzing general systems: discrete event systems, which might be described by state transition tables; continuous state systems, which might be described by differential equations; and hybrid continuous state and discrete event systems. DEVS is a timed event system.
History.
DEVS is a formalism for modeling and analysis of discrete event systems (DESs). The DEVS formalism was invented by Bernard P. Zeigler, who is emeritus professor at the University of Arizona. DEVS was introduced to the public in Zeigler's first book, "Theory of Modeling and Simulation", in 1976, while Zeigler was an associate professor at University of Michigan. DEVS can be seen as an extension of the Moore machine formalism, which is a finite state automaton where the outputs are determined by the current state alone (and do not depend directly on the input). The extension was done by associating a lifespan with each state and by providing a hierarchical concept with an operation, called coupling.
Since the lifespan of each state is a real number (more precisely, non-negative real) or infinity, it is distinguished from discrete time systems, sequential machines, and Moore machines, in which time is determined by a tick time multiplied by non-negative integers. Moreover, the lifespan can be a random variable; for example the lifespan of a given state can be distributed exponentially or uniformly. The state transition and output functions of DEVS can also be stochastic.
Zeigler proposed a hierarchical algorithm for DEVS model simulation in 1984 [Zeigler84] which was published in "Simulation" journal in 1987. Since then, many extended formalisms of DEVS have been introduced with their own purposes: DESS/DEVS for combined continuous and discrete event systems, P-DEVS for parallel DESs, G-DEVS for piecewise continuous state trajectory modeling of DESs, RT-DEVS for realtime DESs, Cell-DEVS for cellular DESs, Fuzzy-DEVS for fuzzy DESs, Dynamic Structuring DEVS for DESs changing their coupling structures dynamically, and so on. In addition to its extensions, some subclasses such as SP-DEVS and FD-DEVS have been researched for achieving decidability of system properties.
Due to the modular and hierarchical modeling views, as well as its simulation-based analysis capability, the DEVS formalism and its variations have been used in many applications of engineering (such as hardware design, hardware/software codesign, communications systems, manufacturing systems) and science (such as biology and sociology).
Formalism.
DEVS defines system behavior as well as system structure. System behavior in DEVS formalism is described using input and output events as well as states. For example, for the ping-pong player of Fig. 1, the input event is "?receive", and the output event is "!send". Each player, "A", "B", has its states: "Send" and "Wait". "Send" state takes 0.1 seconds to send back the ball that is the output event "!send", while the "Wait" state lasts until the player receives the ball that is the input event "?receive".
The structure of ping-pong game is to connect two players: Player "A"'s output event "!send" is transmitted to Player "B"'s input event "?receive", and vice versa.
In the classic DEVS formalism, "Atomic DEVS" captures the system behavior, while "Coupled DEVS" describes the structure of system.
The following formal definition is for Classic DEVS [ZKP00]. In this article, we will use the time base formula_0, that is, the set of non-negative real numbers, and the extended time base formula_1, that is, the set of non-negative real numbers plus infinity.
Atomic DEVS.
An atomic DEVS model is defined as an 8-tuple
formula_2
where
The atomic DEVS model for player A of Fig. 1 is given
Player=formula_17
such that
formula_18
Both Player A and Player B are atomic DEVS models.
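The atomic model above translates almost line for line into code. The following is a minimal Python sketch of the Player model; the class and method names are our own illustration, not part of any standard DEVS library:

```python
INF = float('inf')

class Player:
    """Atomic DEVS model of one ping-pong player (Fig. 1)."""

    def __init__(self):
        self.phase, self.sigma = 'Send', 0.1   # initial state s0 = (Send, 0.1)

    def ta(self):
        return self.sigma                      # time advance: lifespan of the state

    def delta_ext(self, t_e, x):               # external transition function
        if self.phase == 'Wait' and x == '?receive':
            self.phase, self.sigma = 'Send', 0.1

    def delta_int(self):                       # internal transition function
        if self.phase == 'Send':
            self.phase, self.sigma = 'Wait', INF
        else:
            self.phase, self.sigma = 'Send', 0.1

    def output(self):                          # output function (lambda)
        return '!send' if self.phase == 'Send' else None
```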
Behavior of atomic DEVS.
Simply speaking, there are two cases in which an atomic DEVS model formula_19 can change its state formula_11: (1) when an external input formula_20 comes into the system formula_19; (2) when the elapsed time formula_21 reaches the lifespan of formula_12 which is defined by formula_22. (At the same time as (2), formula_19 generates an output formula_23 which is defined by formula_24.)
For formal behavior description of given an Atomic DEVS model, refer to the page Behavior of DEVS. Computer algorithms to implement the behavior of a given Atomic DEVS model are available at Simulation Algorithms for Atomic DEVS.
Coupled DEVS.
The coupled DEVS defines which sub-components belong to it and how they are connected with each other. A coupled DEVS model is defined as an 8-tuple
formula_25
where
The ping-pong game of Fig. 1 can be modeled as a coupled DEVS model formula_33 where formula_34; formula_35; formula_36; formula_37 is described as above; formula_38; formula_39; and formula_40.
Behavior of coupled DEVS.
Simply speaking, like the behavior of the atomic DEVS class, a coupled DEVS model formula_41 changes its components' states (1) when an external event formula_42 comes into formula_41; (2) when one of the components formula_43, where formula_44, executes its internal state transition and generates its output formula_45. In both cases (1) and (2), a triggering event is transmitted to all influencees which are defined by the coupling sets formula_46 and formula_47.
For a formal definition of the behavior of the coupled DEVS, you can refer to Behavior of Coupled DEVS. Computer algorithms to implement the behavior of a given coupled DEVS model are available at Simulation Algorithms for Coupled DEVS.
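To make the triggering and propagation of events concrete, here is a minimal event loop for the two-player coupled model, continuing the Player sketch above (again our own illustration, not a full DEVS simulator; player B is started in the Wait state so that no tie-breaking Select function is needed):

```python
def simulate(t_end=1.0):
    A, B = Player(), Player()
    B.phase, B.sigma = 'Wait', INF    # start B waiting so events never tie
    peer = {id(A): B, id(B): A}       # couplings C_yx: A.!send -> B.?receive, and vice versa
    last = {id(A): 0.0, id(B): 0.0}   # time of each component's last transition

    while True:
        # imminent component: the one whose internal event is scheduled first
        m = min((A, B), key=lambda c: last[id(c)] + c.ta())
        t = last[id(m)] + m.ta()
        if t > t_end:
            break
        y = m.output()                # output is produced just before delta_int
        m.delta_int()
        last[id(m)] = t
        if y == '!send':              # propagate the event along the coupling
            dst = peer[id(m)]
            dst.delta_ext(t - last[id(dst)], '?receive')
            last[id(dst)] = t
        print(f"t = {t:.1f}: {'A' if m is A else 'B'} outputs {y}")

simulate()   # prints an alternating !send event every 0.1 s
```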
Analysis methods.
Simulation for discrete event systems.
The simulation algorithm of DEVS models considers two issues: time synchronization and message propagation. "Time synchronization" of DEVS is to control all models to have the identical current time. However, for an efficient execution, the algorithm makes the current time jump to the most urgent time when an event is scheduled to execute its internal state transition as well as its output generation. "Message propagation" is to transmit a triggering message which can be either an input or output event along the associated couplings which are defined in a coupled DEVS model. For more detailed information, the reader can refer to Simulation Algorithms for Atomic DEVS and Simulation Algorithms for Coupled DEVS.
Simulation for continuous state systems.
By introducing a quantization method which abstracts a continuous segment as a piecewise constant segment, DEVS can simulate behaviors of continuous state systems which are described by networks of differential algebraic equations. This research was initiated by Zeigler in the 1990s and many properties have been clarified by Prof. Kofman in the 2000s and by Dr. Nutaro. In 2006, Prof. Cellier, who is the author of "Continuous System Modeling" [Cellier91], and Prof. Kofman wrote a textbook, "Continuous System Simulation" [CK06], in which Chapters 11 and 12 cover how DEVS simulates continuous state systems. Dr. Nutaro's book [Nutaro10] covers the discrete event simulation of continuous state systems too.
Verification for discrete event systems.
As an alternative to the sampling-based simulation method, an exhaustive behavior-generating approach, generally called "verification", has been applied for analysis of DEVS models. It is proven that the infinite states of a given DEVS model (especially a coupled DEVS model) can be abstracted by a behaviorally isomorphic finite structure, called a "reachability graph", when the given DEVS model is a sub-class of DEVS such as Schedule-Preserving DEVS (SP-DEVS), Finite & Deterministic DEVS (FD-DEVS) [HZ09], and Finite & Real-time DEVS (FRT-DEVS) [Hwang12]. As a result, based on the reachability graph, (1) dead-lock and live-lock freeness as qualitative properties are decidable with SP-DEVS [Hwang05], FD-DEVS [HZ06], and FRT-DEVS [Hwang12]; and (2) min/max processing time bounds as a quantitative property are decidable with SP-DEVS (as of 2012).
Variations of DEVS.
Extensions (superclassing).
Numerous extensions of the classic DEVS formalism have been developed in the last decades.
Among them are formalisms which allow the model structure to change while the simulation time evolves: G-DEVS [Giambiasi01][Zacharewicz08], Parallel DEVS, Dynamic Structuring DEVS, Cell-DEVS [Wainer09], dynDEVS, Fuzzy-DEVS, GK-DEVS, ml-DEVS, Symbolic DEVS, Real-Time DEVS, and rho-DEVS.
Restrictions (subclassing).
There are some sub-classes known as Schedule-Preserving DEVS (SP-DEVS) and Finite and Deterministic DEVS (FD-DEVS) which were designated to support verification analysis.
The expressiveness of these classes satisfies "E"(SP-DEVS) formula_48 "E"(FD-DEVS) formula_48 "E"(DEVS), where "E"("formalism") denotes the expressiveness of "formalism".
{
"math_id": 0,
"text": " \\mathbb{T}=[0,\\infty)"
},
{
"math_id": 1,
"text": " \\mathbb{T}^\\infty=[0,\\infty]"
},
{
"math_id": 2,
"text": "M=<X,Y,S,s_0,ta, \\delta_{ext}, \\delta_{int}, \\lambda>"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "s_0\\in S"
},
{
"math_id": 7,
"text": "ta:S \\rightarrow \\mathbb{T}^\\infty"
},
{
"math_id": 8,
"text": "\\delta_{ext}:Q \\times X \\rightarrow S "
},
{
"math_id": 9,
"text": "Q=\\{(s,t_e)|s \\in S, t_e \\in (\\mathbb{T} \\cap [0, ta(s)])\\}"
},
{
"math_id": 10,
"text": "t_e"
},
{
"math_id": 11,
"text": "s \\in S"
},
{
"math_id": 12,
"text": "s"
},
{
"math_id": 13,
"text": "\\delta_{int}:S \\rightarrow S "
},
{
"math_id": 14,
"text": "\\lambda:S \\rightarrow Y^\\phi"
},
{
"math_id": 15,
"text": "Y^\\phi=Y \\cup \\{\\phi\\}"
},
{
"math_id": 16,
"text": " \\phi \\not\\in Y"
},
{
"math_id": 17,
"text": "<X,Y,S,s_0,ta,\\delta_{ext}, \\delta_{int}, \\lambda>"
},
{
"math_id": 18,
"text": "\n\\begin{align}\n X &= \\{?\\textit{receive}\\}\\\\\n Y &= \\{!\\textit{send}\\}\\\\\n S &= \\{(d,\\sigma)| d \\in \\{\\textit{Wait},\\textit{Send}\\}, \\sigma \\in \\mathbb{T}^\\infty\\}\\\\\n s_0 &= (\\textit{Send}, 0.1)\\\\\n ta(s) &=\\sigma \\text{ for all } s \\in S\\\\\n\\delta_{ext}(((\\textit{Wait},\\sigma),t_e),?\\textit{receive})&=(\\textit{Send},0.1)\\\\\n\\delta_{int}(\\textit{Send},\\sigma)&=(\\textit{Wait},\\infty)\\\\\n\\delta_{int}(\\textit{Wait},\\sigma)&=(\\textit{Send},0.1)\\\\\n\\lambda(\\textit{Send},\\sigma)&=!\\textit{send}\\\\\n\\lambda(\\textit{Wait},\\sigma)&=\\phi\n\\end{align}\n"
},
{
"math_id": 19,
"text": " M "
},
{
"math_id": 20,
"text": " x \\in X "
},
{
"math_id": 21,
"text": " t_e "
},
{
"math_id": 22,
"text": " ta(s) "
},
{
"math_id": 23,
"text": " y \\in Y"
},
{
"math_id": 24,
"text": " \\lambda(s) "
},
{
"math_id": 25,
"text": "N=<X,Y,D,\\{M_i\\},C_{xx}, C_{yx}, C_{yy}, Select> "
},
{
"math_id": 26,
"text": "D"
},
{
"math_id": 27,
"text": "\\{M_i\\}"
},
{
"math_id": 28,
"text": "i \\in D, M_i"
},
{
"math_id": 29,
"text": "C_{xx}\\subseteq X \\times \\bigcup_{i \\in D} X_i"
},
{
"math_id": 30,
"text": "C_{yx}\\subseteq \\bigcup_{i \\in D} Y_i \\times \\bigcup_{i \\in D} X_i"
},
{
"math_id": 31,
"text": "C_{yy}: \\bigcup_{i \\in D} Y_i \\rightarrow Y^\\phi"
},
{
"math_id": 32,
"text": "Select:2^D \\rightarrow D"
},
{
"math_id": 33,
"text": " N=<X,Y,D,\\{M_i\\},C_{xx}, C_{yx}, C_{yy}, Select>"
},
{
"math_id": 34,
"text": "X=\\{\\} "
},
{
"math_id": 35,
"text": "Y=\\{\\} "
},
{
"math_id": 36,
"text": "D=\\{A,B\\} "
},
{
"math_id": 37,
"text": "M_A \\text{ and } M_B "
},
{
"math_id": 38,
"text": "C_{xx}=\\{\\}"
},
{
"math_id": 39,
"text": "C_{yx}=\\{(A.!send, B.?receive), (B.!send, A.?receive)\\}"
},
{
"math_id": 40,
"text": "C_{yy}(A.!send)=\\phi, C_{yy}(B.!send)=\\phi"
},
{
"math_id": 41,
"text": " N "
},
{
"math_id": 42,
"text": "x \\in X"
},
{
"math_id": 43,
"text": " M_i "
},
{
"math_id": 44,
"text": " i \\in D "
},
{
"math_id": 45,
"text": " y_i \\in Y_i"
},
{
"math_id": 46,
"text": " C_{xx}, C_{yx},"
},
{
"math_id": 47,
"text": " C_{yy} "
},
{
"math_id": 48,
"text": "\\subset"
}
]
| https://en.wikipedia.org/wiki?curid=14093130 |
1409506 | Helly's theorem | Theorem about the intersections of d-dimensional convex sets
Helly's theorem is a basic result in discrete geometry on the intersection of convex sets. It was discovered by Eduard Helly in 1913, but not published by him until 1923, by which time alternative proofs by Radon (1921) and König (1922) had already appeared. Helly's theorem gave rise to the notion of a Helly family.
Statement.
Let "X"1, ..., "Xn" be a finite collection of convex subsets of R"d", with "n" ≥ "d" + 1. If the intersection of every "d" + 1 of these sets is nonempty, then the whole collection has a nonempty intersection; that is,
formula_0
For infinite collections one has to assume compactness:
Let {"Xα"} be a collection of compact convex subsets of R"d", such that every subcollection of cardinality at most "d" + 1 has nonempty intersection. Then the whole collection has nonempty intersection.
Proof.
We prove the finite version, using Radon's theorem as in the proof by Radon (1921). The infinite version then follows by the finite intersection property characterization of compactness: a collection of closed subsets of a compact space has a non-empty intersection if and only if every finite subcollection has a non-empty intersection (once one fixes a single set, the intersections of all others with it are closed subsets of a fixed compact space).
The proof is by induction:
Base case: Let "n" = "d" + 2. By our assumptions, for every "j" = 1, ..., "n" there is a point "xj" that is in the common intersection of all "Xi" with the possible exception of "Xj". Now we apply Radon's theorem to the set "A" = {"x"1, ..., "xn"}, which furnishes us with disjoint subsets "A"1, "A"2 of A such that the convex hull of "A"1 intersects the convex hull of "A"2. Suppose that p is a point in the intersection of these two convex hulls. We claim that
formula_1
Indeed, consider any "j" ∈ {1, ..., "n"}. We shall prove that "p" ∈ "Xj". Note that the only element of A that may not be in "Xj" is "xj". If "xj" ∈ "A"1, then "xj" ∉ "A"2, and therefore "Xj" ⊃ "A"2. Since "Xj" is convex, it then also contains the convex hull of "A"2 and therefore also "p" ∈ "Xj". Likewise, if "xj" ∉ "A"1, then "Xj" ⊃ "A"1, and by the same reasoning "p" ∈ "Xj". Since p is in every "Xj", it must also be in the intersection.
Above, we have assumed that the points "x"1, ..., "xn" are all distinct. If this is not the case, say "xi" = "xk" for some "i" ≠ "k", then "xi" is in every one of the sets "Xj", and again we conclude that the intersection is nonempty. This completes the proof in the case "n" = "d" + 2.
Inductive Step: Suppose "n" > "d" + 2 and that the statement is true for "n"−1. The argument above shows that any subcollection of "d" + 2 sets will have nonempty intersection. We may then consider the collection where we replace the two sets "X""n"−1 and "Xn" with the single set "X""n"−1 ∩ "Xn". In this new collection, every subcollection of "d" + 1 sets will have nonempty intersection. The inductive hypothesis therefore applies, and shows that this new collection has nonempty intersection. This implies the same for the original collection, and completes the proof.
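For "d" = 1 the theorem reduces to a statement about intervals on the real line: if every two intervals intersect, all of them share a point. This special case is easy to verify numerically, as in the following Python sketch (the intervals are arbitrary examples):

```python
import itertools

def pairwise_intersect(intervals):
    # every d + 1 = 2 of the intervals must have a common point
    return all(max(a[0], b[0]) <= min(a[1], b[1])
               for a, b in itertools.combinations(intervals, 2))

def common_point(intervals):
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return lo if lo <= hi else None   # any point of [lo, hi] is common to all

intervals = [(0, 5), (2, 7), (4, 9)]  # every pair intersects
assert pairwise_intersect(intervals)
print(common_point(intervals))         # 4: the whole family has a common point
```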
Colorful Helly theorem.
The colorful Helly theorem is an extension of Helly's theorem in which, instead of one collection, there are "d"+1 collections of convex subsets of R"d".
If, for "every" choice of a "transversal" – one set from every collection – there is a point in common to all the chosen sets, then for "at least one" of the collections, there is a point in common to all sets in the collection.
Figuratively, one can consider the "d"+1 collections to be of "d"+1 different colors. Then the theorem says that, if every choice of one-set-per-color has a non-empty intersection, then there exists a color such that all sets of that color have a non-empty intersection.
Fractional Helly theorem.
For every "a" > 0 there is some "b" > 0 such that, if "X"1, ..., "Xn" are "n" convex subsets of R"d", and at least an "a"-fraction of ("d"+1)-tuples of the sets have a point in common, then a fraction of at least "b" of the sets have a point in common.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bigcap_{j=1}^n X_j\\ne\\varnothing."
},
{
"math_id": 1,
"text": "p\\in\\bigcap_{j=1}^n X_j."
}
]
| https://en.wikipedia.org/wiki?curid=1409506 |
14095209 | Synaptic augmentation | Form of short term synaptic plasticity
Augmentation is one of four components of short-term synaptic plasticity that increases the probability of releasing synaptic vesicles during and after repetitive stimulation such that
formula_0
when all the other components of enhancement and depression are zero, where formula_1 is augmentation at time formula_2 and 0 refers to the baseline response to a single stimulus. The increase in the number of synaptic vesicles that release their transmitter leads to enhancement of the post synaptic response. Augmentation can be differentiated from the other components of enhancement by its kinetics of decay and by pharmacology. Augmentation selectively decays with a time constant of about 7 seconds and its magnitude is enhanced in the presence of barium. All four components are thought to be associated with or triggered by increases in internal calcium ions that build up and decay during repetitive stimulation.
During a train of impulses the enhancement of synaptic strength due to the underlying component formula_3 that gives rise to augmentation can be described by
formula_4
where formula_5 is the unit impulse function at the time of stimulation, formula_6 is the incremental increase in formula_3 with each impulse, and formula_7 is the rate constant for the loss of formula_3. During a stimulus train the magnitude of augmentation added by each impulse, a*, can increase during the train such that
formula_8
where formula_9 is the increment added by the first impulse of the train, formula_10 is a constant that determines the increase in formula_6 with each impulse, formula_11 is the stimulation rate, and formula_12 is the duration of stimulation.
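These equations can be integrated numerically over a stimulus train: between impulses formula_3 decays exponentially with rate formula_7, and each impulse adds the current increment formula_6. Below is a minimal Python sketch; the parameter values are illustrative assumptions, not fitted to any data set (formula_7 is taken as 1/7 s−1 to match the ~7 second decay time constant):

```python
import math

def augmentation_train(S=10.0, T=2.0, a0=0.01, Z=1.02, k=1.0/7.0):
    """Track A* during a train of impulses at rate S (Hz) lasting T seconds.

    Each impulse adds a* = a0 * Z**(S*t); between impulses A* decays
    exponentially with rate constant k (~1/7 s^-1 for augmentation).
    All parameter values here are illustrative only."""
    A, t, dt, history = 0.0, 0.0, 1.0 / S, []
    while t < T:
        A += a0 * Z ** (S * t)     # impulse increments A* by a*
        A *= math.exp(-k * dt)     # exponential decay until the next impulse
        t += dt
        history.append((t, A))
    return history

for t, A in augmentation_train()[4::5]:
    print(f"t = {t:.1f} s  A* = {A:.4f}")
```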
Augmentation is differentiated from the three other components of enhancement by its time constant of decay. This is shown in Table 1 where the first and second components of facilitation, F1 and F2, decay with time constants of about 50 and 300 ms, and potentiation, P, decays with a time constant that ranges from tens of seconds to minutes depending on the duration of stimulation. Also included in the table are two components of depression, D1 and D2, along with their associated time constants of recovery back to normal. Depression at some synapses may arise from depletion of synaptic vesicles available for release. Depression of synaptic vesicle release may mask augmentation because of overlapping time courses. Also included in the table is the fractional change in transmitter release arising from one impulse. A magnitude of 0.8 would increase transmitter release by 80%.
†The magnitude of augmentation added by each impulse can increase during the train.
‡The time constant of P can increase with repetitive stimulation.
The balance between various components of enhancement and depression at the mammalian synapse is affected by temperature so that maintenance of the components of enhancement is greatly reduced at temperatures lower than physiological. During repetitive stimulation at 23 °C components of depression dominate synaptic release, whereas at 33–38 °C synaptic strength increases due to a shift towards components of enhancement. | [
{
"math_id": 0,
"text": "A(t) = [{\\rm Transmitter Release}(t)/ {\\rm Transmitter Release} (0)] - 1,"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "A^*"
},
{
"math_id": 4,
"text": "\\frac{d A^*}{d t} = J(t)a^* - k_{A^*}A^*"
},
{
"math_id": 5,
"text": "J(t)"
},
{
"math_id": 6,
"text": "a^*"
},
{
"math_id": 7,
"text": "k_{A^*}"
},
{
"math_id": 8,
"text": "a^* = a^*_0 Z^{S T}"
},
{
"math_id": 9,
"text": "a^*_0"
},
{
"math_id": 10,
"text": "Z"
},
{
"math_id": 11,
"text": "S"
},
{
"math_id": 12,
"text": "T"
}
]
| https://en.wikipedia.org/wiki?curid=14095209 |
1409530 | Equilibrium moisture content | Moisture content at which a material is neither gaining nor losing moisture
The equilibrium moisture content (EMC) of a hygroscopic material surrounded at least partially by air is the moisture content at which the material is neither gaining nor losing moisture. The value of the EMC depends on the material and the relative humidity and temperature of the air with which it is in contact. The speed with which it is approached depends on the properties of the material, the surface-area-to-volume ratio of its shape, and the speed with which humidity is carried away or towards the material (e.g. diffusion in stagnant air or convection in moving air).
Equilibrium moisture content of grains.
The moisture content of grains is an essential property in food storage. The moisture content that is safe for long-term storage is 12% for corn, sorghum, rice and wheat, and 11% for soybean.
At a constant relative humidity of air, the EMC will drop by about 0.5% for every increase of 10 °C air temperature.
The following table shows the equilibrium moisture contents for a number of grains. These values are only approximations since the exact values depend on the specific variety of a grain.
Equilibrium moisture content of wood.
The moisture content of wood below the fiber saturation point is a function of both relative humidity and temperature of surrounding air. The moisture content ("M") of wood is defined as:
formula_0
where "m" is the mass of the wood (with moisture) and formula_1 is the oven-dry mass of wood (i.e. no moisture). If the wood is placed in an environment at a particular temperature and relative humidity, its moisture content will generally begin to change in time, until it is finally in equilibrium with its surroundings, and the moisture content no longer changes in time. This moisture content is the EMC of the wood for that temperature and relative humidity.
The Hailwood-Horrobin equation for two hydrates is often used to approximate the relationship between EMC, temperature ("T"), and relative humidity ("h"):
formula_2
where "M"eq is the equilibrium moisture content (percent), "T" is the temperature (degrees Fahrenheit), h is the relative humidity (fractional) and:
formula_3
formula_4
formula_5
formula_6
This equation does not account for the slight variations with wood species, state of mechanical stress, and/or hysteresis. It is an empirical fit to tabulated data provided in the same reference, and closely agrees with the tabulated data. For example, at "T" = 140 °F and "h" = 0.55, EMC = 8.4% from the above equation, while EMC = 8.0% from the tabulated data.
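The fit is straightforward to evaluate in code. The following Python sketch implements the equation as given and reproduces the worked value above ("T" = 140 °F, "h" = 0.55 gives EMC ≈ 8.4%):

```python
def emc(T, h):
    """Hailwood-Horrobin EMC in percent; T in degrees F, h as a fraction."""
    W  = 330 + 0.452*T + 0.00415*T**2
    k  = 0.791 + 4.63e-4*T - 8.44e-7*T**2
    k1 = 6.34 + 7.75e-4*T - 9.35e-5*T**2
    k2 = 1.09 + 2.84e-2*T - 9.04e-5*T**2
    kh = k * h
    return (1800.0 / W) * (kh / (1 - kh)
                           + (k1*kh + 2*k1*k2*kh**2) / (1 + k1*kh + k1*k2*kh**2))

print(emc(140, 0.55))   # ~8.4 (percent moisture content)
```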
Equilibrium moisture content of sands, soils and building materials.
Materials such as stones, sand and ceramics are considered 'dry' and have a much lower equilibrium moisture content than organic materials like wood and leather: typically a fraction of a percent by weight when in equilibrium with air of relative humidity between 10% and 90%. This affects the rate at which buildings need to dry out after construction, with typical cements starting at 40-60% water content.
This is also important for construction materials such as render reinforced with organic materials, as modest changes in the content of different types of straw and wood shavings have a significant influence on the overall moisture content.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M = \\frac{m-m_{od}}{m_{od}}"
},
{
"math_id": 1,
"text": "m_{od}"
},
{
"math_id": 2,
"text": "M_{\\mathrm{eq}}=\\frac{1800}{W}\\left[\\frac{kh}{1-kh}\\,+\\,\\frac{k_1kh+2k_1k_2k^2h^2}{1+k_1kh+k_1k_2k^2h^2}\\right]"
},
{
"math_id": 3,
"text": "W = 330 + 0.452\\,T + 0.00415\\,T^2 "
},
{
"math_id": 4,
"text": "k = 0.791 + 4.63\\times 10^{-4}\\,T - 8.44\\times 10^{-7}\\,T^2 "
},
{
"math_id": 5,
"text": "k_1= 6.34 + 7.75\\times 10^{-4}\\,T - 9.35\\times 10^{-5}\\,T^2 "
},
{
"math_id": 6,
"text": "k_2= 1.09 + 2.84\\times 10^{-2}\\,T - 9.04\\times 10^{-5}\\,T^2 "
}
]
| https://en.wikipedia.org/wiki?curid=1409530 |
1409541 | Berkson's paradox | Tendency to misinterpret statistical experiments involving conditional probabilities
Berkson's paradox, also known as Berkson's bias, collider bias, or Berkson's fallacy, is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks, and conditioning on a collider in graphical models.
It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.
Examples.
Overview.
The most common example of Berkson's paradox is a false observation of a "negative" correlation between two desirable traits, i.e., that members of a population which have some desirable trait tend to lack a second. Berkson's paradox occurs when this observation appears true when in reality the two properties are unrelated—or even "positively" correlated—because members of the population where both are absent are not equally observed. For example, a person may observe from their experience that fast food restaurants in their area which serve good hamburgers tend to serve bad fries and vice versa; but because they would likely not eat anywhere where "both" were bad, they fail to allow for the large number of restaurants in this category which would weaken or even flip the correlation.
Original illustration.
Berkson's original illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. Because samples are taken from a hospital in-patient population, rather than from the general public, this can result in a spurious negative association between the disease and the risk factor. For example, if the risk factor is diabetes and the disease is cholecystitis, a hospital patient "without" diabetes is "more" likely to have cholecystitis than a member of the general population, since the patient must have had some non-diabetes (possibly cholecystitis-causing) reason to enter the hospital in the first place. That result will be obtained regardless of whether there is any association between diabetes and cholecystitis in the general population.
Ellenberg example.
An example presented by Jordan Ellenberg: Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold. Then nicer men do not have to be as handsome to qualify for Alex's dating pool. So, "among the men that Alex dates", Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex's selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population (since even among nice men, the ugliest portion of the population is skipped). Berkson's negative correlation is an effect that arises "within" the dating pool: the rude men that Alex dates must have been "even more" handsome to qualify.
Quantitative example.
As a quantitative example, suppose a collector has 1000 postage stamps, of which 300 are pretty and 100 are rare, with 30 being both pretty and rare. 30% of all his stamps are pretty and 10% of his pretty stamps are rare, so prettiness tells nothing about rarity. He puts the 370 stamps which are pretty or rare on display. Just over 27% of the stamps on display are rare (100/370), but still only 10% (30/300) of the pretty stamps are rare (and 100% of the 70 not-pretty stamps on display are rare). If an observer only considers stamps on display, they will observe a spurious negative relationship between prettiness and rarity as a result of the selection bias (that is, not-prettiness strongly indicates rarity in the display, but not in the total collection).
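The arithmetic of the stamp example can be checked directly, for instance in Python:

```python
total, pretty, rare, both = 1000, 300, 100, 30
on_display = pretty + rare - both            # stamps that are pretty or rare: 370

print(rare / total)                          # 0.10: rare overall
print(rare / on_display)                     # ~0.27: rare among the display
print(both / pretty)                         # 0.10: rare among pretty stamps
print((rare - both) / (on_display - pretty)) # 1.00: all 70 not-pretty display stamps are rare
```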
Statement.
Two independent events become conditionally dependent given that at least one of them occurs. Symbolically:
If formula_0 and formula_1 then
formula_2
Proof: Note that formula_3 and formula_4
which, together with formula_0 and formula_5 (so formula_6), implies that
formula_7
One can see this in tabular form as follows: the yellow regions are the outcomes where at least one event occurs (and ~A means "not A").
For instance, if one has a sample of formula_8, and both "formula_9" and formula_10 occur independently half the time (formula_11), one obtains 25 outcomes in each of the four cells ("formula_9" and formula_10, "formula_9" and ~formula_10, ~"formula_9" and formula_10, neither).
So in formula_12 outcomes, either "formula_9" or formula_10 occurs, of which formula_13 have "formula_9" occurring. By comparing the conditional probability of "formula_9" to the unconditional probability of "formula_9":
formula_14
We see that the probability of formula_9 is higher (formula_15) in the subset of outcomes where ("formula_9" "or" "formula_10") occurs, than in the overall population (formula_16). On the other hand, the probability of formula_9 given both formula_10 and ("formula_9" or "formula_10") is simply the unconditional probability of "formula_9", formula_17, since "formula_9" is independent of "formula_10". In the numerical example, we have conditioned on the 50 outcomes where formula_10 occurs:
Here the probability of "formula_9" is formula_18.
Berkson's paradox arises because the conditional probability of "formula_9" given formula_10 "within the three-cell subset" equals the conditional probability in the overall population, but the unconditional probability within the subset is inflated relative to the unconditional probability in the overall population, hence, within the subset, the presence of formula_10 decreases the conditional probability of "formula_9" (back to its overall unconditional probability):
formula_19
formula_20
Because the effect of conditioning on formula_21 derives from the relative size of formula_22 and formula_17, the effect is particularly large when formula_9 is rare (formula_23) but very strongly correlated to formula_10 (formula_24). For example, consider the case where N is very large: out of N + 1 outcomes in total, "formula_9" and formula_10 occur together in exactly one outcome, and neither occurs in the remaining N.
For the case without conditioning on formula_21 we have
formula_25
formula_26
So A occurs rarely, unless B is present, when A occurs always. Thus B is dramatically increasing the likelihood of A.
For the case with conditioning on formula_21 we have
formula_27
formula_28
Now A occurs always, whether B is present or not. So B has no impact on the likelihood of A. Thus we see that for highly correlated data a huge positive correlation of B on A can be effectively removed when one conditions on formula_21.
{
"math_id": 0,
"text": "P(A\\cap B)=P(A)P(B)"
},
{
"math_id": 1,
"text": "P(A\\cup B) < 1"
},
{
"math_id": 2,
"text": "P(A\\cap B|A\\cup B) < P(A|A\\cup B)P(B|A\\cup B)"
},
{
"math_id": 3,
"text": "P(A|A\\cup B)=P(A)/P(A\\cup B)"
},
{
"math_id": 4,
"text": "P(B|A\\cup B)=P(B)/P(A\\cup B)"
},
{
"math_id": 5,
"text": "P(A\\cup B) < 1 "
},
{
"math_id": 6,
"text": " \\frac{1}{P(A \\cup B)} < \\frac{1}{[P(A \\cup B)]^2} \\ "
},
{
"math_id": 7,
"text": "\n\\begin{align}\nP(A\\cap B|A\\cup B) = \n\\frac{P(A\\cap B)}{P(A\\cup B)} = \n\\frac{P(A)P(B)}{P(A\\cup B)} <\n\\frac{P(A)P(B)}{[P(A\\cup B) ]^2} = P(A|A\\cup B)P(B|A\\cup B).\n\\end{align}\n"
},
{
"math_id": 8,
"text": "100"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "B"
},
{
"math_id": 11,
"text": "P(A) = P(B) = 1 / 2"
},
{
"math_id": 12,
"text": "75"
},
{
"math_id": 13,
"text": "50"
},
{
"math_id": 14,
"text": "P(A|A \\cup B) = 50 / 75 = 2 / 3 > P(A) = 50 / 100 = 1 / 2"
},
{
"math_id": 15,
"text": "2 / 3"
},
{
"math_id": 16,
"text": "1 / 2"
},
{
"math_id": 17,
"text": "P(A)"
},
{
"math_id": 18,
"text": "25 / 50 = 1 / 2"
},
{
"math_id": 19,
"text": "P(A|B, A \\cup B) = P(A|B) = P(A)"
},
{
"math_id": 20,
"text": "P(A|A \\cup B) > P(A)"
},
{
"math_id": 21,
"text": "(A \\cup B)"
},
{
"math_id": 22,
"text": "P(A|A \\cup B)"
},
{
"math_id": 23,
"text": "P(A)<<1"
},
{
"math_id": 24,
"text": "P(A|B) \\approx 1"
},
{
"math_id": 25,
"text": "P(A) = 1/(N+1)"
},
{
"math_id": 26,
"text": "P(A|B) = 1"
},
{
"math_id": 27,
"text": "P(A|A \\cup B) = 1"
},
{
"math_id": 28,
"text": "P(A|B, A \\cup B) = P(A|B) = 1"
}
]
| https://en.wikipedia.org/wiki?curid=1409541 |
1409609 | Dyson series | Expansion of the time evolution operator
In scattering theory, a part of mathematical physics, the Dyson series, formulated by Freeman Dyson, is a perturbative expansion of the time evolution operator in the interaction picture. Each term can be represented by a sum of Feynman diagrams.
This series diverges asymptotically, but in quantum electrodynamics (QED) at the second order the difference from experimental data is in the order of 10−10. This close agreement holds because the coupling constant (also known as the fine-structure constant) of QED is much less than 1.
Dyson operator.
In the interaction picture, a Hamiltonian "H" can be split into a "free" part "H"0 and an "interacting part" "V"S("t") as "H" = "H"0 + "V"S("t").
The potential in the interacting picture is
formula_0
where formula_1 is time-independent and formula_2 is the possibly time-dependent interacting part of the Schrödinger picture.
To avoid subscripts, formula_3 stands for formula_4 in what follows.
In the interaction picture, the evolution operator U is defined by the equation:
formula_5
This is sometimes called the Dyson operator.
The evolution operator forms a unitary group with respect to the time parameter. It has the group properties: identity, formula_6; composition, formula_7; inverse, formula_8; and unitarity, formula_9. From these it is possible to derive the time evolution equation of the propagator:
formula_10
In the interaction picture, the Hamiltonian is the same as the interaction potential formula_11 and thus the equation can also be written in the interaction picture as
formula_12
"Caution": this time evolution equation is not to be confused with the Tomonaga–Schwinger equation.
The formal solution is
formula_13
which is ultimately a type of Volterra integral equation.
Derivation of the Dyson series.
An iterative solution of the Volterra equation above leads to the following Neumann series:
formula_14
Here, formula_15, and so the fields are time-ordered. It is useful to introduce an operator formula_16, called the "time-ordering operator", and to define
formula_17
The limits of the integration can be simplified. In general, given some symmetric function formula_18 one may define the integrals
formula_19
and
formula_20
The region of integration of the second integral can be broken into formula_21 sub-regions, defined by formula_15. Due to the symmetry of formula_22, the integral in each of these sub-regions is the same and equal to formula_23 by definition. It follows that
formula_24
Applied to the previous identity, this gives
formula_25
Summing up all the terms, the Dyson series is obtained. It is a simplified version of the Neumann series above which includes the time-ordered products; it is the path-ordered exponential:
formula_26
This result is also called Dyson's formula. The group laws can be derived from this formula.
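For a time-independent interaction the time-ordering becomes trivial and the Dyson series reduces to the ordinary Taylor series of the matrix exponential, which allows a quick numerical sanity check. Below is a sketch using NumPy and SciPy, with an arbitrarily chosen 2×2 Hermitian interaction and ħ = 1 (the matrix and truncation order are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

V = np.array([[0.0, 1.0],     # arbitrary time-independent Hermitian interaction
              [1.0, 0.5]])
t = 1.0

U_exact = expm(-1j * V * t)   # exact evolution operator

U = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)             # n = 0 term of the series
for n in range(15):
    U += term                               # accumulate the order-n contribution
    term = term @ (-1j * V * t) / (n + 1)   # next order: multiply by -iVt/(n+1)

print(np.max(np.abs(U - U_exact)))          # ~1e-13: the truncated series converges to U
```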
Application on state vectors.
The state vector at time formula_27 can be expressed in terms of the state vector at time formula_28, for formula_29 as
formula_30
The inner product of an initial state at formula_31 with a final state at formula_32 in the Schrödinger picture, for formula_33, is:
formula_34
The "S"-matrix may be obtained by writing this in the Heisenberg picture, taking the in and out states to be at infinity:
formula_35
Note that the time ordering was reversed in the scalar product.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_{\\mathrm I}(t) = \\mathrm{e}^{\\mathrm{i} H_{0}(t - t_{0})/\\hbar} V_{\\mathrm S}(t) \\mathrm{e}^{-\\mathrm{i} H_{0} (t - t_{0})/\\hbar},"
},
{
"math_id": 1,
"text": "H_0"
},
{
"math_id": 2,
"text": "V_{\\mathrm S}(t)"
},
{
"math_id": 3,
"text": "V(t)"
},
{
"math_id": 4,
"text": "V_\\mathrm{I}(t) "
},
{
"math_id": 5,
"text": "\\Psi(t) = U(t,t_0) \\Psi(t_0)"
},
{
"math_id": 6,
"text": "U(t,t) = 1,"
},
{
"math_id": 7,
"text": "U(t,t_0) = U(t,t_1) U(t_1,t_0),"
},
{
"math_id": 8,
"text": "U^{-1}(t,t_0) = U(t_0,t),"
},
{
"math_id": 9,
"text": "U^{\\dagger}(t,t_0) U(t,t_0)=\\mathbb{1}"
},
{
"math_id": 10,
"text": "i\\hbar\\frac d{dt} U(t,t_0)\\Psi(t_0) = V(t) U(t,t_0)\\Psi(t_0)."
},
{
"math_id": 11,
"text": "H_{\\rm int}=V(t)"
},
{
"math_id": 12,
"text": "i\\hbar \\frac d{dt} \\Psi(t) = H_{\\rm int}\\Psi(t)"
},
{
"math_id": 13,
"text": "U(t,t_0)=1 - i\\hbar^{-1} \\int_{t_0}^t{dt_1\\ V(t_1)U(t_1,t_0)},"
},
{
"math_id": 14,
"text": "\n\\begin{align}\nU(t,t_0) = {} & 1 - i\\hbar^{-1} \\int_{t_0}^t dt_1V(t_1) + (-i\\hbar^{-1})^2\\int_{t_0}^t dt_1 \\int_{t_0}^{t_1} \\, dt_2 V(t_1)V(t_2)+\\cdots \\\\\n& {} + (-i\\hbar^{-1})^n\\int_{t_0}^t dt_1\\int_{t_0}^{t_1} dt_2 \\cdots \\int_{t_0}^{t_{n-1}} dt_nV(t_1)V(t_2) \\cdots V(t_n) +\\cdots.\n\\end{align}\n"
},
{
"math_id": 15,
"text": "t_1 > t_2 > \\cdots > t_n"
},
{
"math_id": 16,
"text": "\\mathcal T"
},
{
"math_id": 17,
"text": "U_n(t,t_0)=(-i\\hbar^{-1} )^n \\int_{t_0}^t dt_1 \\int_{t_0}^{t_1} dt_2 \\cdots \\int_{t_0}^{t_{n-1}} dt_n\\,\\mathcal TV(t_1) V(t_2)\\cdots V(t_n)."
},
{
"math_id": 18,
"text": "K(t_1, t_2,\\dots,t_n),"
},
{
"math_id": 19,
"text": "S_n=\\int_{t_0}^t dt_1\\int_{t_0}^{t_1} dt_2\\cdots \\int_{t_0}^{t_{n-1}} dt_n \\, K(t_1, t_2,\\dots,t_n)."
},
{
"math_id": 20,
"text": "I_n=\\int_{t_0}^t dt_1\\int_{t_0}^t dt_2\\cdots\\int_{t_0}^t dt_nK(t_1, t_2,\\dots,t_n)."
},
{
"math_id": 21,
"text": "n!"
},
{
"math_id": 22,
"text": "K"
},
{
"math_id": 23,
"text": "S_n"
},
{
"math_id": 24,
"text": "S_n = \\frac{1}{n!}I_n."
},
{
"math_id": 25,
"text": "U_n=\\frac{(-i \\hbar^{-1})^n}{n!}\\int_{t_0}^t dt_1\\int_{t_0}^t dt_2\\cdots\\int_{t_0}^t dt_n \\, \\mathcal TV(t_1)V(t_2)\\cdots V(t_n)."
},
{
"math_id": 26,
"text": "\\begin{align}\nU(t,t_0)&=\\sum_{n=0}^\\infty U_n(t,t_0)\\\\\n&=\\sum_{n=0}^\\infty \\frac{(-i\\hbar^{-1})^n}{n!}\\int_{t_0}^t dt_1\\int_{t_0}^t dt_2\\cdots\\int_{t_0}^t dt_n \\, \\mathcal TV(t_1)V(t_2)\\cdots V(t_n) \\\\\n&=\\mathcal T\\exp{-i\\hbar^{-1}\\int_{t_0}^t{d\\tau V(\\tau)}}\n\\end{align}"
},
{
"math_id": 27,
"text": "t"
},
{
"math_id": 28,
"text": "t_0"
},
{
"math_id": 29,
"text": "t>t_0,"
},
{
"math_id": 30,
"text": "|\\Psi(t)\\rangle=\\sum_{n=0}^\\infty {(-i\\hbar^{-1})^n\\over n!}\\underbrace{\\int dt_1 \\cdots dt_n}_{t_{\\rm f}\\,\\ge\\, t_1\\,\\ge\\, \\cdots\\, \\ge\\, t_n\\,\\ge\\, t_{\\rm i}}\\, \\mathcal{T}\\left\\{\\prod_{k=1}^n e^{iH_0 t_k/\\hbar}V(t_{k})e^{-iH_0 t_k/\\hbar}\\right \\}|\\Psi(t_0)\\rangle."
},
{
"math_id": 31,
"text": "t_i=t_0"
},
{
"math_id": 32,
"text": "t_f=t"
},
{
"math_id": 33,
"text": "t_f>t_i"
},
{
"math_id": 34,
"text": "\\begin{align}\n\\langle\\Psi(t_{\\rm i}) & \\mid\\Psi(t_{\\rm f})\\rangle=\\sum_{n=0}^\\infty {(-i\\hbar^{-1})^n\\over n!} \\times \\\\ \n&\\underbrace{\\int dt_1 \\cdots dt_n}_{t_{\\rm f}\\,\\ge\\, t_1\\,\\ge\\, \\cdots\\, \\ge\\, t_n\\,\\ge\\, t_{\\rm i}}\\, \\langle\\Psi(t_i)\\mid e^{-iH_0(t_{\\rm f}-t_1)/\\hbar}V_{\\rm S}(t_1)e^{-iH_0(t_1-t_2)/\\hbar}\\cdots V_{\\rm S}(t_n) e^{-iH_0(t_n-t_{\\rm i})/\\hbar}\\mid\\Psi(t_i)\\rangle\n\\end{align}"
},
{
"math_id": 35,
"text": "\\langle\\Psi_{\\rm out} \\mid S\\mid\\Psi_{\\rm in}\\rangle= \\langle\\Psi_{\\rm out}\\mid\\sum_{n=0}^\\infty {(-i\\hbar^{-1})^n\\over n!} \\underbrace{\\int d^4x_1 \\cdots d^4x_n}_{t_{\\rm out}\\,\\ge\\, t_n\\,\\ge\\, \\cdots\\, \\ge\\, t_1\\,\\ge\\, t_{\\rm in}}\\, \\mathcal{T}\\left\\{ H_{\\rm int}(x_1)H_{\\rm int}(x_2)\\cdots H_{\\rm int}(x_n) \\right\\}\\mid\\Psi_{\\rm in}\\rangle."
}
]
| https://en.wikipedia.org/wiki?curid=1409609 |
1409693 | Dendrogram | Diagram with a treelike structure
A dendrogram is a diagram representing a tree. This diagrammatic representation is frequently used in different contexts:
The name "dendrogram" derives from the two ancient greek words (), meaning "tree", and (), meaning "drawing, mathematical figure".
Clustering example.
For a clustering example, suppose that five taxa (formula_0 to formula_1) have been clustered by UPGMA based on a matrix of genetic distances. The hierarchical clustering dendrogram would show a column of five nodes representing the initial data (here individual taxa), and the remaining nodes represent the clusters to which the data belong, with the arrows representing the distance (dissimilarity). The distance between merged clusters is monotone, increasing with the level of the merger: the height of each node in the plot is proportional to the value of the intergroup dissimilarity between its two daughters (the nodes on the right representing individual observations all plotted at zero height).
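Such a figure can be reproduced with SciPy's hierarchical-clustering routines, where `linkage` with `method='average'` implements UPGMA. In the sketch below the distance matrix is an arbitrary illustration, not real genetic data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

labels = list("abcde")                        # the five taxa
D = np.array([[ 0, 17, 21, 31, 23],           # symmetric matrix of pairwise distances
              [17,  0, 30, 34, 21],
              [21, 30,  0, 28, 39],
              [31, 34, 28,  0, 43],
              [23, 21, 39, 43,  0]], dtype=float)

Z = linkage(squareform(D), method='average')  # UPGMA clustering
dendrogram(Z, labels=labels, orientation='right')
plt.xlabel('distance (dissimilarity)')
plt.show()
```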
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "e"
}
]
| https://en.wikipedia.org/wiki?curid=1409693 |
14096979 | Linear sweep voltammetry | Method of analyzing electrochemical reactions
In analytical chemistry, linear sweep voltammetry is a method of voltammetry where the current at a working electrode is measured while the potential between the working electrode and a reference electrode is swept linearly in time. Oxidation or reduction of species is registered as a peak or trough in the current signal at the potential at which the species begins to be oxidized or reduced.
Experimental method.
The experimental setup for linear sweep voltammetry utilizes a potentiostat and a three-electrode setup to deliver a potential to a solution and monitor its change in current. The three-electrode setup consists of a working electrode, an auxiliary electrode, and a reference electrode. The potentiostat delivers the potentials through the three-electrode setup. A potential, E, is delivered through the working electrode. The slope of the potential vs. time graph is called the "scan rate" and can range from a few mV/s up to 1,000,000 V/s.
The working electrode is one of the electrodes at which the oxidation/reduction reactions occur—the processes that occur at this electrode are the ones being monitored. The auxiliary electrode (or counter electrode) is the one at which a process opposite from the one taking place at the working electrode occurs. The processes at this electrode are not monitored. The equation below gives an example of a reduction occurring at the surface of the working electrode. Es is the reduction potential of A (if the electrolyte and the electrode are in their standard conditions, then this potential is a standard reduction potential). As E approaches Es, the current on the surface increases, and when "E" = "Es", the concentration of A equals that of the oxidized/reduced A at the surface ([A] = [A−]). As the molecules on the surface of the working electrode are oxidized/reduced, they move away from the surface and new molecules come into contact with the surface of the working electrode. The flow of electrons into or out of the electrode causes the current. The current is a direct measure of the rate at which electrons are being exchanged through the electrode-electrolyte interface. When this rate becomes higher than the rate at which the oxidizing or reducing species can diffuse from the bulk of the electrolyte to the surface of the electrode, the current reaches a plateau or exhibits a peak:
formula_0
Reduction of molecule A at the surface of the working electrode.
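In terms of the applied program, the sweep is just a linear ramp of potential against time. A minimal Python sketch of the waveform an instrument would apply (all values are illustrative):

```python
import numpy as np

E_start, E_end = 0.2, -0.4   # volts; sweep toward the reduction of A
scan_rate = 0.05             # V/s, the slope of the E vs. t program

duration = abs(E_end - E_start) / scan_rate   # 12 s for this window
t = np.linspace(0.0, duration, 1201)          # time axis
E = E_start - scan_rate * t                   # linearly swept potential
# the instrument records the current i(t) at each applied E(t)
```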
The auxiliary and reference electrode work in unison to balance out the charge added or removed by the working electrode. The auxiliary electrode balances the working electrode, but in order to know how much potential it has to add or remove it relies on the reference electrode. The reference electrode has a known reduction potential. The auxiliary electrode tries to keep the reference electrode at a certain reduction potential and to do this it has to balance the working electrode.
Characterization.
Linear sweep voltammetry can identify unknown species and determine the concentration of solutions. E1/2 can be used to identify the unknown species while the height of the limiting current can determine the concentration. The sensitivity of current changes vs. voltage can be increased by increasing the scan rate. Higher potentials per second result in more oxidation/reduction of a species at the surface of the working electrode.
Variations.
For reversible reactions cyclic voltammetry can be used to find information about the forward reaction and the reverse reaction. Like linear sweep voltammetry, cyclic voltammetry applies a linear potential over time and at a certain potential the potentiostat will reverse the potential applied and sweep back to the beginning point. Cyclic voltammetry provides information about the oxidation and reduction reactions.
Applications.
While cyclic voltammetry is applicable to most cases where linear sweep voltammetry is used, there are some instances where linear sweep voltammetry is more useful. In cases where the reaction is irreversible, cyclic voltammetry will not give any additional data that linear sweep voltammetry would give us. In one example, linear sweep voltammetry was used to examine direct methane production via a biocathode. Since the production of methane from CO2 is an irreversible reaction, cyclic voltammetry did not present any distinct advantage over linear sweep voltammetry. This group found that the biocathode produced higher current densities than a plain carbon cathode and that methane can be produced from a direct electric current without the need for hydrogen gas.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{A + e- <=> A-},\\ E_s=0.00V"
}
]
| https://en.wikipedia.org/wiki?curid=14096979 |
14097579 | Squarewave voltammetry | Squarewave voltammetry (SWV) is a form of linear potential sweep voltammetry that uses a combined square wave and staircase potential applied to a stationary electrode. It has found numerous applications in various fields, including within medicinal and various sensing communities.
History.
When first reported by Barker in 1957, the working electrode utilized was primarily a dropping mercury electrode (DME). When using a DME, the surface area of the mercury drop is constantly changing throughout the course of the experiment; for this reason, complex mathematical modeling was at times required in order to analyze collected electrochemical data. The squarewave voltammetric technique allowed for the collection of the desired electrochemical data within one mercury drop, meaning that the need for mathematical modeling to account for the changing working electrode surface area was no longer needed. In short, the introduction and development of this technique allowed for the rapid collection of reliable and easily reproducible electrochemical data using DME or SDME working electrodes. With continued improvements from many electrochemists (particularly the Osteryoungs), SWV is now one of the primary voltammetric techniques available on modern potentiostats.
Theory.
In a squarewave voltammetric experiment, the current at a (usually stationary) working electrode is measured while the potential between the working electrode and a reference electrode is pulsed forward and backward at a constant frequency. The potential waveform can be viewed as a superposition of a regular squarewave onto an underlying staircase (see figure above); in this sense, SWV can be considered a modification of staircase voltammetry.
The current is sampled at two times - once at the end of the forward potential pulse and again at the end of the reverse potential pulse (in both cases immediately before the potential direction is reversed). As a result of this current sampling technique, the contribution to the current signal resulting from capacitive (sometimes referred to as non-faradaic or charging) current is minimal. As a result of having current sampling at two different instances per squarewave cycle, two current waveforms are collected - both have diagnostic value, and are therefore preserved. When viewed in isolation, the forward and reverse current waveforms mimic the appearance of a cyclic voltammogram (whether they correspond to the anodic or the cathodic half is dependent upon experimental conditions).
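The waveform and the two-point sampling scheme are easy to express in code. The sketch below builds the staircase-plus-squarewave program and forms the differential current; the step size, pulse amplitude and the stand-in current function are illustrative assumptions, not values from the literature:

```python
import numpy as np

E_start = 0.0    # V, initial potential of the underlying staircase
dE = -0.005      # V, staircase step per squarewave cycle
E_sw = 0.025     # V, squarewave half-amplitude
n_cycles = 100

def current(E):
    # sigmoidal stand-in for the faradaic current of a redox wave near -0.25 V
    return np.tanh(-40.0 * (E + 0.25))

i_fwd, i_rev, E_base = [], [], []
for n in range(n_cycles):
    E_n = E_start + n * dE
    i_fwd.append(current(E_n - E_sw))   # sampled at the end of the forward pulse
    i_rev.append(current(E_n + E_sw))   # sampled at the end of the reverse pulse
    E_base.append(E_n)

di = np.array(i_fwd) - np.array(i_rev)  # differential current waveform
peak = int(np.argmax(np.abs(di)))
print(E_base[peak])                     # ~ -0.25 V: the peak marks the redox process
```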
Despite both the forward and reverse current waveforms having diagnostic worth, it is almost always the case in SWV for the potentiostat software to plot a differential current waveform derived by subtracting the reverse current waveform from the forward current waveform. This differential curve is then plotted against the applied potential. Peaks in the differential current vs. applied potential plot are indicative of redox processes, and the magnitudes of the peaks in this plot are proportional to the concentrations of the various redox active species according to:
formula_0
where Δip is the differential current peak value, A is the surface area of the electrode, C0* is the concentration of the species, D0 is the diffusivity of the species, tp is the pulse width, and ΔΨp is a dimensionless parameter which gauges the peak height in SWV relative to the limiting response in normal pulse voltammetry.
Renewal of diffusion layer.
It is important to note that in squarewave voltammetric analyses, the diffusion layer is not renewed between potential cycles. Thus, it is not possible/accurate to view each cycle in isolation; the conditions present for each cycle are those of a complex diffusion layer which has evolved through all prior potential cycles. The conditions for a particular cycle are also a function of electrode kinetics, along with other electrochemical considerations.
Applications.
Because of the minimal contributions from non-faradaic currents, the use of a differential current plot instead of separate forward and reverse current plots, and significant time evolution between potential reversal and current sampling, high sensitivity screening can be obtained utilizing SWV. For this reason, squarewave voltammetry has been utilized in numerous electrochemical measurements and can be viewed as an improvement on other electroanalytical techniques. For instance, SWV suppresses background currents much more effectively than cyclic voltammetry, so analyte concentrations on the nanomolar scale can be registered utilizing SWV where they could not be with CV.
SWV analysis has been used recently in the development of a voltammetric catechol sensor, in the analysis of a large number of pharmaceuticals, and in the development and construction of a 2,4,6-TNT and 2,4-DNT sensor.
In addition to being utilized in independent analyses, SWV has also been coupled with other analytical techniques, including but not limited to thin-layer chromatography (TLC) and high-pressure liquid chromatography.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta i_p=\\frac{nFAD_0^{1/2}C_0^*}{(\\pi t_p)^{1/2}}\\Delta \\Psi_p"
}
]
| https://en.wikipedia.org/wiki?curid=14097579 |
14098258 | Queue automaton | Computation model, equivalent to Turing machines
A queue machine, queue automaton, or pullup automaton (PUA) is a finite state machine with the ability to store and retrieve data from an infinite-memory queue. It is a model of computation equivalent to a Turing machine, and therefore it can process the same class of formal languages.
Theory.
A queue machine can be defined as a six-tuple
formula_0 where
formula_1 is a finite set of states,
formula_2 is the finite input alphabet,
formula_3 is the finite queue alphabet,
formula_4 is the initial queue symbol,
formula_5 is the start state, and
formula_6 is the transition function.
A machine "configuration" is an ordered pair of its state and queue contents formula_7, where formula_8 denotes the Kleene closure of formula_3. The starting configuration on an input string formula_9 is defined as formula_10, and the transition formula_11 from one configuration to the next is defined as:
formula_12
where formula_13 is a symbol from the queue alphabet, formula_14 is a sequence of queue symbols (formula_15), and formula_16. Note the "first-in-first-out" property of the queue in the relation.
The machine accepts a string formula_17 if after a finite number of transitions the starting configuration evolves to exhaust the string (reaching the null string formula_18), or otherwise stated, if formula_19
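The acceptance criterion above is straightforward to mechanize. The following Python sketch simulates a queue automaton directly from its transition function; the toy machine at the end, which simply drains its queue, is a hypothetical example chosen only to exercise the first-in-first-out mechanics.

```python
from collections import deque

def run_queue_machine(delta, start, input_string, marker="$", max_steps=10_000):
    """Simulate a queue automaton on configurations (state, queue).

    delta maps (state, symbol) -> (new_state, string_to_enqueue); the
    machine accepts when the queue is exhausted (the null string)."""
    state, queue = start, deque(input_string + marker)
    for _ in range(max_steps):
        if not queue:
            return True                      # reached (q, epsilon): accept
        front = queue.popleft()              # dequeue the first-in symbol
        if (state, front) not in delta:
            return False                     # machine is stuck: reject
        state, gamma = delta[(state, front)]
        queue.extend(gamma)                  # enqueue gamma at the back (FIFO)
    return False                             # step budget exhausted (may not halt)

# Toy machine over {a, b}: dequeue every symbol and enqueue nothing, so
# every input drains the queue and is accepted.
delta = {("s", "a"): ("s", ""), ("s", "b"): ("s", ""), ("s", "$"): ("s", "")}
print(run_queue_machine(delta, "s", "abba"))   # True
```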
Turing completeness.
We can prove that a queue machine is equivalent to a Turing machine by showing that a queue machine can simulate a Turing machine and vice versa.
A Turing machine can be simulated by a queue machine that keeps a copy of the Turing machine's tape contents in its queue at all times, with two special markers: one for the Turing machine's head position, and one for the end of the tape. Its transitions simulate those of the Turing machine by running through the whole queue, popping off each symbol and re-enqueuing either the popped symbol or, near the head position, the equivalent of the Turing machine transition's effect.
A queue machine can be simulated by a Turing machine, but more easily by a multi-tape Turing machine, which is known to be equivalent to a normal single-tape machine.
The simulating Turing machine reads input on one tape and stores the queue on the second, with pushes and pops defined by simple transitions to the beginning and end symbols of the tape. A formal proof of this is often an exercise in theoretical computer science courses.
Applications.
Queue machines offer a simple model on which to base computer architectures, programming languages, or algorithms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M = (Q, \\Sigma, \\Gamma, \\$, s, \\delta)"
},
{
"math_id": 1,
"text": "\\,Q"
},
{
"math_id": 2,
"text": "\\,\\Sigma \\subset \\Gamma"
},
{
"math_id": 3,
"text": "\\,\\Gamma"
},
{
"math_id": 4,
"text": "\\,\\$ \\in \\Gamma \\setminus \\Sigma"
},
{
"math_id": 5,
"text": "\\,s \\in Q"
},
{
"math_id": 6,
"text": "\\,\\delta : Q \\times \\Gamma \\rightarrow Q \\times \\Gamma^*"
},
{
"math_id": 7,
"text": "\\,(q,\\gamma)\\in Q\\times\\Gamma^*"
},
{
"math_id": 8,
"text": "\\,\\Gamma^*"
},
{
"math_id": 9,
"text": "\\,x"
},
{
"math_id": 10,
"text": "\\,(s,x\\$)"
},
{
"math_id": 11,
"text": "\\rightarrow_M^1"
},
{
"math_id": 12,
"text": "\\,(p,A\\alpha) \\rightarrow_M^1 (q,\\alpha\\gamma)"
},
{
"math_id": 13,
"text": "A"
},
{
"math_id": 14,
"text": "\\alpha"
},
{
"math_id": 15,
"text": "\\alpha \\in \\Gamma^*"
},
{
"math_id": 16,
"text": "(q, \\gamma) = \\delta(p, A)"
},
{
"math_id": 17,
"text": "\\,x\\in\\Sigma^*"
},
{
"math_id": 18,
"text": "\\,\\epsilon"
},
{
"math_id": 19,
"text": "\\,(s,x\\$)\\rightarrow_M^*(q,\\epsilon)."
}
]
| https://en.wikipedia.org/wiki?curid=14098258 |
14103273 | Kleene's T predicate | Concept in computability theory
In computability theory, the T predicate, first studied by mathematician Stephen Cole Kleene, is a particular set of triples of natural numbers that is used to represent computable functions within formal theories of arithmetic. Informally, the "T" predicate tells whether a particular computer program will halt when run with a particular input, and the corresponding "U" function is used to obtain the results of the computation if the program does halt. As with the smn theorem, the original notation used by Kleene has become standard terminology for the concept.
Definition.
The definition depends on a suitable Gödel numbering that assigns natural numbers to computable functions (given as Turing machines). This numbering must be sufficiently effective that, given an index of a computable function and an input to the function, it is possible to effectively simulate the computation of the function on that input. The formula_0 predicate is obtained by formalizing this simulation.
The ternary relation formula_1 takes three natural numbers as arguments. formula_1 is true if formula_2 encodes a computation history of the computable function with index formula_3 when run with input formula_4, and the program halts as the last step of this computation history. That is, formula_1 first asks whether formula_2 is the Gödel number of a finite sequence formula_6 of complete configurations of the Turing machine with index formula_3, running a computation on input formula_4; it then asks whether that sequence begins with the starting configuration of the computation and whether each successive element corresponds to a single step of the machine; and it finally asks whether the sequence ends with the machine in a halting state.
If all three of these questions have a positive answer, then formula_1 is true; otherwise, it is false.
The formula_5 predicate is primitive recursive in the sense that there is a primitive recursive function that, given inputs for the predicate, correctly determines the truth value of the predicate on those inputs.
There is a corresponding primitive recursive function formula_7 such that if formula_1 is true then formula_8 returns the output of the function with index formula_3 on input formula_4.
Because Kleene's formalism attaches a number of inputs to each function, the predicate formula_5 can only be used for functions that take one input. There are additional predicates for functions with multiple inputs; the relation
formula_9
is true if formula_2 encodes a halting computation of the function with index formula_3 on the inputs formula_10.
Like formula_5, all functions formula_11 are primitive recursive.
Because of this, any theory of arithmetic that is able to represent every primitive recursive function is able to represent formula_0 and formula_7. Examples of such arithmetical theories include Robinson arithmetic and stronger theories such as Peano arithmetic.
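To make the definitions concrete, the following Python sketch implements analogues of formula_0 and formula_7 for a toy counter-machine model; the instruction set, and the use of Python lists and tuples in place of Gödel numbers, are illustrative simplifications rather than Kleene's actual formalism.

```python
def step(prog, conf):
    """One step of a toy counter machine; conf = (program counter, counter)."""
    pc, ctr = conf
    op = prog[pc]
    if op == "inc":
        return (pc + 1, ctr + 1)
    if op == "dec":
        return (pc + 1, max(ctr - 1, 0))
    return conf                       # "halt": no further change

def T(prog, i, history):
    """Toy analogue of Kleene's T: is `history` a halting computation
    history of `prog` on input i?  (Tuples stand in for Goedel numbers.)"""
    if not history or history[0] != (0, i):
        return False                  # must begin with the starting configuration
    for a, b in zip(history, history[1:]):
        if prog[a[0]] == "halt" or step(prog, a) != b:
            return False              # each element must follow by one step
    return prog[history[-1][0]] == "halt"   # must end in a halting state

def U(history):
    """Toy analogue of U: extract the output from a halting history."""
    return history[-1][1]

prog = ["inc", "inc", "halt"]         # computes f(i) = i + 2
hist = [(0, 3), (1, 4), (2, 5)]       # its computation history on input 3
print(T(prog, 3, hist), U(hist))      # True 5
```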
Normal form theorem.
The formula_11 predicates can be used to obtain Kleene's normal form theorem for computable functions (Soare 1987, p. 15; Kleene 1943, pp. 52–53). This states that there exists a fixed primitive recursive function formula_7 such that a function formula_12 is computable if and only if there is a number formula_3 such that for all formula_13 one has
formula_14,
where "μ" is the "μ" operator (formula_15 is the smallest natural number for which formula_16 is true) and formula_17 is true if both sides are undefined or if both are defined and they are equal. By the theorem, the definition of every general recursive function "f" can be rewritten into a normal form such that the "μ" operator is used only once, viz. immediately below the topmost formula_7, which is independent of the computable function formula_18.
Arithmetical hierarchy.
In addition to encoding computability, the "T" predicate can be used to generate complete sets in the arithmetical hierarchy. In particular, the set
formula_19
which is of the same Turing degree as the halting problem, is a formula_20 complete unary relation (Soare 1987, pp. 28, 41). More generally, the set
formula_21
is a formula_20-complete ("n"+1)-ary predicate. Thus, once a representation of the "T""n" predicate is obtained in a theory of arithmetic, a representation of a formula_20-complete predicate can be obtained from it.
This construction can be extended higher in the arithmetical hierarchy, as in Post's theorem (compare Hinman 2005, p. 397). For example, if a set formula_22 is formula_23 complete then the set
formula_24
is formula_25 complete. | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "T_1(e,i,x)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "e"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "T_1"
},
{
"math_id": 6,
"text": "\\langle x_{j} \\rangle"
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "U(x)"
},
{
"math_id": 9,
"text": "T_k(e, i_1, \\ldots, i_k, x)"
},
{
"math_id": 10,
"text": "i_1,\\ldots,i_k"
},
{
"math_id": 11,
"text": "T_k"
},
{
"math_id": 12,
"text": "f:\\mathbb{N}^{k}\\rightarrow\\mathbb{N}"
},
{
"math_id": 13,
"text": "n_1,\\ldots,n_k"
},
{
"math_id": 14,
"text": "f(n_1,\\ldots,n_k) \\simeq U( \\mu x\\, T_k(e,n_1,\\ldots,n_k,x))"
},
{
"math_id": 15,
"text": "\\mu x\\, \\phi(x)"
},
{
"math_id": 16,
"text": "\\phi(x)"
},
{
"math_id": 17,
"text": "\\simeq"
},
{
"math_id": 18,
"text": "f"
},
{
"math_id": 19,
"text": " K = \\{ e \\mbox{ } : \\mbox{ } \\exists x T_1(e,0,x) \\}"
},
{
"math_id": 20,
"text": "\\Sigma^0_1"
},
{
"math_id": 21,
"text": "K_{n+1} = \\{ \\langle e, a_1, \\ldots, a_n\\rangle : \\exists x T_n(e, a_1, \\ldots, a_n, x)\\}"
},
{
"math_id": 22,
"text": "A \\subseteq \\mathbb{N}^{k+1}"
},
{
"math_id": 23,
"text": "\\Sigma^0_{n}"
},
{
"math_id": 24,
"text": "\\{ \\langle a_1, \\ldots, a_k\\rangle : \\forall x ( \\langle a_1, \\ldots, a_k, x) \\in A)\\}"
},
{
"math_id": 25,
"text": "\\Pi^0_{n+1}"
}
]
| https://en.wikipedia.org/wiki?curid=14103273 |
14103660 | Read-only Turing machine | A read-only Turing machine or two-way deterministic finite-state automaton (2DFA) is class of models of computability that behave like a standard Turing machine and can move in both directions across input, except cannot write to its input tape. The machine in its bare form is equivalent to a deterministic finite automaton in computational power, and therefore can only parse a regular language.
Theory.
We define a standard Turing machine by the 9-tuple
formula_0 where
formula_1 is a finite set of states,
formula_2 is the finite input alphabet,
formula_3 is the finite tape alphabet,
formula_4 is the left endmarker,
formula_5 is the blank symbol,
formula_6 is the transition function,
formula_7 is the start state,
formula_8 is the accept state, and
formula_9 is the reject state.
So given initial state formula_10 reading symbol formula_11, we have a transition defined by formula_12 which replaces formula_11 with formula_13, transitions to state formula_14, and moves the "read head" in direction formula_15 (left or right) to read the next input. In our 2DFA read-only machine, however, formula_16 always.
This model is now equivalent to a DFA. The proof involves building a table which lists the result of backtracking with the control in any given state; at the start of the computation, this is simply the result of trying to move past the left endmarker in that state. On each rightward move, the table can be updated using the old table values and the character that was in the previous cell. Since the original head-control had some fixed number of states, and there is a fixed number of states in the tape alphabet, the table has fixed size, and can therefore be computed by another finite state machine. This machine, however, will never need to backtrack, and hence is a DFA.
Variants.
Several variants of this model are also equivalent to DFAs. In particular, the nondeterministic case (in which the transition from one state can be to multiple states given the same input) is reducible to a DFA.
Other variants of this model allow more computational complexity. With a single infinite stack the model can parse (at least) any language that is computable by a Turing machine in linear time. In particular, the language {a^n b^n c^n} can be parsed by an algorithm which verifies first that there are the same number of a's and b's, then rewinds and verifies that there are the same number of b's and c's (as sketched below). With the further aid of nondeterminism the machine can parse any context-free language. With two infinite stacks the machine is Turing equivalent and can parse any recursive formal language.
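The following Python sketch mimics that two-pass, single-stack procedure with a read-only tape and explicit endmarkers; it is an informal rendering of the machine just described rather than a formal 2DFA-plus-stack construction (the preliminary shape check a*b*c* stands in for a simple finite-control scan).

```python
import re

def accepts_anbncn(s):
    """Two-way read-only head plus one stack, recognizing a^n b^n c^n.

    Pass 1: push each 'a', pop one per 'b'  -> checks #a == #b.
    Rewind; pass 2: push each 'b', pop one per 'c' -> checks #b == #c."""
    if not re.fullmatch(r"a*b*c*", s):       # input must have shape a*b*c*
        return False
    tape = "\u22a2" + s + "\u22a3"           # left/right endmarkers
    stack, head = [], 1
    while tape[head] == "a":                 # pass 1, moving right
        stack.append("a"); head += 1
    while tape[head] == "b":
        if not stack:
            return False
        stack.pop(); head += 1
    if stack:
        return False                         # more a's than b's
    while tape[head] != "\u22a2":            # rewind to the left endmarker
        head -= 1
    head += 1
    while tape[head] == "a":                 # skip the a's this time
        head += 1
    while tape[head] == "b":                 # pass 2
        stack.append("b"); head += 1
    while tape[head] == "c":
        if not stack:
            return False
        stack.pop(); head += 1
    return tape[head] == "\u22a3" and not stack

print([w for w in ["", "abc", "aabbcc", "aabbc", "abbc"] if accepts_anbncn(w)])
```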
If the machine is allowed to have multiple tape heads, it can parse any language in L or NL, according to whether nondeterminism is allowed.
Applications.
A read-only Turing machine is used in the definition of a Universal Turing machine to accept the definition of the Turing machine that is to be modelled, after which computation continues with a standard Turing machine.
In modern research, the model has become important in describing complexity classes of quantum finite automata and deterministic probabilistic automata.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M = (Q, \\Sigma, \\Gamma, \\vdash, \\_, \\delta, s, t, r)"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\Sigma"
},
{
"math_id": 3,
"text": "\\Gamma"
},
{
"math_id": 4,
"text": "\\vdash \\in \\Gamma - \\Sigma"
},
{
"math_id": 5,
"text": "\\_ \\in \\Gamma - \\Sigma"
},
{
"math_id": 6,
"text": "\\delta : Q \\times \\Gamma \\rightarrow Q \\times \\Gamma \\times \\{L,R\\}"
},
{
"math_id": 7,
"text": "s \\in Q"
},
{
"math_id": 8,
"text": "t \\in Q"
},
{
"math_id": 9,
"text": "r \\in Q, ~ r \\ne t"
},
{
"math_id": 10,
"text": "q"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "\\delta(q,a)=(q_2,a_2,d)"
},
{
"math_id": 13,
"text": "a_2"
},
{
"math_id": 14,
"text": "q_2"
},
{
"math_id": 15,
"text": "d"
},
{
"math_id": 16,
"text": "a=a_2"
}
]
| https://en.wikipedia.org/wiki?curid=14103660 |
1410431 | Bethe lattice | In statistical mechanics and mathematics, the Bethe lattice (also called a regular tree) is an infinite connected cycle-free graph where all vertices have the same number of neighbors. The Bethe lattice was introduced into the physics literature by Hans Bethe in 1935. In such a graph, each node is connected to "z" neighbors; the number "z" is called either the coordination number or the degree, depending on the field.
Due to its distinctive topological structure, the statistical mechanics of lattice models on this graph are often easier to solve than on other lattices. The solutions are related to the often used Bethe ansatz for these systems.
Basic properties.
When working with the Bethe lattice, it is often convenient to mark a given vertex as the root, to be used as a reference point when considering local properties of the graph.
Sizes of layers.
Once a vertex is marked as the root, we can group the other vertices into layers based on their distance from the root. The number of vertices at a distance formula_0 from the root is formula_1, as each vertex other than the root is adjacent to formula_2 vertices at a distance one greater from the root, and the root is adjacent to formula_3 vertices at a distance 1.
In statistical mechanics.
The Bethe lattice is of interest in statistical mechanics mainly because lattice models on the Bethe lattice are often easier to solve than on other lattices, such as the two-dimensional square lattice. This is because the lack of cycles removes some of the more complicated interactions. While the Bethe lattice does not as closely approximate the interactions in physical materials as other lattices, it can still provide useful insight.
Exact solutions to the Ising model.
The Ising model is a mathematical model of ferromagnetism, in which the magnetic properties of a material are represented by a "spin" at each node in the lattice, which is either +1 or -1. The model is also equipped with a constant formula_4 representing the strength of the interaction between adjacent nodes, and a constant formula_5 representing an external magnetic field.
The Ising model on the Bethe lattice is defined by the partition function
formula_6
Magnetization.
In order to compute the local magnetization, we can break the lattice up into several identical parts by removing a vertex. This gives us a recurrence relation which allows us to compute the magnetization of a Cayley tree with "n" shells (the finite analog to the Bethe lattice) as
formula_7
where formula_8 and the values of formula_9 satisfy the recurrence relation
formula_10
In the formula_11 case when the system is ferromagnetic, the above sequence converges, so we may take the limit to evaluate the magnetization on the Bethe lattice. We get
formula_12 where "x" is a solution to formula_13.
There are either 1 or 3 solutions to this equation. In the case where there are 3, the sequence formula_14 will converge to the smallest when formula_15 and the largest when formula_16.
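The recurrence converges quickly in practice. A minimal Python sketch, assuming a coordination number q and iterating the recurrence from x_0 = 1 until it stabilizes, then evaluating the limiting magnetization:

```python
import math

def magnetization(K, h, q, max_iter=100_000, tol=1e-12):
    """Iterate x_n = (e^{-K+h} + e^{K-h} x^{q-1}) / (e^{K+h} + e^{-K-h} x^{q-1})
    from x_0 = 1 and return M = (e^{2h} - x^q) / (e^{2h} + x^q)."""
    x = 1.0
    for _ in range(max_iter):
        x_new = (math.exp(-K + h) + math.exp(K - h) * x ** (q - 1)) / \
                (math.exp(K + h) + math.exp(-K - h) * x ** (q - 1))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return (math.exp(2 * h) - x ** q) / (math.exp(2 * h) + x ** q)

# Coordination number q = 3, small positive field: M grows with the coupling K.
for K in (0.2, 0.8, 1.4):
    print(K, magnetization(K, h=0.01, q=3))
```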
Free energy.
The free energy "f" at each site of the lattice in the Ising Model is given by
formula_17,
where formula_18 and formula_19 is as before.
In mathematics.
Return probability of a random walk.
The probability that a random walk on a Bethe lattice of degree formula_3 starting at a given vertex eventually returns to that vertex is given by formula_20. To show this, let formula_21 be the probability of returning to our starting point if we are a distance formula_22 away. We have the recurrence relation
formula_23
for all formula_24, as at each location other than the starting vertex there are formula_2 edges going away from the starting vertex and 1 edge going towards it. Summing this equation over all formula_24, we get
formula_25.
We have formula_26, as this indicates that we have just returned to the starting vertex, so formula_27, which is the value we want.
Note that this is in stark contrast to the case of random walks on the two-dimensional square lattice, which famously has a return probability of 1. Such a lattice is 4-regular, but the 4-regular Bethe lattice has a return probability of 1/3.
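Because a walk on the tree is sensitive only to its distance from the starting vertex, the return probability can be estimated by simulating the corresponding birth-death chain on that distance. A Monte Carlo sketch in Python (the cap on walk length means the estimate is very slightly low):

```python
import random

def return_probability(z, trials=100_000, max_steps=10_000):
    """Estimate the return probability on the degree-z Bethe lattice via the
    distance chain: from distance k > 0 the walk moves inward with
    probability 1/z and outward with probability (z-1)/z."""
    returns = 0
    for _ in range(trials):
        dist = 1                  # the first step always moves away from the root
        for _ in range(max_steps):
            dist += -1 if random.random() < 1 / z else 1
            if dist == 0:
                returns += 1
                break
    return returns / trials

print(return_probability(4), 1 / (4 - 1))   # estimate vs the exact value 1/3
```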
Number of closed walks.
One can easily bound the number of closed walks of length formula_28 starting at a given vertex of the Bethe Lattice with degree formula_3 from below. By considering each step as either an outward step (away from the starting vertex) or an inward step (toward the starting vertex), we see that any closed walk of length formula_28 must have exactly formula_22 outward steps and formula_22 inward steps. We also may not have taken more inward steps than outward steps at any point, so the number of sequences of step directions (either inward or outward) is given by the formula_22th Catalan number formula_29. There are at least formula_2 choices for each outward step, and always exactly 1 choice for each inward step, so the number of closed walks is at least formula_30.
This bound is not tight, as there are actually formula_3 choices for an outward step from the starting vertex, which happens at the beginning and any number of times during the walk. The exact number of walks is trickier to compute, and is given by the formula
formula_31
where formula_32 is the Gauss hypergeometric function.
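The count can be checked by dynamic programming on the distance from the starting vertex, which also confirms the Catalan lower bound above. A minimal Python sketch:

```python
from math import comb

def closed_walks(z, length):
    """Count closed walks of the given (even) length from a fixed vertex of
    the degree-z Bethe lattice, tracking only the distance from the start."""
    counts = {0: 1}                           # walks ending at each distance
    for _ in range(length):
        nxt = {}
        for dist, ways in counts.items():
            out = z if dist == 0 else z - 1   # edges leading away from the start
            nxt[dist + 1] = nxt.get(dist + 1, 0) + ways * out
            if dist > 0:                      # exactly one edge leads back
                nxt[dist - 1] = nxt.get(dist - 1, 0) + ways
        counts = nxt
    return counts.get(0, 0)

def catalan(k):
    return comb(2 * k, k) // (k + 1)

z, k = 3, 5
exact = closed_walks(z, 2 * k)
bound = (z - 1) ** k * catalan(k)
print(exact, bound, exact >= bound)           # DP count vs (z-1)^k C_k
```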
We may use this fact to bound the second largest eigenvalue of a formula_33-regular graph. Let formula_34 be a formula_33-regular graph with formula_35 vertices, and let formula_36 be its adjacency matrix. Then formula_37 is the number of closed walks of length formula_28. The number of closed walks on formula_34 is at least formula_35 times the number of closed walks on the Bethe lattice with degree formula_33 starting at a particular vertex, as we can map the walks on the Bethe lattice to the walks on formula_34 that start at a given vertex and only go back on paths that were already trodden. There are often more walks on formula_34, as we can make use of cycles to create additional walks. The largest eigenvalue of formula_36 is formula_33, and letting formula_38 be the second largest absolute value of an eigenvalue, we have
formula_39
This gives formula_40. Noting that formula_41 as formula_22 grows, we can let formula_35 grow much faster than formula_22 to see that there are only finitely many formula_33-regular graphs formula_34 for which the second largest absolute value of an eigenvalue is at most formula_42, for any formula_43 This is a rather interesting result in the study of (n,d,λ)-graphs.
Relation to Cayley graphs and Cayley trees.
A Bethe graph of even coordination number 2"n" is isomorphic to the unoriented Cayley graph of a free group of rank "n" with respect to a free generating set.
Lattices in Lie groups.
Bethe lattices also occur as the discrete subgroups of certain hyperbolic Lie groups, such as the Fuchsian groups. As such, they are also lattices in the sense of a lattice in a Lie group.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d>0"
},
{
"math_id": 1,
"text": "z(z-1)^{d-1}"
},
{
"math_id": 2,
"text": "z-1"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "Z=\\sum_{\\{\\sigma\\}}\\exp\\left(K\\sum_{(i,j)}\\sigma_i\\sigma_j + h\\sum_i \\sigma_i\\right)."
},
{
"math_id": 7,
"text": "M=\\frac{e^h-e^{-h}x_n^q}{e^h+e^{-h}x_n^q},"
},
{
"math_id": 8,
"text": "x_0=1"
},
{
"math_id": 9,
"text": "x_i"
},
{
"math_id": 10,
"text": "x_n=\\frac{e^{-K+h}+e^{K-h}x_{n-1}^{q-1}}{e^{K+h}+e^{-K-h}x_{n-1}^{q-1}}"
},
{
"math_id": 11,
"text": "K>0"
},
{
"math_id": 12,
"text": "M=\\frac{e^{2h}-x^q}{e^{2h}+x_q},"
},
{
"math_id": 13,
"text": "x=\\frac{e^{-K+h}+e^{K-h}x^{q-1}}{e^{K+h}+e^{-K-h}x^{q-1}}"
},
{
"math_id": 14,
"text": "x_n"
},
{
"math_id": 15,
"text": "h>0"
},
{
"math_id": 16,
"text": "h<0"
},
{
"math_id": 17,
"text": "\\frac{f}{kT}=\\frac12[-Kq-q\\ln(1-z^2)+\\ln(z^2+1-z(x+1/x))+(q-2)\\ln(x+1/x-2z)]"
},
{
"math_id": 18,
"text": "z=\\exp(-2K)"
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "\\frac{1}{z-1}"
},
{
"math_id": 21,
"text": "P(k)"
},
{
"math_id": 22,
"text": "k"
},
{
"math_id": 23,
"text": "P(k)=\\frac1zP(k-1)+\\frac{z-1}zP(k+1)"
},
{
"math_id": 24,
"text": "k>1"
},
{
"math_id": 25,
"text": "\\sum_{k=1}^{\\infty}P(k)=\\frac1zP(0)+\\frac1zP(1)+\\sum_{k=2}^{\\infty}P(k)"
},
{
"math_id": 26,
"text": "P(0)=1"
},
{
"math_id": 27,
"text": "P(1)=1/(z-1)"
},
{
"math_id": 28,
"text": "2k"
},
{
"math_id": 29,
"text": "C_k"
},
{
"math_id": 30,
"text": "(z-1)^kC_k"
},
{
"math_id": 31,
"text": "(z-1)^kC_k\\cdot \\frac{z-1}{z}\\ _2F_1(k+1/2,1,k+2,4(z-1)/z^2),"
},
{
"math_id": 32,
"text": "_2F_1(\\alpha,\\beta,\\gamma,z)"
},
{
"math_id": 33,
"text": "d"
},
{
"math_id": 34,
"text": "G"
},
{
"math_id": 35,
"text": "n"
},
{
"math_id": 36,
"text": "A"
},
{
"math_id": 37,
"text": "\\text{tr }A^{2k}"
},
{
"math_id": 38,
"text": "\\lambda_2"
},
{
"math_id": 39,
"text": "n(d-1)^kC_k\\le\\text{tr} A^{2k}\\le d^{2k}+(n-1)\\lambda_2^{2k}."
},
{
"math_id": 40,
"text": "\\lambda_2^{2k}\\ge\\frac{1}{n-1}(n(d-1)^kC_k-d^{2k})"
},
{
"math_id": 41,
"text": "C_k=(4-o(1))^k"
},
{
"math_id": 42,
"text": "\\lambda"
},
{
"math_id": 43,
"text": "\\lambda < 2\\sqrt{d-1}."
}
]
| https://en.wikipedia.org/wiki?curid=1410431 |
14104402 | Beurling–Lax theorem | In mathematics, the Beurling–Lax theorem is a theorem due to and which characterizes the shift-invariant subspaces of the Hardy space formula_0. It states that each such space is of the form
formula_1
for some inner function formula_2. | [
{
"math_id": 0,
"text": "H^2(\\mathbb{D},\\mathbb{C})"
},
{
"math_id": 1,
"text": " \\theta H^2(\\mathbb{D},\\mathbb{C}), "
},
{
"math_id": 2,
"text": "\\theta"
}
]
| https://en.wikipedia.org/wiki?curid=14104402 |
1410537 | Return on equity | Measure of the profitability of a business
The return on equity (ROE) is a measure of the profitability of a business in relation to its equity;
where:
ROE = Net income / Average shareholders' equity
Thus, ROE is equal to a fiscal year's net income (after preferred stock dividends, before common stock dividends), divided by total equity (excluding preferred shares), expressed as a percentage.
Because shareholder's equity can be calculated by taking all assets and subtracting all liabilities, ROE can also be thought of as a return on NAV, or "assets less liabilities".
Usage.
ROE measures how many dollars of profit are generated for each dollar of shareholder's equity, and is thus a metric of how well the company utilizes its equity to generate profits.
ROE is especially used for comparing the performance of companies in the same industry. As with return on capital, ROE is a measure of management's ability to generate income from the equity available to it. ROEs of 15–20% are generally considered good.
ROE is also a factor in stock valuation, in association with other financial ratios. Note though that, while higher ROE ought intuitively to imply higher stock prices, in reality, predicting the stock value of a company based on its ROE is dependent on too many other factors to be of use by itself.
Both of these are expanded below.
The DuPont formula.
The DuPont formula,
also known as the strategic profit model,
is a framework allowing management to decompose ROE into three "actionable" components;
these "drivers of value" being the efficiency of operations, asset usage, and finance.
ROE is then
the net profit margin multiplied by asset turnover multiplied by accounting leverage:
formula_0
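As a numerical illustration, the following Python sketch computes ROE from the three DuPont drivers; the figures are hypothetical.

```python
def dupont_roe(net_income, sales, total_assets, shareholder_equity):
    """Decompose ROE into profit margin x asset turnover x accounting leverage."""
    margin = net_income / sales
    turnover = sales / total_assets
    leverage = total_assets / shareholder_equity
    return margin * turnover * leverage, (margin, turnover, leverage)

# Made-up figures (in millions): the product of the three drivers equals
# net income / equity, i.e. the plain ROE.
roe, drivers = dupont_roe(net_income=120, sales=1500,
                          total_assets=1000, shareholder_equity=600)
print(f"ROE = {roe:.1%}, drivers = {drivers}")   # ROE = 20.0%
```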
The application, in the main, is either to financial management or to fund management.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{ROE} = \\frac{\\mbox{Net income}}{\\mbox{Sales}}\\times\\frac{\\mbox{Sales}}{\\mbox{Total Assets}}\\times\\frac{\\mbox{Total Assets}}{\\mbox{Shareholder Equity}}"
}
]
| https://en.wikipedia.org/wiki?curid=1410537 |
1410576 | Schwarzschild geodesics | Paths of particles in the Schwarzschild solution to Einstein's field equations
In general relativity, Schwarzschild geodesics describe the motion of test particles in the gravitational field of a central fixed mass formula_0 that is, motion in the Schwarzschild metric. Schwarzschild geodesics have been pivotal in the validation of Einstein's theory of general relativity. For example, they provide accurate predictions of the anomalous precession of the planets in the Solar System and of the deflection of light by gravity.
Schwarzschild geodesics pertain only to the motion of particles of masses so small they contribute little to the gravitational field. However, they are highly accurate in many astrophysical scenarios provided that the orbiting mass formula_1 is much smaller than the central mass formula_2, e.g., for planets orbiting their star. Schwarzschild geodesics are also a good approximation to the relative motion of two bodies of arbitrary mass, provided that the Schwarzschild mass formula_2 is set equal to the sum of the two individual masses formula_3 and formula_4. This is important in predicting the motion of binary stars in general relativity.
Historical context.
The Schwarzschild metric is named in honour of its discoverer Karl Schwarzschild, who found the solution in 1915, only about a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution.
In 1931, Yusuke Hagihara published a paper showing that the trajectory of a test particle in the Schwarzschild metric can be expressed in terms of elliptic functions.
In 1949, Samuil Kaplan showed that there is a minimum radius for a circular orbit to be stable in the Schwarzschild metric.
Schwarzschild metric.
An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of an uncharged, non-rotating, spherically symmetric body of mass formula_2. The Schwarzschild solution can be written as
formula_5
where
formula_6, in the case of a test particle of small positive mass, is the proper time (time measured by a clock moving with the particle) in seconds,
formula_7 is the speed of light in meters per second,
formula_8 is, for formula_9, the time coordinate (time measured by a stationary clock at infinity) in seconds,
formula_10 is, for formula_9, the radial coordinate (circumference of a circle centered at the star divided by formula_11) in meters,
formula_12 is the colatitude (angle from North) in radians,
formula_13 is the longitude in radians, and
formula_14 is the Schwarzschild radius of the massive body (in meters), which is related to its mass formula_2 by
formula_15
where formula_16 is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio formula_17 goes to zero. In that limit, the metric returns to that defined by special relativity.
In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius formula_18 of the Earth is roughly 9 mm (3⁄8 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio formula_17 is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes.
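These figures are easy to reproduce. The following Python sketch computes formula_18 and the surface ratio formula_17 for the Earth and the Sun, using approximate masses and radii:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(M):
    return 2 * G * M / c**2

bodies = {                      # approximate mass (kg) and surface radius (m)
    "Earth": (5.972e24, 6.371e6),
    "Sun":   (1.989e30, 6.957e8),
}
for name, (M, R) in bodies.items():
    rs = schwarzschild_radius(M)
    print(f"{name}: r_s = {rs:.3g} m, r_s/R = {rs / R:.2e}")
# Earth: r_s ~ 9 mm, ratio ~ 1.4e-9; Sun: r_s ~ 2953 m, ratio ~ 4.2e-6.
```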
Orbits of test particles.
We may simplify the problem by using symmetry to eliminate one variable from consideration. Since the Schwarzschild metric is symmetrical about formula_19, any geodesic that begins moving in that plane will remain in that plane indefinitely (the plane is totally geodesic). Therefore, we orient the coordinate system so that the orbit of the particle lies in that plane, and fix the formula_20 coordinate to be formula_21 so that the metric (of this plane) simplifies to
formula_22
Two constants of motion (values that do not change over proper time formula_6) can be identified (cf. the derivation given below). One is the total energy formula_23:
formula_24
and the other is the specific angular momentum:
formula_25
where formula_26 is the total angular momentum of the two bodies, and formula_27 is the reduced mass. When formula_28, the reduced mass is approximately equal to formula_1. Sometimes it is assumed that formula_29. In the case of the planet Mercury this simplification introduces an error more than twice as large as the relativistic effect. When discussing geodesics, formula_1 can be considered fictitious, and what matters are the constants formula_30 and formula_31. In order to cover all possible geodesics, we need to consider cases in which formula_30 is infinite (giving trajectories of photons) or imaginary (for tachyonic geodesics). For the photonic case, we also need to specify a number corresponding to the ratio of the two constants, namely formula_32, which may be zero or a non-zero real number.
Substituting these constants into the definition of the Schwarzschild metric
formula_33
yields an equation of motion for the radius as a function of the proper time formula_34:
formula_35
The formal solution to this is
formula_36
Note that the square root will be imaginary for tachyonic geodesics.
Using the relation between formula_37 and formula_23 given above, we can also write
formula_38
Since asymptotically the integrand is inversely proportional to formula_39, this shows that in the formula_40 frame of reference, if formula_41 approaches formula_18, it does so exponentially without ever reaching it. However, as a function of formula_34, formula_41 does reach formula_18.
The above solutions are valid while the integrand is finite, but a total solution may involve two or an infinity of pieces, each described by the integral but with alternating signs for the square root.
When formula_42 and formula_43, we can solve for formula_44 and formula_34 explicitly:
formula_45
and for photonic geodesics (formula_46) with zero angular momentum
formula_47
(Although the proper time formula_34 is trivial in the photonic case, one can define an affine parameter formula_48; the solution to the geodesic equation is then formula_49.)
Another solvable case is that in which formula_50 and formula_44 and formula_51 are constant. In the volume where formula_52 this gives for the proper time
formula_53
This is close to solutions with formula_54 small and positive. Outside of formula_18 the formula_50 solution is tachyonic and the "proper time" is space-like:
formula_55
This is close to other tachyonic solutions with formula_54 small and negative. The constant formula_44 tachyonic geodesic outside formula_18 is not continued by a constant formula_44 geodesic inside formula_18, but rather continues into a "parallel exterior region" (see Kruskal–Szekeres coordinates). Other tachyonic solutions can enter a black hole and re-exit into the parallel exterior region. The constant formula_44 solution inside the event horizon (formula_18) is continued by a constant formula_44 solution in a white hole.
When the angular momentum is not zero we can replace the dependence on proper time by a dependence on the angle formula_51 using the definition of formula_31
formula_56
which yields the equation for the orbit
formula_57
where, for brevity, two length-scales, formula_58 and formula_59, have been defined by
formula_60
Note that in the tachyonic case, formula_58 will be imaginary and formula_59 real or infinite.
The same equation can also be derived using a Lagrangian approach or the Hamilton–Jacobi equation (see below). The solution of the orbit equation is
formula_61
This can be expressed in terms of the Weierstrass elliptic function formula_62.
Local and delayed velocities.
Unlike in classical mechanics, in Schwarzschild coordinates formula_63 and formula_64 are not the radial formula_65 and transverse formula_66 components of the local velocity formula_67 (relative to a stationary observer); instead, they give the components of the celerity, which are related to formula_67 by
formula_68
for the radial and
formula_69
for the transverse component of motion, with formula_70. The coordinate bookkeeper far away from the scene observes the shapiro-delayed velocity formula_71, which is given by the relation
formula_72 and formula_73.
The time dilation factor between the bookkeeper and the moving test-particle can also be put into the form
formula_74
where the numerator is the gravitational, and the denominator is the kinematic component of the time dilation. For a particle falling in from infinity the left factor equals the right factor, since the in-falling velocity formula_67 matches the escape velocity formula_75 in this case.
The two constants of motion, the angular momentum formula_26 and the total energy formula_23 of a test-particle with mass formula_1, are, in terms of formula_67,
formula_76
and
formula_77
where
formula_78
and
formula_79
For massive test particles formula_80 is the Lorentz factor formula_81 and formula_34 is the proper time, while for massless particles like photons formula_80 is set to formula_82 and formula_34 takes the role of an affine parameter. If the particle is massless, formula_83 is replaced with formula_84 and formula_85 with formula_86, where formula_31 is the Planck constant and formula_87 the locally observed frequency.
Exact solution using elliptic functions.
The fundamental equation of the orbit is easier to solve if it is expressed in terms of the inverse radius formula_89
formula_90
The right-hand side of this equation is a cubic polynomial, which has three roots, denoted here as formula_91, formula_92, and formula_93
formula_94
The sum of the three roots equals the coefficient of the formula_95 term
formula_96
A cubic polynomial with real coefficients can either have three real roots, or one real root and two complex conjugate roots. If all three roots are real numbers, the roots are labeled so that formula_97. If instead there is only one real root, then that is denoted as formula_93; the complex conjugate roots are labeled formula_91 and formula_92. Using Descartes' rule of signs, there can be at most one negative root; formula_91 is negative if and only if formula_98. As discussed below, the roots are useful in determining the types of possible orbits.
Given this labeling of the roots, the solution of the fundamental orbital equation is
formula_99
where formula_100 represents the sn function (one of the Jacobi elliptic functions) and formula_101 is a constant of integration reflecting the initial position. The elliptic modulus formula_102 of this elliptic function is given by the formula
formula_103
Newtonian limit.
To recover the Newtonian solution for the planetary orbits, one takes the limit as the Schwarzschild radius formula_18 goes to zero. In this case, the third root formula_93 becomes roughly formula_104, and is much larger than formula_91 or formula_92. Therefore, the modulus formula_102 tends to zero; in that limit, formula_100 becomes the trigonometric sine function
formula_105
Consistent with Newton's solutions for planetary motions, this formula describes a focal conic of eccentricity formula_106
formula_107
If formula_91 is a positive real number, then the orbit is an ellipse where formula_91 and formula_92 represent the distances of furthest and closest approach, respectively. If formula_91 is zero or a negative real number, the orbit is a parabola or a hyperbola, respectively. In these latter two cases, formula_92 represents the distance of closest approach; since the orbit goes to infinity (formula_108), there is no distance of furthest approach.
Roots and overview of possible orbits.
A root represents a point of the orbit where the derivative vanishes, i.e., where formula_109. At such a turning point, formula_88 reaches a maximum, a minimum, or an inflection point, depending on the value of the second derivative, which is given by the formula
formula_110
If all three roots are distinct real numbers, the second derivative is positive, negative, and positive at "u"1, "u"2, and "u"3, respectively. It follows that a graph of "u" versus φ may either oscillate between "u"1 and "u"2, or it may move away from "u"3 towards infinity (which corresponds to "r" going to zero). If "u"1 is negative, only part of an "oscillation" will actually occur. This corresponds to the particle coming from infinity, getting near the central mass, and then moving away again toward infinity, like the hyperbolic trajectory in the classical solution.
If the particle has just the right amount of energy for its angular momentum, "u"2 and "u"3 will merge. There are three solutions in this case. The orbit may spiral in to formula_111, approaching that radius as (asymptotically) a decreasing exponential in φ, formula_34, or formula_44. Or one can have a circular orbit at that radius. Or one can have an orbit that spirals down from that radius to the central point. The radius in question is called the inner radius and is between formula_112 and 3 times "rs". A circular orbit also results when formula_92 is equal to formula_91, and this is called the outer radius. These different types of orbits are discussed below.
If the particle comes at the central mass with sufficient energy and sufficiently low angular momentum then only formula_91 will be real. This corresponds to the particle falling into a black hole. The orbit spirals in with a finite change in φ.
Precession of orbits.
The function sn and its square sn² have periods of 4K and 2K, respectively, where K is defined by the equation
formula_113
Therefore, the change in φ over one oscillation of formula_88 (or, equivalently, one oscillation of formula_41) equals
formula_114
In the classical limit, "u"3 approaches formula_104 and is much larger than formula_91 or formula_92. Hence, formula_115 is approximately
formula_116
For the same reasons, the denominator of Δφ is approximately
formula_117
Since the modulus formula_102 is close to zero, the period "K" can be expanded in powers of formula_102; to lowest order, this expansion yields
formula_118
Substituting these approximations into the formula for Δφ yields a formula for angular advance per radial oscillation
formula_119
For an elliptical orbit, formula_91 and formula_92 represent the inverses of the longest and shortest distances, respectively. These can be expressed in terms of the ellipse's semi-major axis formula_120 and its orbital eccentricity formula_106,
formula_121
giving
formula_122
Substituting the definition of formula_18 gives the final equation
formula_123
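Applied to Mercury, this final equation reproduces the famous anomalous precession of roughly 43 arcseconds per century. A Python sketch (the orbital elements are approximate published values, and the precession per revolution is written as 3π r_s / (A(1 − e²)), consistent with the equation above):

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30                    # kg
A, e = 5.791e10, 0.2056             # Mercury: semi-major axis (m), eccentricity
T_days = 87.969                     # Mercury's orbital period

r_s = 2 * G * M_sun / c**2
dphi = 3 * math.pi * r_s / (A * (1 - e**2))       # radians per revolution
orbits_per_century = 36525 / T_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcseconds per century")      # ~43.0
```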
Bending of light by gravity.
In the limit as the particle mass "m" goes to zero (or, equivalently if the light is heading directly toward the central mass, as the length-scale "a" goes to infinity), the equation for the orbit becomes
formula_124
Expanding in powers of formula_17, the leading-order term in this formula gives the approximate angular deflection δφ for a massless particle coming in from infinity and going back out to infinity:
formula_125
Here, formula_59 is the impact parameter, somewhat greater than the distance of closest approach, formula_126:
formula_127
Although this formula is approximate, it is accurate for most measurements of gravitational lensing, due to the smallness of the ratio formula_17. For light grazing the surface of the Sun, the approximate angular deflection is roughly 1.75 arcseconds, about one millionth of a circle.
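The grazing-ray figure quoted above follows directly from the leading-order deflection δφ ≈ 2r_s/b with the impact parameter set to the solar radius. A Python sketch:

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8    # kg, m (approximate)

r_s = 2 * G * M_sun / c**2
dphi = 2 * r_s / R_sun              # deflection of a ray grazing the Sun
print(dphi * (180 / math.pi) * 3600)   # ~1.75 arcseconds
```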
More generally, the geodesics of a photon emitted from a light source located at a radial coordinate formula_128 can be calculated as follows, by applying the equation
formula_129
The equation can be derived as
formula_130
which leads to
formula_131
This second-order equation can be numerically integrated by a 4th-order Runge-Kutta method, considering a step size formula_132 and with:
formula_133,
formula_134,
formula_135 and
formula_136.
The value at the next step formula_137 is
formula_138
and the value at the next step formula_139 is
formula_140
The step formula_132 can be chosen to be constant or adaptive, depending on the accuracy required on formula_141.
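The following Python sketch carries out the scheme just described, writing the second-order photon equation in the standard form u″(φ) = (3/2)r_s u² − u with u = 1/r (consistent with the bending-of-light analysis above); the variable names differ from those in the text, and starting the integration at the turning point (u′ = 0) is a convenient choice rather than the article's setup. Integrating out to u → 0 and doubling the swept angle recovers the 1.75-arcsecond solar deflection.

```python
import math

def deflection_angle(u0, r_s=1.0, h=1e-4):
    """Classical 4th-order Runge-Kutta on u'' = 1.5*r_s*u**2 - u, starting
    at the turning point (u = u0 = 1/r_min, u' = 0) and following the ray
    out to u -> 0; by symmetry the total bending is 2*phi_exit - pi."""
    u, up, phi = u0, 0.0, 0.0

    def f(u, up):               # state (u, u') -> derivatives (u', u'')
        return up, 1.5 * r_s * u * u - u

    while u > 0:
        u_prev = u
        k1u, k1p = f(u, up)
        k2u, k2p = f(u + 0.5 * h * k1u, up + 0.5 * h * k1p)
        k3u, k3p = f(u + 0.5 * h * k2u, up + 0.5 * h * k2p)
        k4u, k4p = f(u + h * k3u, up + h * k3p)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        up += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        phi += h
    phi += h * u / (u_prev - u)        # interpolate back to the u = 0 crossing
    return 2 * phi - math.pi

# Light grazing the Sun: r_min ~ R_sun ~ 2.36e5 Schwarzschild radii.
print(deflection_angle(u0=1 / 2.36e5) * 206265)    # ~1.75 arcseconds
```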
Relation to Newtonian physics.
Effective radial potential energy.
The equation of motion for the particle derived above
formula_142
can be rewritten using the definition of the Schwarzschild radius "r"s as
formula_143
which is equivalent to a particle moving in a one-dimensional effective potential
formula_144
The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution
formula_145
where formula_120 is the semi-major axis and formula_106 is the eccentricity.
The third term is attractive and dominates at small formula_41 values, giving a critical inner radius "r"inner at which a particle is drawn inexorably inwards to formula_146; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the formula_58 length-scale defined above.
Circular orbits and their stability.
The effective potential formula_147 can be re-written in terms of the length formula_148.
formula_149
Circular orbits are possible when the effective force is zero
formula_150
i.e., when the two attractive forces — Newtonian gravity (first term) and the attraction unique to general relativity (third term) — are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as "r"inner and "r"outer
formula_151
which are obtained using the quadratic formula. The inner radius "r"inner is unstable, because the attractive third force strengthens much faster than the other two forces when "r" becomes small; if the particle slips slightly inwards from "r"inner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to "r" = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem.
When formula_58 is much greater than formula_18 (the classical case), these formulae become approximately
formula_152
Substituting the definitions of formula_58 and "r"s into "r"outer yields the classical formula for a particle of mass formula_1 orbiting a body of mass formula_2.
formula_153
where "ω"φ is the orbital angular speed of the particle. This formula is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force:
formula_154
where formula_27 is the reduced mass.
In our notation, the classical orbital angular speed equals
formula_155
At the other extreme, when "a"² approaches 3"r"s² from above, the two radii converge to a single value
formula_156
The quadratic solutions above ensure that "r"outer is always greater than 3"r"s, whereas "r"inner lies between 3⁄2 "r"s and 3"r"s. Circular orbits smaller than 3⁄2 "r"s are not possible. For massless particles, "a" goes to infinity, implying that there is a circular orbit for photons at "r"inner = 3⁄2 "r"s. The sphere of this radius is sometimes known as the photon sphere.
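For concreteness, the following Python sketch evaluates the two circular-orbit radii; the explicit root expressions used here, r = (a²/r_s)(1 ± √(1 − 3r_s²/a²)), are reconstructed from the quadratic balance condition described above and reproduce the limits just quoted.

```python
import math

def circular_orbit_radii(a, r_s=1.0):
    """Inner (unstable) and outer (stable) circular-orbit radii from the
    force-balance quadratic; requires a**2 >= 3 * r_s**2."""
    disc = math.sqrt(1.0 - 3.0 * r_s**2 / a**2)
    return (a**2 / r_s) * (1.0 - disc), (a**2 / r_s) * (1.0 + disc)

for a in (10.0, 3.0, math.sqrt(3.0)):      # a in units of r_s
    r_in, r_out = circular_orbit_radii(a)
    print(f"a = {a:.3f}: r_inner = {r_in:.3f} r_s, r_outer = {r_out:.3f} r_s")
# For large a, r_inner -> 3/2 r_s and r_outer -> 2*a**2/r_s (the classical
# orbit); as a**2 -> 3 r_s**2 the two radii merge at 3 r_s.
```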
Precession of elliptical orbits.
The orbital precession rate may be derived using this radial effective potential "V". A small radial deviation from a circular orbit of radius "r"outer will oscillate stably with an angular frequency
formula_157
which equals
formula_158
Taking the square root of both sides and performing a Taylor series expansion yields
formula_159
Multiplying by the period "T" of one revolution gives the precession of the orbit per revolution
formula_160
where we have used ωφT = 2π and the definition of the length-scale "a". Substituting the definition of the Schwarzschild radius "r"s gives
formula_161
This may be simplified using the elliptical orbit's semiaxis "A" and eccentricity "e" related by the formula
formula_162
to give the precession angle
formula_163
Mathematical derivations of the orbital equation.
Christoffel symbols.
The non-vanishing Christoffel symbols for the Schwarzschild-metric are:
formula_164
Geodesic equation.
According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In flat space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is
formula_165
where Γ represents the Christoffel symbol and the variable formula_166 parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor formula_167, or rather on how it changes with position. The variable formula_166 is a constant multiple of the proper time formula_34 for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable formula_166. Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass "m" goes to zero while holding its total energy fixed.
Therefore, to solve for the motion of a particle, the most straightforward way is to solve the geodesic equation, an approach adopted by Einstein and others. The Schwarzschild metric may be written as
formula_168
where the function formula_169 and its reciprocal formula_170 are defined for brevity. From this metric, the Christoffel symbols formula_171 may be calculated, and the results substituted into the geodesic equations
formula_172
It may be verified that formula_19 is a valid solution by substitution into the first of these four equations. By symmetry, the orbit must be planar, and we are free to arrange the coordinate frame so that the equatorial plane is the plane of the orbit. This formula_20 solution simplifies the second and fourth equations.
To solve the second and third equations, it suffices to divide them by formula_173 and formula_174, respectively.
formula_175
which yields two constants of motion.
Lagrangian approach.
Because test particles follow geodesics in a fixed metric, the orbits of those particles may be determined using the calculus of variations, also called the Lagrangian approach. Geodesics in space-time are defined as curves for which small local variations in their coordinates (while holding their endpoint events fixed) make no change, to first order, in their overall length "s". This may be expressed mathematically using the calculus of variations
formula_176
where "τ" is the proper time, "s" = "cτ" is the arc-length in space-time and "T" is defined as
formula_177
in analogy with kinetic energy. If the derivative with respect to proper time is represented by a dot for brevity
formula_178
"T" may be written as
formula_179
Constant factors (such as "c" or the square root of two) don't affect the answer to the variational problem; therefore, taking the variation inside the integral yields Hamilton's principle
formula_180
The solution of the variational problem is given by Lagrange's equations
formula_181
When applied to "t" and "φ", these equations reveal two constants of motion
formula_182
which may be expressed in terms of two constant length-scales, formula_58 and formula_59
formula_183
As shown above, substitution of these equations into the definition of the Schwarzschild metric yields the equation for the orbit.
Hamiltonian approach.
A Lagrangian solution can be recast into an equivalent Hamiltonian form. In this case, the Hamiltonian formula_184 is given by
formula_185
Once again, the orbit may be restricted to formula_19 by symmetry. Since formula_44 and formula_51 do not appear in the Hamiltonian, their conjugate momenta are constant; they may be expressed in terms of the speed of light formula_186 and two constant length-scales formula_58 and formula_59
formula_187
The derivatives with respect to proper time are given by
formula_188
Dividing the first equation by the second yields the orbital equation
formula_189
The radial momentum "p"r can be expressed in terms of "r" using the constancy of the Hamiltonian formula_190; this yields the fundamental orbital equation
formula_57
Hamilton–Jacobi approach.
The orbital equation can be derived from the Hamilton–Jacobi equation. The advantage of this approach is that it equates the motion of the particle with the propagation of a wave, and leads neatly into the derivation of the deflection of light by gravity in general relativity, through Fermat's principle. The basic idea is that, due to gravitational slowing of time, parts of a wave-front closer to a gravitating mass move more slowly than those further away, thus bending the direction of the wave-front's propagation.
Using general covariance, the Hamilton–Jacobi equation for a single particle of unit mass can be expressed in arbitrary coordinates as
formula_191
This is equivalent to the Hamiltonian formulation above, with the partial derivatives of the action taking the place of the generalized momenta. Using the Schwarzschild metric "g"μν, this equation becomes
formula_192
where we again orient the spherical coordinate system with the plane of the orbit. The time "t" and azimuthal angle "φ" are cyclic coordinates, so that the solution for Hamilton's principal function "S" can be written
formula_193
where formula_194 and formula_195 are the constant generalized momenta. The Hamilton–Jacobi equation gives an integral solution for the radial part formula_196
formula_197
Taking the derivative of Hamilton's principal function "S" with respect to the conserved momentum "p"φ yields
formula_198
which equals
formula_199
Taking an infinitesimal variation in φ and "r" yields the fundamental orbital equation
formula_200
where the conserved length-scales "a" and "b" are defined by the conserved momenta by the equations
formula_201
Hamilton's principle.
The action integral for a particle affected only by gravity is
formula_202
where formula_34 is the proper time and formula_166 is any smooth parameterization of the particle's world line. If one applies the calculus of variations to this, one again gets the equations for a geodesic. To simplify the calculations, one first takes the variation of the square of the integrand. For the metric and coordinates of this case and assuming that the particle is moving in the equatorial plane formula_19, that square is
formula_203
Taking variation of this gives
formula_204
Motion in longitude.
Vary with respect to longitude formula_51 only to get
formula_205
Divide by formula_206 to get the variation of the integrand itself
formula_207
Thus
formula_208
Integrating by parts gives
formula_209
The variation of the longitude is assumed to be zero at the end points, so the first term disappears. The integral can be made nonzero by a perverse choice of formula_210 unless the other factor inside is zero everywhere. So the equation of motion is
formula_211
Motion in time.
Vary with respect to time formula_44 only to get
formula_212
Divide by formula_206 to get the variation of the integrand itself
formula_213
Thus
formula_214
Integrating by parts gives
formula_215
So the equation of motion is
formula_216
Conserved momenta.
Integrate these equations of motion to determine the constants of integration getting
formula_217
These two equations for the constants of motion formula_26 (angular momentum) and formula_23 (energy) can be combined to form one equation that is true even for photons and other massless particles for which the proper time along a geodesic is zero.
formula_218
Radial motion.
Substituting
formula_219
and
formula_220
into the metric equation (and using formula_19) gives
formula_221
from which one can derive
formula_222
which is the equation of motion for formula_41. The dependence of formula_41 on formula_51 can be found by dividing this by
formula_223
to get
formula_224
which is true even for particles without mass. If length scales are defined by
formula_225
and
formula_226
then the dependence of formula_41 on formula_51 simplifies to
formula_227 | [
{
"math_id": 0,
"text": "M,"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "m_1"
},
{
"math_id": 4,
"text": "m_2"
},
{
"math_id": 5,
"text": "\nds^2=c^2 {d \\tau}^{2} = \n\\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} dt^{2} - \\frac{dr^{2}}{1 - \\frac{r_\\text{s}}{r}} - r^{2} (d\\theta^{2} + \\sin^{2} \\theta \\, d\\varphi^{2})\n"
},
{
"math_id": 6,
"text": "\\tau"
},
{
"math_id": 7,
"text": "c"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "r > r_\\text{s}"
},
{
"math_id": 10,
"text": "r"
},
{
"math_id": 11,
"text": "2\\pi"
},
{
"math_id": 12,
"text": "\\theta"
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": "r_\\text{s}"
},
{
"math_id": 15,
"text": "\nr_\\text{s} = \\frac{2GM}{c^{2}},\n"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "\\frac{r_\\text{s}}{r}"
},
{
"math_id": 18,
"text": "r_\\text{s}"
},
{
"math_id": 19,
"text": "\\theta = \\frac{\\pi}{2}"
},
{
"math_id": 20,
"text": "\\theta"
},
{
"math_id": 21,
"text": "\\frac{\\pi}{2}"
},
{
"math_id": 22,
"text": "\nc^2 d \\tau^{2} = \n\\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} dt^{2} - \\frac{dr^{2}}{1 - \\frac{r_\\text{s}}{r}} - r^{2} d\\varphi^{2}.\n"
},
{
"math_id": 23,
"text": "E"
},
{
"math_id": 24,
"text": "\n\\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} = \\frac{E}{m c^{2}}.\n"
},
{
"math_id": 25,
"text": "\nh = \\frac{L}{\\mu} = r^{2} \\frac{d\\varphi}{d\\tau},\n"
},
{
"math_id": 26,
"text": "L"
},
{
"math_id": 27,
"text": "\\mu"
},
{
"math_id": 28,
"text": "M \\gg m"
},
{
"math_id": 29,
"text": "m = \\mu"
},
{
"math_id": 30,
"text": "\\frac{E}{m}"
},
{
"math_id": 31,
"text": "h"
},
{
"math_id": 32,
"text": "\\frac{mh}{E}"
},
{
"math_id": 33,
"text": "\nc^{2} = \\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} \\left( \\frac{dt}{d\\tau} \\right)^{2} - \n\\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\left( \\frac{dr}{d\\tau} \\right)^{2} - \nr^{2} \\left( \\frac{d\\varphi}{d\\tau} \\right)^{2},\n"
},
{
"math_id": 34,
"text": "\\tau"
},
{
"math_id": 35,
"text": "\n\\left( \\frac{dr}{d\\tau} \\right)^{2} = \\frac{E^{2}}{m^{2}c^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( c^{2} + \\frac{h^{2}}{r^{2}} \\right).\n"
},
{
"math_id": 36,
"text": "\n\\tau = \\int \\frac{dr}{\\pm\\sqrt{\\frac{E^{2}}{m^{2}c^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( c^{2} + \\frac{h^{2}}{r^{2}} \\right)}}.\n"
},
{
"math_id": 37,
"text": "\\frac{dt}{d\\tau}"
},
{
"math_id": 38,
"text": "\nt = \\int \\frac{dr}{\\pm c\\left( 1 - \\frac{r_\\text{s}}{r} \\right)\\sqrt{1 - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( c^{2} + \\frac{h^{2}}{r^{2}}\\right)\\frac{m^2 c^2}{E^2}}}.\n"
},
{
"math_id": 39,
"text": "r - r_\\text{s}"
},
{
"math_id": 40,
"text": "r, \\theta, \\varphi, t"
},
{
"math_id": 41,
"text": "r"
},
{
"math_id": 42,
"text": "E = mc^2"
},
{
"math_id": 43,
"text": "h = 0"
},
{
"math_id": 44,
"text": "t"
},
{
"math_id": 45,
"text": "\\begin{align}\n t &= \\text{constant} \\pm \\frac{r_\\text{s}}c\\left(\\frac{2}{3}\\left(\\frac r{r_\\text{s}}\\right)^\\frac{3}{2} + 2\\sqrt{\\frac r{r_\\text{s}}} + \\ln\\frac{\\left|\\sqrt{\\frac{r}{r_\\text{s}}} - 1\\right|}{\\sqrt{\\frac{r}{r_\\text{s}}} + 1}\\right) \\\\\n \\tau &= \\text{constant}\\pm\\frac{2}{3}\\frac{r_\\text{s}}c\\left(\\frac r{r_\\text{s}}\\right)^\\frac{3}{2}\n\\end{align}"
},
{
"math_id": 46,
"text": "m = 0"
},
{
"math_id": 47,
"text": "\\begin{align}\n t &= \\text{constant} \\pm \\frac{1}{c}\\left(r + r_\\text{s}\\ln\\left|\\frac{r}{r_\\text{s}} - 1\\right|\\right) \\\\\n \\tau &= \\text{constant}.\n\\end{align}"
},
{
"math_id": 48,
"text": "\\lambda"
},
{
"math_id": 49,
"text": "r = c_1\\lambda + c_2"
},
{
"math_id": 50,
"text": "E = 0"
},
{
"math_id": 51,
"text": "\\varphi"
},
{
"math_id": 52,
"text": "r < r_\\text{s}"
},
{
"math_id": 53,
"text": "\\tau=\\text{constant}\\pm\\frac{r_\\text{s}}c\\left(\\arcsin\\sqrt{\\frac r{r_\\text{s}}}-\\sqrt{\\frac r{r_\\text{s}}\\left(1-\\frac r{r_\\text{s}}\\right)}\\right)."
},
{
"math_id": 54,
"text": "\\frac{E^2}{m^2}"
},
{
"math_id": 55,
"text": "\\tau=\\text{constant}\\pm i\\frac{r_\\text{s}}c\\left(\\ln\\left(\\sqrt{\\frac r{r_\\text{s}}}+\\sqrt{\\frac r{r_\\text{s}}-1}\\right)+\\sqrt{\\frac r{r_\\text{s}}\\left(\\frac r{r_\\text{s}}-1\\right)}\\right)."
},
{
"math_id": 56,
"text": "\n\\left( \\frac{dr}{d\\varphi} \\right)^{2} = \n\\left( \\frac{dr}{d\\tau} \\right)^{2} \\left( \\frac{d\\tau}{d\\varphi} \\right)^{2} =\n\\left( \\frac{dr}{d\\tau} \\right)^{2} \\left( \\frac{ r^{2}}{h} \\right)^{2},\n"
},
{
"math_id": 57,
"text": "\n\\left( \\frac{dr}{d\\varphi} \\right)^{2} = \\frac{r^{4}}{b^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( \\frac{r^{4}}{a^{2}} + r^{2} \\right)\n"
},
{
"math_id": 58,
"text": "a"
},
{
"math_id": 59,
"text": "b"
},
{
"math_id": 60,
"text": "\\begin{align}\n a &= \\frac{h}{c}, \\\\\n b &= \\frac{cL}{E} = \\frac{hmc}E.\n\\end{align}"
},
{
"math_id": 61,
"text": "\n\\varphi = \\int \\frac{dr}{\\pm r^{2} \\sqrt{\\frac{1}{b^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( \\frac{1}{a^{2}} + \\frac{1}{r^{2}} \\right)}}.\n"
},
{
"math_id": 62,
"text": "\\wp"
},
{
"math_id": 63,
"text": "\\frac{{\\rm d}r}{{\\rm d}\\tau}"
},
{
"math_id": 64,
"text": "r\\ \\frac{{\\rm d}\\varphi}{{\\rm d}\\tau}"
},
{
"math_id": 65,
"text": "v_{\\parallel}"
},
{
"math_id": 66,
"text": "v_{\\perp}"
},
{
"math_id": 67,
"text": "v"
},
{
"math_id": 68,
"text": "\\frac{{\\rm d}r}{{\\rm d}\\tau} = v_\\parallel \\sqrt{1 - \\frac{r_\\text{s}}{r}}\\ \\gamma"
},
{
"math_id": 69,
"text": "\\frac{{\\rm d}\\varphi}{{\\rm d}\\tau} = \\frac{v_{\\perp}}{r} \\ \\gamma"
},
{
"math_id": 70,
"text": "v^2 = v_{\\parallel}^2 + v_{\\perp}^2"
},
{
"math_id": 71,
"text": "\\hat{v}"
},
{
"math_id": 72,
"text": "{\\hat v}_{\\perp} = v_{\\perp}\\sqrt{1 - \\frac{r_\\text{s}}{r}}"
},
{
"math_id": 73,
"text": "{\\hat v}_{\\parallel} = v_{\\parallel}\\left(1 - \\frac{r_\\text{s}}{r}\\right)"
},
{
"math_id": 74,
"text": "\\frac{{\\rm d}\\tau}{{\\rm d}t} = \\frac{\\sqrt{1 - \\frac{r_\\text{s}}{r}}}{\\gamma}"
},
{
"math_id": 75,
"text": "c \\sqrt{\\frac{r_\\text{s}}{r}}"
},
{
"math_id": 76,
"text": "L = m\\ v_{\\perp}\\ r\\ \\gamma"
},
{
"math_id": 77,
"text": "E = m c^2 \\ \\sqrt{1 - \\frac{r_\\text{s}}{r}}\\ \\gamma"
},
{
"math_id": 78,
"text": "E = E_{\\rm rest} + E_{\\rm kin} + E_{\\rm pot}"
},
{
"math_id": 79,
"text": "E_{\\rm rest} = m c^2\\ ,\\ \\ E_{\\rm kin} = (\\gamma - 1)mc^2\\ ,\\ \\ E_{\\rm pot} = \\left(\\sqrt{1 - \\frac{r_\\text{s}}{r}} - 1\\right)\\ \\gamma\\ m c^2"
},
{
"math_id": 80,
"text": "\\gamma"
},
{
"math_id": 81,
"text": "\\gamma = 1/\\sqrt{1 - v^2/c^2}"
},
{
"math_id": 82,
"text": "1"
},
{
"math_id": 83,
"text": "E_{\\rm rest}"
},
{
"math_id": 84,
"text": "E_{\\rm kin}"
},
{
"math_id": 85,
"text": "m c^2"
},
{
"math_id": 86,
"text": "h f"
},
{
"math_id": 87,
"text": "f"
},
{
"math_id": 88,
"text": "u"
},
{
"math_id": 89,
"text": "u = \\frac{1}{r}"
},
{
"math_id": 90,
"text": "\n\\left( \\frac{du}{d\\varphi} \\right)^{2} = \\frac{1}{b^{2}} - \\left( 1 - u r_\\text{s} \\right) \\left( \\frac{1}{a^{2}} + u^{2} \\right)\n"
},
{
"math_id": 91,
"text": "u_1"
},
{
"math_id": 92,
"text": "u_2"
},
{
"math_id": 93,
"text": "u_3"
},
{
"math_id": 94,
"text": "\n\\left( \\frac{du}{d\\varphi} \\right)^{2} = r_\\text{s} \\left( u - u_{1} \\right) \\left( u - u_{2} \\right) \\left( u - u_{3} \\right) \n"
},
{
"math_id": 95,
"text": "u^2"
},
{
"math_id": 96,
"text": "\nu_{1} + u_{2} + u_{3} = \\frac{1}{r_\\text{s}}\n"
},
{
"math_id": 97,
"text": "u_1 < u_2 < u_3"
},
{
"math_id": 98,
"text": "b < a"
},
{
"math_id": 99,
"text": "\nu = u_{1} + \\left( u_{2} - u_{1} \\right) \\, \\mathrm{sn}^{2}\\left( \\frac{1}{2} \\varphi \\sqrt{r_\\text{s} \\left( u_{3} - u_{1} \\right)} + \\delta \\right)\n"
},
{
"math_id": 100,
"text": "\\mathrm{sn}"
},
{
"math_id": 101,
"text": "\\delta"
},
{
"math_id": 102,
"text": "k"
},
{
"math_id": 103,
"text": "\nk = \\sqrt{\\frac{u_{2} - u_{1}}{u_{3} - u_{1}}}\n"
},
{
"math_id": 104,
"text": "\\frac{1}{r_\\text{s}}"
},
{
"math_id": 105,
"text": "\nu = u_{1} + \\left( u_{2} - u_{1} \\right) \\, \\sin^{2}\\left( \\frac{1}{2} \\varphi + \\delta \\right)\n"
},
{
"math_id": 106,
"text": "e"
},
{
"math_id": 107,
"text": "\ne = \\frac{u_{2} - u_{1}}{u_{2} + u_{1}}\n"
},
{
"math_id": 108,
"text": "u = 0"
},
{
"math_id": 109,
"text": "\\frac{du}{d\\phi} = 0"
},
{
"math_id": 110,
"text": "\n\\frac{d^{2}u}{d\\varphi^{2}} = \\frac{r_\\text{s}}{2} \\left[ \\left( u - u_{2} \\right) \\left( u - u_{3} \\right) + \\left( u - u_{1} \\right) \\left( u - u_{3} \\right) + \\left( u - u_{1} \\right) \\left( u - u_{2} \\right) \\right]\n"
},
{
"math_id": 111,
"text": "r = \\frac{1}{u_2} = \\frac{1}{u_3}"
},
{
"math_id": 112,
"text": "\\frac{3}{2}"
},
{
"math_id": 113,
"text": "\nK = \\int_0^1 \\frac{dy}{\\sqrt{\\left( 1 - y^2 \\right) \\left( 1 - k^2 y^2 \\right)}}\n"
},
{
"math_id": 114,
"text": "\n\\Delta\\varphi = \\frac{4K}{\\sqrt{r_\\text{s} \\left(u_3 - u_1\\right)}}\n"
},
{
"math_id": 115,
"text": "k^2"
},
{
"math_id": 116,
"text": "\nk^2 = \\frac{u_2 - u_1}{u_3 - u_1} \\approx r_\\text{s} \\left(u_2 - u_1\\right) \\ll 1\n"
},
{
"math_id": 117,
"text": "\n \\frac{1}{\\sqrt{r_\\text{s} \\left(u_3 - u_1\\right)}} =\n \\frac{1}{\\sqrt{1 - r_\\text{s} \\left(2u_1 + u_2\\right)}} \\approx\n 1 + \\frac{1}{2} r_\\text{s} \\left(2u_1 + u_2\\right)\n"
},
{
"math_id": 118,
"text": "\n K \\approx \\int_0^1 \\frac{dy}{\\sqrt{1 - y^2}} \\left( 1 + \\frac{1}{2} k^2 y^2 \\right) =\n \\frac{\\pi}{2} \\left( 1 + \\frac{k^2}{4} \\right)\n"
},
{
"math_id": 119,
"text": "\n\\delta\\varphi = \\Delta\\varphi - 2\\pi \\approx \\frac{3}{2} \\pi r_\\text{s} \\left( u_1 + u_2 \\right)\n"
},
{
"math_id": 120,
"text": "A"
},
{
"math_id": 121,
"text": "\\begin{align}\n r_\\text{max} &= \\frac{1}{u_{1}} = A(1 + e) \\\\\n r_\\text{min} &= \\frac{1}{u_{2}} = A(1 - e) \n\\end{align}"
},
{
"math_id": 122,
"text": "\nu_1 + u_2 = \\frac{2}{A\\left( 1 - e^2 \\right)}\n"
},
{
"math_id": 123,
"text": "\n\\delta\\varphi \\approx \\frac{6\\pi GM}{c^2 A\\left( 1 - e^2 \\right)}\n"
},
{
"math_id": 124,
"text": "\n\\varphi = \\int \\frac{dr}{r^{2} \\sqrt{\\frac{1}{b^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{1}{r^{2}}}}\n"
},
{
"math_id": 125,
"text": "\n\\delta \\varphi \\approx \\frac{2r_\\text{s}}{b} = \\frac{4GM}{c^{2}b}.\n"
},
{
"math_id": 126,
"text": "r_3"
},
{
"math_id": 127,
"text": "b = r_3\\sqrt{\\frac{r_3}{r_3 - r_\\text{s}}}"
},
{
"math_id": 128,
"text": "r={1\\over u}\\in [ r_{s},\\infty["
},
{
"math_id": 129,
"text": "{\\left ( {du \\over d\\varphi} \\right )^2}=r_s\\ u^3-u^2+\\frac{1}{b^2}"
},
{
"math_id": 130,
"text": "2\\ \\frac{du}{d\\varphi}\\frac{d^2u}{d\\varphi^2}=3\\ r_su^2\\frac{du}{d\\varphi}-2\\ u\\frac{du}{d\\varphi} "
},
{
"math_id": 131,
"text": "{d^2u\\over d\\varphi^2}=\\frac{3}{2}\\ r_s\\ u^2-\\ u"
},
{
"math_id": 132,
"text": "\\Delta\\varphi"
},
{
"math_id": 133,
"text": "k_1={d^2u\\over d\\varphi^2}(u)"
},
{
"math_id": 134,
"text": "k_2={d^2u\\over d\\varphi^2}\\bigl(u+\\frac{\\Delta\\varphi}{2}{du\\over d\\varphi}\\bigr)"
},
{
"math_id": 135,
"text": "k_3={d^2u\\over d\\varphi^2}\\Bigl(u+\\frac{\\Delta\\varphi}{2}{du\\over d\\varphi}+\\frac{\\Delta\\varphi^2}{4}k_{1}\\Bigr)"
},
{
"math_id": 136,
"text": "k_4={d^2u\\over d\\varphi^2}\\Bigl(u+\\Delta\\varphi{du\\over d\\varphi}+\\frac{\\Delta\\varphi^2}{2}k_2 \\Bigr)"
},
{
"math_id": 137,
"text": "{du\\over d\\varphi}(\\varphi+\\Delta\\varphi)"
},
{
"math_id": 138,
"text": " {du\\over d\\varphi}(\\varphi)+\\frac{\\Delta\\varphi}{6}(k_1+2k_{2}+2k_{3}+k_{4})"
},
{
"math_id": 139,
"text": "u(\\varphi+ \\Delta\\varphi)"
},
{
"math_id": 140,
"text": " u(\\varphi)+\\Delta\\varphi{du\\over d\\varphi}(\\varphi)+\\frac{\\Delta\\varphi^2}{6}(k_{1}+k_{2}+k_{3})"
},
{
"math_id": 141,
"text": "r = {1 \\over u}"
},
{
"math_id": 142,
"text": "\n \\left( \\frac{dr}{d\\tau} \\right)^{2} = \n \\frac{E^2}{m^2 c^2} - c^{2} + \\frac{ r_\\text{s} c^2}{r} - \n \\frac{L^2}{ m\\mu r^2 } + \\frac{ r_\\text{s} L^2 }{ m \\mu r^3 }\n"
},
{
"math_id": 143,
"text": "\n \\frac{1}{2} m \\left( \\frac{dr}{d\\tau} \\right)^{2} = \n \\left[ \\frac{E^2}{2 m c^2} - \\frac{1}{2} m c^2 \\right] +\n \\frac{GMm}{r} - \\frac{ L^2 }{ 2 \\mu r^2 } + \\frac{ G(M+m) L^2 }{c^2 \\mu r^3},\n"
},
{
"math_id": 144,
"text": "\nV(r) = -\\frac{GMm}{r} + \\frac{ L^2 }{ 2 \\mu r^2 } - \\frac{ G(M + m) L^2 }{ c^2 \\mu r^3 }\n"
},
{
"math_id": 145,
"text": "\n\\delta \\varphi \\approx \\frac{ 6\\pi G(M+m) }{ c^2 A \\left( 1 - e^{2} \\right)}\n"
},
{
"math_id": 146,
"text": "r = 0"
},
{
"math_id": 147,
"text": "V"
},
{
"math_id": 148,
"text": "a = \\frac{h}{c}"
},
{
"math_id": 149,
"text": "\nV(r) = \\frac{ \\mu c^{2}}{2} \\left[ - \\frac{r_\\text{s}}{r} + \\frac{a^{2}}{r^{2}} - \\frac{r_\\text{s} a^{2}}{r^{3}} \\right]\n"
},
{
"math_id": 150,
"text": "\nF = -\\frac{dV}{dr} = -\\frac{ \\mu c^{2}}{2r^{4}} \\left[ r_\\text{s} r^{2} - 2a^{2} r + 3r_\\text{s} a^{2} \\right] = 0\n"
},
{
"math_id": 151,
"text": "\\begin{align}\n r_\\text{outer} &= \\frac{a^{2}}{r_\\text{s}} \\left( 1 + \\sqrt{1 - \\frac{3r_\\text{s}^{2}}{a^{2}}} \\right) \\\\[3pt]\n r_\\text{inner} &= \\frac{a^{2}}{r_\\text{s}} \\left( 1 - \\sqrt{1 - \\frac{3r_\\text{s}^{2}}{a^{2}}} \\right) = \\frac{3a^{2}}{r_\\text{outer}}\n\\end{align}"
},
{
"math_id": 152,
"text": "\\begin{align}\n r_\\text{outer} &\\approx \\frac{2a^{2}}{r_\\text{s}} \\\\[3pt]\n r_\\text{inner} &\\approx \\frac{3}{2} r_\\text{s}\n\\end{align}"
},
{
"math_id": 153,
"text": "\nr_{\\mathrm{outer}}^{3} = \\frac{G(M+m)}{\\omega_{\\varphi}^{2}}\n"
},
{
"math_id": 154,
"text": "\n\\frac{GMm}{r^{2}} = \\mu \\omega_{\\varphi}^{2} r\n"
},
{
"math_id": 155,
"text": "\n\\omega_{\\varphi}^{2} \\approx \\frac{GM}{r_{\\mathrm{outer}}^{3}} = \\left( \\frac{r_\\text{s} c^{2}}{2r_{\\mathrm{outer}}^{3}} \\right) = \\left( \\frac{r_\\text{s} c^{2}}{2} \\right) \\left( \\frac{r_\\text{s}^{3}}{8a^{6}}\\right) = \\frac{c^{2} r_\\text{s}^{4}}{16 a^{6}}\n"
},
{
"math_id": 156,
"text": "\nr_{\\mathrm{outer}} \\approx r_{\\mathrm{inner}} \\approx 3 r_\\text{s}\n"
},
{
"math_id": 157,
"text": "\n\\omega_{r}^{2} = \\frac{1}{m} \\left[ \\frac{d^{2}V}{dr^{2}} \\right]_{r=r_{\\mathrm{outer}}}\n"
},
{
"math_id": 158,
"text": "\n\\omega_{r}^{2} = \\left( \\frac{c^{2} r_\\text{s}}{2 r_{\\mathrm{outer}}^{4}} \\right) \\left( r_{\\mathrm{outer}} - r_{\\mathrm{inner}}\\right) = \n\\omega_{\\varphi}^{2} \\sqrt{1 - \\frac{3r_\\text{s}^{2}}{a^{2}}} \n"
},
{
"math_id": 159,
"text": "\n\\omega_{r} = \\omega_{\\varphi} \\left[ 1 - \\frac{3r_\\text{s}^{2}}{4a^{2}} + \\mathcal{O}\\left( \\frac{r_\\text{s}^{4}}{a^{4}} \\right) \\right]\n"
},
{
"math_id": 160,
"text": "\n \\delta \\varphi = T \\left( \\omega_{\\varphi} - \\omega_{r} \\right) \\approx 2\\pi \\left( \\frac{3r_\\text{s}^{2}}{4a^{2}} \\right) = \n \\frac{3\\pi m^{2} c^{2}}{2L^{2}} r_\\text{s}^{2}\n"
},
{
"math_id": 161,
"text": "\n\\delta \\varphi \\approx \\frac{3\\pi m^{2} c^{2}}{2L^{2}} \\left( \\frac{4G^{2} M^{2}}{c^{4}} \\right) = \\frac{6\\pi G^{2} M^{2} m^{2}}{c^{2} L^{2}}\n"
},
{
"math_id": 162,
"text": "\n\\frac{ h^2 }{ G(M+m) } = A \\left( 1 - e^2 \\right)\n"
},
{
"math_id": 163,
"text": "\n\\delta \\varphi \\approx \\frac{6\\pi G(M+m)}{c^2 A \\left( 1 - e^{2} \\right)}\n"
},
{
"math_id": 164,
"text": "\\begin{align}\n \\Gamma^t_{rt} = -\\Gamma^r_{rr} &= \\frac{r_\\text{s}}{2r(r - r_\\text{s})} \\\\[3pt]\n \\Gamma^r_{tt} &= \\frac{r_\\text{s}(r - r_\\text{s})}{2r^3} \\\\[3pt]\n \\Gamma^r_{\\phi\\phi} &= (r_\\text{s} - r)\\sin^2(\\theta) \\\\[3pt]\n \\Gamma^r_{\\theta\\theta} &= r_\\text{s} - r \\\\[3pt]\n \\Gamma^\\theta_{r\\theta} = \\Gamma^\\phi_{r\\phi} &= \\frac{1}{r} \\\\[3pt]\n \\Gamma^\\theta_{\\phi\\phi} &= -\\sin(\\theta)\\cos(\\theta) \\\\[3pt]\n \\Gamma^\\phi_{\\theta\\phi} &= \\cot(\\theta)\n\\end{align}"
},
{
"math_id": 165,
"text": "\n\\frac{d^2x^{\\lambda}}{d q^2} + \\Gamma^{\\lambda}_{\\mu\\nu} \\frac{dx^{\\mu}}{d q} \\frac{dx^{\\nu}}{dq} = 0\n"
},
{
"math_id": 166,
"text": "q"
},
{
"math_id": 167,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 168,
"text": "\nc^{2}d\\tau^{2} = w(r) c^2 dt^{2} - v(r) dr^{2} - r^{2} d\\theta^{2} - r^{2} \\sin^{2} \\theta d\\phi^{2}\n\\,"
},
{
"math_id": 169,
"text": "w(r) = 1 - \\frac{r_\\text{s}}{r}"
},
{
"math_id": 170,
"text": "v(r)= \\frac{1}{w(r)}"
},
{
"math_id": 171,
"text": "\\Gamma_{\\mu\\nu}^{\\lambda}"
},
{
"math_id": 172,
"text": "\\begin{align}\n 0 &= \\frac{d^{2}\\theta}{dq^{2}} + \\frac{2}{r} \\frac{d\\theta}{dq} \\frac{dr}{dq} - \\sin \\theta \\cos \\theta \\left( \\frac{d\\phi}{dq} \\right)^{2} \\\\[3pt]\n 0 &= \\frac{d^{2}\\phi}{dq^{2}} + \\frac{2}{r} \\frac{d\\phi}{dq} \\frac{dr}{dq} + 2 \\cot \\theta \\frac{d\\phi}{dq} \\frac{d\\theta}{dq} \\\\[3pt]\n 0 &= \\frac{d^{2}t}{dq^{2}} + \\frac{1}{w} \\frac{dw}{dr} \\frac{dt}{dq} \\frac{dr}{dq} \\\\[3pt]\n 0 &= \\frac{d^{2}r}{dq^{2}} - \\frac{1}{v} \\frac{dv}{dr} \\left( \\frac{dr}{dq} \\right)^{2} - \\frac{r}{v} \\left( \\frac{d\\theta}{dq} \\right)^{2} - \\frac{r\\sin^{2}\\theta}{v} \\left( \\frac{d\\phi}{dq} \\right)^{2} + \\frac{c^{2}}{2v} \\frac{dw}{dr} \\left( \\frac{dt}{dq} \\right)^{2}\n\\end{align}"
},
{
"math_id": 173,
"text": "\\frac{d\\phi}{dq}"
},
{
"math_id": 174,
"text": "\\frac{dt}{dq}"
},
{
"math_id": 175,
"text": "\\begin{align}\n 0 &= \\frac{d}{dq} \\left[ \\ln \\frac{d\\phi}{dq} + \\ln r^{2} \\right] \\\\[3pt]\n 0 &= \\frac{d}{dq} \\left[ \\ln \\frac{dt}{dq} + \\ln w \\right],\n\\end{align}"
},
{
"math_id": 176,
"text": "\n 0 = \\delta s = \\delta \\int ds =\n \\delta \\int \\sqrt{g_{\\mu\\nu} \\frac{dx^{\\mu}}{d\\tau} \\frac{dx^{\\nu}}{d\\tau}} d\\tau =\n \\delta \\int \\sqrt{2T} d\\tau\n"
},
{
"math_id": 177,
"text": "\n 2T = c^{2} = \\left( \\frac{ds}{d\\tau} \\right)^{2} =\n g_{\\mu\\nu} \\frac{dx^{\\mu}}{d\\tau} \\frac{dx^{\\nu}}{d\\tau} = \n \\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} \\left( \\frac{dt}{d\\tau} \\right)^{2} - \n \\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\left( \\frac{dr}{d\\tau} \\right)^{2} - \n r^{2} \\left( \\frac{d\\varphi}{d\\tau} \\right)^{2}\n"
},
{
"math_id": 178,
"text": "\n\\dot{x}^{\\mu} = \\frac{dx^{\\mu}}{d\\tau}\n"
},
{
"math_id": 179,
"text": "\n 2T = c^{2} = \n \\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} \\left( \\dot{t} \\right)^{2} - \n \\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\left( \\dot{r} \\right)^{2} - \n r^{2} \\left( \\dot{\\varphi} \\right)^{2}\n"
},
{
"math_id": 180,
"text": "\n 0 = \\delta \\int \\sqrt{2T} d\\tau =\n \\int \\frac{\\delta T}{\\sqrt{2T}} d\\tau =\n \\frac{1}{c} \\delta \\int T d\\tau.\n"
},
{
"math_id": 181,
"text": "\n \\frac{d}{d\\tau} \\left(\\frac{\\partial T}{\\partial \\dot{x}^{\\sigma}} \\right) =\n \\frac{\\partial T}{\\partial x^{\\sigma}}.\n"
},
{
"math_id": 182,
"text": "\\begin{align}\n \\frac{d}{d\\tau} \\left[ r^{2} \\frac{d\\varphi}{d\\tau} \\right] &= 0, \\\\\n \\frac{d}{d\\tau} \\left[ \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\right] &= 0,\n\\end{align}"
},
{
"math_id": 183,
"text": "\\begin{align}\n r^{2} \\frac{d\\varphi}{d\\tau} &= ac, \\\\\n \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} &= \\frac{a}{b}.\n\\end{align}"
},
{
"math_id": 184,
"text": "H"
},
{
"math_id": 185,
"text": "\n2 H = c^{2} = \\frac{p_{t}^{2}}{c^{2} \\left( 1 - \\frac{r_\\text{s}}{r} \\right)} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) p_{r}^{2} - \\frac{p_{\\theta}^{2}}{r^{2}} - \\frac{p_{\\varphi}^{2}}{r^{2}\\sin^{2} \\theta} \n"
},
{
"math_id": 186,
"text": "c"
},
{
"math_id": 187,
"text": "\\begin{align}\n p_{\\varphi} &= -ac \\\\\n p_{\\theta} &= 0 \\\\\n p_{t} &= \\frac{ac^{2}}{b}\n\\end{align}"
},
{
"math_id": 188,
"text": "\\begin{align}\n \\frac{dr}{d\\tau} &= \\frac{\\partial H}{\\partial p_{r}} = - \\left(1 - \\frac{r_\\text{s}}{r} \\right) p_{r} \\\\\n \\frac{d\\varphi}{d\\tau} &= \\frac{\\partial H}{\\partial p_{\\varphi}} = \\frac{-p_{\\varphi}}{r^{2}} = \\frac{ac}{r^{2}} \\\\ \n \\frac{dt}{d\\tau} &= \\frac{\\partial H}{\\partial p_{t}} = \\frac{p_{t}}{c^{2} \\left( 1 - \\frac{r_\\text{s}}{r} \\right)} = \\frac{a}{b \\left( 1 - \\frac{r_\\text{s}}{r} \\right)}\n\\end{align}"
},
{
"math_id": 189,
"text": "\n\\frac{dr}{d\\varphi} = - \\frac{r^{2}}{ac} \\left(1 - \\frac{r_\\text{s}}{r} \\right) p_{r}\n"
},
{
"math_id": 190,
"text": "H = \\frac{c^{2}}{2}"
},
{
"math_id": 191,
"text": "\ng^{\\mu\\nu} \\frac{\\partial S}{\\partial x^{\\mu}} \\frac{\\partial S}{\\partial x^{\\nu}} = c^{2}.\n"
},
{
"math_id": 192,
"text": "\n\\frac{1}{c^{2} \\left(1 - \\frac{r_\\text{s}}{r} \\right)} \\left( \\frac{\\partial S}{\\partial t} \\right)^{2} - \n\\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( \\frac{\\partial S}{\\partial r} \\right)^{2} -\n\\frac{1}{r^{2}} \\left( \\frac{\\partial S}{\\partial \\varphi} \\right)^{2} = c^{2}\n"
},
{
"math_id": 193,
"text": "\nS = -p_{t} t + p_{\\varphi} \\varphi + S_{r}(r) \\,\n"
},
{
"math_id": 194,
"text": "p_t"
},
{
"math_id": 195,
"text": "p_{\\varphi}"
},
{
"math_id": 196,
"text": "S_r(r)"
},
{
"math_id": 197,
"text": "\nS_{r}(r) = \\int^{r} \\frac{dr}{1 - \\frac{r_\\text{s}}{r}} \\sqrt{\\frac{p_{t}^{2}}{c^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( c^{2} + \\frac{p_{\\varphi}^{2}}{r^{2}} \\right)}.\n"
},
{
"math_id": 198,
"text": "\n\\frac{\\partial S}{\\partial p_{\\varphi}} = \\varphi + \\frac{\\partial S_{r}}{\\partial p_{\\varphi}} = \\mathrm{constant}\n"
},
{
"math_id": 199,
"text": "\n\\varphi - \\int^{r} \\frac{p_{\\varphi} dr}{r^{2}\\sqrt{\\frac{p_{t}^{2}}{c^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( c^{2} + \\frac{p_{\\varphi}^{2}}{r^{2}} \\right)}} = \\mathrm{constant}\n"
},
{
"math_id": 200,
"text": "\n\\left( \\frac{dr}{d\\varphi} \\right)^{2} = \\frac{r^{4}}{b^{2}} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( \\frac{r^{4}}{a^{2}} + r^{2} \\right).\n"
},
{
"math_id": 201,
"text": "\\begin{align}\n \\frac{\\partial S}{\\partial \\varphi} = p_{\\varphi} &= -ac \\\\\n \\frac{\\partial S}{\\partial t} = p_{t} &= \\frac{ac^{2}}{b}\n\\end{align}"
},
{
"math_id": 202,
"text": "\nS = \\int{ - m c^2 d\\tau} = - m c \\int{ c \\frac{d\\tau}{dq} dq} = - m c \\int{ \\sqrt{-g_{\\mu\\nu} \\frac{dx^{\\mu}}{dq} \\frac{dx^{\\nu}}{dq} } dq}\n"
},
{
"math_id": 203,
"text": "\n\\left(c \\frac{d\\tau}{dq}\\right)^2 = - g_{\\mu\\nu} \\frac{dx^{\\mu}}{dq} \\frac{dx^{\\nu}}{dq} = \n\\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} \\left( \\frac{dt}{dq} \\right)^{2} - \n\\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\left( \\frac{dr}{dq} \\right)^{2} - \nr^{2} \\left( \\frac{d\\varphi}{dq} \\right)^{2}\n\\,."
},
{
"math_id": 204,
"text": "\n\\delta \\left(c \\frac{d\\tau}{dq}\\right)^2 = 2 c^{2} \\frac{d\\tau}{dq} \\delta \\frac{d\\tau}{dq} = \n\\delta \\left[ \\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} \\left( \\frac{dt}{dq} \\right)^{2} - \n\\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\left( \\frac{dr}{dq} \\right)^{2} - \nr^{2} \\left( \\frac{d\\varphi}{dq} \\right)^{2} \\right]\n\\,."
},
{
"math_id": 205,
"text": "\n2 c^{2} \\frac{d\\tau}{dq} \\delta \\frac{d\\tau}{dq} = \n- 2 r^{2} \\frac{d\\varphi}{dq} \\delta \\frac{d\\varphi}{dq} \n\\,."
},
{
"math_id": 206,
"text": "2 c \\frac{d\\tau}{dq}"
},
{
"math_id": 207,
"text": "\nc \\, \\delta \\frac{d\\tau}{dq} = - \\frac{r^{2}}{c} \\frac{d\\varphi}{d\\tau} \\delta \\frac{d\\varphi}{dq} \n= - \\frac{r^{2}}{c} \\frac{d\\varphi}{d\\tau} \\frac{d \\delta \\varphi}{dq} \n\\,."
},
{
"math_id": 208,
"text": "\n0 = \\delta \\int { c \\frac{d\\tau}{dq} dq } = \\int { c \\delta \\frac{d\\tau}{dq} dq } = \n\\int { - \\frac{r^{2}}{c} \\frac{d\\varphi}{d\\tau} \\frac{d \\delta \\varphi}{dq} dq }\n\\,."
},
{
"math_id": 209,
"text": "\n0 = - \\frac{r^{2}}{c} \\frac{d\\varphi}{d\\tau} \\delta \\varphi \n- \\int { \\frac{d}{dq} \\left[ - \\frac{r^{2}}{c} \\frac{d\\varphi}{d\\tau} \\right] \\delta \\varphi dq }\n\\,."
},
{
"math_id": 210,
"text": "\\delta \\varphi"
},
{
"math_id": 211,
"text": "\n\\frac{d}{dq} \\left[ - \\frac{r^{2}}{c} \\frac{d\\varphi}{d\\tau} \\right] = 0\n\\,."
},
{
"math_id": 212,
"text": "\n2 c^{2} \\frac{d\\tau}{dq} \\delta \\frac{d\\tau}{dq} = \n2 \\left( 1 - \\frac{r_\\text{s}}{r} \\right) c^{2} \\frac{dt}{dq} \\delta \\frac{dt}{dq}\n\\,."
},
{
"math_id": 213,
"text": "\nc \\delta \\frac{d\\tau}{dq} = \nc \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\delta \\frac{dt}{dq}\n= c \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\frac{d \\delta t}{dq}\n\\,."
},
{
"math_id": 214,
"text": "\n0 = \\delta \\int { c \\frac{d\\tau}{dq} dq }\n= \\int { c \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\frac{d \\delta t}{dq} dq }\n\\,."
},
{
"math_id": 215,
"text": "\n0 = c \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\delta t \n- \\int { \\frac{d}{dq} \\left[ c \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\right] \\delta t dq }\n\\,."
},
{
"math_id": 216,
"text": "\n\\frac{d}{dq} \\left[ c \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau} \\right] = 0\n\\,."
},
{
"math_id": 217,
"text": "\\begin{align}\n L = p_{\\phi} &= m r^{2} \\frac{d\\varphi}{d\\tau}\\,, \\\\\n E = - p_{t} &= m c^2 \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{dt}{d\\tau}\\,.\n\\end{align}"
},
{
"math_id": 218,
"text": "\n\\frac{d\\varphi}{dt} = \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\frac{L \\, c^2}{E \\, r^2}\n\\,."
},
{
"math_id": 219,
"text": "\n\\frac{d\\varphi}{d\\tau} = \\frac{L}{m \\, r^2}\n\\,"
},
{
"math_id": 220,
"text": "\n\\frac{dt}{d\\tau} = \\frac{E}{\\left( 1 - \\frac{r_\\text{s}}{r} \\right) m \\, c^2}\n\\,"
},
{
"math_id": 221,
"text": "\nc^{2} = \\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\, \\frac{E^2}{m^2 c^2} - \n\\frac{1}{1 - \\frac{r_\\text{s}}{r}} \\left( \\frac{dr}{d\\tau} \\right)^{2} -\n\\frac{1}{r^{2}} \\, \\frac{L^2}{m^2}\n\\,,"
},
{
"math_id": 222,
"text": "\n{\\left( \\frac{dr}{d\\tau} \\right)}^{2} = \\frac{E^2}{m^2 c^2} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( c^{2} + \\frac{L^2}{m^2 r^2} \\right)\n\\,,"
},
{
"math_id": 223,
"text": "{\\left( \\frac{d\\varphi}{d\\tau} \\right)}^2 = \\frac{L^2}{m^2 r^4}"
},
{
"math_id": 224,
"text": "\n{\\left( \\frac{dr}{d\\varphi} \\right)}^{2} = \\frac{E^2 r^4}{L^2 c^2} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( \\frac{m^2 c^{2} r^4}{L^2} + r^2 \\right)\n\\,"
},
{
"math_id": 225,
"text": "\na = \\frac{L}{m \\, c}\n"
},
{
"math_id": 226,
"text": "\nb = \\frac{L \\, c}{E}\n\\,,"
},
{
"math_id": 227,
"text": "\n{\\left( \\frac{dr}{d\\varphi} \\right)}^{2} = \\frac{r^4}{b^2} - \\left( 1 - \\frac{r_\\text{s}}{r} \\right) \\left( \\frac{r^4}{a^2} + r^2 \\right)\n\\,."
}
]
| https://en.wikipedia.org/wiki?curid=1410576 |
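
The stepping rules recorded as math_id 133-140 describe a fourth-order Runge-Kutta-Nystrom march of the photon-orbit equation d^2u/dphi^2 = (3/2) r_s u^2 - u (math_id 131). The sketch below, which is not part of the original record, shows how those six formulas chain together in Python; the values of r_s, b, u0, the step size, and the step count are all illustrative assumptions, and the initial slope is taken from (du/dphi)^2 = r_s u^3 - u^2 + 1/b^2 (math_id 129).

```python
# Sketch only: RK4-Nystrom integration of d^2u/dphi^2 = (3/2) r_s u^2 - u,
# following the stepping formulas math_id 133-140 of this record.
# All numerical values below are illustrative, not taken from the source.

def integrate_orbit(r_s, u0, du0, dphi, steps):
    """Return a list of (phi, u) samples, where u = 1/r (math_id 141)."""
    def rhs(u):
        # Right-hand side of the orbit equation (math_id 131).
        return 1.5 * r_s * u * u - u

    u, du = u0, du0
    samples = [(0.0, u)]
    for n in range(steps):
        k1 = rhs(u)                                          # math_id 133
        k2 = rhs(u + 0.5 * dphi * du)                        # math_id 134
        k3 = rhs(u + 0.5 * dphi * du + 0.25 * dphi**2 * k1)  # math_id 135
        k4 = rhs(u + dphi * du + 0.5 * dphi**2 * k2)         # math_id 136
        # Advance u with the old slope (math_id 139-140),
        # then advance the slope itself (math_id 137-138).
        u = u + dphi * du + dphi**2 * (k1 + k2 + k3) / 6.0
        du = du + dphi * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        samples.append(((n + 1) * dphi, u))
    return samples

# Illustrative light ray: start far out, with the inward slope fixed by
# (du/dphi)^2 = r_s u^3 - u^2 + 1/b^2 (math_id 129); r_s = 1 sets the
# length unit and b = 5.5 is an arbitrary impact parameter.
r_s, b = 1.0, 5.5
u0 = 1.0e-3
du0 = (r_s * u0**3 - u0**2 + 1.0 / b**2) ** 0.5
path = integrate_orbit(r_s, u0, du0, dphi=1.0e-3, steps=5000)
```

The radius along the ray is recovered at each sample as r = 1/u (math_id 141).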