Consumption smoothing

Consumption smoothing is an economic concept for the practice of optimizing a person's standard of living through an appropriate balance between savings and consumption over time. An optimal consumption rate should be relatively similar at each stage of a person's life rather than fluctuating wildly. Luxurious consumption at an old age does not compensate for an impoverished existence at other stages of one's life.
Since income tends to be hump-shaped across an individual's life, economic theory suggests that individuals should on average have a low or negative savings rate at early stages of their life, a high rate in middle age, and a negative rate during retirement. Although many popular books on personal finance advocate that individuals should set aside money in savings at all stages of their life, economist James Choi notes that this deviates from the advice of economists.
Expected utility model.
The expected utility model assumes a utility function U(c) that is increasing and concave in c. Concavity captures the diminishing marginal returns associated with consumption: each additional unit of consumption adds less utility. The expected utility model states that individuals want to maximize their expected utility, defined as the weighted sum of utilities across states of the world, where the weights are the probabilities of each state occurring. According to the "more is better" principle, the first-order condition is positive; however, the second-order condition is negative, due to the principle of diminishing marginal utility. Because the utility function is concave, marginal utility decreases as consumption increases; as a result, it is favorable to reduce consumption in high-income states in order to increase consumption in low-income states.
Expected utility can be modeled as:
$EU = q \cdot U(W \mid \text{bad state}) + (1-q) \cdot U(W \mid \text{good state})$
where:
$q$ = probability you will lose all your wealth/consumption
$W$ = wealth
The model shows expected utility as the sum of the probability of being in a bad state multiplied by utility of being in a bad state and the probability of being in a good state multiplied by utility of being in a good state.
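A minimal numerical sketch of this two-state calculation; the square-root utility function and all numbers below are illustrative assumptions, not taken from the article:

```python
import math

def expected_utility(q, w_bad, w_good, U=math.sqrt):
    """Probability-weighted sum of utilities across two states of the world."""
    return q * U(w_bad) + (1 - q) * U(w_good)

q = 0.5  # assumed probability of the bad state

# A risky consumption plan (0 or 100) versus a smooth plan (50 in both states).
risky = expected_utility(q, 0.0, 100.0)   # 0.5*sqrt(0) + 0.5*sqrt(100) = 5.0
smooth = expected_utility(q, 50.0, 50.0)  # sqrt(50) ~ 7.07
print(risky, smooth)  # concavity of U makes the smooth plan preferred
```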
Similarly, actuarially fair insurance can also be modeled:
$EU = (1-q) \cdot U(W-p) + q \cdot U(W-p-d+p/q)$
where:
$q$ = probability you will lose all your wealth/consumption
$W$ = wealth
$d$ = damages
$p$ = insurance premium (so $p/q$ is the actuarially fair payout in the bad state)
An actuarially fair premium is an insurance premium set equal to the insurer's expected payout, so that the insurer expects to earn zero profit. Some individuals are risk-averse: a concave utility curve such as $U(c)=\sqrt{c}$ reveals a risk-averse individual. A convex utility curve would reveal a risk-seeking individual, and a straight line would reveal a risk-neutral individual.
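To make the fair-premium formula concrete, here is a hedged sketch; the probability, wealth, and damage figures are invented for illustration, and $U(c)=\sqrt{c}$ is the risk-averse utility mentioned above:

```python
import math

def eu_insured(q, W, d, p, U=math.sqrt):
    # A premium p buys an actuarially fair payout p/q in the bad state,
    # so the insurer's expected payout q*(p/q) exactly equals the premium.
    return (1 - q) * U(W - p) + q * U(W - p - d + p / q)

q, W, d = 0.1, 100.0, 60.0      # assumed loss probability, wealth, damages
p_fair = q * d                  # premium for full coverage: 6.0

uninsured = (1 - q) * math.sqrt(W) + q * math.sqrt(W - d)
print(uninsured, eu_insured(q, W, d, p_fair))  # ~9.63 < ~9.70
# Full insurance equalizes consumption at W - p in both states, which a
# risk-averse individual prefers to the uninsured gamble.
```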
Insurance and consumption smoothing.
One method used to smooth consumption is insurance. Insurance allows people to shift consumption from periods when their consumption is high (and has low marginal utility) to periods when their consumption is low (and has high marginal utility). Because there are many possible states of the world, people want to reduce the uncertainty of future outcomes. Basic insurance theory states that individuals will demand full insurance to fully smooth consumption across different states of the world. This explains why people purchase insurance, whether for healthcare, unemployment, or social security. To help illustrate this, consider a simplified hypothetical scenario with Person A, who can exist in one of two states of the world. In State X, Person A is healthy and can work; he enjoys a good income from his workplace and is able to spend money on necessities, such as paying rent and buying groceries, and luxuries, such as traveling to Europe. One day, an unfortunate accident occurs and Person A can no longer work; he therefore cannot obtain income from work and is in State Y, struggling to pay for necessities. In a perfect world, Person A would have known to save for this future accident and would have more savings to compensate for the lack of income post-injury. Rather than spend money on the trip to Europe in State X, Person A could have saved that money to use for necessities in State Y. However, people tend to be poor predictors of the future, especially those who are myopic. Insurance can therefore "smooth" between these two states and provide more certainty about the future.
Microcredit and consumption smoothing.
Though some argue that microcredit does not effectively lift people out of poverty, others note that offering a way to smooth consumption during difficult periods has proven effective. This supports the principle of diminishing marginal utility: those who have a history of suffering in extremely low-income states of the world want to prepare for the next time they experience an adverse state. This leads to support for microfinance as a consumption-smoothing tool: those in poverty value microloans tremendously because of the extremely high marginal utility of consumption in low-income states.
Hall and Friedman's model.
Another model to consider for consumption smoothing is Hall's model, which is inspired by Milton Friedman. Since Friedman's 1956 permanent income theory and Modigliani and Brumberg's 1954 life-cycle model, the idea that agents prefer a stable path of consumption has been widely accepted. This idea came to replace the perception that consumption was tied to current income through a fixed marginal propensity to consume.
Friedman's theory argues that consumption is linked to the permanent income of agents. Thus, when income is affected by transitory shocks, for example, agents' consumption should not change, since they can use savings or borrowing to adjust. This theory assumes that agents are able to finance consumption with earnings that are not yet generated, and thus assumes perfect capital markets. Empirical evidence shows that liquidity constraints are one of the main reasons why it is difficult to observe consumption smoothing in the data. In 1978, Robert Hall formalized Friedman's idea. By taking into account the diminishing returns to consumption, and therefore assuming a concave utility function, he showed that agents would optimally choose to keep a stable path of consumption.
With (cf. Hall's paper)
$E_t$ being the mathematical expectation conditional on all information available in $t$,
$\delta = 1/\beta - 1$ being the agent's rate of time preference,
$r_t = R_t - 1 \ge \delta$ being the real rate of interest in $t$,
$u$ being the strictly concave one-period utility function,
$c_t$ being the consumption in $t$,
$y_t = w_t$ being the earnings in $t$,
$A_t$ being the assets, apart from human capital, in $t$,
agents choose the consumption path that maximizes:
$E_{0}\sum_{t=0}^{\infty }\beta^{t}\left[u(c_{t})\right]$
Subject to a sequence of budget constraints:
$A_{t+1}=R_{t+1}(A_{t}+y_{t}-c_{t})$
The first order necessary condition in this case will be:
$\beta E_{t}R_{t+1}\frac{u^{\prime }(c_{t+1})}{u^{\prime }(c_{t})}=1$
By assuming that $R_{t+1}=R=\beta^{-1}$, the previous equation becomes:
$E_{t}u^{\prime }(c_{t+1})=u^{\prime }(c_{t})$
which, due to the concavity of the utility function, implies:
$E_{t}[c_{t+1}]=c_{t}$
Thus, rational agents would expect to achieve the same consumption in every period.
Hall also showed that for a quadratic utility function, the optimal consumption is equal to:
$c_{t}=\left[ \frac{r}{1+r}\right] \left[ E_{t}\sum_{i=0}^{\infty}\left( \frac{1}{1+r}\right)^{i}y_{t+i}+A_{t}\right]$
This expression shows that agents choose to consume a fraction of their present discounted value of their human and financial wealth.
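A small sketch of this annuity rule; the interest rate, earnings profile, and asset level are illustrative assumptions, and the infinite sum is truncated at a finite horizon:

```python
def hall_consumption(r, expected_earnings, assets):
    """c_t = [r/(1+r)] * (present discounted value of expected earnings + A_t)."""
    pdv = sum(y / (1 + r) ** i for i, y in enumerate(expected_earnings))
    return (r / (1 + r)) * (pdv + assets)

r = 0.05
earnings = [40.0] * 30 + [0.0] * 20  # assumed: 30 working periods, 20 retired
print(hall_consumption(r, earnings, assets=10.0))
# The same flat amount is consumed every period; saving during the working
# years finances the identical consumption level in retirement.
```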
Empirical evidence for Hall and Friedman's model.
Robert Hall (1978) estimated the Euler equation in order to find evidence of a random walk in consumption. The data used are US National Income and Product Accounts (NIPA) quarterly data from 1948 to 1977; the analysis does not consider the consumption of durable goods. Although Hall argues that he finds some evidence of consumption smoothing, he does so using a modified version of the model, and there are some econometric concerns about his findings.
Wilcox (1989) argues that liquidity constraints are the reason why consumption smoothing does not show up in the data. Zeldes (1989) follows the same argument and finds that a poor household's consumption is correlated with contemporaneous income, while a rich household's consumption is not. A recent meta-analysis of 3,000 estimates reported in 144 studies finds strong evidence for consumption smoothing.
https://en.wikipedia.org/wiki?curid=13966180
Magnetic keyed lock

Type of mechanical lock
A magnetic keyed lock or magnetic-coded lock is a locking mechanism whereby the key utilizes magnets as part of the locking and unlocking mechanism. Magnetic-coded locks encompass knob locks, cylinder locks, lever locks, and deadbolt locks as well as applications in other security devices.
Design.
A magnetic key uses from one to many small magnets oriented so that their north and south poles combine to push or pull the lock's internal tumblers, thus releasing the lock. This is a totally passive system requiring no electricity or electronics to activate or deactivate the mechanism. Using several magnets with differing polarities, orientations, and strengths can allow thousands of different combinations per key.
Magnetic-coded technology utilizes multiple pairs of magnetic pins with opposing poles that are embedded inside keys and plugs. When a correctly matched key is inserted into the lock, not only are all the mechanical pins pushed into the correct positions, the magnetic pins are also driven to the appropriate level by the magnetic force inside the key.
The magnetic pins are made with permanent magnets, which means the magnets stay magnetized. The intensity of the magnets will not decay over time or be affected by other magnetic fields.
Security.
In order to open a magnetic-coded lock, three criteria must be met: the correct cutting of the key's teeth, the correct magnetic pin locations, and the correct poles of the magnetic pins. If any of these three criteria are not satisfied, the lock remains inoperable and cannot be turned.
Traditional lock picking is impossible due to the tumblers being magnetically operated instead of via a physical up and down action. Magnetic keys also cannot be reproduced by locksmiths by sight or other "human sensed" information.
Equation.
The number of possible key combinations is:
$N = C \cdot 4^m$
where
$N$ is the number of magnetic-coded lock key combinations,
$C$ is the number of conventional pin-tumbler lock combinations,
$m$ is the number of pairs of embedded magnets (multiple pairs can be embedded).
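A one-line check of this count; the values chosen for $C$ and $m$ are illustrative assumptions:

```python
def key_combinations(C, m):
    # Each embedded pair of magnets multiplies the count by 4, per N = C * 4^m.
    return C * 4 ** m

print(key_combinations(C=5000, m=6))  # 5000 * 4**6 = 20,480,000
```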
History.
The magnetic-coded lock was invented by an engineer in Nanchang, China. There have been several Chinese patents taken out on this technology.
https://en.wikipedia.org/wiki?curid=13968483
Heawood graph

Undirected graph with 14 vertices
In the mathematical field of graph theory, the Heawood graph is an undirected graph with 14 vertices and 21 edges, named after Percy John Heawood.
Combinatorial properties.
The graph is cubic, and all cycles in the graph have six or more edges. Every smaller cubic graph has shorter cycles, so this graph is the 6-cage, the smallest cubic graph of girth 6. It is a distance-transitive graph (see the Foster census) and therefore distance regular.
There are 24 perfect matchings in the Heawood graph; for each matching, the set of edges not in the matching forms a Hamiltonian cycle. For instance, if the vertices of the graph are placed on a cycle, the internal diagonals of the cycle form a matching. By splitting the cycle edges into two matchings, we can partition the Heawood graph into three perfect matchings (that is, 3-color its edges) in eight different ways. Any two perfect matchings, and any two Hamiltonian cycles, can be transformed into each other by a symmetry of the graph.
There are 28 six-vertex cycles in the Heawood graph. Each 6-cycle is disjoint from exactly three other 6-cycles; among these three 6-cycles, each one is the symmetric difference of the other two. The graph with one node per 6-cycle, and one edge for each disjoint pair of 6-cycles, is the Coxeter graph.
Geometric and topological properties.
The Heawood graph is a toroidal graph; that is, it can be embedded without crossings onto a torus. The result is the regular map {6,3}_{2,1}, with 7 hexagonal faces. Each face of the map is adjacent to every other face, so coloring the map requires 7 colors. The map and graph were discovered by Percy John Heawood in 1890, who proved that no map on the torus could require more than seven colors, and thus this map is maximal.
The map can be faithfully realized as the Szilassi polyhedron, the only known polyhedron apart from the tetrahedron such that every pair of faces is adjacent.
The Heawood graph is the Levi graph of the Fano plane, the graph representing incidences between points and lines in that geometry. With this interpretation, the 6-cycles in the Heawood graph correspond to triangles in the Fano plane. Also, the Heawood graph is the Tits building of the group SL3(F2).
The Heawood graph has crossing number 3, and is the smallest cubic graph with that crossing number (sequence in the OEIS). Including the Heawood graph, there are 8 distinct graphs of order 14 with crossing number 3.
The Heawood graph is the smallest cubic graph with Colin de Verdière graph invariant μ = 6.
The Heawood graph is a unit distance graph: it can be embedded in the plane such that adjacent vertices are exactly at distance one apart, with no two vertices embedded to the same point and no vertex embedded into a point within an edge.
Algebraic properties.
The automorphism group of the Heawood graph is isomorphic to the projective linear group PGL2(7), a group of order 336. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore, the Heawood graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge.
More strongly, the Heawood graph is 4-arc-transitive.
According to the Foster census, the Heawood graph, referenced as F014A, is the only cubic symmetric graph on 14 vertices.
It has book thickness 3 and queue number 2.
The characteristic polynomial of the Heawood graph is $(x-3)(x+3)(x^2-2)^6$. It is the only graph with this characteristic polynomial, making it a graph determined by its spectrum.
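A quick numerical check of this spectrum, assuming the networkx and numpy libraries are available (networkx ships a built-in heawood_graph generator):

```python
import networkx as nx
import numpy as np

G = nx.heawood_graph()
A = nx.to_numpy_array(G)                 # adjacency matrix of the 14 vertices
eigenvalues = np.sort(np.linalg.eigvalsh(A))
print(np.round(eigenvalues, 6))
# Expected: -3 once, -sqrt(2) six times, +sqrt(2) six times, +3 once,
# matching the factorization (x-3)(x+3)(x^2-2)^6.
```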
https://en.wikipedia.org/wiki?curid=1396870
Tutte–Coxeter graph

In the mathematical field of graph theory, the Tutte–Coxeter graph or Tutte eight-cage or Cremona–Richmond graph is a 3-regular graph with 30 vertices and 45 edges. As the unique smallest cubic graph of girth 8, it is a cage and a Moore graph. It is bipartite, and can be constructed as the Levi graph of the generalized quadrangle $W_2$ (known as the Cremona–Richmond configuration). The graph is named after William Thomas Tutte and H. S. M. Coxeter; it was discovered by Tutte (1947), but its connection to geometric configurations was investigated by both authors in a pair of jointly published papers (Tutte 1958; Coxeter 1958a).
All the cubic distance-regular graphs are known; the Tutte–Coxeter graph is one of the 13 such graphs.
It has crossing number 13, book thickness 3 and queue number 2.
Constructions and automorphisms.
The Tutte–Coxeter graph is the bipartite Levi graph connecting the 15 perfect matchings of a 6-vertex complete graph $K_6$ to its 15 edges, as described by Coxeter (1958b), based on work by Sylvester (1844). Each vertex corresponds to an edge or a perfect matching, and each graph edge connects a matching to an edge it contains, encoding the incidence structure between edges and matchings.
Based on this construction, Coxeter showed that the Tutte–Coxeter graph is a symmetric graph; it has a group of 1440 automorphisms, which may be identified with the automorphisms of the group of permutations on six elements (Coxeter 1958b). The inner automorphisms of this group correspond to permuting the six vertices of the $K_6$ graph; these permutations act on the Tutte–Coxeter graph by permuting the vertices on each side of its bipartition while keeping each of the two sides fixed as a set. In addition, the outer automorphisms of the group of permutations swap one side of the bipartition for the other. As Coxeter showed, any path of up to five edges in the Tutte–Coxeter graph is equivalent to any other such path by one such automorphism.
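A sketch of this matchings-versus-edges construction, assuming networkx is available (the girth function requires NetworkX 3.0 or later); the vertex labels and helper names are our own:

```python
from itertools import combinations
import networkx as nx

edges = list(combinations(range(6), 2))          # the 15 edges of K6

def perfect_matchings(verts):
    """Yield the perfect matchings of the complete graph on verts (sorted)."""
    if not verts:
        yield frozenset()
        return
    a, rest = verts[0], verts[1:]
    for b in rest:
        # pair a with b, then match the remaining four vertices recursively;
        # (a, b) is already sorted because verts stays in ascending order
        for m in perfect_matchings([v for v in rest if v != b]):
            yield m | {(a, b)}

G = nx.Graph()
for m in perfect_matchings(list(range(6))):      # the 15 perfect matchings
    for e in edges:
        if e in m:                               # incidence: edge in matching
            G.add_edge(('edge', e), ('matching', m))

print(G.number_of_nodes(), G.number_of_edges())  # 30 45
print(nx.girth(G))                               # 8: the eight-cage property
```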
The Tutte–Coxeter graph as a building.
This graph is the spherical building associated to the symplectic group $Sp_4(\mathbb{F}_{2})$ (there is an exceptional isomorphism between this group and the symmetric group $S_6$). More specifically, it is the incidence graph of a generalized quadrangle.
Concretely, the Tutte–Coxeter graph can be defined from a 4-dimensional symplectic vector space $V$ over $\mathbb{F}_{2}$ as follows: one class of vertices consists of the nonzero vectors of $V$, the other class consists of the 2-dimensional subspaces $W\subset V$ that are totally isotropic for the symplectic form, and a vector is adjacent to a subspace $W$ exactly when $v \in W$.
https://en.wikipedia.org/wiki?curid=1396880
Levi graph

In combinatorial mathematics, a Levi graph or incidence graph is a bipartite graph associated with an incidence structure. From a collection of points and lines in an incidence geometry or a projective configuration, we form a graph with one vertex per point, one vertex per line, and an edge for every incidence between a point and a line. They are named for Friedrich Wilhelm Levi, who wrote about them in 1942.
The Levi graph of a system of points and lines usually has girth at least six: any 4-cycle would correspond to two lines through the same two points. Conversely, any bipartite graph with girth at least six can be viewed as the Levi graph of an abstract incidence structure. Levi graphs of configurations are biregular, and every biregular graph with girth at least six can be viewed as the Levi graph of an abstract configuration.
Levi graphs may also be defined for other types of incidence structure, such as the incidences between points and planes in Euclidean space. For every Levi graph, there is an equivalent hypergraph, and vice versa.
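As a concrete sketch (assuming networkx), here is the Levi graph of the Fano plane, built from one standard labeling of its seven lines; it has 14 vertices, 21 edges, girth 6, and is the Heawood graph discussed earlier:

```python
import networkx as nx

fano_lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
              {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]   # one standard Fano plane

G = nx.Graph()
for i, line in enumerate(fano_lines):
    for point in line:
        G.add_edge(('point', point), ('line', i))  # one edge per incidence

print(G.number_of_nodes(), G.number_of_edges())    # 14 21
print(nx.is_isomorphic(G, nx.heawood_graph()))     # True
```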
https://en.wikipedia.org/wiki?curid=1396888
Neher–McGrath method

Estimation method in electrical engineering
In electrical engineering, Neher–McGrath is a method of estimating the steady-state temperature of electrical power cables for some commonly encountered configurations. By estimating the temperature of the cables, the safe long-term current-carrying capacity of the cables can be calculated.
J. H. Neher and M. H. McGrath were two electrical engineers who wrote a paper in 1957 about how to calculate the current-carrying capacity (ampacity) of cables. The paper described two-dimensional, highly symmetric, simplified calculations which have formed the basis for many cable application guidelines and regulations. Complex geometries, or configurations that require three-dimensional analysis of heat flow, require more complex tools such as finite element analysis. Their article became the reference for the ampacity values in most standard tables.
Overview.
The Neher–McGrath paper summarized years of research into analytical treatment of the practical problem of heat transfer from power cables. The methods described included all the heat generation mechanisms from a power cable (conductor loss, dielectric loss and shield loss).
From the basic principles that electric current leads to thermal heating and thermal power transfer to the ambient environment requires some temperature difference, it follows that the current leads to a temperature rise in the conductors. The ampacity, or maximum allowable current, of an electric power cable depends on the allowable temperatures of the cable and any adjacent materials such as insulation or termination equipment. For insulated cables, the insulation maximum temperature is normally the limiting material property that constrains ampacity. For uninsulated cables (typically used in outdoor overhead installations), the tensile strength of the cable (as affected by temperature) is normally the limiting material property. The Neher–McGrath method is the electrical industry standard for calculating cable ampacity, most often employed via lookup in tables of precomputed results for common configurations.
US National Electrical Code use.
The equation in section 310-15(C) of the National Electrical Code, called the Neher–McGrath (NM) equation, may be used to estimate the effective ampacity of a cable:
$I = \sqrt{\frac{T_c-(T_a + \Delta T_d)}{R_\text{dc}(1+Y_c)R_{c,a}}}$
In the equation, $T_c$ is normally the limiting conductor temperature derived from the insulation or tensile strength limitations. $\Delta T_d$ is a term added to the ambient temperature $T_a$ to compensate for heat generated in the jacket and insulation at higher voltages. $\Delta T_d$ is called the dielectric loss temperature rise and is generally regarded as insignificant for voltages below 2000 V. The term $1+Y_c$ is a multiplier used to convert direct-current resistance ($R_\text{dc}$) to the effective alternating-current resistance (which typically includes conductor skin effects and eddy current losses). For wire sizes smaller than AWG No. 2, this term is also generally regarded as insignificant. $R_{c,a}$ is the effective thermal resistance between the conductor and the ambient conditions, which can require significant empirical or theoretical effort to estimate. With respect to the AC-sensitive terms, the tabular presentation of the NM equation results in the National Electrical Code was developed assuming the standard North American power frequency of 60 hertz and sinusoidal wave forms for current and voltage.
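A hedged implementation of the NM equation as written above; the function name and every numeric value below are illustrative placeholders, not tabulated NEC data, and consistent units are the caller's responsibility:

```python
import math

def neher_mcgrath_ampacity(Tc, Ta, dTd, Rdc, Yc, Rca):
    """I = sqrt((Tc - (Ta + dTd)) / (Rdc * (1 + Yc) * Rca)).

    Tc, Ta, dTd: conductor limit, ambient, and dielectric rise (deg C).
    Rdc: DC resistance; Yc: AC correction; Rca: thermal resistance.
    """
    return math.sqrt((Tc - (Ta + dTd)) / (Rdc * (1 + Yc) * Rca))

# Illustrative placeholder values only.
print(neher_mcgrath_ampacity(Tc=90.0, Ta=20.0, dTd=0.0,
                             Rdc=2e-5, Yc=0.02, Rca=5.0))
```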
The challenges posed by the complexity of estimating $R_{c,a}$ and of estimating the local increase in ambient temperature obtained by co-locating many cables (in a duct bank) create a market niche in the electric power industry for software dedicated to ampacity estimation.
https://en.wikipedia.org/wiki?curid=13969132
Closed monoidal category

Type of category in mathematics
In mathematics, especially in category theory, a closed monoidal category (or a "monoidal closed category") is a category that is both a monoidal category and a closed category in such a way that the structures are compatible.
A classic example is the category of sets, Set, where the monoidal product of sets $A$ and $B$ is the usual cartesian product $A \times B$, and the internal Hom $B^A$ is the set of functions from $A$ to $B$. A non-cartesian example is the category of vector spaces, K-Vect, over a field $K$. Here the monoidal product is the usual tensor product of vector spaces, and the internal Hom is the vector space of linear maps from one vector space to another.
The internal language of closed symmetric monoidal categories is linear logic and the type system is the linear type system. Many examples of closed monoidal categories are symmetric. However, this need not always be the case, as non-symmetric monoidal categories can be encountered in category-theoretic formulations of linguistics; roughly speaking, this is because word-order in natural language matters.
Definition.
A closed monoidal category is a monoidal category $\mathcal{C}$ such that for every object $B$ the functor given by right tensoring with $B$
$A\mapsto A\otimes B$
has a right adjoint, written
$A\mapsto (B \Rightarrow A).$
This means that there exists a bijection, called 'currying', between the Hom-sets
$\text{Hom}_\mathcal{C}(A\otimes B, C)\cong\text{Hom}_\mathcal{C}(A,B\Rightarrow C)$
that is natural in both "A" and "C". In a different, but common notation, one would say that the functor
$-\otimes B:\mathcal{C}\to\mathcal{C}$
has a right adjoint
$[B, -]:\mathcal{C}\to\mathcal{C}$
Equivalently, a closed monoidal category $\mathcal{C}$ is a category equipped, for every two objects $A$ and $B$, with
an object $A\Rightarrow B$, and
a morphism $\mathrm{eval}_{A,B} : (A\Rightarrow B) \otimes A \to B$,
satisfying the following universal property: for every morphism
$f : X\otimes A\to B$
there exists a unique morphism
$h : X \to A\Rightarrow B$
such that
$f = \mathrm{eval}_{A,B}\circ(h \otimes \mathrm{id}_A).$
It can be shown that this construction defines a functor $\Rightarrow : \mathcal{C}^{op} \times \mathcal{C} \to \mathcal{C}$. This functor is called the internal Hom functor, and the object $A \Rightarrow B$ is called the internal Hom of $A$ and $B$. Many other notations are in common use for the internal Hom. When the tensor product on $\mathcal{C}$ is the cartesian product, the usual notation is $B^A$ and this object is called the exponential object.
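In Set, the adjunction bijection above is ordinary currying, which can be sketched directly (a minimal illustration; the helper names are our own):

```python
def curry(f):
    """Send f : A x B -> C to its transpose A -> (B => C)."""
    return lambda a: (lambda b: f(a, b))

def uncurry(h):
    """The inverse direction of the bijection."""
    return lambda a, b: h(a)(b)

def ev(g, a):
    """eval : (A => B) x A -> B is just function application."""
    return g(a)

add = lambda x, y: x + y
print(curry(add)(2)(3))           # 5
print(uncurry(curry(add))(2, 3))  # 5
print(ev(curry(add)(2), 3))       # 5, i.e. f = eval o (curry(f) x id)
```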
Biclosed and symmetric categories.
Strictly speaking, we have defined a right closed monoidal category, since we required that right tensoring with any object $A$ has a right adjoint. In a left closed monoidal category, we instead demand that the functor of left tensoring with any object $A$
$B\mapsto A\otimes B$
have a right adjoint
$B\mapsto(B\Leftarrow A)$
A biclosed monoidal category is a monoidal category that is both left and right closed.
A symmetric monoidal category is left closed if and only if it is right closed. Thus we may safely speak of a 'symmetric monoidal closed category' without specifying whether it is left or right closed. In fact, the same is true more generally for braided monoidal categories: since the braiding makes $A \otimes B$ naturally isomorphic to $B \otimes A$, the distinction between tensoring on the left and tensoring on the right becomes immaterial, so every right closed braided monoidal category becomes left closed in a canonical way, and vice versa.
We have described closed monoidal categories as monoidal categories with an extra property. One can equivalently define a closed monoidal category to be a closed category with an extra property. Namely, we can demand the existence of a tensor product that is left adjoint to the internal Hom functor.
In this approach, closed monoidal categories are also called monoidal closed categories.
https://en.wikipedia.org/wiki?curid=1396924
Particle filter

Type of Monte Carlo algorithm for signal processing and statistical inference
Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, such as those arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states of dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral, in reference to mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s. The term "sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.
Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.
Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. It can be mitigated by including a resampling step before the weights become too uneven. Several adaptive resampling criteria can be used, including the variance of the weights and the relative entropy with respect to the uniform distribution. In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.
From the statistical and probabilistic point of view, particle filters may be interpreted as mean-field particle interpretations of Feynman-Kac probability measures. These particle integration techniques were developed in molecular chemistry and computational physics by Theodore E. Harris and Herman Kahn in 1951, Marshall N. Rosenbluth and Arianna W. Rosenbluth in 1955, and more recently by Jack H. Hetherington in 1984. In computational physics, these Feynman-Kac type path particle integration methods are also used in Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods. Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.
The particle filter methodology is used to solve Hidden Markov Model (HMM) and nonlinear filtering problems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) or wider classes of models (Benes filter), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. optimal filter), has no finite recursion. Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.
Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics, phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, quantitative risk and insurance and other fields.
History.
Heuristic-like algorithms.
From a statistical and probabilistic viewpoint, particle filters belong to the class of branching/genetic type algorithms, and mean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. In Evolutionary Computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. Metaheuristic). In computational physics and molecular chemistry, they are used to solve Feynman-Kac path integration problems or to compute Boltzmann-Gibbs measures, top eigenvalues, and ground states of Schrödinger operators. In Biology and Genetics, they represent the evolution of a population of individuals or genes in some environment.
The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. The first trace of particle filters in statistical methodology dates back to the mid-1950s: the 'Poor Man's Monte Carlo', proposed by Hammersley et al. in 1954, contained hints of the genetic type particle filtering methods used today. In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game. In the evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book published in 1975.
In Biology and Genetics, the Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms. The computer simulation of the evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms.
From the mathematical viewpoint, the conditional distribution of the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions. Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. One can also quote the earlier seminal works of Theodore E. Harris and Herman Kahn in particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter"; a slightly modified version of this article appeared in 1996. In April 1993, Gordon et al. published in their seminal work an application of a genetic type algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independently, related work on particle filters by Pierre Del Moral and by Himilcon Carvalho, Pierre Del Moral, André Monin, and Gérard Salut was published in the mid-1990s. Particle filters were also developed in signal processing in 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems.
Mathematical foundations.
From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, presented natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor any discussion of the bias of the estimates or of genealogical and ancestral tree-based algorithms.
The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral in 1996. The article also contains proof of the unbiased properties of a particle approximation of likelihood functions and unnormalized conditional probability measures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.
Dan Crisan, Jessica Gaines, and Terry Lyons, as well as Pierre Del Moral and Terry Lyons, created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo made further advances in this subject in 2000. Pierre Del Moral and Alice Guionnet proved the first central limit theorems in 1999, and Pierre Del Moral and Laurent Miclo proved them in 2000. The first uniform convergence results with respect to the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet. The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.
The theory of Feynman-Kac particle methodologies and related particle filter algorithms was developed in 2000 and 2004 in two books. These abstract probabilistic models encapsulate genetic type algorithms, particle and bootstrap filters, interacting Kalman filters (a.k.a. Rao–Blackwellized particle filters), and importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models, backward Markov particle models, adaptive mean-field particle models, island-type particle models, particle Markov chain Monte Carlo methodologies, Sequential Monte Carlo samplers, Sequential Monte Carlo Approximate Bayesian Computation methods, and Sequential Monte Carlo ABC based Bayesian Bootstrap.
The filtering problem.
Objective.
A particle filter's goal is to estimate the posterior density of state variables given observation variables. The particle filter is intended for use with a hidden Markov Model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state-process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known.
A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space such as the one below:
formula_0
the filtering problem is to estimate sequentially the values of the hidden states formula_1, given the values of the observation process formula_2 at any time step "k".
All Bayesian estimates of formula_1 follow from the posterior density formula_3. The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov Chain Monte Carlo or importance sampling approach would model the full posterior formula_4.
The Signal-Observation model.
Particle methods often assume formula_1 and the observations formula_5 can be modeled in this form:
with an initial probability density formula_11.
An example of system with these properties is:
formula_17
formula_18
where both formula_19 and formula_20 are mutually independent sequences with known probability density functions and "g" and "h" are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions "g" and "h" in the above example are linear, and if both formula_19 and formula_20 are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible).
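A hedged simulation sketch of such a signal-observation pair; the specific nonlinear g and h, the Gaussian noises, and all constants below are illustrative assumptions (a classic benchmark choice), not prescribed by the model:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                       # assumed state transition
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2)

def h(x):                       # assumed observation map
    return x ** 2 / 20.0

T = 50
x = np.zeros(T)                 # hidden states X_k
y = np.zeros(T)                 # observations Y_k
x[0] = rng.normal(0.0, 1.0)     # draw from the initial density p(x_0)
for k in range(1, T):
    x[k] = g(x[k - 1]) + rng.normal(0.0, np.sqrt(10.0))   # W_{k-1}
    y[k] = h(x[k]) + rng.normal(0.0, 1.0)                 # V_k
```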
The assumption that the initial distribution and the transitions of the Markov chain are continuous for the Lebesgue measure can be relaxed. To design a particle filter we simply need to assume that we can sample the transitions formula_21 of the Markov chain formula_22 and to compute the likelihood function formula_23 (see for instance the genetic selection mutation description of the particle filter given below). The continuous assumption on the Markov transitions of formula_1 is only used to derive in an informal (and rather abusive) way different formulae between posterior distributions using the Bayes' rule for conditional densities.
Approximate Bayesian computation models.
In certain problems, the conditional distribution of observations, given the random states of the signal, may fail to have a density; the latter may be impossible or too complex to compute. In this situation, an additional level of approximation is necessitated. One strategy is to replace the signal formula_1 by the Markov chain formula_24 and to introduce a virtual observation of the form
formula_25
for some sequence of independent random variables formula_26 with known probability density functions. The central idea is to observe that
formula_27
The particle filter associated with the Markov process formula_24 given the partial observations formula_28 is defined in terms of particles evolving in formula_29 with a likelihood function given with some obvious abusive notation by formula_30. These probabilistic techniques are closely related to Approximate Bayesian Computation (ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter. They were further developed by P. Del Moral, A. Doucet and A. Jasra.
The nonlinear filtering equation.
Bayes' rule for conditional probability gives:
formula_31
where
formula_32
Particle filters are also an approximation, but with enough particles they can be much more accurate. The nonlinear filtering equation is given by the recursion
with the convention formula_33 for "k" = 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially.
Feynman-Kac formulation.
We fix a time horizon n and a sequence of observations formula_34, and for each "k" = 0, ..., "n" we set:
formula_35
In this notation, for any bounded function "F" on the set of trajectories of formula_1 from the origin "k" = 0 up to time "k" = "n", we have the Feynman-Kac formula
formula_36
Feynman-Kac path integration models arise in a variety of scientific disciplines, including in computational physics, biology, information theory and computer sciences. Their interpretations are dependent on the application domain. For instance, if we choose the indicator function formula_37 of some subset of the state space, they represent the conditional distribution of a Markov chain given it stays in a given tube; that is, we have:
formula_38
and
formula_39
as soon as the normalizing constant is strictly positive.
Particle filters.
A Genetic type particle algorithm.
Initially, such an algorithm starts with "N" independent random variables formula_40 with common probability density formula_11. The genetic algorithm selection-mutation transitions
formula_41
mimic/approximate the updating-prediction transitions of the optimal filter evolution (Eq. 1):
formula_43
where formula_44 stands for the Dirac measure at a given state a.
formula_46
In the above displayed formulae formula_47 stands for the likelihood function formula_23 evaluated at formula_48, and formula_49 stands for the conditional density formula_50 evaluated at formula_51.
At each time "k", we have the particle approximations
formula_52
and
formula_53
In the genetic algorithms and evolutionary computing community, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including ones with random population sizes, have also been proposed in the literature.
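A minimal sketch of one selection-mutation transition for the benchmark model simulated earlier; Gaussian observation noise with unit variance is assumed, and g and h are repeated so the snippet is self-contained:

```python
import numpy as np

def g(x):                       # assumed transition, as before
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2)

def h(x):                       # assumed observation map, as before
    return x ** 2 / 20.0

def selection_mutation(particles, y_k, rng):
    # Selection: resample particles in proportion to the likelihood of y_k.
    w = np.exp(-0.5 * (y_k - h(particles)) ** 2)
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    # Mutation: propagate every selected particle with the Markov kernel.
    return g(particles[idx]) + rng.normal(0.0, np.sqrt(10.0), len(particles))

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 1.0, 500)   # N samples from the initial density
particles = selection_mutation(particles, y_k=4.0, rng=rng)
print(particles.mean())                 # crude one-step filter estimate
```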
Monte Carlo principles.
Particle methods, like all sampling-based approaches (e.g., Markov Chain Monte Carlo), generate a set of samples that approximate the filtering density
formula_54
For example, we may have "N" samples from the approximate posterior distribution of formula_1, where the samples are labeled with superscripts as:
formula_55
Then, expectations with respect to the filtering distribution are approximated by
with
formula_56
where formula_44 stands for the Dirac measure at a given state a. The function "f", in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some approximation error. When the approximation equation (Eq. 2) is satisfied for any bounded function "f" we write
formula_57
Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines
formula_58
of the particles formula_59. The random states formula_60, with the lower indices l=0...,k, stands for the ancestor of the individual formula_61 at level l=0...,k. In this situation, we have the approximation formula
with the empirical measure
formula_62
Here "F" stands for any bounded function on the path space of the signal. In a more synthetic form (Eq. 3) is equivalent to
formula_63
Particle filters can be interpreted in many different ways. From the probabilistic point of view they coincide with a mean-field particle interpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.
Mean-field particle simulation.
The general probabilistic principle.
The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form formula_64 where formula_65 stands for some mapping from the set of probability distribution into itself. For instance, the evolution of the one-step optimal predictor formula_66
satisfies a nonlinear evolution starting with the probability distribution formula_67. One of the simplest ways to approximate these probability measures is to start with "N" independent random variables formula_40 with common probability distribution formula_67 . Suppose we have defined a sequence of "N" random variables formula_68 such that
formula_69
At the next step we sample "N" (conditionally) independent random variables formula_70 with common law
formula_71
A particle interpretation of the filtering equation.
We illustrate this mean-field particle principle in the context of the evolution of the one step optimal predictors
For "k" = 0 we use the convention formula_72.
By the law of large numbers, we have
formula_73
in the sense that
formula_74
for any bounded function formula_75. We further assume that we have constructed a sequence of particles formula_76 at some rank "k" such that
formula_77
in the sense that for any bounded function formula_75 we have
formula_78
In this situation, replacing formula_79 by the empirical measure formula_80 in the evolution equation of the one-step optimal filter stated in (Eq. 4) we find that
formula_81
Notice that the right hand side in the above formula is a weighted probability mixture
formula_82
where formula_47 stands for the density formula_83 evaluated at formula_48, and formula_84 stands for the density formula_50 evaluated at formula_48 for formula_85
Then, we sample "N" independent random variable formula_86 with common probability density formula_87 so that
formula_88
Iterating this procedure, we design a Markov chain such that
formula_89
Notice that the optimal filter is approximated at each time step k using the Bayes' formulae
formula_90
The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measure formula_91 by the empirical approximation formula_92. The mean-field particle approximation of the filtering problem is far from being unique. Several strategies are developed in the books.
Some convergence results.
The analysis of the convergence of particle filters was started in 1996 and continued in 2000 in a book and a series of articles. More recent developments can be found in later books. When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates
formula_93
are controlled by the non asymptotic uniform estimates
formula_94
formula_95
for any function "f" bounded by 1, and for some finite constants formula_96. In addition, for any formula_97:
formula_98
for some finite constants formula_99 related to the asymptotic bias and variance of the particle estimate, and some finite constant "c". The same results are satisfied if we replace the one step optimal predictor by the optimal filter approximation.
Genealogical trees and Unbiasedness properties.
Genealogical tree based particle smoothing.
Tracing back in time the ancestral lines
formula_100
of the individuals formula_101 and formula_102 at every time step "k", we also have the particle approximations
formula_103
These empirical approximations are equivalent to the particle integral approximations
formula_104
for any bounded function "F" on the random trajectories of the signal. As shown in the literature, the evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the books cited above.
Unbiased particle estimates of likelihood functions.
We use the product formula
formula_105
with
formula_106
and the conventions formula_107 and formula_108 for "k" = 0. Replacing formula_109 by the empirical approximation
formula_53
in the above displayed formula, we design the following unbiased particle approximation of the likelihood function
formula_110
with
formula_111
where formula_47 stands for the density formula_83 evaluated at formula_48. The design of this particle estimate and its unbiasedness property were proved in 1996 in the article by Del Moral. Refined variance estimates can be found in the literature.
Backward particle smoothers.
Using Bayes' rule, we have the formula
formula_112
Notice that
formula_113
This implies that
formula_114
Replacing the one-step optimal predictors formula_115 by the particle empirical measures
formula_116
we find that
formula_117
We conclude that
formula_118
with the backward particle approximation
formula_119
The probability measure
formula_120
is the probability of the random paths of a Markov chain formula_121 running backward in time from time k=n to time k=0, and evolving at each time step k in the state space associated with the population of particles formula_122
formula_124
formula_128
In the above displayed formula, formula_129 stands for the conditional distribution formula_130 evaluated at formula_131. In the same vein, formula_132 and formula_133 stand for the conditional densities formula_134 and formula_9 evaluated at formula_131 and formula_135. These models allow one to reduce integration with respect to the densities formula_136 to matrix operations with respect to the Markov transitions of the chain described above. For instance, for any function formula_137 we have the particle estimates
formula_138
where
formula_139
This also shows that if
formula_140
then
formula_141
Some convergence results.
We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition.
In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by
formula_142
for some finite constant "c". In addition, for any formula_97:
formula_143
for some finite constants formula_99 related to the asymptotic bias and variance of the particle estimate, and for some finite constant "c".
The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees
formula_144
are controlled by the non asymptotic uniform estimates
formula_145
for any function "F" bounded by 1, and for some finite constants formula_146. In addition, for any formula_97:
formula_147
for some finite constants formula_99 related to the asymptotic bias and variance of the particle estimate, and for some finite constant "c". The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form
formula_148
with
formula_149
with functions formula_137 bounded by 1, we have
formula_150
and
formula_151
for some finite constants formula_152. More refined estimates, including exponentially small probabilities of error, are developed in the literature.
Sequential Importance Resampling (SIR).
Monte Carlo filter and bootstrap filter.
"Sequential Importance Resampling (SIR)", Monte Carlo filtering (Kitagawa 1993), the bootstrap filtering algorithm (Gordon et al. 1993), and single distribution resampling (Bejuri W.M.Y.B et al. 2017) are commonly applied filtering algorithms, which approximate the filtering probability density formula_153 by a weighted set of "N" samples
formula_154
The "importance weights" formula_155 are approximations to the relative posterior probabilities (or densities) of the samples such that
formula_156
Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function "f" can be approximated as a weighted average
formula_157
For a finite set of samples, the algorithm performance is dependent on the choice of the "proposal distribution"
formula_158.
The "optimal" proposal distribution is given as the "target distribution":
formula_159
This particular choice of proposal transition was proposed by P. Del Moral in 1996 and 1998. When it is difficult to sample transitions according to the distribution formula_160, one natural strategy is to use the following particle approximation
formula_161
with the empirical approximation
formula_162
associated with "N" (or any other large number of samples) independent random samples formula_163 with the conditional distribution of the random state formula_1 given formula_164. The consistency of the resulting particle filter and of other extensions of this approximation has been established in the literature. In the above display, formula_44 stands for the Dirac measure at a given state "a".
However, the transition prior probability distribution is often used as the importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:
formula_165
"Sequential Importance Resampling" (SIR) filters with the transition prior probability distribution as the importance function are commonly known as the bootstrap filter and condensation algorithm.
"Resampling" is used to avoid the problem of the degeneracy of the algorithm, that is, avoiding the situation that all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The "stratified sampling" proposed by Kitagawa (1993) is optimal in terms of variance.
A single step of sequential importance resampling is as follows (a Python sketch of the full step is given after the list):
1) For formula_59 draw samples from the "proposal distribution"
formula_166
2) For formula_59 update the importance weights up to a normalizing constant:
formula_167
Note that when we use the transition prior probability distribution as the importance function,
formula_168
this simplifies to the following:
formula_169
3) For formula_59 compute the normalized importance weights:
formula_170
4) Compute an estimate of the effective number of particles as
formula_171
This criterion reflects the variance of the weights. Other criteria, together with their rigorous analysis and central limit theorems, can be found in the literature.
5) If the effective number of particles is less than a given threshold formula_172, then perform resampling:
a) Draw "N" particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For formula_59 set formula_173
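A minimal Python sketch of this step, using the transition prior as the proposal (so the weight update is just the likelihood), might look as follows. The model callbacks and the toy model at the end are hypothetical placeholders, and multinomial resampling is used for brevity even though stratified resampling has lower variance:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, y, transition_sample, likelihood, n_thr):
    """One SIR step with the transition prior as proposal (bootstrap filter)."""
    particles = transition_sample(particles)      # 1) draw from the proposal
    weights = weights * likelihood(y, particles)  # 2) weight update
    weights = weights / weights.sum()             # 3) normalize
    n_eff = 1.0 / np.sum(weights ** 2)            # 4) effective sample size
    if n_eff < n_thr:                             # 5) resample if degenerate
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy model: x_k = 0.9 x_{k-1} + N(0, 1), y_k = x_k + noise with s.d. 0.5
trans = lambda x: 0.9 * x + rng.normal(0.0, 1.0, size=x.shape)
lik = lambda y, x: np.exp(-0.5 * ((y - x) / 0.5) ** 2)
N = 1000
x, w = rng.normal(0.0, 1.0, N), np.full(N, 1.0 / N)
for y in [0.3, -0.1, 0.7]:
    x, w = sir_step(x, w, y, trans, lik, n_thr=N / 2)
    print("posterior mean estimate:", np.sum(w * x))
</syntaxhighlight>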
The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the term "Importance Resampling" is more accurate because the word "resampling" implies that the initial sampling has already been done.
"Direct version" algorithm.
The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample "x" at "k" from formula_174:
1) Set "n" "= 0" (This will count the number of particles generated so far)
2) Uniformly choose an index i from the range formula_175
3) Generate a test formula_176 from the distribution formula_9 with formula_177
4) Generate the probability of formula_178 using formula_176 from formula_179 where formula_180 is the measured value
5) Generate another uniform u from formula_181 where formula_182
6) Compare u and formula_183
6a) If u is larger, then repeat from step 2
6b) If u is smaller, then save formula_176 as formula_184 and increment n
7) If "n == N" then quit
The goal is to generate "N" "particles" at "k" using only the particles from formula_185. This requires that a Markov equation can be written (and computed) to generate a formula_186 based only upon formula_187. This algorithm uses the composition of the "N" particles from formula_185 to generate a particle at "k" and repeats (steps 2–6) until "N" particles are generated at "k".
This can be more easily visualized if "x" is viewed as a two-dimensional array. One dimension is "k" and the other dimension is the particle number. For example, formula_188 would be the ith particle at formula_189 and can also be written formula_190 (as done above in the algorithm). Step 3 generates a "potential" formula_186 based on a randomly chosen particle (formula_191) at time formula_185 and rejects or accepts it in step 6. In other words, the formula_186 values are generated using the previously generated formula_187.
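A sketch of the direct version in Python, with the likelihood bound "m_k" and the model callbacks as assumed inputs, is as follows:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

def direct_sample(particles_prev, y, transition_sample, lik, m_k, N):
    """Composition/rejection sampler: draw N particles at time k from the
    particles at time k-1, following steps 1-7 above."""
    out = np.empty(N)
    n = 0                                            # step 1
    while n < N:                                     # steps 2-7
        i = rng.integers(len(particles_prev))        # step 2: uniform index
        x_hat = transition_sample(particles_prev[i]) # step 3: draw from p(x_k|x_{k-1})
        p_hat = lik(y, x_hat)                        # step 4: p(y_k | x_hat)
        u = rng.uniform(0.0, m_k)                    # step 5: m_k >= sup_x p(y_k|x)
        if u <= p_hat:                               # step 6: accept or reject
            out[n] = x_hat
            n += 1
    return out
</syntaxhighlight>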
Applications.
Particle filters and Feynman–Kac particle methodologies find application in several contexts, as an effective means of tackling noisy observations or strong nonlinearities, in fields such as signal and image processing, Bayesian inference, machine learning, risk analysis, and robotics.
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{cccccccccc}\nX_0&\\to &X_1&\\to &X_2&\\to&X_3&\\to &\\cdots&\\text{signal}\\\\\n\\downarrow&&\\downarrow&&\\downarrow&&\\downarrow&&\\cdots&\\\\\nY_0&&Y_1&&Y_2&&Y_3&&\\cdots&\\text{observation}\n\\end{array}"
},
{
"math_id": 1,
"text": "X_k"
},
{
"math_id": 2,
"text": "Y_0,\\cdots,Y_k,"
},
{
"math_id": 3,
"text": "p(x_k|y_0,y_1,...,y_k)"
},
{
"math_id": 4,
"text": "p(x_0,x_1,...,x_k|y_0,y_1,...,y_k)"
},
{
"math_id": 5,
"text": "Y_k"
},
{
"math_id": 6,
"text": "X_0, X_1, \\cdots"
},
{
"math_id": 7,
"text": "\\mathbb R^{d_x}"
},
{
"math_id": 8,
"text": "d_x\\geqslant 1"
},
{
"math_id": 9,
"text": "p(x_k|x_{k-1})"
},
{
"math_id": 10,
"text": "X_k|X_{k-1}=x_k \\sim p(x_k|x_{k-1})"
},
{
"math_id": 11,
"text": "p(x_0)"
},
{
"math_id": 12,
"text": "Y_0, Y_1, \\cdots"
},
{
"math_id": 13,
"text": "\\mathbb{R}^{d_y}"
},
{
"math_id": 14,
"text": "d_y\\geqslant 1"
},
{
"math_id": 15,
"text": "X_k=x_k"
},
{
"math_id": 16,
"text": "Y_k|X_k=y_k \\sim p(y_k|x_k)"
},
{
"math_id": 17,
"text": "X_k = g(X_{k-1}) + W_{k-1}"
},
{
"math_id": 18,
"text": "Y_k = h(X_k) + V_k"
},
{
"math_id": 19,
"text": "W_k"
},
{
"math_id": 20,
"text": "V_k"
},
{
"math_id": 21,
"text": "X_{k-1} \\to X_k"
},
{
"math_id": 22,
"text": "X_k,"
},
{
"math_id": 23,
"text": "x_k\\mapsto p(y_k|x_k)"
},
{
"math_id": 24,
"text": "\\mathcal X_k=\\left(X_k,Y_k\\right)"
},
{
"math_id": 25,
"text": "\\mathcal Y_k=Y_k+\\epsilon \\mathcal V_k\\quad\\mbox{for some parameter}\\quad\\epsilon\\in [0,1]"
},
{
"math_id": 26,
"text": "\\mathcal V_k"
},
{
"math_id": 27,
"text": "\\text{Law}\\left(X_k|\\mathcal Y_0=y_0,\\cdots, \\mathcal Y_k=y_k\\right)\\approx_{\\epsilon\\downarrow 0} \\text{Law}\\left(X_k|Y_0=y_0,\\cdots, Y_k=y_k\\right)"
},
{
"math_id": 28,
"text": "\\mathcal Y_0=y_0,\\cdots, \\mathcal Y_k=y_k,"
},
{
"math_id": 29,
"text": "\\mathbb R^{d_x+d_y}"
},
{
"math_id": 30,
"text": "p(\\mathcal Y_k|\\mathcal X_k)"
},
{
"math_id": 31,
"text": "p(x_0, \\cdots, x_k|y_0,\\cdots,y_k) =\\frac{p(y_0,\\cdots,y_k|x_0, \\cdots, x_k) p(x_0,\\cdots,x_k)}{p(y_0,\\cdots,y_k)}"
},
{
"math_id": 32,
"text": "\\begin{align}\np(y_0,\\cdots,y_k) &=\\int p(y_0,\\cdots,y_k|x_0,\\cdots, x_k) p(x_0,\\cdots,x_k) dx_0\\cdots dx_k \\\\\np(y_0,\\cdots, y_k|x_0,\\cdots ,x_k) &=\\prod_{l=0}^{k} p(y_l|x_l) \\\\\np(x_0,\\cdots, x_k) &=p_0(x_0)\\prod_{l=1}^{k} p(x_l|x_{l-1})\n\\end{align}"
},
{
"math_id": 33,
"text": "p(x_0|y_0,\\cdots,y_{k-1})=p(x_0)"
},
{
"math_id": 34,
"text": "Y_0=y_0,\\cdots,Y_n=y_n"
},
{
"math_id": 35,
"text": "G_k(x_k)=p(y_k|x_k)."
},
{
"math_id": 36,
"text": "\\begin{align}\n\\int F(x_0,\\cdots,x_n) p(x_0,\\cdots,x_n|y_0,\\cdots,y_n) dx_0\\cdots dx_n &= \\frac{\\int F(x_0,\\cdots,x_n) \\left\\{\\prod\\limits_{k=0}^{n} p(y_k|x_k)\\right\\}p(x_0,\\cdots,x_n) dx_0\\cdots dx_n}{\\int \\left\\{\\prod\\limits_{k=0}^{n} p(y_k|x_k)\\right\\}p(x_0,\\cdots,x_n) dx_0\\cdots dx_n}\\\\\n&=\\frac{E\\left(F(X_0,\\cdots,X_n)\\prod\\limits_{k=0}^{n} G_k(X_k)\\right)}{E\\left(\\prod\\limits_{k=0}^{n} G_k(X_k)\\right)}\n\\end{align}"
},
{
"math_id": 37,
"text": "G_n(x_n)=1_A(x_n)"
},
{
"math_id": 38,
"text": "E\\left(F(X_0,\\cdots,X_n) | X_0\\in A, \\cdots, X_n\\in A\\right) =\\frac{E\\left(F(X_0,\\cdots,X_n)\\prod\\limits_{k=0}^{n} G_k(X_k)\\right)}{E\\left(\\prod\\limits_{k=0}^{n} G_k(X_k)\\right)}"
},
{
"math_id": 39,
"text": "P\\left(X_0\\in A,\\cdots, X_n\\in A\\right)=E\\left(\\prod\\limits_{k=0}^{n} G_k(X_k)\\right)"
},
{
"math_id": 40,
"text": "\\left(\\xi^i_0\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 41,
"text": "\\xi_k:=\\left(\\xi^i_{k}\\right)_{1\\leqslant i\\leqslant N}\\stackrel{\\text{selection}}{\\longrightarrow} \\widehat{\\xi}_k:=\\left(\\widehat{\\xi}^i_{k}\\right)_{1\\leqslant i\\leqslant N}\\stackrel{\\text{mutation}}{\\longrightarrow} \\xi_{k+1}:=\\left(\\xi^i_{k+1}\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 42,
"text": "\\widehat{\\xi}_k:=\\left(\\widehat{\\xi}^i_{k}\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 43,
"text": "\\sum_{i=1}^N \\frac{p(y_k|\\xi^i_k)}{\\sum_{j=1}^Np(y_k|\\xi^j_k)} \\delta_{\\xi^i_k}(dx_k)"
},
{
"math_id": 44,
"text": "\\delta_a"
},
{
"math_id": 45,
"text": "\\widehat{\\xi}^i_k"
},
{
"math_id": 46,
"text": "\\widehat{\\xi}^i_k \\longrightarrow\\xi^i_{k+1} \\sim p(x_{k+1}|\\widehat{\\xi}^i_k), \\qquad i=1,\\cdots,N."
},
{
"math_id": 47,
"text": "p(y_k|\\xi^i_k)"
},
{
"math_id": 48,
"text": "x_k=\\xi^i_k"
},
{
"math_id": 49,
"text": "p(x_{k+1}|\\widehat{\\xi}^i_k)"
},
{
"math_id": 50,
"text": "p(x_{k+1}|x_k)"
},
{
"math_id": 51,
"text": "x_k=\\widehat{\\xi}^i_k"
},
{
"math_id": 52,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_k):=\\frac{1}{N} \\sum_{i=1}^N \\delta_{\\widehat{\\xi}^i_k} (dx_k) \\approx_{N\\uparrow\\infty} p(dx_k|y_0,\\cdots,y_k) \\approx_{N\\uparrow\\infty} \n\\sum_{i=1}^N \\frac{p(y_k|\\xi^i_k)}{\\sum_{i=1}^N p(y_k|\\xi^j_k)} \\delta_{\\xi^i_k}(dx_k)"
},
{
"math_id": 53,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_{k-1}):=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^i_k}(dx_k) \\approx_{N\\uparrow\\infty} p(dx_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 54,
"text": "p(x_k|y_0, \\cdots, y_k)."
},
{
"math_id": 55,
"text": "\\widehat{\\xi}_k^1, \\cdots, \\widehat{\\xi}_k^{N}."
},
{
"math_id": 56,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_k)=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\widehat{\\xi}^i_k}(dx_k)"
},
{
"math_id": 57,
"text": "p(dx_k|y_0,\\cdots,y_k):=p(x_k|y_0,\\cdots,y_k) dx_k \\approx_{N\\uparrow\\infty} \\widehat{p}(dx_k|y_0,\\cdots,y_k)=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\widehat{\\xi}^{i}_k}(dx_k)"
},
{
"math_id": 58,
"text": "\\left(\\widehat{\\xi}^{i}_{0,k}, \\widehat{\\xi}^{i}_{1,k},\\cdots,\\widehat{\\xi}^{i}_{k-1,k},\\widehat{\\xi}^i_{k,k}\\right)"
},
{
"math_id": 59,
"text": "i=1,\\cdots,N"
},
{
"math_id": 60,
"text": "\\widehat{\\xi}^{i}_{l,k}"
},
{
"math_id": 61,
"text": "\\widehat{\\xi}^{i}_{k,k}=\\widehat{\\xi}^i_k"
},
{
"math_id": 62,
"text": "\\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k):=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\left(\\widehat{\\xi}^{i}_{0,k},\\widehat{\\xi}^{i}_{1,k},\\cdots,\\widehat{\\xi}^{i}_{k,k}\\right)}(d(x_0,\\cdots,x_k))"
},
{
"math_id": 63,
"text": "\\begin{align}\np(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k)&:=p(x_0,\\cdots,x_k|y_0,\\cdots,y_k) \\, dx_0\\cdots dx_k \\\\\n&\\approx_{N\\uparrow\\infty} \\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k) \\\\\n&:=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\left(\\widehat{\\xi}^{i}_{0,k}, \\cdots,\\widehat{\\xi}^{i}_{k,k}\\right)}(d(x_0,\\cdots,x_k))\n\\end{align}"
},
{
"math_id": 64,
"text": "\\eta_{n+1}=\\Phi_{n+1}\\left(\\eta_{n}\\right)"
},
{
"math_id": 65,
"text": "\\Phi_{n+1}"
},
{
"math_id": 66,
"text": " \\eta_n(dx_n) =p(x_n|y_0,\\cdots,y_{n-1})dx_n"
},
{
"math_id": 67,
"text": "\\eta_0(dx_0)=p(x_0)dx_0"
},
{
"math_id": 68,
"text": "\\left(\\xi^i_n\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 69,
"text": "\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^i_n}(dx_n) \\approx_{N\\uparrow\\infty} \\eta_n(dx_n)"
},
{
"math_id": 70,
"text": "\\xi_{n+1}:=\\left(\\xi^i_{n+1}\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 71,
"text": "\\Phi_{n+1}\\left(\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^i_n}\\right) \\approx_{N\\uparrow\\infty} \\Phi_{n+1}\\left(\\eta_{n}\\right)=\\eta_{n+1}"
},
{
"math_id": 72,
"text": "p(x_0|y_0,\\cdots,y_{-1}):=p(x_0)"
},
{
"math_id": 73,
"text": "\\widehat{p}(dx_0)=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^{i}_0}(dx_0)\\approx_{N\\uparrow\\infty} p(x_0)dx_0"
},
{
"math_id": 74,
"text": "\\int f(x_0)\\widehat{p}(dx_0)=\\frac{1}{N}\\sum_{i=1}^N f(\\xi^i_0)\\approx_{N\\uparrow\\infty} \\int f(x_0)p(dx_0)dx_0"
},
{
"math_id": 75,
"text": "f"
},
{
"math_id": 76,
"text": "\\left(\\xi^i_k\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 77,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_{k-1}):=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^{i}_k}(dx_k)\\approx_{N\\uparrow\\infty}~p(x_k~|~y_0,\\cdots,y_{k-1})dx_k"
},
{
"math_id": 78,
"text": "\\int f(x_k)\\widehat{p}(dx_k|y_0,\\cdots,y_{k-1})=\\frac{1}{N}\\sum_{i=1}^N f(\\xi^i_k)\\approx_{N\\uparrow\\infty} \\int f(x_k)p(dx_k|y_0,\\cdots,y_{k-1})dx_k"
},
{
"math_id": 79,
"text": "p(x_k|y_0,\\cdots,y_{k-1}) dx_k"
},
{
"math_id": 80,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 81,
"text": "p(x_{k+1}|y_0,\\cdots,y_k)\\approx_{N\\uparrow\\infty} \\int p(x_{k+1}|x'_{k}) \\frac{p(y_k|x_k') \\widehat{p}(dx'_k|y_0,\\cdots,y_{k-1})}{ \\int p(y_k|x''_k) \\widehat{p}(dx''_k|y_0,\\cdots,y_{k-1})}"
},
{
"math_id": 82,
"text": "\\int p(x_{k+1}|x'_{k}) \\frac{p(y_k|x_k') \\widehat{p}(dx'_k|y_0,\\cdots,y_{k-1})}{\\int p(y_k|x''_k) \\widehat{p}(dx''_k|y_0,\\cdots,y_{k-1})}=\\sum_{i=1}^N \\frac{p(y_k|\\xi^i_k)}{\\sum_{i=1}^N p(y_k|\\xi^j_k)} p(x_{k+1}|\\xi^i_k)=:\\widehat{q}(x_{k+1}|y_0,\\cdots,y_k)"
},
{
"math_id": 83,
"text": "p(y_k|x_k)"
},
{
"math_id": 84,
"text": "p(x_{k+1}|\\xi^i_k)"
},
{
"math_id": 85,
"text": "i=1,\\cdots,N."
},
{
"math_id": 86,
"text": "\\left(\\xi^i_{k+1}\\right)_{1\\leqslant i\\leqslant N}"
},
{
"math_id": 87,
"text": "\\widehat{q}(x_{k+1}|y_0,\\cdots,y_k)"
},
{
"math_id": 88,
"text": "\\widehat{p}(dx_{k+1}|y_0,\\cdots,y_{k}):=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^{i}_{k+1}}(dx_{k+1})\\approx_{N\\uparrow\\infty} \\widehat{q}(x_{k+1}|y_0,\\cdots,y_{k}) dx_{k+1} \\approx_{N\\uparrow\\infty} p(x_{k+1}|y_0,\\cdots,y_{k})dx_{k+1}"
},
{
"math_id": 89,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_{k-1}):=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^i_k}(dx_k) \\approx_{N\\uparrow\\infty} p(dx_k|y_0,\\cdots,y_{k-1}):=p(x_k|y_0,\\cdots,y_{k-1}) dx_k"
},
{
"math_id": 90,
"text": "p(dx_{k}|y_0,\\cdots,y_{k}) \\approx_{N\\uparrow\\infty} \\frac{p(y_{k}|x_{k}) \\widehat{p}(dx_{k}|y_0,\\cdots,y_{k-1})}{\\int p(y_{k}|x'_{k})\\widehat{p}(dx'_{k}|y_0,\\cdots,y_{k-1})}=\\sum_{i=1}^N \\frac{p(y_k|\\xi^i_k)}{\\sum_{j=1}^Np(y_k|\\xi^j_k)}~\\delta_{\\xi^i_k}(dx_k)"
},
{
"math_id": 91,
"text": "p(dx_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 92,
"text": "\\widehat{p}(dx_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 93,
"text": "I_k(f):=\\int f(x_k) p(dx_k|y_0,\\cdots,y_{k-1}) \\approx_{N\\uparrow\\infty} \\widehat{I}_k(f):=\\int f(x_k) \\widehat{p}(dx_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 94,
"text": "\\sup_{k\\geqslant 0}\\left\\vert E\\left(\\widehat{I}_k(f)\\right)-I_k(f)\\right\\vert\\leqslant \\frac{c_1}{N}"
},
{
"math_id": 95,
"text": "\\sup_{k\\geqslant 0}E\\left(\\left[\\widehat{I}_k(f)-I_k(f)\\right]^2\\right)\\leqslant \\frac{c_2}{N}"
},
{
"math_id": 96,
"text": "c_1,c_2."
},
{
"math_id": 97,
"text": "x\\geqslant 0"
},
{
"math_id": 98,
"text": "\\mathbf{P} \\left ( \\left| \\widehat{I}_k(f)-I_k(f)\\right|\\leqslant c_1 \\frac{x}{N}+c_2 \\sqrt{\\frac{x}{N}}\\land \\sup_{0\\leqslant k\\leqslant n}\\left| \\widehat{I}_k(f)-I_k(f)\\right|\\leqslant c \\sqrt{\\frac{x\\log(n)}{N}} \\right ) > 1-e^{-x}"
},
{
"math_id": 99,
"text": "c_1, c_2"
},
{
"math_id": 100,
"text": "\\left(\\widehat{\\xi}^i_{0,k},\\widehat{\\xi}^i_{1,k},\\cdots,\\widehat{\\xi}^i_{k-1,k},\\widehat{\\xi}^i_{k,k}\\right), \\quad \\left(\\xi^i_{0,k},\\xi^i_{1,k},\\cdots,\\xi^i_{k-1,k},\\xi^i_{k,k}\\right)"
},
{
"math_id": 101,
"text": "\\widehat{\\xi}^i_{k}\\left(=\\widehat{\\xi}^i_{k,k}\\right)"
},
{
"math_id": 102,
"text": "\\xi^i_{k}\\left(={\\xi}^i_{k,k}\\right)"
},
{
"math_id": 103,
"text": "\\begin{align}\n\\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k) &:=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\left(\\widehat{\\xi}^i_{0,k},\\cdots,\\widehat{\\xi}^i_{0,k}\\right)}(d(x_0,\\cdots,x_k)) \\\\\n&\\approx_{N\\uparrow\\infty} p(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k) \\\\\n&\\approx_{N\\uparrow\\infty} \\sum_{i=1}^N \\frac{p(y_k|\\xi^i_{k,k})}{\\sum_{j=1}^Np(y_k|\\xi^j_{k,k})} \\delta_{\\left(\\xi^i_{0,k},\\cdots,\\xi^i_{0,k}\\right)}(d(x_0,\\cdots,x_k)) \\\\\n& \\ \\\\\n\\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_{k-1}) &:=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\left(\\xi^i_{0,k},\\cdots,\\xi^i_{k,k}\\right)}(d(x_0,\\cdots,x_k)) \\\\\n&\\approx_{N\\uparrow\\infty} p(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_{k-1}) \\\\\n&:=p(x_0,\\cdots,x_k|y_0,\\cdots,y_{k-1}) dx_0,\\cdots,dx_k\n\\end{align}"
},
{
"math_id": 104,
"text": "\\begin{align}\n\\int F(x_0,\\cdots,x_n) \\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k) &:=\\frac{1}{N}\\sum_{i=1}^N F\\left(\\widehat{\\xi}^i_{0,k},\\cdots,\\widehat{\\xi}^i_{0,k}\\right) \\\\\n&\\approx_{N\\uparrow\\infty} \\int F(x_0,\\cdots,x_n) p(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_k) \\\\\n&\\approx_{N\\uparrow\\infty} \\sum_{i=1}^N \\frac{p(y_k|\\xi^i_{k,k})}{\\sum_{j=1}^N p(y_k|\\xi^j_{k,k})} F\\left(\\xi^i_{0,k}, \\cdots,\\xi^i_{k,k} \\right) \\\\\n& \\ \\\\\n\\int F(x_0,\\cdots,x_n) \\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_{k-1}) &:=\\frac{1}{N} \\sum_{i=1}^N F\\left(\\xi^i_{0,k},\\cdots,\\xi^i_{k,k}\\right) \\\\\n&\\approx_{N\\uparrow\\infty} \\int F(x_0,\\cdots,x_n) p(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_{k-1})\n\\end{align}"
},
{
"math_id": 105,
"text": "p(y_0,\\cdots,y_n)=\\prod_{k=0}^n p(y_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 106,
"text": "p(y_k|y_0,\\cdots,y_{k-1})=\\int p(y_k|x_k) p(dx_k|y_0,\\cdots,y_{k-1})"
},
{
"math_id": 107,
"text": "p(y_0|y_0,\\cdots,y_{-1})=p(y_0)"
},
{
"math_id": 108,
"text": "p(x_0|y_0,\\cdots,y_{-1})=p(x_0),"
},
{
"math_id": 109,
"text": "p(x_k|y_0,\\cdots,y_{k-1})dx_k"
},
{
"math_id": 110,
"text": "p(y_0,\\cdots,y_n) \\approx_{N\\uparrow\\infty} \\widehat{p}(y_0,\\cdots,y_n)=\\prod_{k=0}^n \\widehat{p}(y_k|y_0,\\cdots,y_{k-1}) "
},
{
"math_id": 111,
"text": "\\widehat{p}(y_k|y_0,\\cdots,y_{k-1})=\\int p(y_k|x_k) \\widehat{p}(dx_k|y_0,\\cdots,y_{k-1})=\\frac{1}{N}\\sum_{i=1}^N p(y_k|\\xi^i_k)"
},
{
"math_id": 112,
"text": "p(x_0,\\cdots,x_n|y_0,\\cdots,y_{n-1}) = p(x_n | y_0,\\cdots,y_{n-1}) p(x_{n-1}|x_n, y_0,\\cdots,y_{n-1} ) \\cdots p(x_1|x_2,y_0,y_1) p(x_0|x_1,y_0)"
},
{
"math_id": 113,
"text": " \\begin{align} \np(x_{k-1}|x_{k},(y_0,\\cdots,y_{k-1})) &\\propto p(x_{k}|x_{k-1})p(x_{k-1}|(y_0,\\cdots,y_{k-1})) \\\\\np(x_{k-1}|(y_0,\\cdots,y_{k-1}) &\\propto p(y_{k-1}|x_{k-1})p(x_{k-1}|(y_0,\\cdots,y_{k-2})\n\\end{align}"
},
{
"math_id": 114,
"text": "p(x_{k-1}|x_k, (y_0,\\cdots,y_{k-1}))=\\frac{p(y_{k-1}|x_{k-1})p(x_{k}|x_{k-1})p(x_{k-1}|y_0,\\cdots,y_{k-2})}{\\int p(y_{k-1}|x'_{k-1})p(x_{k}|x'_{k-1})p(x'_{k-1}|y_0,\\cdots,y_{k-2}) dx'_{k-1}}"
},
{
"math_id": 115,
"text": "p(x_{k-1}|(y_0,\\cdots,y_{k-2}))dx_{k-1}"
},
{
"math_id": 116,
"text": "\\widehat{p}(dx_{k-1}|(y_0,\\cdots,y_{k-2}))=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^i_{k-1}}(dx_{k-1}) \\left(\\approx_{N\\uparrow\\infty} p(dx_{k-1}|(y_0,\\cdots,y_{k-2})):={p}(x_{k-1}|(y_0,\\cdots,y_{k-2})) dx_{k-1}\\right)"
},
{
"math_id": 117,
"text": "\\begin{align}\np(dx_{k-1}| x_{k},(y_0,\\cdots,y_{k-1})) &\\approx_{N\\uparrow\\infty} \\widehat{p}(dx_{k-1}|x_{k},(y_0,\\cdots,y_{k-1})) \\\\\n&:= \\frac{p(y_{k-1}|x_{k-1}) p(x_{k}|x_{k-1}) \\widehat{p}(dx_{k-1}|y_0,\\cdots,y_{k-2})}{\\int p(y_{k-1}|x'_{k-1})~p(x_{k}| x'_{k-1}) \\widehat{p}(dx'_{k-1}|y_0,\\cdots,y_{k-2})}\\\\\n&= \\sum_{i=1}^{N} \\frac{p(y_{k-1}|\\xi^i_{k-1}) p(x_{k}|\\xi^i_{k-1})}{\\sum_{j=1}^{N} p(y_{k-1}|\\xi^j_{k-1}) p(x_{k}|\\xi^j_{k-1})} \\delta_{\\xi^i_{k-1}}(dx_{k-1})\n\\end{align}"
},
{
"math_id": 118,
"text": "p(d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1})) \\approx_{N\\uparrow\\infty} \\widehat{p}_{backward}(d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1}))"
},
{
"math_id": 119,
"text": "\\begin{align}\n\\widehat{p}_{backward} (d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1})) = \\widehat{p}(dx_n|(y_0,\\cdots,y_{n-1})) \\widehat{p}(dx_{n-1}|x_n,(y_0,\\cdots,y_{n-1})) \\cdots \\widehat{p}(dx_1|x_2,(y_0,y_1)) \\widehat{p}(dx_0|x_1,y_0)\n\\end{align}"
},
{
"math_id": 120,
"text": "\\widehat{p}_{backward}(d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1}))"
},
{
"math_id": 121,
"text": "\\left(\\mathbb X^{\\flat}_{k,n}\\right)_{0\\leqslant k\\leqslant n}"
},
{
"math_id": 122,
"text": "\\xi^i_k, i=1,\\cdots,N."
},
{
"math_id": 123,
"text": "\\mathbb X^{\\flat}_{n,n}"
},
{
"math_id": 124,
"text": "\\widehat{p}(dx_{n}|(y_0,\\cdots,y_{n-1}))=\\frac{1}{N}\\sum_{i=1}^N \\delta_{\\xi^i_{n}}(dx_{n})"
},
{
"math_id": 125,
"text": "\\mathbb X^{\\flat}_{k,n}=\\xi^i_k"
},
{
"math_id": 126,
"text": " i=1,\\cdots,N"
},
{
"math_id": 127,
"text": "\\mathbb{X}^{\\flat}_{k-1,n}"
},
{
"math_id": 128,
"text": "\\widehat{p}(dx_{k-1}|\\xi^i_{k},(y_0,\\cdots,y_{k-1}))= \\sum_{j=1}^N\\frac{p(y_{k-1}|\\xi^j_{k-1}) p(\\xi^i_{k}|\\xi^j_{k-1})}{\\sum_{l=1}^Np(y_{k-1}|\\xi^l_{k-1}) p(\\xi^i_{k}|\\xi^l_{k-1})}~\\delta_{\\xi^j_{k-1}}(dx_{k-1})"
},
{
"math_id": 129,
"text": "\\widehat{p}(dx_{k-1}|\\xi^i_{k},(y_0,\\cdots,y_{k-1}))"
},
{
"math_id": 130,
"text": "\\widehat{p}(dx_{k-1}|x_k, (y_0,\\cdots,y_{k-1}))"
},
{
"math_id": 131,
"text": "x_k=\\xi^i_{k}"
},
{
"math_id": 132,
"text": "p(y_{k-1}|\\xi^j_{k-1})"
},
{
"math_id": 133,
"text": "p(\\xi^i_k|\\xi^j_{k-1})"
},
{
"math_id": 134,
"text": "p(y_{k-1}|x_{k-1})"
},
{
"math_id": 135,
"text": "x_{k-1}=\\xi^j_{k-1}."
},
{
"math_id": 136,
"text": "p((x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1}))"
},
{
"math_id": 137,
"text": "f_k"
},
{
"math_id": 138,
"text": "\\begin{align}\n\\int p(d(x_0,\\cdots,x_n)&|(y_0,\\cdots,y_{n-1}))f_k(x_k) \\\\\n&\\approx_{N\\uparrow\\infty} \\int \\widehat{p}_{backward}(d(x_0,\\cdots,x_n)| (y_0,\\cdots,y_{n-1})) f_k(x_k) \\\\\n&=\\int \\widehat{p}(dx_n| (y_0,\\cdots,y_{n-1})) \\widehat{p}(dx_{n-1}|x_n,(y_0,\\cdots,y_{n-1})) \\cdots \\widehat{p}(dx_k| x_{k+1},(y_0,\\cdots,y_k)) f_k(x_k) \\\\\n&=\\underbrace{\\left[\\tfrac{1}{N},\\cdots,\\tfrac{1}{N}\\right]}_{N \\text{ times}}\\mathbb{M}_{n-1} \\cdots\\mathbb M_{k} \\begin{bmatrix} f_k(\\xi^1_k)\\\\\n\\vdots\\\\ f_k(\\xi^N_k) \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 139,
"text": "\\mathbb M_k= (\\mathbb M_k(i,j))_{1\\leqslant i,j\\leqslant N}: \\qquad \\mathbb M_k(i,j)=\\frac{p(\\xi^i_{k}|\\xi^j_{k-1})~p(y_{k-1}|\\xi^j_{k-1})}{\\sum\\limits_{l=1}^{N} p(\\xi^i_{k}|\\xi^l_{k-1}) p(y_{k-1}|\\xi^l_{k-1})}"
},
{
"math_id": 140,
"text": "\\overline{F}(x_0,\\cdots,x_n):=\\frac{1}{n+1}\\sum_{k=0}^n f_k(x_k)"
},
{
"math_id": 141,
"text": "\\begin{align} \n\\int \\overline{F}(x_0,\\cdots,x_n) p(d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1})) &\\approx_{N\\uparrow\\infty} \\int \\overline{F}(x_0,\\cdots,x_n) \\widehat{p}_{backward}(d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1})) \\\\\n&=\\frac{1}{n+1} \\sum_{k=0}^n \\underbrace{\\left[\\tfrac{1}{N},\\cdots,\\tfrac{1}{N}\\right]}_{N \\text{ times}}\\mathbb M_{n-1}\\mathbb M_{n-2}\\cdots\\mathbb{M}_k \\begin{bmatrix} f_k(\\xi^1_k)\\\\ \\vdots\\\\ f_k(\\xi^N_k) \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 142,
"text": "E\\left(\\widehat{p}(y_0,\\cdots,y_n)\\right)= p(y_0,\\cdots,y_n), \\qquad E\\left(\\left[\\frac{\\widehat{p}(y_0,\\cdots,y_n)}{p(y_0,\\cdots,y_n)}-1\\right]^2\\right)\\leqslant \\frac{cn}{N},"
},
{
"math_id": 143,
"text": "\\mathbf{P} \\left ( \\left\\vert \\frac{1}{n}\\log{\\widehat{p}(y_0,\\cdots,y_n)}-\\frac{1}{n}\\log{p(y_0,\\cdots,y_n)}\\right\\vert \\leqslant c_1 \\frac{x}{N}+c_2 \\sqrt{\\frac{x}{N}} \\right ) > 1-e^{-x} "
},
{
"math_id": 144,
"text": "\\begin{align}\nI^{path}_k(F) &:=\\int F(x_0,\\cdots,x_k) p(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_{k-1}) \\\\\n&\\approx_{N\\uparrow\\infty} \\widehat{I}^{path}_k(F) \\\\\n&:=\\int F(x_0,\\cdots,x_k) \\widehat{p}(d(x_0,\\cdots,x_k)|y_0,\\cdots,y_{k-1}) \\\\\n&=\\frac{1}{N}\\sum_{i=1}^N F\\left(\\xi^i_{0,k},\\cdots,\\xi^i_{k,k}\\right)\n\\end{align}"
},
{
"math_id": 145,
"text": "\\left| E\\left(\\widehat{I}^{path}_k(F)\\right)-I_k^{path}(F)\\right|\\leqslant \\frac{c_1 k}{N}, \\qquad E\\left(\\left[\\widehat{I}^{path}_k(F)-I_k^{path}(F)\\right]^2\\right)\\leqslant \\frac{c_2 k}{N},"
},
{
"math_id": 146,
"text": "c_1, c_2."
},
{
"math_id": 147,
"text": "\\mathbf{P} \\left ( \\left| \\widehat{I}^{path}_k(F)-I_k^{path}(F)\\right | \\leqslant c_1 \\frac{kx}{N}+c_2 \\sqrt{\\frac{kx}{N}} \\land \\sup_{0\\leqslant k\\leqslant n}\\left| \\widehat{I}_k^{path}(F)-I^{path}_k(F)\\right| \\leqslant c \\sqrt{\\frac{xn\\log(n)}{N}} \\right ) > 1-e^{-x}"
},
{
"math_id": 148,
"text": "\\overline{F}(x_0,\\cdots,x_n):=\\frac{1}{n+1}\\sum_{0\\leqslant k\\leqslant n}f_k(x_k)"
},
{
"math_id": 149,
"text": "I^{path}_n(\\overline{F}) \\approx_{N\\uparrow\\infty} I^{\\flat, path}_n(\\overline{F}):=\\int \\overline{F}(x_0,\\cdots,x_n) \\widehat{p}_{backward}(d(x_0,\\cdots,x_n)|(y_0,\\cdots,y_{n-1}))"
},
{
"math_id": 150,
"text": "\\sup_{n\\geqslant 0}{\\left\\vert E\\left(\\widehat{I}^{\\flat,path}_n(\\overline{F})\\right)-I_n^{path}(\\overline{F})\\right\\vert} \\leqslant \\frac{c_1}{N}"
},
{
"math_id": 151,
"text": "E\\left(\\left[\\widehat{I}^{\\flat,path}_n(F)-I_n^{path}(F)\\right]^2\\right)\\leqslant \\frac{c_2}{nN}+ \\frac{c_3}{N^2}"
},
{
"math_id": 152,
"text": "c_1,c_2,c_3."
},
{
"math_id": 153,
"text": "p(x_k|y_0,\\cdots,y_k)"
},
{
"math_id": 154,
"text": " \\left \\{ \\left (w^{(i)}_k,x^{(i)}_k \\right ) \\ : \\ i\\in\\{1,\\cdots,N\\} \\right \\}."
},
{
"math_id": 155,
"text": "w^{(i)}_k"
},
{
"math_id": 156,
"text": "\\sum_{i=1}^N w^{(i)}_k = 1."
},
{
"math_id": 157,
"text": " \\int f(x_k) p(x_k|y_0,\\dots,y_k) dx_k \\approx \\sum_{i=1}^N w_k^{(i)} f(x_k^{(i)})."
},
{
"math_id": 158,
"text": "\\pi(x_k|x_{0:k-1},y_{0:k})\\, "
},
{
"math_id": 159,
"text": "\\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1},y_{k})=\\frac{p(y_k|x_k)}{\\int p(y_k|x_k)p(x_k|x_{k-1})dx_k}~p(x_k|x_{k-1})."
},
{
"math_id": 160,
"text": " p(x_k|x_{k-1},y_{k})"
},
{
"math_id": 161,
"text": "\\begin{align} \n\\frac{p(y_k|x_k)}{\\int p(y_k|x_k)p(x_k|x_{k-1})dx_k} p(x_k|x_{k-1})dx_k &\\simeq_{N\\uparrow\\infty} \\frac{p(y_k|x_k)}{\\int p(y_k|x_k)\\widehat{p}(dx_k|x_{k-1})} \\widehat{p}(dx_k|x_{k-1}) \\\\\n&= \\sum_{i=1}^N \\frac{p(y_k|X^i_k(x_{k-1}))}{\\sum_{j=1}^N p(y_k|X^j_k(x_{k-1}))} \\delta_{X^i_k(x_{k-1})}(dx_k)\n\\end{align}"
},
{
"math_id": 162,
"text": " \\widehat{p}(dx_k|x_{k-1})= \\frac{1}{N}\\sum_{i=1}^{N} \\delta_{X^i_k(x_{k-1})}(dx_k)~\\simeq_{N\\uparrow\\infty} p(x_k|x_{k-1})dx_k "
},
{
"math_id": 163,
"text": "X^i_k(x_{k-1}), i=1,\\cdots,N "
},
{
"math_id": 164,
"text": "X_{k-1}=x_{k-1}"
},
{
"math_id": 165,
"text": "\\pi(x_k|x_{0:k-1},y_{0:k}) = p(x_k|x_{k-1})."
},
{
"math_id": 166,
"text": "x^{(i)}_k \\sim \\pi(x_k|x^{(i)}_{0:k-1},y_{0:k})"
},
{
"math_id": 167,
"text": "\\hat{w}^{(i)}_k = w^{(i)}_{k-1} \\frac{p(y_k|x^{(i)}_k) p(x^{(i)}_k|x^{(i)}_{k-1})} {\\pi(x_k^{(i)}|x^{(i)}_{0:k-1},y_{0:k})}."
},
{
"math_id": 168,
"text": " \\pi(x_k^{(i)}|x^{(i)}_{0:k-1},y_{0:k}) = p(x^{(i)}_k|x^{(i)}_{k-1}),"
},
{
"math_id": 169,
"text": " \\hat{w}^{(i)}_k = w^{(i)}_{k-1} p(y_k|x^{(i)}_k), "
},
{
"math_id": 170,
"text": "w^{(i)}_k = \\frac{\\hat{w}^{(i)}_k}{\\sum_{j=1}^N \\hat{w}^{(j)}_k}"
},
{
"math_id": 171,
"text": "\\hat{N}_\\mathit{eff} = \\frac{1}{\\sum_{i=1}^N\\left(w^{(i)}_k\\right)^2} "
},
{
"math_id": 172,
"text": "\\hat{N}_\\mathit{eff} < N_{thr}"
},
{
"math_id": 173,
"text": "w^{(i)}_k = 1/N."
},
{
"math_id": 174,
"text": "p_{x_k|y_{1:k}}(x|y_{1:k})"
},
{
"math_id": 175,
"text": "\\{1,..., N\\}"
},
{
"math_id": 176,
"text": "\\hat{x}"
},
{
"math_id": 177,
"text": " x_{k-1}=x_{k-1|k-1}^{(i)}"
},
{
"math_id": 178,
"text": "\\hat{y}"
},
{
"math_id": 179,
"text": "p(y_k|x_k),~\\mbox{with}~x_k=\\hat{x}"
},
{
"math_id": 180,
"text": "y_k"
},
{
"math_id": 181,
"text": "[0, m_k]"
},
{
"math_id": 182,
"text": "m_k = \\sup_{x_k} p(y_k|x_k) "
},
{
"math_id": 183,
"text": "p\\left(\\hat{y}\\right)"
},
{
"math_id": 184,
"text": "x_{k|k}^{(i)}"
},
{
"math_id": 185,
"text": "k-1"
},
{
"math_id": 186,
"text": "x_k"
},
{
"math_id": 187,
"text": "x_{k-1}"
},
{
"math_id": 188,
"text": "x(k,i)"
},
{
"math_id": 189,
"text": "k"
},
{
"math_id": 190,
"text": "x_k^{(i)}"
},
{
"math_id": 191,
"text": "x_{k-1}^{(i)}"
}
]
| https://en.wikipedia.org/wiki?curid=1396948 |
13969950 | Zariski's main theorem | Theorem of algebraic geometry and commutative algebra
In algebraic geometry, Zariski's main theorem, proved by Oscar Zariski (1943), is a statement about the structure of birational morphisms stating roughly that there is only one branch at any normal point of a variety. It is the special case of Zariski's connectedness theorem when the two varieties are birational.
Zariski's main theorem can be stated in several ways which at first sight seem to be quite different, but are in fact deeply related. Some of the variations that have been called Zariski's main theorem are described in the sections below.
Several results in commutative algebra imply the geometric forms of Zariski's main theorem, including the formulation in terms of local rings discussed below.
The original result was labelled as the "MAIN THEOREM" in Zariski (1943).
Zariski's main theorem for birational morphisms.
Let "f" be a birational mapping of algebraic varieties "V" and "W". Recall that "f" is defined by a closed subvariety formula_0 (a "graph" of "f") such that the projection on the first factor formula_1 induces an isomorphism between an open subset formula_2 and formula_3, and such that formula_4 is an isomorphism on "U" too. The complement of "U" in "V" is called a "fundamental variety" or "indeterminacy locus", and the image of a subset of "V" under formula_4 is called a "total transform" of it.
The original statement of the theorem in Zariski (1943) reads:
MAIN THEOREM: If "W" is an irreducible fundamental variety on "V" of a birational correspondence "T" between "V" and "V"′ and if "T" has no fundamental elements on "V"′ then — under the assumption that "V" is locally normal at "W" — each irreducible component of the transform "T"["W"] is of higher dimension than "W".
Here "T" is essentially a morphism from "V"′ to "V" that is birational, "W" is a subvariety of the set where the inverse of "T" is not defined whose local ring is normal, and the transform "T"["W"] means the inverse image of "W" under the morphism from "V"′ to "V".
Here are some variants of this theorem stated using more recent terminology. The following connectedness statement is often called "Zariski's main theorem":
If "f":"X"→"Y" is a birational projective morphism between noetherian integral schemes, then the inverse image of every normal point of "Y" is connected.
The following consequence of it (Theorem V.5.2, "loc. cit.") also goes under this name:
If "f":"X"→"Y" is a birational transformation of projective varieties with "Y" normal, then the total transform of a fundamental point of "f" is connected and of dimension at least 1.
Zariski's main theorem for quasifinite morphisms.
In EGA III, Grothendieck calls the following statement, which does not involve connectedness, a "Main theorem" of Zariski:
If "f":"X"→"Y" is a quasi-projective morphism of Noetherian schemes then the set of points that are isolated in their fiber is open in "X". Moreover the induced scheme of this set is isomorphic to an open subset of a scheme that is finite over "Y".
In EGA IV, Grothendieck observed that the last statement could be deduced from a more general theorem about the structure of quasi-finite morphisms, and the latter is often referred to as "Zariski's main theorem in the form of Grothendieck".
It is well known that open immersions and finite morphisms are quasi-finite. Grothendieck proved that, under the hypothesis of separatedness, all quasi-finite morphisms are compositions of such morphisms:
if "Y" is a quasi-compact separated scheme and formula_5 is a separated, quasi-finite, finitely presented morphism, then there is a factorization into formula_6, where the first map is an open immersion and the second one is finite.
The relation between this theorem about quasi-finite morphisms and Théorème 4.4.3 of EGA III quoted above is that
if "f":"X"→"Y" is a projective morphism of varieties, then the set of points that are isolated in their fiber is quasi-finite over "Y". The structure theorem for quasi-finite morphisms then applies and yields the desired result.
Zariski's main theorem for commutative rings.
Zariski reformulated his main theorem in terms of commutative algebra as a statement about local rings. This formulation was subsequently generalized as follows:
If "B" is an algebra of finite type over a local Noetherian ring "A", and "n" is a maximal ideal of "B" which is minimal among ideals of "B" whose inverse image in "A" is the maximal ideal "m" of "A", then there is a finite "A"-algebra "A"′ with a maximal ideal "m"′ (whose inverse image in "A" is "m") such that the localization "B""n" is isomorphic to the "A"-algebra "A"′"m"′.
If in addition "A" and "B" are integral and have the same field of fractions, and "A" is integrally closed, then this theorem implies that "A" and "B" are equal. This is essentially Zariski's formulation of his main theorem in terms of commutative rings.
Zariski's main theorem: topological form.
A topological version of Zariski's main theorem says that if "x" is a (closed) point of a normal complex variety it is unibranch; in other words there are arbitrarily small neighborhoods "U" of "x" such that the set of non-singular points of "U" is connected.
The property of being normal is stronger than the property of being unibranch: for example, a cusp of a plane curve is unibranch but not normal.
Zariski's main theorem: power series form.
A formal power series version of Zariski's main theorem says that if "x" is a normal point of a variety then it is analytically normal; in other words the completion of the local ring at "x" is a normal integral domain. | [
{
"math_id": 0,
"text": "\\Gamma \\subset V \\times W"
},
{
"math_id": 1,
"text": "p_1"
},
{
"math_id": 2,
"text": "U \\subset V"
},
{
"math_id": 3,
"text": "p_1^{-1}(U)"
},
{
"math_id": 4,
"text": "p_2 \\circ p_1^{-1}"
},
{
"math_id": 5,
"text": "f: X \\to Y"
},
{
"math_id": 6,
"text": "X \\to Z \\to Y"
}
]
| https://en.wikipedia.org/wiki?curid=13969950 |
1397197 | Closed category | Category whose hom objects correspond (di-)naturally to objects in itselfIn category theory, a branch of mathematics, a closed category is a special kind of category.
In a locally small category, the "external hom" ("x", "y") maps a pair of objects to a set of morphisms. So in the category of sets, this is an object of the category itself. In the same vein, in a closed category, the (object of) morphisms from one object to another can be seen as lying inside the category. This is the "internal hom" ["x", "y"].
Every closed category has a forgetful functor to the category of sets, which in particular takes the internal hom to the external hom.
Definition.
A closed category can be defined as a category formula_0 with a so-called internal Hom functor
formula_1
with left Yoneda arrows
formula_2
natural in formula_3 and formula_4 and dinatural in formula_5, and a fixed object formula_6 of formula_0 with a natural isomorphism
formula_7
and a dinatural transformation
formula_8,
all satisfying certain coherence conditions.
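In the category of sets, the internal hom is the ordinary function space, and the structure maps above have a concrete description. The following minimal Python sketch (the names and the choice of a one-point set for formula_6 are illustrative assumptions) spells them out:
<syntaxhighlight lang="python">
# In the category of sets, the internal hom [A, B] is the set of functions
# A -> B, modeled here by Python callables.

def L(g):
    """L : [B C] -> [[A B] [A C]], sending g to precomposition f |-> g . f."""
    return lambda f: (lambda a: g(f(a)))

I = ("*",)  # a one-point set plays the role of the unit object I

def i(a):
    """i_A : A -> [I A], the natural isomorphism sending a to a constant map."""
    return lambda _star: a

def j(_star):
    """j_A : I -> [A A], picking out the identity map of A."""
    return lambda a: a

double = lambda b: 2 * b    # g : B -> C
succ = lambda a: a + 1      # f : A -> B
print(L(double)(succ)(10))  # (g . f)(10) = 22
print(i(5)("*"))            # 5, recovering the element from [I A]
print(j("*")(7))            # 7, the identity
</syntaxhighlight> | [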
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "\\left[-\\ -\\right] : \\mathcal{C}^{op} \\times \\mathcal{C} \\to \\mathcal{C}"
},
{
"math_id": 2,
"text": "L : \\left[B\\ C\\right] \\to \\left[\\left[A\\ B\\right] \\left[A\\ C\\right]\\right]"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "I"
},
{
"math_id": 7,
"text": "i_A : A \\cong \\left[I\\ A\\right]"
},
{
"math_id": 8,
"text": "j_A : I \\to \\left[A\\ A\\right]"
}
]
| https://en.wikipedia.org/wiki?curid=1397197 |
13972359 | Separating set | In mathematics, a set formula_0 of functions with domain formula_1 is called a separating set for formula_1 and is said to separate the points of formula_1 (or just to separate points) if for any two distinct elements formula_2 and formula_3 of formula_4 there exists a function formula_5 such that formula_6
Separating sets can be used to formulate a version of the Stone–Weierstrass theorem for real-valued functions on a compact Hausdorff space formula_7 with the topology of uniform convergence. It states that any subalgebra of this space of functions is dense if and only if it separates points. This is the version of the theorem originally proved by Marshall H. Stone.
Examples.
The singleton set consisting of the identity function on formula_8 separates the points of formula_9
If formula_10 is a T1 normal topological space, then Urysohn's lemma states that the set formula_11 of continuous functions on formula_10 with values in formula_8 or formula_13 separates the points of formula_12
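For a finite domain the defining condition can be checked directly by testing every pair of points. A minimal Python sketch (the grid and the choice of functions are illustrative):
<syntaxhighlight lang="python">
from itertools import combinations

def separates_points(funcs, domain):
    """True if every pair of distinct points is told apart by some function."""
    return all(any(f(x) != f(y) for f in funcs)
               for x, y in combinations(domain, 2))

grid = [(i, j) for i in range(3) for j in range(3)]
projections = [lambda p: p[0], lambda p: p[1]]
print(separates_points(projections, grid))              # True
print(separates_points([lambda p: p[0] + p[1]], grid))  # False: (0,1) vs (1,0)
</syntaxhighlight>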
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "D,"
},
{
"math_id": 5,
"text": "f \\in S"
},
{
"math_id": 6,
"text": "f(x) \\neq f(y)."
},
{
"math_id": 7,
"text": "X,"
},
{
"math_id": 8,
"text": "\\Reals"
},
{
"math_id": 9,
"text": "\\Reals."
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "C(X)"
},
{
"math_id": 12,
"text": "X."
},
{
"math_id": 13,
"text": "\\Complex,"
}
]
| https://en.wikipedia.org/wiki?curid=13972359 |
13973757 | Programmable metallization cell | Non-volatile memory technology
The programmable metallization cell, or PMC, is a non-volatile computer memory developed at Arizona State University. PMC is a technology developed to replace the widely used flash memory, providing a combination of longer lifetimes, lower power, and better memory density. Infineon Technologies, which licensed the technology in 2004, refers to it as conductive-bridging RAM, or CBRAM. CBRAM became a registered trademark of Adesto Technologies in 2011. NEC has a variant called "Nanobridge" and Sony calls their version "electrolytic memory".
Description.
PMC is a two-terminal resistive memory technology developed at Arizona State University. PMC is an electrochemical metallization memory that relies on redox reactions to form and dissolve a conductive filament. The state of the device is determined by the resistance across the two terminals. The existence of a filament between the terminals produces a low resistance state (LRS) while the absence of a filament results in a high resistance state (HRS). A PMC device is made of two solid metal electrodes, one relatively inert (e.g., tungsten or nickel), the other electrochemically active (e.g., silver or copper), with a thin film of solid electrolyte between them.
Device operation.
The resistance state of a PMC is controlled by the formation (programming) or dissolution (erasing) of a metallic conductive filament between the two terminals of the cell. A formed filament is a fractal, tree-like structure.
Filament formation.
PMC rely on the formation of a metallic conductive filament to transition to a low resistance state (LRS). The filament is created by applying a positive voltage bias ("V") to the anode contact (active metal) while grounding the cathode contact (inert metal). The positive bias oxidizes the active metal (M):
M → M+ + e−
The applied bias generates an electric field between the two metal contacts. The ionized (oxidized) metal ions migrate along the electric field toward the cathode contact. At the cathode contact, the metal ions are reduced:
M+ + e− → M
As the active metal deposits on the cathode, the electric field increases between the anode and the deposit. The evolution of the local electric field ("E") between the growing filament and the anode can be simplistically related to the following:
formula_0
where "d" is the distance between the anode and the top of the growing filament. The filament will grow to connect to the anode within a few nanoseconds. Metal ions will continue to be reduced at the filament until the voltage is removed, broadening the conductive filament and decreasing the resistance of the connection over time. Once the voltage is removed, the conductive filament will remain, leaving the device in a LRS.
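The runaway nature of this growth can be illustrated numerically: since "E" scales as 1/"d", the field across the remaining gap rises sharply as the filament tip advances. The values below are purely illustrative, not a physical device model:
<syntaxhighlight lang="python">
V = 0.5   # applied bias in volts (illustrative)
d = 50.0  # remaining gap between filament tip and anode, in nm (illustrative)
while d > 1.0:
    E = V / (d * 1e-9)  # field magnitude in V/m across the remaining gap
    print(f"gap {d:5.1f} nm -> field {E:.2e} V/m")
    d /= 2              # schematic step: the filament tip advances
</syntaxhighlight>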
The conductive filament may not be continuous, but a chain of electrodeposit islands or nanocrystals. This is likely to prevail at low programming currents (less than 1 μA) whereas higher programming current will lead to a mostly metallic conductor.
Filament dissolution.
A PMC can be "erased" into a high resistance state (HRS) by applying a negative voltage bias to the anode. The redox process used to create the conductive filament is reversed, and the metal ions migrate along the reversed electric field to be reduced at the anode contact. With the filament removed, the PMC is analogous to a parallel-plate capacitor with a high resistance of several MΩ to GΩ between the contacts.
Device read.
An individual PMC can be read by applying a small voltage across the cell. As long as the applied read voltage is less than both the programming and erasing voltage threshold, the direction of the bias is not significant.
Technology comparison.
CBRAM vs. metal-oxide ReRAM.
CBRAM differs from metal-oxide ReRAM in that for CBRAM metal ions dissolve readily in the material between the two electrodes, while for metal-oxides, the material between the electrodes requires a high electric field causing local damage akin to dielectric breakdown, producing a trail of conducting defects (sometimes called a "filament"). Hence for CBRAM, one electrode must provide the dissolving ions, while for metal-oxide RRAM, a one-time "forming" step is required to generate the local damage.
CBRAM vs. NAND Flash.
The primary form of solid-state non-volatile memory in use is flash memory, which is finding use in most roles formerly filled by hard drives. Flash, however, has problems that led to many efforts to introduce products to replace it.
Flash is based on the floating gate concept, essentially a modified transistor. Conventional flash transistors have three connections, the source, drain and gate. The gate is the essential component of the transistor, controlling the resistance between the source and drain, and thereby acting as a switch. In the floating gate transistor, the gate is attached to a layer that traps electrons, leaving it switched on (or off) for extended periods of time. The floating gate can be re-written by passing a large current through the emitter-collector circuit.
It is this large current that is flash's primary drawback, and for a number of reasons. For one, each application of the current physically degrades the cell, such that the cell will eventually be unwritable. Write cycles on the order of 10^5 to 10^6 are typical, limiting flash applications to roles where constant writing is not common. The current also requires an external circuit to generate, using a system known as a charge pump. The pump requires a fairly lengthy charging process so that writing is much slower than reading; the pump also requires much more power. Flash is thus an "asymmetrical" system, much more so than conventional RAM or hard drives.
Another problem with flash is that the floating gate suffers leakage that slowly releases the charge. This is countered through the use of powerful surrounding insulators, but these require a certain physical size in order to be useful and also require a specific physical layout, which is different from the more typical CMOS layouts, which required several new fabrication techniques to be introduced. As flash scales rapidly downward in size the charge leakage increasingly becomes a problem, which led to predictions of its demise. However, massive market investment drove development of flash at rates in excess of Moore's Law, and semiconductor fabrication plants using 30 nm processes were brought online in late 2007.
In contrast to flash, PMC writes with relatively low power and at high speed. The speed is inversely related to the power applied (to a point, there are mechanical limits), so the performance can be tuned.
PMC, in theory, can scale to sizes much smaller than flash, theoretically as small as a few ion widths. Copper ions are about 0.75 angstroms, so line widths on the order of nanometers seem possible. PMC was promoted as simpler in layout than flash.
History.
PMC technology was developed by Michael Kozicki, professor of electrical engineering at Arizona State University in the 1990s.
Early experimental PMC systems were based on silver-doped germanium selenide glasses. Work turned to silver-doped germanium sulfide electrolytes and then to copper-doped germanium sulfide electrolytes. There has been renewed interest in silver-doped germanium selenide devices due to the high resistance of their high resistance state. Copper-doped silicon dioxide glass PMC would be compatible with the CMOS fabrication process.
In 1996, Axon Technologies was founded to commercialize the PMC technology.
Micron Technology announced work with PMC in 2002. Infineon followed in 2004. PMC technology was licensed to Adesto Technologies by 2007.
Infineon had spun off its memory business to its Qimonda subsidiary, which in turn sold it to Adesto Technologies. A DARPA grant was awarded in 2010 for further research.
In 2011, Adesto Technologies allied with the French company Altis Semiconductor for development and manufacturing of CBRAM. In 2013, Adesto introduced a sample CBRAM product in which a 1 megabit part was promoted to replace EEPROM.
NEC developed the so-called nanobridge technology, using Cu2S or tantalum pentoxide as the dielectric material. Here an applied voltage makes copper (compatible with the copper metallization of the IC) migrate through the Cu2S or Ta2O5, making or breaking shorts between the copper and ruthenium electrodes.
The dominant use of this type of memory is in space applications, since this type of memory is intrinsically radiation-hard.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E = -\\frac{V}{d}"
}
]
| https://en.wikipedia.org/wiki?curid=13973757 |
1397536 | Econometric model | Statistical models used in econometrics
Econometric models are statistical models used in econometrics. An econometric model specifies the statistical relationship that is believed to hold between the various economic quantities pertaining to a particular economic phenomenon. An econometric model can be derived from a deterministic economic model by allowing for uncertainty, or from an economic model which itself is stochastic. However, it is also possible to use econometric models that are not tied to any specific economic theory.
A simple example of an econometric model is one that assumes that monthly spending by consumers is linearly dependent on consumers' income in the previous month. Then the model will consist of the equation
formula_0
where "C""t" is consumer spending in month "t", "Y""t"-1 is income during the previous month, and "et" is an error term measuring the extent to which the model cannot fully explain consumption. Then one objective of the econometrician is to obtain estimates of the parameters "a" and "b"; these estimated parameter values, when used in the model's equation, enable predictions for future values of consumption to be made contingent on the prior month's income.
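A minimal sketch of such an estimation in Python, with simulated data standing in for real observations (the true parameter values and noise levels are illustrative assumptions):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

# Simulate C_t = a + b*Y_{t-1} + e_t, then recover a and b by least squares.
a_true, b_true, T = 50.0, 0.8, 200
Y = rng.normal(1000.0, 100.0, T)               # monthly incomes (hypothetical)
e = rng.normal(0.0, 10.0, T - 1)               # error term
C = a_true + b_true * Y[:-1] + e               # spending in months 1..T-1

X = np.column_stack([np.ones(T - 1), Y[:-1]])  # regressors: intercept, lagged income
a_hat, b_hat = np.linalg.lstsq(X, C, rcond=None)[0]
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}")     # estimates close to 50 and 0.8

# Prediction of next month's consumption contingent on this month's income
print("predicted C:", a_hat + b_hat * Y[-1])
</syntaxhighlight>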
Formal definition.
In econometrics, as in statistics in general, it is presupposed that the quantities being analyzed can be treated as random variables. An econometric model then is a set of joint probability distributions to which the true joint probability distribution of the variables under study is supposed to belong. In the case in which the elements of this set can be indexed by a finite number of real-valued "parameters", the model is called a parametric model; otherwise it is a nonparametric or semiparametric model. A large part of econometrics is the study of methods for selecting models, estimating them, and carrying out inference on them.
The most common econometric models are structural, in that they convey causal and counterfactual information, and are used for policy evaluation. For example, an equation modeling consumption spending based on income could be used to see what consumption would be contingent on any of various hypothetical levels of income, only one of which (depending on the choice of a fiscal policy) will end up actually occurring.
Basic models.
Some of the common econometric models are linear regression, generalized linear models such as the probit and logit models, and time-series models such as ARIMA and vector autoregression.
Use in policy-making.
Comprehensive models of macroeconomic relationships are used by central banks and governments to evaluate and guide economic policy. One famous econometric model of this nature is the Federal Reserve Bank econometric model.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_t = a + bY_{t-1} + e_t,"
}
]
| https://en.wikipedia.org/wiki?curid=1397536 |
13976612 | Harris functional | In density functional theory (DFT), the Harris energy functional is a non-self-consistent approximation to the Kohn–Sham density functional theory. It gives the energy of a combined system as a function of the electronic densities of the isolated parts. The energy of the Harris functional varies much less than the energy of the Kohn–Sham functional as the density moves away from the converged density.
Background.
Kohn–Sham equations are the one-electron equations that must be solved in a self-consistent fashion in order to find the ground state density of a system of interacting electrons:
formula_0
The density, formula_1 is given by that of the Slater determinant formed by the spin-orbitals of the occupied states:
formula_2
where the coefficients formula_3 are the occupation numbers given by the Fermi–Dirac distribution at the temperature of the system with the restriction formula_4, where formula_5 is the total number of electrons. In the equation above, formula_6 is the Hartree potential and formula_7 is the exchange–correlation potential, which are expressed in terms of the electronic density. Formally, one must solve these equations self-consistently, for which the usual strategy is to pick an initial guess for the density, formula_8, substitute in the Kohn–Sham equation, extract a new density formula_9 and iterate the process until convergence is obtained. When the final self-consistent density formula_10 is reached, the energy of the system is expressed as:
formula_11.
Definition.
Assume that we have an approximate electron density formula_12, which is different from the exact electron density formula_13. We construct exchange-correlation potential formula_14 and the Hartree potential formula_15 based on the approximate electron density formula_16. Kohn–Sham equations are then solved with the XC and Hartree potentials and eigenvalues are then obtained; that is, we perform one single iteration of the self-consistency calculation. The sum of eigenvalues is often called the band structure energy:
formula_17
where formula_18 loops over all occupied Kohn–Sham orbitals. The Harris energy functional is defined as
formula_19
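Assembling the Harris energy from the outputs of a single (non-self-consistent) diagonalization is straightforward. The sketch below assumes the eigenvalues, the input density, and the potentials built from it are already available as arrays on a real-space grid; it simply evaluates the defining expression:
<syntaxhighlight lang="python">
import numpy as np

def harris_energy(eigenvalues, n0, v_xc0, v_h0, e_xc0, dr):
    """Harris energy from one Kohn-Sham diagonalization with input density n0.

    eigenvalues: occupied eigenvalue array; n0, v_xc0, v_h0: density and
    potentials on a uniform grid with spacing dr; e_xc0: E_xc[n0]."""
    e_band = np.sum(eigenvalues)               # band structure energy
    xc_double_count = np.sum(v_xc0 * n0) * dr  # integral of v_xc[n0] * n0
    h_double_count = 0.5 * np.sum(v_h0 * n0) * dr
    return e_band - xc_double_count - h_double_count + e_xc0
</syntaxhighlight>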
Comments.
It was discovered by Harris that the difference between the Harris energy formula_20 and the exact total energy is second order in the error of the approximate electron density, i.e., formula_21. Therefore, for many systems the accuracy of the Harris energy functional may be sufficient. The Harris functional was originally developed for such calculations rather than self-consistent convergence, although it can be applied in a self-consistent manner in which the density is changed. Many density-functional tight-binding methods, such as CP2K, DFTB+, Fireball, and Hotbit, are built on the Harris energy functional. In these methods, one often does not perform self-consistent Kohn–Sham DFT calculations and the total energy is estimated using the Harris energy functional, although a version of the Harris functional in which one does perform self-consistency calculations has been used. These codes are often much faster than conventional Kohn–Sham DFT codes that solve Kohn–Sham DFT in a self-consistent manner.
While the Kohn–Sham DFT energy is a variational functional (never lower than the ground state energy), the Harris DFT energy was originally believed to be anti-variational (never higher than the ground state energy). This was, however, conclusively demonstrated to be incorrect.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\left( \\frac{-\\hbar^2}{2m}\\nabla^2+v_{\\rm H}[n]+v_{\\rm xc}[n] +v_{\\rm ext}(r)\\right)\\phi_j(r)=\\epsilon_j \\phi_j(r). "
},
{
"math_id": 1,
"text": " n, "
},
{
"math_id": 2,
"text": " n(r)=\\sum_{j} f_j \\vert \\phi_j (r) \\vert ^2,"
},
{
"math_id": 3,
"text": " f_j "
},
{
"math_id": 4,
"text": " \\sum_j f_j =N "
},
{
"math_id": 5,
"text": " N "
},
{
"math_id": 6,
"text": " v_{\\rm H}[n] "
},
{
"math_id": 7,
"text": " v_{\\rm xc}[n] "
},
{
"math_id": 8,
"text": " n_0(r) "
},
{
"math_id": 9,
"text": " n_1(r) "
},
{
"math_id": 10,
"text": " n(r) "
},
{
"math_id": 11,
"text": " E[n] = \\sum_{j \\in \\text{occupied}} \\epsilon_j -\\tfrac{1}{2}\\int v_{\\rm H}[n]n(r) \\, \\mathrm{d}r - \\int v_{\\rm xc}[n]n(r) \\, \\mathrm{d}r + E_{\\rm xc}[n] "
},
{
"math_id": 12,
"text": "n_0( r)"
},
{
"math_id": 13,
"text": " n( r) "
},
{
"math_id": 14,
"text": " v_{\\rm xc}(r) "
},
{
"math_id": 15,
"text": " v_{\\rm H}(r) "
},
{
"math_id": 16,
"text": "n_0(r)"
},
{
"math_id": 17,
"text": " E_{\\rm band}=\\sum_i \\epsilon_i, "
},
{
"math_id": 18,
"text": " i "
},
{
"math_id": 19,
"text": " E_{\\rm Harris}[n_0] = \\sum_i \\epsilon_i - \\int \\mathrm{d}r^3 v_{\\rm xc}[n_0](r) n_0(r) - \\tfrac{1}{2} \\int \\mathrm{d}r^3 v_{\\rm H}[n_0](r) n_0(r) + E_{\\rm xc}[n_0] "
},
{
"math_id": 20,
"text": " E_{\\rm Harris} "
},
{
"math_id": 21,
"text": " O((\\rho-\\rho_0)^2) "
}
]
| https://en.wikipedia.org/wiki?curid=13976612 |
13979549 | Ε-quadratic form | Mathematical concept
In mathematics, specifically the theory of quadratic forms, an "ε"-quadratic form is a generalization of quadratic forms to skew-symmetric settings and to *-rings; "ε" = ±1, accordingly for symmetric or skew-symmetric. They are also called formula_0-quadratic forms, particularly in the context of surgery theory.
There is the related notion of "ε"-symmetric forms, which generalizes symmetric forms, skew-symmetric forms (= symplectic forms), Hermitian forms, and skew-Hermitian forms. More briefly, one may refer to quadratic, skew-quadratic, symmetric, and skew-symmetric forms, where "skew" means (−) and the * (involution) is implied.
The theory is 2-local: away from 2, "ε"-quadratic forms are equivalent to "ε"-symmetric forms: half the symmetrization map (below) gives an explicit isomorphism.
Definition.
"ε"-symmetric forms and "ε"-quadratic forms are defined as follows.
Given a module "M" over a *-ring "R", let "B"("M") be the space of bilinear forms on "M", and let "T" : "B"("M") → "B"("M") be the "conjugate transpose" involution "B"("u", "v") ↦ "B"("v", "u")*. Since multiplication by −1 is also an involution and commutes with linear maps, −"T" is also an involution. Thus we can write "ε" = ±1, and "εT" is an involution, either "T" or −"T" ("ε" can be more general than ±1; see below). Define the "ε"-symmetric forms as the invariants of "εT", and the "ε"-quadratic forms as the coinvariants.
As an exact sequence,
formula_1
As kernel and cokernel,
formula_2
formula_3
The notation "Q"^"ε"("M"), "Q"_"ε"("M") follows the standard notation "M"^"G", "M"_"G" for the invariants and coinvariants for a group action, here of the order 2 group (an involution).
Composition of the inclusion and quotient maps (but not 1 − "εT") as formula_4 yields a map "Q"^"ε"("M") → "Q"_"ε"("M"): every "ε"-symmetric form determines an "ε"-quadratic form.
Symmetrization.
Conversely, one can define a reverse homomorphism "1 + "εT"": "Q"_"ε"("M") → "Q"^"ε"("M"), called the symmetrization map (since it yields a symmetric form) by taking any lift of a quadratic form and multiplying it by 1 + "εT". This is a symmetric form because (1 − "εT")(1 + "εT") = 1 − ("εT")2 = 1 − "T"2 = 0, so it is in the kernel. More precisely, formula_5. The map is well-defined by the same equation: choosing a different lift corresponds to adding a multiple of (1 − "εT"), but this vanishes after multiplying by 1 + "εT". Thus every "ε"-quadratic form determines an "ε"-symmetric form.
Composing these two maps either way: "Q"^"ε"("M") → "Q"_"ε"("M") → "Q"^"ε"("M") or "Q"_"ε"("M") → "Q"^"ε"("M") → "Q"_"ε"("M") yields multiplication by 2, and thus these maps are bijective if 2 is invertible in "R", with the inverse given by multiplication with 1/2.
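The composite maps can be checked concretely with 2×2 integer matrices, taking * trivial and "T" the transpose (a small illustration, not tied to any particular ring beyond the integers):
<syntaxhighlight lang="python">
import numpy as np

eps = 1                 # symmetric case; T is the transpose
B = np.array([[1, 4],
              [2, 3]])  # a bilinear form, i.e. a lift of a quadratic form

sym = B + eps * B.T     # the symmetrization map 1 + eps*T
print(np.allclose(sym, eps * sym.T))  # True: the result is eps-symmetric

# Composing back (reading off the quadratic form of sym) doubles the original:
# x^T (B + B^T) x = 2 x^T B x for every vector x.
x = np.array([5, -7])
print(x @ sym @ x, 2 * (x @ B @ x))   # equal values
</syntaxhighlight>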
An "ε"-quadratic form "ψ" ∈ "Q""ε"("M") is called non-degenerate if the associated "ε"-symmetric form (1 + "εT")("ψ") is non-degenerate.
Generalization from *.
If the * is trivial, then "ε" = ±1, and "away from 2" means that 2 is invertible: 1/2 ∈ "R".
More generally, one can take for "ε" ∈ "R" any element such that "ε"*"ε" = 1. "ε" = ±1 always satisfy this, but so does any element of norm 1, such as complex numbers of unit norm.
Similarly, in the presence of a non-trivial *, "ε"-symmetric forms are equivalent to "ε"-quadratic forms if there is an element "λ" ∈ "R" such that "λ"* + "λ" = 1. If * is trivial, this is equivalent to 2"λ" = 1 or "λ" = 1/2, while if * is non-trivial there can be multiple possible "λ"; for example, over the complex numbers any number with real part 1/2 is such a "λ".
For instance, in the ring formula_6 (the integral lattice for the quadratic form 2"x"2 − 2"x" + 1), with complex conjugation, formula_7 are two such elements, though 1/2 ∉ "R".
Intuition.
In terms of matrices (we take "V" to be 2-dimensional), if * is trivial:
matrices formula_8 correspond to bilinear forms;
the subspace of symmetric matrices formula_9 corresponds to symmetric forms;
the bilinear form formula_8 yields the quadratic form
formula_11,
which is a quotient map with kernel formula_10;
a quadratic form formula_12 symmetrizes to formula_13, for example by lifting to formula_14 and then adding the matrix to its transpose. Mapping back to quadratic forms yields double the original: formula_15.
If formula_16 is complex conjugation, then the subspace of symmetric matrices consists of the Hermitian matrices formula_17, and the subspace of skew-symmetric matrices consists of the skew-Hermitian matrices formula_18.
Refinements.
An intuitive way to understand an "ε"-quadratic form is to think of it as a quadratic refinement of its associated "ε"-symmetric form.
For instance, in defining a Clifford algebra over a general field or ring, one quotients the tensor algebra by relations coming from the symmetric form "and" the quadratic form: "vw" + "wv" = 2"B"("v", "w") and formula_19. If 2 is invertible, this second relation follows from the first (as the quadratic form can be recovered from the associated bilinear form), but at 2 this additional refinement is necessary.
Examples.
An easy example for an "ε"-quadratic form is the standard hyperbolic "ε"-quadratic form formula_20. (Here, "R"* := Hom"R"("R", "R") denotes the dual of the "R"-module "R".) It is given by the bilinear form formula_21. The standard hyperbolic "ε"-quadratic form is needed for the definition of "L"-theory.
For the field of two elements "R" = F2 there is no difference between (+1)-quadratic and (−1)-quadratic forms, which are just called quadratic forms. The Arf invariant of a nonsingular quadratic form over F2 is an F2-valued invariant with important applications in both algebra and topology, and plays a role similar to that played by the discriminant of a quadratic form in characteristic not equal to two.
Manifolds.
The free part of the middle homology group (with integer coefficients) of an oriented even-dimensional manifold has an "ε"-symmetric form, via Poincaré duality, the intersection form. In the case of singly even dimension 4"k" + 2, this is skew-symmetric, while for doubly even dimension 4"k", this is symmetric. Geometrically this corresponds to intersection, where two "n"/2-dimensional submanifolds in an "n"-dimensional manifold generically intersect in a 0-dimensional submanifold (a set of points), by adding codimension. For singly even dimension the order switches sign, while for doubly even dimension order does not change sign, hence the "ε"-symmetry. The simplest cases are for the product of spheres, where the product "S"2"k" × "S"2"k" and "S"2"k"+1 × "S"2"k"+1 respectively give the symmetric form formula_22 and skew-symmetric form formula_23 In dimension two, this yields a torus, and taking the connected sum of "g" tori yields the surface of genus "g", whose middle homology has the standard hyperbolic form.
With additional structure, this "ε"-symmetric form can be refined to an "ε"-quadratic form. For doubly even dimension this is integer valued, while for singly even dimension this is only defined up to parity, and takes values in Z/2. For example, given a framed manifold, one can produce such a refinement. For singly even dimension, the Arf invariant of this skew-quadratic form is the Kervaire invariant.
Given an oriented surface Σ embedded in R3, the middle homology group "H"1(Σ) carries not only a skew-symmetric form (via intersection), but also a skew-quadratic form, which can be seen as a quadratic refinement, via self-linking. The skew-symmetric form is an invariant of the surface Σ, whereas the skew-quadratic form is an invariant of the embedding Σ ⊂ R3, e.g. for the Seifert surface of a knot. The Arf invariant of the skew-quadratic form is a framed cobordism invariant generating the first stable homotopy group formula_24.
For the standard embedded torus, the skew-symmetric form is given by formula_25 (with respect to the standard symplectic basis), and the skew-quadratic refinement is given by "xy" with respect to this basis: "Q"(1, 0) = "Q"(0, 1) = 0: the basis curves don't self-link; and "Q"(1, 1) = 1: a (1, 1) self-links, as in the Hopf fibration. (This form has Arf invariant 0, and thus this embedded torus has Kervaire invariant 0.)
Applications.
A key application is in algebraic surgery theory, where even L-groups are defined as Witt groups of "ε"-quadratic forms, by C. T. C. Wall. | [
{
"math_id": 0,
"text": "(-)^n"
},
{
"math_id": 1,
"text": "0 \\to Q^\\varepsilon(M) \\to B(M) \\stackrel{1-\\varepsilon T}{\\longrightarrow} B(M) \\to Q_\\varepsilon(M) \\to 0 "
},
{
"math_id": 2,
"text": "Q^\\varepsilon(M) := \\mbox{ker}\\,(1-\\varepsilon T)"
},
{
"math_id": 3,
"text": "Q_\\varepsilon(M) := \\mbox{coker}\\,(1-\\varepsilon T)"
},
{
"math_id": 4,
"text": "Q^\\varepsilon(M) \\to B(M) \\to Q_\\varepsilon(M)"
},
{
"math_id": 5,
"text": "(1 + \\varepsilon T)B(M) < Q^\\varepsilon(M)"
},
{
"math_id": 6,
"text": "R=\\mathbf{Z}\\left[\\textstyle{\\frac{1+i}{2}}\\right]"
},
{
"math_id": 7,
"text": "\\lambda=\\textstyle{\\frac{1\\pm i}{2}}"
},
{
"math_id": 8,
"text": "\\begin{pmatrix}a & b\\\\c & d\\end{pmatrix}"
},
{
"math_id": 9,
"text": "\\begin{pmatrix}a & b\\\\b & c\\end{pmatrix}"
},
{
"math_id": 10,
"text": "\\begin{pmatrix}0 & b\\\\-b & 0\\end{pmatrix}"
},
{
"math_id": 11,
"text": "ax^2 + bxy+cyx + dy^2 = ax^2 + (b+c)xy + dy^2\\, "
},
{
"math_id": 12,
"text": "ex^2 + fxy + gy^2"
},
{
"math_id": 13,
"text": "\\begin{pmatrix}2e & f\\\\f & 2g\\end{pmatrix}"
},
{
"math_id": 14,
"text": "\\begin{pmatrix}e & f\\\\0 & g\\end{pmatrix}"
},
{
"math_id": 15,
"text": "2ex^2 + 2fxy + 2gy^2 = 2(ex^2 + fxy + gy^2)"
},
{
"math_id": 16,
"text": "\\bar{\\cdot } "
},
{
"math_id": 17,
"text": "\\begin{pmatrix}a & z\\\\ \\bar z & c\\end{pmatrix}"
},
{
"math_id": 18,
"text": "\\begin{pmatrix}bi & z\\\\ -\\bar z & di\\end{pmatrix}"
},
{
"math_id": 19,
"text": "v^2=Q(v)"
},
{
"math_id": 20,
"text": "H_\\varepsilon(R) \\in Q_\\varepsilon(R \\oplus R^*)"
},
{
"math_id": 21,
"text": "((v_1,f_1),(v_2,f_2)) \\mapsto f_2(v_1)"
},
{
"math_id": 22,
"text": "\\left(\\begin{smallmatrix} 0 & 1\\\\ 1 & 0\\end{smallmatrix}\\right)"
},
{
"math_id": 23,
"text": "\\left(\\begin{smallmatrix} 0 & 1\\\\ -1 & 0\\end{smallmatrix}\\right)."
},
{
"math_id": 24,
"text": "\\pi^s_1"
},
{
"math_id": 25,
"text": "\\left(\\begin{smallmatrix}0 & 1\\\\-1 & 0\\end{smallmatrix}\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=13979549 |
1398487 | Super-resolution imaging | Any technique to improve resolution of an imaging system beyond conventional limits
Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced.
In some radar and sonar imaging applications and in medical imaging (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g. SAMV) are employed to achieve SR beyond the standard periodogram algorithm.
Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy.
Basic concepts.
Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles. The technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as super-resolution exploit these principles to the fullest while always staying within the bounds imposed by the laws of physics and information theory.
Techniques.
Optical or diffractive super-resolution.
Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis.
Multiplexing spatial-frequency bands.
An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target. The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination (figure to left).
Multiple parameter use within traditional diffraction limit.
If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute the target structure with extended resolution.
Probing near-field electromagnetic disturbance.
The usual discussion of super-resolution involves conventional imagery of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source, which has superior resolution properties; see also evanescent waves and the development of the new superlens.
Geometrical or image-processing super-resolution.
Multi-exposure image noise reduction.
When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See example on the right.
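A toy numpy sketch of this effect (the signal, noise level, and frame count are arbitrary choices of our own):
```python
# Averaging many noisy exposures of the same scene reduces noise roughly as
# the square root of the number of frames.
import numpy as np

rng = np.random.default_rng(0)
scene = np.sin(np.linspace(0, 4 * np.pi, 256))         # stand-in 1-D "image"
frames = scene + rng.normal(0, 0.5, size=(100, 256))   # 100 noisy exposures
print(np.std(frames[0] - scene))                       # ~0.5 for one frame
print(np.std(frames.mean(axis=0) - scene))             # ~0.05 after averaging
```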
Single-frame deblurring.
Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it.
Sub-pixel image localization.
The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, far better than the pixel width of the detecting apparatus and the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.
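The centroid computation itself is elementary; the following sketch (a hypothetical one-dimensional detector with a Gaussian spot) illustrates it:
```python
# Sub-pixel localization: the center of gravity of the light spread over
# adjacent pixels locates a single source far more precisely than one pixel.
import numpy as np

pixels = np.arange(10)
true_pos = 4.37                                         # source between pixels
spot = np.exp(-0.5 * ((pixels - true_pos) / 1.2) ** 2)  # Gaussian blur spot
estimate = np.sum(pixels * spot) / np.sum(spot)         # center of gravity
print(estimate)                                         # ~4.37, far below pixel pitch
```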
Bayesian induction beyond traditional diffraction limit.
Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object. The classical example is Toraldo di Francia's proposition of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?"
The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function, and that we can exactly know the function values in some interval. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging. More recently, a fast single image super-resolution algorithm based on a closed-form solution to "formula_0" problems has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly.
Aliasing.
Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed.
In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion), the presence of aliasing is still a necessary condition for SR reconstruction.
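A minimal one-dimensional sketch of shift-add fusion (an idealization of our own: noiseless frames with exactly known quarter-pixel shifts):
```python
# Four aliased low-resolution samplings of a signal, offset by known sub-pixel
# shifts, are interleaved onto a finer grid to recover the full resolution.
import numpy as np

hi = np.sin(2 * np.pi * 7 * np.linspace(0, 1, 400, endpoint=False))  # truth
lows = [hi[s::4] for s in range(4)]      # 4 under-sampled, shifted frames
fused = np.empty_like(hi)
for s, low in enumerate(lows):
    fused[s::4] = low                    # place each frame at its known offset
print(np.allclose(fused, hi))            # True: full resolution recovered
```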
Technical implementations.
There are both single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low-resolution images of the same scene. It creates an improved-resolution image by fusing information from all the low-resolution images, and the created higher-resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without producing blur. These methods use other parts of the low-resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images, but researchers have found methods to adapt them to color camera images. Recently, the use of super-resolution for 3D data has also been shown.
Research.
There is promising research on using deep convolutional networks to perform super-resolution. In particular, work has demonstrated the transformation of a 20x microscope image of pollen grains into a 1500x scanning electron microscope image using such networks. While this technique can increase the information content of an image, there is no guarantee that the upscaled features exist in the original image, and deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. These methods can hallucinate image features, which can make them unsafe for medical use.
| [
{
"math_id": 0,
"text": "\\ell_2-\\ell_2"
}
]
| https://en.wikipedia.org/wiki?curid=1398487 |
13985992 | Jacobi–Anger expansion | Expansion of exponentials of trigonometric functions in the basis of their harmonics
In mathematics, the Jacobi–Anger expansion (or Jacobi–Anger identity) is an expansion of exponentials of trigonometric functions in the basis of their harmonics. It is useful in physics (for example, to convert between plane waves and cylindrical waves), and in signal processing (to describe FM signals). This identity is named after the 19th-century mathematicians Carl Jacobi and Carl Theodor Anger.
The most general identity is given by:
formula_0
where formula_1 is the formula_2-th Bessel function of the first kind and formula_3 is the imaginary unit, formula_4
Substituting formula_5 by formula_6, we also get:
formula_7
Using the relation formula_8 valid for integer formula_2, the expansion becomes:
formula_9
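The expansion is easy to verify numerically; the following sketch (arbitrary test values and a hand-chosen truncation order) uses SciPy's Bessel functions:
```python
# Truncated Jacobi-Anger sum versus the exact exponential.
import numpy as np
from scipy.special import jv   # Bessel function of the first kind, J_n

z, theta = 2.3, 0.7            # arbitrary test values
N = 30                         # truncation order; terms decay fast for |n| >> |z|

lhs = np.exp(1j * z * np.cos(theta))
rhs = sum((1j ** n) * jv(n, z) * np.exp(1j * n * theta)
          for n in range(-N, N + 1))
print(abs(lhs - rhs))          # ~1e-16: agreement to machine precision
```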
Real-valued expressions.
The following real-valued variations are often useful as well:
formula_10
Similarly useful expressions from the Sung Series:
formula_11
| [
{
"math_id": 0,
"text": "\n e^{i z \\cos \\theta} \\equiv \\sum_{n=-\\infty}^{\\infty} i^n\\, J_n(z)\\, e^{i n \\theta},\n"
},
{
"math_id": 1,
"text": "J_n(z)"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "i^2=-1."
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "\\theta-\\frac{\\pi}{2}"
},
{
"math_id": 7,
"text": "\n e^{i z \\sin \\theta} \\equiv \\sum_{n=-\\infty}^{\\infty} J_n(z)\\, e^{i n \\theta}.\n"
},
{
"math_id": 8,
"text": "J_{-n}(z) = (-1)^n\\, J_{n}(z),"
},
{
"math_id": 9,
"text": "e^{i z \\cos \\theta} \\equiv J_0(z)\\, +\\, 2\\, \\sum_{n=1}^{\\infty}\\, i^n\\, J_n(z)\\, \\cos\\, (n \\theta)."
},
{
"math_id": 10,
"text": "\n\\begin{align}\n \\cos(z \\cos \\theta) &\\equiv J_0(z)+2 \\sum_{n=1}^{\\infty}(-1)^n J_{2n}(z) \\cos(2n \\theta),\n \\\\\n \\sin(z \\cos \\theta) &\\equiv -2 \\sum_{n=1}^{\\infty}(-1)^n J_{2n-1}(z) \\cos\\left[\\left(2n-1\\right) \\theta\\right],\n \\\\\n \\cos(z \\sin \\theta) &\\equiv J_0(z)+2 \\sum_{n=1}^{\\infty} J_{2n}(z) \\cos(2n \\theta),\n \\\\\n \\sin(z \\sin \\theta) &\\equiv 2 \\sum_{ n=1 }^{\\infty} J_{2n-1}(z) \\sin\\left[\\left(2n-1\\right) \\theta\\right].\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\n\\begin{align}\n \\sum_{\\nu=-\\infty}^\\infty J_\\nu(x) &= 1,\n \\\\\n \\sum_{\\nu=-\\infty}^\\infty J_{2 \\nu}(x) &= 1,\n \\\\\n \\sum_{\\nu=-\\infty}^\\infty J_{3 \\nu}(x) &= \\frac{1}{3} \\left[1+2\\cos{\\frac{x\\sqrt{3}}{2}}\\right],\n \\\\\n \\sum_{\\nu=-\\infty}^\\infty J_{4 \\nu}(x) &= \\cos^2\\left(\\frac{x}{2}\\right).\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=13985992 |
13995373 | Post canonical system | A Post canonical system, also known as a Post production system, as created by Emil Post, is a string-manipulation system that starts with finitely-many strings and repeatedly transforms them by applying a finite set j of specified rules of a certain form, thus generating a formal language. Today they are mainly of historical relevance because every Post canonical system can be reduced to a string rewriting system (semi-Thue system), which is a simpler formulation. Both formalisms are Turing complete.
Definition.
A Post canonical system is a triplet ("A", "I", "R"), where
"A" is a finite alphabet, whose finite strings are called "words";
"I" is a finite set of "initial words"; and
"R" is a finite set of "production rules", each of the general form:
formula_0
where each g and h is a specified fixed word, and each "$" and "$' " is a variable standing for an arbitrary word. The strings before and after the arrow in a production rule are called the rule's "antecedents" and "consequent", respectively. It is required that each "$' " in the consequent be one of the "$"s in the antecedents of that rule, and that each antecedent and consequent contain at least one variable.
In many contexts, each production rule has only one antecedent, thus taking the simpler form
formula_1
The formal language generated by a Post canonical system is the set whose elements are the initial words together with all words obtainable from them by repeated application of the production rules. Such sets are recursively enumerable languages and every recursively enumerable language is the restriction of some such set to a sub-alphabet of A.
Example (well-formed bracket expressions).
Initial word: []
Production rules:
(1) "$" → ["$"]
(2) "$" → "$$"
(3) "$"1"$"2 → "$"1[]"$"2
Derivation of a few words in the language of well-formed bracket expressions:
[] initial word
[][] by (2)
[[][]] by (1)
[[][]][[][]] by (2)
[[][]][][[][]] by (3)
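The following sketch (our own breadth-first enumeration, assuming the common convention that the variables "$", "$"1, "$"2 may match empty words) generates words of this system:
```python
# Enumerate well-formed bracket expressions produced by the Post system above.
from collections import deque
from itertools import islice

def successors(w):
    yield "[" + w + "]"                  # rule (1): $ -> [$]
    yield w + w                          # rule (2): $ -> $$
    for i in range(len(w) + 1):          # rule (3): $1 $2 -> $1 [] $2
        yield w[:i] + "[]" + w[i:]

def language(initial="[]"):
    seen, queue = {initial}, deque([initial])
    while queue:
        w = queue.popleft()
        yield w
        for nxt in successors(w):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

print(list(islice(language(), 5)))       # ['[]', '[[]]', '[][]', ...]
```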
Normal-form theorem.
A Post canonical system is said to be in "normal form" if it has only one initial word and every production rule is of the simple form
formula_2
Post 1943 proved the remarkable Normal-form Theorem, which applies to the most-general type of Post canonical system:
Given any Post canonical system on an alphabet A, a Post canonical system in "normal form" can be constructed from it, possibly enlarging the alphabet, such that the set of words involving only letters of A that are generated by the normal-form system is exactly the set of words generated by the original system.
Tag systems, which comprise a universal computational model, are notable examples of Post normal-form systems, being also "monogenic". (A canonical system is said to be "monogenic" if, given any string, at most one new string can be produced from it in one step, i.e., the system is deterministic.)
String rewriting systems, type-0 formal grammars.
A string rewriting system is a special type of Post canonical system with a single initial word, and the productions are each of the form
formula_3
That is, each production rule is a simple substitution rule, often written in the form "g" → "h". It has been proved that any Post canonical system is reducible to such a "substitution system", which, as a formal grammar, is also called a "phrase-structure grammar", or a "type-0 grammar" in the Chomsky hierarchy. | [
{
"math_id": 0,
"text": "\n\\overset{\n\\begin{matrix}\n g_{10} & \\$_{11} & g_{11} & \\$_{12} & g_{12} & \\dots & \\$_{1m_1} & g_{1m_1} \\\\\n g_{20} & \\$_{21} & g_{21} & \\$_{22} & g_{22} & \\dots & \\$_{2m_2} & g_{2m_2} \\\\\n \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n g_{k0} & \\$_{k1} & g_{k1} & \\$_{k2} & g_{k2} & \\dots & \\$_{km_k} & g_{km_k} \\\\\n\\end{matrix}\n}\n{\n\\underset{\n\\begin{matrix}\n h_0 & \\$'_1 & h_1 & \\$'_2 & h_2 & \\dots & \\$'_n & h_n \\\\\n\\end{matrix}\n}\n{\n \\downarrow\n}\n}\n"
},
{
"math_id": 1,
"text": "g_0 \\ \\$_1 \\ g_1 \\ \\$_2 \\ g_2 \\ \\dots \\ \\$_m \\ g_m \\ \\rightarrow \\ h_0 \\ \\$'_1 \\ h_1 \\ \\$'_2 \\ h_2 \\ \\dots \\ \\$'_n \\ h_n "
},
{
"math_id": 2,
"text": " g\\$ \\ \\rightarrow \\ \\$h "
},
{
"math_id": 3,
"text": " P_1 g P_2 \\ \\rightarrow \\ P_1 h P_2 "
}
]
| https://en.wikipedia.org/wiki?curid=13995373 |
13998 | Hierarchy | System of elements that are subordinated to each other
A hierarchy (from Greek: ἱεραρχία, "hierarkhia", 'rule of a high priest', from ἱεράρχης, "hierarkhēs", 'president of sacred rites') is an arrangement of items (objects, names, values, categories, etc.) that are represented as being "above", "below", or "at the same level as" one another. Hierarchy is an important concept in a wide variety of fields, such as architecture, philosophy, design, mathematics, computer science, organizational theory, systems theory, systematic biology, and the social sciences (especially political science).
A hierarchy can link entities either directly or indirectly, and either vertically or diagonally. The only direct links in a hierarchy, insofar as they are hierarchical, are to one's immediate superior or to one of one's subordinates, although a system that is largely hierarchical can also incorporate alternative hierarchies. Hierarchical links can extend "vertically" upwards or downwards via multiple links in the same direction, following a path. All parts of the hierarchy that are not linked vertically to one another nevertheless can be "horizontally" linked through a path by traveling up the hierarchy to find a common direct or indirect superior, and then down again. This is akin to two co-workers or colleagues; each reports to a common superior, but they have the same relative amount of authority. Organizational forms exist that are both alternative and complementary to hierarchy. Heterarchy is one such form.
Nomenclature.
Hierarchies have their own special vocabulary. These terms are easiest to understand when a hierarchy is diagrammed (see below).
In an organizational context, a number of special terms (such as "superior", "subordinate", and "peer") are often used in relation to hierarchies.
In a mathematical context (in graph theory), the general terminology used is different.
Most hierarchies use a more specific vocabulary pertaining to their subject, but the idea behind them is the same. For example, with data structures, objects are known as nodes, superiors are called parents and subordinates are called children. In a business setting, a superior is a supervisor/boss and a peer is a colleague.
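A minimal sketch of this vocabulary as a data structure (the class and field names here are illustrative, not a standard API):
```python
# Nodes with parent (superior) and children (subordinates) links.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent            # the node's superior; None for the root
        self.children = []              # the node's direct subordinates
        if parent is not None:
            parent.children.append(self)

root = Node("CEO")
a = Node("Manager A", root)
b = Node("Manager B", root)             # a and b are peers: same superior
worker = Node("Worker", a)
print(worker.parent.name)               # Manager A
print([c.name for c in root.children])  # ['Manager A', 'Manager B']
```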
Degree of branching.
Degree of branching refers to the number of direct subordinates or children an object has (in graph theory, equivalent to the number of other vertices a node is connected to via outgoing arcs in a directed graph). Hierarchies can be categorized based on the "maximum degree", the highest degree present in the system as a whole. Categorization in this way yields two broad classes: "linear" and "branching".
In a linear hierarchy, the maximum degree is 1. In other words, all of the objects can be visualized in a line-up, and each object (excluding the top and bottom ones) has exactly one direct subordinate and one direct superior. This is referring to the "objects" and not the "levels"; every hierarchy has this property with respect to levels, but normally each level can have an infinite number of objects.
In a branching hierarchy, one or more objects has a degree of 2 or more (and therefore the maximum degree is 2 or higher). For many people, the word "hierarchy" automatically evokes an image of a branching hierarchy. Branching hierarchies are present within numerous systems, including organizations and classification schemes. The broad category of branching hierarchies can be further subdivided based on the degree.
A flat hierarchy (also known for companies as flat organization) is a branching hierarchy in which the maximum degree approaches infinity, i.e., that has a wide span. Most often, systems intuitively regarded as hierarchical have at most a moderate span. Therefore, a flat hierarchy is often not viewed as a hierarchy at all. For example, diamonds and graphite are flat hierarchies of numerous carbon atoms that can be further decomposed into subatomic particles.
An overlapping hierarchy is a branching hierarchy in which at least one object has two parent objects. For example, a graduate student can have two co-supervisors to whom the student reports directly and equally, and who have the same level of authority within the university hierarchy (i.e., they have the same position or tenure status).
Etymology.
Possibly the first use of the English word "hierarchy" cited by the "Oxford English Dictionary" was in 1881, when it was used in reference to the three orders of three angels as depicted by Pseudo-Dionysius the Areopagite (5th–6th centuries). Pseudo-Dionysius used the related Greek word (ἱεραρχία, "hierarchia") both in reference to the celestial hierarchy and the ecclesiastical hierarchy. The Greek term "hierarchia" means 'rule of a high priest', from "hierarches" (ἱεράρχης, 'president of sacred rites, high-priest'), and that from "hiereus" (ἱερεύς, 'priest') and "arche" (ἀρχή, 'first place or power, rule'). Dionysius is credited with first use of it as an abstract noun.
Since hierarchical churches, such as the Roman Catholic (see Catholic Church hierarchy) and Eastern Orthodox churches, had tables of organization that were "hierarchical" in the modern sense of the word (traditionally with God as the pinnacle or head of the hierarchy), the term came to refer to similar organizational methods in secular settings.
Representing hierarchies.
A hierarchy is typically depicted as a pyramid, where the height of a level represents that level's status and width of a level represents the quantity of items at that level relative to the whole. For example, the few Directors of a company could be at the apex, and the base could be thousands of people who have no subordinates.
These pyramids are often diagrammed with a triangle diagram which serves to emphasize the size differences between the levels (but not all triangle/pyramid diagrams are hierarchical; for example, the 1992 USDA food guide pyramid). An example of a triangle diagram appears to the right.
Another common representation of a hierarchical scheme is as a tree diagram. Phylogenetic trees, charts showing the structure of organizations, and playoff brackets in sports are often illustrated this way.
More recently, as computers have allowed the storage and navigation of ever larger data sets, various methods have been developed to represent hierarchies in a manner that makes more efficient use of the available space on a computer's screen. Examples include fractal maps, TreeMaps and Radial Trees.
Visual hierarchy.
In the design field, mainly graphic design, successful layouts and formatting of the content on documents are heavily dependent on the rules of visual hierarchy. Visual hierarchy is also important for proper organization of files on computers.
An example of visually representing hierarchy is through nested clusters. Nested clusters represent hierarchical relationships using layers of information. The child element is within the parent element, such as in a Venn diagram. This structure is most effective in representing simple hierarchical relationships. For example, when directing someone to open a file on a computer desktop, one may first direct them towards the main folder, then the subfolders within the main folder. They will keep opening files within the folders until the designated file is located.
For more complicated hierarchies, the stair structure represents hierarchical relationships through the use of visual stacking. Visually imagine the top of a downward staircase beginning at the left and descending on the right. Child elements are towards the bottom of the stairs and parent elements are at the top.
Informal representation.
In plain English, a hierarchy can be thought of as a set in which:
(1) no element is superior to itself, and
(2) one element, the "hierarch", is superior to all of the other elements in the set.
The first requirement is also interpreted to mean that a hierarchy can have no circular relationships; the association between two objects is always transitive.
The second requirement asserts that a hierarchy must have a leader or root that is common to all of the objects.
Mathematical representation.
Mathematically, in its most general form, a hierarchy is a partially ordered set or "poset". The system in this case is the entire poset, which is constituted of elements. Within this system, each element shares a particular unambiguous property. Objects with the same property value are grouped together, and each of those resulting levels is referred to as a class.
"Hierarchy" is particularly used to refer to a poset in which the classes are organized in terms of increasing complexity.
Operations such as addition, subtraction, multiplication and division are often performed in a certain sequence or order. Usually, addition and subtraction are performed after multiplication and division have already been applied to a problem. The use of parentheses is also a representation of hierarchy, for they show which operation is to be done prior to the following ones. For example:
(2 + 5) × (7 - 4).
In this problem, without the parentheses one would multiply 5 by 7 first, based on the rules of mathematical hierarchy. But because the parentheses are placed, one knows to do the operations within the parentheses first before continuing on with the problem. These rules are largely dominant in algebraic problems, ones that include several steps to solve. The use of hierarchy in mathematics is beneficial to quickly and efficiently solve a problem without having to go through the process of slowly dissecting it. Most of these rules are now known as the proper way of solving certain equations.
Subtypes.
Nested hierarchy.
A nested hierarchy or "inclusion hierarchy" is a hierarchical ordering of nested sets. The concept of nesting is exemplified in Russian matryoshka dolls. Each doll is encompassed by another doll, all the way to the outer doll. The outer doll holds all of the inner dolls, the next outer doll holds all the remaining inner dolls, and so on. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level. The general concept is both demonstrated and mathematically formulated in the following example:
formula_0
A square can always also be referred to as a quadrilateral, polygon or shape. In this way, it is a hierarchy. However, consider the set of polygons using this classification. A square can "only" be a quadrilateral; it can never be a triangle, hexagon, etc.
Nested hierarchies are the organizational schemes behind taxonomies and systematic classifications. For example, using the original Linnaean taxonomy (the version he laid out in the 10th edition of "Systema Naturae"), a human can be formulated as:
formula_1
Taxonomies may change frequently (as seen in biological taxonomy), but the underlying concept of nested hierarchies is always the same.
In many programming taxonomies and syntax models (as well as fractals in mathematics), nested hierarchies, including Russian dolls, are also used to illustrate the properties of self-similarity and recursion. Recursion itself is included as a subset of hierarchical programming, and recursive thinking can be synonymous with a form of hierarchical thinking and logic.
Containment hierarchy.
A containment hierarchy is a direct extrapolation of the nested hierarchy concept. All of the ordered sets are still nested, but every set must be "strict"—no two sets can be identical. The shapes example above can be modified to demonstrate this:
formula_2
The notation formula_3 means "x" is a subset of "y" but is not equal to "y".
A general example of a containment hierarchy is demonstrated in class inheritance in object-oriented programming.
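A short sketch of this idea, mirroring the shape example above (the class names are our own):
```python
# Containment hierarchy via class inheritance:
# square < quadrilateral < polygon < shape (strict containment).
class Shape: pass
class Polygon(Shape): pass
class Quadrilateral(Polygon): pass
class Square(Quadrilateral): pass

s = Square()
print(isinstance(s, Quadrilateral), isinstance(s, Shape))        # True True
print(issubclass(Square, Polygon), issubclass(Polygon, Square))  # True False
```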
Two types of containment hierarchies are the "subsumptive" containment hierarchy and the "compositional" containment hierarchy. A subsumptive hierarchy "subsumes" its children, and a compositional hierarchy is "composed" of its children. A hierarchy can also be both subsumptive "and" compositional.
Subsumptive containment hierarchy.
A "subsumptive" containment hierarchy is a classification of object classes from the general to the specific. Other names for this type of hierarchy are "taxonomic hierarchy" and "IS-A hierarchy". The last term describes the relationship between each level—a lower-level object "is a" member of the higher class. The taxonomical structure outlined above is a subsumptive containment hierarchy. Using again the example of Linnaean taxonomy, it can be seen that an object that is a member of the level "Mammalia" "is a" member of the level "Animalia"; more specifically, a human "is a" primate, a primate "is a" mammal, and so on. A subsumptive hierarchy can also be defined abstractly as a hierarchy of "concepts". For example, with the Linnaean hierarchy outlined above, an entity name like "Animalia" is a way to group all the species that fit the conceptualization of an animal.
Compositional containment hierarchy.
A "compositional" containment hierarchy is an ordering of the parts that make up a system—the system is "composed" of these parts. Most engineered structures, whether natural or artificial, can be broken down in this manner.
The compositional hierarchy that every person encounters at every moment is the hierarchy of life. Every person can be reduced to organ systems, which are composed of organs, which are composed of tissues, which are composed of cells, which are composed of molecules, which are composed of atoms. In fact, the last two levels apply to all matter, at least at the macroscopic scale. Moreover, each of these levels inherits all the properties of its children.
In this particular example, there are also "emergent properties"—functions that are not seen at the lower level (e.g., cognition is not a property of neurons but is of the brain)—and a scalar quality (molecules are bigger than atoms, cells are bigger than molecules, etc.). Both of these concepts commonly exist in compositional hierarchies, but they are not a required general property. These "level hierarchies" are characterized by bi-directional causation. "Upward causation" involves lower-level entities causing some property of a higher level entity; children entities may interact to yield parent entities, and parents are composed at least partly by their children. "Downward causation" refers to the effect that the incorporation of entity "x" into a higher-level entity can have on "x"'s properties and interactions. Furthermore, the entities found at each level are "autonomous".
Contexts and applications.
Kulish (2002) suggests that almost every system of organization which humans apply to the world is arranged hierarchically. Some conventional definitions of the terms "nation" and "government" suggest that every nation has a government and that every government is hierarchical. Sociologists can analyse socioeconomic systems in terms of stratification into a social hierarchy (the social stratification of societies), and all systematic classification schemes (taxonomies) are hierarchical. Most organized religions, regardless of their internal governance structures, operate as a hierarchy under deities and priesthoods. Many Christian denominations have an autocephalous ecclesiastical hierarchy of leadership. Families can be viewed as hierarchical structures in terms of cousinship (e.g., first cousin once removed, second cousin, etc.), ancestry (as depicted in a family tree) and inheritance (succession and heirship). All the requisites of a well-rounded life and lifestyle can be organized according to Maslow's hierarchy of human needs. Learning steps often follow a hierarchical scheme—to master differential equations one must first learn calculus; to learn calculus one must first learn elementary algebra; and so on. Nature offers hierarchical structures, as numerous schemes such as Linnaean taxonomy, the organization of life, and biomass pyramids attempt to document.
While the above examples are often clearly depicted in a hierarchical form and are classic examples, hierarchies exist in numerous systems where this branching structure is not immediately apparent. For example, most postal-code systems are hierarchical. Using the Canadian postal code system as an example, the top level's binding concept, the "postal district", consists of 18 objects (letters). The next level down is the "zone", where the objects are the digits 0–9. This is an example of an overlapping hierarchy, because each of these 10 objects has 18 parents. The hierarchy continues downward to generate, in theory, 7,200,000 unique codes of the format "A0A 0A0" (the second and third letter positions allow 20 objects each). Most library classification systems are also hierarchical. The Dewey Decimal System is infinitely hierarchical because there is no finite bound on the number of digits that can be used after the decimal point.
Organizations.
Organizations can be structured as a dominance hierarchy. In an organizational hierarchy, there is a single person or group with the most power or authority, and each subsequent level represents a lesser authority. Most organizations are structured in this manner, including governments, companies, armed forces, militia and organized religions. The units or persons within an organization may be depicted hierarchically in an organizational chart.
In a reverse hierarchy, the conceptual pyramid of authority is turned upside-down, so that the apex is at the bottom and the base is at the top. This mode represents the idea that members of the higher rankings are responsible for the members of the lower rankings.
Biology.
Empirically, a large proportion of the (complex) biological systems we observe in nature exhibit hierarchic structure. On theoretical grounds we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity. System hierarchies analyses performed in the 1950s laid the empirical foundations for a field that would become, from the 1980s, hierarchical ecology.
The theoretical foundations are summarized by thermodynamics.
When biological systems are modeled as physical systems, in the most general abstraction, they are thermodynamic open systems that exhibit self-organised behavior, and the set/subset relations between dissipative structures can be characterized in a hierarchy.
Other hierarchical representations related to biology include ecological pyramids which illustrate energy flow or trophic levels in ecosystems, and taxonomic hierarchies, including the Linnean classification scheme and phylogenetic trees that reflect inferred patterns of evolutionary relationship among living and extinct species.
Computer-graphic imaging.
CGI and computer-animation programs mostly use hierarchies for models. On a 3D model of a human for example, the chest is a parent of the upper left arm, which is a parent of the lower left arm, which is a parent of the hand. This pattern is used in modeling and animation for almost everything built as a 3D digital model.
Linguistics.
Many grammatical theories, such as phrase-structure grammar, involve hierarchy.
Direct–inverse languages such as Cree and Mapudungun distinguish subject and object on verbs not by different subject and object markers, but via a hierarchy of persons.
In this system, the three (or four with Algonquian languages) persons occur in a hierarchy of salience. To distinguish which is subject and which object, "inverse markers" are used if the object outranks the subject.
On the other hand, languages include a variety of phenomena that are not hierarchical. For example, the relationship between a pronoun and a prior noun-phrase to which it refers commonly crosses grammatical boundaries in non-hierarchical ways.
Music.
The structure of a musical composition is often understood hierarchically (for example by Heinrich Schenker (1868–1935), see Schenkerian analysis, and in the "Generative Theory of Tonal Music" (1985) by composer Fred Lerdahl and linguist Ray Jackendoff). The sum of all notes in a piece is understood to be an all-inclusive surface, which can be reduced to successively more sparse and more fundamental types of motion. The levels of structure that operate in Schenker's theory are the foreground, which is seen in all the details of the musical score; the middle ground, which is roughly a summary of an essential contrapuntal progression and voice-leading; and the background or Ursatz, which is one of only a few basic "long-range counterpoint" structures that are shared in the gamut of tonal music literature.
The pitches and form of tonal music are organized hierarchically, all pitches deriving their importance from their relationship to a tonic key, and secondary themes in other keys are brought back to the tonic in a recapitulation of the primary theme.
Criticisms.
In the work of diverse theorists such as William James (1842 to 1910), Michel Foucault (1926 to 1984) and Hayden White (1928 to 2018), important critiques of hierarchical epistemology are advanced. James famously asserts in his work Radical Empiricism that clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared. But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved.
Feminists, Marxists, anarchists, communists, critical theorists and others, all of whom have multiple interpretations, criticize the hierarchies commonly found within human society, especially in social relationships. Hierarchies are present in all parts of society: in businesses, schools, families, etc. These relationships are often viewed as necessary. Entities that stand in hierarchical arrangements are animals, humans, plants, etc.
Ethics, behavioral psychology, philosophies of identity.
In ethics, various virtues are enumerated and sometimes organized hierarchically according to certain brands of virtue theory.
In some of these examples, there is an asymmetry of 'compositional' significance between levels of structure, so that small parts of the whole hierarchical array depend, for their meaning, on their membership in larger parts. There is a hierarchy of activities in human life: productive activity serves or is guided by the moral life; the moral life is guided by practical reason; practical reason (used in moral and political life) serves contemplative reason (whereby we contemplate God). Practical reason sets aside time and resources for contemplative reason.
| [
{
"math_id": 0,
"text": " \\text{square} \\subset \\text{quadrilateral} \\subset \\text{polygon} \\subset \\text{shape} \\, "
},
{
"math_id": 1,
"text": "\\text{H. sapiens} \\subset \\text{Homo} \\subset \\text{Primates} \\subset \\text{Mammalia} \\subset \\text{Animalia}"
},
{
"math_id": 2,
"text": " \\text{square} \\subsetneq \\text{quadrilateral} \\subsetneq \\text{polygon} \\subsetneq \\text{shape} \\, "
},
{
"math_id": 3,
"text": " x \\subsetneq y \\, "
}
]
| https://en.wikipedia.org/wiki?curid=13998 |
1399856 | Strongly regular graph | Concept in graph theory
In graph theory, a strongly regular graph (SRG) is a regular graph "G" = ("V", "E") with v vertices and degree k such that for some given integers formula_0 every two adjacent vertices have λ common neighbours, and every two non-adjacent vertices have μ common neighbours.
Such a strongly regular graph is denoted by srg("v", "k", λ, μ); its "parameters" are the numbers in ("v", "k", λ, μ). Its complement graph is also strongly regular: it is an srg("v", "v" − "k" − 1, "v" − 2 − 2"k" + μ, "v" − 2"k" + λ).
A strongly regular graph is a distance-regular graph with diameter 2 whenever μ is non-zero. It is a locally linear graph whenever λ = 1.
Etymology.
A strongly regular graph is denoted as an srg("v", "k", λ, μ) in the literature. By convention, graphs which satisfy the definition trivially are excluded from detailed studies and lists of strongly regular graphs. These include the disjoint union of one or more equal-sized complete graphs, and their complements, the complete multipartite graphs with equal-sized independent sets.
Andries Brouwer and Hendrik van Maldeghem (see #References) use an alternate but fully equivalent definition of a strongly regular graph based on spectral graph theory: a strongly regular graph is a finite regular graph that has exactly three eigenvalues, only one of which is equal to the degree "k", of multiplicity 1. This automatically rules out fully connected graphs (which have only two distinct eigenvalues, not three) and disconnected graphs (for which the multiplicity of the degree "k" is equal to the number of different connected components, which would therefore exceed one). Much of the literature, including Brouwer, refers to the larger eigenvalue as "r" (with multiplicity "f") and the smaller one as "s" (with multiplicity "g").
History.
Strongly regular graphs were introduced by R. C. Bose in 1963, building upon earlier work in the 1950s in the then-new field of spectral graph theory.
Examples.
The cycle of length 5 is an srg(5, 2, 0, 1).
The Petersen graph is an srg(10, 3, 0, 1).
The line graph of the complete graph "K""n" (the triangular graph) is a formula_1.
The "n" × "n" rook's graph, i.e. the line graph of the complete bipartite graph "K""n","n", is an srg("n"2, 2"n" − 2, "n" − 2, 2); its parameters for "n" = 4 coincide with those of the Shrikhande graph, but the two graphs are not isomorphic.
The Paley graph of order "q" ≡ 1 (mod 4) is an srg("q", ("q" − 1)/2, ("q" − 5)/4, ("q" − 1)/4); the smallest Paley graph, with "q" = 5, is the 5-cycle (above).
A strongly regular graph is called primitive if both the graph and its complement are connected. All the above graphs are primitive, as otherwise μ = 0 or μ = "k".
Conway's 99-graph problem asks for the construction of an srg(99, 14, 1, 2). It is unknown whether a graph with these parameters exists, and John Horton Conway offered a $1000 prize for the solution to this problem.
Triangle-free graphs.
The strongly regular graphs with λ = 0 are triangle-free. Apart from the complete graphs on fewer than 3 vertices and all complete bipartite graphs, the only known ones are seven graphs: the pentagon, the Petersen graph, the Clebsch graph, the Hoffman–Singleton graph, the Gewirtz graph, the Mesner–M22 graph, and the Higman–Sims graph.
Geodetic graphs.
Every strongly regular graph with formula_2 is a geodetic graph, a graph in which every two vertices have a unique unweighted shortest path. The only known strongly regular graphs with formula_2 are those where formula_3 is 0, therefore triangle-free as well. These are called the Moore graphs and are explored below in more detail. Other combinations of parameters such as (400, 21, 2, 1) have not yet been ruled out. Despite ongoing research on the properties that a strongly regular graph with formula_4 would have, it is not known whether any more exist or even whether their number is finite. Only the elementary result is known, that formula_3 cannot be 1 for such a graph.
Algebraic properties of strongly regular graphs.
Basic relationship between parameters.
The four parameters in an srg("v", "k", λ, μ) are not independent. They must obey the following relation:
formula_5
The above relation is derived through a counting argument as follows:
Imagine the vertices arranged in three levels. Pick any vertex as the root, in Level 0; its "k" neighbours form Level 1; and the remaining formula_8 vertices form Level 2.
Each vertex in Level 1 is adjacent to the root and shares λ neighbours with it, all of which lie in Level 1; so it has formula_6 further neighbours, all of which must lie in Level 2. Therefore the number of edges between Level 1 and Level 2 is formula_7.
Each vertex in Level 2 is not adjacent to the root and therefore has μ common neighbours with it, all of which lie in Level 1. Therefore the number of edges between Level 2 and Level 1 is formula_9.
Setting the two expressions equal gives the relation.
Adjacency matrix equations.
Let "I" denote the identity matrix and let "J" denote the matrix of ones, both matrices of order "v". The adjacency matrix "A" of a strongly regular graph satisfies two equations.
First:
formula_10
which is a restatement of the regularity requirement. This shows that "k" is an eigenvalue of the adjacency matrix with the all-ones eigenvector.
Second:
formula_11
which expresses strong regularity. The "ij"-th element of the left hand side gives the number of two-step paths from "i" to "j". The first term of the right hand side gives the number of two-step paths from "i" back to "i", namely "k" edges out and back in. The second term gives the number of two-step paths when "i" and "j" are directly connected. The third term gives the corresponding value when "i" and "j" are not connected. Since the three cases are mutually exclusive and collectively exhaustive, the simple additive equality follows.
Conversely, a graph whose adjacency matrix satisfies both of the above conditions and which is not a complete or null graph is a strongly regular graph.
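Both equations are easy to check numerically; the following sketch verifies them for the 5-cycle, an srg(5, 2, 0, 1), with numpy:
```python
# Check AJ = kJ and A^2 = kI + lambda*A + mu*(J - I - A) for the 5-cycle.
import numpy as np

v, k, lam, mu = 5, 2, 0, 1
A = np.zeros((v, v), dtype=int)
for i in range(v):                      # adjacency matrix of the cycle C5
    A[i, (i + 1) % v] = A[(i + 1) % v, i] = 1

I, J = np.eye(v, dtype=int), np.ones((v, v), dtype=int)
print(np.array_equal(A @ J, k * J))                               # True
print(np.array_equal(A @ A, k * I + lam * A + mu * (J - I - A)))  # True
```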
Eigenvalues and graph spectrum.
Since the adjacency matrix A is symmetric, it follows that its eigenvectors are orthogonal. We already observed one eigenvector above which is made of all ones, corresponding to the eigenvalue "k". Therefore the other eigenvectors "x" must all satisfy formula_12 where "J" is the all-ones matrix as before. Take the previously established equation:
formula_11
and multiply the above equation by eigenvector "x":
formula_13
Call the corresponding eigenvalue "p" (not to be confused with formula_3 the graph parameter) and substitute formula_14, formula_12 and formula_15:
formula_16
Eliminate x and rearrange to get a quadratic:
formula_17
This gives the two additional eigenvalues formula_18. There are thus exactly three eigenvalues for a strongly regular matrix.
Conversely, a connected regular graph with only three eigenvalues is strongly regular.
Following the terminology in much of the strongly regular graph literature, the larger eigenvalue is called "r" with multiplicity "f" and the smaller one is called "s" with multiplicity "g".
Since the sum of all the eigenvalues is the trace of the adjacency matrix, which is zero in this case, the respective multiplicities "f" and "g" can be calculated:
Eigenvalue formula_19 has multiplicity formula_20, and
eigenvalue formula_21 has multiplicity formula_22.
As the multiplicities must be integers, their expressions provide further constraints on the values of "v", "k", "μ", and "λ".
Strongly regular graphs for which formula_23 have integer eigenvalues with unequal multiplicities.
Strongly regular graphs for which formula_24 are called conference graphs because of their connection with symmetric conference matrices. Their parameters reduce to
formula_25
Their eigenvalues are formula_26 and formula_27, both of whose multiplicities are equal to formula_28. Further, in this case, "v" must equal the sum of two squares, related to the Bruck–Ryser–Chowla theorem.
Further properties of the eigenvalues and their multiplicities are:
formula_29, and therefore formula_30;
formula_31;
formula_32;
formula_33;
alternate expressions for the multiplicities are formula_34 and formula_35;
the frame quotient condition formula_36 holds, and in particular formula_37 implies formula_38;
the Krein conditions formula_39 and formula_40 hold;
the absolute bounds formula_41 and formula_42 hold;
the claw bound holds: if formula_43, then formula_44 or formula_45.
If the above condition(s) are violated for any set of parameters, then there exists no strongly regular graph for those parameters. Brouwer has compiled such lists of existence or non-existence here with reasons for non-existence if any.
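As a small feasibility sketch (a helper of our own that checks only the basic counting relation and the integrality of the multiplicities, not the full list of conditions above):
```python
# Compute the multiplicities f, g for candidate parameters, or None if the
# basic relation fails or the multiplicities are not non-negative integers.
import math

def srg_multiplicities(v, k, lam, mu):
    if (v - k - 1) * mu != k * (k - lam - 1):
        return None                       # counting relation fails
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    num = 2 * k + (v - 1) * (lam - mu)
    if num == 0:                          # conference-graph case: f = g
        return ((v - 1) // 2, (v - 1) // 2)
    sq = math.isqrt(disc)
    if sq * sq != disc or num % sq or ((v - 1) - num // sq) % 2:
        return None                       # multiplicities would not be integers
    f = ((v - 1) - num // sq) // 2
    g = ((v - 1) + num // sq) // 2
    return (f, g) if min(f, g) >= 0 else None

print(srg_multiplicities(10, 3, 0, 1))    # Petersen graph: (5, 4)
print(srg_multiplicities(16, 5, 0, 2))    # Clebsch graph: (10, 5)
```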
The Hoffman–Singleton theorem.
As noted above, the multiplicities of the eigenvalues are given by
formula_46
which must be integers.
In 1960, Alan Hoffman and Robert Singleton examined those expressions when applied on Moore graphs that have "λ" = 0 and "μ" = 1. Such graphs are free of triangles (otherwise "λ" would exceed zero) and quadrilaterals (otherwise "μ" would exceed 1), hence they have a girth (smallest cycle length) of 5. Substituting the values of "λ" and "μ" in the equation formula_5, it can be seen that formula_47, and the eigenvalue multiplicities reduce to
formula_48
For the multiplicities to be integers, the quantity formula_49 must be rational, therefore either the numerator formula_50 is zero or the denominator formula_51 is an integer.
If the numerator formula_50 is zero, the possibilities are:
"k" = 0 and "v" = 1 yields a trivial graph with one vertex and no edges, and
"k" = 2 and "v" = 5 yields the 5-cycle formula_52 discussed above.
If the denominator formula_51 is an integer "t", then formula_53 is a perfect square formula_54, so formula_55. Substituting:
formula_56
Since both sides are integers, formula_57 must be an integer, therefore "t" is a factor of 15, namely formula_58, therefore formula_59. In turn:
"k" = 1 and "v" = 2 yields the trivial graph consisting of a single edge,
"k" = 3 and "v" = 10 yields the Petersen graph,
"k" = 7 and "v" = 50 yields the Hoffman–Singleton graph, and
"k" = 57 and "v" = 3250 predicts a graph that has neither been constructed nor proven not to exist.
The Hoffman-Singleton theorem states that there are no strongly regular girth-5 Moore graphs except the ones listed above.
| [
{
"math_id": 0,
"text": "\\lambda, \\mu \\ge 0"
},
{
"math_id": 1,
"text": "\\operatorname{srg}\\left(\\binom{n}{2}, 2(n - 2), n - 2, 4\\right)"
},
{
"math_id": 2,
"text": "\\mu = 1"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "\\mu=1"
},
{
"math_id": 5,
"text": "(v - k - 1)\\mu = k(k - \\lambda - 1)"
},
{
"math_id": 6,
"text": "k - \\lambda - 1"
},
{
"math_id": 7,
"text": "k (k - \\lambda - 1)"
},
{
"math_id": 8,
"text": "(v - k - 1)"
},
{
"math_id": 9,
"text": "(v - k - 1)\\mu"
},
{
"math_id": 10,
"text": "AJ = JA = kJ,"
},
{
"math_id": 11,
"text": "A^2 = kI + \\lambda{A} + \\mu(J - I - A)"
},
{
"math_id": 12,
"text": "Jx = 0"
},
{
"math_id": 13,
"text": "A^2 x = kIx + \\lambda{A}x + \\mu(J - I - A)x"
},
{
"math_id": 14,
"text": "Ax = px"
},
{
"math_id": 15,
"text": "Ix = x"
},
{
"math_id": 16,
"text": "p^2 x = kx + \\lambda p x - \\mu x - \\mu p x"
},
{
"math_id": 17,
"text": "p^2 + (\\mu - \\lambda ) p - (k - \\mu) = 0"
},
{
"math_id": 18,
"text": "\\frac{1}{2}\\left[(\\lambda - \\mu) \\pm \\sqrt{(\\lambda - \\mu)^2 + 4(k - \\mu)}\\,\\right]"
},
{
"math_id": 19,
"text": "r = \\frac{1}{2}\\left[(\\lambda - \\mu) + \\sqrt{(\\lambda - \\mu)^2 + 4(k - \\mu)}\\,\\right]"
},
{
"math_id": 20,
"text": "f = \\frac{1}{2}\\left[(v - 1) - \\frac{2k + (v - 1)(\\lambda - \\mu)}{\\sqrt{(\\lambda - \\mu)^2 + 4(k - \\mu)}}\\right]"
},
{
"math_id": 21,
"text": "s = \\frac{1}{2}\\left[(\\lambda - \\mu) - \\sqrt{(\\lambda - \\mu)^2 + 4(k-\\mu)}\\,\\right]"
},
{
"math_id": 22,
"text": "g = \\frac{1}{2}\\left[(v - 1) + \\frac{2k + (v - 1)(\\lambda - \\mu)}{\\sqrt{(\\lambda - \\mu)^2 + 4(k - \\mu)}}\\right]"
},
{
"math_id": 23,
"text": "2k + (v - 1)(\\lambda - \\mu) \\ne 0"
},
{
"math_id": 24,
"text": "2k + (v - 1)(\\lambda - \\mu) = 0"
},
{
"math_id": 25,
"text": "\\operatorname{srg}\\left(v, \\frac{1}{2}(v - 1), \\frac{1}{4}(v - 5), \\frac{1}{4}(v - 1)\\right)."
},
{
"math_id": 26,
"text": "r =\\frac{-1 + \\sqrt{v}}{2}"
},
{
"math_id": 27,
"text": "s = \\frac{-1 - \\sqrt{v}}{2}"
},
{
"math_id": 28,
"text": "\\frac{v-1}{2}"
},
{
"math_id": 29,
"text": "(A - rI)\\times(A - sI) = \\mu.J"
},
{
"math_id": 30,
"text": "(k - r).(k - s) = \\mu v"
},
{
"math_id": 31,
"text": "\\lambda - \\mu = r + s"
},
{
"math_id": 32,
"text": "k - \\mu = -r\\times s"
},
{
"math_id": 33,
"text": "k \\ge r"
},
{
"math_id": 34,
"text": "f =\\frac{(s+1)k(k-s)}{\\mu(s-r)}"
},
{
"math_id": 35,
"text": "g =\\frac{(r+1)k(k-r)}{\\mu(r-s)}"
},
{
"math_id": 36,
"text": "v k (v-k-1) = f g (r-s)^2"
},
{
"math_id": 37,
"text": "v = (r-s)^2"
},
{
"math_id": 38,
"text": "{f,g} = {k, v-k-1}"
},
{
"math_id": 39,
"text": "(v-k-1)^2 (k^2 + r^3) \\ge (r+1)^3 k^2"
},
{
"math_id": 40,
"text": "(v-k-1)^2 (k^2 + s^3) \\ge (s+1)^3 k^2"
},
{
"math_id": 41,
"text": "v \\le \\frac{f(f+3)}{2}"
},
{
"math_id": 42,
"text": "v \\le \\frac{g(g+3)}{2}"
},
{
"math_id": 43,
"text": "r + 1 > \\frac{s(s+1)(\\mu+1)}{2}"
},
{
"math_id": 44,
"text": "\\mu = s^2"
},
{
"math_id": 45,
"text": "\\mu = s(s+1)"
},
{
"math_id": 46,
"text": "M_{\\pm} = \\frac{1}{2}\\left[(v - 1) \\pm \\frac{2k + (v - 1)(\\lambda - \\mu)}{\\sqrt{(\\lambda - \\mu)^2 + 4(k - \\mu)}}\\right]"
},
{
"math_id": 47,
"text": "v = k^2 + 1"
},
{
"math_id": 48,
"text": "M_{\\pm} = \\frac{1}{2}\\left[k^2 \\pm \\frac{2k - k^2}{\\sqrt{4k - 3}}\\right]"
},
{
"math_id": 49,
"text": "\\frac{2k - k^2}{\\sqrt{4k - 3}}"
},
{
"math_id": 50,
"text": "2k - k^2"
},
{
"math_id": 51,
"text": "\\sqrt{4k - 3}"
},
{
"math_id": 52,
"text": "C_5"
},
{
"math_id": 53,
"text": "4k - 3"
},
{
"math_id": 54,
"text": "t^2"
},
{
"math_id": 55,
"text": "k = \\frac{t^2 + 3}{4}"
},
{
"math_id": 56,
"text": "\\begin{align}\nM_{\\pm} &= \\frac{1}{2} \\left[\\left(\\frac{t^2 + 3}{4}\\right)^2 \\pm \\frac{\\frac{t^2 + 3}{2} - \\left(\\frac{t^2 + 3}{4}\\right)^2}{t}\\right] \\\\\n32 M_{\\pm} &= (t^2 + 3)^2 \\pm \\frac{8(t^2 + 3) - (t^2 + 3)^2}{t} \\\\\n &= t^4 + 6t^2 + 9 \\pm \\frac{- t^4 + 2t^2 + 15}{t} \\\\\n &= t^4 + 6t^2 + 9 \\pm \\left(-t^3 + 2t + \\frac{15}{t}\\right)\n\\end{align}"
},
{
"math_id": 57,
"text": "\\frac{15}{t}"
},
{
"math_id": 58,
"text": "t \\in \\{\\pm 1, \\pm 3, \\pm 5, \\pm 15\\}"
},
{
"math_id": 59,
"text": "k \\in \\{1, 3, 7, 57\\}"
}
]
| https://en.wikipedia.org/wiki?curid=1399856 |
1399873 | Circle graph | Intersection graph of a chord diagram
In graph theory, a circle graph is the intersection graph of a chord diagram. That is, it is an undirected graph whose vertices can be associated with a finite system of chords of a circle such that two vertices are adjacent if and only if the corresponding chords cross each other.
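As an illustration of the definition, a circle graph can be built from a chord diagram by checking whether the endpoints of each pair of chords interleave around the circle. The following minimal Python sketch (with a hypothetical chords_cross helper, not from any standard library) does this for chords given as pairs of distinct positions:

def chords_cross(c1, c2):
    # Two chords cross exactly when one endpoint of c2 lies strictly
    # between the endpoints of c1 and the other endpoint does not.
    a, b = sorted(c1)
    return (a < c2[0] < b) != (a < c2[1] < b)

chords = [(0, 3), (1, 5), (2, 7), (4, 6)]  # endpoint positions around the circle
edges = [(i, j) for i in range(len(chords)) for j in range(i + 1, len(chords))
         if chords_cross(chords[i], chords[j])]
print(edges)  # [(0, 1), (0, 2), (1, 2), (1, 3)]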
Algorithmic complexity.
After earlier polynomial time algorithms, an algorithm for recognizing circle graphs in near-linear time was presented. The method is slower than linear by a factor of the inverse Ackermann function, and is based on lexicographic breadth-first search. The running time comes from a method for maintaining the split decomposition of a graph incrementally, as vertices are added, which is used as a subroutine in the algorithm.
A number of other problems that are NP-complete on general graphs have polynomial time algorithms when restricted to circle graphs. For instance, it has been shown that the treewidth of a circle graph can be determined, and an optimal tree decomposition constructed, in O("n"3) time. Additionally, a minimum fill-in (that is, a chordal graph with as few edges as possible that contains the given circle graph as a subgraph) may be found in O("n"3) time.
It has also been shown that a maximum clique of a circle graph can be found in O("n" log2 "n") time, while a maximum independent set of an unweighted circle graph can be found in O("n" min{"d", "α"}) time, where "d" is a parameter of the graph known as its density, and "α" is the independence number of the circle graph.
However, there are also problems that remain NP-complete when restricted to circle graphs. These include the minimum dominating set, minimum connected dominating set, and minimum total dominating set problems.
Chromatic number.
The chromatic number of a circle graph is the minimum number of colors that can be used to color its chords so that no two crossing chords have the same color. Since it is possible to form circle graphs in which arbitrarily large sets of chords all cross each other, the chromatic number of a circle graph may be arbitrarily large, and determining the chromatic number of a circle graph is NP-complete. It remains NP-complete to test whether a circle graph can be colored by four colors. It has been claimed that finding a coloring with three colors may be done in polynomial time, but the writeup of this result omits many details.
Several authors have investigated problems of coloring restricted subclasses of circle graphs with few colors. In particular, for circle graphs in which no sets of "k" or more chords all cross each other, it is possible to color the graph with as few as formula_0 colors. One way of stating this is that the circle graphs are formula_1-bounded. In the particular case when "k" = 3 (that is, for triangle-free circle graphs) the chromatic number is at most five, and this is tight: all triangle-free circle graphs may be colored with five colors, and there exist triangle-free circle graphs that require five colors. If a circle graph has girth at least five (that is, it is triangle-free and has no four-vertex cycles) it can be colored with at most three colors. The problem of coloring triangle-free squaregraphs is equivalent to the problem of representing squaregraphs as isometric subgraphs of Cartesian products of trees; in this correspondence, the number of colors in the coloring corresponds to the number of trees in the product representation.
Applications.
Circle graphs arise in VLSI physical design as an abstract representation for a special case of wire routing, known as "two-terminal switchbox routing". In this case the routing area is a rectangle, all nets are two-terminal, and the terminals are placed on the perimeter of the rectangle. It is easily seen that the intersection graph of these nets is a circle graph. Among the goals of the wire routing step is to ensure that different nets stay electrically disconnected; their potentially intersecting parts must be laid out in different conducting layers. Therefore, circle graphs capture various aspects of this routing problem.
Colorings of circle graphs may also be used to find book embeddings of arbitrary graphs: if the vertices of a given graph "G" are arranged on a circle, with the edges of "G" forming chords of the circle, then the intersection graph of these chords is a circle graph and colorings of this circle graph are equivalent to book embeddings that respect the given circular layout. In this equivalence, the number of colors in the coloring corresponds to the number of pages in the book embedding.
Related graph classes.
A graph is a circle graph if and only if it is the overlap graph of a set of intervals on a line. This is a graph in which the vertices correspond to the intervals, and two vertices are connected by an edge if the two intervals overlap, with neither containing the other.
The intersection graph of a set of intervals on a line is called the interval graph.
String graphs, the intersection graphs of curves in the plane, include circle graphs as a special case.
Every distance-hereditary graph is a circle graph, as is every permutation graph and every indifference graph. Every outerplanar graph is also a circle graph.
The circle graphs are generalized by the polygon-circle graphs, intersection graphs of polygons all inscribed in the same circle.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "7k^2"
},
{
"math_id": 1,
"text": "\\chi"
}
]
| https://en.wikipedia.org/wiki?curid=1399873 |
13998981 | Rational homotopy theory | Mathematical theory of topological spaces
In mathematics and specifically in topology, rational homotopy theory is a simplified version of homotopy theory for topological spaces, in which all torsion in the homotopy groups is ignored. It was founded by Dennis Sullivan (1977) and Daniel Quillen (1969). This simplification of homotopy theory makes certain calculations much easier.
Rational homotopy types of simply connected spaces can be identified with (isomorphism classes of) certain algebraic objects called Sullivan minimal models, which are commutative differential graded algebras over the rational numbers satisfying certain conditions.
A geometric application was the theorem of Sullivan and Micheline Vigué-Poirrier (1976): every simply connected closed Riemannian manifold "X" whose rational cohomology ring is not generated by one element has infinitely many geometrically distinct closed geodesics. The proof used rational homotopy theory to show that the Betti numbers of the free loop space of "X" are unbounded. The theorem then follows from a 1969 result of Detlef Gromoll and Wolfgang Meyer.
Rational spaces.
A continuous map formula_0 of simply connected topological spaces is called a rational homotopy equivalence if it induces an isomorphism on homotopy groups tensored with the rational numbers formula_1. Equivalently: "f" is a rational homotopy equivalence if and only if it induces an isomorphism on singular homology groups with rational coefficients. The rational homotopy category (of simply connected spaces) is defined to be the localization of the category of simply connected spaces with respect to rational homotopy equivalences. The goal of rational homotopy theory is to understand this category (i.e. to determine the information that can be recovered from rational homotopy equivalences).
One basic result is that the rational homotopy category is equivalent to a full subcategory of the homotopy category of topological spaces, the subcategory of rational spaces. By definition, a rational space is a simply connected CW complex all of whose homotopy groups are vector spaces over the rational numbers. For any simply connected CW complex formula_2, there is a rational space formula_3, unique up to homotopy equivalence, with a map formula_4 that induces an isomorphism on homotopy groups tensored with the rational numbers. The space formula_3 is called the rationalization of formula_2. This is a special case of Sullivan's construction of the localization of a space at a given set of prime numbers.
One obtains equivalent definitions using homology rather than homotopy groups. Namely, a simply connected CW complex formula_2 is a rational space if and only if its homology groups formula_5 are rational vector spaces for all formula_6. The rationalization of a simply connected CW complex formula_2 is the unique rational space formula_3 (up to homotopy equivalence) with a map formula_4 that induces an isomorphism on rational homology. Thus, one has
formula_7
and
formula_8
for all formula_9.
These results for simply connected spaces extend with little change to nilpotent spaces (spaces whose fundamental group is nilpotent and acts nilpotently on the higher homotopy groups).
Computing the homotopy groups of spheres is a central open problem in homotopy theory. However, the "rational" homotopy groups of spheres were computed by Jean-Pierre Serre in 1951:
formula_10
and
formula_11
This suggests the possibility of describing the whole rational homotopy category in a practically computable way. Rational homotopy theory has realized much of that goal.
In homotopy theory, spheres and Eilenberg–MacLane spaces are two very different types of basic spaces from which all spaces can be built. In rational homotopy theory, these two types of spaces become much closer. In particular, Serre's calculation implies that formula_12 is the Eilenberg–MacLane space formula_13. More generally, let "X" be any space whose rational cohomology ring is a free graded-commutative algebra (a tensor product of a polynomial ring on generators of even degree and an exterior algebra on generators of odd degree). Then the rationalization formula_3 is a product of Eilenberg–MacLane spaces. The hypothesis on the cohomology ring applies to any compact Lie group (or more generally, any loop space). For example, for the unitary group SU("n"),
formula_14
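For instance, taking "n" = 3 in the formula above (a standard special case, written out here explicitly for illustration):
\operatorname{SU}(3)_{\Q} \simeq S^3_{\Q}\times S^5_{\Q}, \qquad \pi_i(\operatorname{SU}(3))\otimes\Q \cong \begin{cases} \Q & \text{if } i = 3 \text{ or } 5 \\ 0 & \text{otherwise.} \end{cases}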
Cohomology ring and homotopy Lie algebra.
There are two basic invariants of a space "X" in the rational homotopy category: the rational cohomology ring formula_15 and the homotopy Lie algebra formula_16. The rational cohomology is a graded-commutative algebra over formula_1, and the homotopy groups form a graded Lie algebra via the Whitehead product. (More precisely, writing formula_17 for the loop space of "X", we have that formula_18 is a graded Lie algebra over formula_1. In view of the isomorphism formula_19, this just amounts to a shift of the grading by 1.) For example, Serre's theorem above says that formula_20 is the free graded Lie algebra on one generator of degree formula_21.
Another way to think of the homotopy Lie algebra is that the homology of the loop space of "X" is the universal enveloping algebra of the homotopy Lie algebra:
formula_22
Conversely, one can reconstruct the rational homotopy Lie algebra from the homology of the loop space as the subspace of primitive elements in the Hopf algebra formula_23.
A central result of the theory is that the rational homotopy category can be described in a purely algebraic way; in fact, in two different algebraic ways. First, Quillen showed that the rational homotopy category is equivalent to the homotopy category of connected differential graded Lie algebras. (The associated graded Lie algebra formula_24 is the homotopy Lie algebra.) Second, Quillen showed that the rational homotopy category is equivalent to the homotopy category of 1-connected differential graded cocommutative coalgebras. (The associated coalgebra is the rational homology of "X" as a coalgebra; the dual vector space is the rational cohomology ring.) These equivalences were among the first applications of Quillen's theory of model categories.
In particular, the second description implies that for any graded-commutative formula_1-algebra "A" of the form
formula_25
with each vector space formula_26 of finite dimension, there is a simply connected space "X" whose rational cohomology ring is isomorphic to "A". (By contrast, there are many restrictions, not completely understood, on the integral or mod "p" cohomology rings of topological spaces, for prime numbers "p".) In the same spirit, Sullivan showed that any graded-commutative formula_1-algebra with formula_27 that satisfies Poincaré duality is the cohomology ring of some simply connected smooth closed manifold, except in dimension 4"a"; in that case, one also needs to assume that the intersection pairing on formula_28 is of the form formula_29 over formula_1.
One may ask how to pass between the two algebraic descriptions of the rational homotopy category. In short, a Lie algebra determines a graded-commutative algebra by Lie algebra cohomology, and an augmented commutative algebra determines a graded Lie algebra by reduced André–Quillen cohomology. More generally, there are versions of these constructions for differential graded algebras. This duality between commutative algebras and Lie algebras is a version of Koszul duality.
Sullivan algebras.
For spaces whose rational homology in each degree has finite dimension, Sullivan classified all rational homotopy types in terms of simpler algebraic objects, Sullivan algebras. By definition, a Sullivan algebra is a commutative differential graded algebra over the rationals formula_1, whose underlying algebra is the free commutative graded algebra formula_30 on a graded vector space
formula_31
satisfying the following "nilpotence condition" on its differential "d": the space "V" is the union of an increasing series of graded subspaces, formula_32, where formula_33 on formula_34 and formula_35 is contained in formula_36. In the context of differential graded algebras "A", "commutative" is used to mean graded-commutative; that is,
formula_37
for "a" in formula_26 and "b" in formula_38.
The Sullivan algebra is called minimal if the image of "d" is contained in formula_39, where formula_40 is the direct sum of the positive-degree subspaces of formula_41.
A Sullivan model for a commutative differential graded algebra "A" is a Sullivan algebra formula_41 with a homomorphism formula_42 which induces an isomorphism on cohomology. If formula_43, then "A" has a minimal Sullivan model which is unique up to isomorphism. (Warning: a minimal Sullivan algebra with the same cohomology algebra as "A" need not be a minimal Sullivan model for "A": it is also necessary that the isomorphism of cohomology be induced by a homomorphism of differential graded algebras. There are examples of non-isomorphic minimal Sullivan models with isomorphic cohomology algebras.)
The Sullivan minimal model of a topological space.
For any topological space "X", Sullivan defined a commutative differential graded algebra formula_44, called the algebra of polynomial differential forms on "X" with rational coefficients. An element of this algebra consists of (roughly) a polynomial form on each singular simplex of "X", compatible with face and degeneracy maps. This algebra is usually very large (uncountable dimension) but can be replaced by a much smaller algebra. More precisely, any differential graded algebra with the same Sullivan minimal model as formula_44 is called a model for the space "X". When "X" is simply connected, such a model determines the rational homotopy type of "X".
For any simply connected CW complex "X" with all rational homology groups of finite dimension, there is a minimal Sullivan model formula_45 for formula_44, which has the property that formula_46 and all the formula_47 have finite dimension. This is called the Sullivan minimal model of "X"; it is unique up to isomorphism. This gives an equivalence between rational homotopy types of such spaces and such algebras, under which the rational cohomology of "X" is the cohomology of its minimal Sullivan model, and each formula_47 is dual to the rational homotopy group of "X" in degree "k".
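As a concrete illustration (a standard example, written out here in explicit LaTeX rather than with the formula placeholders used elsewhere), the minimal Sullivan model of the 2-sphere is
\bigwedge(a, b), \qquad \deg a = 2,\ \deg b = 3, \qquad da = 0,\ db = a^2,
whose cohomology is formula_1 in degrees 0 and 2, matching the rational cohomology of the sphere, while the generators "a" and "b" reflect the rational homotopy groups in degrees 2 and 3 given by Serre's computation above.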
When "X" is a smooth manifold, the differential algebra of smooth differential forms on "X" (the de Rham complex) is almost a model for "X"; more precisely it is the tensor product of a model for "X" with the reals and therefore determines the real homotopy type. One can go further and define the "p"-completed homotopy type of "X" for a prime number "p". Sullivan's "arithmetic square" reduces many problems in homotopy theory to the combination of rational and "p"-completed homotopy theory, for all primes "p".
The construction of Sullivan minimal models for simply connected spaces extends to nilpotent spaces. For more general fundamental groups, things get more complicated; for example, the rational homotopy groups of a finite CW complex (such as the wedge formula_48) can be infinite-dimensional vector spaces.
Formal spaces.
A commutative differential graded algebra "A", again with formula_43, is called formal if "A" has a model with vanishing differential. This is equivalent to requiring that the cohomology algebra of "A" (viewed as a differential algebra with trivial differential) is a model for "A" (though it does not have to be the "minimal" model). Thus the rational homotopy type of a formal space is completely determined by its cohomology ring.
Examples of formal spaces include spheres, H-spaces, symmetric spaces, and compact Kähler manifolds. Formality is preserved under products and wedge sums. For manifolds, formality is preserved by connected sums.
On the other hand, closed nilmanifolds are almost never formal: if "M" is a formal nilmanifold, then "M" must be the torus of some dimension. The simplest example of a non-formal nilmanifold is the Heisenberg manifold, the quotient of the Heisenberg group of real 3×3 upper triangular matrices with 1's on the diagonal by its subgroup of matrices with integral coefficients. Closed symplectic manifolds need not be formal: the simplest example is the Kodaira–Thurston manifold (the product of the Heisenberg manifold with a circle). There are also examples of non-formal, simply connected symplectic closed manifolds.
Non-formality can often be detected by Massey products. Indeed, if a differential graded algebra "A" is formal, then all (higher order) Massey products must vanish. The converse is not true: formality means, roughly speaking, the "uniform" vanishing of all Massey products. The complement of the Borromean rings is a non-formal space: it supports a nontrivial triple Massey product.
Elliptic and hyperbolic spaces.
Rational homotopy theory revealed an unexpected dichotomy among finite CW complexes: either the rational homotopy groups are zero in sufficiently high degrees, or they grow exponentially. Namely, let "X" be a simply connected space such that formula_71 is a finite-dimensional formula_1-vector space (for example, a finite CW complex has this property). Define "X" to be rationally elliptic if formula_72 is also a finite-dimensional formula_1-vector space, and otherwise rationally hyperbolic. Then Félix and Halperin showed: if "X" is rationally hyperbolic, then there is a real number formula_73 and an integer "N" such that
formula_74
for all formula_75.
For example, spheres, complex projective spaces, and homogeneous spaces for compact Lie groups are elliptic. On the other hand, "most" finite complexes are hyperbolic; for example, a wedge of two spheres of dimension at least 2 is rationally hyperbolic.
There are many other restrictions on the rational cohomology ring of an elliptic space.
Bott's conjecture predicts that every simply connected closed Riemannian manifold with nonnegative sectional curvature should be rationally elliptic. Very little is known about the conjecture, although it holds for all known examples of such manifolds.
Halperin's conjecture asserts that the rational Serre spectral sequence of a fiber sequence of simply-connected spaces with rationally elliptic fiber of non-zero Euler characteristic collapses (degenerates) at the second page.
A simply connected finite complex "X" is rationally elliptic if and only if the rational homology of the loop space formula_17 grows at most polynomially. More generally, "X" is called integrally elliptic if the mod "p" homology of formula_17 grows at most polynomially, for every prime number "p". All known Riemannian manifolds with nonnegative sectional curvature are in fact integrally elliptic.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f\\colon X \\to Y"
},
{
"math_id": 1,
"text": "\\Q"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "X_{\\Q}"
},
{
"math_id": 4,
"text": "X \\to X_{\\Q}"
},
{
"math_id": 5,
"text": "H_i(X,\\Z)"
},
{
"math_id": 6,
"text": "i > 0"
},
{
"math_id": 7,
"text": "\\pi_i(X_{\\Q})\\cong \\pi_i(X)\\otimes {\\Q}"
},
{
"math_id": 8,
"text": "H_i(X_{\\Q},{\\Z})\\cong H_i(X,{\\Z})\\otimes {\\Q}\\cong H_i(X,{\\Q})"
},
{
"math_id": 9,
"text": "i>0"
},
{
"math_id": 10,
"text": "\\pi_i(S^{2a-1})\\otimes \\Q\\cong \\begin{cases}\n\\Q &\\text{if }i=2a-1\\\\\n0 &\\text{otherwise}\n\\end{cases}"
},
{
"math_id": 11,
"text": "\\pi_i(S^{2a})\\otimes \\Q\\cong \\begin{cases}\n\\Q &\\text{if }i=2a\\text{ or }i=4a-1\\\\\n0 &\\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 12,
"text": "S^{2a-1}_{\\Q}"
},
{
"math_id": 13,
"text": "K(\\Q,2a-1)"
},
{
"math_id": 14,
"text": "\\operatorname{SU}(n)_{\\Q} \\simeq S^3_{\\Q}\\times S^5_{\\Q}\\times\\cdots\\times S^{2n-1}_{\\Q}."
},
{
"math_id": 15,
"text": "H^*(X,\\Q)"
},
{
"math_id": 16,
"text": "\\pi_*(X)\\otimes \\Q"
},
{
"math_id": 17,
"text": "\\Omega X"
},
{
"math_id": 18,
"text": "\\pi_*(\\Omega X)\\otimes \\Q"
},
{
"math_id": 19,
"text": "\\pi_i(X) \\cong \\pi_{i-1}(\\Omega X)"
},
{
"math_id": 20,
"text": "\\pi_{*}(\\Omega S^n) \\otimes \\Q"
},
{
"math_id": 21,
"text": "n-1"
},
{
"math_id": 22,
"text": "H_*(\\Omega X,{\\Q})\\cong U(\\pi_*(\\Omega X)\\otimes \\Q)."
},
{
"math_id": 23,
"text": "H_{*}(\\Omega S^n) \\otimes \\Q"
},
{
"math_id": 24,
"text": "\\ker(d)/\\operatorname{im}(d)"
},
{
"math_id": 25,
"text": "A=\\Q \\oplus A^2\\oplus A^3 \\oplus\\cdots,"
},
{
"math_id": 26,
"text": "A^i"
},
{
"math_id": 27,
"text": "A^1=0"
},
{
"math_id": 28,
"text": "A^{2a}"
},
{
"math_id": 29,
"text": "\\sum \\pm x^2_i"
},
{
"math_id": 30,
"text": "\\bigwedge(V)"
},
{
"math_id": 31,
"text": "V=\\bigoplus_{n>0}V^n,"
},
{
"math_id": 32,
"text": "V(0)\\subseteq V(1) \\subseteq \\cdots"
},
{
"math_id": 33,
"text": "d=0"
},
{
"math_id": 34,
"text": "V(0)"
},
{
"math_id": 35,
"text": "d(V(k))"
},
{
"math_id": 36,
"text": "\\bigwedge(V(k-1))"
},
{
"math_id": 37,
"text": "ab=(-1)^{ij}ba"
},
{
"math_id": 38,
"text": "A^j"
},
{
"math_id": 39,
"text": "\\bigwedge^{+}(V)^2"
},
{
"math_id": 40,
"text": "\\bigwedge^{+}(V)"
},
{
"math_id": 41,
"text": "\\bigwedge (V)"
},
{
"math_id": 42,
"text": "\\bigwedge (V)\\to A"
},
{
"math_id": 43,
"text": "A^0=\\Q"
},
{
"math_id": 44,
"text": "A_{PL}(X)"
},
{
"math_id": 45,
"text": "\\bigwedge V"
},
{
"math_id": 46,
"text": "V^1=0"
},
{
"math_id": 47,
"text": "V^k"
},
{
"math_id": 48,
"text": "S^1 \\vee S^2"
},
{
"math_id": 49,
"text": "2n+1 > 1"
},
{
"math_id": 50,
"text": "2n+1"
},
{
"math_id": 51,
"text": "da = 0"
},
{
"math_id": 52,
"text": "2n > 0"
},
{
"math_id": 53,
"text": "2n"
},
{
"math_id": 54,
"text": "4n+1"
},
{
"math_id": 55,
"text": "db = a^2"
},
{
"math_id": 56,
"text": "1, a, b\\to a^2"
},
{
"math_id": 57,
"text": "ab\\to a^3"
},
{
"math_id": 58,
"text": "a^2b \\to a^4, \\ldots"
},
{
"math_id": 59,
"text": "\\mathbb{CP}^n"
},
{
"math_id": 60,
"text": "n > 0"
},
{
"math_id": 61,
"text": "du = 0"
},
{
"math_id": 62,
"text": "dx = u^{n+1}"
},
{
"math_id": 63,
"text": "1, u, u^2, \\ldots ,u^n"
},
{
"math_id": 64,
"text": "x \\to u^{n+1}"
},
{
"math_id": 65,
"text": "xu \\to u^{n+2}, \\ldots"
},
{
"math_id": 66,
"text": "db = 0"
},
{
"math_id": 67,
"text": "dx = a^2"
},
{
"math_id": 68,
"text": "dy = ab"
},
{
"math_id": 69,
"text": "xb-ay"
},
{
"math_id": 70,
"text": " \\langle [a], [a], [b] \\rangle "
},
{
"math_id": 71,
"text": "H_*(X,\\Q)"
},
{
"math_id": 72,
"text": "\\pi_*(X) \\otimes \\Q"
},
{
"math_id": 73,
"text": "C > 1"
},
{
"math_id": 74,
"text": " \\sum_{i=1}^n \\dim_{\\Q}\\pi_i(X)\\otimes {\\Q} \\geq C^n"
},
{
"math_id": 75,
"text": "n \\ge N"
},
{
"math_id": 76,
"text": "b_i(X)"
},
{
"math_id": 77,
"text": "\\binom{n}{i}"
},
{
"math_id": 78,
"text": "b_{2i+1}(X)"
}
]
| https://en.wikipedia.org/wiki?curid=13998981 |
1400173 | Generic polynomial | In mathematics, a generic polynomial refers usually to a polynomial whose coefficients are indeterminates. For example, if "a", "b", and "c" are indeterminates, the generic polynomial of degree two in "x" is formula_0
However in Galois theory, a branch of algebra, and in this article, the term "generic polynomial" has a different, although related, meaning: a generic polynomial for a finite group "G" and a field "F" is a monic polynomial "P" with coefficients in the field of rational functions "L" = "F"("t"1, ..., "t""n") in "n" indeterminates over "F", such that the splitting field "M" of "P" has Galois group "G" over "L", and such that every extension "K"/"F" with Galois group "G" can be obtained as the splitting field of a polynomial which is the specialization of "P" resulting from setting the "n" indeterminates to "n" elements of "F". This is sometimes called "F-generic" or relative to the field "F"; a Q-"generic" polynomial, which is generic relative to the rational numbers, is called simply generic.
The existence, and especially the construction, of a generic polynomial for a given Galois group provides a complete solution to the inverse Galois problem for that group. However, not all Galois groups have generic polynomials, a counterexample being the cyclic group of order eight.
For example,
formula_1
is a generic polynomial for "S""n".
Examples of generic polynomials.
Generic polynomials are known for all transitive groups of degree 5 or less.
Generic dimension.
The generic dimension for a finite group "G" over a field "F", denoted formula_3, is defined as the minimal number of parameters in a generic polynomial for "G" over "F", or formula_4 if no generic polynomial exists.
Examples: formula_5, formula_6, formula_7, formula_8, formula_9, and formula_10.
{
"math_id": 0,
"text": "ax^2+bx+c."
},
{
"math_id": 1,
"text": "x^n + t_1 x^{n-1} + \\cdots + t_n"
},
{
"math_id": 2,
"text": "H_{p^3}"
},
{
"math_id": 3,
"text": "gd_{F}G"
},
{
"math_id": 4,
"text": "\\infty"
},
{
"math_id": 5,
"text": "gd_{\\mathbb{Q}}A_3=1"
},
{
"math_id": 6,
"text": "gd_{\\mathbb{Q}}S_3=1"
},
{
"math_id": 7,
"text": "gd_{\\mathbb{Q}}D_4=2"
},
{
"math_id": 8,
"text": "gd_{\\mathbb{Q}}S_4=2"
},
{
"math_id": 9,
"text": "gd_{\\mathbb{Q}}D_5=2"
},
{
"math_id": 10,
"text": "gd_{\\mathbb{Q}}S_5=2"
}
]
| https://en.wikipedia.org/wiki?curid=1400173 |
14003441 | Bag-of-words model | Text represented as an unordered collection of words
The bag-of-words model (BoW) is a model of text which uses a representation of text that is based on an unordered collection (a "bag") of words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most of syntax or grammar) but captures multiplicity.
The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier. It has also been used for computer vision.
An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on "Distributional Structure".
Definition.
The following models a text document using bag-of-words. Here are two simple text documents:
(1) John likes to watch movies. Mary likes movies too.
(2) Mary also likes to watch football games.
Based on these two text documents, a list is constructed as follows for each document:
"John","likes","to","watch","movies","Mary","likes","movies","too"
"Mary","also","likes","to","watch","football","games"
Representing each bag-of-words as a JSON object, and assigning it to the respective JavaScript variable:
BoW1 = {"John":1,"likes":2,"to":1,"watch":1,"movies":2,"Mary":1,"too":1};
BoW2 = {"Mary":1,"also":1,"likes":1,"to":1,"watch":1,"football":1,"games":1};
Each key is the word, and each value is the number of occurrences of that word in the given text document.
The order of elements is free, so, for example, codice_0 is also equivalent to "BoW1". This is also what we expect from a strict "JSON object" representation.
Note: if another document is like a union of these two,
(3) John likes to watch movies. Mary likes movies too. Mary also likes to watch football games.
its JavaScript representation will be:
BoW3 = {"John":1,"likes":3,"to":2,"watch":2,"movies":2,"Mary":2,"too":1,"also":1,"football":1,"games":1};
So, as we see in the bag algebra, the "union" of two documents in the bags-of-words representation is, formally, the disjoint union, summing the multiplicities of each element:
formula_0
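This union of bags can be checked with a short Python sketch (the variable names mirror the JSON objects above and are purely illustrative):

from collections import Counter

bow1 = Counter({"John": 1, "likes": 2, "to": 1, "watch": 1, "movies": 2, "Mary": 1, "too": 1})
bow2 = Counter({"Mary": 1, "also": 1, "likes": 1, "to": 1, "watch": 1, "football": 1, "games": 1})

# Counter addition sums the multiplicities of each element: the disjoint union of bags.
bow3 = bow1 + bow2
print(bow3["likes"])  # 3, as in BoW3 above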
Word order.
The BoW representation of a text removes all word ordering. For example, the BoW representations of "man bites dog" and "dog bites man" are the same, so any algorithm that operates with a BoW representation of text must treat them in the same way. Despite this lack of syntax or grammar, BoW representation is fast and may be sufficient for simple tasks that do not require word order. For instance, for document classification, if the words "stocks", "trade", and "investors" appear multiple times, then the text is likely a financial report, even though it would be insufficient to distinguish between "Yesterday, investors were rallying, but today, they are retreating." and "Yesterday, investors were retreating, but today, they are rallying." So the BoW representation would be insufficient to determine the detailed meaning of the document.
Implementations.
Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system).
Python implementation.
from typing import List

from tensorflow.keras.preprocessing.text import Tokenizer

sentence = ["John likes to watch movies. Mary likes movies too."]

def print_bow(sentence: List[str]) -> None:
    # Build the vocabulary (word index) from the text.
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(sentence)
    # Convert the text into sequences of integer word ids.
    sequences = tokenizer.texts_to_sequences(sentence)
    word_index = tokenizer.word_index
    # Count the occurrences of each vocabulary word in the first sentence.
    bow = {}
    for key in word_index:
        bow[key] = sequences[0].count(word_index[key])
    print(f"Bag of word sentence 1:\n{bow}")
    print(f"We found {len(word_index)} unique tokens.")

print_bow(sentence)
Hashing trick.
A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function. Thus, no memory is required to store a dictionary. Hash collisions are typically dealt with by using freed-up memory to increase the number of hash buckets. In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
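A minimal Python sketch of the hashing trick follows (purely illustrative: it uses Python's built-in hash, whose values vary between runs unless PYTHONHASHSEED is fixed, and an arbitrarily chosen bucket count):

def hashed_bow(text: str, num_buckets: int = 8) -> list:
    # Map each word directly to a bucket index; no word-to-index dictionary is stored.
    counts = [0] * num_buckets
    for word in text.lower().split():
        counts[hash(word) % num_buckets] += 1
    return counts

print(hashed_bow("John likes to watch movies. Mary likes movies too."))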
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "BoW3 = BoW1 \\biguplus BoW2"
}
]
| https://en.wikipedia.org/wiki?curid=14003441 |
14005026 | Diffusiophoresis and diffusioosmosis | Diffusiophoresis is the spontaneous motion of colloidal particles or molecules in a fluid, induced by a concentration gradient of a different substance. In other words, it is motion of one species, A, in response to a concentration gradient in another species, B. Typically, A is colloidal particles which are in aqueous solution in which B is a dissolved salt such as sodium chloride, and so the particles of A are much larger than the ions of B. But both A and B could be polymer molecules, and B could be a small molecule. For example, concentration gradients in ethanol solutions in water move 1 μm diameter colloidal particles with diffusiophoretic velocities formula_0 of order 0.1 to 1 μm/s, the movement is towards regions of the solution with lower ethanol concentration (and so higher water concentration). Both species A and B will typically be diffusing but diffusiophoresis is distinct from simple diffusion: in simple diffusion a species A moves down a gradient in its own concentration.
Diffusioosmosis, also referred to as capillary osmosis, is flow of a solution relative to a fixed wall or pore surface, where the flow is driven by a concentration gradient in the solution. This is distinct from flow relative to a surface driven by a gradient in the hydrostatic pressure in the fluid. In diffusioosmosis the hydrostatic pressure is uniform and the flow is due to a concentration gradient.
Diffusioosmosis and diffusiophoresis are essentially the same phenomenon. They are both relative motion of a surface and a solution, driven by a concentration gradient in the solution. This motion is called diffusiophoresis when the solution is considered static with particles moving in it due to relative motion of the fluid at the surface of these particles. The term diffusioosmosis is used when the surface is viewed as static, and the solution flows.
A well-studied example of diffusiophoresis is the motion of colloidal particles in an aqueous electrolyte solution, where a gradient in the concentration of the electrolyte causes motion of the colloidal particles. Colloidal particles may be hundreds of nanometres or larger in diameter, while the interfacial double layer region at the surface of the colloidal particle will be of order the Debye length wide, and this is typically only nanometres. So here, the interfacial width is much smaller than the size of the particle, and then the gradient in the smaller species drives diffusiophoretic motion of the colloidal particles largely through motion in the interfacial double layer.
Diffusiophoresis was first studied by Derjaguin and coworkers in 1947.
Applications of diffusiophoresis.
Diffusiophoresis, by definition, moves colloidal particles, and so the applications of diffusiophoresis are to situations where we want to move colloidal particles. Colloidal particles are typically between 10 nanometres and a few micrometres in size. Simple diffusion of colloids is fast on length scales of a few micrometres, and so diffusiophoresis would not be useful, whereas on length scales larger than millimetres, diffusiophoresis may be slow as its speed decreases with decreasing size of the solute concentration gradient. Thus, typically diffusiophoresis is employed on length scales approximately in the range of a micrometre to a millimetre. Applications include moving particles into or out of pores of that size, and helping or inhibiting mixing of colloidal particles.
In addition, solid surfaces that are slowly dissolving will create concentration gradients near them, and these gradients may drive movement of colloidal particles towards or away from the surface. This was studied by Prieve in the context of latex particles being pulled towards, and coating, a dissolving steel surface.
Relation to thermophoresis, multicomponent diffusion and the Marangoni effect.
Diffusiophoresis is an analogous phenomenon to thermophoresis, where a species A moves in response to a temperature gradient. Both diffusiophoresis and thermophoresis are governed by Onsager reciprocal relations. Simply speaking, a gradient in any thermodynamic quantity, such as the concentration of any species, or temperature, will drive motion of all thermodynamic quantities, i.e., motion of all species present, and a temperature flux. Each gradient provides a thermodynamic force that moves the species present, and the Onsager reciprocal relations govern the relationship between the forces and the motions.
Diffusiophoresis is a special case of multicomponent diffusion. Multicomponent diffusion is diffusion in mixtures, and diffusiophoresis is the special case where we are interested in the movement of one species, usually a colloidal particle, in a gradient of a much smaller species, such as a dissolved salt such as sodium chloride in water, or a miscible liquid, such as ethanol in water. Thus diffusiophoresis always occurs in a mixture, typically a three-component mixture of water, salt and a colloidal species, and we are interested in the cross-interaction between the salt and the colloidal particle.
It is the very large difference in size between the colloidal particle, which may be 1 μm across, and the size of the ions or molecules, which are less than 1 nm across, that makes diffusiophoresis closely related to diffusioosmosis at a flat surface. In both cases the forces that drive the motion are largely localised to the interfacial region, which is a few molecules across and so typically of order a nanometer across. Over distances of order a nanometer, there is little difference between the surface of a colloidal particle 1 μm across, and a flat surface.
Diffusioosmosis is flow of a fluid at a solid surface, or in other words, flow at a solid/fluid interface. The Marangoni effect is flow at a fluid/fluid interface. So the two phenomena are analogous with the difference being that in diffusioosmosis one of the phases is a solid. Both diffusioosmosis and the Marangoni effect are driven by gradients in the interfacial free energy, i.e., in both cases the induced velocities are zero if the interfacial free energy is uniform in space, and in both cases if there are gradients the velocities are directed along the direction of increasing interfacial free energy.
Theory for diffusioosmotic flow of a solution.
In diffusioosmosis, for a surface at rest the velocity increases from zero at the surface to the diffusioosmotic velocity, over the width of the interface between the surface and the solution. Beyond this distance, the diffusioosmotic velocity does not vary with distance from the surface. The driving force for diffusioosmosis is thermodynamic, i.e., it acts to reduce the free energy of the system, and so the direction of flow is away from surface regions of low surface free energy, and towards regions of high surface free energy. For a solute that adsorbs at the surface, diffusioosmotic flow is away from regions of high solute concentration, while for solutes that are repelled by the surface, flow is away from regions of low solute concentration.
For gradients that are not too large, the diffusioosmotic slip velocity, i.e., the relative flow velocity far from the surface, will be proportional to the solute concentration gradient
formula_1
where formula_2 is a diffusioosmotic coefficient, and formula_3 is the solute concentration. When the solute is ideal and interacts with a surface in the formula_4 plane at formula_5 via a potential formula_6, the coefficient formula_2 is given by
formula_7
where formula_8 is Boltzmann's constant, formula_9 is the absolute temperature, and formula_10 is the viscosity in the interfacial region, assumed to be constant in the interface. This expression assumes that the fluid velocity for fluid in contact with the surface is forced to be zero, by interaction between the fluid and the wall. This is called the no-slip condition.
To understand these expressions better, we can consider a very simple model, where the surface simply excludes an ideal solute from an interface of width formula_11; this would be the Asakura–Oosawa model of an ideal polymer against a hard wall. Then the integral is simply formula_12 and the diffusioosmotic slip velocity is
formula_13
Note that the slip velocity is directed towards increasing solute concentrations.
A particle much larger than formula_11 moves with a diffusiophoretic velocity formula_14 relative to the surrounding solution. So in this case diffusiophoresis moves particles towards lower solute concentrations.
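To get a feel for the magnitudes involved, here is a rough Python estimate of the slip speed from the expression above; all numerical values (temperature, interface width, viscosity, and gradient) are illustrative assumptions, not values from the text:

# Slip speed v = kT R^2 / (2 eta) * grad(c), for the excluded-solute model above.
kT = 1.380649e-23 * 298   # thermal energy at an assumed room temperature, J
R = 10e-9                 # assumed interface width, m
eta = 1.0e-3              # viscosity of water, Pa s
grad_c = 1000 * 6.022e23  # assumed number-density gradient: ~1 mM per mm, in m^-4

v_slip = kT * R**2 / (2 * eta) * grad_c
print(f"slip speed ~ {v_slip * 1e6:.2f} micrometres per second")  # ~0.1 um/s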
Derivation of diffusioosmotic velocity from Stokes flow.
In this simple model, formula_15 can also be derived directly from the expression for fluid flow in the Stokes limit for an incompressible fluid, which is
formula_16
for formula_17 the fluid flow velocity and formula_18 the pressure. We consider an infinite surface in the formula_4 plane at formula_5, and enforce stick boundary conditions there, i.e., formula_19. We take the concentration gradient to be along the formula_20 axis, i.e., formula_21. Then the only non-zero component of the flow velocity formula_17 is along x, formula_22, and it depends only on height formula_23. So the only non-zero component of Stokes' equation is
formula_24
In diffusioosmosis, in the bulk of the fluid (i.e., outside the interface) the hydrostatic pressure is assumed to be uniform (as we expect any gradients to relax away by fluid flow) and so in bulk
formula_25
for formula_26 the solvent's contribution to the hydrostatic pressure, and formula_27 the contribution of the solute, called the osmotic pressure. Thus in the bulk the gradients obey
formula_28
As we have assumed the solute is ideal, formula_29, and so
formula_30
Our solute is excluded from a region of width formula_11 (the interfacial region) from the surface, and so in the interface formula_31, and so there formula_32. Assuming continuity of the solvent contribution into the interface, we have a gradient of the hydrostatic pressure in the interface
formula_33
i.e., in the interface there is a gradient of the hydrostatic pressure equal to the negative of the bulk gradient in the osmotic pressure. It is this gradient of the hydrostatic pressure formula_18 in the interface that creates the diffusioosmotic flow. Now that we have formula_34, we can substitute it into the Stokes equation and integrate twice, giving
formula_35
formula_36
where formula_37, formula_38, formula_39 and formula_40 are integration constants. Far from the surface the flow velocity must be a constant, so formula_41. We have imposed zero flow velocity at formula_5, so formula_42. Then imposing continuity where the interface meets the bulk, i.e., forcing formula_43 and formula_44 to be continuous at formula_45, we determine formula_37 and formula_40, and so get
formula_46
formula_47
This gives, as it should, the same expression for the slip velocity as above. This result is for a specific and very simple model, but it does illustrate general features of diffusioosmosis: 1) the hydrostatic pressure is, by definition, uniform in the bulk (flow induced by pressure gradients in the bulk is a common but separate physical phenomenon), but there is a gradient in the pressure in the interface; 2) this pressure gradient in the interface causes the velocity to vary in the direction perpendicular to the surface, and this results in a slip velocity, i.e., in the bulk of the fluid moving relative to the surface; 3) away from the interface the velocity is constant; this type of flow is sometimes called plug flow.
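As a consistency check (a sketch using sympy; the symbol G stands for the constant bulk gradient of the solute concentration along x), one can verify that the interfacial profile above satisfies the Stokes equation and the boundary conditions:

import sympy as sp

z, R, eta, kT, G = sp.symbols("z R eta kT G", positive=True)
# Interfacial velocity profile derived above, with G = d(c_sol)/dx.
u = kT * R**2 * G / (2 * eta) * (2 * z / R - z**2 / R**2)

print(sp.simplify(eta * sp.diff(u, z, 2) + kT * G))  # 0: eta u'' = dp/dx = -kT G
print(u.subs(z, 0))                                  # 0: no slip at the wall
print(sp.simplify(u.subs(z, R) - kT * R**2 * G / (2 * eta)))  # 0: matches the bulk slip velocity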
Diffusiophoresis in salt solutions.
In many applications of diffusiophoresis, the motion is driven by gradients in the concentration of a salt (electrolyte), such as sodium chloride in water. Colloidal particles in water are typically charged, and there is an electrostatic potential, called a zeta potential, at their surface. This charged surface of the colloidal particle interacts with a gradient in salt concentration, and this gives rise to a diffusiophoretic velocity formula_48 given by
formula_49
where formula_50 is the permittivity of water, formula_10 is the viscosity of water, formula_51 is the zeta potential of the colloidal particle in the salt solution, formula_52 is the reduced difference between the diffusion constant of the positively charged ion, formula_53, and the diffusion constant of the negatively charged ion, formula_54, and formula_55 is the salt concentration. formula_56 is the gradient, i.e., rate of change with position, of the logarithm of the salt concentration, which is equivalent to the rate of change of the salt concentration divided by the salt concentration – it is effectively one over the distance over which the concentration decreases by a factor of e. The above equation is approximate, and only valid for 1:1 electrolytes such as sodium chloride.
Note that there are two contributions to diffusiophoresis of a charged particle in a salt gradient, which give rise to the two terms in the above equation for formula_48. The first is due to the fact that whenever there is a salt concentration gradient, then unless the diffusion constants of the positive and negative ions are exactly equal to each other, there is an electric field, i.e., the gradient acts a little like a capacitor. This electric field generated by the salt gradient drives electrophoresis of the charged particle, just as an externally applied electric field does. This gives rise to the first term in the equation above, i.e., diffusiophoresis at a velocity formula_57.
The second part is due to the surface free energy of a charged particle decreasing with increasing salt concentration; this is a similar mechanism to that found in diffusiophoresis in gradients of neutral substances. This gives rise to the second part of the diffusiophoretic velocity, formula_58. Note that this simple theory predicts that this contribution to the diffusiophoretic motion is always up a salt concentration gradient: it always moves particles towards higher salt concentration. By contrast, the sign of the electric-field contribution to diffusiophoresis depends on the sign of formula_59. So for example, for a negatively charged particle, formula_60, and if the positively charged ions diffuse faster than the negatively charged ones, then this term will push particles down a salt gradient, but if it is the negatively charged ions that diffuse faster, then this term pushes the particles up the salt gradient.
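For a rough numerical illustration of the expression for formula_48 (every value below is an assumption chosen for illustration; the ionic diffusion constants are standard literature values for sodium and chloride):

# U = (eps/eta) * [ (kT/e) * beta * zeta + zeta^2 / 8 ] * grad(ln c_salt)
eps = 80 * 8.854e-12      # permittivity of water, F/m
eta = 1.0e-3              # viscosity of water, Pa s
kT_over_e = 0.0257        # thermal voltage at an assumed room temperature, V
zeta = -0.050             # assumed zeta potential of the particle, V
# beta = (D+ - D-)/(D+ + D-) for NaCl, with D_Na ~ 1.33e-9, D_Cl ~ 2.03e-9 m^2/s
beta = (1.33e-9 - 2.03e-9) / (1.33e-9 + 2.03e-9)
grad_ln_c = 1.0 / 100e-6  # assumed salt concentration varying over ~100 micrometres, 1/m

U = (eps / eta) * (kT_over_e * beta * zeta + zeta**2 / 8) * grad_ln_c
print(f"diffusiophoretic speed ~ {U * 1e6:.1f} micrometres per second")  # a few um/s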
Practical applications.
A group from Princeton University reported the application of diffusiophoresis to water purification. Contaminated water is treated with CO2 to create carbonic acid and to split the water into a waste stream and a potable water stream. This allows for easy ionic separation of suspended particles. Compared to traditional water filtration methods for dirty water sources, this offers a large opportunity for savings in energy cost and time in making drinking water safe.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "{\\bf v}_{dp}"
},
{
"math_id": 1,
"text": "{\\bf v}_{slip}=-K\\nabla c_{sol}"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "c_{sol}"
},
{
"math_id": 4,
"text": "xy"
},
{
"math_id": 5,
"text": "z=0"
},
{
"math_id": 6,
"text": "\\phi_{solute}(z)"
},
{
"math_id": 7,
"text": "K=\\frac{kT}{\\eta}\\int_0^{\\infty}z\\left[\\exp(-\\phi_{solute}/kT)-1\\right]{\\rm d}z"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\\eta"
},
{
"math_id": 11,
"text": "R"
},
{
"math_id": 12,
"text": "-(1/2)R^2"
},
{
"math_id": 13,
"text": "{\\bf v}_{slip}=\\frac{kTR^2}{2\\eta}\\nabla c_{sol}"
},
{
"math_id": 14,
"text": "{\\bf v}_{dp}=-{\\bf v}_{slip}"
},
{
"math_id": 15,
"text": "{\\bf v}_{slip}"
},
{
"math_id": 16,
"text": "\\eta\\nabla^2{\\bf u}=\\nabla p"
},
{
"math_id": 17,
"text": "{\\bf u}"
},
{
"math_id": 18,
"text": "p"
},
{
"math_id": 19,
"text": "{\\bf u}(x,y,z=0)=(0,0,0)"
},
{
"math_id": 20,
"text": "x"
},
{
"math_id": 21,
"text": "\\nabla c_{sol}=(\\partial c_{sol}/\\partial x,0,0)"
},
{
"math_id": 22,
"text": "u_x"
},
{
"math_id": 23,
"text": "z"
},
{
"math_id": 24,
"text": "\\eta\\frac{\\partial ^2u_x(z)}{\\partial z^2}=\\frac{\\partial p}{\\partial x}"
},
{
"math_id": 25,
"text": "\\mbox{In bulk:}~~~~~~ p=\\Pi+p_{solv}=\\mbox{constant}"
},
{
"math_id": 26,
"text": "p_{solv}"
},
{
"math_id": 27,
"text": "\\Pi"
},
{
"math_id": 28,
"text": "\\mbox{In bulk:}~~~~~~ 0=\\frac{\\partial \\Pi}{\\partial x}+\\frac{\\partial p_{solv}}{\\partial x}"
},
{
"math_id": 29,
"text": "\\Pi= kTc_{sol}"
},
{
"math_id": 30,
"text": "\\mbox{In bulk:}~~~~~~ \\frac{\\partial p_{solv}}{\\partial x}=-kT\\frac{\\partial c_{sol}}{\\partial x}"
},
{
"math_id": 31,
"text": "\\Pi=0"
},
{
"math_id": 32,
"text": "p=p_{solv}"
},
{
"math_id": 33,
"text": "\\mbox{In interface:}~~~~~~\\frac{\\partial p }{\\partial x}= \\frac{\\partial p_{solv}}{\\partial x}=-kT\\frac{\\partial c_{sol}}{\\partial x}"
},
{
"math_id": 34,
"text": "\\partial p/\\partial x"
},
{
"math_id": 35,
"text": "\\mbox{In interface:}~~~~~~u_x(z)= -\\frac{kT z^2}{2\\eta}\\frac{\\partial c_{sol}}{\\partial x}+Az+B"
},
{
"math_id": 36,
"text": "\\mbox{In bulk:}~~~~~~u_x(z)=Cz+D"
},
{
"math_id": 37,
"text": "A"
},
{
"math_id": 38,
"text": "B"
},
{
"math_id": 39,
"text": "C"
},
{
"math_id": 40,
"text": "D"
},
{
"math_id": 41,
"text": "C=0"
},
{
"math_id": 42,
"text": "B=0"
},
{
"math_id": 43,
"text": "u_x(z)"
},
{
"math_id": 44,
"text": "\\partial u_x(z)/\\partial z"
},
{
"math_id": 45,
"text": "z=R"
},
{
"math_id": 46,
"text": "\\mbox{In interface:}~~~~~~u_x(z)= \\frac{kTR^2}{2\\eta}\\left(\\frac{\\partial c_{sol}}{\\partial x}\\right)\\left(\\frac{2z}{R}-\\frac{z^2}{R^2}\\right)"
},
{
"math_id": 47,
"text": "\\mbox{In bulk:}~~~~~~u_x(z)=\\frac{kTR^2}{2\\eta}\\left(\\frac{\\partial c_{sol}}{\\partial x}\\right)=v_{slip}"
},
{
"math_id": 48,
"text": "{\\bf U}"
},
{
"math_id": 49,
"text": "{\\bf U}=\\frac{\\epsilon}{\\eta}\\left[\\frac{kT}{e}\\beta\\zeta+\\frac{\\zeta^2}{8}\\right]\\nabla \\ln c_{salt}"
},
{
"math_id": 50,
"text": "\\epsilon"
},
{
"math_id": 51,
"text": "\\zeta"
},
{
"math_id": 52,
"text": "\\beta=(D_+-D_-)/(D_++D_-)"
},
{
"math_id": 53,
"text": "D_+"
},
{
"math_id": 54,
"text": "D_-"
},
{
"math_id": 55,
"text": "c_{salt}"
},
{
"math_id": 56,
"text": "\\nabla \\ln c_{salt}"
},
{
"math_id": 57,
"text": "(\\epsilon/\\eta)(kT/e)\\beta\\zeta\\nabla \\ln c_{salt}"
},
{
"math_id": 58,
"text": "(\\epsilon\\zeta^2/8\\eta)\\nabla \\ln c_{salt}"
},
{
"math_id": 59,
"text": "\\beta\\zeta"
},
{
"math_id": 60,
"text": "\\zeta<0"
}
]
| https://en.wikipedia.org/wiki?curid=14005026 |
14006293 | Quasi-Monte Carlo methods in finance | High-dimensional integrals in hundreds or thousands of variables occur commonly in finance. These integrals have to be computed numerically to within a threshold formula_0. If the integral is of dimension formula_1 then in the worst case, where one has a guarantee of error at most formula_0, the computational complexity is typically of order formula_2. That is, the problem suffers the curse of dimensionality. In 1977 P. Boyle, University of Waterloo, proposed using Monte Carlo (MC) to evaluate options. Starting in early 1992, J. F. Traub, Columbia University, and a graduate student at the time, S. Paskov, used quasi-Monte Carlo (QMC) to price a Collateralized mortgage obligation with parameters specified by Goldman Sachs. Even though it was believed by the world's leading experts that QMC should not be used for high-dimensional integration, Paskov and Traub found that QMC beat MC by one to three orders of magnitude and also enjoyed other desirable attributes. Their results were first published in 1995. Today QMC is widely used in the financial sector to value financial derivatives; see list of books below.
QMC is not a panacea for all high-dimensional integrals. A number of explanations have been proposed for why QMC is so good for financial derivatives. This continues to be a very fruitful research area.
Monte Carlo and quasi-Monte Carlo methods.
Integrals in hundreds or thousands of variables are common in computational finance. These have to be approximated numerically to within an error threshold formula_0. It is well known that if a worst case guarantee of error at most formula_0 is required then the computational complexity of integration may be exponential in formula_1, the dimension of the integrand; see Ch. 3 for details. To break this curse of dimensionality one can use the Monte Carlo (MC) method defined by
formula_3
where the evaluation points formula_4 are randomly chosen. It is well known that the expected error of Monte Carlo is of order formula_5. Thus, the cost of the algorithm that has error formula_0 is of order formula_6, breaking the curse of dimensionality.
Of course in computational practice pseudo-random points are used. Figure 1 shows the distribution of 500 pseudo-random points on the unit square.
Note there are regions where there are no points and other regions where there are clusters of points. It would be desirable to sample the integrand at uniformly distributed points. A rectangular grid would be uniform but even if there were only 2 grid points in each Cartesian direction there would be formula_7 points. So the desideratum is as few points as possible, chosen to be as uniformly distributed as possible.
It turns out there is a well-developed part of number theory which deals exactly with this desideratum. Discrepancy is a measure of deviation from uniformity, so what one wants are low discrepancy sequences (LDS). An example of the distribution of 500 LDS points is given in Figure 2.
Numerous LDS have been created, named after their inventors, for example: Halton, Hammersley, Sobol', Faure, and Niederreiter sequences.
Generally, the quasi-Monte Carlo (QMC) method is defined by
formula_8
where the formula_4 belong to an LDS. The standard terminology quasi-Monte Carlo is somewhat unfortunate since MC is a randomized method whereas QMC is purely deterministic.
The uniform distribution of LDS is desirable. But the worst case error of QMC is of order
formula_9
where formula_10 is the number of sample points. See for the theory of LDS and references to the literature. The rate of convergence of LDS may be contrasted with the expected rate of convergence of MC, which is formula_5. For formula_1 small the rate of convergence of QMC is faster than MC, but for formula_1 large the factor formula_11 is devastating. For example, if formula_12, then even with formula_13 the QMC error is proportional to formula_14. Thus, it was widely believed by the world's leading experts that QMC should not be used for high-dimensional integration. For example, in 1992 Bratley, Fox and Niederreiter performed extensive testing on certain mathematical problems. They concluded that "in high-dimensional problems (say formula_15), QMC seems to offer no practical advantage over MC". In 1993, Rensburg and Torrie compared QMC with MC for the numerical estimation of high-dimensional integrals which occur in computing virial coefficients for the hard-sphere fluid. They concluded that QMC is more effective than MC only if formula_16. As we shall see, tests on 360-dimensional integrals arising from a collateralized mortgage obligation (CMO) lead to very different conclusions.
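A minimal modern illustration of such a comparison is sketched below (assuming NumPy and SciPy are available; the toy integrand, whose exact integral over the unit cube is 1, is chosen purely for demonstration and is unrelated to the CMO integrand):

import numpy as np
from scipy.stats import qmc

d, n = 8, 2**12  # dimension and number of points (a power of 2 suits Sobol')
f = lambda x: np.prod(1 + 0.5 * (x - 0.5), axis=1)  # exact integral over [0,1]^d is 1

mc_points = np.random.default_rng(0).random((n, d))           # pseudo-random points
qmc_points = qmc.Sobol(d=d, scramble=True, seed=0).random(n)  # Sobol' LDS points

print("MC error: ", abs(f(mc_points).mean() - 1.0))
print("QMC error:", abs(f(qmc_points).mean() - 1.0))

On such smooth integrands the Sobol' estimate is typically an order of magnitude or more closer to the true value than the pseudo-random estimate at the same number of points.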
Woźniakowski's 1991 paper, showing the connection between average case complexity of integration and QMC, led to new interest in QMC.
Woźniakowski's result received considerable coverage in the scientific press.
In early 1992, I. T. Vanderhoof, New York University, became aware of Woźniakowski's result and gave Woźniakowski's colleague J. F. Traub, Columbia University, a CMO with parameters set by Goldman Sachs. This CMO had 10 tranches, each requiring the computation of a 360 dimensional integral. Traub asked a Ph.D. student, Spassimir Paskov, to compare QMC with MC for the CMO. In 1992 Paskov built a software system called FinDer and ran extensive tests. To the Columbia research group's surprise and initial disbelief, Paskov reported that QMC was always superior to MC in a number of ways. Details are given below. Preliminary results were presented by Paskov and Traub to a number of Wall Street firms in Fall 1993 and Spring 1994. The firms were initially skeptical of the claim that QMC was superior to MC for pricing financial derivatives. A January 1994 article in Scientific American by Traub and Woźniakowski discussed the theoretical issues and reported that "preliminary results obtained by testing certain finance problems suggests the superiority of the deterministic methods in practice".
In Fall 1994 Paskov wrote a Columbia University Computer Science Report which appeared in slightly modified form in 1997.
In Fall 1995 Paskov and Traub published a paper
in The Journal of Portfolio Management. They compared MC and two QMC methods. The two deterministic methods used Sobol and Halton low-discrepancy points. Since better LDS were created later, no comparison will be made between Sobol and Halton sequences. The experiments drew the following conclusions regarding the performance of MC and QMC on the 10 tranche CMO:
To summarize, QMC beats MC for the CMO on accuracy, confidence level, and computational speed.
This paper was followed by reports on tests by a number of researchers, which also led to the conclusion that QMC is superior to MC for a variety of high-dimensional finance problems. This includes papers by Caflisch and Morokoff (1996),
Joy, Boyle, Tan (1996),
Ninomiya and Tezuka (1996),
Papageorgiou and Traub (1996),
Acworth, Broadie and Glasserman (1997).
Further testing of the CMO was carried out by Anargyros Papageorgiou, who developed an improved version of the FinDer software system. The new results include the following:
Currently the highest reported dimension for which QMC outperforms MC is 65536 (formula_18.
The software is the Sobol' Sequence generator codice_0 which generates Sobol' Sequences satisfying Property A for all dimensions and Property A' for the adjacent dimensions.
Theoretical explanations.
The results reported so far in this article are empirical. A number of possible theoretical explanations have been advanced. This has been a very rich research area, leading to powerful new concepts, but a definitive answer has not been obtained.
A possible explanation of why QMC is good for finance is the following. Consider a tranche of the CMO mentioned earlier. The integral gives expected future cash flows from a basket of 30-year mortgages at 360 monthly intervals. Because of the discounted value of money, variables representing future times are increasingly less important. In a seminal paper, I. Sloan and H. Woźniakowski
introduced the idea of weighted spaces. In these spaces the dependence on the successive variables can be moderated by weights. If the weights decrease sufficiently rapidly the curse of dimensionality is broken even with a worst case guarantee. This paper led to a great amount of work on the tractability of integration and other problems. A problem is tractable when its complexity is of order formula_19 and formula_20 is independent of the dimension.
On the other hand, "effective dimension" was proposed by Caflisch, Morokoff and Owen as an indicator
of the difficulty of high-dimensional integration. The purpose was to explain
the remarkable success of quasi-Monte Carlo (QMC) in approximating the very-high-dimensional integrals in finance. They argued that
the integrands are of low effective dimension and that is why QMC is much faster than Monte Carlo (MC).
The impact of the arguments of Caflisch et al. was great.
A number of papers deal with the relationship between the error of QMC and the effective dimension.
It is known that QMC fails for certain functions that have high effective dimension.
However, low effective dimension is not a necessary condition for QMC to beat MC and for
high-dimensional integration
to be tractable. In 2005, Tezuka exhibited a class of functions of
formula_1 variables, all with maximum effective dimension equal to formula_1. For these functions QMC is very fast since its convergence rate is of order formula_21, where formula_10 is the number of function evaluations.
Isotropic integrals.
QMC can also be superior to MC and to other methods for isotropic problems, that is, problems where all variables are equally important. For example, Papageorgiou and Traub reported test results on the model integration problems suggested by the physicist B. D. Keister
formula_22
where formula_23 denotes the Euclidean norm and formula_24. Keister reports that using a standard numerical method some 220,000 points were needed to obtain a relative error on the order of formula_17. A QMC calculation using the generalized Faure low discrepancy sequence (QMC-GF) used only 500 points to obtain the same relative error. The same integral was tested for a range of values of formula_25 up to formula_26. Its error was
formula_27
formula_28, where formula_29 is the number of evaluations of formula_30. This may be compared with the MC method whose error was proportional to formula_5.
These are empirical results. In a theoretical investigation Papageorgiou proved that the convergence rate of QMC for a class of formula_1-dimensional isotropic integrals which includes the integral defined above is of the order
formula_31
This is with a worst case guarantee compared to the expected convergence rate of formula_5 of Monte Carlo and shows the superiority of QMC for this type of integral.
In another theoretical investigation Papageorgiou presented sufficient conditions for fast QMC convergence. The conditions apply to isotropic and non-isotropic problems and, in particular, to a number of problems in computational finance. He presented classes of functions where even in the worst case the convergence rate of QMC is of order
formula_32,
where formula_33 is a constant that depends on the class of functions.
But this is only a sufficient condition and leaves open the major question we pose in the next section.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "\\epsilon^{-d}"
},
{
"math_id": 3,
"text": "\\varphi^{\\mathop{\\rm MC}}(f)=\\frac 1n \\sum_{i=1}^nf(x_i),"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "n^{-1/2}"
},
{
"math_id": 6,
"text": "\\epsilon^{-2}"
},
{
"math_id": 7,
"text": "2^d"
},
{
"math_id": 8,
"text": " \\varphi^{\\mathop{\\rm QMC}}(f)=\\frac 1n \\sum_{i=1}^nf(x_i),"
},
{
"math_id": 9,
"text": "\\frac{(\\log n)^d}{n},"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "(\\log n)^d"
},
{
"math_id": 12,
"text": "d=360"
},
{
"math_id": 13,
"text": "\\log n=2"
},
{
"math_id": 14,
"text": "2^{360}"
},
{
"math_id": 15,
"text": "d > 12"
},
{
"math_id": 16,
"text": "d<10"
},
{
"math_id": 17,
"text": "10^{-2}"
},
{
"math_id": 18,
"text": "2^{16})"
},
{
"math_id": 19,
"text": "\\epsilon^{-p}"
},
{
"math_id": 20,
"text": "p"
},
{
"math_id": 21,
"text": "n^{-1}"
},
{
"math_id": 22,
"text": "\\left(\\frac 1{2\\pi}\\right)^{d/2} \\int_{\\mathbb R^d}\\cos(\\|x\\| )e^{-\\| x\\|^2}\\,dx,"
},
{
"math_id": 23,
"text": "\\|\\cdot\\|"
},
{
"math_id": 24,
"text": "d=25"
},
{
"math_id": 25,
"text": " d"
},
{
"math_id": 26,
"text": " d=100"
},
{
"math_id": 27,
"text": " c\\cdot n^{-1},"
},
{
"math_id": 28,
"text": "c<110"
},
{
"math_id": 29,
"text": " n"
},
{
"math_id": 30,
"text": "f"
},
{
"math_id": 31,
"text": "\\frac{\\sqrt{\\log n}}{n}."
},
{
"math_id": 32,
"text": "n^{-1+p(\\log n)^{-1/2}}"
},
{
"math_id": 33,
"text": "p\\ge 0"
}
]
| https://en.wikipedia.org/wiki?curid=14006293 |
14006400 | Magic number (oil) | Term in Economics
The magic number is a term in economics that denotes the price of crude oil (measured in dollars per barrel) at which a crude oil exporting economy runs a deficit.
Some countries support almost all spending from income derived from oil exports. As the price of oil drops, these countries take in less revenue from oil. The magic number denotes the point at which the revenue from oil is no longer sufficient to pay for spending. Mathematically, this can be expressed by the inequality:
formula_0
where Q is the quantity of oil exported, P is the price, and S is spending. The magic number is the value of P at which this inequality no longer holds, that is, the price at or below which the economy runs a deficit.
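Equivalently, setting Q × P = S and solving for P gives the magic number as P = S / Q. A minimal sketch, using purely hypothetical figures:

```python
def magic_number(annual_spending_usd: float, barrels_exported: float) -> float:
    """Break-even oil price per barrel: the P at which Q * P = S."""
    return annual_spending_usd / barrels_exported

# Hypothetical example: $50 billion of spending funded by 1 billion barrels per year.
print(magic_number(50e9, 1e9))  # 50.0 dollars per barrel
```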
PFC Energy publishes the magic number for all the OPEC nations.
"Qatar is at $21 a barrel, because it brings in much more oil money than it spends. Saudi Arabia's break-even point is at $49 a barrel. And Venezuela is at $58, second only to Nigeria's $65."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q \\times P > S"
}
]
| https://en.wikipedia.org/wiki?curid=14006400 |
1400840 | Chen's theorem | In number theory, Chen's theorem states that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes).
It is a weakened form of Goldbach's conjecture, which states that every even number greater than 2 is the sum of two primes.
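Although the theorem is an asymptotic statement, the decomposition it asserts is easy to check by brute force for small even numbers. A sketch using SymPy's primality and factorization helpers (an editorial illustration, not part of the proof):

```python
from sympy import isprime, factorint

def is_semiprime(n: int) -> bool:
    """True if n is a product of exactly two primes (counted with multiplicity)."""
    return sum(factorint(n).values()) == 2

def chen_decomposition(n: int):
    """Return (p, q) with n = p + q, p prime and q prime or semiprime, if any."""
    for p in range(2, n - 1):
        if isprime(p) and (isprime(n - p) or is_semiprime(n - p)):
            return p, n - p
    return None

# Every even number in a small range admits such a decomposition.
assert all(chen_decomposition(n) for n in range(4, 1001, 2))
print(chen_decomposition(100))  # (3, 97)
```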
History.
The theorem was first stated by Chinese mathematician Chen Jingrun in 1966, with further details of the proof in 1973. His original proof was much simplified by P. M. Ross in 1975. Chen's theorem is a giant step towards Goldbach's conjecture, and a remarkable result of sieve methods.
Chen's theorem represents the strengthening of a previous result due to Alfréd Rényi, who in 1947 had shown there exists a finite "K" such that any even number can be written as the sum of a prime number and the product of at most "K" primes.
Variations.
Chen's 1973 paper stated two results with nearly identical proofs. His Theorem I, on the Goldbach conjecture, was stated above. His Theorem II is a result on the twin prime conjecture. It states that if "h" is a positive even integer, there are infinitely many primes "p" such that "p" + "h" is either prime or the product of two primes.
Ying Chun Cai proved the following in 2002:
<templatestyles src="Block indent/styles.css"/>"There exists a natural number N such that every even integer n larger than N is a sum of a prime less than or equal to n"0.95 "and a number with at most two prime factors."
Tomohiro Yamada claimed a proof of the following explicit version of Chen's theorem in 2015:
<templatestyles src="Block indent/styles.css"/>"Every even number greater than formula_0 is the sum of a prime and a product of at most two primes." In 2022, Matteo Bordignon implies there are gaps in Yamada's proof, which Bordignon overcomes in his PhD. thesis.
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e^{e^{36}} \\approx 1.7\\cdot10^{1872344071119343}"
}
]
| https://en.wikipedia.org/wiki?curid=1400840 |
1401020 | Christoffel symbols | Array of numbers describing a metric connection
In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the "shape" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor. Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each "frame" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group O("p", "q"). As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold. The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols.
In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point.
At each point of the underlying "n"-dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted Γ"i""jk" for "i", "j", "k" = 1, 2, ..., "n". Each entry of this "n" × "n" × "n" array is a real number. Under "linear" coordinate transformations on the manifold, the Christoffel symbols transform like the components of a tensor, but under general coordinate transformations (diffeomorphisms) they do not. Most of the algebraic properties of the Christoffel symbols follow from their relationship to the affine connection; only a few follow from the fact that the structure group is the orthogonal group O("m", "n") (or the Lorentz group O(3, 1) for general relativity).
Christoffel symbols are used for performing practical calculations. For example, the Riemann curvature tensor can be expressed entirely in terms of the Christoffel symbols and their first partial derivatives. In general relativity, the connection plays the role of the gravitational force field with the corresponding gravitational potential being the metric tensor. When the coordinate system and the metric tensor share some symmetry, many of the Γ"i""jk" are zero.
The Christoffel symbols are named for Elwin Bruno Christoffel (1829–1900).
Note.
The definitions given below are valid for both Riemannian manifolds and pseudo-Riemannian manifolds, such as those of general relativity, with careful distinction being made between upper and lower indices (contra-variant and co-variant indices). The formulas hold for either sign convention, unless otherwise noted.
Einstein summation convention is used in this article, with vectors indicated by bold font. The connection coefficients of the Levi-Civita connection (or pseudo-Riemannian connection) expressed in a coordinate basis are called "Christoffel symbols".
Preliminary definitions.
Given a manifold formula_0, an atlas consists of a collection of charts formula_1 for each open cover formula_2. Such charts allow the standard vector basis formula_3 on formula_4 to be pulled back to a vector basis on the tangent space formula_5 of formula_0. This is done as follows. Given some arbitrary real function formula_6, the chart allows a gradient to be defined:
formula_7
This gradient is commonly called a pullback because it "pulls back" the gradient on formula_8 to a gradient on formula_0. The pullback is independent of the chart formula_9. In this way, the standard vector basis formula_3 on formula_8 pulls back to a standard ("coordinate") vector basis formula_10 on formula_5. This is called the "coordinate basis", because it explicitly depends on the coordinates on formula_8. It is sometimes called the "local basis".
This definition allows a common abuse of notation. The formula_11 were defined to be in one-to-one correspondence with the basis vectors formula_12 on formula_4. The notation formula_11 serves as a reminder that the basis vectors on the tangent space formula_5 came from a gradient construction. Despite this, it is common to "forget" this construction, and just write (or rather, define) vectors formula_13 on formula_5 such that formula_14. The full range of commonly used notation includes the use of arrows and boldface to denote vectors:
formula_15
where formula_16 is used as a reminder that these are defined to be equivalent notation for the same concept. The choice of notation is according to style and taste, and varies from text to text.
The coordinate basis provides a vector basis for vector fields on formula_0. Commonly used notation for vector fields on formula_0 include
formula_17
The upper-case formula_18, without the vector arrow, is particularly popular for index-free notation, because it both minimizes clutter and serves as a reminder that results are independent of the chosen basis and, in this case, independent of the atlas.
The same abuse of notation is used to push forward one-forms from formula_8 to formula_0. This is done by writing formula_19 or formula_20 or formula_21. The one-form is then formula_22. This is soldered to the basis vectors as formula_23. Note the careful use of upper and lower indexes to distinguish contravariant and covariant vectors.
The pullback induces (defines) a metric tensor on formula_0. Several styles of notation are commonly used:
formula_24
where both the center dot and the angle bracket formula_25 denote the scalar product. The last form uses the tensor formula_26, which is understood to be the "flat-space" metric tensor. For Riemannian manifolds, it is the Kronecker delta formula_27. For pseudo-Riemannian manifolds, it is the diagonal matrix having signature formula_28. The notation formula_29 serves as a reminder that the pullback really is a linear transform, given as the gradient above. The index letters formula_30 live in formula_4 while the index letters formula_31 live in the tangent manifold.
The matrix inverse formula_32 of the metric tensor formula_33 is given by
formula_34
This is used to define the dual basis:
formula_35
Some texts write formula_36 for formula_37, so that the metric tensor takes the particularly beguiling form formula_38. This is commonly done so that the symbol formula_13 can be used unambiguously for the vierbein.
Definition in Euclidean space.
In Euclidean space, the general definition given below for the Christoffel symbols of the second kind can be proven to be equivalent to:
formula_39
Christoffel symbols of the first kind can then be found via index lowering:
formula_40
Rearranging, we see that (assuming the partial derivative belongs to the tangent space, which cannot occur on a non-Euclidean curved space):
formula_41
In words, the arrays represented by the Christoffel symbols track how the basis changes from point to point. If the derivative does not lie on the tangent space, the right expression is the projection of the derivative over the tangent space (see covariant derivative below). Symbols of the second kind decompose the change with respect to the basis, while symbols of the first kind decompose it with respect to the dual basis. In this form, it is easy to see the symmetry of the lower or last two indices:
formula_42 and formula_43
from the definition of formula_44 and the fact that partial derivatives commute (as long as the manifold and coordinate system are well behaved).
The same numerical values for Christoffel symbols of the second kind also relate to derivatives of the dual basis, as seen in the expression:
formula_45
which we can rearrange as:
formula_46
General definition.
The Christoffel symbols come in two forms: the first kind, and the second kind. The definition of the second kind is more basic, and thus is presented first.
Christoffel symbols of the second kind (symmetric definition).
The Christoffel symbols of the second kind are the connection coefficients—in a coordinate basis—of the Levi-Civita connection.
In other words, the Christoffel symbols of the second kind Γ"k""ij" (sometimes written Γ or with the older brace notation) are defined as the unique coefficients such that
formula_47
where ∇"i" is the Levi-Civita connection on "M" taken in the coordinate direction e"i" (i.e., ∇"i" ≡ ∇e"i") and where e"i" = ∂"i" is a local coordinate (holonomic) basis. Since this connection has zero torsion, and holonomic vector fields commute (i.e. formula_48) we have
formula_49
Hence in this basis the connection coefficients are symmetric:
formula_50
For this reason, a torsion-free connection is often called "symmetric".
The Christoffel symbols can be derived from the vanishing of the covariant derivative of the metric tensor "gik":
formula_51
As a shorthand notation, the nabla symbol and the partial derivative symbols are frequently dropped, and instead a semicolon and a comma are used to set off the index that is being used for the derivative. Thus, the above is sometimes written as
formula_52
Using that the symbols are symmetric in the lower two indices, one can solve explicitly for the Christoffel symbols as a function of the metric tensor by permuting the indices and resumming:
formula_53
where ("gjk") is the inverse of the matrix ("gjk"), defined as (using the Kronecker delta, and Einstein notation for summation) "gjigik" = "δ jk". Although the Christoffel symbols are written in the same notation as tensors with index notation, they do not transform like tensors under a change of coordinates.
Contraction of indices.
Contracting the upper index with either of the lower indices (those being symmetric) leads to
formula_54
where formula_55 is the determinant of the metric tensor. This identity can be used to evaluate divergence of vectors.
Christoffel symbols of the first kind.
The Christoffel symbols of the first kind can be derived either from the Christoffel symbols of the second kind and the metric,
formula_56
or from the metric alone,
formula_57
As an alternative notation one also finds
formula_58
It is worth noting that ["ab", "c"] = ["ba", "c"].
Connection coefficients in a nonholonomic basis.
The Christoffel symbols are most typically defined in a coordinate basis, which is the convention followed here. In other words, the name Christoffel symbols is reserved only for coordinate (i.e., holonomic) frames. However, the connection coefficients can also be defined in an arbitrary (i.e., nonholonomic) basis of tangent vectors u"i" by
formula_59
Explicitly, in terms of the metric tensor, this is
formula_60
where "cklm" = "gmpcklp" are the commutation coefficients of the basis; that is,
formula_61
where u"k" are the basis vectors and [ , ] is the Lie bracket. The standard unit vectors in spherical and cylindrical coordinates furnish an example of a basis with non-vanishing commutation coefficients. The difference between the connection in such a frame, and the Levi-Civita connection is known as the contorsion tensor.
Ricci rotation coefficients (asymmetric definition).
When we choose the basis X"i" ≡ u"i" orthonormal: "gab" ≡ "ηab" = ⟨"Xa", "Xb"⟩ then "gmk,l" ≡ "ηmk,l" = 0. This implies that
formula_62
and the connection coefficients become antisymmetric in the first two indices:
formula_63
where
formula_64
In this case, the connection coefficients "ωabc" are called the Ricci rotation coefficients.
Equivalently, one can define Ricci rotation coefficients as follows:
formula_65
where u"i" is an orthonormal nonholonomic basis and u"k" = "ηkl"u"l" its "co-basis".
Transformation law under change of variable.
Under a change of variable from formula_66 to formula_67, Christoffel symbols transform as
formula_68
where the overline denotes the Christoffel symbols in the formula_69 coordinate system. The Christoffel symbol does not transform as a tensor, but rather as an object in the jet bundle. More precisely, the Christoffel symbols can be considered as functions on the jet bundle of the frame bundle of "M", independent of any local coordinate system. Choosing a local coordinate system determines a local section of this bundle, which can then be used to pull back the Christoffel symbols to functions on "M", though of course these functions then depend on the choice of local coordinate system.
For each point, there exist coordinate systems in which the Christoffel symbols vanish at the point. These are called (geodesic) normal coordinates, and are often used in Riemannian geometry.
There are some interesting properties which can be derived directly from the transformation law.
Relationship to parallel transport and derivation of Christoffel symbols in Riemannian space.
If a vector formula_74 is parallel transported along a curve parametrized by some parameter formula_75 on a Riemannian manifold, the rate of change of the components of the vector is given by
formula_76
Now, the condition that the scalar product formula_77 formed by two arbitrary vectors formula_74 and formula_78 is unchanged is enough to derive the Christoffel symbols. The condition is
formula_79
which by the product rule expands to
formula_80
Applying the parallel transport rule for the two arbitrary vectors, relabelling dummy indices, and collecting the coefficients of formula_81 (arbitrary), we obtain
formula_82
This is the same as the equation obtained by requiring the covariant derivative of the metric tensor to vanish in the General definition section. The derivation from here is simple. By cyclically permuting the indices formula_83 in the above equation, we can obtain two more equations; linearly combining these three equations, we can express formula_70 in terms of the metric tensor.
Relationship to index-free notation.
Let "X" and "Y" be vector fields with components "Xi" and "Yk". Then the "k"th component of the covariant derivative of "Y" with respect to "X" is given by
formula_84
Here, the Einstein notation is used, so repeated indices indicate summation over indices and contraction with the metric tensor serves to raise and lower indices:
formula_85
Keep in mind that "gik" ≠ "gik" and that "gik" = "δ ik", the Kronecker delta. The convention is that the metric tensor is the one with the lower indices; the correct way to obtain "gik" from "gik" is to solve the linear equations "gijgjk" = "δ ik".
The statement that the connection is torsion-free, namely that
formula_86
is equivalent to the statement that—in a coordinate basis—the Christoffel symbol is symmetric in the lower two indices:
formula_87
The index-less transformation properties of a tensor are given by pullbacks for covariant indices, and pushforwards for contravariant indices. The article on covariant derivatives provides additional discussion of the correspondence between index-free notation and indexed notation.
Covariant derivatives of tensors.
The covariant derivative of a vector field with components "Vm" is
formula_88
By corollary, divergence of a vector can be obtained as
formula_89
The covariant derivative of a covector field "ωm" is
formula_90
The symmetry of the Christoffel symbol now implies
formula_91
for any scalar field, but in general the covariant derivatives of higher order tensor fields do not commute (see curvature tensor).
The covariant derivative of a type (2, 0) tensor field "Aik" is
formula_92
that is,
formula_93
If the tensor field is mixed then its covariant derivative is
formula_94
and if the tensor field is of type (0, 2) then its covariant derivative is
formula_95
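As a consistency check, applying the (0, 2) rule above to the metric itself must give zero in every direction; this is precisely the metric compatibility used earlier to derive the symbols. A sketch reusing the christoffel_second_kind helper from the sketch in the General definition section:

```python
import sympy as sp
# Assumes christoffel_second_kind() from the earlier sketch is in scope.

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])          # plane polar metric again
Gamma = christoffel_second_kind(g, coords)
n = len(coords)

def cov_deriv_02(A, l):
    """(0,2) rule: A_{ik;l} = A_{ik,l} - A_{mk} Gamma^m_{il} - A_{im} Gamma^m_{kl}."""
    return sp.Matrix(n, n, lambda i, k: sp.simplify(
        sp.diff(A[i, k], coords[l])
        - sum(A[m, k] * Gamma[m][i][l] for m in range(n))
        - sum(A[i, m] * Gamma[m][k][l] for m in range(n))))

# Metric compatibility: nabla_l g_{ik} = 0 for every l.
print([cov_deriv_02(g, l) for l in range(n)])  # two zero matrices
```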
Contravariant derivatives of tensors.
To find the contravariant derivative of a vector field, we must first transform it into a covariant derivative using the metric tensor
formula_96
Applications.
In general relativity.
The Christoffel symbols find frequent use in Einstein's theory of general relativity, where spacetime is represented by a curved 4-dimensional Lorentz manifold with a Levi-Civita connection. The Einstein field equations—which determine the geometry of spacetime in the presence of matter—contain the Ricci tensor, and so calculating the Christoffel symbols is essential. Once the geometry is determined, the paths of particles and light beams are calculated by solving the geodesic equations in which the Christoffel symbols explicitly appear.
In classical (non-relativistic) mechanics.
Let formula_97 be the generalized coordinates and formula_98 be the generalized velocities; then the kinetic energy for a unit mass is given by formula_99, where formula_100 is the metric tensor. If formula_101, the potential function, exists then the covariant components of the generalized force per unit mass are formula_102. The metric (here in a purely spatial domain) can be obtained from the line element formula_103. Substituting the Lagrangian formula_104 into the Euler-Lagrange equation, we get
formula_105
Now multiplying by formula_32, we get
formula_106
When Cartesian coordinates can be adopted (as in inertial frames of reference), we have a Euclidean metric, the Christoffel symbols vanish, and the equation reduces to Newton's second law of motion. In curvilinear coordinates (forcedly in non-inertial frames, where the metric is non-Euclidean and not flat), fictitious forces like the centrifugal force and the Coriolis force originate from the Christoffel symbols, that is, from the purely spatial curvilinear coordinates.
In Earth surface coordinates.
Consider a spherical coordinate system describing points on the Earth's surface (approximated as an ideal sphere):
formula_107
For a point x, R is the distance to the Earth's core (usually approximately the Earth's radius). θ and φ are the latitude and longitude. Positive θ is the northern hemisphere. To simplify the derivatives, the angles are given in radians (where d sin(x)/dx = cos(x); degree values would introduce an additional factor of 360 / 2π).
At any location, the tangent directions are formula_108 (up), formula_109 (north) and formula_110 (east); the indices 1, 2, 3 can also be used.
formula_111
The related metric tensor has only diagonal elements (the squared vector lengths). This is an advantage of the coordinate system and not generally true.
formula_112
Now the necessary quantities can be calculated. Examples:
formula_113
The resulting Christoffel symbols of the second kind formula_114 are then (organized by the "derivative" index i in a matrix):
formula_115
These values show how the tangent directions (columns: formula_108, formula_109, formula_110) change, seen from an outside perspective (e.g. from space), but given in the tangent directions of the actual location (rows: R, θ, φ).
As an example, take the nonzero derivatives by θ in formula_116, which corresponds to a movement towards north (positive dθ):
These effects may not be apparent during the movement, because they are the adjustments that keep the measurements in the coordinates R, θ, φ. Nevertheless, they can affect distances, physics equations, etc. So if, for example, the exact change of a magnetic field pointing approximately "south" is needed, it may be necessary to also correct the measurement for the change of the north direction, using the Christoffel symbols, to obtain the "true" (tensor) value.
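The matrices above can be reproduced with the same SymPy helper from the General definition section, applied to the diagonal metric g = diag(1, R2, R2cos2θ) in the coordinate order (R, θ, φ):

```python
import sympy as sp
# Assumes christoffel_second_kind() from the earlier sketch is in scope.

R, theta, phi = sp.symbols('R theta phi', positive=True)
g = sp.Matrix([[1, 0,    0],
               [0, R**2, 0],
               [0, 0,    R**2 * sp.cos(theta)**2]])
Gamma = christoffel_second_kind(g, [R, theta, phi])

print(Gamma[0][2][2])  # Gamma^R_{phi phi}     = -R*cos(theta)**2
print(Gamma[1][2][2])  # Gamma^theta_{phi phi} =  sin(theta)*cos(theta)
print(Gamma[2][1][2])  # Gamma^phi_{theta phi} = -sin(theta)/cos(theta) = -tan(theta)
```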
The Christoffel symbols of the first kind formula_117 show the same change using metric-corrected coordinates, e.g. for derivative by φ:
formula_118
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\varphi: U \\to \\mathbb{R}^{n}"
},
{
"math_id": 2,
"text": "U\\subset M"
},
{
"math_id": 3,
"text": "(\\vec{e}_1,\\cdots,\\vec{e}_n)"
},
{
"math_id": 4,
"text": "\\mathbb{R}^{n}"
},
{
"math_id": 5,
"text": "TM"
},
{
"math_id": 6,
"text": "f:M\\to\\mathbb{R}"
},
{
"math_id": 7,
"text": "\\partial_i f \\equiv \\frac{\\partial \\left(f\\circ\\varphi^{-1}\\right)}{\\partial x^i}\\quad\\mbox{for } i = 1,\\, 2,\\, \\dots,\\, n"
},
{
"math_id": 8,
"text": "\\mathbb{R}^n"
},
{
"math_id": 9,
"text": "\\varphi"
},
{
"math_id": 10,
"text": "(\\partial_1,\\cdots,\\partial_n)"
},
{
"math_id": 11,
"text": "\\partial_i"
},
{
"math_id": 12,
"text": "\\vec{e}_i"
},
{
"math_id": 13,
"text": "e_i"
},
{
"math_id": 14,
"text": "e_i \\equiv \\partial_i"
},
{
"math_id": 15,
"text": "\\partial_i \\equiv \\frac{\\partial}{\\partial x^i} \\equiv e_i \\equiv \\vec{e}_i \\equiv \\mathbf{e}_i \\equiv \\boldsymbol\\partial_i"
},
{
"math_id": 16,
"text": "\\equiv"
},
{
"math_id": 17,
"text": "X = \\vec X = X^i\\partial_i=X^i\\frac{\\partial}{\\partial x^i}"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "(\\varphi^1,\\ldots, \\varphi^n)=(x^{1},\\ldots,x^{n})"
},
{
"math_id": 20,
"text": "x=\\varphi"
},
{
"math_id": 21,
"text": "x^i=\\varphi^i"
},
{
"math_id": 22,
"text": "dx^i=d\\varphi^i"
},
{
"math_id": 23,
"text": "dx^i(\\partial_j)=\\delta^i_j"
},
{
"math_id": 24,
"text": "g_{ij} = \\mathbf{e}_i \\cdot \\mathbf{e}_j=\\langle \\vec{e}_i, \\vec{e}_j\\rangle = e_i^a e_j^b \\,\\eta_{ab}"
},
{
"math_id": 25,
"text": "\\langle,\\rangle"
},
{
"math_id": 26,
"text": "\\eta_{ab}"
},
{
"math_id": 27,
"text": "\\eta_{ab}=\\delta_{ab}"
},
{
"math_id": 28,
"text": "(p,q)"
},
{
"math_id": 29,
"text": "e_i^a"
},
{
"math_id": 30,
"text": "a,b,c,\\cdots"
},
{
"math_id": 31,
"text": "i,j,k,\\cdots"
},
{
"math_id": 32,
"text": "g^{ij}"
},
{
"math_id": 33,
"text": "g_{ij}"
},
{
"math_id": 34,
"text": "g^{ij} g_{jk}=\\delta^i_k"
},
{
"math_id": 35,
"text": "\\mathbf{e}^i = \\mathbf{e}_j g^{ji},\\quad i = 1,\\, 2,\\, \\dots,\\, n"
},
{
"math_id": 36,
"text": "\\mathbf{g}_i"
},
{
"math_id": 37,
"text": "\\mathbf{e}_i"
},
{
"math_id": 38,
"text": "g_{ij} = \\mathbf{g}_i \\cdot \\mathbf{g}_j"
},
{
"math_id": 39,
"text": "{\\Gamma^k}_{ij}\n = \\frac{\\partial \\mathbf{e}_{i}}{\\partial x^j} \\cdot \\mathbf{e}^{k}\n = \\frac{\\partial \\mathbf{e}_{i}}{\\partial x^j} \\cdot g^{km} \\mathbf{e}_{m}\n"
},
{
"math_id": 40,
"text": "\\Gamma_{kij}\n = {\\Gamma^m}_{ij}g_{mk}\n = \\frac{\\partial \\mathbf{e}_{i}}{\\partial x^j} \\cdot \\mathbf{e}^{m} g_{mk}\n = \\frac{\\partial \\mathbf{e}_{i}}{\\partial x^j} \\cdot \\mathbf{e}_{k}\n"
},
{
"math_id": 41,
"text": "\\frac{\\partial \\mathbf{e}_{i}}{\\partial x^j} = {\\Gamma^k}_{ij} \\mathbf{e}_{k} = \\Gamma_{kij} \\mathbf{e}^{k}"
},
{
"math_id": 42,
"text": "{\\Gamma^k}_{ij} = {\\Gamma^k}_{ji} "
},
{
"math_id": 43,
"text": "\\Gamma_{kij} = \\Gamma_{kji},"
},
{
"math_id": 44,
"text": " \\mathbf{e}_i "
},
{
"math_id": 45,
"text": "\\frac{\\partial \\mathbf{e}^{i}}{\\partial x^j} = -{\\Gamma^i}_{jk} \\mathbf{e}^{k},"
},
{
"math_id": 46,
"text": "{\\Gamma^i}_{jk} = -\\frac{\\partial \\mathbf{e}^{i}}{\\partial x^j} \\cdot \\mathbf{e}_{k}."
},
{
"math_id": 47,
"text": "\\nabla_i \\mathrm{e}_j = {\\Gamma^k}_{ij}\\mathrm{e}_k,"
},
{
"math_id": 48,
"text": " [e_i, e_j] = [\\partial_i, \\partial_j] = 0"
},
{
"math_id": 49,
"text": "\\nabla_i \\mathrm{e}_j = \\nabla_j \\mathrm{e}_i."
},
{
"math_id": 50,
"text": "{\\Gamma^k}_{ij} = {\\Gamma^k}_{ji}."
},
{
"math_id": 51,
"text": "\n 0 = \\nabla_l g_{ik}\n = \\frac{\\partial g_{ik}}{\\partial x^l} - g_{mk}{\\Gamma^m}_{il} - g_{im}{\\Gamma^m}_{kl}\n = \\frac{\\partial g_{ik}}{\\partial x^l} - 2g_{m(k}{\\Gamma^m}_{i)l}.\n"
},
{
"math_id": 52,
"text": "0 = \\,g_{ik;l} = g_{ik,l} - g_{mk} {\\Gamma^m}_{il} - g_{im} {\\Gamma^m}_{kl} ."
},
{
"math_id": 53,
"text": "{\\Gamma^i}_{kl}\n = \\frac{1}{2} g^{im} \\left(\\frac{\\partial g_{mk}}{\\partial x^l} + \\frac{\\partial g_{ml}}{\\partial x^k} - \\frac{\\partial g_{kl}}{\\partial x^m} \\right)\n = \\frac{1}{2} g^{im} \\left(g_{mk,l} + g_{ml,k} - g_{kl,m}\\right),\n"
},
{
"math_id": 54,
"text": "{\\Gamma^{i}}_{ki} = \\frac{\\partial}{\\partial x^k} \\ln\\sqrt{|g|}"
},
{
"math_id": 55,
"text": "g = \\det g_{ik}"
},
{
"math_id": 56,
"text": "\\Gamma_{cab} = g_{cd} {\\Gamma^d}_{ab}\\,,"
},
{
"math_id": 57,
"text": "\n\\begin{align}\n\\Gamma_{cab}\n &= \\frac{1}{2} \\left(\\frac{\\partial g_{ca}}{\\partial x^b} + \\frac{\\partial g_{cb}}{\\partial x^a} - \\frac{\\partial g_{ab}}{\\partial x^c} \\right) \\\\\n &= \\frac{1}{2}\\, \\left(g_{ca, b} + g_{cb, a} - g_{ab, c}\\right) \\\\\n &= \\frac{1}{2}\\, \\left(\\partial_{b}g_{ca} + \\partial_{a}g_{cb} - \\partial_{c}g_{ab}\\right) \\,. \\\\ \n\\end{align}\n"
},
{
"math_id": 58,
"text": "\\Gamma_{cab} = [ab, c]."
},
{
"math_id": 59,
"text": "\\nabla_{\\mathbf{u}_i}\\mathbf{u}_j = {\\omega^k}_{ij}\\mathbf{u}_k."
},
{
"math_id": 60,
"text": "{\\omega^i}_{kl} = \\frac{1}{2} g^{im} \\left( g_{mk,l} + g_{ml,k} - g_{kl,m} + c_{mkl} + c_{ml k} - c_{kl m} \\right),"
},
{
"math_id": 61,
"text": "[\\mathbf{u}_k,\\, \\mathbf{u}_l] = {c_{kl}}^m \\mathbf{u}_m"
},
{
"math_id": 62,
"text": "{\\omega^i}_{kl} = \\frac{1}{2} \\eta^{im} \\left( c_{mkl} + c_{ml k} - c_{kl m} \\right)"
},
{
"math_id": 63,
"text": "\\omega_{abc} = -\\omega_{bac}\\,,"
},
{
"math_id": 64,
"text": "\\omega_{abc} = \\eta_{ad}{\\omega^d}_{bc}\\, ."
},
{
"math_id": 65,
"text": "{\\omega^k}_{ij} := \\mathbf{u}^k \\cdot \\left( \\nabla_j \\mathbf{u}_i \\right)\\,,"
},
{
"math_id": 66,
"text": "\\left(x^1,\\, \\ldots,\\, x^n\\right)"
},
{
"math_id": 67,
"text": "\\left(\\bar{x}^1,\\, \\ldots,\\, \\bar{x}^n\\right)"
},
{
"math_id": 68,
"text": "{\\bar{\\Gamma}^i}_{kl} =\n \\frac{\\partial \\bar{x}^i}{\\partial x^m}\\,\n \\frac{\\partial x^n}{\\partial \\bar{x}^k}\\,\n \\frac{\\partial x^p}{\\partial \\bar{x}^l}\\,\n {\\Gamma^m}_{np}\n +\n \\frac{\\partial^2 x^m}{\\partial \\bar{x}^k \\partial \\bar{x}^l}\\,\n \\frac{\\partial \\bar{x}^i}{\\partial x^m}\n"
},
{
"math_id": 69,
"text": "\\bar{x}^i"
},
{
"math_id": 70,
"text": "{\\Gamma^i}_{jk}"
},
{
"math_id": 71,
"text": "{\\tilde\\Gamma^i}_{jk}"
},
{
"math_id": 72,
"text": "{\\Gamma^i}_{jk} - {\\tilde\\Gamma^i}_{jk}"
},
{
"math_id": 73,
"text": "{\\Gamma^i}_{jk} \\neq {\\Gamma^i}_{kj}"
},
{
"math_id": 74,
"text": "\\xi^i"
},
{
"math_id": 75,
"text": "s"
},
{
"math_id": 76,
"text": "\\frac{d\\xi^i}{ds} = -{\\Gamma^i}_{mj} \\frac{dx^m}{ds}\\xi^j."
},
{
"math_id": 77,
"text": "g_{ik}\\xi^i\\eta^k"
},
{
"math_id": 78,
"text": "\\eta^k"
},
{
"math_id": 79,
"text": "\\frac{d}{ds}\\left(g_{ik}\\xi^i\\eta^k\\right) = 0"
},
{
"math_id": 80,
"text": "\\frac{\\partial g_{ik}}{\\partial x^l} \\frac{dx^l}{ds} \\xi^i\\eta^k + g_{ik} \\frac{d\\xi^i}{ds}\\eta^k + g_{ik}\\xi^i\\frac{d\\eta^k}{ds} = 0."
},
{
"math_id": 81,
"text": "\\xi^i\\eta^k dx^l"
},
{
"math_id": 82,
"text": "\\frac{\\partial g_{ik}}{\\partial x^l} = g_{rk}{\\Gamma^r}_{il} + g_{ir}{\\Gamma^r}_{lk}."
},
{
"math_id": 83,
"text": "ikl"
},
{
"math_id": 84,
"text": "\\left(\\nabla_X Y\\right)^k = X^i (\\nabla_i Y)^k = X^i \\left(\\frac{\\partial Y^k}{\\partial x^i} + {\\Gamma^k}_{im} Y^m\\right)."
},
{
"math_id": 85,
"text": "g(X, Y) = X^i Y_i = g_{ik}X^i Y^k = g^{ik}X_i Y_k."
},
{
"math_id": 86,
"text": "\\nabla_X Y - \\nabla_Y X = [X,\\, Y]"
},
{
"math_id": 87,
"text": "{\\Gamma^i}_{jk} = {\\Gamma^i}_{kj}."
},
{
"math_id": 88,
"text": "\\nabla_l V^m = \\frac{\\partial V^m}{\\partial x^l} + {\\Gamma^m}_{kl} V^k."
},
{
"math_id": 89,
"text": "\\nabla_i V^i = \\frac{1}{\\sqrt{-g}}\\frac{\\partial \\left(\\sqrt{-g}\\, V^i\\right)}{\\partial x^i}."
},
{
"math_id": 90,
"text": "\\nabla_l \\omega_m = \\frac{\\partial \\omega_m}{\\partial x^l} - {\\Gamma^k}_{ml} \\omega_k."
},
{
"math_id": 91,
"text": "\\nabla_i\\nabla_j \\varphi = \\nabla_j\\nabla_i \\varphi"
},
{
"math_id": 92,
"text": "\\nabla_l A^{ik} = \\frac{\\partial A^{ik}}{\\partial x^l} + {\\Gamma^i}_{ml} A^{mk} + {\\Gamma^k}_{ml} A^{im},"
},
{
"math_id": 93,
"text": "{A^{ik}}_{;l} = {A^{ik}}_{,l} + A^{mk} {\\Gamma^i}_{ml} + A^{im} {\\Gamma^k}_{ml}."
},
{
"math_id": 94,
"text": "{A^i}_{k;l} = {A^i}_{k,l} + {A^{m}}_k {\\Gamma^i}_{ml} - {A^i}_m {\\Gamma^m}_{kl}, "
},
{
"math_id": 95,
"text": " A_{ik;l} = A_{ik,l} - A_{mk} {\\Gamma^m}_{il} - A_{im} {\\Gamma^m}_{kl}. "
},
{
"math_id": 96,
"text": "\\nabla^l V^m = g^{il} \\nabla_i V^m = g^{il} \\partial_i V^m + g^{il} \\Gamma^m_{ki} V^k = \\partial^l V^m + g^{il} \\Gamma^m_{ki} V^k"
},
{
"math_id": 97,
"text": "x^i"
},
{
"math_id": 98,
"text": "\\dot{x}^i"
},
{
"math_id": 99,
"text": "T = \\tfrac{1}{2} g_{ik}\\dot{x}^i \\dot{x}^k"
},
{
"math_id": 100,
"text": "g_{ik}"
},
{
"math_id": 101,
"text": "V\\left(x^i\\right)"
},
{
"math_id": 102,
"text": "F_i = \\partial V/\\partial x^i"
},
{
"math_id": 103,
"text": "ds^2 = 2T dt^2"
},
{
"math_id": 104,
"text": "L = T - V"
},
{
"math_id": 105,
"text": "g_{ik}\\ddot{x}^k + \\frac{1}{2}\\left(\\frac{\\partial g_{ik}}{\\partial x^l} + \\frac{\\partial g_{il}}{\\partial x^k} - \\frac{\\partial g_{lk}}{\\partial x^i}\\right) \\dot{x}^l \\dot{x}^k = F_i."
},
{
"math_id": 106,
"text": "\\ddot{x}^j + {\\Gamma^j}_{lk} \\dot{x}^l \\dot{x}^k = F^j."
},
{
"math_id": 107,
"text": "\\begin{align}\nx(R, \\theta, \\varphi) &= \\begin{pmatrix} R\\cos\\theta\\cos\\varphi & R\\cos\\theta\\sin\\varphi & R\\sin\\theta \\end{pmatrix} \\\\\n\\end{align}"
},
{
"math_id": 108,
"text": " e_{R}"
},
{
"math_id": 109,
"text": " e_{\\theta}"
},
{
"math_id": 110,
"text": " e_{\\varphi}"
},
{
"math_id": 111,
"text": "\\begin{align}\ne_{R} &= \\begin{pmatrix} \\cos\\theta\\cos\\varphi & \\cos\\theta\\sin\\varphi & \\sin\\theta \\end{pmatrix} \\\\\ne_{\\theta} &= R \\cdot \\begin{pmatrix} -\\sin\\theta\\cos\\varphi & - \\sin\\theta\\sin\\varphi & \\cos\\theta \\end{pmatrix} \\\\\ne_{\\varphi} &= R\\cos\\theta \\cdot \\begin{pmatrix} -\\sin\\varphi & \\cos\\varphi & 0 \\end{pmatrix} \\\\\n\\end{align}"
},
{
"math_id": 112,
"text": "\\begin{align}\ng_{RR} = 1 \\qquad & g_{\\theta\\theta} = R^2 \\qquad & g_{\\varphi\\varphi} = R^2\\cos^2\\theta \\qquad & g_{ij} = 0 \\quad \\mathrm{else} \\\\\ng^{RR} = 1 \\qquad & g^{\\theta\\theta} = 1/R^2 \\qquad & g^{\\varphi\\varphi} = 1/(R^2\\cos^2\\theta) \\qquad & g^{ij} = 0 \\quad \\mathrm{else} \\\\\n\\end{align}"
},
{
"math_id": 113,
"text": "\\begin{align}\ne^{R} = e_{R} g^{RR} = 1 \\cdot e_{R} &= \\begin{pmatrix} \\cos\\theta\\cos\\varphi & \\cos\\theta\\sin\\varphi & \\sin\\theta \\end{pmatrix} \\\\\n{\\Gamma^R}_{\\varphi \\varphi} = e^{R} \\cdot \\frac{\\partial}{\\partial \\varphi} e_\\varphi &= e^{R} \\cdot \\begin{pmatrix} -R\\cos\\theta\\cos\\varphi & -R\\cos\\theta\\sin\\varphi & 0 \\end{pmatrix} = -R\\cos^2\\theta \\\\\n\\end{align}"
},
{
"math_id": 114,
"text": " {\\Gamma^k}_{ji} = e^k \\cdot \\frac{\\partial e_j}{\\partial x^i} "
},
{
"math_id": 115,
"text": "\\begin{align}\n\\begin{pmatrix}\n{\\Gamma^R}_{RR} & {\\Gamma^R}_{\\theta R} & {\\Gamma^R}_{\\varphi R} \\\\\n{\\Gamma^\\theta }_{RR} & {\\Gamma^\\theta }_{\\theta R} & {\\Gamma^\\theta }_{\\varphi R} \\\\\n{\\Gamma^\\varphi }_{RR} & {\\Gamma^\\varphi }_{\\theta R} & {\\Gamma^\\varphi }_{\\varphi R} \\\\\n\\end{pmatrix} &= \\quad \\begin{pmatrix} 0 & 0 & 0 \\\\ 0 & 1/R & 0 \\\\ 0 & 0 & 1/R \\end{pmatrix} \\\\\n\\begin{pmatrix}\n{\\Gamma^R}_{R\\theta } & {\\Gamma^R}_{\\theta \\theta } & {\\Gamma^R}_{\\varphi \\theta } \\\\\n{\\Gamma^\\theta }_{R\\theta } & {\\Gamma^\\theta }_{\\theta \\theta } & {\\Gamma^\\theta }_{\\varphi \\theta } \\\\\n{\\Gamma^\\varphi }_{R\\theta } & {\\Gamma^\\varphi }_{\\theta \\theta } & {\\Gamma^\\varphi }_{\\varphi \\theta } \\\\\n\\end{pmatrix} \\quad &= \\begin{pmatrix} 0 & -R & 0 \\\\ 1/R & 0 & 0 \\\\ 0 & 0 & -\\tan\\theta \\end{pmatrix} \\\\\n\\begin{pmatrix}\n{\\Gamma^R}_{R\\varphi } & {\\Gamma^R}_{\\theta \\varphi } & {\\Gamma^R}_{\\varphi \\varphi } \\\\\n{\\Gamma^\\theta }_{R\\varphi } & {\\Gamma^\\theta }_{\\theta \\varphi } & {\\Gamma^\\theta }_{\\varphi \\varphi } \\\\\n{\\Gamma^\\varphi }_{R\\varphi } & {\\Gamma^\\varphi }_{\\theta \\varphi } & {\\Gamma^\\varphi }_{\\varphi \\varphi } \\\\\n\\end{pmatrix} &= \\quad \\begin{pmatrix} 0 & 0 & -R\\cos^2\\theta \\\\ 0 & 0 & \\cos\\theta\\sin\\theta \\\\ 1/R & -\\tan\\theta & 0 \\end{pmatrix} \\\\\n\\end{align}"
},
{
"math_id": 116,
"text": "{\\Gamma^k}_{j\\ \\theta}"
},
{
"math_id": 117,
"text": " {\\Gamma_l}_{ji} = g_{lk} {\\Gamma^k}_{ji} "
},
{
"math_id": 118,
"text": "\\begin{align}\n\\begin{pmatrix}\n{\\Gamma_R}_{R\\varphi } & {\\Gamma_R}_{\\theta \\varphi } & {\\Gamma_R}_{\\varphi \\varphi } \\\\\n{\\Gamma_\\theta }_{R\\varphi } & {\\Gamma_\\theta }_{\\theta \\varphi } & {\\Gamma_\\theta }_{\\varphi \\varphi } \\\\\n{\\Gamma_\\varphi }_{R\\varphi } & {\\Gamma_\\varphi }_{\\theta \\varphi } & {\\Gamma_\\varphi }_{\\varphi \\varphi } \\\\\n\\end{pmatrix} &= R\\cos\\theta \\begin{pmatrix} 0 & 0 & -\\cos\\theta \\\\ 0 & 0 & R\\sin\\theta \\\\ \\cos\\theta & -R\\sin\\theta & 0 \\end{pmatrix} \\\\\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=1401020 |
14019 | Horsepower | Unit of power with different values
<templatestyles src="Template:Infobox/styles-images.css" />
Horsepower (hp) is a unit of measurement of power, or the rate at which work is done, usually in reference to the output of engines or motors. There are many different standards and types of horsepower. Two common definitions used today are the imperial horsepower, which is about 745.7 watts, and the metric horsepower, which is approximately 735.5 watts.
The term was adopted in the late 18th century by Scottish engineer James Watt to compare the output of steam engines with the power of draft horses. It was later expanded to include the output power of other types of piston engines, as well as turbines, electric motors and other machinery. The definition of the unit varied among geographical regions. Most countries now use the SI unit watt for measurement of power. With the implementation of the EU Directive 80/181/EEC on 1 January 2010, the use of horsepower in the EU is permitted only as a supplementary unit.
History.
The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them. In 1702, Thomas Savery wrote in "The Miner's Friend":
So that an engine which will raise as much water as two horses, working together at one time in such a work, can do, and for which there must be constantly kept ten or twelve horses for doing the same. Then I say, such an engine may be made large enough to do the work required in employing eight, ten, fifteen, or twenty horses to be constantly maintained and kept for doing such a work...
The idea was later used by James Watt to help market his improved steam engine. He had previously agreed to take royalties of one-third of the savings in coal from the older Newcomen steam engines. This royalty scheme did not work with customers who did not have existing steam engines but used horses instead.
Watt determined that a horse could turn a mill wheel 144 times in an hour (or 2.4 times a minute). The wheel was 12 feet in radius; therefore, the horse travelled 2.4 × 2π × 12 feet in one minute. Watt judged that the horse could pull with a force of 180 pounds-force. So:
formula_0
"Engineering in History" recounts that John Smeaton initially estimated that a horse could produce per minute. John Desaguliers had previously suggested per minute, and Thomas Tredgold suggested per minute. "Watt found by experiment in 1782 that a 'brewery horse' could produce per minute." James Watt and Matthew Boulton standardized that figure at per minute the next year.
A common legend states that the unit was created when one of Watt's first customers, a brewer, specifically demanded an engine that would match a horse, and chose the strongest horse he had, driving it to the limit. In that legend, Watt accepted the challenge and built a machine that was actually even stronger than the figure achieved by the brewer, and the output of that machine became the horsepower.
In 1993, R. D. Stevenson and R. J. Wassersug published correspondence in "Nature" summarizing measurements and calculations of peak and sustained work rates of a horse. Citing measurements made at the 1926 Iowa State Fair, they reported that the peak power over a few seconds has been measured to be as high as 14.9 hp and also observed that for sustained activity, a work rate of about 1 hp per horse is consistent with agricultural advice from both the 19th and 20th centuries and also consistent with a work rate of about four times the basal rate expended by other vertebrates for sustained activity.
When considering human-powered equipment, a healthy human can produce about 1.2 hp briefly (see orders of magnitude) and sustain about 0.1 hp indefinitely; trained athletes can manage up to about 2.5 hp briefly and 0.35 hp for a period of several hours. The Jamaican sprinter Usain Bolt produced a maximum of 3.5 hp 0.89 seconds into his 9.58 second sprint world record in 2009.
In 2023, a group of engineers modified a dynamometer to be able to measure how much horsepower a horse can produce; the test horse's output was measured to .
Calculating power.
When torque T is in pound-foot units and rotational speed N is in rpm, the resulting power in horsepower is
formula_1
The constant 5252 is the rounded value of (33,000 ft⋅lbf/min)/(2π rad/rev).
When torque T is in inch-pounds,
formula_2
The constant 63,025 is the approximation of
formula_3
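A short sketch of both unit variants (an editorial illustration):

```python
import math

def hp_from_lbft(torque_lbft: float, rpm: float) -> float:
    """Horsepower from torque in pound-feet: T * N / 5252."""
    return torque_lbft * rpm / 5252

def hp_from_inlb(torque_inlb: float, rpm: float) -> float:
    """Horsepower from torque in inch-pounds: T * N / 63,025."""
    return torque_inlb * rpm / 63025

# Torque (lb-ft) and power (hp) curves always cross at 5252 rpm:
print(hp_from_lbft(300, 5252))        # 300.0
print(33000 / (2 * math.pi))          # ~5252.1, the origin of the constant
print(hp_from_inlb(300 * 12, 5252))   # ~300, the same operating point
```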
Definitions.
Imperial horsepower.
Assuming the third CGPM (1901, CR 70) definition of standard gravity, "g"n = 9.80665 m/s2, is used to define the pound-force as well as the kilogram force, and the international avoirdupois pound (1959), one imperial horsepower is:
Or given that 1 hp = 550 ft⋅lbf/s, 1 ft = 0.3048 m, 1 lbf ≈ 4.448 N, 1 J = 1 N⋅m, 1 W = 1 J/s: 1 hp ≈ 745.7 W
Metric horsepower (PS, KM, cv, hk, pk, k, ks, ch).
The various units used to indicate this definition ("PS", "KM", "cv", "hk", "pk", "k", "ks" and "ch") all translate to "horse power" in English. British manufacturers often intermix metric horsepower and mechanical horsepower depending on the origin of the engine in question.
DIN 66036 defines one metric horsepower as the power to raise a mass of 75 kilograms against the Earth's gravitational force over a distance of one metre in one second: 75 kg × 9.80665 m/s2 × 1 m / 1 s = 75 kgf⋅m/s = 1 PS. This is equivalent to 735.49875 W, or 98.6% of an imperial horsepower. In 1972, the PS was replaced by the kilowatt as the official power-measuring unit in EEC directives.
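The two definitions differ only in their base units; both reduce to watts by direct arithmetic (a quick check using the exact unit definitions):

```python
# Imperial: 550 ft·lbf/s, with 1 ft = 0.3048 m and 1 lbf = 4.4482216... N.
print(550 * 0.3048 * 4.4482216152605)  # ~745.6999 W
# Metric (DIN 66036): 75 kgf·m/s, with g_n = 9.80665 m/s2.
print(75 * 9.80665)                    # 735.49875 W
```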
Other names for the metric horsepower are the Italian "cavallo vapore", Dutch "paardenkracht", the French "cheval-vapeur", the Spanish "caballo de vapor" and Portuguese "cavalo-vapor", the Russian "лошадиная сила", the Swedish "hästkraft", the Finnish "hevosvoima", the Estonian "hobujõud", the Norwegian and Danish "hestekraft", the Hungarian "lóerő", the Czech "koňská síla" and Slovak "konská sila", the Serbo-Croatian "konjska snaga", the Bulgarian "конска сила", the Macedonian "коњска сила", the Polish "koń mechaniczny" (lit. 'mechanical horse'), Slovenian "konjska moč", the Ukrainian "кінська сила", the Romanian "cal-putere", and the German "Pferdestärke".
In the 19th century, the French had their own unit, which they used instead of the CV or horsepower. Based on a 100 kgf⋅m/s standard, it was called the poncelet and was abbreviated "p".
Tax horsepower.
Tax or fiscal horsepower is a non-linear rating of a motor vehicle for tax purposes. Tax horsepower ratings were originally more or less directly related to the size of the engine; but as of 2000, many countries changed over to systems based on CO2 emissions, so are not directly comparable to older ratings. The Citroën 2CV is named for its French fiscal horsepower rating, "deux chevaux" (2CV).
Electrical horsepower.
Nameplates on electrical motors show their power output, not the power input (the power delivered at the shaft, not the power consumed to drive the motor). This power output is ordinarily stated in watts or kilowatts. In the United States, the power output is stated in horsepower, which for this purpose is defined as exactly 746 W.
Hydraulic horsepower.
Hydraulic horsepower can represent the power available within hydraulic machinery, power through the down-hole nozzle of a drilling rig, or can be used to estimate the mechanical power needed to generate a known hydraulic flow rate.
It may be calculated as
formula_4
where pressure is in psi, and flow rate is in US gallons per minute.
Drilling rigs are powered mechanically by rotating the drill pipe from above. Hydraulic power is still needed though, as 1,500 to 5,000 W are required to push mud through the drill bit to clear waste rock. Additional hydraulic power may also be used to drive a down-hole mud motor to power directional drilling.
When using SI units, the equation becomes coherent and there is no dividing constant.
formula_5
where pressure is in pascals (Pa), and flow rate is in cubic metres per second (m3/s).
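A sketch of both forms (an editorial illustration, assuming the conventional US-unit divisor 1714, which follows from 1 gal = 231 in3 and 1 hp = 550 ft⋅lbf/s):

```python
def hydraulic_hp(pressure_psi: float, flow_gpm: float) -> float:
    """Hydraulic horsepower in US units: p * Q / 1714 (conventional constant)."""
    return pressure_psi * flow_gpm / 1714

def hydraulic_power_si(pressure_pa: float, flow_m3s: float) -> float:
    """Hydraulic power in watts; coherent SI units need no constant: P = p * Q."""
    return pressure_pa * flow_m3s

print(hydraulic_hp(3000, 10))                # ~17.5 hp
print(hydraulic_power_si(2.068e7, 6.31e-4))  # ~13,050 W, the same operating point
```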
Boiler horsepower.
Boiler horsepower is a boiler's capacity to deliver steam to a steam engine and is not the same unit of power as the 550 ft lb/s definition. One boiler horsepower is equal to the thermal energy rate required to evaporate 34.5 pounds of fresh water at 212 °F in one hour. In the early days of steam use, the boiler horsepower was roughly comparable to the horsepower of engines fed by the boiler.
The term "boiler horsepower" was originally developed at the Philadelphia Centennial Exhibition in 1876, where the best steam engines of that period were tested. The average steam consumption of those engines (per output horsepower) was determined to be the evaporation of of water per hour, based on feed water at , and saturated steam generated at . This original definition is equivalent to a boiler heat output of . A few years later in 1884, the ASME re-defined the boiler horsepower as the thermal output equal to the evaporation of 34.5 pounds per hour of water "from and at" . This considerably simplified boiler testing, and provided more accurate comparisons of the boilers at that time. This revised definition is equivalent to a boiler heat output of . Present industrial practice is to define "boiler horsepower" as a boiler thermal output equal to , which is very close to the original and revised definitions.
Boiler horsepower is still used to measure boiler output in industrial boiler engineering in the US. Boiler horsepower is abbreviated BHP, not to be confused with brake horsepower, below, which is also abbreviated bhp, in lower case.
Drawbar power.
Drawbar power (dbp) is the power a railway locomotive has available to haul a train or an agricultural tractor to pull an implement. This is a measured figure rather than a calculated one. A special railway car called a dynamometer car coupled behind the locomotive keeps a continuous record of the drawbar pull exerted, and the speed. From these, the power generated can be calculated. To determine the maximum power available, a controllable load is required; it is normally a second locomotive with its brakes applied, in addition to a static load.
If the drawbar force (F) is measured in pounds-force (lbf) and speed (v) is measured in miles per hour (mph), then the drawbar power (P) in horsepower (hp) is
formula_6
Example: How much power is needed to pull a drawbar load of 2,025 pounds-force at 5 miles per hour?
formula_7
The constant 375 is because 1 hp = 375 lbf⋅mph. If other units are used, the constant is different. When using coherent SI units (watts, newtons, and metres per second), no constant is needed, and the formula becomes "P" = "Fv".
This formula may also be used to calculate the power of a jet engine, using the speed of the jet and the thrust required to maintain that speed.
Example: how much power is generated with a thrust of 4000 pounds at 400 miles per hour?
formula_8
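Both worked examples above follow from the same one-line formula:

```python
def drawbar_hp(force_lbf: float, speed_mph: float) -> float:
    """Drawbar power: F * v / 375, since 1 hp = 375 lbf·mph (33,000/88)."""
    return force_lbf * speed_mph / 375

print(drawbar_hp(2025, 5))    # 27.0 hp, the locomotive example
print(drawbar_hp(4000, 400))  # ~4266.7 hp, the jet thrust example
```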
RAC horsepower (taxable horsepower).
This measure was instituted by the Royal Automobile Club and was used to denote the power of early 20th-century British cars. Many cars took their names from this figure (hence the Austin Seven and Riley Nine), while others had names such as "40/50 hp", which indicated the RAC figure followed by the true measured power.
Taxable horsepower does not reflect developed horsepower; rather, it is a calculated figure based on the engine's bore size, number of cylinders, and a (now archaic) presumption of engine efficiency. As new engines were designed with ever-increasing efficiency, it was no longer a useful measure, but was kept in use by UK regulations, which used the rating for tax purposes. The United Kingdom was not the only country that used the RAC rating; many states in Australia used RAC hp to determine taxation. The RAC formula was sometimes applied in British colonies as well, such as Kenya (British East Africa).
formula_9
where
"D" is the diameter (or bore) of the cylinder in inches,
"n" is the number of cylinders.
Since taxable horsepower was computed based on bore and number of cylinders, not based on actual displacement, it gave rise to engines with "undersquare" dimensions (bore smaller than stroke), which tended to impose an artificially low limit on rotational speed, hampering the potential power output and efficiency of the engine.
The situation persisted for several generations of four- and six-cylinder British engines: For example, Jaguar's 3.4-litre XK engine of the 1950s had six cylinders with a bore of 83 mm (3.27 in) and a stroke of 106 mm (4.17 in), where most American automakers had long since moved to oversquare (large bore, short stroke) V8 engines. See, for example, the early .
Measurement.
The power of an engine may be measured or estimated at several points in the transmission of the power from its generation to its application. A number of names are used for the power developed at various stages in this process, but none is a clear indicator of either the measurement system or definition used.
In general:
nominal horsepower is derived from the size of the engine and the piston speed, and is only accurate at a steam pressure of 7 psi;
indicated or gross horsepower is the theoretical capability of the engine (PLAN/33,000);
brake/net/crankshaft horsepower (power delivered directly to and measured at the engine's crankshaft) equals indicated horsepower minus frictional losses within the engine (bearing drag, rod and crankshaft windage losses, oil film drag, etc.);
shaft horsepower (power delivered to and measured at the output shaft of the transmission, when present in the system) equals crankshaft horsepower minus frictional losses in the transmission (bearings, gears, oil drag, windage, etc.);
effective, true (thp) or wheel horsepower (whp) equals shaft horsepower minus frictional losses in the universal joint(s), differential, wheel bearings, tire and chain (if present).
All the above assumes that no power inflation factors have been applied to any of the readings.
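The stages above chain multiplicatively. A sketch with purely hypothetical loss fractions (real percentages vary widely by machine and are not given in the text):

```python
def power_chain(indicated_hp, engine_loss=0.10, gearbox_loss=0.05, final_drive_loss=0.05):
    brake = indicated_hp * (1 - engine_loss)   # minus internal engine friction
    shaft = brake * (1 - gearbox_loss)         # minus transmission losses
    wheel = shaft * (1 - final_drive_loss)     # minus diff, bearings, tires
    return brake, shaft, wheel

print(power_chain(100.0))  # (90.0, 85.5, 81.225)
```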
Engine designers use expressions other than horsepower to denote objective targets or performance, such as brake mean effective pressure (BMEP). This is a coefficient of theoretical brake horsepower and cylinder pressures during combustion.
Nominal horsepower.
Nominal horsepower (nhp) is an early 19th-century rule of thumb used to estimate the power of steam engines. It assumed a steam pressure of .
Nominal horsepower = 7 × area of piston in square inches × equivalent piston speed in feet per minute/33,000.
For paddle ships, the Admiralty rule was that the piston speed in feet per minute was taken as 129.7 × (stroke)^(1/3.38). For screw steamers, the intended piston speed was used.
The stroke (or length of stroke) was the distance moved by the piston measured in feet.
For the nominal horsepower to equal the actual power it would be necessary for the mean steam pressure in the cylinder during the stroke to be and for the piston speed to be that generated by the assumed relationship for paddle ships.
The French Navy used the same definition of nominal horse power as the Royal Navy.
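A sketch combining the rule of thumb with the Admiralty piston-speed relation for paddle ships; the cylinder dimensions below are hypothetical:

```python
import math

def nominal_horsepower(piston_diameter_in, stroke_ft):
    """nhp = 7 * piston area (sq in) * equivalent piston speed (ft/min) / 33,000,
    with the paddle-ship piston speed taken as 129.7 * stroke**(1/3.38)."""
    area = math.pi * (piston_diameter_in / 2) ** 2
    piston_speed_fpm = 129.7 * stroke_ft ** (1 / 3.38)
    return 7 * area * piston_speed_fpm / 33000

# Hypothetical 60 in cylinder with a 6 ft stroke:
print(round(nominal_horsepower(60, 6), 1))  # ~132 nhp
```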
Indicated horsepower.
Indicated horsepower (ihp) is the theoretical power of a reciprocating engine if it is completely frictionless in converting the expanding gas energy (piston pressure × displacement) in the cylinders. It is calculated from the pressures developed in the cylinders, measured by a device called an "engine indicator" – hence indicated horsepower. As the piston advances throughout its stroke, the pressure against the piston generally decreases, and the indicator device usually generates a graph of pressure vs stroke within the working cylinder. From this graph the amount of work performed during the piston stroke may be calculated.
Indicated horsepower was a better measure of engine power than nominal horsepower (nhp) because it took account of steam pressure. But unlike later measures such as shaft horsepower (shp) and brake horsepower (bhp), it did not take into account power losses due to the machinery internal frictional losses, such as a piston sliding within the cylinder, plus bearing friction, transmission and gear box friction, etc.
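The PLAN/33,000 relation noted under "Measurement" can be evaluated directly from indicator-diagram quantities; the figures below are hypothetical:

```python
def indicated_horsepower(mep_psi, stroke_ft, area_sq_in, strokes_per_min):
    """ihp = P*L*A*N / 33,000: mean effective pressure (psi) x stroke (ft)
    x piston area (sq in) x power strokes per minute."""
    return mep_psi * stroke_ft * area_sq_in * strokes_per_min / 33000

# e.g. 80 psi mean pressure, 2 ft stroke, 113 sq in piston, 100 strokes/min:
print(round(indicated_horsepower(80, 2, 113, 100), 1))  # ~54.8 ihp
```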
Brake horsepower.
Brake horsepower (bhp) is the power measured using a brake type (load) dynamometer at a specified location, such as the crankshaft, output shaft of the transmission, rear axle or rear wheels.
In Europe, the DIN 70020 standard tests the engine fitted with all ancillaries and the exhaust system as used in the car. The older American standard (SAE gross horsepower, referred to as "bhp") used an engine without alternator, water pump, and other auxiliary components such as power steering pump, muffled exhaust system, etc., so the figures were higher than the European figures for the same engine. The newer American standard (referred to as SAE net horsepower) tests an engine with all the auxiliary components (see "Engine power test standards" below).
"Brake" refers to the device which is used to provide an equal braking force, load to balance, or equal an engine's output force and hold it at a desired rotational speed. During testing, the output torque and rotational speed are measured to determine the brake horsepower. Horsepower was originally measured and calculated by use of the "indicator diagram" (a James Watt invention of the late 18th century), and later by means of a Prony brake connected to the engine's output shaft. Modern dynamometers use any of several braking methods to measure the engine's brake horsepower, the actual output of the engine itself, before losses to the drivetrain.
Shaft horsepower.
Shaft horsepower (shp) is the power delivered to a propeller shaft, a turbine shaft, or to an output shaft of an automotive transmission. Shaft horsepower is a common rating for turboshaft and turboprop engines, industrial turbines, and some marine applications.
Equivalent shaft horsepower (eshp) is sometimes used to rate turboprop engines. It includes the equivalent power derived from residual jet thrust from the turbine exhaust. of residual jet thrust is estimated to be produced from one unit of horsepower.
Engine power test standards.
There exist a number of different standards determining how the power and torque of an automobile engine is measured and corrected. Correction factors are used to adjust power and torque measurements to standard atmospheric conditions, to provide a more accurate comparison between engines as they are affected by the pressure, humidity, and temperature of ambient air. Some standards are described below.
Society of Automotive Engineers/SAE International.
Early "SAE horsepower".
In the early twentieth century, a so-called "SAE horsepower" was sometimes quoted for U.S. automobiles. This long predates the Society of Automotive Engineers (SAE) horsepower measurement standards and was another name for the industry standard ALAM or NACC horsepower figure and the same as the British RAC horsepower also used for tax purposes. Alliance for Automotive Innovation is the current successor of ALAM and NACC.
SAE gross power.
Prior to the 1972 model year, American automakers rated and advertised their engines in brake horsepower, "bhp", which was a version of brake horsepower called SAE gross horsepower because it was measured according to Society of Automotive Engineers (SAE) standards (J245 and J1995) that call for a stock test engine without accessories (such as dynamo/alternator, radiator fan, water pump), and sometimes fitted with long tube test headers in lieu of the OEM exhaust manifolds. This contrasts with both SAE net power and DIN 70020 standards, which account for engine accessories (but not transmission losses). The atmospheric correction standards for barometric pressure, humidity and temperature for SAE gross power testing were relatively idealistic.
SAE net power.
In the United States, the term "bhp" fell into disuse in 1971–1972, as automakers began to quote power in terms of SAE net horsepower in accord with SAE standard J1349. Like SAE gross and other brake horsepower protocols, SAE net hp is measured at the engine's crankshaft, and so does not account for transmission losses. However, similar to the DIN 70020 standard, SAE net power testing protocol calls for standard production-type belt-driven accessories, air cleaner, emission controls, exhaust system, and other power-consuming accessories. This produces ratings in closer alignment with the power produced by the engine as it is actually configured and sold.
SAE certified power.
In 2005, the SAE introduced "SAE Certified Power" with SAE J2723. To attain certification the test must follow the SAE standard in question, take place in an ISO 9000/9002 certified facility and be witnessed by an SAE approved third party.
A few manufacturers such as Honda and Toyota switched to the new ratings immediately. The rating for Toyota's Camry 3.0 L "1MZ-FE" V6 fell from . The company's Lexus ES 330 and Camry SE V6 (3.3 L V6) were previously rated at but the ES 330 dropped to while the Camry declined to . The first engine certified under the new program was the 7.0 L LS7 used in the 2006 Chevrolet Corvette Z06. Certified power rose slightly from .
While Toyota and Honda are retesting their entire vehicle lineups, other automakers generally are retesting only vehicles with updated powertrains. For example, the 2006 Ford Five Hundred is rated at , the same as that of the 2005 model. However, the 2006 rating does not reflect the new SAE testing procedure, as Ford is not going to incur the extra expense of retesting its existing engines. Over time, most automakers are expected to comply with the new guidelines.
SAE tightened its horsepower rules to eliminate the opportunity for engine manufacturers to manipulate factors affecting performance, such as how much oil was in the crankcase, engine control system calibration, and whether an engine was tested with high-octane fuel. In some cases, such factors can add up to a noticeable change in horsepower ratings.
"Deutsches Institut für Normung" 70020 (DIN 70020).
DIN 70020 is a German DIN standard for measuring road vehicle horsepower. DIN hp is measured at the engine's output shaft as a form of metric horsepower rather than mechanical horsepower. Similar to SAE net power rating, and unlike SAE gross power, DIN testing measures the engine as installed in the vehicle, with cooling system, charging system and stock exhaust system all connected. DIN hp is often abbreviated as "PS", derived from the German word Pferdestärke (literally, "horsepower").
CUNA.
A test standard by Italian CUNA ("Commissione Tecnica per l'Unificazione nell'Automobile", Technical Commission for Automobile Unification), a federated entity of standards organisation UNI, was formerly used in Italy.
CUNA prescribed that the engine be tested with all accessories necessary to its running fitted (such as the water pump), while all others – such as alternator/dynamo, radiator fan, and exhaust manifold – could be omitted. All calibration and accessories had to be as on production engines.
Economic Commission for Europe R24.
ECE R24 is a UN standard for the approval of compression ignition engine emissions, installation and measurement of engine power. It is similar to the DIN 70020 standard, but with different requirements for connecting an engine's fan during testing, causing it to absorb less power from the engine.
Economic Commission for Europe R85.
ECE R85 is a UN standard for the approval of internal combustion engines with regard to the measurement of the net power.
80/1269/EEC.
80/1269/EEC of 16 December 1980 is a European Union standard for road vehicle engine power.
International Organization for Standardization.
The International Organization for Standardization (ISO) publishes several standards for measuring engine horsepower.
Japanese Industrial Standard D 1001.
JIS D 1001 is a Japanese net and gross engine power test code for automobiles or trucks having a spark ignition, diesel, or fuel injection engine.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P = \\frac{W}{t} = \\frac{Fd}{t} = \\frac{180~\\text{lbf} \\times 2.4 \\times 2\\,\\pi \\times 12~\\text{ft}}{1~\\text{min}} = 32{,}572~\\frac{\\text{ft} \\cdot \\text{lbf}}{\\text{min}}."
},
{
"math_id": 1,
"text": "\\{P\\}_\\mathrm{hp} = \\frac{\\{T\\}_\\mathrm{ft {\\cdot} lbf} \\{N\\}_\\mathrm{rpm}}{5252}."
},
{
"math_id": 2,
"text": "\\{P\\}_\\mathrm{hp} = \\frac{\\{T\\}_\\mathrm{in {\\cdot} lbf} \\{N\\}_\\mathrm{rpm}}{63{,}025}."
},
{
"math_id": 3,
"text": "33{,}000~\\frac{\\text{ft} {\\cdot} \\text{lbf}}{\\text{min}} \\times \\frac{12~\\frac{\\text{in}}{\\text{ft}}}{2\\pi~\\text{rad}} \\approx 63{,}025 \\frac{\\text{in} {\\cdot} \\text{lbf}}{\\text{min}}."
},
{
"math_id": 4,
"text": "\\text{hydraulic power} = \\frac{\\text{pressure} \\times \\text{volumetric flow rate}}{1714},"
},
{
"math_id": 5,
"text": "\\text{hydraulic power} = \\text{pressure} \\times \\text{volumetric flow rate}"
},
{
"math_id": 6,
"text": "\\{P\\}_\\mathrm{hp} = \\frac{\\{F\\}_\\mathrm{lbf} \\{v\\}_\\mathrm{mph}}{375}."
},
{
"math_id": 7,
"text": "\\{P\\}_\\mathrm{hp} = \\frac{2025 \\times 5}{375} = 27."
},
{
"math_id": 8,
"text": "\\{P\\}_\\mathrm{hp} = \\frac{4000 \\times 400}{375} = 4266.7."
},
{
"math_id": 9,
"text": "\\text{RAC h.p.} = \\frac{D \\times D \\times n}{2.5}"
}
]
| https://en.wikipedia.org/wiki?curid=14019 |
14019863 | Multiple of the median | Measure of how far an individual test result deviates from the median
A multiple of the median (MoM) is a measure of how far an individual test result deviates from the median. MoM is commonly used to report the results of medical screening tests, particularly where the results of the individual tests are highly variable.
MoM was originally used as a method to normalize data from participating laboratories of Alpha-fetoprotein (AFP) so that individual test results could be compared. 35 years later, it is the established standard for reporting maternal serum screening results.
An MoM for a test result for a patient can be determined by the following:
formula_0
As an example, Alpha-fetoprotein (AFP) testing is used to screen for a neural tube defect (NTD) during the second trimester of pregnancy. If the median AFP result at 16 weeks of gestation is 30 ng/mL and a pregnant woman's AFP result at that same gestational age is 60 ng/mL, then her MoM is equal to 60/30 = 2.0. In other words, her AFP result is 2 times higher than median.
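The computation is a single division; a sketch reproducing the AFP example above:

```python
def multiple_of_median(patient_result, population_median):
    """MoM: the patient's result divided by the population median for the same test."""
    return patient_result / population_median

print(multiple_of_median(60, 30))  # AFP at 16 weeks of gestation: 2.0 MoM
```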
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "MoM(Patient) = \\frac{Result(Patient)}{Median(PatientPopulation)}"
}
]
| https://en.wikipedia.org/wiki?curid=14019863 |
1402030 | Marginal revenue productivity theory of wages | Model of wage levels
The marginal revenue productivity theory of wages is a model of wage levels in which wages are set to match the marginal revenue product of labor, formula_0 (the value of the marginal product of labor), which is the increment to revenues caused by the increment to output produced by the last laborer employed. In a model, this is justified by an assumption that the firm is profit-maximizing and thus would employ labor only up to the point that marginal labor costs equal the marginal revenue generated for the firm. This is a model of the neoclassical economics type.
The marginal revenue product (formula_0) of a worker is equal to the product of the marginal product of labour (formula_1) (the increment to output from an increment to labor used) and the marginal revenue (formula_2) (the increment to sales revenue from an increment to output): formula_3. The theory states that workers will be hired up to the point when the marginal revenue product is equal to the wage rate. If the marginal revenue brought by the worker is less than the wage rate, then employing that laborer would cause a decrease in profit.
The idea that payments to factors of production equal their marginal productivity had been laid out by John Bates Clark and Knut Wicksell in simpler models. Much of the MRP theory stems from Wicksell's model.
Mathematical relation.
The marginal revenue product of labour, formula_4, is the increase in revenue per unit increase in the variable input: formula_5
formula_6
Here:
formula_7 is total revenue, formula_8 is the quantity of output, and formula_9 is the quantity of labor.
The change in output is not limited to that directly attributable to the additional worker. Assuming that the firm is operating with diminishing marginal returns then the addition of an extra worker reduces the average productivity of every other worker (and every other worker affects the marginal productivity of the additional worker).
The firm is modeled as choosing to add units of labor until the formula_0 equals the wage rate formula_10 — mathematically until
formula_11
Marginal revenue product in a perfectly competitive market.
Under perfect competition, marginal revenue product is equal to the marginal physical product (the extra unit of output produced by employing an additional worker) multiplied by price.
formula_12
This is because the firm in perfect competition is a price taker. It does not have to lower the price in order to sell additional units of the good.
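A numerical sketch of the hiring rule MRP = w under perfect competition, using a hypothetical Cobb–Douglas production function; every parameter value below is made up for illustration:

```python
A, alpha = 10.0, 0.5          # Q = A * L**alpha (hypothetical technology)
price, wage = 2.0, 5.0        # output price and wage, both taken as given

def mrp(labor):
    mp = alpha * A * labor ** (alpha - 1)   # marginal product of labor
    return price * mp                       # MRP = MR x MP; MR = price here

# Solving MRP(L) = w analytically for this functional form:
labor_star = (wage / (price * alpha * A)) ** (1 / (alpha - 1))
print(labor_star, mrp(labor_star))          # 4.0 workers; MRP equals the wage
```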
MRP in monopoly or imperfect competition.
Firms operating as monopolies or in imperfect competition face downward-sloping demand curves. To sell extra units of output, they would have to lower their output's price. Under such market conditions, marginal revenue product will not equal formula_13. This is because the firm is not able to sell output at a fixed price per unit. Thus the formula_0 curve of a firm in monopoly or in imperfect competition will slope downwards, when plotted against labor usage, at a faster rate than in perfect competition.
{
"math_id": 0,
"text": "MRP"
},
{
"math_id": 1,
"text": "MP"
},
{
"math_id": 2,
"text": "MR"
},
{
"math_id": 3,
"text": "MRP = MP \\times MR"
},
{
"math_id": 4,
"text": "MRP_L"
},
{
"math_id": 5,
"text": "\\frac{\\Delta TR}{\\Delta L}"
},
{
"math_id": 6,
"text": "\\begin{align}\n MR &= \\frac{\\Delta TR}{\\Delta Q}\\\\[5pt]\n MP_L &= \\frac{\\Delta Q}{\\Delta L}\\\\[5pt]\n MR \\times MP_L &= \\frac{\\Delta TR}{\\Delta Q} \\times \\frac{\\Delta Q}{\\Delta L} = \\frac{\\Delta TR}{\\Delta L}\n\\end{align}"
},
{
"math_id": 7,
"text": "TR"
},
{
"math_id": 8,
"text": "Q"
},
{
"math_id": 9,
"text": "L"
},
{
"math_id": 10,
"text": "w"
},
{
"math_id": 11,
"text": "\\begin{align}\n MRP_L &= w\\\\[5pt]\n MR(MP_L) &= w\\\\[5pt]\n MR &= \\frac{w}{MP_L}\\\\[5pt]\n MR &= MC ,\\text{ which is the profit maximizing rule.}\n\\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align}\n MRP &= MPP \\times MR(D=AR=P) \\text{ as perfectly competitive labour market}\\\\[5pt]\n MRP &= MPP \\times \\text{Price}\n\\end{align}"
},
{
"math_id": 13,
"text": "MPP \\times \\text{Price}"
}
]
| https://en.wikipedia.org/wiki?curid=1402030 |
14020842 | Particle size | Notion for comparing dimensions of particles in different states of matter
Particle size is a notion introduced for comparing dimensions of solid particles ("flecks"), liquid particles ("droplets"), or gaseous particles ("bubbles"). The notion of particle size applies to particles in colloids, in ecology, in granular material (whether airborne or not), and to particles that form a granular material (see also grain size).
Measurement.
There are several methods for measuring particle size and particle size distribution. Some of them are based on light, others on ultrasound, electric field, gravity, or centrifugation. The use of sieves is a common measurement technique; however, this process can be more susceptible to human error and is time-consuming. Technology such as dynamic image analysis (DIA) can make particle size distribution analyses much easier. This approach can be seen in instruments like Retsch Technology's CAMSIZER or the Sympatec QICPIC series of instruments. These instruments still lack the capability of inline measurement for real-time monitoring in production environments; inline imaging devices such as the SOPAT system address this use case.
Machine learning algorithms are used to increase the performance of particle size measurement. This line of research can yield low-cost and real time particle size analysis.
In all methods the size is an indirect measure, obtained by a model that transforms, in an abstract way, the real particle shape into a simple and standardized shape, like a sphere (the most usual) or a cuboid (when a minimum bounding box is used), where the "size" parameter (e.g. diameter of sphere) makes sense. An exception is the mathematical morphology approach, where no shape hypothesis is necessary.
Definition of the particle size for an ensemble (collection) of particles presents another problem. Real systems are practically always polydisperse, which means that the particles in an ensemble have different sizes. The notion of particle size distribution reflects this polydispersity. There is often a need for a certain average particle size for the ensemble of particles.
Expressions for sphere size.
The particle size of a spherical object can be unambiguously and quantitatively defined by its diameter. However, a typical material object is likely to be irregular in shape and non-spherical. The above quantitative definition of "particle size" cannot be applied to non-spherical particles. There are several ways of extending the above quantitative definition to apply to non-spherical particles. Existing definitions are based on replacing a given particle with an imaginary sphere that has one of the properties identical with the particle.
formula_0
where
formula_1: diameter of representative sphere
formula_2: volume of particle
formula_3
where
formula_1: diameter of representative sphere
formula_4: surface area of particle
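Both equivalent-sphere definitions translate directly into code; the input values below are arbitrary illustrations:

```python
import math

def diameter_from_volume(volume):
    """Diameter of the sphere having the same volume as the particle."""
    return 2 * (3 * volume / (4 * math.pi)) ** (1 / 3)

def diameter_from_area(area):
    """Diameter of the sphere having the same surface area as the particle."""
    return math.sqrt(4 * area / math.pi)

print(diameter_from_volume(1.0))  # ~1.24 for a unit-volume particle
print(diameter_from_area(4.0))    # ~2.26 for a particle of surface area 4
```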
Indirect measure expressions.
In some measures the size (a length dimension in the expression) cannot be obtained directly, only calculated as a function of other dimensions and parameters. The main cases are illustrated below.
formula_5
where
formula_1: diameter of representative sphere
formula_6: weight of particle
formula_7: density of particle
formula_8: gravitational acceleration
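A sketch of this weight-based diameter in SI units; the particle weight and density below are hypothetical:

```python
import math

def diameter_from_weight(weight_n, density_kg_m3, g=9.81):
    """Sphere diameter from particle weight W (a force, in newtons): V = W/(d*g)."""
    volume = weight_n / (density_kg_m3 * g)
    return 2 * (3 * volume / (4 * math.pi)) ** (1 / 3)

# e.g. a 1e-12 N particle of density 2650 kg/m^3 comes out a few micrometres across:
print(diameter_from_weight(1e-12, 2650))
```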
Another complexity in defining "particle size" in a fluid medium appears for particles with sizes below a micrometre. When a particle becomes that small, the thickness of the interface layer becomes comparable with the particle size. As a result, the position of the particle surface becomes uncertain. There is a convention for placing this imaginary surface at a certain position suggested by Gibbs and presented in many books on interface and colloid science.
International conventions.
There is an international standard on presenting various characteristic particle sizes, ISO 9276 (Representation of results of particle size analysis). This set of various average sizes includes "median size", "geometric mean size", and "average size". For the selection of specific small-size particles, ISO 565 and ISO 3310-1 are commonly used to choose mesh sizes.
Colloidal particle.
In materials science and colloidal chemistry, the term colloidal particle refers to a small amount of matter having a size typical for colloids and with a clear phase boundary. The dispersed-phase particles have a diameter between approximately 1 and 1000 nanometers. Colloids are heterogeneous in nature, invisible to the naked eye, and always move in a random zig-zag-like motion known as Brownian motion. The scattering of light by colloidal particles is known as Tyndall effect.
References.
<templatestyles src="Reflist/styles.css" />
ISO Standard 14644-1, Classification of Airborne Particles Cleanliness.
{
"math_id": 0,
"text": "D = 2 \\sqrt[3] {\\frac{3V}{4\\pi}}"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "D = \\sqrt[2] {\\frac{4A}{\\pi}}"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "D = 2 \\sqrt[3] {\\frac{3W}{4\\pi dg}}"
},
{
"math_id": 6,
"text": "W"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": "g"
}
]
| https://en.wikipedia.org/wiki?curid=14020842 |
14021543 | SSS* | Game tree search algorithm
SSS* is a search algorithm, introduced by George Stockman in 1979, that conducts a state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm.
SSS* is based on the notion of solution trees. Informally, a solution tree can be formed from any arbitrary game tree by pruning the number of branches at each MAX node to one. Such a tree represents a complete strategy for MAX, since it specifies exactly one MAX action for every possible sequence of moves made by the opponent. Given a game tree, SSS* searches through the space of partial solution trees, gradually analyzing larger and larger subtrees, eventually producing a single solution tree with the same root and Minimax value as the original game tree. SSS* never examines a node that alpha–beta pruning would prune, and may prune some branches that alpha–beta would not. Stockman speculated that SSS* may therefore be a better general algorithm than alpha–beta. However, Igor Roizen and Judea Pearl have shown that the savings in the number of positions that SSS* evaluates relative to alpha–beta are limited and generally not enough to compensate for the increase in other resources (e.g., the storing and sorting of a list of nodes made necessary by the best-first nature of the algorithm). However, Aske Plaat, Jonathan Schaeffer, Wim Pijls and Arie de Bruin have shown that a sequence of null-window alpha–beta calls is equivalent to SSS* (i.e., it expands the same nodes in the same order) when alpha–beta is used with a transposition table, as is the case in all game-playing programs for chess, checkers, etc. With this reformulation, the storing and sorting of the OPEN list were no longer necessary. This allowed the implementation of (an algorithm equivalent to) SSS* in tournament-quality game-playing programs. Experiments showed that it did indeed perform better than alpha–beta in practice, but that it did not beat NegaScout.
The reformulation of a best-first algorithm as a sequence of depth-first calls prompted the formulation of a class of null-window alpha–beta algorithms, of which MTD(f) is the best known example.
Algorithm.
There is a priority queue OPEN that stores states formula_0, i.e. nodes, where formula_1 is a node identifier (dot-decimal notation is used to identify nodes; formula_2 is the root), formula_3 is the state of node formula_1 (L means the node is live, i.e. not yet solved; S means the node is solved), and formula_4 is the value of the solved node. Items in the OPEN queue are sorted in descending order by their formula_5 value. If more than one node has the same formula_5 value, the left-most node in the tree is chosen.
while true do // repeat until stopped
pop an element "p"=("J", "s", "h") from the head of the OPEN queue
if "J" = "e" and "s" = "S" then
STOP the algorithm and return h as a result
else
apply Gamma operator for "p"
formula_6 operator for formula_7 is defined in the following way:
if "s" = "L" then
if "J" is a terminal node then
(1.) add ("J", "S", min("h", value("J"))) to OPEN
else if "J" is a MIN node then
(2.) add (J.1, "L", "h") to OPEN
else
(3.) for "j"=1..number_of_children("J") add (J.j, "L", "h") to OPEN
else
if "J" is a MIN node then
(4.) add (parent("J"), "S", "h") to OPEN
remove from OPEN all the states that are associated with the children of parent(J)
else if is_last_child("J") then // if J is the last child of parent(J)
(5.) add (parent("J"), "S", "h") to OPEN
else
(6.) add (parent("J").("k"+1), "L", "h") to OPEN // add state associated with the next child of parent(J) to OPEN
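For concreteness, here is a compact, runnable Python sketch of the operator above. It uses a toy nested-list game tree, a plain re-sorted list in place of a real priority queue, and purges all descendants of parent(J) in case (4.); tie-breaking by lexicographic path order approximates the "left-most node" rule. All names and the example tree are illustrative:

```python
import math

# Toy game tree: leaves are numbers, internal nodes are lists of children.
# The root (depth 0) is a MAX node; MAX and MIN levels alternate with depth.
TREE = [[[4, 8], [9, 3]], [[2, 7], [5, 1]]]

def node(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def is_min(path):
    return len(path) % 2 == 1          # odd depth -> MIN node

def sss_star(tree):
    open_list = [((), 'L', math.inf)]  # states (J, s, h); J is a path tuple
    while True:
        # Highest h first; ties broken by the left-most (smallest) path.
        open_list.sort(key=lambda e: (-e[2], e[0]))
        path, s, h = open_list.pop(0)
        if path == () and s == 'S':
            return h                                       # root solved
        if s == 'L':
            j = node(tree, path)
            if not isinstance(j, list):                    # (1.) terminal
                open_list.append((path, 'S', min(h, j)))
            elif is_min(path):                             # (2.) first child only
                open_list.append((path + (0,), 'L', h))
            else:                                          # (3.) all children
                for k in range(len(j)):
                    open_list.append((path + (k,), 'L', h))
        else:
            parent = path[:-1]
            if is_min(path):                               # (4.) MAX parent solved
                open_list = [e for e in open_list
                             if e[0][:len(parent)] != parent]
                open_list.append((parent, 'S', h))
            elif path[-1] == len(node(tree, parent)) - 1:  # (5.) last child
                open_list.append((parent, 'S', h))
            else:                                          # (6.) next sibling
                open_list.append((parent + (path[-1] + 1,), 'L', h))

print(sss_star(TREE))  # prints 8, the minimax value of TREE
```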
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(J, s, h)"
},
{
"math_id": 1,
"text": "J"
},
{
"math_id": 2,
"text": "\\epsilon"
},
{
"math_id": 3,
"text": "s\\in\\{L,S\\}"
},
{
"math_id": 4,
"text": "h\\in(-\\infty, \\infty)"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "\\Gamma"
},
{
"math_id": 7,
"text": "p=(J,s,h)"
}
]
| https://en.wikipedia.org/wiki?curid=14021543 |
14022 | Haber process | Industrial process for ammonia production
The Haber process, also called the Haber–Bosch process, is the main industrial procedure for the production of ammonia. It converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using a finely divided iron metal catalyst:
formula_0
This reaction is slightly favorable in terms of enthalpy, but is disfavored in terms of entropy because four equivalents of reactant gases are converted into two equivalents of product gas. As a result, high pressures and moderately high temperatures are needed to drive the reaction forward.
The German chemists Fritz Haber and Carl Bosch developed the process in the first decade of the 20th century, and its improved efficiency over existing methods such as the Birkeland-Eyde and Frank-Caro processes was a major advancement in the industrial production of ammonia. The Haber process can be combined with steam reforming to produce ammonia with just three chemical inputs: water, natural gas, and atmospheric nitrogen. Both Haber and Bosch were eventually awarded the Nobel Prize in Chemistry: Haber in 1918 for ammonia synthesis specifically, and Bosch in 1931 for related contributions to high-pressure chemistry.
History.
During the 19th century, the demand for nitrates and ammonia for use as fertilizers, that supply plants the nutrients they need to grow, and industrial feedstocks rapidly increased. The main source was mining niter deposits and guano from tropical islands. At the beginning of the 20th century these reserves were thought insufficient to satisfy future demands, and research into new potential sources of ammonia increased. Although atmospheric nitrogen (N2) is abundant, comprising ~78% of the air, it is exceptionally stable and does not readily react with other chemicals.
Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at a laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from the air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial scale. He succeeded in 1910. Haber and Bosch were later awarded Nobel Prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology.
Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes/day in 1914. During World War I, the production of munitions required large amounts of nitrate. The Allies had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. India had large supplies too, but it was also controlled by the British. Moreover, even if German commercial interests had nominal legal control of such resources, the Allied powers controlled the sea lanes and imposed a highly effective blockade which would have prevented such supplies from reaching Germany. The Haber process proved so essential to the German war effort that it is considered virtually certain Germany would have been defeated in a matter of months without it. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives.
The original Haber–Bosch reaction chambers used osmium as the catalyst, but it was available in extremely small quantities. Haber noted uranium was almost as effective and easier to obtain than osmium. In 1909, BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst that is still used. A major contributor to the elucidation of this catalysis was Gerhard Ertl. The most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3.
During the interwar years, alternative processes were developed, most notably the Casale process, Claude process, and the Mont-Cenis process developed by Friedrich Uhde Ingenieurbüro. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. Claude proposed to have three or four converters with liquefaction steps in series, thereby avoiding recycling. Most plants continue to use the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization.
Process.
Combined with the energy needed to produce hydrogen and purified atmospheric nitrogen, ammonia production is energy-intensive, accounting for 1% to 2% of global energy consumption, 3% of global carbon emissions, and 3% to 5% of natural gas consumption. Hydrogen required for ammonia synthesis is most often produced through gasification of carbon-containing material, mostly natural gas, but other potential carbon sources include coal, petroleum, peat, biomass, or waste. As of 2012, 72% of global ammonia production used hydrogen made from natural gas by steam reforming. Hydrogen can also be produced from water and electricity using electrolysis: at one time, most of Europe's ammonia was produced from the Hydro plant at Vemork. Other possibilities include biological hydrogen production or photolysis, but at present, steam reforming of natural gas is the most economical means of mass-producing hydrogen.
The choice of catalyst is important for synthesizing ammonia. In 2012, Hideo Hosono's group found that Ru-loaded calcium-aluminum oxide C12A7:electride works well as a catalyst and pursued more efficient formation. This method is implemented in a small plant for ammonia synthesis in Japan. In 2019, Hosono's group found another catalyst, a novel perovskite oxynitride-hydride, that works at lower temperature and without costly ruthenium.
Hydrogen production.
The major source of hydrogen is methane. Steam reforming of natural gas extracts hydrogen from methane in a high-temperature and pressure tube inside a reformer with a nickel catalyst. Other fossil fuel sources include coal, heavy fuel oil and naphtha.
Green hydrogen is produced without fossil fuels or carbon dioxide emissions from biomass, water electrolysis and thermochemical (solar or another heat source) water splitting. However, these hydrogen sources are not economically competitive with steam reforming.
Starting with a natural gas (CH4) feedstock, the steps are as follows:
First, organosulfur compounds in the feed are converted to hydrogen sulfide (hydrodesulfurization):
<chem>H2 + RSH -> RH + H2S</chem>
The hydrogen sulfide is then removed by passing the gas over beds of zinc oxide:
<chem>H2S + ZnO -> ZnS + H2O</chem>
The cleaned gas is steam reformed over a nickel catalyst into carbon monoxide and hydrogen:
<chem>CH4 + H2O -> CO + 3 H2</chem>
The water-gas shift reaction converts the carbon monoxide into carbon dioxide and more hydrogen:
<chem>CO + H2O -> CO2 + H2</chem>
After the carbon dioxide is removed, remaining traces of carbon oxides are eliminated by catalytic methanation:
<chem>CO + 3 H2 -> CH4 + H2O</chem>
<chem>CO2 + 4 H2 -> CH4 + 2 H2O</chem>
Ammonia production.
The hydrogen is catalytically reacted with nitrogen (derived from process air) to form anhydrous liquid ammonia. This step is difficult and expensive: lower temperatures result in slower reaction kinetics (hence a slower reaction rate), while the high pressure requires high-strength pressure vessels that resist hydrogen embrittlement. Diatomic nitrogen is bound together by a triple bond, which makes it relatively inert. Yield and efficiency are low, meaning that the ammonia must be extracted and the gases reprocessed for the reaction to proceed at an acceptable pace.
This step is known as the ammonia synthesis loop:
<chem>3 H2 + N2 -> 2 NH3</chem>
The gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass to maintain a reasonable equilibrium constant. On each pass, only about 15% conversion occurs, but unreacted gases are recycled, and eventually an overall conversion of 97% is achieved.
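A back-of-the-envelope sketch of why recycling matters, treating the loop as repeated passes at a fixed 15% single-pass conversion and ignoring purge losses and changing gas composition:

```python
single_pass = 0.15
remaining = 1.0          # fraction of feed not yet converted
passes = 0
while 1.0 - remaining < 0.97:
    remaining *= 1.0 - single_pass
    passes += 1
print(passes, round(1.0 - remaining, 3))  # about 22 recycle passes -> 97.2%
```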
Due to the nature of the (typically multi-promoted magnetite) catalyst used in the ammonia synthesis reaction, only low levels of oxygen-containing (especially CO, CO2 and H2O) compounds can be tolerated in the hydrogen/nitrogen mixture. Relatively pure nitrogen can be obtained by air separation, but additional oxygen removal may be required.
Because of relatively low single pass conversion rates (typically less than 20%), a large recycle stream is required. This can lead to the accumulation of inerts in the gas.
Nitrogen gas (N2) is unreactive because the atoms are held together by triple bonds. The Haber process relies on catalysts that accelerate the scission of these bonds.
Two opposing considerations are relevant: the equilibrium position and the reaction rate. At room temperature, the equilibrium is in favor of ammonia, but the reaction does not proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant decreases with increasing temperature following Le Châtelier's principle. It becomes unity at around .
Above this temperature, the equilibrium quickly becomes unfavorable at atmospheric pressure, according to the Van 't Hoff equation. Lowering the temperature is unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient.
Increased pressure favors the forward reaction because 4 moles of reactant produce 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. The reason for this is evident in the equilibrium relationship:
formula_1
where formula_2 is the fugacity coefficient of species formula_3, formula_4 is the mole fraction of the same species, formula_5 is the reactor pressure, and formula_6 is standard pressure, typically .
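To see how the (P°/P)² factor drives the yield, the relationship above can be solved numerically for a stoichiometric 1:3 feed under the ideal-gas simplification (all fugacity coefficients set to 1). The equilibrium-constant value used below is illustrative only, not taken from the text:

```python
def nh3_mole_fraction(K, P, P0=1.0):
    # N2 + 3 H2 <=> 2 NH3, starting from 1 mol N2 and 3 mol H2;
    # x is the equilibrium extent of reaction (mol N2 converted).
    def K_of(x):
        total = 4 - 2 * x
        y_nh3 = 2 * x / total
        y_n2 = (1 - x) / total
        y_h2 = 3 * (1 - x) / total
        return (y_nh3 ** 2 / (y_h2 ** 3 * y_n2)) * (P0 / P) ** 2
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(100):          # bisection: K_of(x) rises monotonically in x
        mid = (lo + hi) / 2
        if K_of(mid) < K:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    return 2 * x / (4 - 2 * x)

# The ammonia fraction climbs steeply as pressure rises relative to P0:
for P in (10, 100, 200):
    print(P, round(nh3_mole_fraction(K=1e-4, P=P), 3))
```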
Economically, reactor pressurization is expensive: pipes, valves, and reaction vessels need to be strong enough, and safety considerations affect operating at 20 MPa. Compressors take considerable energy, as work must be done on the (compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%.
While removing the ammonia from the system increases the reaction yield, this step is not used in practice, since the temperature is too high; instead it is removed from the gases leaving the reaction vessel. The hot gases are cooled under high pressure, allowing the ammonia to condense and be removed as a liquid. Unreacted hydrogen and nitrogen gases are returned to the reaction vessel for another round. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream. In academic literature, a more complete separation of ammonia has been proposed by absorption in metal halides or zeolites. Such a process is called an "absorbent-enhanced Haber process" or "adsorbent-enhanced Haber–Bosch process".
Pressure/temperature.
The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at absolute pressures of about 25 to 35 bar, while the ammonia synthesis loop operates at temperatures of and pressures ranging from 60 to 180 bar depending upon the method used. The resulting ammonia must then be separated from the residual hydrogen and nitrogen at temperatures of .
Catalysts.
The Haber–Bosch process relies on catalysts to accelerate N2 hydrogenation. The catalysts are heterogeneous solids that interact with gaseous reagents.
The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, potassium hydroxide, molybdenum, and magnesium oxide.
Iron-based catalysts.
The iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron is oxidized to give magnetite or wüstite (FeO, ferrous oxide) particles of a specific size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of metallic iron. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its catalytic effectiveness. Minor components include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by hydrogen.
The production of the catalyst requires a particular melting process in which used raw materials must be free of catalyst poisons and the promoter aggregates must be evenly distributed in the magnetite melt. Rapid cooling of the magnetite, which has an initial temperature of about 3500 °C, produces the desired precursor. Unfortunately, the rapid cooling ultimately forms a catalyst of reduced abrasion resistance. Despite this disadvantage, the method of rapid cooling is often employed.
The reduction of the precursor magnetite to α-iron is carried out directly in the production plant with synthesis gas. The reduction of the magnetite proceeds via the formation of wüstite (FeO) so that particles with a core of magnetite become surrounded by a shell of wüstite. The further reduction of magnetite and wüstite leads to the formation of α-iron, which forms together with the promoters the outer shell. The involved processes are complex and depend on the reduction temperature: At lower temperatures, wüstite disproportionates into an iron phase and a magnetite phase; at higher temperatures, the reduction of the wüstite and magnetite to iron dominates.
The α-iron forms primary crystallites with a diameter of about 30 nanometers. These crystallites form a bimodal pore system with pore diameters of about 10 nanometers (produced by the reduction of the magnetite phase) and of 25 to 50 nanometers (produced by the reduction of the wüstite phase). With the exception of cobalt oxide, the promoters are not reduced.
During the reduction of the iron oxide with synthesis gas, water vapor is formed. This water vapor must be considered for high catalyst quality as contact with the finely divided iron would lead to premature aging of the catalyst through recrystallization, especially in conjunction with high temperatures. The vapor pressure of the water in the gas mixture produced during catalyst formation is thus kept as low as possible; target values are below 3 g/m3. For this reason, the reduction is carried out at high gas exchange, low pressure, and low temperatures. The exothermic nature of the ammonia formation ensures a gradual increase in temperature.
The reduction of fresh, fully oxidized catalyst or precursor to full production capacity takes four to ten days. The wüstite phase is reduced faster and at lower temperatures than the magnetite phase (Fe3O4). After detailed kinetic, microscopic, and X-ray spectroscopic investigations it was shown that wüstite reacts first to metallic iron. This leads to a gradient of iron(II) ions, whereby these diffuse from the magnetite through the wüstite to the particle surface and precipitate there as iron nuclei.
Pre-reduced, stabilized catalysts occupy a significant market share. They are delivered showing the fully developed pore structure, but have been oxidized again on the surface after manufacture and are therefore no longer pyrophoric. The reactivation of such pre-reduced catalysts requires only 30 to 40 hours instead of several days. In addition to the short start-up time, they have other advantages such as higher water resistance and lower weight.
Catalysts other than iron.
Many efforts have been made to improve the Haber–Bosch process. Many metals were tested as catalysts. The requirement for suitability is the dissociative adsorption of nitrogen (i.e. the nitrogen molecule must be split into nitrogen atoms upon adsorption). If the binding of the nitrogen is too strong, the catalyst is blocked and the catalytic ability is reduced (self-poisoning). The elements in the periodic table to the left of the iron group show such strong bonds. Further, the formation of surface nitrides makes, for example, chromium catalysts ineffective. Metals to the right of the iron group, in contrast, adsorb nitrogen too weakly for ammonia synthesis. Haber initially used catalysts based on osmium and uranium. Uranium reacts to its nitride during catalysis, while osmium is rare.
According to theoretical and practical studies, improvements over pure iron are limited. The activity of iron catalysts is increased by the inclusion of cobalt.
Ruthenium.
Ruthenium forms highly active catalysts. Allowing milder operating pressures and temperatures, Ru-based materials are referred to as second-generation catalysts. Such catalysts are prepared by the decomposition of triruthenium dodecacarbonyl on graphite. A drawback of activated-carbon-supported ruthenium-based catalysts is the methanation of the support in the presence of hydrogen. Their activity is strongly dependent on the catalyst carrier and the promoters. A wide range of substances can be used as carriers, including carbon, magnesium oxide, aluminium oxide, zeolites, spinels, and boron nitride.
Ruthenium-activated carbon-based catalysts have been used industrially in the KBR Advanced Ammonia Process (KAAP) since 1992. The carbon carrier is partially degraded to methane; however, this can be mitigated by a special treatment of the carbon at 1500 °C, thus prolonging the catalyst lifetime. In addition, the finely dispersed carbon poses a risk of explosion. For these reasons and due to its low acidity, magnesium oxide has proven to be a good choice of carrier. Carriers with acidic properties extract electrons from ruthenium, make it less reactive, and have the undesirable effect of binding ammonia to the surface.
Catalyst poisons.
Catalyst poisons lower catalyst activity. They are usually impurities in the synthesis gas. Permanent poisons cause irreversible loss of catalytic activity, while temporary poisons lower the activity while present. Sulfur compounds, phosphorus compounds, arsenic compounds, and chlorine compounds are permanent poisons. Oxygenic compounds like water, carbon monoxide, carbon dioxide, and oxygen are temporary poisons.
Although chemically inert components of the synthesis gas mixture such as noble gases or methane are not strictly poisons, they accumulate through the recycling of the process gases and thus lower the partial pressure of the reactants, which in turn slows conversion.
Industrial production.
Synthesis parameters.
The reaction is:
formula_7
The reaction is an exothermic equilibrium reaction in which the gas volume is reduced. The equilibrium constant Keq of the reaction is given by:
formula_8
Since the reaction is exothermic, the equilibrium of the reaction shifts at lower temperatures to the ammonia side. Furthermore, four volumetric units of the raw materials produce two volumetric units of ammonia. According to Le Chatelier's principle, higher pressure favours ammonia. High pressure is necessary to ensure sufficient surface coverage of the catalyst with nitrogen. For this reason, a nitrogen-to-hydrogen ratio of 1:3, a pressure of 250 to 350 bar, a temperature of 450 to 550 °C, and α-iron as the catalyst are optimal.
The catalyst ferrite (α-Fe) is produced in the reactor by the reduction of magnetite with hydrogen. The catalyst has its highest efficiency at temperatures of about 400 to 500 °C. Even though the catalyst greatly lowers the activation energy for the cleavage of the triple bond of the nitrogen molecule, high temperatures are still required for an appropriate reaction rate. At the industrially used reaction temperature of 450 to 550 °C an optimum between the decomposition of ammonia into the starting materials and the effectiveness of the catalyst is achieved. The formed ammonia is continuously removed from the system. The volume fraction of ammonia in the gas mixture is about 20%.
The inert components, especially the noble gases such as argon, should not exceed a certain content in order not to reduce the partial pressure of the reactants too much. To remove the inert gas components, part of the gas is removed and the argon is separated in a gas separation plant. The extraction of pure argon from the circulating gas is carried out using the Linde process.
Large-scale implementation.
Modern ammonia plants produce more than 3000 tons per day in one production line.
Depending on its origin, the synthesis gas must first be freed from impurities such as hydrogen sulfide or organic sulphur compounds, which act as a catalyst poison. High concentrations of hydrogen sulfide, which occur in synthesis gas from carbonization coke, are removed in a wet cleaning stage such as the sulfosolvan process, while low concentrations are removed by adsorption on activated carbon. Organosulfur compounds are separated by pressure swing adsorption together with carbon dioxide after CO conversion.
To produce hydrogen by steam reforming, methane reacts with water vapor using a nickel oxide-alumina catalyst in the primary reformer to form carbon monoxide and hydrogen. The energy required for this, the enthalpy ΔH, is 206 kJ/mol.
formula_9
The methane gas reacts in the primary reformer only partially. To increase the hydrogen yield and keep the content of inert components (i. e. methane) as low as possible, the remaining methane gas is converted in a second step with oxygen to hydrogen and carbon monoxide in the secondary reformer. The secondary reformer is supplied with air as the oxygen source. Also, the required nitrogen for the subsequent ammonia synthesis is added to the gas mixture.
formula_10
In the third step, the carbon monoxide is oxidized to carbon dioxide, which is called CO conversion or water-gas shift reaction.
formula_11
Carbon monoxide and carbon dioxide would form carbamates with ammonia, which would clog (as solids) pipelines and apparatus within a short time. In the following process step, the carbon dioxide must therefore be removed from the gas mixture. In contrast to carbon monoxide, carbon dioxide can easily be removed from the gas mixture by gas scrubbing with triethanolamine. The gas mixture then still contains methane and noble gases such as argon, which, however, behave inertly.
The gas mixture is then compressed to operating pressure by turbo compressors. The resulting compression heat is dissipated by heat exchangers; it is used to preheat raw gases.
The actual production of ammonia takes place in the ammonia reactor. The first reactors burst under the high pressure because atomic hydrogen in the carbonaceous steel recombined with carbon into methane and produced cracks in the steel. Bosch therefore developed tube reactors consisting of a pressure-bearing steel tube in which a low-carbon iron lining tube was inserted and filled with the catalyst. Hydrogen that diffused through the inner steel pipe escaped to the outside via thin holes in the outer steel jacket, the so-called Bosch holes. A disadvantage of the tubular reactors was the relatively high pressure loss, which had to be applied again by compression. The development of hydrogen-resistant chromium-molybdenum steels made it possible to construct single-walled pipes.
Modern ammonia reactors are designed as multi-storey reactors with a low-pressure drop, in which the catalysts are distributed as fills over about ten storeys one above the other. The gas mixture flows through them one after the other from top to bottom. Cold gas is injected from the side for cooling. A disadvantage of this reactor type is the incomplete conversion of the cold gas mixture in the last catalyst bed.
Alternatively, the reaction mixture between the catalyst layers is cooled using heat exchangers, whereby the hydrogen-nitrogen mixture is preheated to the reaction temperature. Reactors of this type have three catalyst beds. In addition to good temperature control, this reactor type has the advantage of better conversion of the raw material gases compared to reactors with cold gas injection.
Uhde has developed and is using an ammonia converter with three radial flow catalyst beds and two internal heat exchangers instead of axial flow catalyst beds. This further reduces the pressure drop in the converter.
The reaction product is continuously removed for maximum yield. The gas mixture is cooled to 450 °C in a heat exchanger using water, freshly supplied gases, and other process streams. The ammonia also condenses and is separated in a pressure separator. Unreacted nitrogen and hydrogen are then compressed back to the process by a circulating gas compressor, supplemented with fresh gas, and fed to the reactor. In a subsequent distillation, the product ammonia is purified.
Mechanism.
Elementary steps.
The mechanism of ammonia synthesis contains the following seven elementary steps:
1. transport of the reactants from the gas phase through the boundary layer to the surface of the catalyst,
2. pore diffusion to the reaction center,
3. adsorption of the reactants,
4. reaction,
5. desorption of the products,
6. transport of the products through the pore system back to the surface,
7. transport of the products into the gas phase.
Transport and diffusion (the first and last two steps) are fast compared to adsorption, reaction, and desorption because of the shell structure of the catalyst. It is known from various investigations that the rate-determining step of the ammonia synthesis is the dissociation of nitrogen. In contrast, exchange reactions between hydrogen and deuterium on the Haber–Bosch catalysts still take place at temperatures of at a measurable rate; the exchange between deuterium and hydrogen on the ammonia molecule also takes place at room temperature. Since the adsorption of both molecules is rapid, it cannot determine the speed of ammonia synthesis.
In addition to the reaction conditions, the adsorption of nitrogen on the catalyst surface depends on the microscopic structure of the catalyst surface. Iron has different crystal surfaces, whose reactivity is very different. The Fe(111) and Fe(211) surfaces have by far the highest activity. The explanation for this is that only these surfaces have so-called C7 sites – these are iron atoms with seven closest neighbours.
The dissociative adsorption of nitrogen on the surface follows the following scheme, where S* symbolizes an iron atom on the surface of the catalyst:
N2 → S*–N2 (γ-species) → S*–N2–S* (α-species) → 2 S*–N (β-species, "surface nitride")
The adsorption of nitrogen is similar to the chemisorption of carbon monoxide. On a Fe(111) surface, the adsorption of nitrogen first leads to an adsorbed γ-species with an adsorption energy of 24 kJmol−1 and an N-N stretch vibration of 2100 cm−1. Since the nitrogen is isoelectronic to carbon monoxide, it adsorbs in an on-end configuration in which the molecule is bound perpendicular to the metal surface at one nitrogen atom. This has been confirmed by photoelectron spectroscopy.
Ab-initio-MO calculations have shown that, in addition to the σ binding of the free electron pair of nitrogen to the metal, there is a π binding from the d orbitals of the metal to the π* orbitals of nitrogen, which strengthens the iron-nitrogen bond. The nitrogen in the α state is more strongly bound with 31 kJmol−1. The resulting N–N bond weakening could be experimentally confirmed by a reduction of the wave numbers of the N–N stretching oscillation to 1490 cm−1.
Further heating of the Fe(111) area covered by α-N2 leads to both desorption and the emergence of a new band at 450 cm−1. This represents a metal-nitrogen oscillation, the β state. A comparison with vibration spectra of complex compounds allows the conclusion that the N2 molecule is bound "side-on", with an N atom in contact with a C7 site. This structure is called "surface nitride". The surface nitride is very strongly bound to the surface. Hydrogen atoms (Hads), which are very mobile on the catalyst surface, quickly combine with it.
Infrared spectroscopically detected surface imides (NHad), surface amides (NH2,ad) and surface ammoniacates (NH3,ad) are formed, the latter decay under NH3 release (desorption). The individual molecules were identified or assigned by X-ray photoelectron spectroscopy (XPS), high-resolution electron energy loss spectroscopy (HREELS) and Ir Spectroscopy.
On the basis of these experimental findings, the reaction mechanism is believed to involve the following steps:
1. N2 (g) → N2 (adsorbed)
2. N2 (adsorbed) → 2 N (adsorbed)
3. H2 (g) → H2 (adsorbed)
4. H2 (adsorbed) → 2 H (adsorbed)
5. N (adsorbed) + 3 H (adsorbed) → NH3 (adsorbed)
6. NH3 (adsorbed) → NH3 (g)
Reaction 5 occurs in three steps, forming NH, NH2, and then NH3. Experimental evidence points to reaction 2 as being slow, rate-determining step. This is not unexpected, since that step breaks the nitrogen triple bond, the strongest of the bonds broken in the process.
As with all Haber–Bosch catalysts, nitrogen dissociation is the rate-determining step for ruthenium-activated carbon catalysts. The active center for ruthenium is a so-called B5 site, a 5-fold coordinated position on the Ru(0001) surface where two ruthenium atoms form a step edge with three ruthenium atoms on the Ru(0001) surface. The number of B5 sites depends on the size and shape of the ruthenium particles, the ruthenium precursor and the amount of ruthenium used. The reinforcing effect of the basic carrier used in the ruthenium catalyst is similar to the promoter effect of alkali metals used in the iron catalyst.
Energy diagram.
An energy diagram can be created based on the enthalpy of reaction of the individual steps. The energy diagram can be used to compare homogeneous and heterogeneous reactions: due to the high activation energy of the dissociation of nitrogen, the homogeneous gas phase reaction is not realizable. The catalyst avoids this problem as the energy gain resulting from the binding of nitrogen atoms to the catalyst surface overcompensates for the necessary dissociation energy so that the reaction is finally exothermic. Nevertheless, the dissociative adsorption of nitrogen remains the rate-determining step: not because of the activation energy, but mainly because of the unfavorable pre-exponential factor of the rate constant. Although hydrogenation is endothermic, this energy can easily be applied by the reaction temperature (about 700 K).
Economic and environmental aspects.
When first invented, the Haber process competed against another industrial process, the cyanamide process. However, the cyanamide process consumed large amounts of electrical power and was more labor-intensive than the Haber process.
As of 2018, the Haber process produces 230 million tonnes of anhydrous ammonia per year. The ammonia is used mainly as a nitrogen fertilizer as ammonia itself, in the form of ammonium nitrate, and as urea. The Haber process consumes 3–5% of the world's natural gas production (around 1–2% of the world's energy supply). In combination with advances in breeding, herbicides, and pesticides, these fertilizers have helped to increase the productivity of agricultural land.
<templatestyles src="Template:Blockquote/styles.css" />
The energy intensity of the process contributes to climate change and other environmental problems such as the leaching of nitrates into groundwater, rivers, ponds, and lakes; expanding dead zones in coastal ocean waters, resulting from recurrent eutrophication; atmospheric deposition of nitrates and ammonia affecting natural ecosystems; and higher emissions of nitrous oxide (N2O), now the third most important greenhouse gas after CO2 and CH4. The Haber–Bosch process is one of the largest contributors to the buildup of reactive nitrogen in the biosphere, causing an anthropogenic disruption of the nitrogen cycle.
Since nitrogen use efficiency is typically less than 50%, farm runoff from heavy use of fixed industrial nitrogen disrupts biological habitats.
Nearly 50% of the nitrogen found in human tissues originated from the Haber–Bosch process. Thus, the Haber process serves as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2018.
Reverse fuel cell technology converts electric energy, water and nitrogen into ammonia without a separate hydrogen electrolysis process.
The use of synthetic nitrogen fertilisers reduces the incentive for farmers to use more sustainable crop rotations which include legumes for their natural nitrogen-fixing ability.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{N2 + 3H2 <=> 2NH3} \\qquad {\\Delta H^\\circ = -92.28 \\; \\ce{kJ}} \\ ({\\Delta H^\\circ_{298 \\mathrm K} = -46.14 \\; \\mathrm{kJ/mol}})"
},
{
"math_id": 1,
"text": "K = \\frac{y_\\ce{NH3}^2}{y_\\ce{H2}^3 y_\\ce{N2}} \\frac{\\hat\\phi_\\ce{NH3}^2}{\\hat\\phi_\\ce{H2}^3 \\hat\\phi_\\ce{N2}} \\left(\\frac{P^\\circ}{P}\\right)^2,"
},
{
"math_id": 2,
"text": "\\hat\\phi_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "y_i"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "P^\\circ"
},
{
"math_id": 7,
"text": "\\ce{N2 + 3H2 <=> 2NH3} \\qquad {\\Delta H^\\circ = -92.28 \\; \\ce{kJ}} \\ ({\\Delta H^\\circ_{298 \\mathrm K} = -46.14 \\; \\mathrm{kJ/mol}}) "
},
{
"math_id": 8,
"text": "K_{eq} = \\frac{p^2 \\ce{(NH3)}}{p \\ce{(N2)}\\cdot p^3 \\ce{(H2)}}"
},
{
"math_id": 9,
"text": "\\ce{ {CH4_{(g)} } + H2O_{(g)} -> {CO_{(g)} } + 3H2_{(g)} } \\qquad {\\Delta H = +206\\ \\ce{kJ/mol} }"
},
{
"math_id": 10,
"text": "\\ce{ {2CH4_{(g)} } + O2_{(g)} -> {2CO_{(g)} } + 4H2_{(g)} } \\qquad {\\Delta H = -71\\ \\ce{kJ/mol} }"
},
{
"math_id": 11,
"text": "\\ce{ {CO_{(g)} } + H2O(g) -> {CO2_{(g)} } + H2_{(g)}} \\qquad {\\Delta H = -41\\ \\ce{kJ/mol} }"
}
]
| https://en.wikipedia.org/wiki?curid=14022 |
14023830 | 10-hydroxydihydrosanguinarine 10-O-methyltransferase | Class of enzymes
In enzymology, a 10-hydroxydihydrosanguinarine 10-O-methyltransferase (EC 2.1.1.119) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 10-hydroxydihydrosanguinarine formula_0 S-adenosyl-L-homocysteine + dihydrochelirubine
Thus, the two substrates of this enzyme are S-adenosyl methionine and 10-hydroxydihydrosanguinarine, whereas its two products are S-adenosylhomocysteine and dihydrochelirubine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:10-hydroxydihydrosanguinarine 10-O-methyltransferase. This enzyme participates in alkaloid biosynthesis I.
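The EC number above (2.1.1.119) is the stable key for this enzyme class. For readers who want to pull the underlying record programmatically, KEGG exposes EC entries as plain text over its REST interface; the following is a minimal sketch, assuming network access to rest.kegg.jp.

```python
# Minimal sketch: fetch the KEGG record for EC 2.1.1.119 (the enzyme above).
# Assumes network access to KEGG's public REST interface (rest.kegg.jp).
import urllib.request

ec_number = "2.1.1.119"
url = f"https://rest.kegg.jp/get/ec:{ec_number}"

with urllib.request.urlopen(url) as response:
    entry = response.read().decode("utf-8")

print(entry)  # flat-text record: name, reaction, pathway, links, etc.
```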
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023830 |
14023839 | 12-hydroxydihydrochelirubine 12-O-methyltransferase | Class of enzymes
In enzymology, a 12-hydroxydihydrochelirubine 12-O-methyltransferase (EC 2.1.1.120) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 12-hydroxydihydrochelirubine formula_0 S-adenosyl-L-homocysteine + dihydromacarpine
Thus, the two substrates of this enzyme are S-adenosyl methionine and 12-hydroxydihydrochelirubine, whereas its two products are S-adenosylhomocysteine and dihydromacarpine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:12-hydroxydihydrochelirubine 12-O-methyltransferase. This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023839 |
14023851 | 24-methylenesterol C-methyltransferase | Class of enzymes
In enzymology, a 24-methylenesterol C-methyltransferase (EC 2.1.1.143) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 24-methylenelophenol formula_0 S-adenosyl-L-homocysteine + (Z)-24-ethylidenelophenol
Thus, the two substrates of this enzyme are S-adenosyl methionine and 24-Methylenelophenol, whereas its two products are S-adenosylhomocysteine and (Z)-24-ethylidenelophenol.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:24-methylenelophenol C-methyltransferase. Other names in common use include SMT2, and 24-methylenelophenol C-241-methyltransferase. This enzyme participates in the biosynthesis of steroids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023851 |
14023870 | 3,7-dimethylquercetin 4'-O-methyltransferase | Class of enzymes
In enzymology, a 3,7-dimethylquercetin 4'-O-methyltransferase (EC 2.1.1.83) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 5,3',4'-trihydroxy-3,7-dimethoxyflavone formula_0 S-adenosyl-L-homocysteine + 5,3'-dihydroxy-3,7,4'-trimethoxyflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and 5,3',4'-trihydroxy-3,7-dimethoxyflavone (rhamnazin), whereas its two products are S-adenosylhomocysteine and 5,3'-dihydroxy-3,7,4'-trimethoxyflavone (ayanin).
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:5,3',4'-trihydroxy-3,7-dimethoxyflavone 4'-O-methyltransferase. Other names in common use include flavonol 4'-O-methyltransferase, flavonol 4'-methyltransferase, 4'-OMT, S-adenosyl-L-methionine:3',4',5-trihydroxy-3,7-dimethoxyflavone 4'-O-methyltransferase, and 3,7-dimethylquercitin 4'-O-methyltransferase [mis-spelt].
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023870 |
14023885 | 3'-demethylstaurosporine O-methyltransferase | Class of enzymes
In enzymology, a 3'-demethylstaurosporine O-methyltransferase (EC 2.1.1.139) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3'-demethylstaurosporine formula_0 S-adenosyl-L-homocysteine + staurosporine
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3'-demethylstaurosporine, whereas its two products are S-adenosylhomocysteine and staurosporine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:3'-demethylstaurosporine O-methyltransferase. Other names in common use include 3'-demethoxy-3'-hydroxystaurosporine O-methyltransferase, and staurosporine synthase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023885 |
14023901 | 3-demethylubiquinone-9 3-O-methyltransferase | Class of enzymes
In enzymology, a 3-demethylubiquinone-9 3-O-methyltransferase (EC 2.1.1.64) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3-demethylubiquinone-9 formula_0 S-adenosyl-L-homocysteine + ubiquinone-9
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3-demethylubiquinone-9, whereas its two products are S-adenosylhomocysteine and ubiquinone-9.
This enzyme participates in ubiquinone biosynthesis.
Nomenclature.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:2-nonaprenyl-3-methyl-5-hydroxy-6-methoxy-1,4-benzoquinone 3-O-methyltransferase. Other names in common use include 5-demethylubiquinone-9 methyltransferase, OMHMB-methyltransferase, 2-octaprenyl-3-methyl-5-hydroxy-6-methoxy-1,4-benzoquinone methyltransferase, and S-adenosyl-L-methionine:2-octaprenyl-3-methyl-5-hydroxy-6-methoxy-1,4-benzoquinone-O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023901 |
14023913 | 3-hydroxy-16-methoxy-2,3-dihydrotabersonine N-methyltransferase | Class of enzymes
In enzymology, a 3-hydroxy-16-methoxy-2,3-dihydrotabersonine N-methyltransferase (EC 2.1.1.99) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3-hydroxy-16-methoxy-2,3-dihydrotabersonine formula_0 S-adenosyl-L-homocysteine + deacetoxyvindoline
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3-hydroxy-16-methoxy-2,3-dihydrotabersonine, whereas its two products are S-adenosylhomocysteine and deacetoxyvindoline.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:3-hydroxy-16-methoxy-2,3-dihydrotabersonine N-methyltransferase. Other names in common use include 16-methoxy-2,3-dihydro-3-hydroxytabersonine methyltransferase, NMT, 16-methoxy-2,3-dihydro-3-hydroxytabersonine N-methyltransferase, and S-adenosyl-L-methionine:16-methoxy-2,3-dihydro-3-hydroxytabersonine N-methyltransferase. This enzyme participates in terpene indole and ipecac alkaloid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023913 |
14023922 | 3-hydroxyanthranilate 4-C-methyltransferase | Class of enzymes
In enzymology, a 3-hydroxyanthranilate 4-C-methyltransferase (EC 2.1.1.97) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3-hydroxyanthranilate formula_0 S-adenosyl-L-homocysteine + 3-hydroxy-4-methylanthranilate
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3-hydroxyanthranilate, whereas its two products are S-adenosylhomocysteine and 3-hydroxy-4-methylanthranilate.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:3-hydroxyanthranilate 4-C-methyltransferase. This enzyme is also called 3-hydroxyanthranilate 4-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023922 |
14023934 | 3'-hydroxy-N-methyl-(S)-coclaurine 4'-O-methyltransferase | Class of enzymes
In enzymology, a 3'-hydroxy-N-methyl-(S)-coclaurine 4'-O-methyltransferase (EC 2.1.1.116) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3'-hydroxy-N-methyl-(S)-coclaurine formula_0 S-adenosyl-L-homocysteine + (S)-reticuline
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3'-hydroxy-N-methyl-(S)-coclaurine, whereas its two products are S-adenosylhomocysteine and (S)-reticuline.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:3'-hydroxy-N-methyl-(S)-coclaurine 4'-O-methyltransferase. This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023934 |
14023946 | 3-methylquercetin 7-O-methyltransferase | Class of enzymes
In enzymology, a 3-methylquercetin 7-O-methyltransferase (EC 2.1.1.82) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 5,7,3',4'-tetrahydroxy-3-methoxyflavone formula_0 S-adenosyl-L-homocysteine + 5,3',4'-trihydroxy-3,7-dimethoxyflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and 5,7,3',4'-tetrahydroxy-3-methoxyflavone (isorhamnetin), whereas its two products are S-adenosylhomocysteine and 5,3',4'-trihydroxy-3,7-dimethoxyflavone (rhamnazin).
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:5,7,3',4'-tetrahydroxy-3-methoxyflavone 7-O-methyltransferase. Other names in common use include flavonol 7-O-methyltransferase, flavonol 7-methyltransferase, 7-OMT, S-adenosyl-L-methionine:3',4',5,7-tetrahydroxy-3-methoxyflavone 7-O-methyltransferase, and 3-methylquercitin 7-O-methyltransferase [mis-spelt].
The enzyme can be found in "Chrysosplenium americanum" (American Golden Saxifrage).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023946 |
14023957 | 5-Methyltetrahydropteroyltriglutamate—homocysteine S-methyltransferase | Class of enzymes
In enzymology, a 5-methyltetrahydropteroyltriglutamate—homocysteine "S"-methyltransferase (EC 2.1.1.14) is an enzyme that catalyzes the chemical reaction
5-methyltetrahydropteroyltri-L-glutamate + L-homocysteine formula_0 tetrahydropteroyltri-L-glutamate + L-methionine
Thus, the two substrates of this enzyme are 5-methyltetrahydropteroyltri-L-glutamate and L-homocysteine, whereas its two products are tetrahydropteroyltri-L-glutamate and L-methionine. This enzyme participates in methionine metabolism. It has two cofactors: orthophosphate and zinc.
Nomenclature.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is 5-methyltetrahydropteroyltri-L-glutamate:L-homocysteine S-methyltransferase. Other names in common use include tetrahydropteroyltriglutamate methyltransferase, homocysteine methylase, methyltransferase, tetrahydropteroylglutamate-homocysteine transmethylase, methyltetrahydropteroylpolyglutamate:homocysteine methyltransferase, cobalamin-independent methionine synthase, methionine synthase (cobalamin-independent), and MetE.
Structure.
The enzyme from "Escherichia coli" consists of two α8β8 (TIM) barrels positioned face to face, which are thought to have evolved by gene duplication. The active site lies between the tops of the two barrels; the "N"-terminal barrel binds 5-methyltetrahydropteroyltri-L-glutamic acid and the "C"-terminal barrel binds homocysteine. Homocysteine is coordinated to a zinc ion, as initially suggested by spectroscopy and mutagenesis.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023957 |
14023976 | 6-hydroxymellein O-methyltransferase | Class of enzymes
In enzymology, a 6-hydroxymellein O-methyltransferase (EC 2.1.1.108) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 6-hydroxymellein formula_0 S-adenosyl-L-homocysteine + 6-methoxymellein
Thus, the two substrates of this enzyme are S-adenosyl methionine and 6-hydroxymellein, whereas its two products are S-adenosylhomocysteine and 6-methoxymellein.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:6-hydroxymellein 6-O-methyltransferase. This enzyme is also called 6-hydroxymellein methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023976 |
14023988 | 6-O-methylnorlaudanosoline 5'-O-methyltransferase | In enzymology, a 6-O-methylnorlaudanosoline 5'-O-methyltransferase (EC 2.1.1.121) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 6-O-methylnorlaudanosoline formula_0 S-adenosyl-L-homocysteine + nororientaline
Thus, the two substrates of this enzyme are S-adenosyl methionine and 6-O-methylnorlaudanosoline, whereas its two products are S-adenosylhomocysteine and nororientaline.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:6-O-methylnorlaudanosoline 5'-O-methyltransferase. This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14023988 |
14024002 | 7-methylxanthosine synthase | Class of enzymes
In enzymology, a 7-methylxanthosine synthase (EC 2.1.1.158) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + xanthosine formula_0 S-adenosyl-L-homocysteine + 7-methylxanthosine
Thus, the two substrates of this enzyme are S-adenosyl methionine and xanthosine, whereas its two products are S-adenosylhomocysteine and 7-methylxanthosine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:xanthosine N7-methyltransferase. Other names in common use include xanthosine methyltransferase, XMT, xanthosine:S-adenosyl-L-methionine methyltransferase, CtCS1, CmXRS1, CaXMT1, and S-adenosyl-L-methionine:xanthosine 7-N-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024002 |
14024016 | 8-hydroxyquercetin 8-O-methyltransferase | Class of enzymes
In enzymology, a 8-hydroxyquercetin 8-O-methyltransferase (EC 2.1.1.88) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3,5,7,8,3',4'-hexahydroxyflavone formula_0 S-adenosyl-L-homocysteine + 3,5,7,3',4'-pentahydroxy-8-methoxyflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3,5,7,8,3',4'-hexahydroxyflavone (gossypetin), whereas its two products are S-adenosylhomocysteine and 3,5,7,3',4'-pentahydroxy-8-methoxyflavone.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:3,5,7,8,3',4'-hexahydroxyflavone 8-O-methyltransferase. Other names in common use include flavonol 8-O-methyltransferase, flavonol 8-methyltransferase, S-adenosyl-L-methionine:3,3',4',5,7,8-hexahydroxyflavone 8-O-methyltransferase, and 8-hydroxyquercitin 8-O-methyltransferase [mis-spelt].
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024016 |
14024030 | Amine N-methyltransferase | Class of enzymes
Amine "N"-methyltransferase (EC 2.1.1.49), also called indolethylamine "N"-methyltransferase, and thioether "S"-methyltransferase, is an enzyme that is ubiquitously present in non-neural tissues and catalyzes the "N"-methylation of tryptamine and structurally related compounds. More recently, it was discovered that this enzyme can also catalyze the methylation of thioether and selenoether compounds, although the physiological significance of this biotransformation is not yet known.
The chemical reaction taking place is:
"S"-adenosyl-L-methionine + an amine formula_0 "S"-adenosyl-L-homocysteine + a methylated amine
Thus, the two substrates of this enzyme are "S"-adenosyl methionine and amine, whereas its two products are "S"-adenosylhomocysteine and methylated amine. In the case of tryptamine and serotonin these then become the dimethylated indolethylamines "N","N"-dimethyltryptamine (DMT) and bufotenine respectively.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is "S"-adenosyl-L-methionine:amine "N"-methyltransferase. Other names in common use include nicotine "N"-methyltransferase, tryptamine "N"-methyltransferase, indolethylamine "N"-methyltransferase, and arylamine "N"-methyltransferase. This enzyme participates in tryptophan metabolism.
A wide range of primary, secondary and tertiary amines can act as acceptors, including tryptamine, aniline, nicotine and a variety of drugs and other xenobiotics.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2A14.
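The accession code cited above can be fetched directly from the Protein Data Bank. A minimal sketch using Biopython's PDBList follows; it assumes the biopython package is installed and that network access to the PDB is available.

```python
# Minimal sketch: download PDB entry 2A14 (the structure cited above).
# Assumes the `biopython` package is installed and network access to the PDB.
from Bio.PDB import PDBList

pdbl = PDBList()
# Retrieves the entry in mmCIF format into the current directory and
# returns the local file path.
path = pdbl.retrieve_pdb_file("2A14", pdir=".", file_format="mmCif")
print("Downloaded structure to:", path)
```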
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024030 |
14024043 | Anthranilate N-methyltransferase | In enzymology, an anthranilate N-methyltransferase (EC 2.1.1.111) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + anthranilate formula_0 S-adenosyl-L-homocysteine + N-methylanthranilate
Thus, the two substrates of this enzyme are S-adenosyl methionine and anthranilate, whereas its two products are S-adenosylhomocysteine and N-methylanthranilate.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:anthranilate N-methyltransferase. This enzyme is also called anthranilic acid N-methyltransferase. This enzyme participates in acridone alkaloid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024043 |
14024059 | Apigenin 4'-O-methyltransferase | In enzymology, an apigenin 4'-O-methyltransferase (EC 2.1.1.75) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 5,7,4'-trihydroxyflavone formula_0 S-adenosyl-L-homocysteine + 4'-methoxy-5,7-dihydroxyflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and 5,7,4'-trihydroxyflavone (apigenin), whereas its two products are S-adenosylhomocysteine and 4'-methoxy-5,7-dihydroxyflavone (acacetin).
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:5,7,4'-trihydroxyflavone 4'-O-methyltransferase. Other names in common use include flavonoid O-methyltransferase, and flavonoid methyltransferase. This enzyme participates in flavonoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024059 |
14024079 | Caffeate O-methyltransferase | Enzyme
In enzymology, a caffeate "O"-methyltransferase (EC 2.1.1.68) is an enzyme that catalyzes the chemical reaction
SAM + caffeic acid formula_0 SAH + ferulic acid
Thus, the two substrates of this enzyme are "S"-adenosyl methionine and 3,4-dihydroxy-"trans"-cinnamate (caffeic acid), whereas its two products are "S"-adenosylhomocysteine and 3-methoxy-4-hydroxy-"trans"-cinnamate (ferulic acid).
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is "S"-adenosyl-L-methionine:3,4-dihydroxy-"trans"-cinnamate 3-"O"-methyltransferase. Other names in common use include caffeate methyltransferase, caffeate 3-"O"-methyltransferase, and "S"-adenosyl-L-methionine:caffeic acid "O"-methyltransferase. This enzyme participates in phenylpropanoid biosynthesis.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1KYW and 1KYZ.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024079 |
14024091 | Caffeoyl-CoA O-methyltransferase | In enzymology, a caffeoyl-CoA O-methyltransferase (EC 2.1.1.104) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + caffeoyl-CoA formula_0 S-adenosyl-L-homocysteine + feruloyl-CoA
Thus, the two substrates of this enzyme are S-adenosyl methionine and caffeoyl-CoA, whereas its two products are S-adenosylhomocysteine and feruloyl-CoA. A large number of natural products are generated via a step involving this enzyme.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:caffeoyl-CoA 3-O-methyltransferase. Other names in common use include caffeoyl coenzyme A methyltransferase, caffeoyl-CoA 3-O-methyltransferase, and trans-caffeoyl-CoA 3-O-methyltransferase. This enzyme participates in phenylpropanoid biosynthesis.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1SUI and 1SUS.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024091 |
14024103 | Calmodulin-lysine N-methyltransferase | Enzyme
In enzymology, a calmodulin-lysine N-methyltransferase (EC 2.1.1.60) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + calmodulin L-lysine formula_0 S-adenosyl-L-homocysteine + calmodulin N6-methyl-L-lysine
Thus, the two substrates of this enzyme are S-adenosyl methionine and calmodulin L-lysine, whereas its two products are S-adenosylhomocysteine and calmodulin N6-methyl-L-lysine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:calmodulin-L-lysine N6-methyltransferase. Other names in common use include S-adenosylmethionine:calmodulin (lysine) N-methyltransferase and S-adenosyl-L-methionine:calmodulin-L-lysine 6-N-methyltransferase. This enzyme participates in lysine degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024103 |
14024116 | Carnosine N-methyltransferase | In enzymology, a carnosine N-methyltransferase (EC 2.1.1.22) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + carnosine formula_0 S-adenosyl-L-homocysteine + anserine
Thus, the two substrates of this enzyme are S-adenosyl methionine and carnosine, whereas its two products are S-adenosylhomocysteine and anserine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:carnosine N-methyltransferase. This enzyme participates in histidine metabolism.
Gene.
The genes encoding carnosine "N"-methyltransferase activity were identified by Jakub Drozak and coworkers in 2013 and 2015. In birds and reptiles, the enzyme is encoded by a histamine "N"-methyltransferase-like gene (HNMT-like). Importantly, the HNMT-like gene is absent from available mammalian genomes; in mammalian species, the formation of anserine is catalyzed by a methyltransferase that is unrelated to the reptilian and avian enzyme and is encoded by the C9orf41/UPF0586 gene.
Protein Nomenclature.
Currently, the avian-reptilian enzyme encoded by HNMT-like gene is labeled as carnosine "N"-methyltransferase 2 in public databases, while the mammalian methyltransferase is named carnosine "N"-methyltransferase 1 (CARNMT1).
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024116 |
14024124 | Chlorophenol O-methyltransferase | In enzymology, a chlorophenol O-methyltransferase (EC 2.1.1.136) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + trichlorophenol formula_0 S-adenosyl-L-homocysteine + trichloroanisole
Thus, the two substrates of this enzyme are S-adenosyl methionine and trichlorophenol, whereas its two products are S-adenosylhomocysteine and trichloroanisole.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:trichlorophenol O-methyltransferase. Other names in common use include halogenated phenol O-methyltransferase and trichlorophenol O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024124 |
14024140 | Cobalt-factor II C20-methyltransferase | In enzymology, a cobalt-factor II C20-methyltransferase (EC 2.1.1.151) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + cobalt-factor II formula_0 S-adenosyl-L-homocysteine + cobalt-factor III
The two substrates of this enzyme are S-adenosyl methionine and cobalt-factor II; its two products are S-adenosylhomocysteine and cobalt-factor III.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:cobalt-factor-II C20-methyltransferase. This enzyme is also called CbiL. This enzyme is part of the biosynthetic pathway to cobalamin (vitamin B12) in anaerobic bacteria such as "Salmonella typhimurium" and "Bacillus megaterium".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024140 |
14024161 | Columbamine O-methyltransferase | Enzyme
In enzymology, a columbamine O-methyltransferase (EC 2.1.1.118) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + columbamine formula_0 S-adenosyl-L-homocysteine + palmatine
Thus, the two substrates of this enzyme are S-adenosyl methionine and columbamine, whereas its two products are S-adenosylhomocysteine and palmatine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:columbamine O-methyltransferase. This enzyme participates in alkaloid biosynthesis I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024161 |
14024177 | Corydaline synthase | Enzyme
In enzymology, a corydaline synthase (EC 2.1.1.147) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + palmatine + 2 NADPH + H+ formula_0 S-adenosyl-L-homocysteine + corydaline + 2 NADP+
The 4 substrates of this enzyme are S-adenosyl methionine, palmatine, NADPH, and H+, whereas its 3 products are S-adenosylhomocysteine, corydaline, and NADP+.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:protoberberine 13-C-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024177 |
14024187 | Cycloartenol 24-C-methyltransferase | In enzymology, a cycloartenol 24-C-methyltransferase (EC 2.1.1.142) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + cycloartenol formula_0 S-adenosyl-L-homocysteine + (24R)-24-methylcycloart-25-en-3beta-ol
Thus, the two substrates of this enzyme are S-adenosyl methionine and cycloartenol, whereas its two products are S-adenosylhomocysteine and (24R)-24-methylcycloart-25-en-3beta-ol.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:cycloartenol 24-C-methyltransferase. This enzyme is also called sterol C-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024187 |
14024205 | Cyclopropane-fatty-acyl-phospholipid synthase | In enzymology, a cyclopropane-fatty-acyl-phospholipid synthase (EC 2.1.1.79) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + phospholipid olefinic fatty acid formula_0 S-adenosyl-L-homocysteine + phospholipid cyclopropane fatty acid
Thus, the two substrates of this enzyme are S-adenosyl methionine and phospholipid olefinic fatty acid, whereas its two products are S-adenosylhomocysteine and phospholipid cyclopropane fatty acid.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:unsaturated-phospholipid methyltransferase (cyclizing). Other names in common use include cyclopropane synthetase, unsaturated-phospholipid methyltransferase, cyclopropane synthase, cyclopropane fatty acid synthase, cyclopropane fatty acid synthetase, and CFA synthase.
Structural studies.
As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1KP9, 1KPG, 1KPH, 1KPI, 1L1E, and 1TPY.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024205 |
14024228 | (cytochrome c)-arginine N-methyltransferase | Class of enzymes
In enzymology, a [cytochrome c]-arginine N-methyltransferase (EC 2.1.1.124) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + [cytochrome c]-arginine formula_0 S-adenosyl-L-homocysteine + [cytochrome c]-Nomega-methyl-arginine
Thus, the two substrates of this enzyme are S-adenosyl methionine and cytochrome c-arginine, whereas its two products are S-adenosylhomocysteine and cytochrome c-Nomega-methyl-arginine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:[cytochrome c]-arginine Nomega-methyltransferase. Another name in common use is S-adenosyl-L-methionine:[cytochrome c]-arginine omega-N-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024228 |
14024250 | (cytochrome c)-lysine N-methyltransferase | Class of enzymes
In enzymology, a [cytochrome c]-lysine N-methyltransferase (EC 2.1.1.59) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + [cytochrome c]-L-lysine formula_0 S-adenosyl-L-homocysteine + [cytochrome c]-N6-methyl-L-lysine
Thus, the two substrates of this enzyme are S-adenosyl methionine and cytochrome c-L-lysine, whereas its two products are S-adenosylhomocysteine and cytochrome c-N6-methyl-L-lysine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:[cytochrome c]-L-lysine N6-methyltransferase. Other names in common use include cytochrome c (lysine) methyltransferase, cytochrome c methyltransferase, cytochrome c-specific protein methylase III, cytochrome c-specific protein-lysine methyltransferase, and S-adenosyl-L-methionine:[cytochrome c]-L-lysine 6-N-methyltransferase. This enzyme participates in lysine degradation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024250 |
14024275 | (cytochrome c)-methionine S-methyltransferase | Class of enzymes
In enzymology, a [cytochrome-c]-methionine S-methyltransferase (EC 2.1.1.123) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + [cytochrome c]-methionine formula_0 S-adenosyl-L-homocysteine + [cytochrome c]-S-methyl-methionine
Thus, the two substrates of this enzyme are S-adenosyl methionine and cytochrome c methionine, whereas its two products are S-adenosylhomocysteine and cytochrome c-S-methyl-methionine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:[cytochrome c]-methionine S-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024275 |
14024299 | Demethylmacrocin O-methyltransferase | In enzymology, a demethylmacrocin O-methyltransferase (EC 2.1.1.102) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + demethylmacrocin formula_0 S-adenosyl-L-homocysteine + macrocin
Thus, the two substrates of this enzyme are S-adenosyl methionine and demethylmacrocin, whereas its two products are S-adenosylhomocysteine and macrocin.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:demethylmacrocin 2'''-O-methyltransferase. This enzyme is also called demethylmacrocin methyltransferase. This enzyme participates in the biosynthesis of 12-, 14- and 16-membered macrolides.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024299 |
14024325 | Demethylsterigmatocystin 6-O-methyltransferase | In enzymology, a demethylsterigmatocystin 6-O-methyltransferase (EC 2.1.1.109) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 6-demethylsterigmatocystin formula_0 S-adenosyl-L-homocysteine + sterigmatocystin
Thus, the two substrates of this enzyme are S-adenosyl methionine and 6-demethylsterigmatocystin, whereas its two products are S-adenosylhomocysteine and sterigmatocystin.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:6-demethylsterigmatocystin 6-O-methyltransferase. Other names in common use include demethylsterigmatocystin methyltransferase and O-methyltransferase I.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024325 |
14024341 | Deoxycytidylate C-methyltransferase | In enzymology, a deoxycytidylate C-methyltransferase (EC 2.1.1.54) is an enzyme that catalyzes the chemical reaction
5,10-methylenetetrahydrofolate + dCMP formula_0 dihydrofolate + deoxy-5-methylcytidylate
Thus, the two substrates of this enzyme are 5,10-Methylenetetrahydrofolic acid and dCMP, whereas its two products are dihydrofolic acid and deoxy-5-methylcytidylic acid.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is 5,10-methylenetetrahydrofolate:dCMP C-methyltransferase. Other names in common use include deoxycytidylate methyltransferase, and dCMP methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024341 |
14024353 | Dimethylhistidine N-methyltransferase | In enzymology, a dimethylhistidine N-methyltransferase (EC 2.1.1.44) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + Nalpha,Nalpha-dimethyl-L-histidine formula_0 S-adenosyl-L-homocysteine + Nalpha,Nalpha,Nalpha-trimethyl-L-histidine
Thus, the two substrates of this enzyme are S-adenosyl methionine and Nalpha,Nalpha-dimethyl-L-histidine, whereas its two products are S-adenosylhomocysteine and Nalpha,Nalpha,Nalpha-trimethyl-L-histidine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:Nalpha,Nalpha-dimethyl-L-histidine Nalpha-methyltransferase. Other names in common use include dimethylhistidine methyltransferase, histidine-alpha-N-methyltransferase, and S-adenosyl-L-methionine:alpha-N,alpha-N-dimethyl-L-histidine alpha-N-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024353 |
14024363 | Diphthine synthase | Class of enzymes
In enzymology, a diphthine synthase (EC 2.1.1.98) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 2-(3-carboxy-3-aminopropyl)-L-histidine formula_0 S-adenosyl-L-homocysteine + 2-[3-carboxy-3-(methylammonio)propyl]-L-histidine
Thus, the two substrates of this enzyme are S-adenosyl methionine and 2-(3-carboxy-3-aminopropyl)-L-histidine, whereas its two products are S-adenosylhomocysteine and 2-[3-carboxy-3-(methylammonio)propyl]-L-histidine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:2-(3-carboxy-3-aminopropyl)-L-histidine methyltransferase. Other names in common use include S-adenosyl-L-methionine:elongation factor 2 methyltransferase and diphthine methyltransferase.
Structural studies.
As of late 2007, 84 structures have been solved for this class of enzymes, with PDB accession codes 1VCE, 1VHV, 1WDE, 1WNG, 2DEK, 2DSG, 2DSH, 2DSI, 2DV3, 2DV4, 2DV5, 2DV7, 2DXV, 2DXW, 2DXX, 2E07, 2E08, 2E15, 2E16, 2E17, 2E4N, 2E4R, 2E7R, 2E8H, 2E8Q, 2E8R, 2E8S, 2ED3, 2ED5, 2EEQ, 2EGB, 2EGL, 2EGS, 2EH2, 2EH4, 2EH5, 2EHC, 2EHL, 2EJJ, 2EJK, 2EJZ, 2EK2, 2EK3, 2EK4, 2EK7, 2EKA, 2EL0, 2EL1, 2EL2, 2EL3, 2ELD, 2ELE, 2EMR, 2EMU, 2EN5, 2ENI, 2HR8, 2HUQ, 2HUT, 2HUV, 2HUX, 2OWF, 2OWG, 2OWK, 2OWU, 2OWV, 2P2X, 2P5C, 2P5F, 2P6D, 2P6I, 2P6K, 2P6L, 2P9D, 2PB4, 2PB5, 2PB6, 2PCA, 2PCG, 2PCH, 2PCI, 2PCK, 2PCM, and 2Z6R.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024363 |
14024378 | Fatty-acid O-methyltransferase | In enzymology, a fatty-acid O-methyltransferase (EC 2.1.1.15) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + a fatty acid formula_0 S-adenosyl-L-homocysteine + a fatty acid methyl ester
Thus, the two substrates of this enzyme are S-adenosyl methionine and fatty acid, whereas its two products are S-adenosylhomocysteine and fatty acid methyl ester.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:fatty-acid O-methyltransferase. Other names in common use include fatty acid methyltransferase, and fatty acid O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024378 |
14024391 | Glucuronoxylan 4-O-methyltransferase | In enzymology, a glucuronoxylan 4-O-methyltransferase (EC 2.1.1.112) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + glucuronoxylan D-glucuronate formula_0 S-adenosyl-L-homocysteine + glucuronoxylan 4-O-methyl-D-glucuronate
Thus, the two substrates of this enzyme are S-adenosyl methionine and glucuronoxylan D-glucuronate, whereas its two products are S-adenosylhomocysteine and glucuronoxylan 4-O-methyl-D-glucuronate.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:glucuronoxylan-D-glucuronate 4-O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024391 |
14024406 | Glycine N-methyltransferase | In enzymology, a glycine N-methyltransferase (EC 2.1.1.20) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + glycine formula_0 S-adenosyl-L-homocysteine + sarcosine
Thus, the substrates of this enzyme are S-adenosyl methionine and glycine, whereas its two products are S-adenosylhomocysteine and sarcosine.
Glycine N-methyltransferase belongs to the family of methyltransferase enzymes. The systematic name of this enzyme class is S-adenosyl-L-methionine:glycine N-methyltransferase. Other names in common use include glycine methyltransferase, S-adenosyl-L-methionine:glycine methyltransferase, and GNMT. This family of enzymes participates in the metabolism of multiple amino acids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024406 |
14024420 | Guanidinoacetate N-methyltransferase | Mammalian protein found in Homo sapiens
Guanidinoacetate N-methyltransferase (EC 2.1.1.2) is an enzyme that catalyzes the chemical reaction and is encoded by gene "GAMT" located on chromosome 19p13.3.
S-adenosyl-L-methionine + guanidinoacetate formula_0 S-adenosyl-L-homocysteine + creatine
Thus, the two substrates of this enzyme are S-adenosyl methionine and guanidinoacetate, whereas its two products are S-adenosylhomocysteine and creatine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:N-guanidinoacetate methyltransferase. Other names in common use include GA methylpherase, guanidinoacetate methyltransferase, guanidinoacetate transmethylase, methionine-guanidinoacetic transmethylase, and guanidoacetate methyltransferase. This enzyme participates in glycine, serine and threonine metabolism and arginine and proline metabolism.
The protein encoded by this gene is a methyltransferase that converts guanidoacetate to creatine, using S-adenosylmethionine as the methyl donor. Defects in this gene have been implicated in neurologic syndromes and muscular hypotonia, probably due to creatine deficiency and accumulation of guanidinoacetate in the brain of affected individuals. Two transcript variants encoding different isoforms have been described for this gene.
Structural studies.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1KHH, 1P1B, 1P1C, 1XCJ, 1XCL, 1ZX0, and 2BLN.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024420 |
14024437 | Hexaprenyldihydroxybenzoate methyltransferase | In enzymology, a hexaprenyldihydroxybenzoate methyltransferase (EC 2.1.1.114) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 3-hexaprenyl-4,5-dihydroxybenzoate formula_0 S-adenosyl-L-homocysteine + 3-hexaprenyl-4-hydroxy-5-methoxybenzoate
Thus, the two substrates of this enzyme are S-adenosyl methionine and 3-hexaprenyl-4,5-dihydroxybenzoate, whereas its two products are S-adenosylhomocysteine and 3-hexaprenyl-4-hydroxy-5-methoxybenzoate.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:3-hexaprenyl-4,5-dihydroxybenzoate O-methyltransferase. Other names in common use include 3,4-dihydroxy-5-hexaprenylbenzoate methyltransferase and dihydroxyhexaprenylbenzoate methyltransferase. This enzyme participates in ubiquinone biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024437 |
14024457 | Homocysteine S-methyltransferase | Enzyme
In enzymology, a homocysteine S-methyltransferase (EC 2.1.1.10) is an enzyme that catalyzes the chemical reaction
S-methylmethionine + L-homocysteine formula_0 2 L-methionine
Thus, the two substrates of this enzyme are S-methylmethionine and L-homocysteine, and it produces 2 molecules of L-methionine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-methyl-L-methionine:L-homocysteine S-methyltransferase, matching the S-methylmethionine donor in the reaction above. This enzyme participates in methionine metabolism.
Alternative names.
Other names of this enzyme in common use include S-adenosylmethionine homocysteine transmethylase, S-methylmethionine homocysteine transmethylase, adenosylmethionine transmethylase, methylmethionine:homocysteine methyltransferase, adenosylmethionine:homocysteine methyltransferase, homocysteine methylase, homocysteine methyltransferase, homocysteine transmethylase, L-homocysteine S-methyltransferase, S-adenosyl-L-methionine:L-homocysteine methyltransferase, S-adenosylmethionine-homocysteine transmethylase, and S-adenosylmethionine:homocysteine methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024457 |
14024469 | Indolepyruvate C-methyltransferase | In enzymology, an indolepyruvate C-methyltransferase (EC 2.1.1.47) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + (indol-3-yl)pyruvate formula_0 S-adenosyl-L-homocysteine + (3S)-3-(indol-3-yl)-3-oxobutanoate
Thus, the two substrates of this enzyme are S-adenosyl methionine and (indol-3-yl)pyruvate, whereas its two products are S-adenosylhomocysteine and (3S)-3-(indol-3-yl)-3-oxobutanoate.
Nomenclature.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:(indol-3-yl)pyruvate C-methyltransferase. Other names in common use include indolepyruvate methyltransferase, indolepyruvate 3-methyltransferase, indolepyruvic acid methyltransferase, and S-adenosyl-L-methionine:indolepyruvate C-methyltransferase. This enzyme participates in tryptophan metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024469 |
14024485 | Inositol 1-methyltransferase | In enzymology, an inositol 1-methyltransferase (EC 2.1.1.40) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + myo-inositol formula_0 S-adenosyl-L-homocysteine + 1D-1-O-methyl-myo-inositol
Thus, the two substrates of this enzyme are S-adenosyl methionine and myo-inositol, whereas its two products are S-adenosylhomocysteine and 1D-1-O-methyl-myo-inositol.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:1D-myo-inositol 1-O-methyltransferase. Other names in common use include inositol D-1-methyltransferase, S-adenosylmethionine:myo-inositol 3-methyltransferase, myo-inositol 3-O-methyltransferase, inositol 3-O-methyltransferase (name based on 1L-numbering system, and not 1D-numbering), and S-adenosyl-L-methionine:myo-inositol 3-O-methyltransferase. This enzyme participates in inositol phosphate metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024485 |
14024503 | Inositol 3-methyltransferase | In enzymology, an inositol 3-methyltransferase (EC 2.1.1.39) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + myo-inositol formula_0 S-adenosyl-L-homocysteine + 1D-3-O-methyl-myo-inositol
Thus, the two substrates of this enzyme are S-adenosyl methionine and myo-inositol, whereas its two products are S-adenosylhomocysteine and 1D-3-O-methyl-myo-inositol.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:1D-myo-inositol 3-O-methyltransferase. Other names in common use include inositol L-1-methyltransferase, myo-inositol 1-methyltransferase, S-adenosylmethionine:myo-inositol 1-methyltransferase, myo-inositol 1-O-methyltransferase (name based on 1L-numbering system and not 1D-numbering), and S-adenosyl-L-methionine:myo-inositol 1-O-methyltransferase. This enzyme participates in inositol phosphate metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024503 |
14024513 | Inositol 4-methyltransferase | In enzymology, an inositol 4-methyltransferase (EC 2.1.1.129) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + myo-inositol formula_0 S-adenosyl-L-homocysteine + 1D-4-O-methyl-myo-inositol
Thus, the two substrates of this enzyme are S-adenosyl methionine and myo-inositol, whereas its two products are S-adenosylhomocysteine and 1D-4-O-methyl-myo-inositol.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:1D-myo-inositol 4-methyltransferase. Other names in common use include myo-inositol 4-O-methyltransferase, S-adenosyl-L-methionine:myo-inositol 4-O-methyltransferase, and myo-inositol 6-O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024513 |
14024526 | Iodophenol O-methyltransferase | In enzymology, an iodophenol "O"-methyltransferase (EC 2.1.1.26) is an enzyme that catalyzes the chemical reaction
"S"-adenosyl--methionine + 2-iodophenol formula_0 "S"-adenosyl--homocysteine + 2-iodophenol methyl ether
Thus, the two substrates of this enzyme are "S"-adenosyl methionine and 2-iodophenol, whereas its two products are "S"-adenosylhomocysteine and 2-iodophenol methyl ether.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is "S"-adenosyl-L-methionine:2-iodophenol "O"-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024526 |
14024546 | Isobutyraldoxime O-methyltransferase | In enzymology, an isobutyraldoxime O-methyltransferase (EC 2.1.1.91) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 2-methylpropanal oxime formula_0 S-adenosyl-L-homocysteine + 2-methylpropanal O-methyloxime
Thus, the two substrates of this enzyme are S-adenosyl methionine and 2-methylpropanal oxime, whereas its two products are S-adenosylhomocysteine and 2-methylpropanal O-methyloxime.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:2-methylpropanal-oxime O-methyltransferase. Other names in common use include aldoxime methyltransferase, S-adenosylmethionine:aldoxime O-methyltransferase, and aldoxime O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024546 |
14024564 | (Iso)eugenol O-methyltransferase | Class of enzymes
In enzymology, an (iso)eugenol O-methyltransferase (EC 2.1.1.146) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + isoeugenol formula_0 S-adenosyl-L-homocysteine + isomethyleugenol
Thus, the two substrates of this enzyme are S-adenosyl methionine and isoeugenol, whereas its two products are S-adenosylhomocysteine and isomethyleugenol.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:isoeugenol O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024564 |
14024576 | Isoflavone 4'-O-methyltransferase | In enzymology, an isoflavone 4'-O-methyltransferase (EC 2.1.1.46) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + an isoflavone formula_0 S-adenosyl-L-homocysteine + a 4'-O-methylisoflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and isoflavone, whereas its two products are S-adenosylhomocysteine and 4'-O-methylisoflavone.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:isoflavone 4'-O-methyltransferase. Other names in common use include 4'-hydroxyisoflavone methyltransferase, isoflavone methyltransferase, and isoflavone O-methyltransferase. This enzyme participates in isoflavonoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024576 |
14024587 | Isoflavone 7-O-methyltransferase | In enzymology, an isoflavone 7-O-methyltransferase (EC 2.1.1.150) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + a 7-hydroxyisoflavone formula_0 S-adenosyl-L-homocysteine + a 7-methoxyisoflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and 7-hydroxyisoflavone, whereas its two products are S-adenosylhomocysteine and 7-methoxyisoflavone.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:hydroxyisoflavone 7-O-methyltransferase. This enzyme participates in isoflavonoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024587 |
14024601 | Isoliquiritigenin 2'-O-methyltransferase | In enzymology, an isoliquiritigenin 2'-O-methyltransferase (EC 2.1.1.154) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + isoliquiritigenin formula_0 S-adenosyl-L-homocysteine + 2'-O-methylisoliquiritigenin
Thus, the two substrates of this enzyme are S-adenosyl methionine and isoliquiritigenin, whereas its two products are S-adenosylhomocysteine and 2'-O-methylisoliquiritigenin.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:isoliquiritigenin 2'-O-methyltransferase. Other names in common use include chalcone OMT, and CHMT.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024601 |
14024614 | Isoorientin 3'-O-methyltransferase | In enzymology, an isoorientin 3'-O-methyltransferase (EC 2.1.1.78) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + isoorientin formula_0 S-adenosyl-L-homocysteine + isoscoparin
Thus, the two substrates of this enzyme are S-adenosyl methionine and isoorientin, whereas its two products are S-adenosylhomocysteine and isoscoparin.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:isoorientin 3'-O-methyltransferase. This enzyme is also called isoorientin 3'-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024614 |
14024628 | Jasmonate O-methyltransferase | In enzymology, a jasmonate O-methyltransferase (EC 2.1.1.141) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + jasmonate formula_0 S-adenosyl-L-homocysteine + methyl jasmonate
Thus, the two substrates of this enzyme are S-adenosyl methionine and jasmonate, whereas its two products are S-adenosylhomocysteine and methyl jasmonate.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:jasmonate O-methyltransferase. This enzyme is also called jasmonic acid carboxyl methyltransferase.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024628 |
14024644 | Kaempferol 4'-O-methyltransferase | In enzymology, a kaempferol 4'-O-methyltransferase (EC 2.1.1.155) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + kaempferol formula_0 S-adenosyl-L-homocysteine + kaempferide
Thus, the two substrates of this enzyme are S-adenosyl methionine and kaempferol, whereas its two products are S-adenosylhomocysteine and kaempferide.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:kaempferol 4'-O-methyltransferase. Other names in common use include S-adenosyl-L-methionine:flavonoid 4'-O-methyltransferase, and F 4'-OMT.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024644 |
14024662 | Licodione 2'-O-methyltransferase | In enzymology, a licodione 2'-O-methyltransferase (EC 2.1.1.65) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + licodione formula_0 S-adenosyl-L-homocysteine + 2'-O-methyllicodione
Thus, the two substrates of this enzyme are S-adenosyl methionine and licodione, whereas its two products are S-adenosylhomocysteine and 2'-O-methyllicodione.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:licodione 2'-O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024662 |
14024679 | Loganate O-methyltransferase | In enzymology, a loganate O-methyltransferase (EC 2.1.1.50) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + loganic acid formula_0 S-adenosyl-L-homocysteine + loganin
Thus, the two substrates of this enzyme are S-adenosyl methionine and loganic acid (also called loganate), whereas its two products are S-adenosylhomocysteine and loganin.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:loganate 11-O-methyltransferase. Other names in common use include loganate methyltransferase and S-adenosyl-L-methionine:loganic acid methyltransferase. This enzyme participates in terpene indole and ipecac alkaloid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024679 |
14024699 | Luteolin O-methyltransferase | In enzymology, a luteolin O-methyltransferase (EC 2.1.1.42) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + 5,7,3',4'-tetrahydroxyflavone formula_0 S-adenosyl-L-homocysteine + 5,7,4'-trihydroxy-3'-methoxyflavone
Thus, the two substrates of this enzyme are S-adenosyl methionine and 5,7,3',4'-tetrahydroxyflavone (luteolin), whereas its two products are S-adenosylhomocysteine and 5,7,4'-trihydroxy-3'-methoxyflavone.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:5,7,3',4'-tetrahydroxyflavone 3'-O-methyltransferase. Other names in common use include o-dihydric phenol methyltransferase, luteolin methyltransferase, luteolin 3'-O-methyltransferase, o-diphenol m-O-methyltransferase, o-dihydric phenol meta-O-methyltransferase, and S-adenosylmethionine:flavone/flavonol 3'-O-methyltransferase. This enzyme participates in flavonoid biosynthesis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024699 |
14024717 | Macrocin O-methyltransferase | In enzymology, a macrocin O-methyltransferase (EC 2.1.1.101) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + macrocin formula_0 S-adenosyl-L-homocysteine + tylosin
Thus, the two substrates of this enzyme are S-adenosyl methionine and macrocin, whereas its two products are S-adenosylhomocysteine and tylosin.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:macrocin 3"'-O-methyltransferase. Other names in common use include macrocin methyltransferase, and S-adenosyl-L-methionine-macrocin O-methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024717 |
14024737 | Magnesium protoporphyrin IX methyltransferase | In enzymology, a magnesium protoporphyrin IX methyltransferase (EC 2.1.1.11) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + magnesium protoporphyrin IX formula_0 S-adenosyl-L-homocysteine + magnesium protoporphyrin IX 13-methyl ester
The two substrates of this enzyme are S-adenosyl methionine and magnesium protoporphyrin IX; its two products are S-adenosylhomocysteine and magnesium protoporphyrin IX 13-methyl ester.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:magnesium-protoporphyrin-IX O-methyltransferase. This enzyme is part of the biosynthetic pathway to chlorophylls.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024737 |
14024750 | Methanol—5-hydroxybenzimidazolylcobamide Co-methyltransferase | In enzymology, a methanol-5-hydroxybenzimidazolylcobamide Co-methyltransferase (EC 2.1.1.90) is an enzyme that catalyzes the chemical reaction
methanol + 5-hydroxybenzimidazolylcobamide formula_0 Co-methyl-Co-5-hydroxybenzimidazolylcob(I)amide + H2O
Thus, the two substrates of this enzyme are methanol and 5-hydroxybenzimidazolylcobamide, whereas its two products are Co-methyl-Co-5-hydroxybenzimidazolylcob(I)amide and H2O.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is methanol:5-hydroxybenzimidazolylcobamide Co-methyltransferase. Other names in common use include methanol cobalamin methyltransferase, methanol:5-hydroxybenzimidazolylcobamide methyltransferase, and MT 1.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2I2X.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024750 |
14024767 | Methionine S-methyltransferase | In enzymology, a methionine S-methyltransferase (EC 2.1.1.12) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + L-methionine formula_0 S-adenosyl-L-homocysteine + S-methyl-L-methionine
Thus, the two substrates of this enzyme are S-adenosyl methionine and L-methionine, whereas its two products are S-adenosylhomocysteine and S-methyl-L-methionine.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:L-methionine S-methyltransferase. Other names in common use include S-adenosyl methionine:methionine methyl transferase, methionine methyltransferase, S-adenosylmethionine transmethylase, and S-adenosylmethionine-methionine methyltransferase. This enzyme participates in selenoamino acid metabolism. It has 2 cofactors: manganese, and zinc.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024767 |
14024778 | Methylamine—glutamate N-methyltransferase | In enzymology, a methylamine-glutamate "N"-methyltransferase (EC 2.1.1.21) is an enzyme that catalyzes the chemical reaction
methylamine + L-glutamate formula_0 NH3 + "N"-methyl-L-glutamate
Thus, the two substrates of this enzyme are methylamine and L-glutamate, whereas its two products are NH3 and "N"-methyl-L-glutamate.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is methylamine:L-glutamate "N"-methyltransferase. Other names in common use include "N"-methylglutamate synthase, and methylamine-glutamate methyltransferase. This enzyme participates in methane metabolism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024778 |
14024795 | Methylated-DNA—(protein)-cysteine S-methyltransferase | In enzymology, a methylated-DNA-[protein]-cysteine S-methyltransferase (EC 2.1.1.63) is an enzyme that catalyzes the chemical reaction
DNA (containing 6-O-methylguanine) + protein L-cysteine formula_0 DNA (without 6-O-methylguanine) + protein S-methyl-L-cysteine
Thus, the two substrates of this enzyme are DNA containing 6-O-methylguanine and protein L-cysteine, whereas its two products are DNA and protein S-methyl-L-cysteine. The S-methyl-L-cysteine residue irreversibly inactivates the protein, allowing only one transfer for each protein.
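Since the acceptor cysteine is consumed rather than regenerated, a simple bound follows (a sketch implied by the single-transfer statement above, not an additional claim from the source):
$$n_{\text{methyl groups transferred}} \;\le\; n_{\text{protein molecules}}$$
that is, the enzyme acts stoichiometrically rather than catalytically, and repair capacity is limited by the number of available protein molecules.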
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is DNA-6-O-methylguanine:[protein]-L-cysteine S-methyltransferase.
Structural studies.
As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes 1EH6, 1EH7, 1EH8, 1MGT, 1QNT, 1SFE, 1T38, 1T39, 1WRJ, 1YFH, and 2G7H.
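For readers who want the coordinates, the following is a minimal Python sketch for downloading the entries listed above. It assumes network access and the public RCSB download URL pattern https://files.rcsb.org/download/<ID>.pdb, which is not part of the source article:
```python
# Minimal sketch: fetch the PDB entries cited above from RCSB.
# The download endpoint is an assumption of this sketch, not from the article.
import urllib.request

PDB_IDS = ["1EH6", "1EH7", "1EH8", "1MGT", "1QNT", "1SFE",
           "1T38", "1T39", "1WRJ", "1YFH", "2G7H"]

for pdb_id in PDB_IDS:
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    with urllib.request.urlopen(url) as resp:   # raises on HTTP errors
        data = resp.read()
    with open(f"{pdb_id}.pdb", "wb") as out:    # save alongside the script
        out.write(data)
    print(f"saved {pdb_id}.pdb ({len(data)} bytes)")
```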
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024795 |
14024804 | Methylene-fatty-acyl-phospholipid synthase | In enzymology, a methylene-fatty-acyl-phospholipid synthase (EC 2.1.1.16) is an enzyme that catalyzes the chemical reaction
S-adenosyl-L-methionine + phospholipid olefinic fatty acid formula_0 S-adenosyl-L-homocysteine + phospholipid methylene fatty acid
Thus, the two substrates of this enzyme are S-adenosyl methionine and phospholipid olefinic fatty acid, whereas its two products are S-adenosylhomocysteine and phospholipid methylene fatty acid.
This enzyme belongs to the family of transferases, specifically those transferring one-carbon group methyltransferases. The systematic name of this enzyme class is S-adenosyl-L-methionine:unsaturated-phospholipid methyltransferase (methenylating). This enzyme is also called unsaturated-phospholipid methyltransferase.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=14024804 |
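Since every record in this dump shares the id | title | text | formulas | url layout, a short self-contained Python sketch shows how the formula_<math_id> placeholders resolve against a record's formulas list. The record dict below is copied from the entry above; the helper function name is ours, not part of the dataset:
```python
# Sketch: substitute a record's formulas list into its text, replacing each
# "formula_<math_id>" placeholder with the corresponding LaTeX source.
record = {
    "id": "14024804",
    "title": "Methylene-fatty-acyl-phospholipid synthase",
    "text": ("S-adenosyl-L-methionine + phospholipid olefinic fatty acid "
             "formula_0 S-adenosyl-L-homocysteine + phospholipid methylene "
             "fatty acid"),
    "formulas": [{"math_id": 0, "text": r"\rightleftharpoons"}],
    "url": "https://en.wikipedia.org/wiki?curid=14024804",
}

def substitute_formulas(text: str, formulas: list) -> str:
    """Replace each formula_<math_id> placeholder with its LaTeX text."""
    for f in formulas:
        text = text.replace(f"formula_{f['math_id']}", f["text"])
    return text

# Prints the reaction line with \rightleftharpoons in place of formula_0.
print(substitute_formulas(record["text"], record["formulas"]))
```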