id | title | text | formulas | url
---|---|---|---|---
75966601 | Lawvere's fixed-point theorem | Theorem in category theory
In mathematics, Lawvere's fixed-point theorem is an important result in category theory. It is a broad abstract generalization of many diagonal arguments in mathematics and logic, such as Cantor's diagonal argument, Russell's paradox, Gödel's first incompleteness theorem and Turing's solution to the Entscheidungsproblem.
It was first proven by William Lawvere in 1969.
Statement.
Lawvere's theorem states that, for any Cartesian closed category formula_0 and given an object formula_1 in it, if there is a weakly point-surjective morphism formula_2 from some object formula_3 to the exponential object formula_4, then every endomorphism formula_5 has a fixed point. That is, there exists a morphism formula_6 (where formula_7 is a terminal object in formula_0 ) such that formula_8.
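The proof is a short diagonal argument. The following sketch is added here for illustration (it is written in element-style notation for readability; the categorical proof replaces elements by generalized points out of the terminal object):

```latex
% Proof sketch (illustrative, element-style notation; the categorical proof
% replaces elements a by generalized points 1 -> A).
\begin{align*}
&\text{Given a weakly point-surjective } f : A \to B^A \text{ and any } g : B \to B,\\
&\text{define the diagonal map } q : A \to B, \qquad q(a) = g\bigl(f(a)(a)\bigr).\\
&\text{Weak point-surjectivity gives some } a_0 : 1 \to A \text{ with } f(a_0)(a) = q(a) \text{ for all } a.\\
&\text{Setting } b = q(a_0): \qquad b = g\bigl(f(a_0)(a_0)\bigr) = g\bigl(q(a_0)\bigr) = g(b),\\
&\text{so } b : 1 \to B \text{ is a fixed point of } g.
\end{align*}
```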
Applications.
The theorem's contrapositive is particularly useful in proving many results. It states that if there is an object formula_1 in the category such that there is an endomorphism formula_5 which has no fixed points, then there is no object formula_3 with a weakly point-surjective map formula_9. Important corollaries include the diagonal results mentioned above, such as Cantor's diagonal argument, Russell's paradox, and Gödel's first incompleteness theorem.
References.
| [
{
"math_id": 0,
"text": "\\mathbf{C}"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "B^A"
},
{
"math_id": 5,
"text": "g: B \\rightarrow B"
},
{
"math_id": 6,
"text": " b : 1 \\rightarrow B"
},
{
"math_id": 7,
"text": "1"
},
{
"math_id": 8,
"text": "g \\circ b = b"
},
{
"math_id": 9,
"text": "f : A \\rightarrow B^A "
}
]
| https://en.wikipedia.org/wiki?curid=75966601 |
75969668 | Category of Markov kernels | Definition and properties of the category of Markov kernels, in more detail than at "Markov kernel".
In mathematics, the category of Markov kernels, often denoted Stoch, is the category whose objects are measurable spaces and whose morphisms are Markov kernels.
It is analogous to the category of sets and functions, but where the arrows can be interpreted as being stochastic.
Several variants of this category are used in the literature. For example, one can use subprobability kernels instead of probability kernels, or more general s-finite kernels.
Also, one can take as morphisms equivalence classes of Markov kernels under almost sure equality; see below.
Definition.
Recall that a Markov kernel between measurable spaces formula_0 and formula_1 is an assignment formula_2 which is measurable as a function on formula_3 and which is a probability measure on formula_4. We denote its values by formula_5 for formula_6 and formula_7, which suggests an interpretation as conditional probability.
The category Stoch has:
Measurable spaces as objects and Markov kernels as morphisms;
As the identity morphism on an object formula_0, the Dirac (identity) kernel given by
formula_8
for all formula_6 and formula_9;
As the composition of morphisms formula_10 and formula_11, the Markov kernel formula_12 given by
formula_13
for all formula_6 and formula_14.
This composition formula is sometimes called the Chapman-Kolmogorov equation.
This composition is unital, and associative by the monotone convergence theorem, so that one indeed has a category.
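For finite measurable spaces (with the discrete sigma-algebra), a Markov kernel is simply a row-stochastic matrix, and the composition formula above becomes matrix multiplication. The following sketch is an added illustration with made-up matrices, not part of the original article:

```python
import numpy as np

# Finite case: a kernel X -> Y is a |X| x |Y| row-stochastic matrix,
# where entry [x, y] is k({y} | x).
k = np.array([[0.7, 0.3],          # kernel X -> Y  (|X| = 2, |Y| = 2)
              [0.2, 0.8]])
h = np.array([[0.5, 0.5, 0.0],     # kernel Y -> Z  (|Y| = 2, |Z| = 3)
              [0.1, 0.6, 0.3]])

# Chapman-Kolmogorov: (h o k)(C | x) = sum_y h(C | y) k({y} | x),
# which for these matrices is just the product k @ h.
hk = k @ h

print(hk)               # composite kernel X -> Z
print(hk.sum(axis=1))   # each row sums to 1, so h o k is again a Markov kernel
```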
Basic properties.
Probability measures.
The terminal object of Stoch is the one-point space formula_15. Morphisms of the form formula_16 can equivalently be seen as probability measures on formula_3, since they correspond to functions formula_17, i.e. elements of formula_18.
Given kernels formula_19 and formula_20, the composite kernel formula_21 gives the probability measure on formula_22 with values
formula_23
for every measurable subset formula_24 of formula_22.
Given probability spaces formula_25 and formula_26, a measure-preserving Markov kernel formula_27 is a Markov kernel formula_10 such that for every measurable subset formula_7,
formula_28
Probability spaces and measure-preserving Markov kernels form a category, which can be seen as the category of elements formula_29 of the functor assigning to each measurable space its set of probability measures.
Measurable functions.
Every measurable function formula_30 defines canonically a Markov kernel formula_31 as follows,
formula_32
for every formula_6 and every formula_7. This construction preserves identities and compositions, and is therefore a functor from Meas to Stoch.
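In the finite case the kernel formula_31 is the 0/1 matrix of the function, and functoriality is the statement that composing functions corresponds to multiplying their matrices. A small illustrative check (the functions below are arbitrary examples, not from the article):

```python
import numpy as np

def delta(f, n_in, n_out):
    """Deterministic kernel delta_f of a function f: {0,...,n_in-1} -> {0,...,n_out-1}."""
    m = np.zeros((n_in, n_out))
    for x in range(n_in):
        m[x, f(x)] = 1.0           # delta_f(B | x) = 1 exactly when f(x) is in B
    return m

f = lambda x: x % 2                # {0,1,2,3} -> {0,1}
g = lambda y: 1 - y                # {0,1}     -> {0,1}

# Functoriality: delta_{g o f} = delta_g o delta_f (matrix product, as above).
lhs = delta(lambda x: g(f(x)), 4, 2)
rhs = delta(f, 4, 2) @ delta(g, 2, 2)
print(np.allclose(lhs, rhs))       # True
```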
Isomorphisms.
By functoriality, every isomorphism of measurable spaces (in the category Meas) induces an isomorphism in Stoch. However, in Stoch there are more isomorphisms, and in particular, measurable spaces can be isomorphic in Stoch even when the underlying sets are not in bijection.
Relationship with the Giry monad.
The functor formula_34 admits a right adjoint, which sends a measurable space to its space of probability measures (equivalently, to its set of morphisms out of the one-point space formula_15). The adjunction gives a natural bijection
formula_33
between Stoch and the category of measurable spaces, exhibiting Stoch as the Kleisli category of the Giry monad.
Particular limits and colimits.
Since the functor formula_34 is left adjoint, it preserves colimits. Because of this, all colimits in the category of measurable spaces are also colimits in Stoch. For example, the disjoint union of measurable spaces, together with the kernels induced by the canonical inclusions, is a coproduct in Stoch as well.
In general, the functor formula_36 does not preserve limits. This in particular implies that the product of measurable spaces is not a product in Stoch in general. Since the Giry monad is monoidal, however, the product of measurable spaces still makes Stoch a monoidal category.
A limit of particular significance for probability theory is de Finetti's theorem, which can be interpreted as the fact that the space of probability measures (Giry monad) is the limit in Stoch of the diagram formed by finite permutations of sequences.
Almost sure version.
Sometimes it is useful to consider Markov kernels only up to almost sure equality, for example when talking about disintegrations or about regular conditional probability.
Given probability spaces formula_25 and formula_26, we say that two measure-preserving kernels formula_37 are almost surely equal if and only if for every measurable subset formula_7,
formula_38
for formula_39-almost all formula_6.
This defines an equivalence relation on the set of measure-preserving Markov kernels formula_37.
Probability spaces and equivalence classes of Markov kernels under the relation defined above form a category. When restricted to standard Borel probability spaces, the category is often denoted by Krn.
Citations.
References.
| [
{
"math_id": 0,
"text": "(X,\\mathcal{F})"
},
{
"math_id": 1,
"text": "(Y,\\mathcal{G})"
},
{
"math_id": 2,
"text": "k:X\\times\\mathcal{G}\\to\\mathbb{R}"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\mathcal{G}"
},
{
"math_id": 5,
"text": "k(B|x)"
},
{
"math_id": 6,
"text": "x\\in X"
},
{
"math_id": 7,
"text": "B\\in\\mathcal{G}"
},
{
"math_id": 8,
"text": "\n\\delta(A|x) = 1_A(x) = \\begin{cases}\n1 & x\\in A ; \\\\\n0 & x\\notin A \n\\end{cases}\n"
},
{
"math_id": 9,
"text": "A\\in\\mathcal{F}"
},
{
"math_id": 10,
"text": "k:(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 11,
"text": "h:(Y,\\mathcal{G})\\to(Z,\\mathcal{H})"
},
{
"math_id": 12,
"text": "h\\circ k:(X,\\mathcal{F})\\to(Z,\\mathcal{H})"
},
{
"math_id": 13,
"text": "\n(h\\circ k) (C|x) = \\int_Y h(C|y) \\, k(dy|x)\n"
},
{
"math_id": 14,
"text": "C\\in\\mathcal{H}"
},
{
"math_id": 15,
"text": "1"
},
{
"math_id": 16,
"text": "1\\to X"
},
{
"math_id": 17,
"text": "1\\to PX"
},
{
"math_id": 18,
"text": "PX"
},
{
"math_id": 19,
"text": "p:1\\to X"
},
{
"math_id": 20,
"text": "k:X\\to Y"
},
{
"math_id": 21,
"text": "k\\circ p:1\\to Y"
},
{
"math_id": 22,
"text": "Y"
},
{
"math_id": 23,
"text": "\n(k\\circ p) (B) = \\int_X k(B|x)\\,p(dx) ,\n"
},
{
"math_id": 24,
"text": "B"
},
{
"math_id": 25,
"text": "(X,\\mathcal{F},p)"
},
{
"math_id": 26,
"text": "(Y,\\mathcal{G},q)"
},
{
"math_id": 27,
"text": "(X,\\mathcal{F},p)\\to(Y,\\mathcal{G},q)"
},
{
"math_id": 28,
"text": "\nq(B) = \\int_X k(B|x) \\, p(dx) .\n"
},
{
"math_id": 29,
"text": "(\\mathrm{Hom}_\\mathrm{Stoch}(1,-),\\mathrm{Stoch})"
},
{
"math_id": 30,
"text": "f:(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 31,
"text": "\\delta_f:(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 32,
"text": "\n\\delta_f(B|x) = 1_B(f(x)) = \\begin{cases}\n1 & f(x)\\in B ; \\\\\n0 & f(x)\\notin B\n\\end{cases}\n"
},
{
"math_id": 33,
"text": "\n\\mathrm{Hom}_\\mathrm{Stoch}(X,Y) \\cong \\mathrm{Hom}_\\mathrm{Meas}(X,PY)\n"
},
{
"math_id": 34,
"text": "L:\\mathrm{Meas}\\to\\mathrm{Stoch}"
},
{
"math_id": 35,
"text": "(\\mathrm{Hom}_\\mathrm{Stoch}(1,-),L)"
},
{
"math_id": 36,
"text": "L"
},
{
"math_id": 37,
"text": "k,h:(X,\\mathcal{F},p)\\to(Y,\\mathcal{G},q)"
},
{
"math_id": 38,
"text": "\nk(B|x) = h(B|x)\n"
},
{
"math_id": 39,
"text": "p"
}
]
| https://en.wikipedia.org/wiki?curid=75969668 |
75977697 | Swinnerton-Dyer polynomial | Family of polynomials
In algebra, the Swinnerton-Dyer polynomials are a family of polynomials, introduced by Peter Swinnerton-Dyer, that provide worst-case inputs for certain polynomial factorization algorithms. They have the property of being reducible modulo every prime, while being irreducible over the rational numbers, and are therefore a standard example in number theory of polynomials whose irreducibility cannot be detected by reduction modulo primes.
Given a finite set formula_0 of prime numbers, the Swinnerton-Dyer polynomial associated to formula_0 is the polynomial:
formula_1
where the product extends over all formula_2 choices of sign in the enclosed sum. The polynomial formula_3 has degree formula_2 and integer coefficients, which alternate in sign. If formula_4, then formula_3 is reducible modulo formula_5 for all primes formula_5, into linear and quadratic factors, but irreducible over formula_6. The Galois group of formula_3 is formula_7.
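The polynomials formula_3 can be computed directly from the definition. The following sketch is an added illustration (it assumes SymPy is available; it is not part of the article) which builds the polynomial for small sets formula_0 and checks that it is irreducible over the rationals yet splits into factors of degree at most two modulo small primes:

```python
from itertools import product
import sympy as sp

x = sp.symbols('x')

def swinnerton_dyer(P):
    # Product over all 2^|P| sign choices of (x + sum of +-sqrt(p) for p in P).
    factors = [x + sum(s * sp.sqrt(p) for s, p in zip(signs, P))
               for signs in product([1, -1], repeat=len(P))]
    return sp.expand(sp.Mul(*factors))

f = swinnerton_dyer([2, 3])
print(f)                          # x**4 - 10*x**2 + 1
print(sp.factor(f))               # returned unchanged: irreducible over Q
for q in [2, 3, 5, 7, 11]:
    print(q, sp.factor(f, modulus=q))   # factors of degree <= 2 for every prime

print(swinnerton_dyer([2, 3, 5])) # x**8 - 40*x**6 + 352*x**4 - 960*x**2 + 576
```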
The first few Swinnerton-Dyer polynomials are:
formula_8
formula_9
formula_10 | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "f_P(x) = \\prod \\left(x + \\sum_{p\\in P} (\\pm) \\sqrt{p}\\right)"
},
{
"math_id": 2,
"text": "2^{|P|}"
},
{
"math_id": 3,
"text": "f_P(x)"
},
{
"math_id": 4,
"text": "|P|>1"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "\\mathbb Q"
},
{
"math_id": 7,
"text": "\\mathbb Z_2^{|P|}"
},
{
"math_id": 8,
"text": "\\mathcal P = \\{2\\}:\\quad f_P(x) = (x-\\sqrt 2)(x+\\sqrt 2) = x^2-2"
},
{
"math_id": 9,
"text": "\\mathcal P = \\{2,3\\}:\\quad f_P(x) = (x-\\sqrt 2-\\sqrt 3)(x-\\sqrt 2+\\sqrt 3)(x+\\sqrt 2 -\\sqrt 3)(x+\\sqrt 2+\\sqrt 3) = x^4-10x^2+1"
},
{
"math_id": 10,
"text": "\\mathcal P = \\{2,3,5\\}:\\quad f_P(x) = x^8-20x^6+352x^4-960x^2+576."
}
]
| https://en.wikipedia.org/wiki?curid=75977697 |
759831 | Best response | In game theory, the best response is the strategy (or strategies) which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response (or one of the best responses) to the other players' strategies.
Correspondence.
Reaction correspondences, also known as best response correspondences, are used in the proof of the existence of mixed strategy Nash equilibria (Section 1.3.B; Section 2.2). Reaction correspondences are not "reaction functions" since functions must only have one value per argument, and many reaction correspondences will be undefined, i.e., a vertical line, for some opponent strategy choice. One constructs a correspondence formula_0, for each player from the set of opponent strategy profiles into the set of the player's strategies. So, for any given set of opponent's strategies formula_1, formula_2 represents player formula_3's best responses to formula_1.
Response correspondences for all 2x2 normal form games can be drawn with a line for each player in a unit square strategy space. Figures 1 to 3 graph the best response correspondences for the stag hunt game. The dotted line in Figure 1 shows the optimal probability that player Y plays 'Stag' (in the y-axis), as a function of the probability that player X plays Stag (shown in the x-axis). In Figure 2 the dotted line shows the optimal probability that player X plays 'Stag' (shown in the x-axis), as a function of the probability that player Y plays Stag (shown in the y-axis). Note that Figure 2 plots the independent and response variables in the opposite axes to those normally used, so that it may be superimposed onto the previous graph, to show the Nash equilibria at the points where the two players' best responses agree in Figure 3.
There are three distinctive reaction correspondence shapes, one for each of the three types of symmetric 2x2 games: coordination games, discoordination games and games with dominated strategies
(the trivial fourth case in which payoffs are always equal for both moves is not really a game theoretical problem). Any payoff symmetric 2x2 game will take one of these three forms.
Coordination games.
Games in which players score highest when both players choose the same strategy, such as the stag hunt and battle of the sexes, are called coordination games. These games have reaction correspondences of the same shape as Figure 3, where there is one Nash equilibrium in the bottom left corner, another in the top right, and a mixing Nash somewhere along the diagonal between the other two.
Anti-coordination games.
Games such as the game of chicken and hawk-dove game in which players score highest when they choose opposite strategies, i.e., discoordinate, are called anti-coordination games. They have reaction correspondences (Figure 4) that cross in the opposite direction to coordination games, with three Nash equilibria, one in each of the top left and bottom right corners, where one player chooses one strategy, the other player chooses the opposite strategy. The third Nash equilibrium is a mixed strategy which lies along the diagonal from the bottom left to top right corners. If the players do not know which one of them is which, then the mixed Nash is an evolutionarily stable strategy (ESS), as play is confined to the bottom left to top right diagonal line. Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes.
Games with dominated strategies.
Games with dominated strategies have reaction correspondences which only cross at one point, which will be in either the bottom left, or top right corner in payoff symmetric 2x2 games. For instance, in the single-play prisoner's dilemma, the "Cooperate" move is not optimal for any probability of opponent Cooperation. Figure 5 shows the reaction correspondence for such a game, where the dimensions are "Probability play Cooperate", the Nash equilibrium is in the lower left corner where neither player plays Cooperate. If the dimensions were defined as "Probability play Defect", then both players best response curves would be 1 for all opponent strategy probabilities and the reaction correspondences would cross (and form a Nash equilibrium) at the top right corner.
Other (payoff asymmetric) games.
A wider range of reaction correspondences shapes is possible in 2x2 games with payoff asymmetries. For each player there are five possible best response shapes, shown in Figure 6. From left to right these are: dominated strategy (always play 2), dominated strategy (always play 1), rising (play strategy 2 if probability that the other player plays 2 is above threshold), falling (play strategy 1 if probability that the other player plays 2 is above threshold), and indifferent (both strategies play equally well under all conditions).
While there are only four possible types of payoff symmetric 2x2 games (of which one is trivial), the five different best response curves per player allow for a larger number of payoff asymmetric game types. Many of these are not truly different from each other. The dimensions may be redefined (exchange names of strategies 1 and 2) to produce symmetrical games which are logically identical.
Matching pennies.
One well-known game with payoff asymmetries is the matching pennies game. In this game one player, the row player — graphed on the y dimension — wins if the players coordinate (both choose heads or both choose tails) while the other player, the column player — shown in the x-axis — wins if the players discoordinate. Player Y's reaction correspondence is that of a coordination game, while that of player X is a discoordination game. The only Nash equilibrium is the combination of mixed strategies where both players independently choose heads and tails with probability 0.5 each.
Dynamics.
In evolutionary game theory, best response dynamics represents a class of strategy updating rules, where players' strategies in the next round are determined by their best responses to some subset of the population. Some examples include:
Importantly, in these models players only choose the best response on the next round that would give them the highest payoff "on the next round". Players do not consider the effect that choosing a strategy on the next round would have on future play in the game. This constraint results in the dynamical rule often being called myopic best response.
In the theory of potential games, best response dynamics refers to a way of finding a Nash equilibrium by computing the best response for every player:
Theorem: In any finite potential game, best response dynamics always converge to a Nash equilibrium.
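As an illustration of this convergence (a hypothetical example added here, not taken from the source), consider a 2x2 coordination game, which is a potential game; iterating myopic best responses, with players updating one at a time, reaches a Nash equilibrium after a couple of steps from any starting profile:

```python
import numpy as np

# Payoff matrices of a 2x2 coordination game (a potential game):
# both players prefer to match; A is the row player's payoff, B the column player's.
A = np.array([[2, 0],
              [0, 1]])
B = A.copy()          # symmetric payoffs

def best_response_to(opponent_strategy, payoff):
    # Pure best response of a player to the opponent's pure strategy.
    return int(np.argmax(payoff[:, opponent_strategy]))

row, col = 1, 0       # arbitrary starting profile (miscoordinated)
for step in range(10):
    new_row = best_response_to(col, A)
    new_col = best_response_to(new_row, B.T)   # column player best-responds to the row player
    if (new_row, new_col) == (row, col):
        break                                  # no player wants to deviate: Nash equilibrium
    row, col = new_row, new_col

print(row, col)       # (0, 0): both play strategy 0
```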
Smoothed.
Instead of best response correspondences, some models use smoothed best response functions. These functions are similar to the best response correspondence, except that the function does not "jump" from one pure strategy to another. The difference is illustrated in Figure 8, where black represents the best response correspondence and the other colors each represent different smoothed best response functions. In standard best response correspondences, even the slightest benefit to one action will result in the individual playing that action with probability 1. In smoothed best response as the difference between two actions decreases the individual's play approaches 50:50.
There are many functions that represent smoothed best response functions. The functions illustrated here are several variations on the following function:
formula_4
where formula_5 represents the expected payoff of action formula_6, and formula_7 is a parameter that determines the degree to which the function deviates from the true best response (a larger formula_7 implies that the player is more likely to make 'mistakes').
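A quick numeric illustration of this family of functions (the payoff values below are made up for the example): the logit form gives probability 1/2 when the two expected payoffs are equal, approaches the hard best response as the noise parameter shrinks, and approaches uniform randomization as it grows.

```python
import numpy as np

def smoothed_best_response(e1, e2, gamma):
    """Probability of playing action 1 under the logit rule with noise parameter gamma."""
    # Shift by the maximum before exponentiating, for numerical stability.
    m = max(e1, e2)
    z1, z2 = np.exp((e1 - m) / gamma), np.exp((e2 - m) / gamma)
    return z1 / (z1 + z2)

e1, e2 = 1.0, 0.8          # expected payoffs of the two actions (assumed values)
for gamma in [2.0, 0.5, 0.1, 0.01]:
    print(gamma, smoothed_best_response(e1, e2, gamma))
# As gamma -> 0 the output tends to 1 (the hard best response);
# as gamma grows it tends toward 0.5 (near-random play).
```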
There are several advantages to using smoothed best response, both theoretical and empirical. First, it is consistent with psychological experiments; when individuals are roughly indifferent between two actions they appear to choose more or less at random. Second, the play of individuals is uniquely determined in all cases, since it is a correspondence that is also a function. Finally, using smoothed best response with some learning rules (as in Fictitious play) can result in players learning to play mixed strategy Nash equilibria. | [
{
"math_id": 0,
"text": "b(\\cdot)"
},
{
"math_id": 1,
"text": "\\sigma_{-i}"
},
{
"math_id": 2,
"text": "b_{i}(\\sigma_{-i})"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "\\frac{e^{E(1)/\\gamma}}{e^{E(1)/\\gamma} + e^{E(2)/\\gamma}}"
},
{
"math_id": 5,
"text": "E(x)"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "\\gamma"
}
]
| https://en.wikipedia.org/wiki?curid=759831 |
75994590 | Quantum singular value transformation | Quantum singular value transformation is a framework for designing quantum algorithms. It encompasses a variety of quantum algorithms for problems which can be solved with linear algebra, including Hamiltonian simulation, search problems, and linear system solving. It was introduced in 2018 by András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe, generalizing algorithms for Hamiltonian simulation of Guang Hao Low and Isaac Chuang inspired by signal processing.
High-level description.
The basic primitive of quantum singular value transformation is the block-encoding. A quantum circuit is a block-encoding of a matrix "A" if it implements a unitary matrix "U" such that "U" contains "A" in a specified sub-matrix. For example, if formula_0, then "U" is a block-encoding of "A".
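As a concrete classical illustration of what a block-encoding is (a sketch added here, assuming NumPy and SciPy, and not taken from the article), any square matrix of spectral norm at most one can be embedded as the top-left block of a unitary of twice the dimension, i.e. with one extra ancilla qubit:

```python
import numpy as np
from scipy.linalg import sqrtm

def block_encode(A):
    """Unitary dilation with A (square, ||A|| <= 1) as its top-left block."""
    n = A.shape[0]
    I = np.eye(n)
    top = np.hstack([A, sqrtm(I - A @ A.conj().T)])
    bottom = np.hstack([sqrtm(I - A.conj().T @ A), -A.conj().T])
    return np.vstack([top, bottom])

A = np.array([[0.3, 0.4],
              [0.1, 0.5]])        # example matrix with spectral norm < 1
U = block_encode(A)

print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary
print(np.allclose(U[:2, :2], A))                # True: A sits in the top-left block
```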
The fundamental algorithm of QSVT is one that converts a block-encoding of "A" to a block-encoding of formula_1, where "p" is a polynomial of degree "d" and formula_2 denotes the conjugate transpose, with only "d" applications of the circuit and one ancilla qubit. This can be done for a large class of polynomials "p" which correspond to applying a polynomial to the singular values of "A", giving a "singular value transformation".
A variant of this algorithm can also be performed when "A" is Hermitian, corresponding to an "eigenvalue transformation". That is, given a block-encoding of "A" with eigendecomposition formula_3, one can get a block-encoding for formula_4, provided "p" is bounded.
"Input": A matrix formula_5 whose singular value decomposition is formula_6 where formula_7 are the singular values of A
"Input": A polynomial formula_8
"Output": A unitary where formula_8 has been applied to the singular values of formula_5: formula_9
References.
| [
{
"math_id": 0,
"text": "(\\langle 0 | \\otimes I) U (|0\\rangle |\\phi\\rangle) = A|\\phi\\rangle"
},
{
"math_id": 1,
"text": "p(A, A^\\dagger)"
},
{
"math_id": 2,
"text": "A^\\dagger"
},
{
"math_id": 3,
"text": "A = \\sum \\lambda_i u_iu_i^\\dagger"
},
{
"math_id": 4,
"text": "\\sum p(\\lambda_i) u_iu_i^\\dagger"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "A = W\\Sigma V^{\\dagger}"
},
{
"math_id": 7,
"text": "\\Sigma"
},
{
"math_id": 8,
"text": "P"
},
{
"math_id": 9,
"text": "\\begin{bmatrix}WP(\\Sigma) V^{\\dagger} & . \\\\. & . \\end{bmatrix}"
},
{
"math_id": 10,
"text": "U"
},
{
"math_id": 11,
"text": "U = \\begin{bmatrix}A & . \\\\. & . \\end{bmatrix}"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "|0\\rangle^{\\otimes n}"
},
{
"math_id": 14,
"text": "\\tilde{\\Pi}_{\\phi_{1}} U "
},
{
"math_id": 15,
"text": " \\prod_{k=1}^{\\frac{d-1}{2}} \\Pi_{\\phi_{2k}}U^{\\dagger}\\tilde{\\Pi}_{\\phi_{2k+1}} U "
},
{
"math_id": 16,
"text": " \\prod_{k=1}^{\\frac{d}{2}} \\Pi_{\\phi_{2k}-1}U^{\\dagger}\\tilde{\\Pi}_{\\phi_{2k}} U "
}
]
| https://en.wikipedia.org/wiki?curid=75994590 |
75998118 | Giry monad | An abstract structure modeling spaces of probability measures, first defined in the 80s.
In mathematics, the Giry monad is a construction that assigns to a measurable space a space of probability measures over it, equipped with a canonical sigma-algebra. It is one of the main examples of a probability monad.
It is implicitly used in probability theory whenever one considers probability measures which depend measurably on a parameter (giving rise to Markov kernels), or when one has "probability measures over probability measures" (such as in de Finetti's theorem).
Like many iterable constructions, it has the category-theoretic structure of a monad, on the category of measurable spaces.
Construction.
The Giry monad, like every monad, consists of three structures: a functor, which assigns to every measurable space formula_0 a space formula_1 of probability measures over it; a unit natural transformation formula_2, given by the Dirac delta embedding; and a multiplication natural transformation formula_3, given by taking expected values of measures.
The space of probability measures.
Let formula_4 be a measurable space.
Denote by formula_1 the set of probability measures over formula_4.
We equip the set formula_1 with a sigma-algebra as follows. First of all, for every measurable set formula_5, define the map formula_6 by formula_7.
We then define the sigma algebra formula_8 on formula_1 to be the smallest sigma-algebra which makes the maps formula_9 measurable, for all formula_10 (where formula_11 is assumed equipped with the Borel sigma-algebra).
Equivalently, formula_8 can be defined as the smallest sigma-algebra on formula_1 which makes the maps
formula_12
measurable for all bounded measurable formula_13.
The assignment formula_14 is part of an endofunctor on the category of measurable spaces, usually denoted again by formula_15. Its action on morphisms, i.e. on measurable maps, is via the pushforward of measures.
Namely, given a measurable map formula_16, one assigns to formula_17 the map formula_18 defined by
formula_19
for all formula_20 and all measurable sets formula_21.
The Dirac delta map.
Given a measurable space formula_22, the map formula_23 maps an element formula_24 to the Dirac measure formula_25, defined on measurable subsets formula_10 by
formula_26
The expectation map.
Let formula_27, i.e. a probability measure over the probability measures over formula_22. We define the probability measure formula_28 by
formula_29
for all measurable formula_10.
This gives a measurable, natural map formula_30.
Example: mixture distributions.
A mixture distribution, or more generally a compound distribution, can be seen as an application of the map formula_31.
Let's see this for the case of a finite mixture. Let formula_32 be probability measures on formula_22, and consider the probability measure formula_33 given by the mixture
formula_34
for all measurable formula_10, for some weights formula_35 satisfying formula_36.
We can view the mixture formula_33 as the average formula_37, where the measure on measures formula_27, which in this case is discrete, is given by
formula_38
More generally, the map formula_3 can be seen as the most general, non-parametric way to form arbitrary mixture or compound distributions.
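For a finite space and finitely many component measures, the expectation map is just a weighted average of probability vectors. The sketch below (with illustrative numbers, not from the article) computes the mixture formula_37 of the previous paragraph:

```python
import numpy as np

# Component distributions p_1, ..., p_n on a 3-element space X,
# one probability vector per row.
components = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.1, 0.8],
                       [0.3, 0.4, 0.3]])
weights = np.array([0.5, 0.3, 0.2])       # the discrete measure mu on PX

# E(mu)(A) = integral of p(A) d mu(p): here, a weighted average of the rows.
mixture = weights @ components

print(mixture)            # [0.44, 0.21, 0.35]
print(mixture.sum())      # 1.0 -- the mixture is again a probability measure
```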
The triple formula_39 is called the Giry monad.
Relationship with Markov kernels.
One of the properties of the sigma-algebra formula_8 is that given measurable spaces formula_22 and formula_40, we have a bijective correspondence between measurable functions formula_41 and Markov kernels formula_42. This allows to view a Markov kernel, equivalently, as a measurably parametrized probability measure.
In more detail, given a measurable function formula_43, one can obtain the Markov kernel formula_44 as follows,
formula_45
for every formula_24 and every measurable formula_21 (note that formula_46 is a probability measure).
Conversely, given a Markov kernel formula_47, one can form the measurable function formula_48 mapping formula_24 to the probability measure formula_49 defined by
formula_50
for every measurable formula_21.
The two assignments are mutually inverse.
From the point of view of category theory, we can interpret this correspondence as an adjunction
formula_51
between the category of measurable spaces and the category of Markov kernels. In particular, the category of Markov kernels can be seen as the Kleisli category of the Giry monad.
Product distributions.
Given measurable spaces formula_22 and formula_40, one can form the measurable space formula_52 with the product sigma-algebra, which is the product in the category of measurable spaces.
Given probability measures formula_20 and formula_53, one can form the product measure formula_54 on formula_55. This gives a natural, measurable map
formula_56
usually denoted by formula_57 or by formula_58.
The map formula_59 is in general not an isomorphism, since there are probability measures on formula_60 which are not product distributions, for example in case of correlation.
However, the maps formula_59 and the isomorphism formula_61 make the Giry monad a monoidal monad, and so in particular a commutative strong monad.
For example, the expectation map formula_64 on the extended half-line formula_63, defined by
formula_65
whenever formula_66 is supported on formula_67 and has finite expected value, and formula_68 otherwise, is measurable, and it makes formula_63 an algebra over the Giry monad.
Citations.
References.
| [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "PX"
},
{
"math_id": 2,
"text": "\\delta:X\\to PX"
},
{
"math_id": 3,
"text": "\\mathcal{E}:PPX\\to PX"
},
{
"math_id": 4,
"text": "(X, \\mathcal{F})"
},
{
"math_id": 5,
"text": "A\\in \\mathcal{F}"
},
{
"math_id": 6,
"text": "\\varepsilon_A:PX\\to\\mathbb{R}"
},
{
"math_id": 7,
"text": "p\\longmapsto p(A)"
},
{
"math_id": 8,
"text": "\\mathcal{PF}"
},
{
"math_id": 9,
"text": "\\varepsilon_A"
},
{
"math_id": 10,
"text": "A\\in\\mathcal{F}"
},
{
"math_id": 11,
"text": "\\mathbb{R}"
},
{
"math_id": 12,
"text": "p\\longmapsto\\int_X f \\,dp"
},
{
"math_id": 13,
"text": "f:X\\to\\mathbb{R}"
},
{
"math_id": 14,
"text": "(X,\\mathcal{F})\\mapsto (PX,\\mathcal {PF})"
},
{
"math_id": 15,
"text": "P"
},
{
"math_id": 16,
"text": "f:(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "f_*:(PX,\\mathcal {PF})\\to(PY,\\mathcal {PG})"
},
{
"math_id": 19,
"text": "f_*p\\,(B)=p(f^{-1}(B))"
},
{
"math_id": 20,
"text": "p\\in PX"
},
{
"math_id": 21,
"text": "B\\in\\mathcal{G}"
},
{
"math_id": 22,
"text": "(X,\\mathcal{F})"
},
{
"math_id": 23,
"text": "\\delta:(X,\\mathcal{F})\\to(PX,\\mathcal{PF})"
},
{
"math_id": 24,
"text": "x\\in X"
},
{
"math_id": 25,
"text": "\\delta_x\\in PX"
},
{
"math_id": 26,
"text": "\n\\delta_x(A) = 1_A(x) =\n\\begin{cases}\n1 & \\text{if }x\\in A, \\\\\n0 & \\text{if }x\\notin A.\n\\end{cases}\n"
},
{
"math_id": 27,
"text": "\\mu\\in PPX"
},
{
"math_id": 28,
"text": "\\mathcal{E}\\mu\\in PX"
},
{
"math_id": 29,
"text": "\n\\mathcal{E}\\mu(A) = \\int_{PX} p(A)\\,\\mu(dp)\n"
},
{
"math_id": 30,
"text": "\\mathcal{E}:(PPX,\\mathcal{PPF})\\to(PX,\\mathcal{PF})"
},
{
"math_id": 31,
"text": "\\mathcal{E}"
},
{
"math_id": 32,
"text": "p_1,\\dots,p_n"
},
{
"math_id": 33,
"text": "q"
},
{
"math_id": 34,
"text": "\nq(A) = \\sum_{i=1}^n w_i\\,p_i(A)\n"
},
{
"math_id": 35,
"text": "w_i\\ge 0"
},
{
"math_id": 36,
"text": "w_1+\\dots+w_n=1"
},
{
"math_id": 37,
"text": "q=\\mathcal{E}\\mu"
},
{
"math_id": 38,
"text": "\n\\mu = \\sum_{i=1}^n w_i\\,\\delta_{p_i} .\n"
},
{
"math_id": 39,
"text": "(P,\\delta,\\mathcal{E})"
},
{
"math_id": 40,
"text": "(Y,\\mathcal{G})"
},
{
"math_id": 41,
"text": "(X,\\mathcal{F})\\to(PY,\\mathcal{PG})"
},
{
"math_id": 42,
"text": "(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 43,
"text": "f:(X,\\mathcal{F})\\to(PY,\\mathcal{PG})"
},
{
"math_id": 44,
"text": "f^\\flat:(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 45,
"text": "\nf^\\flat(B|x) = f(x)(B)\n"
},
{
"math_id": 46,
"text": "f(x)\\in PY"
},
{
"math_id": 47,
"text": "k:(X,\\mathcal{F})\\to(Y,\\mathcal{G})"
},
{
"math_id": 48,
"text": "k^\\sharp:(X,\\mathcal{F})\\to(PY,\\mathcal{PG})"
},
{
"math_id": 49,
"text": "k^\\sharp(x)\\in PY"
},
{
"math_id": 50,
"text": "\nk^\\sharp(x)(B) = k(B|x)\n"
},
{
"math_id": 51,
"text": "\n\\mathrm{Hom}_\\mathrm{Meas} (X,PY) \\cong \\mathrm{Hom}_\\mathrm{Stoch} (X,Y)\n"
},
{
"math_id": 52,
"text": "(PX,\\mathcal{PX})\\times (PY,\\mathcal{PY})=(X\\times Y, \\mathcal{F}\\times\\mathcal{G})"
},
{
"math_id": 53,
"text": "q\\in PY"
},
{
"math_id": 54,
"text": "p\\otimes q"
},
{
"math_id": 55,
"text": "(X\\times Y, \\mathcal{F}\\times\\mathcal{G})"
},
{
"math_id": 56,
"text": "\n(PX,\\mathcal{PF})\\times (PY,\\mathcal{PG})\\to \\big(P(X\\times Y), \\mathcal{P(F\\times G)}\\big)\n"
},
{
"math_id": 57,
"text": "\\nabla"
},
{
"math_id": 58,
"text": "\\otimes"
},
{
"math_id": 59,
"text": "\\nabla:PX\\times PY\\to P(X\\times Y)"
},
{
"math_id": 60,
"text": "X\\times Y"
},
{
"math_id": 61,
"text": "1\\cong P1"
},
{
"math_id": 62,
"text": "(PX,\\mathcal{PF})"
},
{
"math_id": 63,
"text": "[0,\\infty]"
},
{
"math_id": 64,
"text": "e:P[0,\\infty]\\to [0,\\infty]"
},
{
"math_id": 65,
"text": "\np \\longmapsto \\int_{[0,\\infty)} x\\,p(dx)\n"
},
{
"math_id": 66,
"text": "p"
},
{
"math_id": 67,
"text": "[0,\\infty)"
},
{
"math_id": 68,
"text": "e(p)=\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=75998118 |
76000393 | List of novels by Lincoln Child | The American author Lincoln Child has released a number of novels and works.
Stand-alone novels.
Utopia (2002).
Utopia is the first solo novel by Lincoln Child, published in 2002. It is set in a futuristic amusement park called "Utopia", a park that relies heavily on holographics and robotics. Dr. Andrew Warne, the man who designed the program that runs the park's robots, is called in to help fix a problem. But when he gets there, he finds out that the park is being held hostage by a mysterious man known as John Doe.
Utopia consists of five "Worlds", each modeled after different time eras.
A review in "Publishers Weekly" criticized the "Sluggish prose and overload of technical detail", but admired the book's conclusion as properly thrilling. One blogger called it a "page turner" and another blogger admired Child keeping it suspenseful as to which characters would survive and which would perish.
Death Match (2004).
Death Match is a 2004 horror novel by Lincoln Child. It is his second solo novel. It is a techno-horror look at electronic matchmaking where for a substantial sum of money, the computer will locate a 'perfect match' for anyone. However, these most perfect of matches ('supercouples', 100% compatibility, etc.) suddenly start seeing mysterious tragedy. The plot begins with a "supercouple" found dead in their Arizona home, in an apparent double suicide.
Jeremy Logan series.
Deep Storm (2007).
Deep Storm is the third solo novel by American author Lincoln Child, published on January 30, 2007. This is the first of Child's novels to introduce Dr. Jeremy Logan, the protagonist of Child's solo works.
In the prologue, three workers – Kevin Lindengood, Fred Hicks, and John Wherry – are operating the rig on the Storm King oil rig in the North Atlantic, off the coast of Greenland. When the equipment begins malfunctioning, Wherry orders everything to be shut down. However, even after Lindengood shuts off the electromagnet, a series of strange signals are still being transmitted to their devices.
Twenty months later, former naval doctor Peter Crane is sent to investigate a mysterious illness that has broken out on the rig. He meets Dr. Howard Asher, who hints at a fantastic secret being discovered. Government officials transport him to a massive, 12-level facility run by the United States military. He receives a confidential envelope that explains how the military has discovered Atlantis. As he is brought down into the facility, codenamed Deep Storm, he discovers that nearly a quarter of the staff have been acting strangely within the last few weeks. Working alongside the psychiatrist Dr. Roger Corbett and the chief military doctor Michele Bishop, Crane is witness to one of these incidents; a worker named Randall Waite suddenly grabs a hostage after screaming about "voices" in his head, then eventually stabs himself in the neck with a screwdriver. After interviewing some of the patients there, and finding many of the symptoms including sleeplessness, lack of focus, nausea, and psychological effects such as changes in personality, Crane realizes that there must be some kind of unifying basis to all of them.
Meanwhile, Asher talks with the military commander in charge of Deep Storm, Admiral Spartan, and his second-in-command, Commander Terrence Korolis. Asher thinks that Crane should have the right to go down to the “classified” levels, levels 6 through 1, to investigate the cause of the sickness. After this, the base is set on alert after a pinhole breach in one of the corridors. The officers determine it was an act of sabotage and Asher reminds all of the heads of departments to be vigilant, while Korolis brings in a team of black ops soldiers, who answer directly to him rather than Spartan, to reinforce security. Asher also shows Crane several "sentinels" that they have found: cube-shaped objects with a texture that seems to consist of every color known to man, which emit thin beams of light straight up and gravitate to the center of any room or container they are kept in. Asher tells him that this is actually not a solid beam of light, but a pulse sending out a mini signal in binary code. He further goes on to say that he hired his personal cryptographer, Joseph Marris, to analyze this binary code since he believes that this technology is not meant for humans. Admiral Spartan and his forces come in at this instant and, much to Crane's surprise, give him clearance to visit the entire facility. Crane goes down and meets Hui Ping, a doctor who is also trying to analyze the beams of light. Ping and Crane also agree to leave no stone unturned and check for any kind of similarity all the patients may have.
Meanwhile, back on the mainland, Lindengood gets in contact with a man named Wallace, who represents a shadowy organization that has taken a great interest in the discovery after Lindengood provides them with certain information. However, unbeknownst to him, they plan to simply destroy whatever is down there. When Lindengood demands an increased pay for his information, Wallace kills him and flees to Storm King, working undercover as a crew member and regularly shipping supplies to a fellow insider on Deep Storm.
A few days later, Asher reaches a breakthrough with the binary code, and realizes that it is a mathematical expression: 1 divided by 0. A while later, after Crane mistakenly handles a sentinel with his bare hand, Asher excitedly describes how the sentinel's broadcasts are now more clear, and they can now analyze messages on the infra-red spectrum, radioactive spectrum, and any other kind of measuring device known to man. However, during this exchange, Peter notices how Asher has a very pale complexion and bruising along his arm. He requests that Asher go to medical, but Asher disagrees, saying that he could spend time in the Hyperbaric chamber as a way to alleviate his illness for a short time, just until they decode the rest of the messages coming from the sentinel. At this point, Crane runs a brain scan on all of the patients and discovers that they all do have something in common; all of their brain waves spike in formation, even their theta waves. He realizes that this is another signal coming from the source. He also realizes the implications, that whoever made this technology is much more powerful than humans. He is about to tell Asher of his discovery when Asher phones him saying that he decoded all of the messages. However, upon his arrival at the Hyperbaric chamber, the saboteur has struck again, burning the Hyperbaric chamber with Asher and Marris both inside. Asher is nearly dead but manages to say one word to Crane before he dies: Whip. Along with Ping, Crane does not realize what this means, but then figures out that they could possibly salvage the hard drive and look at the decoded messages from Asher's laptop. However, Commander Korolis records the conversation and hurriedly runs a degaussing magnet over the hard drive, erasing it. He notices how Asher did not want to continue with the digging, and assumes that whatever is on the hard drive is not relevant and would halt America from recovering possibly beneficial technology.
Korolis subsequently frames Ping as the saboteur, forcing Crane and Ping into hiding as they decipher Asher's hard drive. They go to a deserted physics lab and realize that the hard drive was magnetized. Despite this, Ping manages to resurrect the data using a crude form of magnetic force microscopy, and as they peer onto the screen they realize that the other messages included "formula_0", "π=a/b" and "formula_1", and other impossible mathematical equations. Because humans use both passive and active warnings to alert people to the danger of stored weapons, Crane assumes the aliens think the same way and that the sentinels are actually a message warning advanced civilizations to stay away from Earth. This explains the mathematical expressions: the "forbidden" mathematical maneuvers are the only way the aliens can communicate with other, more sophisticated races.
Crane leaves Ping and goes to warn Spartan of this danger. Spartan initially does not seem to take the hint, and still believes that there is beneficial technology there. Frustrated, Crane goes to Dr. Bishop and asks her to organize the other heads of departments into believing him. Bishop promises to call him back but is discovered by Dr. Corbett an hour later in the Environmental Control section, wiring C-4 into the facility's wall. Corbett secretly switches on his phone, dials his intern, and confronts Bishop. She does not deny it, instead revealing that she is a radical with anti-American ideals, and how she believes that America has no right to take this technology. She shoots him with a silenced pistol and quickly leaves at the sound of approaching voices. Corbett is barely alive, and starts to disable the C4, but Bishop re-enters the room and finds him. In his panic, he accidentally activates the fourth and final detonator, killing himself and Bishop and blowing open the facility wall. The resulting leak floods all of level 8 and half of level 7.
Meanwhile, Spartan tells Korolis about how Crane's advice does make sense, and he is going to call for an investigation before starting the drilling again. Korolis, determined to acquire the technology no matter what, and believing that Spartan has become infected with the disease, knocks Spartan unconscious, locks him in his quarters, and assumes command.
Crane and Ping meet with Dr. Gene Vanderbilt, the ranking science officer of levels 8 through 12, and he orders a mass evacuation of all personnel on levels 9 through 12, as those on level 8 and below (including Korolis) are stranded by the flood. They round up all 112 people on the higher levels and begin the evacuation process. A single black ops soldier arrives at the ladder to the escape pod as the group begins to escape, and he orders them to return to their stations. A wounded Spartan then appears and guns the soldier down, ordering everyone to evacuate while he stays behind to fend off any other approaching soldiers. He gives Crane the card of his contact in Washington, simply named McPherson, and tells Crane to tell McPherson everything. During this time, Ping manages to decode another of the warnings which suggests that uncovering the weapons could destroy the Solar System. The survivors manage to launch the escape pod shortly before Korolis and his men discover a fantastic weapons cache of stable orbiting black holes. Before they can investigate further, a blast that is presumed to be one of the active countermeasures is fired, consuming the drill team and Deep Storm.
In the epilogue several months later, a small salvaging operation of various wreckage from Deep Storm is underway, and the other insider from the Storm King platform, Wallace, has been arrested. Crane and Ping have met up with McPherson, and Crane tells McPherson the entire story. They listen to a recorded tape of Korolis before his death, and agree not to tell anyone of this discovery, as no one was meant to access such powerful weapons. Crane reasons that whoever put the sentinels there were also cautious enough to provide obvious warning signals, as evident by the impossible mathematic equations. However, McPherson raises two disturbing points: The aliens more than likely consider humans to be negligible due to their primitive technology, hence the violent placing of the devices in the earth as recounted by Albarn 600 years ago; and also that humans at least deactivate weapons before storing them, but because the aliens did not attempt this at all, McPherson thinks that this is not a waste dump at all; it is an active storage facility of weapons for future use.
Terminal Freeze (2009).
Terminal Freeze is the fourth solo novel by Lincoln Child. The novel was released on February 24, 2009, by Random House. It is the second novel in the Jeremy Logan series.
The events take place in Alaska, north of the Arctic Circle. A decommissioned military base located near the fictional Mount Fear, the Mount Fear Remote Sensing Installation, is being used by a research team from Northern Massachusetts University to study the effects of global warming on a receding glacier. The team consists of five scientists from the university - Evan Marshall, a paleoecologist; Gerard Sully, a climatologist and the team leader; Wright Faraday, an evolutionary biologist; Ang Chen, a graduate student; and Penny Barbour, a computer scientist - along with the skeleton crew of four soldiers - Corporal Marcelin, Privates First Class Tad Phillips and Donovan Fluke, and the leader, Sergeant Paul Gonzalez.
The expedition discovers a monstrous ancient animal, presumed to be a preserved example of "Smilodon populator", frozen in solid ice inside a lava tube made into an ice cave. The expedition's corporate sponsors, Terra Prime and its parent corporation Blackpool Entertainment, sense huge publicity and decide to have the beast cut from the ice, thawed, and revealed live on television. A massive entourage is sent to the base to begin production of the documentary, and the new crew includes Kari Ekberg, the field producer; Emilio Conti, the eccentric director (along with his assistant Hulce); Allan Fortnum, the director of photography; Ken Toussaint, the assistant director of photography; Wolff, the network liaison and channel representative; George Creel, the production foreman; and Ashleigh Davis, the spoiled host and star of the show (along with her assistant Brianna). The group is later joined by the truck driver Carradine, who brings Davis' luxurious trailer to the site, and a man who hitched a ride with Carradine, named Dr. Jeremy Logan, a private investigator and Yale professor of medieval history.
Meanwhile, local Tunit people, led by their chief Usuguk, try to warn the scientists that they do not understand what they have found. Specific warnings are that the entire mountain it was found in is a place of evil, the creature exists only for the sole purpose of killing, and that the Tunit do not believe it is dead. In addition, after a reexamination of the creature by Marshall, Faraday, and Barbour, it is revealed that the creature is not a "Smilodon" at all, but a new, unknown animal entirely, which may be up to 16 feet in length. However, Conti and Wolff are determined to move forward with the production, up until the creature suddenly vanishes from the vault it is being stored in. Although they initially believe it to be stolen, analysis of the hole in the vault floor by Faraday reveals that the incisions in the wood were made from the inside, and appear to be made by something more natural than a tool. Meanwhile, Logan reveals to Marshall that he is investigating the base itself after uncovering recently declassified government documents, detailing an incident at the base in 1958 that resulted in the deaths of 7 of the 8 scientists there. The incident, however, was forgotten when the officer reporting it, Colonel H.N. Rose, died in a plane crash with the full, detailed report.
When a production assistant named Josh Peters, Davis, and Fluke are all suddenly and brutally killed, along with Toussaint and Brianna being wounded, Gonzalez decides that the base has to be evacuated immediately. Carradine offers to transport everyone in his semi's trailer, and although Wolff initially objects, Gonzalez overrides him and agrees. Everyone boards the trailer and flees, leaving behind only the three remaining soldiers, Marshall, Logan, Ekberg, Sully, Conti, Wolff, Faraday, and Creel. After Logan investigates the abandoned quarters of the base's previous science team, he discovers a small journal left behind and hidden in a crawlspace by one of the former occupants, which, among other things, says that the Tunit have the answer to whatever it was that killed the team. Marshall decides to take the Sno-Cat and travel to the Tunit village, only to find that all of the Tunit have fled for the shoreline, leaving behind only the elderly shaman Usuguk. After Marshall's pleas for help and information about the monster, Usuguk reveals that he was the sole survivor of the crew of '58, and agrees to go back to the base with Marshall. Once they return, Usuguk explains the full story, and how the crew of the base discovered a similar creature similarly encased in ice, cut it out, and brought it back to the base. Usuguk calls it the "kurrshuq" (the "Fang of the Gods" and the "Devourer of Souls"), a local legend among the Tunit people for generations, and shocks everyone by explaining that the creature, upon thawing out and coming alive, was actually quite friendly and playful. However, only after one scientist attempted to study its hunting habits by playing recordings of animal screams, did the creature suddenly turn violent and kill all except Usuguk. He then reveals one final, chilling detail: The "kurrshuq" that killed the team in 1958, before it also suddenly died and its body vanished, was no bigger than an Arctic fox; far smaller than this creature.
Meanwhile, the three soldiers and Creel (who volunteered to stay behind due to his hunting and military experience) all begin searching the base for the creature. Once they finally encounter it, it kills Creel and Marcelin while Gonzalez and Phillips return to the life sciences lab where the others are hiding. With Faraday's research of the blood found inside the vault, they realize that the creature has extremely advanced white blood cells that rapidly heal all wounds, and also contain the same compounds as PCP, thus giving the creature enormous and enduring strength. Marshall then speculates, after comparing all of the victims and the unusually distinct shape of the creature's ears, that the "kurrshuq" has extremely sensitive, sonar-like hearing, like a bat. He claims that, as Usuguk described and Toussaint himself raved about after being attacked, the creature likes to play with people rather than deliberately kill them, as none of its victims were actually eaten like a usual carnivore. Thus, the reason that all were killed except for Toussaint was because they screamed upon seeing the creature, and it killed them in order to stop the noise, just as the original "kurrshuq" did in 1958. Using this information, Marshall and Sully begin to construct a machine out of sonar technology to emit loud sounds that might be enough to ward off the creature. When Ekberg radios in for help after the monster kills Conti and Wolff (as the three had left on their own in a final attempt to film the creature), Marshall leaves to bring her back, and both return just as the monster appears. Sully tries several different frequencies of sound on the creature, with the final kind - the sine waves - causing the creature noticeable pain. However, the creature kills Sully and briefly stops the machine. Marshall, realizing that it was finally working, starts dragging the machine into a large echo chamber in order to amplify the sounds even greater. Marshall, Logan, and Usuguk lure the creature in, and Marshall turns up the sine waves even louder, causing the creature to collapse and writhe in agony before its head explodes.
In the epilogue, as the remaining crew members - Marshall, Logan, Ekberg, Faraday, Gonzalez, and Phillips - are being evacuated, the scientists are still baffled by the nature of the creature, as its corpse had suddenly disappeared after it supposedly died. After Usuguk leaves, Marshall speculates that perhaps the theory Usuguk had been insisting all along was true; that the creature was a creation of the spirits that rule this land according to Tunit culture, and that both the new "kurrshuq" and the older one had left the physical world for the spirit world, as Usuguk claimed they did. Logan cryptically mentions how he once lost a pet dachshund while on a family trip, implying that he believes the creature was left behind by extraterrestrial visitors. A budding relationship between Marshall and Ekberg is hinted at, and Logan bids them farewell, saying that he's gotten a call from his private investigation office about another interesting case.
The setting and story of the book are very similar to those of the novella "Who Goes There?", featuring an arctic setting where a group of scientists uncover and thaw out a mysterious creature in the ice, which then breaks free and runs amuck. Tunit was the name assigned to the Dorset people in Greenland by the early Thule proto-Inuit. The last Dorset survivors died in 1902 and no Dorset people ever lived in Alaska, being resident in eastern Canada and Greenland.
The Third Gate (2012).
The Third Gate is the fifth solo novel by American writer Lincoln Child. The novel was released on June 12, 2012, by Doubleday. The book is also the third installment in the Jeremy Logan series.
Shortly after the events of "Terminal Freeze", Dr. Jeremy Logan is contacted by an old colleague named Dr. Ethan Rush, who invites him on an expedition into the Sudd in southern Egypt. The expedition, led by famed archaeologist Dr. Porter Stone, seeks to finally locate and excavate the long-lost tomb of the ancient Egyptian pharaoh Narmer, located at the bottom of the swamp. Other members of the expedition include the head of security Frank Valentino, technician Cory Landau, archaeologist Tina Romero, and mechanic Frank Kowinsky. Also accompanying the expedition is Rush's wife Jennifer, who has been maintaining a special connection to "the other side" after a near-death experience where she technically died in a car crash, but was revived by her husband. Rush uses his special method of hypnosis to put Jennifer into a lucid state through which they can communicate with the spirits within the tomb below them, which they believe to be that of Narmer himself. The base of operations is a massive group of canvas-covered outposts floating in the middle of the Sudd, simply referred to as "The Station."
Once they finally manage to create a passageway down to the tomb entrance—nicknamed the Umbilical Cord—they slowly begin excavation through the first two chambers, known as the Gates, with the Third Gate containing the tomb of Narmer himself, while the first two Gates contain rooms full of treasure. However, when Romero studies the mummified remains within the Third Gate, she realizes that the remains are of a female body. Logan similarly draws a conclusion based on the mannerisms Jennifer displayed whenever possessed by the spirit, and deduces that it has to be a female spirit inhabiting her during the sessions, not that of a man. Thus, they realize that Narmer's queen, Niethotep, must have killed Narmer by poisoning him and taking his place in the tomb.
Shortly after this discovery, Jennifer is fully possessed by the spirit of Niethotep once more, which then sabotages the ventilation system on the base and starts a fire in the engine room. She then takes two cylinders of nitroglycerin and uses one to damage the Umbilical Cord, killing Kowinsky, while holding the second one in her hand to keep everyone at bay. Valentino orders an evacuation of the Station, with most personnel taking as much treasure with them as possible, and escapes in one of the rafts along with Stone, Romero, and Landau. Logan and Rush stay behind to try to bring back Jennifer Rush and cast out the evil spirit of Niethotep, but they are unsuccessful in doing so; Niethotep throws the final canister of nitroglycerin down between her and Rush, creating an explosion that kills both of them while narrowly sparing Logan. Logan grabs a handful of treasure and escapes on one of the final rafts before the base explodes and sinks into the Sudd.
A review by Anthony Schultz praised the introductory chapter, but felt the middle of the novel was disappointing, with too much fruitless investigation by Logan before the answers start finally being revealed toward the very end of the book. He also thought the characters were "flat and shallow", and even the better characters weren't fleshed out properly, with too many loose ends.
References.
| [
{
"math_id": 0,
"text": "a^3+b^3=c^3"
},
{
"math_id": 1,
"text": "x=0^0"
}
]
| https://en.wikipedia.org/wiki?curid=76000393 |
76015102 | Invariant sigma-algebra | Sigma-algebra used in probability and ergodic theory
In mathematics, especially in probability theory and ergodic theory, the invariant sigma-algebra is a sigma-algebra formed by sets which are invariant under a group action or dynamical system. It can be interpreted as the collection of events which are "indifferent" to the dynamics.
The invariant sigma-algebra appears in the study of ergodic systems, as well as in theorems of probability theory such as de Finetti's theorem and the Hewitt-Savage zero-one law.
Definition.
Strictly invariant sets.
Let formula_0 be a measurable space, and let formula_1 be a measurable function. A measurable subset formula_2 is called invariant if and only if formula_3.
Equivalently, formula_2 is invariant if for every formula_4, we have formula_5 if and only if formula_6.
More generally, let formula_7 be a group or a monoid, let formula_8 be a monoid action, and denote the action of formula_9 on formula_10 by formula_11.
A subset formula_12 is formula_13-invariant if for every formula_9, formula_14.
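For a map on a finite set, the strictly invariant sets can be enumerated directly from the definition, which helps build intuition for the sigma-algebra they generate. A small illustrative sketch (the map below is an arbitrary example, not from the article):

```python
from itertools import combinations

X = range(6)
T = {0: 1, 1: 0, 2: 3, 3: 2, 4: 4, 5: 4}   # a map T : X -> X

def is_invariant(S):
    # Strict invariance: T^{-1}(S) = S, i.e. x in S  iff  T(x) in S.
    return all((x in S) == (T[x] in S) for x in X)

invariant_sets = [set(c) for r in range(len(X) + 1)
                  for c in combinations(X, r) if is_invariant(set(c))]
print(invariant_sets)
# The invariant sets are exactly the unions of {0, 1}, {2, 3} and {4, 5};
# these three blocks generate the invariant sigma-algebra for T.
```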
Almost surely invariant sets.
Let formula_0 be a measurable space, and let formula_1 be a measurable function. A measurable subset (event) formula_2 is called almost surely invariant if and only if its indicator function formula_15 is almost surely equal to the indicator function formula_16.
Similarly, given a measure-preserving Markov kernel formula_17, we call an event formula_18 almost surely invariant if and only if formula_19 for almost all formula_4.
As for the case of strictly invariant sets, the definition can be extended to an arbitrary group or monoid action.
In many cases, almost surely invariant sets differ from invariant sets only by a null set (see below).
Sigma-algebra structure.
Both strictly invariant sets and almost surely invariant sets are closed under taking countable unions and complements, and hence they form sigma-algebras.
These sigma-algebras are usually called either the invariant sigma-algebra or the sigma-algebra of invariant events, both in the strict case and in the almost sure case, depending on the author.
For the purposes of this article, denote by formula_20 the sigma-algebra of strictly invariant sets, and by formula_21 the sigma-algebra of almost surely invariant sets.
Examples.
Exchangeable sigma-algebra.
Given a measurable space formula_35, denote by formula_36 the countable cartesian power of formula_10, equipped with the product sigma-algebra.
We can view formula_37 as the space of infinite sequences of elements of formula_10,
formula_38
Consider now the group formula_39 of finite permutations of formula_40, i.e. bijections formula_41 such that formula_42 only for finitely many formula_43.
Each finite permutation formula_44 acts measurably on formula_37 by permuting the components, and so we have an action of the countable group formula_39 on formula_37.
An event that is invariant under this action is often called an exchangeable event or a symmetric event, and the sigma-algebra of invariant events is often called the exchangeable sigma-algebra.
A random variable on formula_37 is exchangeable (i.e. permutation-invariant) if and only if it is measurable with respect to the exchangeable sigma-algebra.
The exchangeable sigma-algebra plays a role in the Hewitt-Savage zero-one law, which can be equivalently stated by saying that for every probability measure formula_31 on formula_35, the product measure formula_45 on formula_37 assigns to each exchangeable event probability either zero or one.
Equivalently, for the measure formula_45, every exchangeable random variable on formula_37 is almost surely constant.
It also plays a role in the de Finetti theorem.
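The definitions above can be made concrete in a finite toy analogue, replacing the infinite product formula_37 by the finite power {0,1}^6 and the group of finite permutations by the symmetric group on the six coordinates. The sketch below is our own illustration of the definition, not code from any reference: it checks by brute force that a symmetric "majority" event is invariant under the generating adjacent transpositions, while an event depending on the first coordinate is not.

```python
from itertools import product

# Finite toy analogue of the exchangeable sigma-algebra: points are elements of
# {0,1}^6 and the group is generated by the adjacent transpositions of coordinates.
# An event (a subset of the space) is invariant iff every generator maps it onto itself.
N = 6
points = list(product((0, 1), repeat=N))

def transpose(x, i, j):
    y = list(x)
    y[i], y[j] = y[j], y[i]
    return tuple(y)

def is_invariant(event):
    return all(transpose(x, i, i + 1) in event
               for x in event
               for i in range(N - 1))

majority = {x for x in points if sum(x) >= N / 2}      # symmetric in the coordinates
first_is_one = {x for x in points if x[0] == 1}        # depends on the coordinate order

print(is_invariant(majority))       # True: a permutation-invariant ("exchangeable") event
print(is_invariant(first_is_one))   # False
```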
Shift-invariant sigma-algebra.
As in the example above, given a measurable space formula_35, consider the countably infinite cartesian product formula_36.
Consider now the shift map formula_46 given by mapping formula_47 to formula_48.
An event that is invariant under the shift map is called a shift-invariant event, and the resulting sigma-algebra is sometimes called the shift-invariant sigma-algebra.
This sigma-algebra is related to the one of tail events, which is given by the following intersection,
formula_49
where formula_50 is the sigma-algebra induced on formula_37 by the projection on the formula_51-th component formula_52.
Every shift-invariant event is a tail event, but the converse is not true.
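As a small sanity check of shift-invariance, the sketch below encodes eventually constant 0/1 sequences as a finite prefix together with a constant tail value, and tests the pointwise condition (membership in the event is unchanged by applying the shift) on a finite sample of such sequences. The encoding and the event names are our own, and a finite sample can refute invariance but not prove it.

```python
from itertools import product

# Eventually constant 0/1 sequences, encoded as (finite_prefix, constant_tail_value).
# The shift map drops the first coordinate.
def shift(x):
    prefix, tail = x
    return (prefix[1:], tail) if prefix else ((), tail)

def coordinate(x, n):
    prefix, tail = x
    return prefix[n] if n < len(prefix) else tail

sample = [(p, t) for k in range(4) for p in product((0, 1), repeat=k) for t in (0, 1)]

eventually_one = lambda x: x[1] == 1                 # "x_n = 1 for all large n": shift-invariant
starts_with_one = lambda x: coordinate(x, 0) == 1    # depends on the first coordinate only

for name, event in (("eventually one", eventually_one),
                    ("starts with one", starts_with_one)):
    looks_invariant = all(event(x) == event(shift(x)) for x in sample)
    print(name, "->", looks_invariant)   # True for the first event, False for the second
```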
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X,\\mathcal{F})"
},
{
"math_id": 1,
"text": "T:(X,\\mathcal{F})\\to(X,\\mathcal{F})"
},
{
"math_id": 2,
"text": "S\\in \\mathcal{F}"
},
{
"math_id": 3,
"text": "T^{-1}(S)=S"
},
{
"math_id": 4,
"text": "x\\in X"
},
{
"math_id": 5,
"text": "x\\in S"
},
{
"math_id": 6,
"text": "T(x)\\in S"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "\\alpha:M\\times X\\to X"
},
{
"math_id": 9,
"text": "m\\in M"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\alpha_m:X\\to X"
},
{
"math_id": 12,
"text": "S\\subseteq X"
},
{
"math_id": 13,
"text": "\\alpha"
},
{
"math_id": 14,
"text": "\\alpha_m^{-1}(S) = S"
},
{
"math_id": 15,
"text": "1_S"
},
{
"math_id": 16,
"text": "1_{T^{-1}(S)}"
},
{
"math_id": 17,
"text": "k:(X,\\mathcal{F},p)\\to(X,\\mathcal{F},p)"
},
{
"math_id": 18,
"text": "S\\in\\mathcal{F}"
},
{
"math_id": 19,
"text": "k(S\\mid x) = 1_S(x)"
},
{
"math_id": 20,
"text": "\\mathcal{I}"
},
{
"math_id": 21,
"text": "\\tilde{\\mathcal{I}}"
},
{
"math_id": 22,
"text": "T:(X,\\mathcal{A},p)\\to (X,\\mathcal{A},p)"
},
{
"math_id": 23,
"text": "A\\in\\mathcal{A}"
},
{
"math_id": 24,
"text": "A'\\in\\mathcal{I}"
},
{
"math_id": 25,
"text": "p(A\\triangle A')=0"
},
{
"math_id": 26,
"text": "T:(X,\\mathcal{A})\\to (X,\\mathcal{A})"
},
{
"math_id": 27,
"text": "f:(X,\\mathcal{A})\\to(\\mathbb{R},\\mathcal{B})"
},
{
"math_id": 28,
"text": "f"
},
{
"math_id": 29,
"text": "f\\circ T=f"
},
{
"math_id": 30,
"text": "(\\mathbb{R},\\mathcal{B})"
},
{
"math_id": 31,
"text": "p"
},
{
"math_id": 32,
"text": "A\\in\\mathcal{I}"
},
{
"math_id": 33,
"text": "p(A)=0"
},
{
"math_id": 34,
"text": "p(A)=1"
},
{
"math_id": 35,
"text": "(X,\\mathcal{A})"
},
{
"math_id": 36,
"text": "(X^\\mathbb{N},\\mathcal{A}^{\\otimes\\mathbb{N}})"
},
{
"math_id": 37,
"text": "X^\\mathbb{N}"
},
{
"math_id": 38,
"text": "\nX^\\mathbb{N} = \\{(x_0,x_1,x_2,\\dots), x_i\\in X\\}.\n"
},
{
"math_id": 39,
"text": "S_\\infty"
},
{
"math_id": 40,
"text": "\\mathbb{N}"
},
{
"math_id": 41,
"text": "\\sigma:\\mathbb{N}\\to\\mathbb{N}"
},
{
"math_id": 42,
"text": "\\sigma(n)\\ne n"
},
{
"math_id": 43,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 44,
"text": "\\sigma"
},
{
"math_id": 45,
"text": "p^{\\otimes\\mathbb{N}}"
},
{
"math_id": 46,
"text": "T:X^\\mathbb{N}\\to X^\\mathbb{N}"
},
{
"math_id": 47,
"text": "(x_0,x_1,x_2,\\dots)\\in X^\\mathbb{N}"
},
{
"math_id": 48,
"text": "(x_1,x_2,x_3,\\dots)\\in X^\\mathbb{N}"
},
{
"math_id": 49,
"text": "\n\\bigcap_{n\\in\\mathbb{N}} \\left( \\bigotimes_{m\\ge n} \\mathcal{A}_m \\right),\n"
},
{
"math_id": 50,
"text": "\\mathcal{A}_m\\subseteq \\mathcal{A}^{\\otimes\\mathbb{N}}"
},
{
"math_id": 51,
"text": "m"
},
{
"math_id": 52,
"text": "\\pi_m:(X^\\mathbb{N},\\mathcal{A}^{\\otimes\\mathbb{N}})\\to(X,\\mathcal{A})"
}
]
| https://en.wikipedia.org/wiki?curid=76015102 |
7601777 | Reid index | The Reid Index is a mathematical relationship that exists in a human bronchus section observed under the microscope. It is defined as ratio between the thickness of the submucosal mucus secreting glands and the thickness between the epithelium and cartilage that covers the bronchi. The Reid index is not of diagnostic use "in vivo" since it requires a dissection of the airway tube, but it has value in "post mortem" evaluations and for research.
The Reid Index was developed in the late 1950s from the work of Dr. Lynne McArthur Reid, M.D. who first described the relationship between hypertrophic bronchial mucous glands and the resultant narrowing of the airways seen in chronic bronchitis. In 1967, Dr. Reid became the first woman to achieve the rank of professor of experimental pathology in England and later became the first dean of the Cardiothoracic Institute at London University.
Calculation.
formula_0
where:
RI is the Reid Index
"wall" is the thickness of the airway wall between the epithelium and the cartilage's perichondrium
"gland" is the thickness of the mucus-producing gland at the location of inspection.
Interpretation.
A normal Reid Index should be smaller than 0.4: the thickness of the wall is always more than double the thickness of the glands it contains. Chronic smoking causes submucosal gland hypertrophy and hyperplasia, leading to a Reid Index of >0.5, indicating chronic bronchitis.
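A direct transcription of the ratio, with made-up measurements purely for illustration:

```python
def reid_index(gland_thickness, wall_thickness):
    """RI = gland / wall; both thicknesses must be measured in the same units."""
    return gland_thickness / wall_thickness

# Hypothetical measurements from one bronchial section (millimetres):
ri = reid_index(gland_thickness=0.25, wall_thickness=0.60)
print(round(ri, 2))   # 0.42, above the normal threshold of 0.4
```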
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ RI = \\frac {gland}{wall}"
}
]
| https://en.wikipedia.org/wiki?curid=7601777 |
76019869 | Young subgroup | In mathematics, the Young subgroups of the symmetric group formula_0 are special subgroups that arise in combinatorics and representation theory. When formula_0 is viewed as the group of permutations of the set formula_1, and if formula_2 is an integer partition of formula_3, then the Young subgroup formula_4 indexed by formula_5 is defined by
formula_6
where formula_7 denotes the set of permutations of formula_8 and formula_9 denotes the direct product of groups. Abstractly, formula_4 is isomorphic to the product formula_10. Young subgroups are named for Alfred Young.
When formula_0 is viewed as a reflection group, its Young subgroups are precisely its parabolic subgroups. They may equivalently be defined as the subgroups generated by a subset of the adjacent transpositions formula_11.
In some cases, the name "Young subgroup" is used more generally for the product formula_12, where formula_13 is any set partition of formula_14 (that is, a collection of disjoint, nonempty subsets whose union is formula_14). This more general family of subgroups consists of all the conjugates of those under the previous definition. These subgroups may also be characterized as the subgroups of formula_0 that are generated by a set of transpositions.
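As a quick illustration (a sketch in SymPy, using 0-based labels {0, ..., 4} in place of {1, ..., 5}), the Young subgroup formula_4 indexed by the partition (3, 2) of 5 can be generated by the adjacent transpositions that stay inside each block, and its order is 3!·2! = 12.

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

# Young subgroup for the partition (3, 2) of 5: permute {0, 1, 2} and {3, 4} separately.
gens = [
    Permutation([1, 0, 2, 3, 4]),   # (0 1)
    Permutation([0, 2, 1, 3, 4]),   # (1 2)
    Permutation([0, 1, 2, 4, 3]),   # (3 4)
]
young = PermutationGroup(gens)

print(young.order())                          # 12 == 3! * 2!
print(young.is_subgroup(SymmetricGroup(5)))   # True: it is a subgroup of S_5
```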
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S_n"
},
{
"math_id": 1,
"text": "\\{1, 2, \\ldots, n\\}"
},
{
"math_id": 2,
"text": "\\lambda = (\\lambda_1, \\ldots, \\lambda_\\ell)"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "S_\\lambda"
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "S_\\lambda = S_{\\{1, 2, \\ldots, \\lambda_1\\}} \\times S_{\\{\\lambda_1 + 1, \\lambda_1 + 2, \\ldots, \\lambda_1 + \\lambda_2\\}} \\times \\cdots \\times S_{\\{n - \\lambda_\\ell + 1, n - \\lambda_\\ell + 2, \\ldots, n\\}},"
},
{
"math_id": 7,
"text": "S_{\\{a, b, \\ldots\\}}"
},
{
"math_id": 8,
"text": "\\{a, b, \\ldots\\}"
},
{
"math_id": 9,
"text": "\\times"
},
{
"math_id": 10,
"text": "S_{\\lambda_1} \\times S_{\\lambda_2} \\times \\cdots \\times S_{\\lambda_\\ell}"
},
{
"math_id": 11,
"text": "(1 \\ 2), (2 \\ 3), \\ldots, (n - 1 \\ n)"
},
{
"math_id": 12,
"text": "S_{B_1} \\times \\cdots \\times S_{B_\\ell}"
},
{
"math_id": 13,
"text": "\\{B_1, \\ldots, B_\\ell\\}"
},
{
"math_id": 14,
"text": "\\{1, \\ldots, n\\}"
}
]
| https://en.wikipedia.org/wiki?curid=76019869 |
76020733 | First-order approach | In microeconomics and contract theory, the first-order approach is a simplifying assumption used to solve models with a principal-agent problem. It suggests that, instead of following the usual assumption that the agent will take an action that is utility-maximizing, the modeller use a weaker constraint, and looks only for actions which satisfy the first-order conditions of the agent's problem. This makes the model mathematically more tractable (usually resulting in closed-form solutions), but it may not always give a valid solution to the agent's problem.
History.
Historically, the first-order approach was the main tool used to solve the first formal moral hazard models, such as those of Richard Zeckhauser, Michael Spence, and Joseph Stiglitz. Not long after these models were published, James Mirrlees was the first to point out that the approach was not generally valid, and sometimes imposed even stronger necessary conditions than those of the original problem. Following this realization, he and other economists such as Bengt Holmström, William P. Rogerson and Ian Jewitt gave both sufficient conditions for cases where the first-order approach gives a valid solution to the problem, and also different techniques that could be applied to solve general principal-agent models.
Mathematical formulation.
In mathematical terms, the first-order approach relaxes the more general incentive compatibility constraint in the principal's problem. The principal decides on an action formula_0 and proposes a contract formula_1 to the agent by solving the following program:
formula_2 formula_3
subject to
formula_4 formula_5
formula_6 formula_7
where formula_3 and formula_8 are the principal's and the agent's expected utilities, respectively. Constraint formula_4 is usually called the participation constraint (where formula_9 is the agent's reservation utility), and constraint formula_6 is the incentive compatibility constraint.
Constraint formula_6 states that the action formula_10 that the principal wants the agent to take must be utility-maximizing for the agent – that is, it must be compatible with her incentives. The first-order approach relaxes this constraint with the first-order condition
formula_11 formula_12
Equation formula_11 is oftentimes much simpler and easier to work with than constraint formula_6, which justifies the attractiveness of the first-order approach. Nonetheless, it is only a necessary condition, and not equivalent to the more general incentive compatibility constraint.
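The gap between the two constraints can be seen numerically. In the sketch below, the agent's objective for a fixed contract is a hypothetical, deliberately non-concave function of the action (not taken from any particular model): the first-order condition admits every stationary point, including local minima and inferior local maxima, whereas the incentive compatibility constraint formula_6 selects only the global maximizer.

```python
import numpy as np

# Hypothetical agent objective V(a) for a fixed contract w; non-concave in the action a.
def V(a):
    return np.sin(3 * a) + 0.5 * a

a_grid = np.linspace(0.0, 2.0 * np.pi, 20001)
values = V(a_grid)

# Incentive compatibility: the agent's actual choice is the global maximizer of V.
a_star = a_grid[np.argmax(values)]

# First-order approach: any action with V'(a) = 0 satisfies the relaxed constraint.
dV = np.gradient(values, a_grid)
sign_changes = np.where(np.diff(np.sign(dV)) != 0)[0]
stationary_points = a_grid[sign_changes]

print("incentive-compatible action:", round(float(a_star), 3))
print("actions admitted by the first-order condition:", np.round(stationary_points, 3))
```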
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": " \\underset{w, a} \\operatorname{max} "
},
{
"math_id": 3,
"text": " U(w, a)"
},
{
"math_id": 4,
"text": " (1) "
},
{
"math_id": 5,
"text": " V(w, a) \\geq \\bar{V} "
},
{
"math_id": 6,
"text": " (2) "
},
{
"math_id": 7,
"text": " a \\in \\underset{\\hat{a} \\in A}\\operatorname{argmax} V(w, \\hat{a})"
},
{
"math_id": 8,
"text": " V(w, a)"
},
{
"math_id": 9,
"text": " \\bar{V} "
},
{
"math_id": 10,
"text": " a "
},
{
"math_id": 11,
"text": " (3) "
},
{
"math_id": 12,
"text": " V_a (w, a) = 0 "
}
]
| https://en.wikipedia.org/wiki?curid=76020733 |
76036942 | Blackwell's informativeness theorem | Information theorem
In the mathematical subjects of information theory and decision theory, Blackwell's informativeness theorem is an important result related to the ranking of information structures, or experiments. It states that there is an equivalence between three possible rankings of information structures: one based on "expected utility", one based on "informativeness", and one based on "feasibility". This ranking defines a partial order over information structures known as the Blackwell order, or Blackwell's criterion.
The theorem states equivalent conditions under which any expected utility maximizing decision maker prefers information structure formula_0 over formula_1, for any decision problem. The result was first proven by David Blackwell in 1951, and generalized in 1953.
Setting.
Decision making under uncertainty.
A decision maker faces a set of possible states of the world formula_2 and a set of possible actions formula_3 to take. For every formula_4 and formula_5, her utility is formula_6. She does not know the state of the world formula_7, but has a prior probability formula_8 for every possible state. For every action she takes, her expected utility is
formula_9
Given such a prior formula_10, she chooses an action formula_11 to maximize her expected utility. We denote the maximum attainable utility (the expected value of taking the optimal action) by
formula_12
We refer to the data formula_13 as a "decision making problem".
Information structures.
An "information structure" (or an "experiment") can be seen as way to improve on the utility given by the prior, in the sense of providing more information to the decision maker. Formally, an information structure is a tuple formula_14, where formula_15 is a signal space and formula_16 is a function which gives the conditional probability formula_17 of observing signal formula_18 when the state of the world is formula_19. An information structure can also be thought of as the setting of an experiment.
By observing the signal formula_20, the decision maker can update her beliefs about the state of the world formula_21 via Bayes' rule, giving the posterior probability
formula_22
where formula_23. By observing the signal formula_24 and updating her beliefs with the information structure formula_14, the decision maker's new expected utility value from taking the optimal action is
formula_25
and the "expected value of formula_14" for the decision maker (i.e., the expected value of taking the optimal action under the information structure) is defined as
formula_26
Garbling.
If two information structures formula_27 and formula_28 have the same underlying signal space, we abuse some notation and refer to formula_29 and formula_30 as information structures themselves. We say that formula_30 is a garbling of formula_29 if there exists a stochastic map (for finite signal spaces formula_15, a Markov matrix) formula_31 such that
formula_32
Intuitively, garbling is a way of adding "noise" to an information structure, such that the garbled information structure is considered to be less informative.
Feasibility.
A mixed strategy in the context of a decision making problem is a function formula_33 which gives, for every signal formula_34, a probability distribution formula_35 over possible actions in formula_36. With the information structure formula_27, a strategy formula_37 induces a distribution over actions formula_38 conditional on the state of the world formula_21, given by the mapping
formula_39
That is, formula_38 gives the probability of taking action formula_5 given that the state of the world is formula_40 under information structure formula_27 – notice that this is nothing but a convex combination of the formula_35 with weights formula_41. We say that formula_38 is a "feasible" strategy (or conditional probability over actions) under formula_27.
Given an information structure formula_27, let
formula_42 | formula_43
be the set of all conditional probabilities over actions (i.e., strategies) that are feasible under formula_27.
Given two information structures formula_27 and formula_28, we say that formula_44 "yields a larger set of feasible strategies" than formula_45 if
formula_46
Statement.
Blackwell's theorem states that, given any decision making problem formula_47 and two information structures formula_29 and formula_30, the following are equivalent:
(1) formula_48 (formula_29 gives a weakly higher expected value than formula_30);
(2) there exists a stochastic map formula_49 such that formula_32, i.e., formula_30 is a garbling of formula_29;
(3) formula_50, i.e., formula_29 yields a larger set of feasible strategies than formula_30.
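A small numerical sketch of conditions (1) and (2), with made-up payoffs, prior and experiments: the expected value of an experiment is computed exactly as in the previous section, and composing the experiment with a stochastic map (a garbling) weakly lowers that value.

```python
import numpy as np

# Two states, two actions, uniform prior; action i pays 1 exactly when the state is i.
# All numbers here are made up for illustration.
u = np.array([[1.0, 0.0],
              [0.0, 1.0]])            # u[action, state]
prior = np.array([0.5, 0.5])

def value(sigma):
    """Expected value of the experiment sigma, where sigma[s, w] = P(signal s | state w)."""
    joint = sigma * prior              # joint[s, w] = P(signal s, state w)
    return sum(max(u[a] @ joint[s] for a in range(u.shape[0]))
               for s in range(sigma.shape[0]))

sigma = np.array([[0.9, 0.2],
                  [0.1, 0.8]])         # a fairly informative experiment (columns sum to 1)
gamma = np.array([[0.7, 0.3],
                  [0.3, 0.7]])         # stochastic map: gamma[s2, s1] = P(report s2 | signal s1)
sigma_garbled = gamma @ sigma          # the garbling  sigma' = Gamma sigma

print(round(value(sigma), 3), round(value(sigma_garbled), 3))   # 0.85 vs 0.64
```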
Blackwell order.
Definition.
Blackwell's theorem allows us to construct a partial order over information structures. We say that formula_44 is "more informative in the sense of Blackwell" (or simply "Blackwell more informative") than formula_45 if any (and therefore all) of the conditions of Blackwell's theorem hold, and write formula_51.
The order formula_52 is not a complete (total) order: most pairs of experiments cannot be ranked by it, so mutually comparable experiments form only chains within the partially-ordered set of information structures.
Applications.
The Blackwell order has many applications in decision theory and economics, in particular in contract theory. For example, if two information structures in a principal-agent model can be ranked in the Blackwell sense, then the more informative one is more efficient in the sense of inducing a smaller cost for second-best implementation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma "
},
{
"math_id": 1,
"text": "\\sigma '"
},
{
"math_id": 2,
"text": "\\Omega "
},
{
"math_id": 3,
"text": "A "
},
{
"math_id": 4,
"text": "\\omega \\in \\Omega "
},
{
"math_id": 5,
"text": "a \\in A"
},
{
"math_id": 6,
"text": "u(\\omega, a)"
},
{
"math_id": 7,
"text": "\\omega "
},
{
"math_id": 8,
"text": "p : \\Omega \\rightarrow [0, 1]"
},
{
"math_id": 9,
"text": " \\sum_{\\omega \\in \\Omega} u(a, \\omega) p(\\omega) "
},
{
"math_id": 10,
"text": " p "
},
{
"math_id": 11,
"text": " a \\in A "
},
{
"math_id": 12,
"text": " V(p) = \\underset{a \\in A}\\operatorname{max} \\sum_{\\omega \\in \\Omega} u(a, \\omega) p (\\omega) "
},
{
"math_id": 13,
"text": "(\\Omega, A, u, p)"
},
{
"math_id": 14,
"text": "(S, \\sigma)"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "\\sigma : \\Omega \\rightarrow \\Delta S"
},
{
"math_id": 17,
"text": "\\sigma(s | \\omega)"
},
{
"math_id": 18,
"text": "s \\in S "
},
{
"math_id": 19,
"text": "\\omega"
},
{
"math_id": 20,
"text": "s "
},
{
"math_id": 21,
"text": " \\omega "
},
{
"math_id": 22,
"text": " \\pi (\\omega | s) = \\frac{p (\\omega) \\sigma(s | \\omega)}{\\pi (s)} "
},
{
"math_id": 23,
"text": " \\pi(s) := \\sum_{\\omega' \\in \\Omega} p (\\omega') \\sigma(s | \\omega') "
},
{
"math_id": 24,
"text": " s "
},
{
"math_id": 25,
"text": " V(\\pi, s) = \\underset{a \\in A}\\operatorname{max} \\sum_{\\omega \\in \\Omega} u(a, \\omega) \\pi (\\omega | s) "
},
{
"math_id": 26,
"text": " W(\\sigma) = \\sum_{s \\in S} V(\\pi, s) \\pi (s) "
},
{
"math_id": 27,
"text": " (S, \\sigma) "
},
{
"math_id": 28,
"text": " (S, \\sigma') "
},
{
"math_id": 29,
"text": " \\sigma "
},
{
"math_id": 30,
"text": " \\sigma' "
},
{
"math_id": 31,
"text": "\\Gamma : S \\rightarrow S "
},
{
"math_id": 32,
"text": "\\sigma' = \\Gamma \\sigma "
},
{
"math_id": 33,
"text": "\\alpha : S \\rightarrow \\Delta A"
},
{
"math_id": 34,
"text": "s \\in S"
},
{
"math_id": 35,
"text": " \\alpha(a | s) "
},
{
"math_id": 36,
"text": "A"
},
{
"math_id": 37,
"text": " \\alpha"
},
{
"math_id": 38,
"text": "\\alpha_{\\sigma}( a |\\omega)"
},
{
"math_id": 39,
"text": " \\omega \\mapsto \\alpha_{\\sigma}( a |\\omega)= \\sum_{s \\in S} \\alpha (a | s) \\sigma (s | \\omega) \\in \\Delta A"
},
{
"math_id": 40,
"text": "\\omega \\in \\Omega"
},
{
"math_id": 41,
"text": "\\sigma (s | \\omega)"
},
{
"math_id": 42,
"text": " \\Phi_{\\sigma} = \\{\\alpha_{\\sigma}( a | \\omega) "
},
{
"math_id": 43,
"text": " \\alpha : S \\rightarrow \\Delta A\\}"
},
{
"math_id": 44,
"text": "\\sigma"
},
{
"math_id": 45,
"text": "\\sigma'"
},
{
"math_id": 46,
"text": " \\Phi_{\\sigma'} \\subset \\Phi_{\\sigma}"
},
{
"math_id": 47,
"text": " (\\Omega, A, u, p) "
},
{
"math_id": 48,
"text": " W(\\sigma') \\leq W (\\sigma) "
},
{
"math_id": 49,
"text": "\\Gamma "
},
{
"math_id": 50,
"text": "\\Phi_{\\sigma'} \\subset \\Phi_{\\sigma} "
},
{
"math_id": 51,
"text": "\\sigma' \\preceq_B \\sigma "
},
{
"math_id": 52,
"text": "\\preceq_B "
}
]
| https://en.wikipedia.org/wiki?curid=76036942 |
76040232 | Julian Sahasrabudhe | Canadian mathematician
Julian Sahasrabudhe (born May 8, 1988) is a Canadian mathematician who is an assistant professor of mathematics at the University of Cambridge, in their Department of Pure Mathematics and Mathematical Statistics. His research interests are in extremal and probabilistic combinatorics, Ramsey theory, random polynomials and matrices, and combinatorial number theory.
Life and education.
Sahasrabudhe grew up on Bowen Island, British Columbia, Canada. He studied music at Capilano College and later moved to study at Simon Fraser University where he completed his undergraduate degree in mathematics. After graduating in 2012, Julian received his Ph.D. in 2017 under the supervision of Béla Bollobás at the University of Memphis.
Following his Ph.D., Sahasrabudhe was a Junior Research Fellow at Peterhouse, Cambridge from 2017 to 2021. He currently holds a position as an assistant professor in the Department of Pure Mathematics and Mathematical Statistics (DPMMS) at the University of Cambridge.
Career and research.
Sahasrabudhe's work covers many topics such as Littlewood problems on polynomials, probability and geometry of polynomials, arithmetic Ramsey theory, Erdős covering systems, random matrices and polynomials, etc. In one of his more recent works in Ramsey theory, he published the paper "Exponential Patterns in Arithmetic Ramsey Theory" in 2018, building on an observation made by Alessandro Sisto in 2011. He proved that for every finite colouring of the natural numbers there exists formula_0 such that the triple formula_1 is monochromatic, demonstrating the partition regularity of complex exponential patterns. This work marks a crucial development in understanding the structure of numbers under partitioning.
In 2023, Sahasrabudhe submitted a paper titled "An exponential improvement for diagonal Ramsey" along with Marcelo Campos, Simon Griffiths, and Robert Morris. In this paper, they proved that the Ramsey number
formula_2 for some constant formula_3
This is the first exponential improvement over the upper bound of Erdős and Szekeres, proved in 1935.
Sahasrabudhe has also worked with Marcelo Campos, Matthew Jenssen, and Marcus Michelen on random matrix theory with the paper "The singularity probability of a random symmetric matrix is exponentially small". The paper addresses a long-standing conjecture concerning random symmetric matrices with entries in formula_4. They proved that the probability of such a matrix being singular is exponentially small. The research quantifies this probability as formula_5 where formula_6 is drawn uniformly at random from the set of all formula_7 symmetric matrices with entries in formula_4 and formula_8 is an absolute constant.
In 2020, Sahasrabudhe published a paper named "Flat Littlewood polynomials exist", which he co-authored with Paul Balister, Béla Bollobás, Robert Morris, and Marius Tiba. This work confirms a conjecture of Littlewood by demonstrating the existence of Littlewood polynomials with coefficients formula_9 that are flat, meaning their magnitudes remain bounded within a specific range on the complex unit circle. This achievement not only validates a hypothesis made by Littlewood in 1966 but also contributes significantly to the field of mathematics, particularly in combinatorics and polynomial analysis.
In 2022, the authors worked on Erdős covering systems with the paper "On the Erdős Covering Problem: The Density of the Uncovered Set". They confirmed, with a stronger proof, a conjecture proposed by Michael Filaseta, Kevin Ford, Sergei Konyagin, Carl Pomerance, and Gang Yu, which states that for distinct moduli within the interval formula_10, the density of uncovered integers is bounded below by a constant. Furthermore, the authors establish a condition on the moduli that provides an optimal lower bound for the density of the uncovered set.
Awards and honours.
In August 2021, Julian Sahasrabudhe was awarded the European Prize in Combinatorics for his contribution to applying combinatorial methods to problems in harmonic analysis, combinatorial number theory, Ramsey theory, and probability theory. In particular, Sahasrabudhe proved theorems on the Littlewood problems, on geometry of polynomials (Pemantle's conjecture), and on problems of Erdős, Schinzel, and Selfridge.
In October 2023, Julian Sahasrabudhe was awarded with the Salem Prize for his contribution to harmonic analysis, probability theory, and combinatorics. More specifically, Sahasrabudhe improved the bound on the singularity probability of random symmetric matrices and obtained a new upper bound for diagonal Ramsey numbers.
Sahasrabudhe is a 2024 recipient of the Whitehead Prize, given "for his outstanding contributions to Ramsey theory, his solutions to famous problems in complex analysis and random matrix theory, and his remarkable progress on sphere packings". | [
{
"math_id": 0,
"text": " a, b > 1 "
},
{
"math_id": 1,
"text": " a,b,a^b "
},
{
"math_id": 2,
"text": " R(k) \\leq (4-\\epsilon)^k "
},
{
"math_id": 3,
"text": " \\epsilon > 0 "
},
{
"math_id": 4,
"text": " \\{-1, 1\\} "
},
{
"math_id": 5,
"text": "\\mathbb{P}(\\det(A)=0)\\leq e^{-cn} "
},
{
"math_id": 6,
"text": " A "
},
{
"math_id": 7,
"text": " n \\times n "
},
{
"math_id": 8,
"text": " c "
},
{
"math_id": 9,
"text": " \\pm 1"
},
{
"math_id": 10,
"text": " [n, Cn] "
}
]
| https://en.wikipedia.org/wiki?curid=76040232 |
76044342 | Arboreal Galois representation | Mathematical arithmetic dynamics function
In arithmetic dynamics, an arboreal Galois representation is a continuous group homomorphism between the absolute Galois group of a field and the automorphism group of an infinite, regular, rooted tree.
The study of arboreal Galois representations goes back to the works of Odoni in the 1980s.
Definition.
Let formula_0 be a field and formula_1 be its separable closure. The Galois group formula_2 of the extension formula_3 is called the absolute Galois group of formula_0. This is a profinite group and it is therefore endowed with its natural Krull topology.
For a positive integer formula_4, let formula_5 be the infinite regular rooted tree of degree formula_4. This is an infinite tree where one node is labeled as the root of the tree and every node has exactly formula_4 descendants. An automorphism of formula_5 is a bijection of the set of nodes that preserves vertex-edge connectivity. The group formula_6 of all automorphisms of formula_5 is a profinite group as well, as it can be seen as the inverse limit of the automorphism groups of the finite sub-trees formula_7 formed by all nodes at distance at most formula_8 from the root. The group of automorphisms of formula_7 is isomorphic to formula_9, the iterated wreath product of formula_8 copies of the symmetric group of degree formula_4.
An arboreal Galois representation is a continuous group homomorphism formula_10.
Arboreal Galois representations attached to rational functions.
The most natural source of arboreal Galois representations is the theory of iterations of self-rational functions on the projective line. Let formula_0 be a field and formula_11 a rational function of degree formula_4. For every formula_12 let formula_13 be the formula_8-fold composition of the map formula_14 with itself. Let formula_15 and suppose that for every formula_12 the set formula_16 contains formula_17 elements of the algebraic closure formula_18. Then one can construct an infinite, regular, rooted formula_4-ary tree formula_19 in the following way: the root of the tree is formula_20, and the nodes at distance formula_8 from formula_20 are the elements of formula_16. A node formula_21 at distance formula_8 from formula_20 is connected with an edge to a node formula_22 at distance formula_23 from formula_20 if and only if formula_24.
The absolute Galois group formula_2 acts on formula_19 via automorphisms, and the induced homomorphism formula_26 is continuous, and therefore is called the arboreal Galois representation attached to formula_14 with basepoint formula_20.
Arboreal representations attached to rational functions can be seen as a wide generalization of Galois representations on Tate modules of abelian varieties.
Arboreal Galois representations attached to quadratic polynomials.
The simplest non-trivial case is that of monic quadratic polynomials. Let formula_0 be a field of characteristic not 2, let formula_27 and set the basepoint formula_28. The adjusted post-critical orbit of formula_14 is the sequence defined by formula_29 and formula_30 for every formula_31. A resultant argument shows that formula_32 has formula_17 elements for every formula_8 if and only if formula_33 for every formula_8. In 1992, Stoll proved the following theorem:
Theorem: the arboreal representation formula_34 is surjective if and only if the span of formula_35 in the formula_36-vector space formula_37 is formula_8-dimensional for every formula_12.
The following are examples of polynomials that satisfy the conditions of Stoll's Theorem, and that therefore have surjective arboreal representations.
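The data entering Stoll's theorem is easy to compute in examples. The sketch below (our illustration, not code from the references) generates the adjusted post-critical orbit of formula_27 for a = 0 and b = 1, i.e. f(x) = x^2 + 1, and prints the square-free part of each term, which represents its class in formula_37; Stoll's criterion asks the classes of formula_35 to be linearly independent over formula_36 for every formula_8.

```python
from fractions import Fraction
from sympy import factorint

def adjusted_postcritical_orbit(a, b, n):
    """c_1 = -f(a) and c_k = f^k(a) for k >= 2, where f(x) = (x - a)**2 + b."""
    f = lambda x: (x - a) ** 2 + b
    x = f(a)
    orbit = [-x]                     # c_1 = -f(a)
    for _ in range(n - 1):
        x = f(x)
        orbit.append(x)              # c_k = f^k(a)
    return orbit

def squarefree_part(q):
    """Square-free integer representing the class of a nonzero rational in Q*/(Q*)^2."""
    q = Fraction(q)
    m = q.numerator * q.denominator  # same square class as q
    core = -1 if m < 0 else 1
    for p, e in factorint(abs(m)).items():
        if e % 2:
            core *= p
    return core

# f(x) = x^2 + 1, i.e. a = 0 and b = 1 in the parametrization above.
for k, c in enumerate(adjusted_postcritical_orbit(0, 1, 5), start=1):
    print(k, c, squarefree_part(c))   # square classes -1, 2, 5, 26, 677
```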
Higher degrees and Odoni's conjecture.
In 1985 Odoni formulated the following conjecture.
Conjecture: Let formula_0 be a Hilbertian field of characteristic formula_25, and let formula_8 be a positive integer. Then there exists a polynomial formula_50 of degree formula_8 such that formula_34 is surjective.
Although in this very general form the conjecture has been shown to be false by Dittmann and Kadets, there are several results when formula_0 is a number field. Benedetto and Juul proved Odoni's conjecture for formula_0 a number field and formula_8 even, and also when both formula_51 and formula_8 are odd. Looper independently proved Odoni's conjecture for formula_8 prime and formula_38.
Finite index conjecture.
When formula_0 is a global field and formula_52 is a rational function of degree 2, the image of formula_34 is expected to be "large" in most cases. The following conjecture quantifies the previous statement, and it was formulated by Jones in 2013.
Conjecture Let formula_0 be a global field and formula_52 a rational function of degree 2. Let formula_53 be the critical points of formula_14. Then formula_54 if and only if at least one of the following conditions holds:
(1) The map formula_14 is post-critically finite, namely the orbits of formula_55 are both finite.
(2) There exists formula_12 such that formula_56.
(3) formula_25 is a periodic point for formula_14.
(4) There exist a Möbius transformation formula_57 that fixes formula_25 and is such that formula_58.
Jones' conjecture is considered to be a dynamical analogue of Serre's open image theorem.
One direction of Jones' conjecture is known to be true: if formula_14 satisfies one of the above conditions, then formula_54. In particular, when formula_14 is post-critically finite then formula_59 is a topologically finitely generated closed subgroup of formula_60 for every formula_15.
In the other direction, Juul et al. proved that if the abc conjecture holds for number fields, formula_0 is a number field and formula_50 is a quadratic polynomial, then formula_54 if and only if formula_14 is post-critically finite or not eventually stable. When formula_50 is a quadratic polynomial, conditions (2) and (4) in Jones' conjecture are never satisfied. Moreover, Jones and Levy conjectured that formula_14 is eventually stable if and only if formula_25 is not periodic for formula_14.
Abelian arboreal representations.
In 2020, Andrews and Petsche formulated the following conjecture.
Conjecture Let formula_0 be a number field, let formula_61 be a polynomial of degree formula_62 and let formula_15. Then formula_59 is abelian if and only if there exists a root of unity formula_63 such that the pair formula_64 is conjugate over the maximal abelian extension formula_65 to formula_66 or to formula_67, where formula_68 is the Chebyshev polynomial of the first kind of degree formula_4.
Two pairs formula_69, where formula_70 and formula_71, are conjugate over a field extension formula_72 if there exists a Möbius transformation formula_73 such that formula_74 and formula_75. Conjugacy is an equivalence relation. The Chebyshev polynomials the conjecture refers to are a normalized version, conjugated by the Möbius transformation formula_76 to make them monic.
It has been proven that Andrews and Petsche's conjecture holds true when formula_38.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "K^{sep}"
},
{
"math_id": 2,
"text": "G_K"
},
{
"math_id": 3,
"text": "K^{sep}/K"
},
{
"math_id": 4,
"text": "d"
},
{
"math_id": 5,
"text": "T^d"
},
{
"math_id": 6,
"text": "Aut(T^d)"
},
{
"math_id": 7,
"text": "T^d_n"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "S_d\\wr S_d\\wr \\ldots \\wr S_d"
},
{
"math_id": 10,
"text": "G_K \\to Aut(T^d)"
},
{
"math_id": 11,
"text": "f \\colon \\mathbb P^1_K\\to \\mathbb P^1_K"
},
{
"math_id": 12,
"text": "n\\geq 1"
},
{
"math_id": 13,
"text": "f^n=f\\circ f\\circ \\ldots \\circ f"
},
{
"math_id": 14,
"text": "f"
},
{
"math_id": 15,
"text": "\\alpha\\in K"
},
{
"math_id": 16,
"text": "(f^n)^{-1}(\\alpha)"
},
{
"math_id": 17,
"text": "d^n"
},
{
"math_id": 18,
"text": "\\overline{K}"
},
{
"math_id": 19,
"text": "T(f)"
},
{
"math_id": 20,
"text": "\\alpha"
},
{
"math_id": 21,
"text": "\\beta"
},
{
"math_id": 22,
"text": "\\gamma"
},
{
"math_id": 23,
"text": "n+1"
},
{
"math_id": 24,
"text": "f(\\beta)=\\gamma"
},
{
"math_id": 25,
"text": "0"
},
{
"math_id": 26,
"text": "\\rho_{f,\\alpha}\\colon G_K\\to Aut(T(f))"
},
{
"math_id": 27,
"text": "f=(x-a)^2+b\\in K[x]"
},
{
"math_id": 28,
"text": "\\alpha=0"
},
{
"math_id": 29,
"text": "c_1=-f(a)"
},
{
"math_id": 30,
"text": "c_n= f^n(a)"
},
{
"math_id": 31,
"text": "n\\geq 2"
},
{
"math_id": 32,
"text": "(f^n)^{-1}(0)"
},
{
"math_id": 33,
"text": "c_n\\neq 0"
},
{
"math_id": 34,
"text": "\\rho_{f,0}"
},
{
"math_id": 35,
"text": "\\{c_1,\\ldots,c_n\\}"
},
{
"math_id": 36,
"text": "\\mathbb F_2"
},
{
"math_id": 37,
"text": "K^*/(K^*)^2"
},
{
"math_id": 38,
"text": "K=\\mathbb Q"
},
{
"math_id": 39,
"text": "f=x^2+a"
},
{
"math_id": 40,
"text": "a\\in \\mathbb Z"
},
{
"math_id": 41,
"text": "a>0"
},
{
"math_id": 42,
"text": "a\\equiv 1,2\\bmod 4"
},
{
"math_id": 43,
"text": "a<0"
},
{
"math_id": 44,
"text": "a\\equiv 0\\bmod 4"
},
{
"math_id": 45,
"text": "-a"
},
{
"math_id": 46,
"text": "k"
},
{
"math_id": 47,
"text": "2"
},
{
"math_id": 48,
"text": "K=k(t)"
},
{
"math_id": 49,
"text": "f=x^2+t\\in K[x]"
},
{
"math_id": 50,
"text": "f\\in K[x]"
},
{
"math_id": 51,
"text": "[K:\\mathbb Q]"
},
{
"math_id": 52,
"text": "f\\in K(x)"
},
{
"math_id": 53,
"text": "\\gamma_1,\\gamma_2\\in \\mathbb P^1_K"
},
{
"math_id": 54,
"text": "[Aut(T(f)):Im(\\rho_{f,0})]=\\infty"
},
{
"math_id": 55,
"text": "\\gamma_1,\\gamma_2"
},
{
"math_id": 56,
"text": "f^n(\\gamma_1)=f^n(\\gamma_2)"
},
{
"math_id": 57,
"text": "m=\\frac{ax+b}{cx+d}\\in PGL_2(K)"
},
{
"math_id": 58,
"text": "m\\circ f \\circ m^{-1}=f"
},
{
"math_id": 59,
"text": "Im(\\rho_{f,\\alpha})"
},
{
"math_id": 60,
"text": "Aut(T(f))"
},
{
"math_id": 61,
"text": "f \\in K[x]"
},
{
"math_id": 62,
"text": "d\\ge 2"
},
{
"math_id": 63,
"text": "\\zeta"
},
{
"math_id": 64,
"text": "(f,\\alpha)"
},
{
"math_id": 65,
"text": "K^{ab}"
},
{
"math_id": 66,
"text": "(x^d,\\zeta)"
},
{
"math_id": 67,
"text": "(\\pm T_d,\\zeta+\\zeta^{-1})"
},
{
"math_id": 68,
"text": "T_d"
},
{
"math_id": 69,
"text": "(f,\\alpha),(g,\\beta)"
},
{
"math_id": 70,
"text": "f,g\\in K(x)"
},
{
"math_id": 71,
"text": "\\alpha,\\beta\\in K"
},
{
"math_id": 72,
"text": "L/K"
},
{
"math_id": 73,
"text": "m=\\frac{ax+b}{cx+d}\\in PGL_2(L)"
},
{
"math_id": 74,
"text": "m\\circ f \\circ m^{-1}=g"
},
{
"math_id": 75,
"text": "m(\\alpha)=\\beta"
},
{
"math_id": 76,
"text": "2x"
}
]
| https://en.wikipedia.org/wiki?curid=76044342 |
76047368 | Point-surjective morphism | In category theory, a point-surjective morphism is a morphism formula_0 that "behaves" like surjections on the category of sets.
The notion of point-surjectivity is an important one in Lawvere's fixed-point theorem, and it was first introduced by William Lawvere in his original article.
Definition.
Point-surjectivity.
In a category formula_1 with a terminal object formula_2, a morphism formula_0 is said to be point-surjective if for every morphism formula_3, there exists a morphism formula_4 such that formula_5.
Weak point-surjectivity.
If formula_6 is an exponential object of the form formula_7 for some objects formula_8 in formula_1, a weaker (but technically more cumbersome) notion of point-surjectivity can be defined.
A morphism formula_9 is said to be "weakly" point-surjective if for every morphism formula_10 there exists a morphism formula_4 such that, for every morphism formula_11, we have
formula_12
where formula_13 denotes the product of two morphisms (formula_14 and formula_15) and formula_16 is the evaluation map in the category of morphisms of formula_1.
Equivalently, one could think of the morphism formula_17 as the transpose of some other morphism formula_18. Then the isomorphism between the hom-sets formula_19 allows us to say that formula_20 is weakly point-surjective if and only if formula_21 is weakly point-surjective.
Relation to surjective functions in Set.
Set elements as morphisms from terminal objects.
In the category of sets, morphisms are functions and the terminal objects are singletons. Therefore, a morphism formula_11 is a function from a singleton formula_22 to the set formula_23: since a function must specify a "unique" element in the codomain for every element in the domain, we have that formula_24 is one specific element of formula_23. Therefore, each morphism formula_11 can be thought of as a specific element of formula_25 itself.
For this reason, morphisms formula_11 can serve as a "generalization" of elements of a set, and are sometimes called global elements.
Surjective functions and point-surjectivity.
With that correspondence, the definition of point-surjective morphisms closely resembles that of surjective functions. A function (morphism) formula_26 is said to be surjective (point-surjective) if, for every element formula_27 (for every morphism formula_3), there exists an element formula_28 (there exists a morphism formula_29) such that formula_30 ( formula_31).
The notion of weak point-surjectivity also resembles this correspondence, if only one notices that the exponential object formula_7 in the category of sets is nothing but the set of all functions formula_32.
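In the category of sets this correspondence can be checked by brute force, since a global element of a finite set is just a choice of one of its elements. The sketch below is our own encoding (dictionaries standing in for functions and for global elements), not a standard API, and simply confirms that point-surjectivity coincides with ordinary surjectivity.

```python
# Encode a function X -> Y as a dict, and a global element 1 -> X as a dict {"*": x}.
def global_elements(X):
    return [{"*": x} for x in X]

def compose(f, x):
    """Composite f . x : 1 -> Y of a function f: X -> Y with a global element x: 1 -> X."""
    return {s: f[v] for s, v in x.items()}

def is_point_surjective(f, X, Y):
    return all(any(compose(f, x) == y for x in global_elements(X))
               for y in global_elements(Y))

X, Y = {1, 2, 3}, {"a", "b"}
print(is_point_surjective({1: "a", 2: "b", 3: "a"}, X, Y))   # True: a surjection
print(is_point_surjective({1: "a", 2: "a", 3: "a"}, X, Y))   # False: "b" is never hit
```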
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f : X \\rightarrow Y"
},
{
"math_id": 1,
"text": "\\mathbf{C}"
},
{
"math_id": 2,
"text": "1"
},
{
"math_id": 3,
"text": "y : 1 \\rightarrow Y"
},
{
"math_id": 4,
"text": "x : 1 \\rightarrow X"
},
{
"math_id": 5,
"text": "f \\circ x = y"
},
{
"math_id": 6,
"text": "Y"
},
{
"math_id": 7,
"text": "B^A"
},
{
"math_id": 8,
"text": "A, B"
},
{
"math_id": 9,
"text": "f : X \\rightarrow B^A"
},
{
"math_id": 10,
"text": "g : A \\rightarrow B"
},
{
"math_id": 11,
"text": "a : 1 \\rightarrow A"
},
{
"math_id": 12,
"text": " \\epsilon \\circ \\langle f \\circ x, a \\rangle = g \\circ a"
},
{
"math_id": 13,
"text": " \\langle -, - \\rangle : A \\rightarrow B \\times C"
},
{
"math_id": 14,
"text": "A \\rightarrow B"
},
{
"math_id": 15,
"text": "A \\rightarrow C"
},
{
"math_id": 16,
"text": " \\epsilon : B^A \\times A \\rightarrow B "
},
{
"math_id": 17,
"text": "f: X \\rightarrow B^A"
},
{
"math_id": 18,
"text": "\\tilde{f}: X \\times A \\rightarrow B"
},
{
"math_id": 19,
"text": "\\mathrm{Hom}(X\\times A,B) \\cong \\mathrm{Hom}(X,B^A)"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "\\tilde{f}"
},
{
"math_id": 22,
"text": "\\{x\\}"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "a(x) \\in A"
},
{
"math_id": 25,
"text": " A"
},
{
"math_id": 26,
"text": "f : X \\rightarrow Y "
},
{
"math_id": 27,
"text": "y \\in Y"
},
{
"math_id": 28,
"text": "x \\in X"
},
{
"math_id": 29,
"text": "x: 1 \\rightarrow X"
},
{
"math_id": 30,
"text": "f(x) = y "
},
{
"math_id": 31,
"text": " f \\circ x = y"
},
{
"math_id": 32,
"text": "f : A \\rightarrow B"
}
]
| https://en.wikipedia.org/wiki?curid=76047368 |
7604764 | Beale number | In mechanical engineering, the Beale number is a parameter that characterizes the performance of Stirling engines. It is often used to estimate the power output of a Stirling engine design. For engines operating with a high temperature differential, typical values for the Beale number are in the range 0.11−0.15; where a larger number indicates higher performance.
Definition.
The Beale number can be defined in terms of a Stirling engine's operating parameters:
formula_0
where:
"Bn" is the Beale number (dimensionless)
"Wo" is the power output of the engine (in watts)
"P" is the mean average gas pressure (in pascals)
"V" is the swept volume of the expansion space (in cubic metres)
"F" is the engine cycle frequency (in hertz)
Estimating Stirling power.
To estimate the power output of an engine, nominal values are assumed for the Beale number, pressure, swept volume and frequency, then the power is calculated as the product of these parameters, as follows:
formula_1
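A direct transcription of this estimate, with made-up engine parameters in SI units (so the result comes out in watts):

```python
def stirling_power_estimate(beale_number, mean_pressure_pa, swept_volume_m3, frequency_hz):
    """Wo = Bn * P * V * F; with SI inputs the result is in watts."""
    return beale_number * mean_pressure_pa * swept_volume_m3 * frequency_hz

# Hypothetical engine: Bn = 0.13, 5 MPa mean pressure, 100 cm^3 swept volume, 25 Hz.
print(stirling_power_estimate(0.13, 5.0e6, 100e-6, 25.0))   # 1625.0 W
```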
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "B_n = \\frac{Wo}{PVF}"
},
{
"math_id": 1,
"text": "Wo = B_n PVF"
}
]
| https://en.wikipedia.org/wiki?curid=7604764 |
7604906 | Mantle convection | Gradual movement of the planet's mantle
Mantle convection is the very slow creep of Earth's solid silicate mantle as convection currents carry heat from the interior to the planet's surface. Mantle convection causes tectonic plates to move around the Earth's surface.
The Earth's lithosphere rides atop the asthenosphere, and the two form the components of the upper mantle. The lithosphere is divided into tectonic plates that are continuously being created or consumed at plate boundaries. Accretion occurs as mantle is added to the growing edges of a plate, associated with seafloor spreading. Upwelling beneath the spreading centers is a shallow, rising component of mantle convection and in most cases not directly linked to the global mantle upwelling. The hot material added at spreading centers cools down by conduction and convection of heat as it moves away from the spreading centers. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction usually at an oceanic trench. Subduction is the descending component of mantle convection.
This subducted material sinks through the Earth's interior. Some subducted material appears to reach the lower mantle, while in other regions this material is impeded from sinking further, possibly due to a phase transition from spinel to silicate perovskite and magnesiowustite, an endothermic reaction.
The subducted oceanic crust triggers volcanism, although the basic mechanisms are varied. Volcanism may occur due to processes that add buoyancy to partially melted mantle, which would cause upward flow of the partial melt as it decreases in density. Secondary convection may cause surface volcanism as a consequence of intraplate extension and mantle plumes. In 1993 it was suggested that inhomogeneities in D" layer have some impact on mantle convection.
Types of convection.
<templatestyles src="Stack/styles.css"/>
During the late 20th century, there was significant debate within the geophysics community as to whether convection is likely to be "layered" or "whole". Although elements of this debate still continue, results from seismic tomography, numerical simulations of mantle convection and examination of Earth's gravitational field are all beginning to suggest the existence of whole mantle convection, at least at the present time. In this model, cold subducting oceanic lithosphere descends all the way from the surface to the core–mantle boundary (CMB), and hot plumes rise from the CMB all the way to the surface. This model is strongly based on the results of global seismic tomography models, which typically show slab and plume-like anomalies crossing the mantle transition zone.
Although it is accepted that subducting slabs cross the mantle transition zone and descend into the lower mantle, debate about the existence and continuity of plumes persists, with important implications for the style of mantle convection. This debate is linked to the controversy regarding whether intraplate volcanism is caused by shallow, upper mantle processes or by plumes from the lower mantle.
Many geochemistry studies have argued that the lavas erupted in intraplate areas are different in composition from shallow-derived mid-ocean ridge basalts. Specifically, they typically have elevated helium-3 : helium-4 ratios. Being a primordial nuclide, helium-3 is not naturally produced on Earth. It also quickly escapes from Earth's atmosphere when erupted. The elevated He-3:He-4 ratio of ocean island basalts suggest that they must be sourced from a part of the Earth that has not previously been melted and reprocessed in the same way as mid-ocean ridge basalts have been. This has been interpreted as their originating from a different less well-mixed region, suggested to be the lower mantle. Others, however, have pointed out that geochemical differences could indicate the inclusion of a small component of near-surface material from the lithosphere.
Planform and vigour of convection.
On Earth, the Rayleigh number for convection within Earth's mantle is estimated to be of order 107, which indicates vigorous convection. This value corresponds to whole mantle convection (i.e. convection extending from the Earth's surface to the border with the core). On a global scale, surface expression of this convection is the tectonic plate motions and therefore has speeds of a few cm per year. Speeds can be faster for small-scale convection occurring in low viscosity regions beneath the lithosphere, and slower in the lowermost mantle where viscosities are larger. A single shallow convection cycle takes on the order of 50 million years, though deeper convection can be closer to 200 million years.
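The order of magnitude quoted above can be reproduced from the usual definition of the Rayleigh number for a fluid layer heated from below, Ra = ρgαΔTd³/(κη). The parameter values in the sketch below are rough, assumed figures chosen only to illustrate the calculation, not results from any particular study.

```python
# Rough, assumed mantle parameters (illustrative only):
rho   = 4.0e3    # density, kg/m^3
g     = 10.0     # gravitational acceleration, m/s^2
alpha = 2.0e-5   # thermal expansion coefficient, 1/K
dT    = 3.0e3    # temperature difference driving convection, K
d     = 2.9e6    # depth of the convecting layer (whole mantle), m
kappa = 1.0e-6   # thermal diffusivity, m^2/s
eta   = 3.0e21   # dynamic viscosity, Pa s

Ra = rho * g * alpha * dT * d**3 / (kappa * eta)
print(f"Ra ~ {Ra:.1e}")   # about 2e7, i.e. of order 10^7: vigorous convection
```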
Currently, whole mantle convection is thought to include broad-scale downwelling beneath the Americas and the western Pacific, both regions with a long history of subduction, and upwelling flow beneath the central Pacific and Africa, both of which exhibit dynamic topography consistent with upwelling. This broad-scale pattern of flow is also consistent with the tectonic plate motions, which are the surface expression of convection in the Earth's mantle and currently indicate convergence toward the western Pacific and the Americas, and divergence away from the central Pacific and Africa. The persistence of net tectonic divergence away from Africa and the Pacific for the past 250 myr indicates the long-term stability of this general mantle flow pattern and is consistent with other studies that suggest long-term stability of the large low-shear-velocity provinces of the lowermost mantle that form the base of these upwellings.
Creep in the mantle.
Due to the varying temperatures and pressures between the lower and upper mantle, a variety of creep processes can occur, with dislocation creep dominating in the lower mantle and diffusional creep occasionally dominating in the upper mantle. However, there is a large transition region in creep processes between the upper and lower mantle, and even within each section creep properties can change strongly with location and thus temperature and pressure.
Since the upper mantle is primarily composed of olivine ((Mg,Fe)2SiO4), the rheological characteristics of the upper mantle are largely those of olivine. The strength of olivine is proportional to its melting temperature, and is also very sensitive to water and silica content. The solidus depression by impurities, primarily Ca, Al, and Na, and pressure affects creep behavior and thus contributes to the change in creep mechanisms with location. While creep behavior is generally plotted as homologous temperature versus stress, in the case of the mantle it is often more useful to look at the pressure dependence of stress. Though stress is simply force over area, defining the area is difficult in geology. Equation 1 demonstrates the pressure dependence of stress. Since it is very difficult to simulate the high pressures in the mantle (1MPa at 300–400 km), the low pressure laboratory data is usually extrapolated to high pressures by applying creep concepts from metallurgy.
formula_0
Most of the mantle has homologous temperatures of 0.65–0.75 and experiences strain rates of formula_1 per second. Stresses in the mantle are dependent on density, gravity, thermal expansion coefficients, temperature differences driving convection, and the distance over which convection occurs—all of which give stresses around a fraction of 3-30MPa.
Due to the large grain sizes (at low stresses as high as several mm), it is unlikely that Nabarro-Herring (NH) creep dominates; dislocation creep tends to dominate instead. 14 MPa is the stress below which diffusional creep dominates and above which power law creep dominates at 0.5Tm of olivine. Thus, even for relatively low temperatures, the stress diffusional creep would operate at is too low for realistic conditions. Though the power law creep rate increases with increasing water content due to weakening (reducing activation energy of diffusion and thus increasing the NH creep rate) NH is generally still not large enough to dominate. Nevertheless, diffusional creep can dominate in very cold or deep parts of the upper mantle.
Additional deformation in the mantle can be attributed to transformation enhanced ductility. Below 400 km, the olivine undergoes a pressure-induced phase transformation, which can cause more deformation due to the increased ductility. Further evidence for the dominance of power law creep comes from preferred lattice orientations as a result of deformation. Under dislocation creep, crystal structures reorient into lower stress orientations. This does not happen under diffusional creep, thus observation of preferred orientations in samples lends credence to the dominance of dislocation creep.
Mantle convection in other celestial bodies.
A similar process of slow convection probably occurs (or occurred) in the interiors of other planets (e.g., Venus, Mars) and some satellites (e.g., Io, Europa, Enceladus).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left ( \\frac{\\partial \\ln\\sigma}{\\partial P}\\right)_{T,\\dot{\\epsilon}} = \\left ( \\frac{1}{TT_m} \\right ) \\times \\left ( \\frac{\\partial \\ln\\sigma}{\\partial (1/T)}\\right)_{P,\\dot{\n\\epsilon}} \\times \\frac{dT_m}{dP}"
},
{
"math_id": 1,
"text": "10^{-14} - 10^{-16}"
}
]
| https://en.wikipedia.org/wiki?curid=7604906 |
76055488 | Neutral atom quantum computer | A neutral atom quantum computer is a modality of quantum computers built out of Rydberg atoms; this modality has many commonalities with trapped-ion quantum computers. As of December 2023, the concept has been used to demonstrate a 48 logical qubit processor.
To perform computation, the atoms are first trapped in a magneto-optical trap. Qubits are then encoded in the energy levels of the atoms. Initialization and operation of the computer are performed via the application of lasers on the qubits. For example, the laser can accomplish arbitrary single qubit gates and a formula_0 gate for universal quantum computation.
The formula_0 gate is carried out by leveraging the Rydberg blockade, which leads to strong interactions when the qubits are physically close to each other. To perform a formula_0 gate, a Rydberg formula_1 pulse is applied to the control qubit, a formula_2 pulse to the target qubit, and then another formula_1 pulse to the control qubit. Measurement is performed at the end of the computation with a camera that generates an image of the outcome by measuring the fluorescence of the atoms.
Architecture.
Neutral atom quantum computing makes use of several technological advancements in the fields of laser cooling, magneto-optical trapping and optical tweezers. In one example of the architecture, an array of atoms is loaded into optical tweezers and laser-cooled to micro-kelvin temperatures. In each of these atoms, two levels of the hyperfine ground subspace are isolated. The qubits are prepared in some initial state using optical pumping. Logic gates are performed using optical or microwave frequency fields and the measurements are done using resonance fluorescence. Most of these architectures are based on rubidium, cesium, ytterbium and strontium atoms.
Single Qubit Gates.
Global single qubit gates on all the atoms can be done either by applying a microwave field for qubits encoded in the hyperfine manifold, such as in Rb and Cs, or by applying an RF magnetic field for qubits encoded in the nuclear spin, such as in Yb and Sr. Focused laser beams can be used to perform single-site, single-qubit rotations using a lambda-type three-level Raman scheme (see figure). In this scheme, the rotation between the qubit states is mediated by an intermediate excited state. Single-qubit gate fidelities have been shown to be as high as .999 in state-of-the-art experiments.
Entangling Gates.
To do universal quantum computation, we need at least one two-qubit entangling gate. All the gates discussed in this article are equivalent to a controlled NOT gate up to single-qubit rotations. Early proposals for gates included ones that depended on inter-atomic forces. These forces are weak and thus the gates were slow. The first fast gate was proposed by Jaksch et al. and made use of the principle of the Rydberg blockade (discussed below). Since then, most gates that have been proposed use this principle. We call all such gates Rydberg mediated gates and discuss them below.
Rydberg mediated gates.
Atoms that have been excited to a very large principal quantum number formula_3 are known as Rydberg atoms. These highly excited atoms have several desirable properties, including long decay lifetimes and strongly enhanced couplings to electromagnetic fields.
The basic principle for Rydberg mediated gates is called the Rydberg blockade. Consider two neutral atoms in their respective ground states. When they are close to each other, their interaction potential is dominated by the van der Waals force formula_4, where formula_5 is the Bohr magneton and formula_6 is the distance between the atoms. This interaction is very weak, around formula_7 Hz for formula_8. When one of the atoms is put into a Rydberg state (a state with very high principal quantum number), the interaction between the two atoms is dominated by the second-order dipole-dipole interaction, which is also weak. When both of the atoms are excited to a Rydberg state, the resonant dipole-dipole interaction becomes formula_9, where formula_10 is the Bohr radius. This interaction is around formula_11 MHz at formula_12, around twelve orders of magnitude larger. This interaction potential induces a blockade, wherein, if one atom is excited to a Rydberg state, the other nearby atoms cannot be excited to a Rydberg state because the two-atom Rydberg state is far detuned. This phenomenon is called the Rydberg blockade. Rydberg mediated gates make use of this blockade as a control mechanism to implement two-qubit controlled gates.
Let us consider the physics induced by this blockade. Suppose we are considering two isolated neutral atoms in a magneto-optical trap. Ignoring the coupling of the hyperfine levels that make up the qubit and the motional degrees of freedom, the Hamiltonian of this system can be written as:
formula_13
where formula_16 is the Hamiltonian of the i-th atom,
formula_17 is the Rabi frequency of the coupling between the Rydberg state and the formula_18 state, and formula_19 is the detuning (see the figure to the right for the level diagram). When formula_20, we are in the so-called Rydberg blockade regime. In this regime, the formula_21 state is highly detuned from the rest of the system and is thus effectively decoupled. For the rest of this article, we consider only the Rydberg blockade regime.
The physics of this Hamiltonian can be divided into several subspaces depending on the initial state. The formula_22 state is decoupled and does not evolve. Suppose only the i-th atom is in the formula_23 state (i.e. the combined state is formula_24 or formula_25); then the Hamiltonian is given by formula_26. This is the standard two-level Rabi Hamiltonian. It characterizes the "light shift" in a two-level system and has eigenvalues formula_27.
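As a quick numerical check of this light shift, one can diagonalize the two-level Hamiltonian directly. The sketch below uses arbitrary values of the Rabi frequency and detuning; note that sign conventions for the detuning differ between references.

```python
import numpy as np

# 2x2 Hamiltonian of one driven atom in the {|1>, |r>} subspace,
# H = (Omega/2)(|1><r| + |r><1|) - Delta |r><r|  (hbar = 1).
# The values of Omega and Delta below are arbitrary illustrative choices.
Omega, Delta = 2*np.pi*1.0, 2*np.pi*0.4
H = np.array([[0.0,      Omega/2],
              [Omega/2, -Delta  ]])

numerical = np.sort(np.linalg.eigvalsh(H))
analytic = np.sort(0.5*(-Delta + np.array([-1.0, 1.0])*np.sqrt(Omega**2 + Delta**2)))
print("numerical eigenvalues:", np.round(numerical, 4))
print("analytic  eigenvalues:", np.round(analytic, 4))
# These agree with the quoted light shift E_LS^(1) = (1/2)(Delta +/- sqrt(Omega^2 + Delta^2))
# up to the sign convention chosen for the detuning Delta.
```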
If both atoms are in the qubit state formula_14, i.e. the combined state formula_28, the effective system evolves in the subspace formula_29. It is convenient to rewrite the Hamiltonian in terms of bright formula_30 and dark formula_31 basis states, along with formula_32. In this basis, the Hamiltonian is given by
formula_33.
Note that the dark state is decoupled from the bright state and the formula_34 state. Thus we can ignore it, and the effective evolution reduces to a two-level system consisting of the bright state and the formula_34 state. In this basis, the dressed eigenvalues and eigenvectors of the Hamiltonian are given by:
formula_35
formula_36
formula_37,
where formula_38 is a mixing angle that depends on the Rabi frequency and the detuning.
We will make use of these considerations in the gates below. The level diagrams of these subspaces have been shown in the figure above.
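The decoupling of the dark state and the two-atom light shift can likewise be verified numerically. The sketch below builds the 3x3 Hamiltonian in the {|1r>, |r1>, |11>} basis (perfect blockade assumed, arbitrary parameter values):

```python
import numpy as np

# The |11> sector in the blockade regime: basis ordering {|1r>, |r1>, |11>};
# the doubly excited |rr> state is dropped.  Parameter values are arbitrary.
Omega, Delta = 2*np.pi*1.0, 2*np.pi*0.4
H = np.array([[-Delta,    0.0,      Omega/2],
              [ 0.0,     -Delta,    Omega/2],
              [ Omega/2,  Omega/2,  0.0    ]])

dark = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # (|1r> - |r1>)/sqrt(2)
print("H @ dark      :", np.round(H @ dark, 4))
print("-Delta * dark :", np.round(-Delta * dark, 4))   # same vector: dark state decouples

vals = np.sort(np.linalg.eigvalsh(H))
shift = 0.5*(-Delta + np.array([-1.0, 1.0])*np.sqrt(2*Omega**2 + Delta**2))
analytic = np.sort(np.append(shift, -Delta))
print("numerical eigenvalues:", np.round(vals, 4))
print("analytic  eigenvalues:", np.round(analytic, 4))
# The dark state sits at energy -Delta, and the remaining pair reproduces the
# two-atom light shift quoted above (again up to the detuning sign convention).
```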
Jaksch Gate.
We can use the Rydberg blockade to implement a controlled-phase gate by applying standard Rabi pulses between the formula_14 and formula_15 levels. Consider the following protocol: first, a formula_40 pulse between formula_14 and formula_15 is applied to the control atom; then a formula_41 pulse is applied to the target atom; finally, another formula_40 pulse is applied to the control atom.
The figure on the right shows what this pulse sequence does. When the state is formula_42, both atoms are uncoupled from their Rydberg states and so the pulses do nothing. When either of the atoms is in the formula_43 state, the other one picks up a formula_44 phase due to the formula_45 pulse. When the state is formula_28, the second atom is shifted out of resonance with its Rydberg state by the blockade and thus does not pick up any phase, but the first one does. The truth table of this gate is given below. This is equivalent to a controlled-Z gate up to local rotations on the hyperfine levels.
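A minimal numerical simulation of this pulse sequence reproduces the truth table. Each atom is treated as a three-level system |0>, |1>, |r>, and the blockade is modelled with a large but finite Rydberg-Rydberg shift; all parameter values are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# Toy simulation of the Jaksch pulse sequence.  Because the Rydberg-Rydberg
# shift V is finite, the blockade (and hence the gate) is only approximate.
d = 3
I3 = np.eye(d)
ket = lambda i: np.eye(d)[:, [i]]
X1r = ket(1) @ ket(2).conj().T + ket(2) @ ket(1).conj().T   # |1><r| + |r><1|

Omega = 2*np.pi            # Rabi frequency (arbitrary units)
V = 1000*Omega             # Rydberg-Rydberg interaction, >> Omega
rr = np.kron(ket(2), ket(2))

def pulse(atom, area):
    """Resonant pulse of the given area on one atom, with the blockade term."""
    H = 0.5*Omega*(np.kron(X1r, I3) if atom == 0 else np.kron(I3, X1r))
    H = H + V*(rr @ rr.conj().T)
    return expm(-1j*H*(area/Omega))

# pi on the control, 2*pi on the target, pi on the control again
# (the rightmost factor acts first).
U = pulse(0, np.pi) @ pulse(1, 2*np.pi) @ pulse(0, np.pi)

for a in (0, 1):
    for b in (0, 1):
        psi = np.kron(ket(a), ket(b))
        amp = (psi.conj().T @ U @ psi).item()
        print(f"|{a}{b}>  ->  phase {np.angle(amp)/np.pi:+.3f} pi,"
              f"  |amplitude| = {abs(amp):.4f}")
# Expected: no phase on |00>, a factor -1 (phase pi) on |01>, |10> and |11>,
# i.e. a controlled-Z gate up to single-qubit rotations.
```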
Adiabatic Gate.
The adiabatic gate was introduced as an alternative to the Jaksch gate. It is global and symmetric and thus does not require locally focused lasers. Moreover, it avoids the problem of spurious phase accumulation while the atom is in the Rydberg state. In the adiabatic gate, instead of applying fast pulses, the atoms are dressed with an adiabatic pulse sequence that takes each atom on a trajectory around the Bloch sphere and back. The levels pick up a phase on this trip due to the so-called "light shift" induced by the lasers. The shapes of the pulses can be chosen to control this phase.
If both atoms are in the formula_46 state, nothing happens so formula_47. If one of them is in the formula_39 state, the other atom picks up a phase due to light shift: formula_48 and similarly formula_49 with:
formula_50.
When both of the atoms are in the formula_14 state, they pick up a phase due to the two-atom light shift, as seen from the eigenvalues of the Hamiltonian above: formula_51 with
formula_52.
Note that this light shift is not equal to twice the single-atom light shift. The single-atom light shifts are then cancelled by a global pulse that implements formula_53. The truth table for this gate is given to the right. This protocol leaves a total phase of formula_54 on the formula_55 state. The pulses can be chosen so that this phase equals formula_40, making the operation a controlled-Z gate. An extension of this gate has been introduced to make it robust against errors.
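The phase integrals above are easy to evaluate numerically for a given pulse shape. The sketch below uses a Gaussian Rabi envelope and a constant detuning purely as placeholders (they are not the published pulse shapes) to show how phi_1, phi_2 and the residual controlled phase are obtained.

```python
import numpy as np

# Numerical evaluation of the light-shift phase integrals for an illustrative
# pulse: Gaussian Rabi envelope, constant detuning.  These shapes are
# placeholders, not the ones used in the published adiabatic gate.
trapezoid = getattr(np, "trapezoid", None) or np.trapz   # numpy 2.x / 1.x

t = np.linspace(-4.0, 4.0, 4001)              # time (units of 1/Omega_max)
Omega = 2*np.pi*np.exp(-t**2/2.0)             # Rabi envelope
Delta = 2*np.pi*0.7*np.ones_like(t)           # detuning, held constant here

E1 = 0.5*(Delta - np.sqrt(Omega**2 + Delta**2))     # one-atom light shift
E2 = 0.5*(Delta - np.sqrt(2*Omega**2 + Delta**2))   # two-atom light shift

phi1 = trapezoid(E1, t)
phi2 = trapezoid(E2, t)
print(f"phi1 = {phi1:+.3f} rad, phi2 = {phi2:+.3f} rad")
print(f"controlled phase phi2 - 2*phi1 = {(phi2 - 2*phi1)/np.pi:+.3f} pi")
# In the actual gate the pulse shapes are tuned so that this controlled phase
# equals pi.
```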
Levine-Pichler Gate.
The adiabatic gate is global but slow (due to the adiabatic condition). The Levine-Pichler gate was introduced as a fast, diabatic substitute for the global adiabatic gate. This gate uses carefully chosen pulse sequences to perform a controlled-phase gate.
In this protocol, we apply the following pulse sequence: a pulse of duration formula_56 with Rabi frequency formula_17 and detuning formula_19, then a shift of the laser phase, formula_57, and then a second pulse of the same duration, Rabi frequency and detuning.
The intuition for this gate is best understood in terms of the picture given above. When the state of the system is formula_55, the pulses send the state around the Bloch sphere twice and it accumulates a net phase formula_58. When one of the atoms is in the formula_39 state, the other atom does not travel fully around the Bloch sphere after the first pulse because of the mismatch in Rabi frequency. The second pulse corrects for this by rotating the state around a different axis, returning the atom to the formula_14 state with a net phase formula_59, which can be calculated easily. The pulses can be chosen to make formula_60, which makes this gate equivalent to a controlled-Z gate up to local rotations. The truth table of the Levine-Pichler gate is given on the right. The gate has recently been improved using the methods of quantum optimal control.
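The working of the gate can be checked numerically without quoting the published parameter values. In the sketch below each relevant sector is reduced to an effective two-level system (perfect blockade assumed), and the detuning and laser phase jump are found by a coarse brute-force scan; the model and the search are illustrations, not the original derivation, and the scan takes a few seconds to run.

```python
import numpy as np
from scipy.linalg import expm

# Brute-force sketch of the Levine-Pichler protocol.  The |01>/|10> sector is
# a two-level system {|1>, |r>} driven at Omega; the |11> sector is
# {|11>, bright state} driven at sqrt(2)*Omega (perfect blockade assumed).
Omega = 1.0

def two_pulses(g, Delta, xi):
    """Two pulses of length tau; the second has its laser phase advanced by xi."""
    tau = 2*np.pi/np.sqrt(Delta**2 + 2*Omega**2)
    def U(phase):
        H = np.array([[0.0,                      0.5*g*np.exp(-1j*phase)],
                      [0.5*g*np.exp(1j*phase),  -Delta                  ]])
        return expm(-1j*H*tau)
    return U(xi) @ U(0.0)

best = None
for Delta in np.linspace(0.2, 0.8, 61):
    for xi in np.linspace(0.0, 2*np.pi, 241):
        u01 = two_pulses(Omega, Delta, xi)[0, 0]              # <1|U|1>
        u11 = two_pulses(np.sqrt(2)*Omega, Delta, xi)[0, 0]   # <11|U|11>
        leak = (1 - abs(u01)**2) + (1 - abs(u11)**2)
        # CZ condition: phi_2 = 2*phi_1 + pi (mod 2*pi)
        mismatch = np.angle(np.exp(1j*(np.angle(u11) - 2*np.angle(u01) - np.pi)))
        cost = leak + abs(mismatch)
        if best is None or cost < best[0]:
            best = (cost, Delta, xi, abs(u01)**2, abs(u11)**2, mismatch)

_, Delta, xi, p01, p11, mis = best
print(f"Delta/Omega ~ {Delta:.3f},  xi ~ {xi:.2f} rad")
print(f"return populations:  |01> {p01:.4f},  |11> {p11:.4f}")
print(f"CZ phase-condition residual: {mis:+.3f} rad")
```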
Entangling gates in state-of-the-art neutral atom quantum computing platforms have been implemented with fidelities of up to 0.995.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "CZ"
},
{
"math_id": 1,
"text": "\\pi"
},
{
"math_id": 2,
"text": " 2\\pi "
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "V_{qq} \\approx \\frac{ \\mu_{B}^{2}}{R^{6}} "
},
{
"math_id": 5,
"text": " \\mu_{B} "
},
{
"math_id": 6,
"text": "R "
},
{
"math_id": 7,
"text": "10^{-5} "
},
{
"math_id": 8,
"text": "R = 10 \\mu m "
},
{
"math_id": 9,
"text": "V_{rr} = \\frac{(n^{2}e a_{0})^{2}}{R^{3}} "
},
{
"math_id": 10,
"text": "a_{0} "
},
{
"math_id": 11,
"text": " 100"
},
{
"math_id": 12,
"text": "R=10\\mu m "
},
{
"math_id": 13,
"text": "H = H_{1} + H_{2} + V_{rr} |r \\rangle _{1} \\langle r| \\otimes |r \\rangle _{2} \\langle r|) "
},
{
"math_id": 14,
"text": "|1\\rangle "
},
{
"math_id": 15,
"text": "|r\\rangle "
},
{
"math_id": 16,
"text": " H_{i} = \\frac{1}{2}((\\Omega |1 \\rangle _{i} \\langle r| + \\Omega^{*} |r \\rangle _{i} \\langle 1|) - \\Delta |r \\rangle _{i} \\langle r| "
},
{
"math_id": 17,
"text": "\\Omega "
},
{
"math_id": 18,
"text": " | 1 \\rangle "
},
{
"math_id": 19,
"text": " \\Delta "
},
{
"math_id": 20,
"text": "|V_{rr}| >> |\\Omega|, |\\Delta| "
},
{
"math_id": 21,
"text": "|rr \\rangle "
},
{
"math_id": 22,
"text": " |00 \\rangle "
},
{
"math_id": 23,
"text": " | 1\\rangle"
},
{
"math_id": 24,
"text": " |10\\rangle"
},
{
"math_id": 25,
"text": " |01\\rangle"
},
{
"math_id": 26,
"text": "H_{i} "
},
{
"math_id": 27,
"text": "E_{LS}^{(1)} = \\frac{1}{2}(\\Delta \\pm \\sqrt{\\Omega^{2} +\\Delta^{2}}) "
},
{
"math_id": 28,
"text": "|11 \\rangle "
},
{
"math_id": 29,
"text": "\\{|1r \\rangle, |r1 \\rangle, |11 \\rangle \\}"
},
{
"math_id": 30,
"text": " |b\\rangle = \\frac{1}{\\sqrt{2}}(|r1\\rangle + |1r\\rangle) "
},
{
"math_id": 31,
"text": " |d\\rangle = \\frac{1}{\\sqrt{2}}(|r1\\rangle - |1r\\rangle) "
},
{
"math_id": 32,
"text": "| 11 \\rangle "
},
{
"math_id": 33,
"text": "H = -\\Delta(|b\\rangle \\langle b| + |d\\rangle \\langle d|) + \\frac{\\sqrt{2}}{2}(\\Omega |b\\rangle \\langle 11| + \\Omega^{*} |11\\rangle \\langle b|) "
},
{
"math_id": 34,
"text": " |11\\rangle "
},
{
"math_id": 35,
"text": "E_{LS}^{(2)} = \\frac{1}{2}(\\Delta \\pm \\sqrt{2\\Omega^{2} +\\Delta^{2}}) "
},
{
"math_id": 36,
"text": " | \\tilde{11} \\rangle = \\cos (\\theta/2)| 11 \\rangle + \\sin (\\theta/2)| b \\rangle "
},
{
"math_id": 37,
"text": " | \\tilde{b} \\rangle = \\cos (\\theta/2)| b \\rangle - \\sin (\\theta/2)| 11\\rangle "
},
{
"math_id": 38,
"text": "\\theta"
},
{
"math_id": 39,
"text": "|0\\rangle "
},
{
"math_id": 40,
"text": " \\pi "
},
{
"math_id": 41,
"text": "2 \\pi "
},
{
"math_id": 42,
"text": "|00 \\rangle "
},
{
"math_id": 43,
"text": "|0 \\rangle "
},
{
"math_id": 44,
"text": "-1 "
},
{
"math_id": 45,
"text": "2 \\pi"
},
{
"math_id": 46,
"text": "|00\\rangle "
},
{
"math_id": 47,
"text": "|00\\rangle \\rightarrow | 00\\rangle "
},
{
"math_id": 48,
"text": "|01\\rangle \\rightarrow e^{i\\phi_{1}}| 01\\rangle "
},
{
"math_id": 49,
"text": "|10\\rangle \\rightarrow e^{i\\phi_{1}}| 10\\rangle "
},
{
"math_id": 50,
"text": " \\phi_{1} = \\int E_{LS}^{(1)} (t) dt = \\int \\frac{1}{2}(\\Delta(t) -\\sqrt{\\Omega^{2}(t) + \\Delta^{2} (t)}) dt"
},
{
"math_id": 51,
"text": "|11\\rangle \\rightarrow e^{i\\phi_{2}}| 11\\rangle "
},
{
"math_id": 52,
"text": " \\phi_{2} = \\int E_{LS}^{(2)} (t) dt = \\int \\frac{1}{2}(\\Delta(t) -\\sqrt{2\\Omega^{2}(t) + \\Delta^{2} (t)}) dt"
},
{
"math_id": 53,
"text": "U = \\exp(-i\\phi_{1} |1 \\rangle \\langle 1|) "
},
{
"math_id": 54,
"text": " \\int (E_{LS}^{(2)}(t) - 2E_{LS}^{(1)}(t)) dt "
},
{
"math_id": 55,
"text": "|11\\rangle "
},
{
"math_id": 56,
"text": " \\tau = 2 \\pi/\\sqrt{\\Delta^{2} + 2\\Omega^{2}} "
},
{
"math_id": 57,
"text": " \\Omega \\rightarrow e^{-i \\xi} \\Omega "
},
{
"math_id": 58,
"text": " \\phi_{2} = \\frac{4 \\pi \\Delta}{\\sqrt{\\Delta^{2} +2 \\Omega^{2}}} "
},
{
"math_id": 59,
"text": " \\phi_{1} "
},
{
"math_id": 60,
"text": "e^{i\\phi_{2}} =e^{i(2\\phi_{1}+\\pi)} "
}
]
| https://en.wikipedia.org/wiki?curid=76055488 |
7606440 | Efimov state | The Efimov effect is an effect in the quantum mechanics of few-body systems predicted by the Russian theoretical physicist V. N. Efimov in 1970. In the Efimov effect, three identical bosons interact, and an infinite series of excited three-body energy levels is predicted when a two-body state is exactly at the dissociation threshold. One corollary is that there exist bound states (called Efimov states) of three bosons even if the two-particle attraction is too weak to allow two bosons to form a pair. A (three-particle) Efimov state, where the (two-body) sub-systems are unbound, is often depicted symbolically by the Borromean rings. This means that if one of the particles is removed, the remaining two fall apart. In this case, the Efimov state is also called a Borromean state.
Theory.
Efimov predicted that, as the pair interactions among three identical bosons approach resonance—that is, as the binding energy of some two-body bound state approaches zero or the scattering length of such a state becomes infinite—the three-body spectrum exhibits an infinite sequence of bound states formula_0 whose scattering lengths formula_1 and binding energies formula_2 each form a geometric progression
formula_3
where the common ratio
formula_4
is a universal constant. Here
formula_5
is the order of the imaginary-order modified Bessel function of the second kind formula_6 that describes the radial dependence of the wavefunction. By virtue of the resonance-determined boundary conditions, it is the unique positive value of formula_7 satisfying the transcendental equation
formula_8.
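The constants quoted above follow directly from this equation. A short numerical solution (a sketch using standard root finding):

```python
import math
from scipy.optimize import brentq

# Solve the transcendental equation above for its unique positive root s0 and
# form the universal scaling factor lambda = exp(pi/s0).
def f(s):
    return -s*math.cosh(math.pi*s/2) + (8/math.sqrt(3))*math.sinh(math.pi*s/6)

s0 = brentq(f, 0.1, 2.0)         # the root is bracketed between 0.1 and 2
lam = math.exp(math.pi/s0)
print(f"s0      = {s0:.7f}")      # ~1.0062378
print(f"lambda  = {lam:.5f}")     # ~22.69438
print(f"lambda^2 (ratio of successive binding energies) = {lam**2:.1f}")
```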
Experimental results.
In 2005, for the first time the research group of Rudolf Grimm and Hanns-Christoph Nägerl from the Institute for Experimental Physics at the University of Innsbruck experimentally confirmed such a state in an ultracold gas of caesium atoms. In 2006, they published their findings in the scientific journal Nature.
Further experimental proof for the existence of the Efimov state has been given recently by independent groups. Almost 40 years after Efimov's purely theoretical prediction, the characteristic periodic behavior of the states has been confirmed.
The most accurate experimental value of the scaling factor of the states has been determined by the experimental group of Rudolf Grimm at Innsbruck University as 21.0(1.3), being very close to Efimov's original prediction.
The interest in the "universal phenomena" of cold atomic gases is still growing, especially because of the long-awaited experimental results. The discipline of universality in cold atomic gases near the Efimov states is sometimes referred to as "Efimov physics".
In 2014, the experimental group of Cheng Chin of the University of Chicago and the group of Matthias Weidemüller of the University of Heidelberg have observed Efimov states in an ultracold mixture of lithium and caesium atoms, which extends Efimov's original picture of three identical bosons.
An Efimov state existing as an excited state of a helium trimer was observed in an experiment in 2015.
Usage.
The Efimov states are independent of the underlying physical interaction and can in principle be observed in all quantum mechanical systems (i.e. molecular, atomic, and nuclear).
The states are very special because of their "non-classical" nature: The size of each three-particle Efimov state is much larger than the force-range between the individual particle pairs. This means that the state is purely quantum mechanical. Similar phenomena are observed in two-neutron halo-nuclei, such as lithium-11; these are called Borromean nuclei. (Halo nuclei could be seen as special Efimov states, depending on the subtle definitions.) | [
{
"math_id": 0,
"text": "N=0,1,2,\\ldots"
},
{
"math_id": 1,
"text": "a_{N}"
},
{
"math_id": 2,
"text": "E_N"
},
{
"math_id": 3,
"text": "\\begin{align}\na_{N}&=a_0\\lambda^N\\\\\nE_{N}&=E_0\\lambda^{-2N}\n\\end{align}"
},
{
"math_id": 4,
"text": "\\lambda=\\mathrm{e}^{\\mathrm{\\pi}/s_0}=22.69438\\ldots"
},
{
"math_id": 5,
"text": "s_0=1.0062378\\ldots"
},
{
"math_id": 6,
"text": "\\tilde{K}_{s_0}(r/a)"
},
{
"math_id": 7,
"text": "s"
},
{
"math_id": 8,
"text": "-s\\cosh\\left.\\tfrac{\\mathrm{\\pi}s}{2}\\right.+\\tfrac{8}{\\sqrt{3}}\\sinh\\left.\\tfrac{\\mathrm{\\pi}s}{6}\\right.=0"
}
]
| https://en.wikipedia.org/wiki?curid=7606440 |
76067366 | Fiveling | Five crystals arranged round a common axis
A fiveling, also known as a decahedral nanoparticle, a multiply-twinned particle (MTP), a pentagonal nanoparticle, a pentatwin, or a five-fold twin is a type of twinned crystal that can exist at sizes ranging from nanometers to millimetres. It contains five different single crystals arranged around a common axis. In most cases each unit has a face centered cubic (fcc) arrangement of the atoms, although they are also known for other types of crystal structure.
They nucleate at quite small sizes in the nanometer range, but can be grown much larger. They have been found in mineral crystals excavated from mines such as pentagonite or native gold from Ukraine, in rods of metals grown via electrochemical processes and in nanoparticles produced by the condensation of metals either onto substrates or in inert gases. They have been investigated for their potential uses in areas such as improving the efficiency of solar cells, or heterogeneous catalysis for more efficient production of chemicals. Information about them is distributed across a diverse range of scientific disciplines, mainly chemistry, materials science, mineralogy, nanomaterials and physics. Because many different names have been used, sometimes the information in the different disciplines or within any one discipline is fragmented and overlapping.
At sizes from the nanometer range up to millimetres, fivelings of fcc metals often have a combination of {111} and {100} facets, a low energy shape called a Marks decahedron. Relative to a single crystal, at small sizes a fiveling can be a lower energy structure due to having more low energy surface facets. Balancing this there is an energy cost due to elastic strains to close an angular gap (disclination), which makes them higher in energy at larger sizes. They can be the most stable structure in some intermediate sizes, but they can be one among many in a population of different structures due to a combination of coexisting nanoparticles and kinetic growth factors. The temperature, gas environment and chemisorption can play an important role in both their thermodynamic stability and growth. While they are often symmetric, they can also be asymmetric with the disclination not in the center of the particle.
History.
Dating back to the nineteenth century there are reports of these particles by authors such as Jacques-Louis Bournon in 1813 for marcasite, and Gustav Rose in 1831 for gold. In mineralogy and the crystal twinning literature they are referred to as a type of cyclic twin where a number of identical single crystal units are arranged in a ring-like pattern where they all join at a common point or line. The name fiveling comes from them having five members (single crystals). The older literature was mainly observational, with information on many materials documented by Victor Mordechai Goldschmidt in his "Atlas der Kristallformen". Drawings are available showing their presence in marcasite, gold, silver, copper and diamond. New mineral forms with a fiveling structure continue to be found, for instance pentagonite, whose structure was first decoded in 1973, is named because it is often found with the five-fold twinning.
Most modern analysis started with the observation of these particles by Shozo Ino and Shiro Ogawa in 1966-67, and independently but slightly later (which they acknowledged) work by John Allpress and John Veysey Sanders. In both cases these were for vacuum deposition of metal onto substrates in very clean (ultra-high vacuum) conditions, where nanoparticle islands of size 10-50 nm were formed during thin film growth. Using transmission electron microscopy and diffraction these authors demonstrated the presence of the five single crystal units in the particles, and also the twin relationships. They also observed single crystals and a related type of icosahedral nanoparticle. They called the five-fold and icosahedral crystals multiply twinned particles (MTPs). In the early work near perfect decahedron (pentagonal bipyramid) and icosahedron shapes were formed, so they were called decahedral MTPs or icosahedral MTPs, the names connecting to the decahedral (formula_0) and icosahedral (formula_1) point group symmetries. Parallel, and apparently independent there was work on larger metal whiskers (nanowires) which sometimes showed a very similar five-fold structure, an occurrence reported in 1877 by Gerhard vom Rath. There was fairly extensive analysis following this, particularly for the nanoparticles, both of their internal structure by some of the first electron microscopes that could image at the atomic scale, and by various continuum or atomic models as cited later.
Following this early work there was a large effort, mainly in Japan, to understand what were then called "fine particles", but would now be called nanoparticles. By heating up different elements so atoms evaporated and were then condensed in an inert argon atmosphere, fine particles of almost all the elemental solids were made and then analyzed using electron microscopes. The decahedral particles were found for all face centered cubic materials and a few others, often together with other shapes.
While there was some continuing work over the following decades, it was with the National Nanotechnology Initiative that substantial interest was reignited. At the same time terms such as pentagonal nanoparticle, pentatwin, or five-fold twin became common in the literature, together with the earlier names. A large number of different methods have now been published for fabricating fivelings, sometimes with a high yield but often as part of a larger population of different shapes. These range from colloidal solution methods to different deposition approaches. It is documented that fivelings occur frequently for diamond, gold and silver, sometimes for copper or palladium and less often for some of the other face-centered cubic (fcc) metals such as nickel. There are also cases such as pentagonite where the crystal structure allows for five-fold twinning with minimal to no elastic strain (see later). There is work where they have been observed in colloidal crystals consisting of ordered arrays of nanoparticles, and single crystals composed on individual decahedral nanoparticles. There has been extensive modeling by many different approaches such as embedded atom, many body, molecular dynamics, tight binding approaches, and density functional theory methods as discussed by Francesca Baletto and Riccardo Ferrando and also discussed for energy landscapes later.
Disclination strain.
These particles consist of five different (single crystal) units which are joined together by twin boundaries. The simplest form shown in the figure has five tetrahedral crystals which most commonly have a face centered cubic structure, but there are other possibilities such as diamond cubic and a few others as well as more complex shapes. The angle between two twin planes is approximately 70.5 degrees in fcc, so five of these sum to 352.5 degrees (not 360 degrees), leading to an angular gap. At small sizes this gap is closed by an elastic deformation, which Roland de Wit pointed out could be described as a wedge disclination, a type of defect first discussed by Vito Volterra in 1907. With a disclination the strains to close the gap vary radially and are distributed throughout the particle.
With other structures the angle can be different; marcasite has a twin angle of 74.6 degrees, so instead of closing a missing wedge, one of angle 13 degrees has to be opened, which would be termed a negative disclination of 13 degrees. It has been pointed out by Chao Liang and Yi Yu that when intermetallics are included there is a range of different angles, some similar to fcc where there is a deficiency (positive disclination), others such as AuCu where there is an overlap (negative disclination) similar to marcasite, while pentagonite has probably the smallest overlap at 3.5 degrees.
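The angular bookkeeping can be reproduced with a few lines of arithmetic; the fcc twin angle is arccos(1/3), and the marcasite angle is the 74.6 degrees quoted above.

```python
import math

# Angular closure for five fcc tetrahedral units (positive wedge disclination)
# versus the overlap for marcasite (negative disclination).
fcc = math.degrees(math.acos(1.0/3.0))          # ~70.53 degrees
print(f"fcc twin angle      : {fcc:6.2f} deg")
print(f"five fcc units      : {5*fcc:6.2f} deg")
print(f"gap to be closed    : {360 - 5*fcc:6.2f} deg")

marcasite = 74.6
print(f"marcasite overlap   : {5*marcasite - 360:6.1f} deg")
```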
Early experimental high-resolution transmission electron microscopy data supported the idea of a distributed disclination strain field in the nanoparticles, as did dark field and other imaging modes in electron microscopes. In larger particles dislocations have been detected to relieve some of the strain. The disclination deformation requires an energy which scales with the particle volume, so dislocations or grain boundaries are lower in energy for large sizes.
More recently there has been detailed analysis of the atomic positions first by Craig Johnson et al, followed up by a number of other authors, providing more information on the strains and showing how they are distributed in the particles. While the classic disclination strain field is a reasonable first approximation model, there are differences when more complete elastic models such as finite element methods are used; in particular, as pointed out by Johnson et al, anisotropic elasticity needs to be used. One further complication is that the strain field is three dimensional, and more complex approaches are needed to measure the full details as detailed by Bart Goris et al, who also mention issues with strain from the support film. In addition, as pointed out by Srikanth Patala, Monica Olvera de la Cruz and Marks and shown in the figure, the von Mises stresses are different for (kinetic growth) pentagonal bipyramids versus the minimum energy shape. As of 2024 the strains are consistent with finite element calculations and a disclination strain field, with the possible addition of a shear component at the twin boundaries to accommodate some of the strains.
An alternative to the disclination strain model which was proposed by B G Bagley in 1965 for whiskers is that there is a change in the atomic structure away from face-centered cubic; a hypothesis that a tetragonal crystal structure is lower in energy than fcc, and a lower energy atomic structure leads to the decahedral particles. This view was expanded upon by Cary Y. Yang and can also be found in some of the early work of Miguel José Yacamán. There have been measurements of the average structure using X-ray diffraction which it has been argued support this view. However, these x-ray measurements only see the average which necessarily shows a tetragonal arrangement, and there is extensive evidence for inhomogeneous deformations dating back to the early work of Allpress and Sanders, Tsutomu Komoda, Marks and David J. Smith and more recently by high resolution imaging of details of the atomic structure. As mentioned above, as of 2024 experimental imaging supports a disclination model with anisotropic elasticity.
Three-dimensional shape.
The three-dimensional shape depends upon how the fivelings are formed, including the environment such as gas pressure and temperature. In the very early work only pentagonal bipyramids were reported. In 1970 Ino tried to model the energetics, but found that these bipyramids were higher in energy than single crystals with a Wulff construction shape. He found a lower energy form where he added {100} facets, what is now commonly called the Ino decahedron. The surface energy of this form and a related icosahedral twin scale as the two-thirds power of the volume, so they can be lower in energy than a single crystal as discussed further below.
However, while Ino was able to explain the icosahedral particles, he was not able to explain the decahedral ones. Later Laurence D. Marks proposed a model using both experimental data and a theoretical analysis, which is based upon a modified Wulff construction which includes more surface facets, including Ino's {100} as well as re-entrant {111} surfaces at the twin boundaries with the possibility of others such as {110}, while retaining the decahedral formula_0 point group symmetry. This approach also includes the effect of gas and other environmental factors via how they change the surface energy of different facets. By combining this model with de Wit's elasticity, Archibald Howie and Marks were able to rationalize the stability of the decahedral particles. Other work soon confirmed the shape reported by Marks for annealed particles. This was further confirmed in detailed atomistic calculations a few years later by Charles Cleveland and Uzi Landman who coined the term Marks decahedra for these shapes, this name now being widely used.
The minimum energy or thermodynamic shape for these particles depends upon the relative surface energies of different facets, similar to a single crystal Wulff shape; they are formed by combining segments of a conventional Wulff construction with two additional internal facets to represent the twin boundaries. An overview of codes to calculate these shapes was published in 2021 by Christina Boukouvala et al. Considering just {111} and {100} facets:
The photograph of a 0.5 cm gold fiveling from Miass is a Marks decahedron with formula_6, while the sketch of Rose is for formula_7. The 75 atom cluster shown above corresponds to the same shape for a small number of atoms. Experimentally, in fcc crystals fivelings with only {111} and {100} facets are common, but many other facets can be present in the Wulff construction leading to more rounded shapes, for instance {113} facets for silicon. It is known that the surface can reconstruct to a different atomic arrangement in the outermost atomic plane, for instance a dimer reconstruction for the {100} facets of silicon particles, or a hexagonal overlayer on the {100} facets of gold decahedra.
What shape is present depends not just on the surface energy of the different facets, but also upon how the particles grow. The thermodynamic shape is determined by the Wulff construction, which considers the energy of each possible surface facet and yields the lowest energy shape. The original Marks decahedron was based upon a form of Wulff construction that takes into account the twin boundaries. There is a related kinetic Wulff construction where the growth rate of different surfaces is used instead of the energies. This type of growth matters when the formation of a new island on a flat facet limits the growth rate. If the {100} surfaces of Ino grow faster, then they will not appear in the final shape, similarly for the re-entrant surfaces at the twin boundaries—this leads to the pentagonal bipyramids often observed. Alternatively, if the {111} surfaces grow fast and {100} slow the kinetic shape will be a long rod along the common five-fold axis as shown in the figure.
Another different set of shapes can occur when diffusion of atoms to the particles dominates, a growth regime called diffusion controlled growth. In such cases surface curvature can play a major role, for instance leading to spikes originating at the sharp corners of a pentagonal bipyramids, sometimes leading to pointy stars, as shown in the figure.
Energy versus size.
The most common approach to understand the formation of these particles, first used by Ino in 1969, is to look at the energy as a function of size comparing icosahedral twins, decahedral nanoparticles and single crystals. The total energy for each type of particle can be written as the sum of three terms:
formula_8
for a volume formula_9, where formula_10 is the surface energy, formula_11 is the disclination strain energy to close the gap (or overlap for marcasite and others), and formula_12 is a coupling term for the effect of the strain on the surface energy via the surface stress, which can be a significant contribution. The sum of these three terms is compared to the total surface energy of a single crystal (which has no strain), and to similar terms for an icosahedral particle. Because the decahedral particles have a lower total surface energy than single crystals due (approximately, in fcc) to more low energy {111} surfaces, they are lower in total energy for an intermediate size regime, with the icosahedral particles more stable at very small sizes. (The icosahedral particle have even more {111} surfaces, but also more strain.) At large sizes the strain energy can become very large, so it is energetically favorable to have dislocations and/or a grain boundary instead of a distributed strain. The very large mineral samples are almost certainly trapped in metastable higher energy configurations.
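A toy version of this size argument, with arbitrary dimensionless coefficients and the surface-stress coupling term omitted, shows the crossover explicitly.

```python
import numpy as np

# Illustrative, arbitrary coefficients: the decahedron is given a slightly
# lower surface-energy prefactor than the single crystal (more {111} facets)
# but pays a strain energy proportional to the volume.
E_surf_single = 1.00    # single crystal: surface term only
E_surf_deca   = 0.97    # decahedron: lower surface prefactor (assumed)
E_strain_deca = 0.005   # decahedron: disclination strain energy per volume

V = np.logspace(0, 6, 7)                 # particle volumes (arbitrary units)
E_single = E_surf_single * V**(2/3)
E_deca   = E_surf_deca * V**(2/3) + E_strain_deca * V

for v, es, ed in zip(V, E_single, E_deca):
    winner = "decahedron" if ed < es else "single crystal"
    print(f"V = {v:8.0f}:  {winner} lower in energy")

# The two totals cross where V^(1/3) = (E_surf_single - E_surf_deca)/E_strain:
V_cross = ((E_surf_single - E_surf_deca)/E_strain_deca)**3
print(f"crossover at V ~ {V_cross:.0f} (arbitrary units)")
```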
There is no general consensus on the exact sizes when there is a transition in which type of particle is lowest in energy, as these vary with material and also the environment such as gas and temperature; the coupling surface stress term and also the surface energies of the facets are very sensitive to these. In addition, as first described by Michael Hoare and P Pal and R. Stephen Berry and analyzed for these particles by Pulickel Ajayan and Marks as well as discussed by others such as Amanda Barnard, David J. Wales, Kristen Fichthorn and Baletto and Ferrando, at very small sizes there will be a statistical population of different structures so many different ones will coexist. In many cases nanoparticles are believed to grow from a very small seed without changing shape, and reflect the distribution of coexisting structures. For systems where icosahedral and decahedral morphologies are both relatively low in energy, the competition between these structures has implications for structure prediction and for the global thermodynamic and kinetic properties. These result from a double funnel energy landscape where the two families of structures are separated by a relatively high energy barrier at the temperature where they are in thermodynamic equilibrium. This situation arises for a cluster of 75 atoms with the Lennard-Jones potential, where the global potential energy minimum is decahedral, and structures based upon incomplete Mackay icosahedra are also low in potential energy, but higher in entropy. The free energy barrier between these families is large compared to the available thermal energy at the temperature where they are in equilibrium. An example is shown in the figure, with probability in the lower part and energy above with axes of an order parameter formula_13 and temperature formula_14. At low temperature the 75 atom decahedral cluster (Dh) is the global free energy minimum, but as the temperature increases the higher entropy of the competing structures based on incomplete icosahedra (Ic) causes the finite system analogue of a first-order phase transition; at even higher temperatures a liquid-like state is favored.
There has been experimental support from work in which single nanoparticles are imaged using electron microscopes, either as they grow or as a function of time. One of the earliest works was that of Yagi et al, who directly observed changes in the internal structure with time during growth. More recent work has observed variations in the internal structure in liquid cells, or changes between different forms due to either (or both) heating or the electron beam in an electron microscope, including substrate effects.
Successive twinning.
Allpress and Sanders proposed an alternative approach to energy minimization to understanding these particles called "successive twinning". Here one starts with a single tetrahedral unit, which then forms a twin either by accident during growth or by collision with another tetrahedron. It was proposed that this could continue to eventually have five units join.
The term "successive twinning" has now come to mean a related concept: motion of the disclination either to or from a symmetric position as sketched in the atomistic simulation in the figure; see also Haiqiang Zhao et al for very similar experimental images.
While in many cases experimental images show symmetric structures, sometimes they are less so and the five-fold center is quite asymmetric. There are asymmetric cases which can be metastable, and asymmetry can also be a strain relief process or involved in how the particle convert to single crystals or from single crystals. During growth there may be changes, as directly observed by Katsumichi Yagi et al for growth inside an electron microscope, and migration of the disclination from the outside has been observed in liquid-cell studies in electron microscopes. Extensive details about the atomic processes involved in motion of the disclination have been given using molecular dynamics calculations supported by density functional theory as shown in the figure.
Connections.
There are a number of related concepts and applications of decahedral particles.
Quasicrystals.
Soon after the discovery of quasicrystals it was suggested by Linus Pauling that five-fold cyclic twins such as these were the source of the electron diffraction data observed by Dan Shechtman. While there are similarities, quasicrystals are now considered to be a class of packing which is different from fivelings and the related icosahedral particles.
Heterogeneous catalysts.
There are possible links to heterogeneous catalysis, with the decahedral particles displaying different performance. The first study by Avery and Sanders did not find them in automobile catalysts. Later work by Marks and Howie found them in silver catalysts, and there have been other reports. It has been suggested that the strain at the surface can change reaction rates, and since there is evidence that surface strain can change the adsorption of molecules and catalysis there is circumstantial support for this. As of 2024, there is some experimental evidence for different catalytic reactivity.
Plasmonics.
It is known that the response of the surface plasmon polaritons in nanoparticles depends upon their shape. As a consequence decahedral particles have specific optical responses. One suggested use is to improve light absorption using their plasmonic properties by adding them to polymer solar cells.
Thin films and mechanical deformation.
Most observations of fivelings have been for isolated particles. Similar structures can occur in thin films when particles merge to form a continuous coating, but do not recrystallize immediately. They can also form during annealing of films, which molecular dynamics simulations have indicated correlates to the motion of twin boundaries and a disclination, similar to the case of isolated nanoparticles described earlier. There is experimental evidence in thin films for interactions between partial dislocations and disclinations, as discussed in 1971 by de Wit. They can also be formed by mechanical deformation. The formation of a local fiveling structure by annealing or deformation has been attributed to a combination of stress relief and twin motion, which is different from the surface energy driven formation of isolated particles described above.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D_{5h}"
},
{
"math_id": 1,
"text": "I_h"
},
{
"math_id": 2,
"text": "\\gamma_{111} > 2 \\gamma_{100}/\\sqrt{3}"
},
{
"math_id": 3,
"text": "\\gamma_{100}/\\sqrt{3} < \\gamma_{111} < 2\\gamma_{100}/\\sqrt{3}"
},
{
"math_id": 4,
"text": "\\gamma_{111} < \\gamma_{100}/\\sqrt{3}"
},
{
"math_id": 5,
"text": "\\gamma_{100}"
},
{
"math_id": 6,
"text": "\\gamma_{111} \\approx 0.85 \\gamma_{100}"
},
{
"math_id": 7,
"text": "\\gamma_{111} \\approx 0.7 \\gamma_{100}"
},
{
"math_id": 8,
"text": "E_{total} = E_{surface} V^{2/3} + E_{strain} V + E_{surface\\ stress}V^{2/3}"
},
{
"math_id": 9,
"text": "V"
},
{
"math_id": 10,
"text": "E_{surface}"
},
{
"math_id": 11,
"text": "E_{strain}"
},
{
"math_id": 12,
"text": "E_{surface \\ stress}"
},
{
"math_id": 13,
"text": "Q_6"
},
{
"math_id": 14,
"text": "T"
}
]
| https://en.wikipedia.org/wiki?curid=76067366 |
76084 | Potentiometer | Type of resistor, usually with three terminals
A potentiometer is a three-terminal resistor with a sliding or rotating contact that forms an adjustable voltage divider. If only two terminals are used, one end and the wiper, it acts as a variable resistor or rheostat.
The measuring instrument called a potentiometer is essentially a voltage divider used for measuring electric potential (voltage); the component is an implementation of the same principle, hence its name.
Potentiometers are commonly used to control electrical devices such as volume controls on audio equipment. It is also used in speed control of fans. Potentiometers operated by a mechanism can be used as position transducers, for example, in a joystick. Potentiometers are rarely used to directly control significant power (more than a watt), since the power dissipated in the potentiometer would be comparable to the power in the controlled load.
Nomenclature.
Some terms in the electronics industry used to describe certain types of potentiometers are:
Construction.
Potentiometers consist of a resistive element, a sliding contact (wiper) that moves along the element, making good electrical contact with one part of it, electrical terminals at each end of the element, a mechanism that moves the wiper from one end to the other, and a housing containing the element and wiper.
Many inexpensive potentiometers are constructed with a resistive element (B in cutaway drawing) formed into an arc of a circle usually a little less than a full turn and a wiper (C) sliding on this element when rotated, making electrical contact. The resistive element can be flat or angled. Each end of the resistive element is connected to a terminal (E, G) on the case. The wiper is connected to a third terminal (F), usually between the other two. On panel potentiometers, the wiper is usually the center terminal of three. For single-turn potentiometers, this wiper typically travels just under one revolution around the contact. The only point of ingress for contamination is the narrow space between the shaft and the housing it rotates in.
Another type is the linear slider potentiometer, which has a wiper which slides along a linear element instead of rotating. Contamination can potentially enter anywhere along the slot the slider moves in, making effective sealing more difficult and compromising long-term reliability. An advantage of the slider potentiometer is that the slider position gives a visual indication of its setting. While the setting of a rotary potentiometer can be seen by the position of a marking on the knob, an array of sliders can give a visual impression of settings as in a graphic equalizer or faders on a mixing console.
The resistive element of inexpensive potentiometers is often made of graphite. Other materials used include resistance wire, carbon particles in plastic, and a ceramic/metal mixture called cermet.
Conductive track potentiometers use conductive polymer resistor pastes that contain hard-wearing resins and polymers, solvents, and lubricant, in addition to the carbon that provides the conductive properties.
Multiturn potentiometers are also operated by rotating a shaft, but by several turns rather than less than a full turn. Some multiturn potentiometers have a linear resistive element with a sliding contact moved by a lead screw; others have a helical resistive element and a wiper that turns through 10, 20, or more complete revolutions, moving along the helix as it rotates. Multiturn potentiometers, both user-accessible and preset, allow finer adjustments; rotation through the same angle changes the setting by typically a tenth as much as for a simple rotary potentiometer.
A string potentiometer is a multi-turn potentiometer operated by an attached reel of wire turning against a spring, allowing it to convert linear position to a variable resistance.
User-accessible rotary potentiometers can be fitted with a switch which operates usually at the anti-clockwise extreme of rotation. Before digital electronics became the norm such a component was used to allow radio and television receivers and other equipment to be switched on at minimum volume with an audible click, then the volume increased by turning the same knob. Multiple resistance elements can be ganged together with their sliding contacts on the same shaft, for example in stereo audio amplifiers for volume control. In other applications, such as domestic light dimmers, the normal usage pattern is best satisfied if the potentiometer remains set at its current position, so the switch is operated by a push action, alternately on and off, by axial presses of the knob.
Other potentiometers are enclosed within the equipment and are intended to only be adjusted when calibrating the equipment during manufacture or repair, and not otherwise touched. They are usually physically much smaller than user-accessible potentiometers, and may need to be operated by a screwdriver rather than having a knob. They are usually called "trimmer", "trim[ming]", or "preset" potentiometers (or pots), or the genericized brand name "trimpot".
Resistance–position relationship: "taper".
The relationship between slider position and resistance, known as the "taper" or "law", can be controlled during manufacture by changing the composition or thickness of the resistance coating along the resistance element. Although in principle any taper is possible, two types are widely manufactured: linear and logarithmic (aka "audio taper") potentiometers.
A letter code may be used to identify which taper is used, but the letter code definitions are not standardized. Potentiometers made in Asia and the US are usually marked with an "A" for logarithmic taper or a "B" for linear taper; "C" for the rarely seen reverse logarithmic taper. Others, particularly those from Europe, may be marked with an "A" for linear taper, a "C" or "B" for logarithmic taper, or an "F" for reverse logarithmic taper. The code used also varies between different manufacturers. When a percentage is referenced with a non-linear taper, it relates to the resistance value at the midpoint of the shaft rotation. A 10% log taper would therefore measure 10% of the total resistance at the midpoint of the rotation; i.e. 10% log taper on a 10 kOhm potentiometer would yield 1 kOhm at the midpoint. The higher the percentage, the steeper the log curve.
Linear taper potentiometer.
A "linear taper potentiometer" ("linear" describes the electrical characteristic of the device, not the geometry of the resistive element) has a resistive element of constant cross-section, resulting in a device where the resistance between the contact (wiper) and one end terminal is proportional to the distance between them. Linear taper potentiometers are used when the division ratio of the potentiometer must be proportional to the angle of shaft rotation (or slider position), for example, controls used for adjusting the centering of the display on an analog cathode-ray oscilloscope. Precision potentiometers have an accurate relationship between resistance and slider position.
Logarithmic potentiometer.
A "logarithmic taper potentiometer" is a potentiometer that has a bias built into the resistive element. Basically this means the center position of the potentiometer is not one half of the total value of the potentiometer. The resistive element is designed to follow a logarithmic taper, aka a mathematical exponent or "squared" profile.
A logarithmic taper potentiometer is constructed with a resistive element that either "tapers" in from one end to the other, or is made from a material whose resistivity varies from one end to the other. This results in a device where output voltage is a logarithmic function of the slider position.
Most (cheaper) "log" potentiometers are not accurately logarithmic, but use two regions of different resistance (but constant resistivity) to approximate a logarithmic law. The two resistive tracks overlap at approximately 50% of the potentiometer rotation; this gives a stepwise logarithmic taper. A logarithmic potentiometer can also be simulated with a linear one and an external resistor. True logarithmic potentiometers are significantly more expensive.
Logarithmic taper potentiometers are often used for volume or signal level in audio systems, as human perception of audio volume is logarithmic, according to the Weber–Fechner law.
Contactless potentiometer.
Unlike mechanical potentiometers, "non-contact potentiometers" use an optical disk to trigger an infrared sensor, or a magnet to trigger a magnetic sensor (and since other sensing principles exist, such as capacitive sensing, other types of non-contact potentiometers can in principle be built); an electronic circuit then does the signal processing to provide an output signal that can be analogue or digital.
An example of a non-contact potentiometer can be found in the AS5600 integrated circuit. Absolute encoders use similar principles, but as industrial devices their cost generally makes them impractical for domestic appliances.
Rheostat.
The most common way to vary the resistance in a circuit continuously is to use a rheostat, a two-terminal variable resistor. Because of the change in resistance, rheostats can also be used to adjust the magnitude of the current in a circuit. The word "rheostat" was coined in 1843 by Sir Charles Wheatstone, from the Greek "rheos" meaning "stream" and "-states" (from "histanai", "to set, to cause to stand") meaning "setter, regulating device". For low-power applications (less than about 1 watt) a three-terminal potentiometer is often used, with one terminal unconnected or connected to the wiper.
Where the rheostat must be rated for higher power (more than about 1 watt), it may be built with a resistance wire wound around a semi-circular insulator, with the wiper sliding from one turn of the wire to the next. Sometimes a rheostat is made from resistance wire wound on a heat-resisting cylinder, with the slider made from a number of metal fingers that grip lightly onto a small portion of the turns of resistance wire. The "fingers" can be moved along the coil of resistance wire by a sliding knob thus changing the "tapping" point. Wire-wound rheostats made with ratings up to several thousand watts are used in applications such as DC motor drives, electric welding controls, or in the controls for generators. The rating of the rheostat is given with the full resistance value and the allowable power dissipation is proportional to the fraction of the total device resistance in circuit. Carbon-pile rheostats are used as load banks for testing automobile batteries and power supplies.
Digital potentiometer.
A digital potentiometer (often called digipot) is an electronic component that mimics the functions of analog potentiometers. Through digital input signals, the resistance between two terminals can be adjusted, just as in an analog potentiometer. There are two main functional types: volatile, which lose their set position if power is removed, and are usually designed to initialise at the minimum position, and non-volatile, which retain their set position using a storage mechanism similar to flash memory or EEPROM.
Usage of a digipot is far more complex than that of a simple mechanical potentiometer, and there are many limitations to observe; nevertheless they are widely used, often for factory adjustment and calibration of equipment, especially where the limitations of mechanical potentiometers are problematic. A digipot is generally immune to the effects of moderate long-term mechanical vibration or environmental contamination, to the same extent as other semiconductor devices, and can be secured electronically against unauthorised tampering by protecting the access to its programming inputs by various means.
In equipment which has a microprocessor, FPGA or other functional logic which can store settings and reload them to the "potentiometer" every time the equipment is powered up, a multiplying DAC can be used in place of a digipot, and this can offer higher setting resolution, less drift with temperature, and more operational flexibility.
Membrane potentiometers.
A membrane potentiometer uses a conductive membrane that is deformed by a sliding element to contact a resistor voltage divider. Linearity can range from 0.50% to 5% depending on the material, design and manufacturing process. The repeat accuracy is typically between 0.1 mm and 1.0 mm with a theoretically infinite resolution. The service life of these types of potentiometers is typically 1 million to 20 million cycles depending on the materials used during manufacturing and the actuation method; contact and contactless (magnetic) methods are available (to sense position). Many different material variations are available such as PET, FR4, and Kapton. Membrane potentiometer manufacturers offer linear, rotary, and application-specific variations. The linear versions can range from 9 mm to 1000 mm in length and the rotary versions range from 20 to 450 mm in diameter, with each having a height of 0.5 mm. Membrane potentiometers can be used for position sensing.
For touch-screen devices using resistive technology, a two-dimensional membrane potentiometer provides x and y coordinates. The top layer is thin glass spaced close to a neighboring inner layer. The underside of the top layer has a transparent conductive coating; the surface of the layer beneath it has a transparent resistive coating. A finger or stylus deforms the glass to contact the underlying layer. Edges of the resistive layer have conductive contacts.
Locating the contact point is done by applying a voltage to opposite edges, leaving the other two edges temporarily unconnected. The voltage of the top layer provides one coordinate. Disconnecting those two edges, and applying voltage to the other two, formerly unconnected, provides the other coordinate. Alternating rapidly between pairs of edges provides frequent position updates. An analog-to-digital converter provides output data.
Advantages of such sensors are that only five connections to the sensor are needed, and the associated electronics is comparatively simple. Another is that any material that depresses the top layer over a small area works well. A disadvantage is that sufficient force must be applied to make contact. Another is that the sensor requires occasional calibration to match touch location to the underlying display. (Capacitive sensors require no calibration or contact force, only proximity of a finger or other conductive object. However, they are significantly more complex.)
Applications.
Potentiometers are rarely used to directly control significant amounts of power (more than a watt or so). Instead they are used to adjust the level of analog signals (for example volume controls on audio equipment), and as control inputs for electronic circuits. For example, a light dimmer uses a potentiometer to control the switching of a TRIAC and so indirectly to control the brightness of lamps.
Preset potentiometers are widely used throughout electronics wherever adjustments must be made during manufacturing or servicing.
User-actuated potentiometers are widely used as user controls, and may control a very wide variety of equipment functions. The widespread use of potentiometers in consumer electronics declined in the 1990s, with rotary incremental encoders, up/down push-buttons, and other digital controls now more common. However they remain in many applications, such as volume controls and as position sensors.
Audio control.
Low-power potentiometers, both slide and rotary, are used to control audio equipment, changing loudness, frequency attenuation, and other characteristics of audio signals.
The 'log pot', that is, a potentiometer with a resistance taper or "curve" (or law) of a logarithmic (log) form, is used as the volume control in audio power amplifiers, where it is also called an "audio taper pot", because the amplitude response of the human ear is approximately logarithmic. It ensures that on a volume control marked 0 to 10, for example, a setting of 5 sounds subjectively half as loud as a setting of 10. There is also an "anti-log pot" or "reverse audio taper" which is simply the reverse of a logarithmic potentiometer. It is almost always used in a ganged configuration with a logarithmic potentiometer, for instance, in an audio balance control.
Potentiometers used in combination with filter networks act as tone controls or equalizers.
In audio systems, the word linear is sometimes applied in a confusing way to describe slide potentiometers because of the straight-line nature of the physical sliding motion. The word linear, when applied to a potentiometer, regardless of whether it is a slide or rotary type, describes a linear relationship between the pot's position and the measured value at the pot's tap (wiper or electrical output) pin.
Television.
Potentiometers were formerly used to control picture brightness, contrast, and color response. A potentiometer was often used to adjust "vertical hold", which affected the synchronization between the receiver's internal sweep circuit (sometimes a multivibrator) and the received picture signal, along with other things such as audio-video carrier offset, tuning frequency (for push-button sets) and so on. It also helps in frequency modulation of waves.
Motion control.
Potentiometers can be used as position feedback devices in order to create closed-loop control, such as in a servomechanism. This method of motion control is the simplest method of measuring the angle or displacement.
Transducers.
Potentiometers are also very widely used as a part of displacement transducers because of the simplicity of construction and because they can give a large output signal.
Computation.
In analog computers, high precision potentiometers are used to scale intermediate results by desired constant factors, or to set initial conditions for a calculation. A motor-driven potentiometer may be used as a function generator, using a non-linear resistance card to supply approximations to trigonometric functions. For example, the shaft rotation might represent an angle, and the voltage division ratio can be made proportional to the cosine of the angle.
Theory of operation.
The potentiometer can be used as a voltage divider to obtain a manually adjustable output voltage at the slider (wiper) from a fixed input voltage applied across the two ends of the potentiometer. This is their most common use.
The voltage across "R"L can be calculated by:
formula_0
If "R"L is large compared to the other resistances (like the input to an operational amplifier), the output voltage can be approximated by the simpler equation:
formula_1
As an example, assume formula_2, formula_3, formula_4, and formula_5
Since the load resistance is large compared to the other resistances, the output voltage "V"L will be approximately:
formula_6
Because of the load resistance, however, it will actually be slightly lower: ≈ 6.623 "V".
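The example above can be reproduced with a short calculation; a sketch follows, where the function names are illustrative and the two expressions correspond to formula_0 (loaded) and formula_1 (unloaded):

```python
def divider_loaded(vs, r1, r2, rl):
    """Exact divider output with the load RL connected across R2."""
    return vs * (r2 * rl) / (r1 * rl + r2 * rl + r1 * r2)

def divider_unloaded(vs, r1, r2):
    """Approximation valid when RL is much larger than R1 and R2."""
    return vs * r2 / (r1 + r2)

vs, r1, r2, rl = 10.0, 1e3, 2e3, 100e3
print(divider_unloaded(vs, r1, r2))    # ≈ 6.667 V
print(divider_loaded(vs, r1, r2, rl))  # ≈ 6.623 V
```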
One of the advantages of the potential divider compared to a variable resistor in series with the source is that, while variable resistors have a maximum resistance where some current will always flow, dividers are able to vary the output voltage from maximum ("V"S) to ground (zero volts) as the wiper moves from one end of the potentiometer to the other. There is, however, always a small amount of contact resistance.
In addition, the load resistance is often not known and therefore simply placing a variable resistor in series with the load could have a negligible effect or an excessive effect, depending on the load.
Failure.
Ageing may cause intermittent contact between the resistive track and the wiper as it is rotated. In volume control use this causes crackling.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nV_\\mathrm{L} = { R_2 R_\\mathrm{L} \\over R_1 R_\\mathrm{L} + R_2 R_\\mathrm{L} + R_1 R_2}\\cdot V_s.\n"
},
{
"math_id": 1,
"text": "\nV_\\mathrm{L} = { R_2 \\over R_1 + R_2 }\\cdot V_s.\n"
},
{
"math_id": 2,
"text": "V_\\mathrm{S} = 10\\ \\mathrm{V}"
},
{
"math_id": 3,
"text": "R_1 = 1\\ \\mathrm{k \\Omega}"
},
{
"math_id": 4,
"text": "R_2 = 2\\ \\mathrm{k \\Omega}"
},
{
"math_id": 5,
"text": "R_\\mathrm{L} = 100\\ \\mathrm{k \\Omega}."
},
{
"math_id": 6,
"text": "\n{2\\ \\mathrm{k \\Omega} \\over 1\\ \\mathrm{k \\Omega} + 2\\ \\mathrm{k \\Omega} } \\cdot 10\\ \\mathrm{V} = {2 \\over 3} \\cdot 10\\ \\mathrm{V} \\approx 6.667\\ \\mathrm{V}.\n"
}
]
| https://en.wikipedia.org/wiki?curid=76084 |
76086 | Electric motor | Machine that converts electrical energy into mechanical energy
An electric motor is a machine that converts electrical energy into mechanical energy. Most electric motors operate through the interaction between the motor's magnetic field and electric current in a wire winding to generate force in the form of torque applied on the motor's shaft. An electric generator is mechanically identical to an electric motor, but operates in reverse, converting mechanical energy into electrical energy.
Electric motors can be powered by direct current (DC) sources, such as from batteries or rectifiers, or by alternating current (AC) sources, such as a power grid, inverters or electrical generators.
Electric motors may be classified by considerations such as power source type, construction, application and type of motion output. They can be brushed or brushless, single-phase, two-phase, or three-phase, axial or radial flux, and may be air-cooled or liquid-cooled.
Standardized motors provide power for industrial use. The largest are used for ship propulsion, pipeline compression and pumped-storage applications, with output exceeding 100 megawatts.
Applications include industrial fans, blowers and pumps, machine tools, household appliances, power tools, vehicles, and disk drives. Small motors may be found in electric watches. In certain applications, such as in regenerative braking with traction motors, electric motors can be used in reverse as generators to recover energy that might otherwise be lost as heat and friction.
Electric motors produce linear or rotary force (torque) intended to propel some external mechanism. This makes them a type of actuator. They are generally designed for continuous rotation, or for linear movement over a significant distance compared to their size. Solenoids also convert electrical power to mechanical motion, but over only a limited distance.
<templatestyles src="Template:TOC limit/styles.css" />
History.
Early motors.
Before modern electromagnetic motors, experimental motors that worked by electrostatic force were investigated. The first electric motors were simple electrostatic devices described in experiments by Scottish monk Andrew Gordon and American experimenter Benjamin Franklin in the 1740s. The theoretical principle behind them, Coulomb's law, was discovered but not published by Henry Cavendish in 1771. This law was discovered independently by Charles-Augustin de Coulomb in 1785, who published it, and it now bears his name. Due to the difficulty of generating the high voltages they required, electrostatic motors were never used for practical purposes.
The invention of the electrochemical battery by Alessandro Volta in 1799 made possible the production of persistent electric currents. Hans Christian Ørsted discovered in 1820 that an electric current creates a magnetic field, which can exert a force on a magnet. It only took a few weeks for André-Marie Ampère to develop the first formulation of the electromagnetic interaction and present Ampère's force law, which described the production of mechanical force by the interaction of an electric current and a magnetic field.
The first demonstration of the effect with a rotary motion was given by Michael Faraday on 3 September 1821 in the basement of the Royal Institution. A free-hanging wire was dipped into a pool of mercury, on which a permanent magnet (PM) was placed. When a current was passed through the wire, the wire rotated around the magnet, showing that the current gave rise to a close circular magnetic field around the wire. Faraday published the results of his discovery in the "Quarterly Journal of Science", and sent copies of his paper along with pocket-sized models of his device to colleagues around the world so they could also witness the phenomenon of electromagnetic rotations. This motor is often demonstrated in physics experiments, substituting brine for (toxic) mercury. Barlow's wheel was an early refinement to this Faraday demonstration, although these and similar homopolar motors remained unsuited to practical application until late in the century.
In 1827, Hungarian physicist Ányos Jedlik started experimenting with electromagnetic coils. After Jedlik solved the technical problems of continuous rotation with the invention of the commutator, he called his early devices "electromagnetic self-rotors". Although they were used only for teaching, in 1828 Jedlik demonstrated the first device to contain the three main components of practical DC motors: the stator, rotor and commutator. The device employed no permanent magnets, as the magnetic fields of both the stationary and revolving components were produced solely by the currents flowing through their windings.
DC motors.
The first commutator <templatestyles src="Template:Visible anchor/styles.css" />DC electric motor capable of turning machinery was invented by English scientist William Sturgeon in 1832. Following Sturgeon's work, a commutator-type direct-current electric motor was built by American inventors Thomas Davenport and Emily Davenport, which Thomas patented in 1837. The motors ran at up to 600 revolutions per minute, and powered machine tools and a printing press. Due to the high cost of primary battery power, the motors were commercially unsuccessful and bankrupted the Davenports. Several inventors followed Sturgeon in the development of DC motors, but all encountered the same battery cost issues. As no electricity distribution system was available at the time, no practical commercial market emerged for these motors.
After many other more or less successful attempts with relatively weak rotating and reciprocating apparatus, Prussian/Russian Moritz von Jacobi created the first real rotating electric motor in May 1834. It developed remarkable mechanical output power. His motor set a world record, which Jacobi improved four years later, in September 1838. His second motor was powerful enough to drive a boat with 14 people across a wide river. It was also in 1839/40 that other developers managed to build motors with similar and then higher performance.
In 1827–1828, Jedlik built a device using similar principles to those used in his electromagnetic self-rotors that was capable of useful work. He built a model electric vehicle that same year.
A major turning point came in 1864, when Antonio Pacinotti first described the ring armature (although initially conceived in a DC generator, i.e. a dynamo). This featured symmetrically grouped coils closed upon themselves and connected to the bars of a commutator, the brushes of which delivered practically non-fluctuating current. The first commercially successful DC motors followed the developments by Zénobe Gramme who, in 1871, reinvented Pacinotti's design and adopted some solutions by Werner Siemens.
A benefit to DC machines came from the discovery of the reversibility of the electric machine, which was announced by Siemens in 1867 and observed by Pacinotti in 1869. Gramme accidentally demonstrated it when he connected two such DC devices up to 2 km from each other, using one of them as a generator and the other as a motor.
The drum rotor was introduced by Friedrich von Hefner-Alteneck of Siemens & Halske to replace Pacinotti's ring armature in 1872, thus improving the machine efficiency. The laminated rotor was introduced by Siemens & Halske the following year, achieving reduced iron losses and increased induced voltages. In 1880, Jonas Wenström provided the rotor with slots for housing the winding, further increasing the efficiency.
In 1886, Frank Julian Sprague invented the first practical DC motor, a non-sparking device that maintained relatively constant speed under variable loads. Other Sprague electric inventions about this time greatly improved grid electric distribution (prior work done while employed by Thomas Edison), allowed power from electric motors to be returned to the electric grid, provided for electric distribution to trolleys via overhead wires and the trolley pole, and provided control systems for electric operations. This allowed Sprague to use electric motors to invent the first electric trolley system in 1887–88 in Richmond, Virginia, the electric elevator and control system in 1892, and the electric subway with independently powered centrally-controlled cars. The latter were first installed in 1892 in Chicago by the South Side Elevated Railroad, where it became popularly known as the "L". Sprague's motor and related inventions led to an explosion of interest and use in electric motors for industry. The development of electric motors of acceptable efficiency was delayed for several decades by failure to recognize the extreme importance of an air gap between the rotor and stator. Efficient designs have a comparatively small air gap. The St. Louis motor, long used in classrooms to illustrate motor principles, is inefficient for the same reason, as well as appearing nothing like a modern motor.
Electric motors revolutionized industry. Industrial processes were no longer limited by power transmission using line shafts, belts, compressed air or hydraulic pressure. Instead, every machine could be equipped with its own power source, providing easy control at the point of use, and improving power transmission efficiency. Electric motors applied in agriculture eliminated human and animal muscle power from such tasks as handling grain or pumping water. Household uses of electric motors (in washing machines, dishwashers, fans, air conditioners and refrigerators, which replaced ice boxes) reduced heavy labor in the home and made higher standards of convenience, comfort and safety possible. Today, electric motors consume more than half of the electric energy produced in the US.
AC motors.
In 1824, French physicist François Arago formulated the existence of rotating magnetic fields, termed Arago's rotations, which, by manually turning switches on and off, Walter Baily demonstrated in 1879 as in effect the first primitive induction motor. In the 1880s many inventors were trying to develop workable AC motors because AC's advantages in long-distance high-voltage transmission were offset by the inability to operate motors on AC.
The first alternating-current commutatorless induction motor was invented by Galileo Ferraris in 1885. Ferraris was able to improve his first design by producing more advanced setups in 1886. In 1888, the "Royal Academy of Science of Turin" published Ferraris's research detailing the foundations of motor operation, while concluding at that time that "the apparatus based on that principle could not be of any commercial importance as motor."
Possible industrial development was envisioned by Nikola Tesla, who invented independently his induction motor in 1887 and obtained a patent in May 1888. In the same year, Tesla presented his paper "A New System of Alternate Current Motors and Transformers" to the AIEE that described three patented two-phase four-stator-pole motor types: one with a four-pole rotor forming a non-self-starting reluctance motor, another with a wound rotor forming a self-starting induction motor, and the third a true synchronous motor with separately excited DC supply to rotor winding. One of the patents Tesla filed in 1887, however, also described a shorted-winding-rotor induction motor. George Westinghouse, who had already acquired rights from Ferraris (US$1,000), promptly bought Tesla's patents (US$60,000 plus US$2.50 per sold hp, paid until 1897), employed Tesla to develop his motors, and assigned C.F. Scott to help Tesla; however, Tesla left for other pursuits in 1889. The constant speed AC induction motor was found not to be suitable for street cars, but Westinghouse engineers successfully adapted it to power a mining operation in Telluride, Colorado in 1891. Westinghouse achieved its first practical induction motor in 1892 and developed a line of polyphase 60 hertz induction motors in 1893, but these early Westinghouse motors were two-phase motors with wound rotors. B.G. Lamme later developed a rotating bar winding rotor.
Steadfast in his promotion of three-phase development, Mikhail Dolivo-Dobrovolsky invented the three-phase induction motor in 1889, of both types cage-rotor and wound rotor with a starting rheostat, and the three-limb transformer in 1890. After an agreement between AEG and Maschinenfabrik Oerlikon, Doliwo-Dobrowolski and Charles Eugene Lancelot Brown developed larger models, namely a 20-hp squirrel cage and a 100-hp wound rotor with a starting rheostat. These were the first three-phase asynchronous motors suitable for practical operation. Since 1889, similar developments of three-phase machinery were started by Wenström. At the 1891 Frankfurt International Electrotechnical Exhibition, the first long distance three-phase system was successfully presented. It was rated 15 kV and extended over 175 km from the Lauffen waterfall on the Neckar river. The Lauffen power station included a 240 kW 86 V 40 Hz alternator and a step-up transformer, while at the exhibition a step-down transformer fed a 100-hp three-phase induction motor that powered an artificial waterfall, representing the transfer of the original power source. The three-phase induction motor is now used for the vast majority of commercial applications. Mikhail Dolivo-Dobrovolsky claimed that Tesla's motor was not practical because of two-phase pulsations, which prompted him to persist in his three-phase work.
The General Electric Company began developing three-phase induction motors in 1891. By 1896, General Electric and Westinghouse signed a cross-licensing agreement for the bar-winding-rotor design, later called the squirrel-cage rotor. Induction motor improvements flowing from these inventions and innovations were such that a 100-horsepower induction motor currently has the same mounting dimensions as a 7.5-horsepower motor in 1897.
Twenty-first century.
In 2022, electric motor sales were estimated to be 800 million units, increasing by 10% annually. Electric motors consume ≈50% of the world's electricity. Since the 1980s, the market share of DC motors has declined in favor of AC motors.
Components.
An electric motor has two mechanical parts: the rotor, which moves, and the stator, which does not. Electrically, the motor consists of two parts, the field magnets and the armature, one of which is attached to the rotor and the other to the stator. Together they form a magnetic circuit. The magnets create a magnetic field that passes through the armature. These can be electromagnets or permanent magnets. The field magnet is usually on the stator and the armature on the rotor, but these may be reversed.
Rotor.
The rotor is the moving part that delivers the mechanical power. The rotor typically holds conductors that carry currents, on which the magnetic field of the stator exerts force to turn the shaft.
Stator.
The stator surrounds the rotor, and usually holds field magnets, which are either electromagnets (wire windings around a ferromagnetic iron core) or permanent magnets. These create a magnetic field that passes through the rotor armature, exerting force on the rotor windings. The stator core is made up of many thin metal sheets that are insulated from each other, called laminations. These laminations are made of electrical steel, which has a specified magnetic permeability, hysteresis, and saturation. Laminations reduce losses that would result from induced circulating eddy currents that would flow if a solid core were used. Mains powered AC motors typically immobilize the wires within the windings by impregnating them with varnish in a vacuum. This prevents the wires in the winding from vibrating against each other which would abrade the wire insulation and cause premature failures. Resin-packed motors, used in deep well submersible pumps, washing machines, and air conditioners, encapsulate the stator in plastic resin to prevent corrosion and/or reduce conducted noise.
Gap.
An air gap between the stator and rotor allows it to turn. The width of the gap has a significant effect on the motor's electrical characteristics. It is generally made as small as possible, as a large gap weakens performance. Conversely, gaps that are too small may create friction in addition to noise.
Armature.
The armature consists of wire windings on a ferromagnetic core. Electric current passing through the wire causes the magnetic field to exert a force (Lorentz force) on it, turning the rotor. Windings are coiled wires, wrapped around a laminated, soft, iron, ferromagnetic core so as to form magnetic poles when energized with current.
Electric machines come in salient- and nonsalient-pole configurations. In a salient-pole motor the rotor and stator ferromagnetic cores have projections called poles that face each other. Wire is wound around each pole below the pole face, which become north or south poles when current flows through the wire. In a nonsalient-pole (distributed field or round-rotor) motor, the ferromagnetic core is a smooth cylinder, with the windings distributed evenly in slots around the circumference. Supplying alternating current in the windings creates poles in the core that rotate continuously. A shaded-pole motor has a winding around part of the pole that delays the phase of the magnetic field for that pole.
Commutator.
A commutator is a rotary electrical switch that supplies current to the rotor. It periodically reverses the flow of current in the rotor windings as the shaft rotates. It consists of a cylinder composed of multiple metal contact segments on the armature. Two or more electrical contacts called "brushes" made of a soft conductive material like carbon press against the commutator. The brushes make sliding contact with successive commutator segments as the rotator turns, supplying current to the rotor. The windings on the rotor are connected to the commutator segments. The commutator reverses the current direction in the rotor windings with each half turn (180°), so the torque applied to the rotor is always in the same direction. Without this reversal, the direction of torque on each rotor winding would reverse with each half turn, stopping the rotor. Commutated motors have been mostly replaced by brushless motors, permanent magnet motors, and induction motors.
Shaft.
The motor shaft extends outside of the motor, where it drives the load. Because the forces of the load are exerted beyond the outermost bearing, the load is said to be overhung.
Bearings.
The rotor is supported by bearings, which allow the rotor to turn on its axis by transferring the force of axial and radial loads from the shaft to the motor housing.
Inputs.
Power supply.
A DC motor is usually supplied through a split ring commutator as described above.
AC motors' commutation can be achieved using either a slip ring commutator or external commutation. It can be fixed-speed or variable-speed control type, and can be synchronous or asynchronous. Universal motors can run on either AC or DC.
Control.
DC motors can be operated at variable speeds by adjusting the voltage applied to the terminals or by using pulse-width modulation (PWM).
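A minimal sketch of the PWM idea: averaged over many switching cycles, the winding sees a voltage equal to the duty cycle times the supply voltage, so varying the duty cycle varies the speed. The supply voltage and duty-cycle values below are illustrative assumptions:

```python
def pwm_average_voltage(v_supply, duty_cycle):
    """Average voltage applied to the motor when the supply is switched on
    for `duty_cycle` (0.0-1.0) of each PWM period."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return v_supply * duty_cycle

for duty in (0.25, 0.5, 0.75, 1.0):
    print(f"duty {duty:.0%}: average ≈ {pwm_average_voltage(24.0, duty):.1f} V")
```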
AC motors operated at a fixed speed are generally powered directly from the grid or through motor soft starters.
AC motors operated at variable speeds are powered with various power inverter, variable-frequency drive or electronic commutator technologies.
The term electronic commutator is usually associated with self-commutated brushless DC motor and switched reluctance motor applications.
Types.
Electric motors operate on one of three physical principles: magnetism, electrostatics and piezoelectricity.
In magnetic motors, magnetic fields are formed in both the rotor and the stator. The interaction between these two fields produces a force, and thus a torque, on the motor shaft. One or both of these fields changes as the rotor turns. This is done by switching the poles on and off at the right time, or by varying the strength of the pole.
Motors operate on either DC or AC current (or either).
AC motors can be either asynchronous or synchronous. Synchronous motors require the rotor to turn at the same speed as the stator's rotating field. Asynchronous rotors relax this constraint.
A fractional-horsepower motor either has a rating below about 1 horsepower (0.746 kW), or is manufactured with a frame size smaller than a standard 1 HP motor. Many household and industrial motors are in the fractional-horsepower class.
Notes:
1. Rotation is independent of the frequency of the AC voltage.
2. Rotation is equal to synchronous speed (motor-stator-field speed).
3. In SCIM, fixed-speed operation rotation is equal to synchronous speed, less slip speed.
4. In non-slip energy-recovery systems, WRIM is usually used for motor-starting but can be used to vary load speed.
5. Variable-speed operation.
6. Whereas induction- and synchronous-motor drives are typically with either six-step or sinusoidal-waveform output, BLDC-motor drives are usually with trapezoidal-current waveform; the behavior of both sinusoidal and trapezoidal PM machines is, however, identical in terms of their fundamental aspects.
7. In variable-speed operation, WRIM is used in slip-energy recovery and double-fed induction-machine applications.
8. A cage winding is a short-circuited squirrel-cage rotor, a wound winding is connected externally through slip rings.
9. Mostly single-phase with some three-phase.
Abbreviations:
Self-commutated motor.
Brushed DC motor.
Most DC motors are small permanent magnet (PM) types. They contain a brushed internal mechanical commutation to reverse motor windings' current in synchronism with rotation.
Electrically excited DC motor.
A commutated DC motor has a set of rotating windings wound on an armature mounted on a rotating shaft. The shaft also carries the commutator. Thus, every brushed DC motor has AC flowing through its windings. Current flows through one or more pairs of brushes that touch the commutator; the brushes connect an external source of electric power to the rotating armature.
The rotating armature consists of one or more wire coils wound around a laminated, magnetically "soft" ferromagnetic core. Current from the brushes flows through the commutator and one winding of the armature, making it a temporary magnet (an electromagnet). The magnetic field produced interacts with a stationary magnetic field produced by either PMs or another winding (a field coil), as part of the motor frame. The force between the two magnetic fields rotates the shaft. The commutator switches power to the coils as the rotor turns, keeping the poles from ever fully aligning with the magnetic poles of the stator field, so that the rotor keeps turning as long as power is applied.
Many of the limitations of the classic commutator DC motor are due to the need for brushes to maintain contact with the commutator, creating friction. The brushes create sparks while crossing the insulating gaps between commutator sections. Depending on the commutator design, the brushes may create short circuits between adjacent sections—and hence coil ends. Furthermore, the rotor coils' inductance causes the voltage across each to rise when its circuit opens, increasing the sparking. This sparking limits the maximum speed of the machine, as too-rapid sparking will overheat, erode, or even melt the commutator. The current density per unit area of the brushes, in combination with their resistivity, limits the motor's output. Crossing the gaps also generates electrical noise; sparking generates RFI. Brushes eventually wear out and require replacement, and the commutator itself is subject to wear and maintenance or replacement. The commutator assembly on a large motor is a costly element, requiring precision assembly of many parts. On small motors, the commutator is usually permanently integrated into the rotor, so replacing it usually requires replacing the rotor.
While most commutators are cylindrical, some are flat, segmented discs mounted on an insulator.
Large brushes create a large contact area, which maximizes motor output, while small brushes have low mass to maximize the speed at which the motor can run without excessive sparking. (Small brushes are desirable for their lower cost.) Stiffer brush springs can be used to make brushes of a given mass work at a higher speed, despite greater friction losses (lower efficiency) and accelerated brush and commutator wear. Therefore, DC motor brush design entails a trade-off between output power, speed, and efficiency/wear.
DC machines are defined as follows:
The five types of brushed DC motor are:
Permanent magnet.
A permanent magnet (PM) motor does not have a field winding on the stator frame, relying instead on PMs to provide the magnetic field. Compensating windings in series with the armature may be used on large motors to improve commutation under load. This field is fixed and cannot be adjusted for speed control. PM fields (stators) are convenient in miniature motors to eliminate the power consumption of the field winding. Most larger DC motors are of the "dynamo" type, which have stator windings. Historically, PMs could not be made to retain high flux if they were disassembled; field windings were more practical to obtain the needed flux. However, large PMs are costly, as well as dangerous and difficult to assemble; this favors wound fields for large machines.
To minimize overall weight and size, miniature PM motors may use high energy magnets made with neodymium; most are neodymium-iron-boron alloy. With their higher flux density, electric machines with high-energy PMs are at least competitive with all optimally designed singly-fed synchronous and induction electric machines. Miniature motors resemble the structure in the illustration, except that they have at least three rotor poles (to ensure starting, regardless of rotor position) and their outer housing is a steel tube that magnetically links the exteriors of the curved field magnets.
Electronic commutator (EC).
Brushless DC.
Some of the problems of the brushed DC motor are eliminated in the BLDC design. In this motor, the mechanical "rotating switch" or commutator is replaced by an external electronic switch synchronised to the rotor's position. BLDC motors are typically 85%+ efficient, reaching up to 96.5%, while brushed DC motors are typically 75–80% efficient.
The BLDC motor's characteristic trapezoidal counter-electromotive force (CEMF) waveform is derived partly from the stator windings being evenly distributed, and partly from the placement of the rotor's permanent magnets. Also known as electronically commutated DC or inside-out DC motors, the stator windings of trapezoidal BLDC motors can be single-phase, two-phase or three-phase and use Hall effect sensors mounted on their windings for rotor position sensing and low cost closed-loop commutator control.
BLDC motors are commonly used where precise speed control is necessary, as in computer disk drives or video cassette recorders, the spindles within CD and CD-ROM drives, and mechanisms within office products such as fans, laser printers and photocopiers. They have several advantages over conventional motors:
Modern BLDC motors range in power from a fraction of a watt to many kilowatts. Larger BLDC motors rated up to about 100 kW are used in electric vehicles. They also find use in electric model aircraft.
Switched reluctance motor.
The switched reluctance motor (SRM) has no brushes or permanent magnets, and the rotor has no electric currents. Torque comes from a slight misalignment of poles on the rotor with poles on the stator. The rotor aligns itself with the magnetic field of the stator, while the stator field windings are sequentially energized to rotate the stator field.
The magnetic flux created by the field windings follows the path of least magnetic reluctance, sending the flux through rotor poles that are closest to the energized poles of the stator, thereby magnetizing those poles of the rotor and creating torque. As the rotor turns, different windings are energized, keeping the rotor turning.
SRMs are used in some appliances and vehicles.
Universal AC/DC motor.
A commutated, electrically excited, series or parallel wound motor is referred to as a universal motor because it can be designed to operate on either AC or DC power. A universal motor can operate well on AC because the current in both the field and the armature coils (and hence the resultant magnetic fields) synchronously reverse polarity, and hence the resulting mechanical force occurs in a constant direction of rotation.
Operating at normal power line frequencies, universal motors are often used in sub-kilowatt applications. Universal motors formed the basis of the traditional railway traction motor in electric railways. In this application, a motor designed to run on DC would suffer efficiency losses on AC power due to eddy current heating of its magnetic components, particularly the motor field pole-pieces, which for DC would have used solid (un-laminated) iron. They are now rarely used.
An advantage is that AC power may be used on motors that specifically have high starting torque and compact design if high running speeds are used; the trade-off is higher maintenance and a shorter lifetime. Such motors are used in devices that are not heavily used and have high starting-torque demands. Multiple taps on the field coil provide (imprecise) stepped speed control. Household blenders that advertise many speeds typically combine a field coil with several taps and a diode that can be inserted in series with the motor (causing the motor to run on half-wave rectified AC). Universal motors also lend themselves to electronic speed control and, as such, are a choice for devices such as domestic washing machines. The motor can agitate the drum (both forwards and in reverse) by switching the field winding with respect to the armature.
Whereas SCIMs cannot turn a shaft faster than allowed by the power line frequency, universal motors can run at much higher speeds. This makes them useful for appliances such as blenders, vacuum cleaners, and hair dryers where high speed and light weight are desirable. They are also commonly used in portable power tools, such as drills, sanders, circular and jig saws, where the motor's characteristics work well. Many vacuum cleaner and weed trimmer motors exceed 10,000 rpm, while miniature grinders may exceed 30,000 rpm.
Externally commutated AC machine.
AC induction and synchronous motors are optimized for operation on single-phase or polyphase sinusoidal or quasi-sinusoidal waveform power such as supplied for fixed-speed applications by the AC power grid or for variable-speed application from variable-frequency drive (VFD) controllers.
Induction motor.
An induction motor is an asynchronous AC motor where power is transferred to the rotor by electromagnetic induction, much like transformer action. An induction motor resembles a rotating transformer, because the stator (stationary part) is essentially the primary side of the transformer and the rotor (rotating part) is the secondary side. Polyphase induction motors are widely used in industry.
Cage and wound rotor.
Induction motors may be divided into Squirrel Cage Induction Motors (SCIM) and Wound Rotor Induction Motors (WRIM). SCIMs have a heavy winding made up of solid bars, usually aluminum or copper, electrically connected by rings at the ends of the rotor. The bars and rings as a whole are much like an animal's rotating exercise cage.
Currents induced into this winding provide the rotor magnetic field. The shape of the rotor bars determines the speed-torque characteristics. At low speeds, the current induced in the squirrel cage is nearly at line frequency and tends to stay in the outer parts of the cage. As the motor accelerates, the slip frequency becomes lower, and more current reaches the interior. By shaping the bars to change the resistance of the winding portions in the interior and outer parts of the cage, a variable resistance is effectively inserted in the rotor circuit. However, most such motors employ uniform bars.
In a WRIM, the rotor winding is made of many turns of insulated wire and is connected to slip rings on the motor shaft. An external resistor or other control device can be connected in the rotor circuit. Resistors allow control of the motor speed, although dissipating significant power. A converter can be fed from the rotor circuit and return the slip-frequency power that would otherwise be wasted into the power system through an inverter or separate motor-generator.
WRIMs are used primarily to start a high inertia load or a load that requires high starting torque across the full speed range. By correctly selecting the resistors used in the secondary resistance or slip ring starter, the motor is able to produce maximum torque at a relatively low supply current from zero speed to full speed.
Motor speed can be changed because the motor's torque curve is effectively modified by the amount of resistance connected to the rotor circuit. Increasing resistance lowers the speed of maximum torque. If the resistance is increased beyond the point where the maximum torque occurs at zero speed, the torque is further reduced.
When used with a load that has a torque curve that increases with speed, the motor operates at the speed where the torque developed by the motor is equal to the load torque. Reducing the load causes the motor to speed up, while increasing the load causes the motor to slow down until the load and motor torque are again equal. Operated in this manner, the slip losses are dissipated in the secondary resistors and can be significant. The speed regulation and net efficiency is poor.
Torque motor.
A torque motor can operate indefinitely while stalled, that is, with the rotor blocked from turning, without incurring damage. In this mode of operation, the motor applies a steady torque to the load.
A common application is the supply- and take-up reel motors in a tape drive. In this application, driven by a low voltage, the characteristics of these motors apply a steady light tension to the tape whether or not the capstan is feeding tape past the tape heads. Driven from a higher voltage (delivering a higher torque), torque motors can achieve fast-forward and rewind operation without requiring additional mechanics such as gears or clutches. In the computer gaming world, torque motors are used in force feedback steering wheels.
Another common application is to control the throttle of an internal combustion engine with an electronic governor. The motor works against a return spring to move the throttle in accord with the governor output. The latter monitors engine speed by counting electrical pulses from the ignition system or from a magnetic pickup and depending on the speed, makes small adjustments to the amount of current. If the engine slows down relative to the desired speed, the current increases, producing more torque, pulling against the return spring and opening the throttle. Should the engine run too fast, the governor reduces the current, allowing the return spring to pull back and reduce the throttle.
Synchronous motor.
A synchronous electric motor is an AC motor. It includes a rotor spinning with coils passing magnets at the same frequency as the AC and produces a magnetic field to drive it. It has zero slip under typical operating conditions. By contrast induction motors must slip to produce torque. One type of synchronous motor is like an induction motor except that the rotor is excited by a DC field. Slip rings and brushes conduct current to the rotor. The rotor poles connect to each other and move at the same speed. Another type, for low load torque, has flats ground onto a conventional squirrel-cage rotor to create discrete poles. Yet another, as made by Hammond for its pre-World War II clocks, and in older Hammond organs, has no rotor windings and discrete poles. It is not self-starting. The clock requires manual starting by a small knob on the back, while the older Hammond organs had an auxiliary starting motor connected by a spring-loaded manually operated switch.
Hysteresis synchronous motors typically are (essentially) two-phase motors with a phase-shifting capacitor for one phase. They start like induction motors, but when slip rate decreases sufficiently, the rotor (a smooth cylinder) becomes temporarily magnetized. Its distributed poles make it act like a permanent magnet synchronous motor. The rotor material, like that of a common nail, stays magnetized, but can be demagnetized with little difficulty. Once running, the rotor poles stay in place; they do not drift.
Low-power synchronous timing motors (such as those for traditional electric clocks) may have multi-pole permanent magnet external cup rotors, and use shading coils to provide starting torque. "Telechron" clock motors have shaded poles for starting torque, and a two-spoke ring rotor that performs like a discrete two-pole rotor.
Doubly-fed electric machine.
Doubly fed electric motors have two independent multiphase winding sets, which contribute active (i.e., working) power to the energy conversion process, with at least one of the winding sets electronically controlled for variable speed operation. Two independent multiphase winding sets (i.e., dual armature) are the maximum provided in a single package without topology duplication. Doubly-fed electric motors have an effective constant torque speed range that is twice synchronous speed for a given frequency of excitation. This is twice the constant torque speed range as singly-fed electric machines, which have only one active winding set.
A doubly-fed motor allows for a smaller electronic converter, but the cost of the rotor winding and slip rings may offset the saving in the power electronics components. Difficulty in controlling speed near synchronous speed limits its applications.
Advanced types.
Rotary.
Ironless or coreless rotor motor.
The coreless or ironless DC motor is a specialized permanent magnet DC motor. Optimized for rapid acceleration, the rotor is constructed without an iron core. The rotor can take the form of a winding-filled cylinder, or a self-supporting structure comprising only wire and bonding material. The rotor can fit inside the stator magnets; a magnetically soft stationary cylinder inside the rotor provides a return path for the stator magnetic flux. A second arrangement has the rotor winding basket surrounding the stator magnets. In that design, the rotor fits inside a magnetically soft cylinder that can serve as the motor housing, and provides a return path for the flux.
Because the rotor is much lower mass than a conventional rotor, it can accelerate much more rapidly, often achieving a mechanical time constant under one millisecond. This is especially true if the windings use aluminum rather than (heavier) copper. The rotor has no metal mass to act as a heat sink; even small motors must be cooled. Overheating can be an issue for these designs.
The vibrating alert of cellular phones can be generated by cylindrical permanent-magnet motors, or disc-shaped types that have a thin multipolar disc field magnet, and an intentionally unbalanced molded-plastic rotor structure with two bonded coreless coils. Metal brushes and a flat commutator switch power to the rotor coils.
Related limited-travel actuators have no core and a bonded coil placed between the poles of high-flux thin permanent magnets. These are the fast head positioners for rigid-disk ("hard disk") drives. Although the contemporary design differs considerably from that of loudspeakers, it is still loosely (and incorrectly) referred to as a "voice coil" structure, because some earlier rigid-disk-drive heads moved in straight lines, and had a drive structure much like that of a loudspeaker.
Pancake or axial rotor motor.
The printed armature or pancake motor has windings shaped as a disc running between arrays of high-flux magnets. The magnets are arranged in a circle facing the rotor spaced to form an axial air gap. This design is commonly known as the pancake motor because of its flat profile.
The armature (originally formed on a printed circuit board) is made from punched copper sheets that are laminated together using advanced composites to form a thin, rigid disc. The armature does not have a separate ring commutator. The brushes move directly on the armature surface making the whole design compact.
An alternative design is to use wound copper wire laid flat with a central conventional commutator, in a flower and petal shape. The windings are typically stabilized with electrical epoxy potting systems. These are filled epoxies that have moderate, mixed viscosity and a long gel time. They are highlighted by low shrinkage and low exotherm, and are typically UL 1446 recognized as a potting compound with a Class H insulation rating.
The unique advantage of ironless DC motors is the absence of cogging (torque variations caused by changing attraction between the iron and the magnets). Parasitic eddy currents cannot form in the rotor as it is totally ironless, although iron rotors are laminated. This can greatly improve efficiency, but variable-speed controllers must use a higher switching rate (>40 kHz) or DC because of decreased electromagnetic induction.
These motors were invented to drive the capstan(s) of magnetic tape drives, where minimal time to reach operating speed and minimal stopping distance were critical. Pancake motors are widely used in high-performance servo-controlled systems, robotic systems, industrial automation and medical devices. Due to the variety of constructions now available, the technology is used in applications from high temperature military to low cost pump and basic servos.
Another approach (Magnax) is to use a single stator sandwiched between two rotors. One such design has produced peak power of 15 kW/kg, sustained power around 7.5 kW/kg. This yokeless axial flux motor offers a shorter flux path, keeping the magnets further from the axis. The design allows zero winding overhang; 100 percent of the windings are active. This is enhanced with the use of rectangular cross-section copper wire. The motors can be stacked to work in parallel. Instabilities are minimized by ensuring that the two rotor discs put equal and opposing forces onto the stator disc. The rotors are connected directly to one another via a shaft ring, cancelling out the magnetic forces.
Servomotor.
A servomotor is a motor that is used within a position-control or speed-control feedback system. Servomotors are used in applications such as machine tools, pen plotters, and other process systems. Motors intended for use in a servomechanism must have predictable characteristics for speed, torque, and power. The speed/torque curve is quite important, and for a servomotor it has a high ratio. Dynamic response characteristics such as winding inductance and rotor inertia are important; these factors limit performance. Large, powerful, but slow-responding servo loops may use conventional AC or DC motors and drive systems with position or speed feedback. As dynamic response requirements increase, more specialized motor designs such as coreless motors are used. AC motors' superior power density and acceleration characteristics tend to favor permanent magnet synchronous, BLDC, induction, and SRM drive approaches.
A servo system differs from some stepper motor applications in that position feedback is continuous while the motor is running. A stepper system inherently operates open-loop—relying on the motor not to "miss steps" for short term accuracy—with any feedback such as a "home" switch or position encoder external to the motor system.
Stepper motor.
Stepper motors are typically used to provide precise rotations. An internal rotor containing permanent magnets or a magnetically soft rotor with salient poles is controlled by a set of electronically switched external magnets. A stepper motor may also be thought of as a cross between a DC electric motor and a rotary solenoid. As each coil is energized in turn, the rotor aligns itself with the magnetic field produced by the energized field winding. Unlike a synchronous motor, the stepper motor may not rotate continuously; instead, it moves in steps—starting and then stopping—advancing from one position to the next as field windings are energized and de-energized in sequence. Depending on the sequence, the rotor may turn forwards or backwards, and it may change direction, stop, speed up or slow down at any time.
Simple stepper motor drivers entirely energize or entirely de-energize the field windings, leading the rotor to "cog" to a limited number of positions. Microstepping drivers can proportionally control the power to the field windings, allowing the rotors to position between cog points and rotate smoothly. Computer-controlled stepper motors are one of the most versatile positioning systems, particularly as part of a digital servo-controlled system.
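A minimal sketch of the stepping idea described above, for an idealized two-phase stepper driven in full steps with one winding on at a time (wave drive); real drivers add current limiting, timing, and microstepping, all omitted here:

```python
# Wave-drive full-step sequence: each entry is the polarity applied to
# coil A and coil B.  Walking forward through the list advances the rotor
# one step per entry; walking backward reverses the direction.
WAVE_DRIVE = [(+1, 0), (0, +1), (-1, 0), (0, -1)]

def step_sequence(n_steps, direction=+1):
    """Yield (coil_A, coil_B) drive states for n_steps full steps."""
    index = 0
    for _ in range(n_steps):
        yield WAVE_DRIVE[index % len(WAVE_DRIVE)]
        index += direction

for state in step_sequence(6):
    print(state)
```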
Stepper motors can be rotated to a specific angle in discrete steps with ease, and hence stepper motors are used for read/write head positioning in early disk drives, where the precision and speed they offered could correctly position the read/write head. As drive density increased, precision and speed limitations made them obsolete for hard drives—the precision limitation made them unusable, and the speed limitation made them uncompetitive—thus newer hard disk drives use voice coil-based head actuator systems. (The term "voice coil" in this connection is historic; it refers to the structure in a cone-type loudspeaker.)
Stepper motors are often used in computer printers, optical scanners, and digital photocopiers to move the active element, the print head carriage (inkjet printers), and the platen or feed rollers.
So-called quartz analog wristwatches contain the smallest commonplace stepping motors; they have one coil, draw little power, and have a permanent magnet rotor. The same kind of motor drives battery-powered quartz clocks. Some of these watches, such as chronographs, contain more than one stepper motor.
Closely related in design to three-phase AC synchronous motors, stepper motors and SRMs are classified as variable reluctance motor type.
Linear.
A linear motor is essentially any electric motor that has been "unrolled" so that, instead of producing torque (rotation), it produces a straight-line force along its length.
Linear motors are most commonly induction motors or stepper motors. Linear motors are commonly found in roller-coasters where the rapid motion of the motorless railcar is controlled by the rail. They are also used in maglev trains, where the train "flies" over the ground. On a smaller scale, the 1978 era HP 7225A pen plotter used two linear stepper motors to move the pen along the X and Y axes.
Non-magnetic.
Electrostatic.
An electrostatic motor is based on the attraction and repulsion of electric charge. Usually, electrostatic motors are the dual of conventional coil-based motors. They typically require a high-voltage power supply, although small motors employ lower voltages. Conventional electric motors instead employ magnetic attraction and repulsion, and require high current at low voltages. In the 1750s, the first electrostatic motors were developed by Benjamin Franklin and Andrew Gordon. Electrostatic motors find frequent use in micro-electro-mechanical systems (MEMS) where their drive voltages are below 100 volts, and where moving, charged plates are far easier to fabricate than coils and iron cores. The molecular machinery that runs living cells is often based on linear and rotary electrostatic motors.
Piezoelectric.
A piezoelectric motor or piezo motor is a type of electric motor based upon the change in shape of a piezoelectric material when an electric field is applied. Piezoelectric motors make use of the converse piezoelectric effect whereby the material produces acoustic or ultrasonic vibrations to produce linear or rotary motion. In one mechanism, the elongation in a single plane is used to make a series of stretches and position holds, similar to the way a caterpillar moves.
Electric propulsion.
An electrically powered spacecraft propulsion system uses electric motor technology to propel spacecraft in outer space. Most systems are based on electrically accelerating propellant to high speed, while some systems are based on electrodynamic tethers principles of propulsion to the magnetosphere.
Operating principles.
Force and torque.
An electric motor converts electrical energy to mechanical energy through the force between two opposed magnetic fields. At least one of the two magnetic fields must be created by an electromagnet through the magnetic field caused by an electrical current.
The force on a current formula_0 in a conductor of length formula_1 perpendicular to a magnetic field formula_2 may be calculated using the Lorentz force law:
formula_3
Note: × denotes the vector cross product.
The most general approaches to calculating the forces in motors use tensor notation.
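A small sketch of the force law quoted above (formula_3), using a vector cross product; the numerical values are arbitrary:

```python
import numpy as np

def conductor_force(current, length_vector, b_field):
    """Force (N) on a straight conductor carrying `current` (A):
    F = I * (L x B), with L in metres and B in teslas."""
    return current * np.cross(length_vector, b_field)

# 10 A through a 0.2 m conductor along x, in a 1.5 T field along y:
force = conductor_force(10.0, np.array([0.2, 0.0, 0.0]), np.array([0.0, 1.5, 0.0]))
print(force)  # [0. 0. 3.] -> 3 N along z
```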
Power.
Electric motor output power is given as formula_4, where formula_6 is the torque, formula_5 is the angular velocity, formula_7 is the force, and formula_8 is the linear velocity.
In Imperial units, a motor's mechanical power output is given by
formula_9 (horsepower)
where formula_10 is the shaft speed in revolutions per minute and formula_6 is the torque in pound-feet.
In an asynchronous or induction motor, the relationship between motor speed and air gap power is given by the following:
formula_11, where
Rr – rotor resistance
Ir2 – square of current induced in the rotor
s – motor slip; i.e., difference between synchronous speed and slip speed, which provides the relative movement needed for current induction in the rotor.
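A sketch of the two output-power expressions above: mechanical power as torque times angular velocity, and the Imperial horsepower form, whose constant 5252 is approximately 33,000/(2π). The sample numbers are illustrative:

```python
import math

def mech_power_watts(torque_nm, omega_rad_s):
    """P = T * omega, with torque in newton-metres and speed in rad/s."""
    return torque_nm * omega_rad_s

def mech_power_hp(torque_lbft, speed_rpm):
    """Imperial form: P[hp] = rpm * torque[lb.ft] / 5252."""
    return speed_rpm * torque_lbft / 5252.0

# A motor delivering 20 N.m at 1500 rpm (20 N.m is about 14.75 lb.ft):
omega = 1500 * 2 * math.pi / 60
print(mech_power_watts(20.0, omega))  # ≈ 3141.6 W
print(mech_power_hp(14.75, 1500))     # ≈ 4.2 hp
```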
Back EMF.
The movement of the armature windings of a direct-current or universal motor through a magnetic field induces a voltage in them. This voltage tends to oppose the motor supply voltage and so is called "back electromotive force (EMF)". The voltage is proportional to the running speed of the motor. The back EMF of the motor, plus the voltage drop across the winding internal resistance and brushes, must equal the voltage at the brushes. This provides the fundamental mechanism of speed regulation in a DC motor. If the mechanical load increases, the motor slows down; a lower back EMF results, and more current is drawn from the supply. This increased current provides the additional torque to balance the load.
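A minimal sketch of the speed-regulation mechanism just described, using a simplified permanent-magnet DC motor model in which back EMF is proportional to speed and torque is proportional to current; the motor constants are illustrative assumptions, not values from the source:

```python
def dc_motor_steady_state(v_supply, load_torque, k, r_armature):
    """Steady state of an idealized PM DC motor.
    Back EMF: E = k * omega;  torque: T = k * I;  supply: V = E + I * R.
    Returns (current in A, speed in rad/s)."""
    current = load_torque / k                   # current needed to hold the load
    back_emf = v_supply - current * r_armature  # remaining voltage is back EMF
    omega = back_emf / k                        # speed at which the motor settles
    return current, omega

# Increasing the load torque draws more current and lowers the speed:
for t_load in (0.05, 0.10, 0.20):  # N.m
    i, w = dc_motor_steady_state(v_supply=12.0, load_torque=t_load,
                                 k=0.05, r_armature=1.0)
    print(f"load {t_load:.2f} N.m -> {i:.1f} A, {w:.0f} rad/s")
```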
In AC machines, it is sometimes useful to consider a back EMF source within the machine; this is of particular concern for close speed regulation of induction motors on VFDs.
Losses.
Motor losses are mainly due to resistive losses in windings, core losses, and mechanical losses in bearings; aerodynamic losses, particularly where cooling fans are present, also occur.
Losses also occur in commutation: mechanical commutators spark, and electronic commutators also dissipate heat.
Efficiency.
To calculate a motor's efficiency, the mechanical output power is divided by the electrical input power:
formula_12,
where formula_13 is energy conversion efficiency, formula_14 is electrical input power, and formula_15 is mechanical output power:
formula_16
formula_17
where formula_18 is input voltage, formula_0 is input current, formula_6 is output torque, and formula_19 is output angular velocity. It is possible to derive analytically the point of maximum efficiency. It is typically at less than 1/2 the stall torque.
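A short sketch of the efficiency calculation above, dividing mechanical output (formula_6 times formula_19) by electrical input (input voltage times input current, as in the article's formulation, which ignores power factor); the measurement values are illustrative:

```python
import math

def motor_efficiency(voltage, current, torque, omega):
    """Energy conversion efficiency = mechanical output / electrical input."""
    p_in = voltage * current   # electrical input power, W
    p_out = torque * omega     # mechanical output power, W
    return p_out / p_in

# Example: 230 V and 6.5 A input, delivering 8.2 N.m at 1450 rpm:
omega = 1450 * 2 * math.pi / 60
print(f"efficiency ≈ {motor_efficiency(230.0, 6.5, 8.2, omega):.1%}")
```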
Various national regulatory authorities have enacted legislation to encourage the manufacture and use of higher-efficiency motors. Electric motors have efficiencies ranging from around 15%-20% for shaded-pole motors up to 98% for permanent magnet motors, with efficiency also dependent on load. Peak efficiency is usually at 75% of the rated load, so (as an example) a 10 HP motor is most efficient when driving a load that requires 7.5 HP. Efficiency also depends on motor size; larger motors tend to be more efficient. Some motors cannot operate continually for more than a specified period of time (e.g. for more than an hour per run).
Goodness factor.
Eric Laithwaite proposed a metric to determine the 'goodness' of an electric motor:
formula_20
Where:
formula_21 is the goodness factor (factors above 1 are likely to be efficient)
formula_22 are the cross sectional areas of the magnetic and electric circuit
formula_23 are the lengths of the magnetic and electric circuits
formula_24 is the permeability of the core
formula_19 is the angular frequency the motor is driven at
From this, he showed that the most efficient motors are likely to have relatively large magnetic poles. However, the equation only directly relates to non-PM motors.
Performance parameters.
Torque.
Electromagnetic motors derive torque from the vector product of the interacting fields. Calculating torque requires knowledge of the fields in the air gap. Once these have been established, the torque is the integral of all the force vectors multiplied by the vector's radius. The current flowing in the winding produces the fields. For a motor using a magnetic material the field is not proportional to the current.
A figure relating the current to the torque can inform motor selection. The maximum torque for a motor depends on the maximum current, absent thermal considerations.
When optimally designed within a given core saturation constraint and for a given active current (i.e., torque current), voltage, pole-pair number, excitation frequency (i.e., synchronous speed), and air-gap flux density, all categories of electric motors/generators exhibit virtually the same maximum continuous shaft torque (i.e., operating torque) within a given air-gap area with winding slots and back-iron depth, which determines the physical size of electromagnetic core. Some applications require bursts of torque beyond the maximum, such as bursts to accelerate an electric vehicle from standstill. Always limited by magnetic core saturation or safe operating temperature rise and voltage, the capacity for torque bursts beyond the maximum differs significantly across motor/generator types.
Electric machines without a transformer circuit topology, such as that of WRSMs or PMSMs, cannot provide torque bursts without saturating the magnetic core. At that point, additional current cannot increase torque. Furthermore, the permanent magnet assembly of PMSMs can be irreparably damaged.
Electric machines with a transformer circuit topology, such as induction machines, induction doubly-fed electric machines, and induction or synchronous wound-rotor doubly-fed (WRDF) machines, permit torque bursts because the EMF-induced active current on either side of the transformer oppose each other and thus contribute nothing to the transformer coupled magnetic core flux density, avoiding core saturation.
Electric machines that rely on induction or asynchronous principles short-circuit one port of the transformer circuit and as a result, the reactive impedance of the transformer circuit becomes dominant as slip increases, which limits the magnitude of active (i.e., real) current. Torque bursts two to three times higher than the maximum design torque are realizable.
The brushless wound-rotor synchronous doubly-fed (BWRSDF) machine is the only electric machine with a truly dual ported transformer circuit topology (i.e., both ports independently excited with no short-circuited port). The dual ported transformer circuit topology is known to be unstable and requires a multiphase slip-ring-brush assembly to propagate limited power to the rotor winding set. If a precision means were available to instantaneously control torque angle and slip for synchronous operation during operation while simultaneously providing brushless power to the rotor winding set, the active current of the BWRSDF machine would be independent of the reactive impedance of the transformer circuit and bursts of torque significantly higher than the maximum operating torque and far beyond the practical capability of any other type of electric machine would be realizable. Torque bursts greater than eight times operating torque have been calculated.
Continuous torque density.
The continuous torque density of conventional electric machines is determined by the size of the air-gap area and the back-iron depth, which are determined by the power rating of the armature winding set, the speed of the machine, and the achievable air-gap flux density before core saturation. Despite the high coercivity of neodymium or samarium-cobalt permanent magnets, continuous torque density is virtually the same amongst electric machines with optimally designed armature winding sets. Continuous torque density relates to method of cooling and permissible operation period before destruction by overheating of windings or permanent magnet damage.
Other sources state that various e-machine topologies have differing torque density. One source shows the following:
In that comparison, specific torque density is normalized to 1.0 for the surface permanent magnet (SPM) brushless AC machine with 180° current conduction.
Torque density is approximately four times greater for liquid cooled motors, compared to those which are air cooled.
A source comparing direct current, induction motors (IM), PMSM and SRM showed:
Another source notes that PMSM up to 1 MW have considerably higher torque density than induction machines.
Continuous power density.
The continuous power density is determined by the product of the continuous torque density and the constant torque speed range. Electric motors can achieve densities of up to 20 kW/kg, meaning 20 kilowatts of output power per kilogram.
Acoustic noise and vibrations.
Acoustic noise and vibrations are usually classified in three sources:
The latter source, which can be responsible for the "whining noise" of electric motors, is called electromagnetically induced acoustic noise.
Standards.
The following are major design, manufacturing, and testing standards covering electric motors:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "\\ell"
},
{
"math_id": 2,
"text": "\\mathbf{B}"
},
{
"math_id": 3,
"text": "\\mathbf{F} = I \\ell \\times \\mathbf{B}"
},
{
"math_id": 4,
"text": "P_\\text{em} = T\\omega = F v"
},
{
"math_id": 5,
"text": " \\omega "
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "F"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "P_\\text{em} = \\frac {\\omega_\\text{rpm} T}{5252}"
},
{
"math_id": 10,
"text": "\\omega_\\text{rpm}"
},
{
"math_id": 11,
"text": "P_\\text{airgap} = \\frac{R_r}{s} I_r^{2}"
},
{
"math_id": 12,
"text": "\\eta = \\frac{P_\\text{m}}{P_\\text{e}}"
},
{
"math_id": 13,
"text": "\\eta"
},
{
"math_id": 14,
"text": "P_\\text{e}"
},
{
"math_id": 15,
"text": "P_\\text{m}"
},
{
"math_id": 16,
"text": "P_\\text{e} = I V"
},
{
"math_id": 17,
"text": "P_\\text{m} = T \\omega"
},
{
"math_id": 18,
"text": "V"
},
{
"math_id": 19,
"text": "\\omega"
},
{
"math_id": 20,
"text": "G = \\frac {\\omega} {\\text{resistance} \\times \\text{reluctance}} = \\frac {\\omega \\mu \\sigma A_\\text{m} A_\\text{e}} {l_\\text{m} l_\\text{e}}"
},
{
"math_id": 21,
"text": "G"
},
{
"math_id": 22,
"text": "A_\\text{m}, A_\\text{e}"
},
{
"math_id": 23,
"text": "l_\\text{m}, l_\\text{e}"
},
{
"math_id": 24,
"text": "\\mu"
}
]
| https://en.wikipedia.org/wiki?curid=76086 |
76088934 | Solar reforming | Technology for conversion of waste
Solar reforming is the sunlight-driven conversion of diverse carbon waste resources (including solid, liquid, and gaseous waste streams such as biomass, plastics, industrial by-products, atmospheric carbon dioxide, etc.) into sustainable fuels (or energy vectors) and value-added chemicals. It encompasses a set of technologies (and processes) operating under ambient and aqueous conditions, utilizing solar spectrum to generate maximum value. Solar reforming offers an attractive and unifying solution to address the contemporary challenges of climate change and environmental pollution by creating a sustainable circular network of waste upcycling, clean fuel (and chemical) generation and the consequent mitigation of greenhouse emissions (in alignment with the United Nations Sustainable Development Goals).
Background.
The earliest sunlight-driven reforming (now referred to as photoreforming or PC reforming which forms a small sub-section of solar reforming; see "Definition and classifications" section) of waste-derived substrates involved the use of TiO2 semiconductor photocatalyst (generally loaded with a hydrogen evolution co-catalyst such as Pt). Kawai and Sakata from the Institute for Molecular Science, Okazaki, Japan in the 1980s reported that the organics derived from different solid waste matter could be used as electron donors to drive the generation of hydrogen gas over TiO2 photocatalyst composites. In 2017, Wakerley, Kuehnel and Reisner at the University of Cambridge, UK demonstrated the photocatalytic production of hydrogen using raw lignocellulosic biomass substrates in the presence of visible-light responsive CdS|CdOx quantum dots under alkaline conditions. This was followed by the utilization of less-toxic, carbon-based, visible-light absorbing photocatalyst composites (for example carbon-nitride based systems) for biomass and plastics photoreforming to hydrogen and organics by Kasap, Uekert and Reisner. In addition to variations of carbon nitride, other photocatalyst composite systems based on graphene oxides, MXenes, co-ordination polymers and metal chalcogenides were reported during this period. A major limitation of PC reforming is the use of conventional harsh alkaline pre-treatment conditions (pH >13 and high temperatures) for polymeric substrates such as condensation plastics, accounting for more than 80% of the operation costs. This was circumvented with the introduction of a new chemoenzymatic reforming pathway in 2023 by Bhattacharjee, Guo, Reisner and Hollfelder, which employed near-neutral pH and moderate temperatures for pre-treating plastics and nanoplastics. In 2020, Jiao and Xie reported the photocatalytic conversion of addition plastics such as polyethylene and polypropylene to high-energy-density C2 fuels over a Nb2O5 catalyst under natural conditions.
The photocatalytic process (referred to as PC reforming; see "Categorization and configurations" section below) offers a simple, one-pot and facile deployment scope, but has several major limitations, making it challenging for commercial implementation. In 2021, sunlight-driven photoelectrochemical (PEC) systems/technologies operating with no external bias or voltage input were introduced by Bhattacharjee and Reisner at the University of Cambridge. These PEC reforming (see "Categorization and configurations" section) systems reformed diverse pre-treated waste streams (such as lignocellulose and PET plastics) to selective value-added chemicals with the simultaneous generation of green hydrogen, achieving areal production rates 100-10000 times higher than conventional photocatalytic processes. In 2023, Bhattacharjee, Rahaman and Reisner extended the PEC platform to a solar reactor which could reduce greenhouse gas CO2 to different energy vectors (CO, syngas, formate depending on the type of catalyst integrated) and convert waste PET plastics to glycolic acid at the same time. This further inspired the direct capture and conversion of CO2 to products from flue gas and air (direct air capture) in a PEC reforming process (with simultaneous plastic conversion). Choi and Ryu demonstrated a polyoxometallate-mediated PEC process to achieve biomass conversion with unassisted hydrogen production in 2022. Similarly, in 2023, Pan and Chu reported a PEC cell for renewable formate production from sunlight, CO2 and biomass-derived sugars. These developments have led solar reforming (and electroreforming, where renewable electricity drives redox processes; see "Categorization and configurations" section) to gradually emerge as an active area of exploration.
Concept and considerations.
Definition and classifications.
Solar reforming is the sunlight-driven transformation of waste substrates to valuable products (such as sustainable fuels and chemicals) as defined by scientists Subhajit Bhattacharjee, Stuart Linley and Erwin Reisner in their 2024 Nature Reviews Chemistry article where they conceptualized and formalized the field by introducing its concepts, classification, configurations and metrics. It generally operates without external heating and pressure, and also introduces a thermodynamic advantage over traditional green hydrogen or CO2 reduction fuel producing methods such as water splitting or CO2 splitting, respectively. Depending on solar spectrum utilization, solar reforming can be classified into two categories: "solar catalytic reforming" and "solar thermal reforming". Solar catalytic reforming refers to transformation processes primarily driven by ultraviolet (UV) or visible light. It also includes the subset of 'photoreforming' encompassing utilization of high energy photons in the UV or near-UV region of the solar spectrum (for example, by semiconductor photocatalysts such as TiO2). Solar thermal reforming, on the other hand, exploits the infrared (IR) region for waste upcycling to generate products of high economic value. An important aspect of solar reforming is value creation, which means that the overall value creation from product formation must be greater than substrate value destruction. In terms of deployment architectures, solar catalytic reforming can be further categorized into: photocatalytic reforming (PC reforming), photoelectrochemical reforming (PEC reforming) and photovoltaic-electrochemical reforming (PV-EC reforming).
Advantages over conventional waste recycling and upcycling processes.
Solar reforming offers several advantages over conventional methods of waste management or fuel/chemical production. It offers a less energy-intensive and low-carbon alternative to methods of waste reforming such as pyrolysis and gasification, which require high energy input. Solar reforming also provides several benefits over traditional green hydrogen production methods such as water splitting (H2O → H2 + O2, ΔG° = 237 kJ mol−1). It offers a thermodynamic advantage over water splitting by circumventing the energetically and kinetically demanding water oxidation half reaction (E0 = +1.23 V vs. reversible hydrogen electrode (RHE)) by energetically neutral oxidation of waste-derived organics (CxHyOz + (2"x"−"z")H2O → (2"x"−"z"+"y"/2)H2 + "x"CO2; ΔG° ~0 kJ mol−1). This results in better performance in terms of higher production rates, and also translates to other similar processes which depend on water oxidation as the counter reaction, such as CO2 splitting. Furthermore, the concentrated streams of hydrogen produced from solar reforming are safer than the explosive mixtures of oxygen and hydrogen (from traditional water splitting) that otherwise require additional separation costs. The added economic advantage of forming two different valuable products (for example, gaseous reductive fuels and liquid oxidative chemicals) simultaneously makes solar reforming suitable for commercial applications.
Solar reforming metrics.
Solar reforming encompasses a range of technological processes and configurations, and therefore suitable performance metrics are needed to evaluate commercial viability. In artificial photosynthesis, the most common metric is the solar-to-fuel conversion efficiency (ηSTF) as shown below, where 'r' is the product formation rate, 'ΔG' is the Gibbs free energy change during the process, 'A' is the sunlight irradiation area and 'P' is the total light intensity flux. The ηSTF can be adopted as a metric for solar reforming but with certain considerations. Since the ΔG values for solar reforming processes are very low (ΔG ~0 kJ mol‒1), this makes the ηSTF by definition close to zero, despite the high production rates and quantum yields. However, replacing the ΔG for product formation (during solar reforming) with that of product utilisation (|ΔGuse|; such as combustion of the hydrogen fuel generated) can give a better representation of the process efficiency.
formula_0
Since solar reforming is highly dependent on the light harvester and its area of photon collection, a more technologically relevant metric is the areal production rate (rareal) as shown, where 'n' is the moles of product formed, 'A' is the sunlight irradiation area and 't' is the time.
formula_1
Although rareal is a more consistent metric for solar reforming, it neglects some key parameters such as type of waste utilized, pre-treatment costs, product value, scaling, other process and separation costs, deployment variables, etc. Therefore, a more adaptable and robust metric is the solar-to-value creation rate ("r"STV) which can encompass all these factors and provide a more holistic and practical picture from the economic or commercial point of view. The simplified equation for "r"STV is shown below, where "Ci" and "Ck" are the costs of the product 'i' and substrate 'k', respectively. Cp is the pre-treatment cost for the waste substrate 'k', and ni and nk are amounts (in moles) of the product 'i' formed and substrate 'k' consumed during solar reforming, respectively. Note that the metric is adaptable and can be expanded to include other relevant parameters as applicable.
formula_2
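As an illustration of how this metric might be evaluated in practice, the sketch below computes rSTV for a single hypothetical run; the product prices, substrate costs, pre-treatment costs and amounts are invented placeholders, not literature values.

```python
def solar_to_value_rate(products, substrates, area_m2, hours):
    """Simplified solar-to-value creation rate r_STV in $ per m^2 per hour.

    products   : list of (price [$ per mol], amount formed [mol])
    substrates : list of (substrate cost [$ per mol],
                          pre-treatment cost [$ per mol],
                          amount consumed [mol])
    """
    value_created = sum(price * n for price, n in products)
    value_consumed = sum((c_k + c_p) * n for c_k, c_p, n in substrates)
    return (value_created - value_consumed) / (area_m2 * hours)

# Hypothetical numbers: H2 and glycolic acid produced from a pre-treated PET
# waste stream over 1 m^2 of irradiated area during an 8 h run.
r_stv = solar_to_value_rate(
    products=[(0.01, 0.5),    # H2: assumed $0.01/mol, 0.5 mol formed
              (0.08, 0.2)],   # glycolic acid: assumed $0.08/mol, 0.2 mol formed
    substrates=[(0.002, 0.004, 0.3)],  # PET-derived substrate + pre-treatment
    area_m2=1.0, hours=8.0)
print(f"r_STV = {r_stv:.5f} $ m^-2 h^-1")
```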
Categorization and configurations.
Solar reforming depends on the properties of the light absorber and the catalysts involved, and their selection, screening and integration to generate maximum value. The design and deployment of solar reforming technologies dictates the efficiency, scale and target substrates/products. In this context, solar reforming (more specifically, solar catalytic reforming) can be classified into three architectures:
Introduction of 'Photon Economy'.
An important concept introduced in the context of solar reforming is the 'photon economy', which, as defined by Bhattacharjee, Linley and Reisner, is the maximum utilization of all incident photons for maximizing product formation and value creation. An ideal solar reforming process is one where the light absorber can absorb incident UV and visible light photons with maximum quantum yield, generating high charge carrier concentration to drive redox half reactions at maximum rate. On the other hand, the residual, non-absorbed low-energy IR photons may be used for boosting reaction kinetics, waste pre-treatment or other means of value creation (for example, desalination, etc.). Therefore, proper light and thermal management through various means (such as using solar concentrators, thermoelectric modules, among others) is encouraged to have both an atom economical and photon economical approach to extract maximum value from solar reforming processes.
Reception and media.
The technological advancements in solar reforming garnered widespread interest in recent years. The works from scientists at Cambridge on PC reforming of raw lignocellulosic biomass or pre-treated polyester plastics to produce hydrogen and organics attracted attention of several stakeholders. The recent technological breakthrough leading to the development of high-performing solar powered reactors (PEC reforming) for the simultaneous upcycling of greenhouse gas CO2 and waste plastics to sustainable products received widespread acclaim and was highlighted in several prominent national and international media outlets. Solar reforming processes primarily developed in Cambridge were also selected as "one of the eleven great ideas from British universities that could change the world" by Sunday Times (April 2020 edition) and featured in the UK Prime Minister's Speech on Net Zero, "Or the researchers at Cambridge who pioneered a new way to turn sunlight into fuel" (indicating solar reforming which was a major subset of the broader research activities at Cambridge).
Outlook and future scope.
Solar reforming is currently in the development phase and the scalable deployment of a particular solar reforming technology (PC, PEC or PV-EC) would depend on a variety of factors. These factors include deployment location and sunlight variability/intermittency, characteristics of the chosen waste stream, viable pre-treatment methods, target products, nature of the catalysts and their lifetime, fuel/chemical storage requirements, land use versus open water sources, capital and operational costs, production and solar-to-value creation rates, and governmental policies and incentives, among others. Solar reforming may not be only limited to the conventional chemical pathways discussed, and may also include other relevant industrial processes such as light-driven organic transformations, flow photochemistry, integration with industrial electrolysis, among others. The products from conventional solar reforming such as green hydrogen or other platform chemicals have a broad value-chain. It is also now understood that sustainable fuel/chemical producing technologies of the future will rely on biomass, plastics and CO2 as key carbon feedstocks to replace fossil fuels. Therefore, with sunlight being abundant and the cheapest source of energy, solar reforming is well-positioned to drive decarbonization and facilitate the transition from a linear to circular economy in the coming decades.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta_{\\mathrm{STF}}=\\frac{\\mathrm{r}_{\\mathrm{SR}}\\left(\\mathrm{mol} \\cdot \\mathrm{s}^{-1}\\right) \\times \\Delta \\mathrm{G}_{\\mathrm{SR}}\\left(\\mathrm{J} \\cdot \\mathrm{mol}^{-1}\\right)}{\\mathrm{P}_{\\text {total }}\\left(\\mathrm{W} \\cdot \\mathrm{m}^{-2}\\right) \\times \\mathrm{A}\\left(\\mathrm{m}^2\\right)}"
},
{
"math_id": 1,
"text": "\\mathrm{r}_{\\text {areal}}=\\frac{\\mathrm{n}_{\\text {product}}(\\mathrm{mol})}{\\mathrm{A}\\left(\\mathrm{m}^2\\right) \\times \\mathrm{t}(\\mathrm{h})}"
},
{
"math_id": 2,
"text": "r_{\\mathrm{STV}}= \\frac{ {\\textstyle \\sum_{i=1}^M \\displaystyle C_i ($mol^{-1})\\times n_i(mol)} - {\\textstyle \\sum_{k=1}^N \\displaystyle \\bigl(C_k+C_p\\bigr) ($mol^{-1})\\times n_k (mol)}}{A (m^2)\\times t(h)}{}"
}
]
| https://en.wikipedia.org/wiki?curid=76088934 |
7609 | Cosmic censorship hypothesis | Mathematical conjecture in physics
The weak and the strong cosmic censorship hypotheses are two mathematical conjectures about the structure of gravitational singularities arising in general relativity.
Singularities that arise in the solutions of Einstein's equations are typically hidden within event horizons, and therefore cannot be observed from the rest of spacetime. Singularities that are not so hidden are called "naked". The weak cosmic censorship hypothesis was conceived by Roger Penrose in 1969 and posits that no naked singularities exist in the universe.
Basics.
Since the physical behavior of singularities is unknown, if singularities can be observed from the rest of spacetime, causality may break down, and physics may lose its predictive power. The issue cannot be avoided, since according to the Penrose–Hawking singularity theorems, singularities are inevitable in physically reasonable situations. Still, in the absence of naked singularities, the universe, as described by the general theory of relativity, is deterministic: it is possible to predict the entire evolution of the universe (possibly excluding some finite regions of space hidden inside event horizons of singularities), knowing only its condition at a certain moment of time (more precisely, everywhere on a spacelike three-dimensional hypersurface, called the Cauchy surface). Failure of the cosmic censorship hypothesis leads to the failure of determinism, because it is yet impossible to predict the behavior of spacetime in the causal future of a singularity. Cosmic censorship is not merely a problem of formal interest; some form of it is assumed whenever black hole event horizons are mentioned.
The hypothesis was first formulated by Roger Penrose in 1969, and it is not stated in a completely formal way. In a sense it is more of a research program proposal: part of the research is to find a proper formal statement that is physically reasonable, falsifiable, and sufficiently general to be interesting. Because the statement is not a strictly formal one, there is sufficient latitude for (at least) two independent formulations: a weak form, and a strong form.
Weak and strong cosmic censorship hypothesis.
The weak and the strong cosmic censorship hypotheses are two conjectures concerned with the global geometry of spacetimes.
The weak cosmic censorship hypothesis asserts there can be no singularity visible from future null infinity. In other words, singularities need to be hidden from an observer at infinity by the event horizon of a black hole. Mathematically, the conjecture states that, for generic initial data, the causal structure is such that the maximal Cauchy development possesses a complete future null infinity.
The strong cosmic censorship hypothesis asserts that, generically, general relativity is a deterministic theory, in the same sense that classical mechanics is a deterministic theory. In other words, the classical fate of all observers should be predictable from the initial data. Mathematically, the conjecture states that the maximal Cauchy development of generic compact or asymptotically flat initial data is locally inextendible as a regular Lorentzian manifold. Taken in its strongest sense, the conjecture suggests local inextendibility of the maximal Cauchy development as a continuous Lorentzian manifold [very Strong Cosmic Censorship]. This strongest version was disproven in 2018 by Mihalis Dafermos and Jonathan Luk for the Cauchy horizon of an uncharged, rotating black hole.
The two conjectures are mathematically independent, as there exist spacetimes for which weak cosmic censorship is valid but strong cosmic censorship is violated and, conversely, there exist spacetimes for which weak cosmic censorship is violated but strong cosmic censorship is valid.
Example.
The Kerr metric, corresponding to a black hole of mass formula_0 and angular momentum formula_1, can be used to derive the effective potential for particle orbits restricted to the equator (as defined by rotation). This potential looks like:
formula_2
where formula_3 is the coordinate radius, formula_4 and formula_5 are the test-particle's conserved energy and angular momentum respectively (constructed from the Killing vectors).
To preserve "cosmic censorship", the black hole is restricted to the case of formula_6. For there to exist an event horizon around the singularity, the requirement formula_6 must be satisfied. This amounts to the angular momentum of the black hole being constrained to below a critical value, outside of which the horizon would disappear.
The following thought experiment is reproduced from Hartle's "Gravity":
<templatestyles src="Template:Blockquote/styles.css" />Imagine specifically trying to violate the censorship conjecture. This could be done by somehow imparting an angular momentum upon the black hole, making it exceed the critical value (assume it starts infinitesimally below it). This could be done by sending a particle of angular momentum formula_7. Because this particle has angular momentum, it can only be captured by the black hole if the maximum potential of the black hole is less than formula_8.
Solving the above effective potential equation for the maximum under the given conditions results in a maximum potential of exactly formula_8. Testing other values shows that no particle with enough angular momentum to violate the censorship conjecture would be able to enter the black hole, "because" they have too much angular momentum to fall in.
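The marginal nature of this capture can be checked numerically. The sketch below evaluates the equatorial effective potential for an extremal hole (M = 1, a → M) and a test particle carrying the critical angular momentum ℓ = 2Me; the particle energy e = 1.5 is an arbitrary assumed value, and the barrier height comes out equal to (e2 − 1)/2 as stated above.

```python
import numpy as np

def v_eff(r, e, ell, M=1.0, a=1.0):
    """Equatorial Kerr effective potential (geometrized units G = c = 1, a = J/M)."""
    return (-M / r
            + (ell**2 - a**2 * (e**2 - 1.0)) / (2.0 * r**2)
            - M * (ell - a * e)**2 / r**3)

M, a = 1.0, 1.0          # (near-)extremal black hole, a -> M
e = 1.5                  # assumed test-particle energy per unit mass
ell = 2.0 * M * e        # marginal angular momentum from the thought experiment

r = np.linspace(1.0, 50.0, 200_000)   # scan outward from the (extremal) horizon
barrier = v_eff(r, e, ell, M, a).max()
print(f"potential barrier = {barrier:.6f}")
print(f"(e^2 - 1)/2       = {(e**2 - 1.0) / 2.0:.6f}")
# The barrier equals (e^2 - 1)/2, so the particle is (marginally) not captured
# and cannot spin the hole past the critical angular momentum.
```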
Problems with the concept.
There are a number of difficulties in formalizing the hypothesis:
In 1991, John Preskill and Kip Thorne bet against Stephen Hawking that the hypothesis was false. Hawking conceded the bet in 1997, due to the discovery of the special situations just mentioned, which he characterized as "technicalities". Hawking later reformulated the bet to exclude those technicalities. The revised bet is still open (although Hawking died in 2018), the prize being "clothing to cover the winner's nakedness".
Counter-example.
An exact solution to the scalar-Einstein equations formula_11 which forms a counterexample to many formulations of the cosmic censorship hypothesis was found by Mark D. Roberts in 1985:
formula_12
where formula_13 is a constant.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "J"
},
{
"math_id": 2,
"text": " V_{\\rm{eff}}(r,e,\\ell)=-\\frac{M}{r}+\\frac{\\ell^2-a^2(e^2-1)}{2r^2}-\\frac{M(\\ell-a e)^2}{r^3},~~~\na\\equiv \\frac{J}{M} "
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "e"
},
{
"math_id": 5,
"text": "\\ell"
},
{
"math_id": 6,
"text": "a < 1"
},
{
"math_id": 7,
"text": "\\ell = 2Me"
},
{
"math_id": 8,
"text": "(e^2-1)/2"
},
{
"math_id": 9,
"text": "M<|Q|"
},
{
"math_id": 10,
"text": "r=0"
},
{
"math_id": 11,
"text": "R_{ab}=2\\phi_a\\phi_b"
},
{
"math_id": 12,
"text": "ds^2=-(1+2\\sigma)\\,dv^2+2\\,dv\\,dr+r(r-2\\sigma v)\\left(d\\theta^2 + \\sin^2 \\theta \\,d\\phi^2\\right),\\quad \\varphi = \\frac{1}{2} \\ln\\left(1 - \\frac{2\\sigma v}{r}\\right),"
},
{
"math_id": 13,
"text": "\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=7609 |
760994 | Electron mobility | Quantity in solid-state physics
In solid-state physics, the electron mobility characterises how quickly an electron can move through a metal or semiconductor when pushed or pulled by an electric field. There is an analogous quantity for holes, called hole mobility. The term carrier mobility refers in general to both electron and hole mobility.
Electron and hole mobility are special cases of electrical mobility of charged particles in a fluid under an applied electric field.
When an electric field "E" is applied across a piece of material, the electrons respond by moving with an average velocity called the drift velocity, formula_0. Then the electron mobility "μ" is defined as
formula_1
Electron mobility is almost always specified in units of cm2/(V⋅s). This is different from the SI unit of mobility, m2/(V⋅s). They are related by 1 m2/(V⋅s) = 104 cm2/(V⋅s).
Conductivity is proportional to the product of mobility and carrier concentration. For example, the same conductivity could come from a small number of electrons with high mobility for each, or a large number of electrons with a small mobility for each. For semiconductors, the behavior of transistors and other devices can be very different depending on whether there are many electrons with low mobility or few electrons with high mobility. Therefore mobility is a very important parameter for semiconductor materials. Almost always, higher mobility leads to better device performance, with other things equal.
Semiconductor mobility depends on the impurity concentrations (including donor and acceptor concentrations), defect concentration, temperature, and electron and hole concentrations. It also depends on the electric field, particularly at high fields when velocity saturation occurs. It can be determined by the Hall effect, or inferred from transistor behavior.
Introduction.
Drift velocity in an electric field.
Without any applied electric field, in a solid, electrons and holes move around randomly. Therefore, on average there will be no overall motion of charge carriers in any particular direction over time.
However, when an electric field is applied, each electron or hole is accelerated by the electric field. If the electron were in a vacuum, it would be accelerated to ever-increasing velocity (called ballistic transport). However, in a solid, the electron repeatedly scatters off crystal defects, phonons, impurities, etc., so that it loses some energy and changes direction. The final result is that the electron moves with a finite average velocity, called the drift velocity. This net electron motion is usually much slower than the normally occurring random motion.
The two charge carriers, electrons and holes, will typically have different drift velocities for the same electric field.
Quasi-ballistic transport is possible in solids if the electrons are accelerated across a very small distance (as small as the mean free path), or for a very short time (as short as the mean free time). In these cases, drift velocity and mobility are not meaningful.
Definition and units.
The electron mobility is defined by the equation:
formula_2
where "E" is the magnitude of the electric field applied to the material, "v"d is the magnitude of the electron drift velocity caused by that field, and "μ"e is the electron mobility.
The hole mobility is defined by a similar equation:
formula_3
Both electron and hole mobilities are positive by definition.
Usually, the electron drift velocity in a material is directly proportional to the electric field, which means that the electron mobility is a constant (independent of the electric field). When this is not true (for example, in very large electric fields), mobility depends on the electric field.
The SI unit of velocity is m/s, and the SI unit of electric field is V/m. Therefore the SI unit of mobility is (m/s)/(V/m) = m2/(V⋅s). However, mobility is much more commonly expressed in cm2/(V⋅s) = 10−4 m2/(V⋅s).
Mobility is usually a strong function of material impurities and temperature, and is determined empirically. Mobility values are typically presented in table or chart form. Mobility is also different for electrons and holes in a given material.
Derivation.
Starting with Newton's Second Law:
formula_4
where "a" is the acceleration of the electron between collisions, "F" is the force exerted by the electric field, and formula_5 is the effective mass of an electron.
Since the force on the electron is −"eE":
formula_6
This is the acceleration on the electron between collisions. The drift velocity is therefore:
formula_7 where formula_8 is the mean free time
Since we only care about how the drift velocity changes with the electric field, we lump the loose terms together to get
formula_9 where formula_10
Similarly, for holes we have
formula_11 where formula_12
Note that both electron mobility and hole mobility are positive. A minus sign is added for electron drift velocity to account for the minus charge.
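A minimal numerical sketch of these relations is given below; the scattering time and effective mass are assumed, silicon-like values rather than measured constants.

```python
# Drift velocity and mobility from the mean free time, mu = e*tau / m*.
e_charge = 1.602e-19      # elementary charge [C]
m0 = 9.109e-31            # free-electron mass [kg]
m_eff = 0.26 * m0         # assumed conductivity effective mass
tau_c = 2e-13             # assumed mean free time between collisions [s]

mu_e = e_charge * tau_c / m_eff       # electron mobility [m^2/(V s)]
E = 1e4                               # applied field, 100 V/cm in SI units [V/m]
v_drift = mu_e * E                    # magnitude of the drift velocity [m/s]

print(f"mu_e    = {mu_e * 1e4:.0f} cm^2/(V s)")
print(f"v_drift = {v_drift:.0f} m/s at E = 100 V/cm")
```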
Relation to current density.
The drift current density resulting from an electric field can be calculated from the drift velocity. Consider a sample with cross-sectional area A, length l and an electron concentration of n. The current carried by each electron must be formula_13, so that the total current density due to electrons is given by:
formula_14
Using the expression for formula_15 gives
formula_16
A similar set of equations applies to the holes, (noting that the charge on a hole is positive). Therefore the current density due to holes is given by
formula_17
where p is the hole concentration and formula_18 the hole mobility.
The total current density is the sum of the electron and hole components:
formula_19
Relation to conductivity.
We have previously derived the relationship between electron mobility and current density
formula_19
Now Ohm's law can be written in the form
formula_20
where formula_21 is defined as the conductivity. Therefore we can write down:
formula_22
which can be factorised to
formula_23
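For illustration, this conductivity formula can be evaluated with assumed carrier concentrations and mobilities roughly representative of moderately n-doped silicon at room temperature.

```python
# Conductivity from carrier concentrations and mobilities (illustrative values).
e_charge = 1.602e-19          # [C]
n = 1e22                      # electron concentration, 1e16 cm^-3 in SI [m^-3]
p = 1e10                      # hole concentration (negligible here) [m^-3]
mu_e = 0.135                  # assumed electron mobility [m^2/(V s)]
mu_h = 0.045                  # assumed hole mobility [m^2/(V s)]

sigma = e_charge * (n * mu_e + p * mu_h)     # conductivity [S/m]
rho_ohm_cm = 100.0 / sigma                   # resistivity [ohm cm]
print(f"sigma ~ {sigma:.0f} S/m, resistivity ~ {rho_ohm_cm:.2f} ohm cm")
```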
Relation to electron diffusion.
In a region where n and p vary with distance, a diffusion current is superimposed on that due to conductivity. This diffusion current is governed by Fick's Law:
formula_24
where "F" is the diffusion flux of electrons, "D"e is the diffusion coefficient for electrons, and formula_25 is the gradient of the electron concentration "n".
The diffusion coefficient for a charge carrier is related to its mobility by the Einstein relation. For a classical system (e.g. Boltzmann gas), it reads:
formula_26
where "k"B is the Boltzmann constant, "T" is the absolute temperature, and "e" is the elementary charge.
For a metal, described by a Fermi gas (Fermi liquid), quantum version of the Einstein relation should be used. Typically, temperature is much smaller than the Fermi energy, in this case one should use the following formula:
formula_27
where "E"F is the Fermi energy.
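As a quick numerical illustration of the classical (non-degenerate) relation above, with an assumed electron mobility of 1400 cm2/(V⋅s) at 300 K:

```python
# Classical Einstein relation D = mu * kB * T / q.
kB = 1.381e-23        # Boltzmann constant [J/K]
q = 1.602e-19         # elementary charge [C]
T = 300.0             # temperature [K]
mu_e_cm2 = 1400.0     # assumed electron mobility [cm^2/(V s)]

thermal_voltage = kB * T / q            # ~0.026 V at room temperature
D_e = mu_e_cm2 * thermal_voltage        # diffusion coefficient [cm^2/s]
print(f"kT/q = {thermal_voltage*1000:.1f} mV, D_e ~ {D_e:.1f} cm^2/s")
```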
Examples.
Typical electron mobility at room temperature (300 K) in metals like gold, copper and silver is 30–50 cm2/(V⋅s). Carrier mobility in semiconductors is doping dependent. In silicon (Si) the electron mobility is of the order of 1,000, in germanium around 4,000, and in gallium arsenide up to 10,000 cm2/(V⋅s). Hole mobilities are generally lower and range from around 100 cm2/(V⋅s) in gallium arsenide, to 450 in silicon, and 2,000 in germanium.
Very high mobility has been found in several ultrapure low-dimensional systems, such as two-dimensional electron gases (2DEG) (35,000,000 cm2/(V⋅s) at low temperature), carbon nanotubes (100,000 cm2/(V⋅s) at room temperature) and freestanding graphene (200,000 cm2/(V⋅s) at low temperature). Organic semiconductors (polymer, oligomer) developed thus far have carrier mobilities below 50 cm2/(V⋅s), and typically below 1, with well performing materials measured below 10.
Electric field dependence and velocity saturation.
At low fields, the drift velocity "v""d" is proportional to the electric field "E", so mobility "μ" is constant. This value of "μ" is called the "low-field mobility".
As the electric field is increased, however, the carrier velocity increases sublinearly and asymptotically towards a maximum possible value, called the "saturation velocity" "v"sat. For example, the value of "v"sat is on the order of 1×107 cm/s for both electrons and holes in Si. It is on the order of 6×106 cm/s for Ge. This velocity is a characteristic of the material and a strong function of doping or impurity levels and temperature. It is one of the key material and semiconductor device properties that determine a device such as a transistor's ultimate limit of speed of response and frequency.
This velocity saturation phenomenon results from a process called "optical phonon scattering". At high fields, carriers are accelerated enough to gain sufficient kinetic energy between collisions to emit an optical phonon, and they do so very quickly, before being accelerated once again. The velocity that the electron reaches before emitting a phonon is:
formula_28
where "ω"phonon(opt.) is the optical-phonon angular frequency and m* the carrier effective mass in the direction of the electric field. The value of "E"phonon (opt.) is 0.063 eV for Si and 0.034 eV for GaAs and Ge. The saturation velocity is only one-half of "v"emit, because the electron starts at zero velocity and accelerates up to "v"emit in each cycle. (This is a somewhat oversimplified description.)
Velocity saturation is not the only possible high-field behavior. Another is the Gunn effect, where a sufficiently high electric field can cause intervalley electron transfer, which reduces drift velocity. This is unusual; increasing the electric field almost always "increases" the drift velocity, or else leaves it unchanged. The result is negative differential resistance.
In the regime of velocity saturation (or other high-field effects), mobility is a strong function of electric field. This means that mobility is a somewhat less useful concept, compared to simply discussing drift velocity directly.
Relation between scattering and mobility.
Recall that by definition, mobility is dependent on the drift velocity. The main factor determining drift velocity (other than effective mass) is scattering time, i.e. how long the carrier is ballistically accelerated by the electric field until it scatters (collides) with something that changes its direction and/or energy. The most important sources of scattering in typical semiconductor materials, discussed below, are ionized impurity scattering and acoustic phonon scattering (also called lattice scattering). In some cases other sources of scattering may be important, such as neutral impurity scattering, optical phonon scattering, surface scattering, and defect scattering.
Elastic scattering means that energy is (almost) conserved during the scattering event. Some elastic scattering processes are scattering from acoustic phonons, impurity scattering, piezoelectric scattering, etc. In acoustic phonon scattering, electrons scatter from state k to k', while emitting or absorbing a phonon of wave vector q. This phenomenon is usually modeled by assuming that lattice vibrations cause small shifts in energy bands. The additional potential causing the scattering process is generated by the deviations of bands due to these small transitions from frozen lattice positions.
Ionized impurity scattering.
Semiconductors are doped with donors and/or acceptors, which are typically ionized, and are thus charged. The Coulombic forces will deflect an electron or hole approaching the ionized impurity. This is known as "ionized impurity scattering". The amount of deflection depends on the speed of the carrier and its proximity to the ion. The more heavily a material is doped, the higher the probability that a carrier will collide with an ion in a given time, the smaller the mean free time between collisions, and the smaller the mobility. Because of the long-range nature of the Coulomb potential, other impurities and free carriers screen the interaction, so that the effective range of interaction with the carriers is significantly reduced compared to the bare Coulomb interaction.
If these scatterers are near the interface, the complexity of the problem increases due to the existence of crystal defects and disorders. Charge trapping centers that scatter free carriers form in many cases due to defects associated with dangling bonds. Scattering happens because after trapping a charge, the defect becomes charged and therefore starts interacting with free carriers. If scattered carriers are in the inversion layer at the interface, the reduced dimensionality of the carriers makes the case differ from the case of bulk impurity scattering as carriers move only in two dimensions. Interfacial roughness also causes short-range scattering limiting the mobility of quasi-two-dimensional electrons at the interface.
Lattice (phonon) scattering.
At any temperature above absolute zero, the vibrating atoms create pressure (acoustic) waves in the crystal, which are termed phonons. Like electrons, phonons can be considered to be particles. A phonon can interact (collide) with an electron (or hole) and scatter it. At higher temperature, there are more phonons, and thus increased electron scattering, which tends to reduce mobility.
Piezoelectric scattering.
The piezoelectric effect can occur only in compound semiconductors, due to their polar nature. It is small in most semiconductors, but may lead to local electric fields that scatter carriers by deflecting them; this effect is important mainly at low temperatures, where other scattering mechanisms are weak. These electric fields arise from the distortion of the basic unit cell as strain is applied in certain directions in the lattice.
Surface roughness scattering.
Surface roughness scattering caused by interfacial disorder is short range scattering limiting the mobility of quasi-two-dimensional electrons at the interface. From high-resolution transmission electron micrographs, it has been determined that the interface is not abrupt on the atomic level, but actual position of the interfacial plane varies one or two atomic layers along the surface. These variations are random and cause fluctuations of the energy levels at the interface, which then causes scattering.
Alloy scattering.
In compound (alloy) semiconductors, which many thermoelectric materials are, scattering caused by the perturbation of the crystal potential due to the random positioning of substituting atom species in a relevant sublattice is known as alloy scattering. This can only happen in ternary or higher alloys, as their crystal structure forms by randomly replacing some atoms in one of the sublattices of the crystal structure. Generally, this phenomenon is quite weak, but in certain materials or circumstances it can become the dominant effect limiting conductivity. In bulk materials, interface scattering is usually ignored.
Inelastic scattering.
During inelastic scattering processes, significant energy exchange happens. As with elastic phonon scattering, in the inelastic case the potential arises from energy band deformations caused by atomic vibrations. Optical phonons causing inelastic scattering usually have energies in the range of 30–50 meV; for comparison, acoustic phonon energies are typically less than 1 meV, though some may be on the order of 10 meV. There is significant change in carrier energy during the scattering process. Optical or high-energy acoustic phonons can also cause intervalley or interband scattering, which means that scattering is not limited to a single valley.
Electron–electron scattering.
Due to the Pauli exclusion principle, electrons can be considered as non-interacting if their density does not exceed 1016–1017 cm−3 or the electric field does not exceed 103 V/cm. However, significantly above these limits electron–electron scattering starts to dominate. The long range and nonlinearity of the Coulomb potential governing interactions between electrons make these interactions difficult to deal with.
Relation between mobility and scattering time.
A simple model gives the approximate relation between scattering time (average time between scattering events) and mobility. It is assumed that after each scattering event, the carrier's motion is randomized, so it has zero average velocity. After that, it accelerates uniformly in the electric field, until it scatters again. The resulting average drift mobility is:
formula_29
where "q" is the elementary charge, "m"* is the carrier effective mass, and "τ" is the average scattering time.
If the effective mass is anisotropic (direction-dependent), "m"* is the effective mass in the direction of the electric field.
Matthiessen's rule.
Normally, more than one source of scattering is present, for example both impurities and lattice phonons. It is normally a very good approximation to combine their influences using "Matthiessen's Rule" (developed from work by Augustus Matthiessen in 1864):
formula_30
where "μ" is the actual mobility, formula_31 is the mobility that the material would have if there was impurity scattering but no other source of scattering, and formula_32 is the mobility that the material would have if there was lattice phonon scattering but no other source of scattering. Other terms may be added for other scattering sources, for example
formula_33
Matthiessen's rule can also be stated in terms of the scattering time:
formula_34
where "τ" is the true average scattering time and τimpurities is the scattering time if there was impurity scattering but no other source of scattering, etc.
Matthiessen's rule is an approximation and is not universally valid. This rule is not valid if the factors affecting the mobility depend on each other, because individual scattering probabilities cannot be summed unless they are independent of each other. The average free time of flight of a carrier and therefore the relaxation time is inversely proportional to the scattering probability. For example, lattice scattering alters the average electron velocity (in the electric-field direction), which in turn alters the tendency to scatter off impurities. There are more complicated formulas that attempt to take these effects into account.
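A one-line numerical example of Matthiessen's rule, with arbitrary illustrative mobility values:

```python
# Combining two independent scattering mechanisms with Matthiessen's rule.
mu_impurities = 2000.0   # mobility limited by ionized-impurity scattering [cm^2/(V s)]
mu_lattice = 1500.0      # mobility limited by phonon (lattice) scattering [cm^2/(V s)]

mu_total = 1.0 / (1.0 / mu_impurities + 1.0 / mu_lattice)
print(f"combined mobility ~ {mu_total:.0f} cm^2/(V s)")   # always below the smaller term
```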
Temperature dependence of mobility.
With increasing temperature, phonon concentration increases and causes increased scattering. Thus lattice scattering lowers the carrier mobility more and more at higher temperature. Theoretical calculations reveal that the mobility in non-polar semiconductors, such as silicon and germanium, is dominated by acoustic phonon interaction. The resulting mobility is expected to be proportional to "T" −3/2, while the mobility due to optical phonon scattering only is expected to be proportional to "T" −1/2. Experimentally measured values of the temperature dependence of the mobility in Si, Ge and GaAs have been tabulated in the literature.
As formula_35, where formula_36 is the scattering cross section for electrons and holes at a scattering center and formula_37 is a thermal average (Boltzmann statistics) over all electron or hole velocities in the lower conduction band or upper valence band, the temperature dependence of the mobility can be determined. Here, the following definition for the scattering cross section is used: the number of particles scattered into solid angle dΩ per unit time divided by the number of particles per area per time (incident intensity), which comes from classical mechanics. As Boltzmann statistics are valid for semiconductors, formula_38.
For scattering from acoustic phonons, for temperatures well above Debye temperature, the estimated cross section Σph is determined from the square of the average vibrational amplitude of a phonon to be proportional to "T". The scattering from charged defects (ionized donors or acceptors) leads to the cross section formula_39. This formula is the scattering cross section for "Rutherford scattering", where a point charge (carrier) moves past another point charge (defect) experiencing Coulomb interaction.
The temperature dependencies of these two scattering mechanism in semiconductors can be determined by combining formulas for τ, Σ and formula_37, to be for scattering from acoustic phonons formula_40 and from charged defects formula_41.
The effect of ionized impurity scattering, however, "decreases" with increasing temperature because the average thermal speeds of the carriers are increased. Thus, the carriers spend less time near an ionized impurity as they pass and the scattering effect of the ions is thus reduced.
These two effects operate simultaneously on the carriers through Matthiessen's rule. At lower temperatures, ionized impurity scattering dominates, while at higher temperatures, phonon scattering dominates, and the actual mobility reaches a maximum at an intermediate temperature.
Disordered Semiconductors.
While in crystalline materials electrons can be described by wavefunctions extended over the entire solid, this is not the case in systems with appreciable structural disorder, such as polycrystalline or amorphous semiconductors. Anderson suggested that beyond a critical value of structural disorder, electron states would be "localized". Localized states are described as being confined to a finite region of real space, normalizable, and not contributing to transport. Extended states are spread over the extent of the material, not normalizable, and contribute to transport. Unlike crystalline semiconductors, mobility generally increases with temperature in disordered semiconductors.
Multiple trapping and release.
Mott later developed the concept of a mobility edge. This is an energy formula_42, above which electrons undergo a transition from localized to delocalized states. In this description, termed "multiple trapping and release", electrons are only able to travel when in extended states, and are constantly being trapped in, and re-released from, the lower energy localized states. Because the probability of an electron being released from a trap depends on its thermal energy, mobility can be described by an Arrhenius relationship in such a system:
formula_43
where formula_44 is a mobility prefactor, formula_45 is activation energy, formula_46 is the Boltzmann constant, and formula_47 is temperature. The activation energy is typically evaluated by measuring mobility as a function of temperature. The Urbach Energy can be used as a proxy for activation energy in some systems.
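A short sketch of this Arrhenius temperature dependence, using an assumed prefactor and activation energy loosely typical of a disordered film:

```python
import math

# Multiple-trapping-and-release mobility: mu = mu0 * exp(-E_A / (kB * T)).
kB_eV = 8.617e-5     # Boltzmann constant [eV/K]
mu0 = 1.0            # assumed mobility prefactor [cm^2/(V s)]
E_A = 0.3            # assumed activation energy [eV]

for T in (200.0, 300.0, 400.0):
    mu = mu0 * math.exp(-E_A / (kB_eV * T))
    print(f"T = {T:.0f} K -> mu ~ {mu:.2e} cm^2/(V s)")
# In contrast to crystalline semiconductors, the mobility rises with temperature.
```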
Variable Range Hopping.
At low temperature, or in systems with a large degree of structural disorder (such as fully amorphous systems), electrons cannot access delocalized states. In such a system, electrons can only travel by tunnelling from one site to another, in a process called "variable range hopping". In the original theory of variable range hopping, as developed by Mott and Davis, the probability formula_48 of an electron hopping from one site formula_49 to another site formula_50 depends on their separation in space formula_51 and their separation in energy formula_52.
formula_53
Here formula_54 is a prefactor associated with the phonon frequency in the material, and formula_55 is the wavefunction overlap parameter. The mobility in a system governed by variable range hopping can be shown to be:
formula_56
where formula_44 is a mobility prefactor, formula_57 is a parameter (with dimensions of temperature) that quantifies the width of localized states, and formula_58 is the dimensionality of the system.
Measurement of semiconductor mobility.
Hall mobility.
Carrier mobility is most commonly measured using the Hall effect. The result of the measurement is called the "Hall mobility" (meaning "mobility inferred from a Hall-effect measurement").
Consider a semiconductor sample with a rectangular cross section as shown in the figures, in which a current is flowing in the "x"-direction and a magnetic field is applied in the "z"-direction. The resulting Lorentz force will accelerate the electrons ("n"-type materials) or holes ("p"-type materials) in the (−"y") direction, according to the right-hand rule, and set up an electric field "ξy". As a result, there is a voltage across the sample, which can be measured with a high-impedance voltmeter. This voltage, "VH", is called the Hall voltage. "VH" is negative for "n"-type material and positive for "p"-type material.
Mathematically, the Lorentz force acting on a charge "q" is given by
For electrons:
formula_59
For holes:
formula_60
In steady state this force is balanced by the force set up by the Hall voltage, so that there is no net force on the carriers in the "y" direction. For electrons,
formula_61
formula_62
formula_63
For electrons, the field points in the −"y" direction, and for holes, it points in the +"y" direction.
The electron current "I" is given by formula_64. Sub "v""x" into the expression for "ξ""y",
formula_65
where "RHn" is the Hall coefficient for electron, and is defined as
formula_66
Since formula_67
formula_68
Similarly, for holes
formula_69
From the Hall coefficient, we can obtain the carrier mobility as follows:
formula_70
Similarly,
formula_71
Here the value of "VHp" (Hall voltage), "t" (sample thickness), "I" (current) and "B" (magnetic field) can be measured directly, and the conductivities "σ"n or "σ"p are either known or can be obtained from measuring the resistivity.
Field-effect mobility.
The mobility can also be measured using a field-effect transistor (FET). The result of the measurement is called the "field-effect mobility" (meaning "mobility inferred from a field-effect measurement").
The measurement can work in two ways: From saturation-mode measurements, or linear-region measurements. (See MOSFET for a description of the different modes or regions of operation.)
Using saturation mode.
In this technique, for each fixed gate voltage VGS, the drain-source voltage VDS is increased until the current ID saturates. Next, the square root of this saturated current is plotted against the gate voltage, and the slope "m"sat is measured. Then the mobility is:
formula_72
where "L" and "W" are the length and width of the channel and "C""i" is the gate insulator capacitance per unit area. This equation comes from the approximate equation for a MOSFET in saturation mode:
formula_73
where "V"th is the threshold voltage. This approximation ignores the Early effect (channel length modulation), among other things. In practice, this technique may underestimate the true mobility.
Using the linear region.
In this technique, the transistor is operated in the linear region (or "ohmic mode"), where VDS is small and formula_74. The drain current ID is plotted against the gate voltage VGS, and the slope "m"lin of this line is measured. Then the mobility is:
formula_75
This equation comes from the approximate equation for a MOSFET in the linear region:
formula_76
In practice, this technique may overestimate the true mobility, because if VDS is not small enough and VG is not large enough, the MOSFET may not stay in the linear region.
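For comparison, the corresponding linear-region extraction on the same hypothetical device might look like this; the small fixed VDS and the current readings are again assumed values.

```python
import numpy as np

# Linear-region extraction at small, fixed V_DS: mu = m_lin * (L/W) / (V_DS * C_i).
L, W = 10e-6, 100e-6          # same assumed device geometry as above
C_i = 1.15e-4                 # gate capacitance per area [F/m^2]
V_DS = 0.1                    # small drain-source voltage [V]

V_GS = np.array([2.0, 3.0, 4.0, 5.0])
I_D_lin = np.array([4.6e-6, 9.2e-6, 13.8e-6, 18.4e-6])    # drain currents [A]

m_lin, _ = np.polyfit(V_GS, I_D_lin, 1)                   # slope dI_D/dV_GS [A/V]
mu_FE = m_lin * (L / W) / (V_DS * C_i)                    # [m^2/(V s)]
print(f"mu_FE ~ {mu_FE*1e4:.0f} cm^2/(V s)")
```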
Optical mobility.
Electron mobility may be determined from non-contact laser photo-reflectance technique measurements. A series of photo-reflectance measurements are made as the sample is stepped through focus. The electron diffusion length and recombination time are determined by a regressive fit to the data. Then the Einstein relation is used to calculate the mobility.
Terahertz mobility.
Electron mobility can be calculated from time-resolved terahertz probe measurement. Femtosecond laser pulses excite the semiconductor and the resulting photoconductivity is measured using a terahertz probe, which detects changes in the terahertz electric field.
Time resolved microwave conductivity (TRMC).
A proxy for charge carrier mobility can be evaluated using time-resolved microwave conductivity (TRMC). A pulsed optical laser is used to create electrons and holes in a semiconductor, which are then detected as an increase in photoconductance. With knowledge of the sample absorbance, dimensions, and incident laser fluence, the parameter formula_77 can be evaluated, where formula_78 is the carrier generation yield (between 0 and 1), formula_79 is the electron mobility and formula_80 is the hole mobility. formula_81 has the same dimensions as mobility, but carrier type (electron or hole) is obscured.
Doping concentration dependence in heavily-doped silicon.
The charge carriers in semiconductors are electrons and holes. Their numbers are controlled by the concentrations of impurity elements, i.e. doping concentration. Thus doping concentration has great influence on carrier mobility.
While there is considerable scatter in the experimental data, for noncompensated material (no counter doping) and heavily doped substrates (i.e. formula_82 and up), the mobility in silicon is often characterized by the empirical relationship:
formula_83
where "N" is the doping concentration (either "ND" or "NA"), and "N"ref and α are fitting parameters. At room temperature, the above equation becomes:
Majority carriers:
formula_84
formula_85
Minority carriers:
formula_86
formula_87
These equations apply only to silicon, and only under low field.
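A sketch of how this empirical relationship is evaluated is given below; the fit parameters are rough, commonly quoted room-temperature values for majority carriers and should be treated as illustrative rather than authoritative.

```python
# Empirical low-field mobility vs doping: mu = mu_o + mu_1 / (1 + (N / N_ref)^alpha).
def mobility_si(N_cm3, mu_o, mu_1, N_ref, alpha):
    return mu_o + mu_1 / (1.0 + (N_cm3 / N_ref)**alpha)

for N in (1e16, 1e17, 1e18, 1e19):
    # Approximate room-temperature fit parameters for majority carriers (assumed).
    mu_n = mobility_si(N, mu_o=65.0, mu_1=1265.0, N_ref=8.5e16, alpha=0.72)  # electrons
    mu_p = mobility_si(N, mu_o=48.0, mu_1=447.0,  N_ref=6.3e16, alpha=0.76)  # holes
    print(f"N = {N:.0e} cm^-3: mu_n ~ {mu_n:.0f}, mu_p ~ {mu_p:.0f} cm^2/(V s)")
```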
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " v_d"
},
{
"math_id": 1,
"text": "v_d = \\mu E."
},
{
"math_id": 2,
"text": "v_d = \\mu_e E."
},
{
"math_id": 3,
"text": "v_d = \\mu_h E."
},
{
"math_id": 4,
"text": "a = F/m_e^* "
},
{
"math_id": 5,
"text": "m_e^* "
},
{
"math_id": 6,
"text": "a = -\\frac{eE}{m_e^*} "
},
{
"math_id": 7,
"text": "v_d = a \\tau_c = -\\frac{e\\tau_c}{m_e^*}E,"
},
{
"math_id": 8,
"text": "\\tau_c"
},
{
"math_id": 9,
"text": "v_d = -\\mu_e E,"
},
{
"math_id": 10,
"text": "\\mu_e = \\frac{e\\tau_c}{m_e^*}"
},
{
"math_id": 11,
"text": "v_d = \\mu_h E,"
},
{
"math_id": 12,
"text": "\\mu_h = \\frac{e\\tau_c}{m_h^*}"
},
{
"math_id": 13,
"text": "-e v_d"
},
{
"math_id": 14,
"text": "J_e=\\frac{I_n}{A} = - e n v_d"
},
{
"math_id": 15,
"text": "v_d"
},
{
"math_id": 16,
"text": "J_e = e n\\mu_e E"
},
{
"math_id": 17,
"text": "J_h =e p \\mu_h E"
},
{
"math_id": 18,
"text": "\\mu_h"
},
{
"math_id": 19,
"text": "J=J_e+J_h=(en\\mu_e+ep\\mu_h)E"
},
{
"math_id": 20,
"text": "J=\\sigma E"
},
{
"math_id": 21,
"text": "\\sigma"
},
{
"math_id": 22,
"text": "\\sigma=en\\mu_e+ep\\mu_h"
},
{
"math_id": 23,
"text": "\\sigma=e(n\\mu_e+p\\mu_h)"
},
{
"math_id": 24,
"text": "F=-D_\\text{e}\\nabla n"
},
{
"math_id": 25,
"text": "\\nabla n"
},
{
"math_id": 26,
"text": "D_\\text{e} = \\frac{\\mu_\\text{e} k_\\mathrm{B} T}{e}"
},
{
"math_id": 27,
"text": "D_\\text{e} = \\frac{\\mu_\\text{e} E_F}{e}"
},
{
"math_id": 28,
"text": "\\frac{m^* v_\\text{emit}^2}{2} \\approx \\hbar \\omega_\\text{phonon (opt.)}"
},
{
"math_id": 29,
"text": "\\mu = \\frac{q}{m^*}\\overline{\\tau}"
},
{
"math_id": 30,
"text": "\\frac{1}{\\mu} = \\frac{1}{\\mu_{\\rm impurities}} + \\frac{1}{\\mu_{\\rm lattice}}."
},
{
"math_id": 31,
"text": "\\mu_{\\rm impurities}"
},
{
"math_id": 32,
"text": "\\mu_{\\rm lattice}"
},
{
"math_id": 33,
"text": "\\frac{1}{\\mu} = \\frac{1}{\\mu_{\\rm impurities}} + \\frac{1}{\\mu_{\\rm lattice}} + \\frac{1}{\\mu_{\\rm defects}} + \\cdots."
},
{
"math_id": 34,
"text": "\\frac{1}{\\tau} = \\frac{1}{\\tau_{\\rm impurities}} + \\frac{1}{\\tau_{\\rm lattice}} + \\frac{1}{\\tau_{\\rm defects}} + \\cdots ."
},
{
"math_id": 35,
"text": "\\frac{1}{\\tau }\\propto \\left \\langle v\\right \\rangle\\Sigma "
},
{
"math_id": 36,
"text": "\\Sigma "
},
{
"math_id": 37,
"text": "\\left \\langle v\\right \\rangle"
},
{
"math_id": 38,
"text": "\\left \\langle v\\right \\rangle\\sim\\sqrt{T}"
},
{
"math_id": 39,
"text": "{\\Sigma }_\\text{def}\\propto {\\left \\langle v\\right \\rangle}^{-4}"
},
{
"math_id": 40,
"text": "{\\mu }_{ph}\\sim T^{-3/2}"
},
{
"math_id": 41,
"text": "{\\mu }_\\text{def}\\sim T^{3/2}"
},
{
"math_id": 42,
"text": "E_{C}"
},
{
"math_id": 43,
"text": "\\mu=\\mu_{0}\\exp\\left(-\\frac{E_\\text{A}}{k_\\text{B}T}\\right)"
},
{
"math_id": 44,
"text": "\\mu_{0}"
},
{
"math_id": 45,
"text": "E_\\text{A}"
},
{
"math_id": 46,
"text": "k_\\text{B}"
},
{
"math_id": 47,
"text": "T"
},
{
"math_id": 48,
"text": "P_{ij}"
},
{
"math_id": 49,
"text": "i"
},
{
"math_id": 50,
"text": "j"
},
{
"math_id": 51,
"text": "r_{ij}"
},
{
"math_id": 52,
"text": "\\Delta E_{ij}"
},
{
"math_id": 53,
"text": "P_{ij} = P_{0}\\exp\\left(-2\\alpha r_{ij} - \\frac{\\Delta E_{ij}}{k_{B} T}\\right)"
},
{
"math_id": 54,
"text": "P_{0}"
},
{
"math_id": 55,
"text": "\\alpha"
},
{
"math_id": 56,
"text": "\\mu=\\mu_{0} \\exp \\left(-\\left [ \\frac{T_{0}}{T} \\right ]^{-1/(d+1)}\\right)"
},
{
"math_id": 57,
"text": "T_{0}"
},
{
"math_id": 58,
"text": "d"
},
{
"math_id": 59,
"text": "\\mathbf F_{Hn} = -q(\\mathbf v_n \\times \\mathbf B_z)"
},
{
"math_id": 60,
"text": "\\mathbf F_{Hp} = +q(\\mathbf v_p \\times \\mathbf B_z)"
},
{
"math_id": 61,
"text": "\\mathbf F_y = (-q)\\xi_y + (-q)[\\mathbf v_n \\times\\mathbf B_z] = 0"
},
{
"math_id": 62,
"text": "\\Rightarrow -q\\xi_y + qv_xB_z = 0"
},
{
"math_id": 63,
"text": " \\xi_y = v_xB_z"
},
{
"math_id": 64,
"text": "I = -qnv_xtW"
},
{
"math_id": 65,
"text": "\\xi_y = -\\frac{IB}{nqtW} = +\\frac{R_{Hn}IB}{tW}"
},
{
"math_id": 66,
"text": "R_{Hn} = -\\frac{1}{nq}"
},
{
"math_id": 67,
"text": "\\xi_y = \\frac{V_H}{W}"
},
{
"math_id": 68,
"text": "R_{Hn} = -\\frac{1}{nq} = \\frac{V_{Hn}t}{IB}"
},
{
"math_id": 69,
"text": "R_{Hp} = \\frac{1}{pq} = \\frac{V_{Hp}t}{IB}"
},
{
"math_id": 70,
"text": "\\begin{align}\n\\mu_n &= \\left(-nq\\right) \\mu_n \\left(-\\frac{1}{nq}\\right) \\\\\n&= -\\sigma_n R_{Hn} \\\\\n&= -\\frac{\\sigma_n V_{Hn} t}{IB}\n\\end{align}"
},
{
"math_id": 71,
"text": "\\mu_p = \\frac{\\sigma_p V_{Hp}t}{IB}"
},
{
"math_id": 72,
"text": "\\mu = m_\\text{sat}^2 \\frac{2L}{W} \\frac{1}{C_i}"
},
{
"math_id": 73,
"text": "I_D = \\frac{\\mu C_i}{2}\\frac{W}{L}(V_{GS}-V_{th})^2."
},
{
"math_id": 74,
"text": "I_D \\propto V_{GS}"
},
{
"math_id": 75,
"text": "\\mu = m_\\text{lin} \\frac{L}{W} \\frac{1}{V_{DS}} \\frac{1}{C_i}."
},
{
"math_id": 76,
"text": "I_D= \\mu C_i \\frac{W}{L} \\left( (V_{GS}-V_{th})V_{DS}-\\frac{V_{DS}^2}{2} \\right)"
},
{
"math_id": 77,
"text": "\\phi\\Sigma\\mu=\\phi(\\mu_{e}+\\mu_{h})"
},
{
"math_id": 78,
"text": "\\phi"
},
{
"math_id": 79,
"text": "\\mu_{e}"
},
{
"math_id": 80,
"text": "\\mu_{h}"
},
{
"math_id": 81,
"text": "\\phi\\Sigma\\mu"
},
{
"math_id": 82,
"text": "10^{18}\\mathrm{cm}^{-3} "
},
{
"math_id": 83,
"text": "\\mu = \\mu_o + \\frac{\\mu_1}{1 + \\left(\\frac{N}{N_\\text{ref}}\\right)^\\alpha}"
},
{
"math_id": 84,
"text": "\\mu_n(N_D) = 65 + \\frac{1265}{1+ \\left(\\frac{N_D}{8.5\\times10^{16}}\\right)^{0.72}}"
},
{
"math_id": 85,
"text": "\\mu_p(N_A) = 48 + \\frac{447}{1+ \\left(\\frac{N_A}{6.3\\times10^{16}}\\right)^{0.76}}"
},
{
"math_id": 86,
"text": "\\mu_n(N_A) = 232 + \\frac{1180}{1+ \\left(\\frac{N_A}{8\\times10^{16}}\\right)^{0.9}}"
},
{
"math_id": 87,
"text": "\\mu_p(N_D) = 130 + \\frac{370}{1+ \\left(\\frac{N_D}{8\\times10^{17}}\\right)^{1.25}}"
}
]
| https://en.wikipedia.org/wiki?curid=760994 |
7609997 | Orbit determination | Orbit determination is the estimation of orbits of objects such as moons, planets, and spacecraft. One major application is to allow tracking newly observed asteroids and verify that they have not been previously discovered. The basic methods were discovered in the 17th century and have been continuously refined.
"Observations" are the raw data fed into orbit determination algorithms. Observations made by a ground-based observer typically consist of time-tagged azimuth, elevation, range, and/or range rate values. Telescopes or radar apparatus are used, because naked-eye observations are inadequate for precise orbit determination. With more or better observations, the accuracy of the orbit determination process also improves, and fewer "false alarms" result.
After orbits are determined, mathematical propagation techniques can be used to predict the future positions of orbiting objects. As time goes by, the actual path of an orbiting object tends to diverge from the predicted path (especially if the object is subject to difficult-to-predict perturbations such as atmospheric drag), and a new orbit determination using new observations serves to re-calibrate knowledge of the orbit.
Satellite tracking is another major application. For the United States and partner countries, to the extent that optical and radar resources allow, the Joint Space Operations Center gathers observations of all objects in Earth orbit. The observations are used in new orbit determination calculations that maintain the overall accuracy of the satellite catalog. Collision avoidance calculations may use this data to calculate the probability that one orbiting object will collide with another. A satellite's operator may decide to adjust the orbit if the risk of collision in the present orbit is unacceptable. (It is not practical to adjust the orbit for every event of very low probability; doing so would soon use up the propellant the satellite carries for orbital station-keeping.) Other countries, including Russia and China, have similar tracking assets.
History.
Orbit determination has a long history, beginning with the prehistoric discovery of the planets and subsequent attempts to predict their motions. Johannes Kepler used Tycho Brahe's careful observations of Mars to deduce the elliptical shape of its orbit and its orientation in space, deriving his three laws of planetary motion in the process.
The mathematical methods for orbit determination originated with the publication in 1687 of the first edition of Newton's "Principia", which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmund Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Lambert in 1761–1777.
Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of celestial coordinates) to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets.
Observational data.
In order to determine the unknown orbit of a body, some observations of its motion with time are required. In early modern astronomy, the only available observational data for celestial objects were the right ascension and declination, obtained by observing the body as it moved in its observation arc, relative to the fixed stars, using an optical telescope. This corresponds to knowing the object's relative direction in space, measured from the observer, but without knowledge of the distance of the object, i.e. the resultant measurement contains only direction information, like a unit vector.
With radar, relative distance measurements (by timing of the radar echo) and relative velocity measurements (by measuring the Doppler effect of the radar echo) are possible using radio telescopes. However, the returned signal strength from radar decreases rapidly, as the inverse fourth power of the range to the object. This generally limits radar observations to objects relatively near the Earth, such as artificial satellites and Near-Earth objects. Larger apertures permit tracking of transponders on interplanetary spacecraft throughout the solar system, and radar astronomy of natural bodies.
Various space agencies and commercial providers operate tracking networks to provide these observations. See for a partial listing. Space-based tracking of satellites is also regularly performed. See List of radio telescopes#Space-based and Space Network.
Methods.
Orbit determination must take into account that the apparent celestial motion of the body is influenced by the observer's own motion. For instance, an observer on Earth tracking an asteroid must take into account the motion of the Earth around the Sun, the rotation of the Earth, and the observer's local latitude and longitude, as these affect the apparent position of the body.
A key observation is that (to a close approximation) all objects move in orbits that are conic sections, with the attracting body (such as the Sun or the Earth) in the prime focus, and that the orbit lies in a fixed plane. Vectors drawn from the attracting body to the body at different points in time will all lie in the orbital plane.
If the position and velocity relative to the observer are available (as is the case with radar observations), these observational data can be adjusted by the known position and velocity of the observer relative to the attracting body at the times of observation. This yields the position and velocity with respect to the attracting body. If two such observations are available, along with the time difference between them, the orbit can be determined using Lambert's method, invented in the 18th century. See Lambert's problem for details.
Even if no distance information is available, an orbit can still be determined if three or more observations of the body's right ascension and declination have been made. Gauss's method, made famous in his 1801 "recovery" of the first lost minor planet, Ceres, has been subsequently polished.
One use is in the determination of asteroid masses via the dynamic method. In this procedure Gauss's method is used twice, both before and after a close interaction between two asteroids. After both orbits have been determined the mass of one or both of the asteroids can be worked out.
Orbit determination from a state vector.
The basic orbit determination task is to determine the classical orbital elements or Keplerian elements, formula_0, from the orbital state vectors [formula_1], of an orbiting body with respect to the reference frame of its central body. The central bodies are the sources of the gravitational forces, like the Sun, Earth, Moon and other planets. The orbiting bodies, on the other hand, include planets around the Sun, artificial satellites around the Earth, and spacecraft around planets. Newton's laws of motion will explain the trajectory of an orbiting body, known as Keplerian orbit.
The steps of orbit determination from one state vector are summarized as follows:
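In outline, one computes the specific angular momentum vector, the node vector, and the eccentricity vector from the state vector; then the semi-latus rectum and the semi-major axis; and finally the inclination, the right ascension of the ascending node, the argument of periapsis, and the true anomaly, with a sign check on each inverse cosine to select the correct quadrant. A minimal code sketch of this procedure (illustrative only: it assumes a non-circular, non-equatorial, non-parabolic orbit, and the sample state vector and units, km and km/s with Earth's gravitational parameter, are arbitrary choices):
<syntaxhighlight lang="python">
import numpy as np

def rv_to_elements(r_vec, v_vec, mu):
    """Classical orbital elements (a, e, i, RAAN, argument of periapsis,
    true anomaly) from a position/velocity state vector in the central
    body's frame.  Angles are returned in radians."""
    r = np.linalg.norm(r_vec)
    v = np.linalg.norm(v_vec)

    h_vec = np.cross(r_vec, v_vec)                    # specific angular momentum
    h = np.linalg.norm(h_vec)
    n_vec = np.cross([0.0, 0.0, 1.0], h_vec)          # node vector
    n = np.linalg.norm(n_vec)

    e_vec = ((v**2 - mu / r) * r_vec - np.dot(r_vec, v_vec) * v_vec) / mu
    e = np.linalg.norm(e_vec)                         # eccentricity

    p = h**2 / mu                                     # semi-latus rectum
    a = p / (1.0 - e**2)                              # semi-major axis (e != 1)

    i = np.arccos(h_vec[2] / h)                       # inclination
    raan = np.arccos(n_vec[0] / n)                    # right ascension of ascending node
    if n_vec[1] < 0.0:
        raan = 2.0 * np.pi - raan
    argp = np.arccos(np.dot(n_vec, e_vec) / (n * e))  # argument of periapsis
    if e_vec[2] < 0.0:
        argp = 2.0 * np.pi - argp
    nu = np.arccos(np.dot(e_vec, r_vec) / (e * r))    # true anomaly
    if np.dot(r_vec, v_vec) < 0.0:
        nu = 2.0 * np.pi - nu
    return a, e, i, raan, argp, nu

# Illustrative low-Earth-orbit state vector (km, km/s); mu_Earth ~ 398600 km^3/s^2.
r0 = np.array([7000.0, -1000.0, 200.0])
v0 = np.array([1.0, 7.3, 1.0])
print(rv_to_elements(r0, v0, 398600.4))
</syntaxhighlight>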
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a, e, i, \\Omega, \\omega, \\nu"
},
{
"math_id": 1,
"text": "\\vec{r}, \\vec{v}"
},
{
"math_id": 2,
"text": "\\vec{h}"
},
{
"math_id": 3,
"text": "\\vec{h} = \\vec{r} \\times \\vec{v} = \\left| \\vec{h} \\right| \\vec{k} = h\\vec{k},"
},
{
"math_id": 4,
"text": "\\vec{k}"
},
{
"math_id": 5,
"text": "\\vec{n}"
},
{
"math_id": 6,
"text": "\\vec{K}"
},
{
"math_id": 7,
"text": "\\vec{n} = \\vec{K} \\times \\vec{h}."
},
{
"math_id": 8,
"text": "\\vec{e}"
},
{
"math_id": 9,
"text": "e"
},
{
"math_id": 10,
"text": "\\vec{i}"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\vec{e} &= {\\vec{v}\\times\\vec{h}\\over{\\mu}} - {\\vec{r}\\over{\\left|\\vec{r}\\right|}} = e \\vec{i}\\\\\n&= \\left ( {{\\left |\\vec{v} \\right |}^2 \\over {\\mu} }- {1 \\over{\\left|\\vec{r}\\right|}} \\right ) \\vec{r} - {\\vec{r} \\cdot \\vec{v} \\over{\\mu}} \\vec{v} \\\\\n&= \\frac{1}{\\mu} \\left[ \\left( {{\\left |\\vec{v} \\right |}^2 }- {\\mu \\over{\\left|\\vec{r}\\right|}} \\right ) \\vec{r} - {(\\vec{r} \\cdot \\vec{v})} \\vec{v} \\right]\n\\end{align}"
},
{
"math_id": 12,
"text": "e = \\left| \\vec{e} \\right| "
},
{
"math_id": 13,
"text": "\\mu = GM"
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "e = 1"
},
{
"math_id": 19,
"text": "p = \\frac{h^2}{\\mu} = a (1-e^2)"
},
{
"math_id": 20,
"text": "a = \\frac{p}{1-e^2},"
},
{
"math_id": 21,
"text": "e \\ne 1"
},
{
"math_id": 22,
"text": "i"
},
{
"math_id": 23,
"text": "\\begin{align}\n\\cos(i) &= \\frac{\\vec{K}\\cdot\\vec{h}}{h} = \\frac{h_K}{h} \\\\\n\\Rightarrow i &= \\arccos\\left(\\frac{\\vec{K}\\cdot\\vec{h}}{h}\\right), & i \\in [0,180^\\circ],\n\\end{align}"
},
{
"math_id": 24,
"text": "h_K"
},
{
"math_id": 25,
"text": "\\Omega"
},
{
"math_id": 26,
"text": "\\begin{align}\n\\cos(\\Omega) &= \\frac{\\vec{I}\\cdot\\vec{n}}{n} = \\frac{n_I}{n} = \\cos(360 -\\Omega) \\\\\n\\Rightarrow \\Omega &= \\arccos\\left(\\frac{\\vec{I}\\cdot\\vec{n}}{n}\\right) = \\Omega_0, \\text{ or } \\\\\n\\Rightarrow \\Omega &= 360^\\circ - \\Omega_0, \\text{ if } n_J < 0, \\\\\n\\end{align}"
},
{
"math_id": 27,
"text": "n_I"
},
{
"math_id": 28,
"text": "n_J"
},
{
"math_id": 29,
"text": "\\cos(A)=\\cos(-A)=\\cos(360-A)=C"
},
{
"math_id": 30,
"text": "\\arccos(C)"
},
{
"math_id": 31,
"text": "A"
},
{
"math_id": 32,
"text": "360-A"
},
{
"math_id": 33,
"text": "\\cos"
},
{
"math_id": 34,
"text": "360 - A"
},
{
"math_id": 35,
"text": "\\omega"
},
{
"math_id": 36,
"text": "\\begin{align}\n\\cos(\\omega) &= \\frac{\\vec{n}\\cdot\\vec{e}}{n e} = \\cos(360 -\\omega) \\\\\n\\Rightarrow \\omega &= \\arccos\\left(\\frac{\\vec{n}\\cdot\\vec{e}}{n e}\\right) = \\omega_0, \\text{ or } \\\\\n\\Rightarrow \\omega &= 360^\\circ - \\omega_0, \\text{ if } e_K < 0, \\\\\n\\end{align}"
},
{
"math_id": 37,
"text": "e_K"
},
{
"math_id": 38,
"text": "\\nu"
},
{
"math_id": 39,
"text": "\\begin{align}\n\\cos(\\nu) &= \\frac{\\vec{e}\\cdot\\vec{r}}{e r} = \\cos(360 -\\nu) \\\\\n\\Rightarrow \\nu &= \\arccos\\left(\\frac{\\vec{e}\\cdot\\vec{r}}{e r}\\right) = \\nu_0, \\text{ or } \\\\\n\\Rightarrow \\nu &= 360^\\circ - \\nu_0, \\text{ if } \\vec{r}\\cdot\\vec{v} < 0.\\\\\n\\end{align}"
},
{
"math_id": 40,
"text": "\\vec{r}\\cdot\\vec{v}"
},
{
"math_id": 41,
"text": "\\arccos"
},
{
"math_id": 42,
"text": "\\phi"
},
{
"math_id": 43,
"text": "\\nu \\in [0,180^\\circ]"
},
{
"math_id": 44,
"text": "\\nu \\in [180^\\circ,360^\\circ]"
},
{
"math_id": 45,
"text": "h = r v \\sin(90-\\phi)"
},
{
"math_id": 46,
"text": "\\vec{r}\\cdot\\vec{v} = r v \\cos(90-\\phi) = h \\tan(\\phi)"
},
{
"math_id": 47,
"text": "u=\\omega+\\nu"
},
{
"math_id": 48,
"text": "\\begin{align}\n\\cos(u) &= \\frac{\\vec{n}\\cdot\\vec{r}}{n r} = \\cos(360 -u) \\\\\n\\Rightarrow u &= \\arccos\\left(\\frac{\\vec{n}\\cdot\\vec{r}}{n r}\\right) = u_0, \\text{ or } \\\\\n\\Rightarrow u &= 360^\\circ - u_0, \\text{ if } r_K < 0, \\\\\n\\end{align}"
},
{
"math_id": 49,
"text": "r_K"
},
{
"math_id": 50,
"text": "\\vec{r}"
}
]
| https://en.wikipedia.org/wiki?curid=7609997 |
761055 | Euler's Disk | Scientific educational toy
Euler's Disk, invented between 1987 and 1990 by Joseph Bendik, is a trademarked scientific educational toy. It is used to illustrate and study the dynamic system of a spinning and rolling disk on a flat or curved surface. It has been the subject of several scientific papers.
Discovery.
Joseph Bendik first noted the interesting motion of the spinning disk while working at Hughes Aircraft (Carlsbad Research Center) after spinning a heavy polishing chuck on his desk at lunch one day.
The apparatus is a dramatic visualization of energy exchanges in three different, tightly coupled processes. As the disk gradually decreases its azimuthal rotation, there is also a decrease in amplitude and increase in the frequency of the disk's axial precession.
The evolution of the disk's axial precession is easily visualized in a slow motion video by looking at the side of the disk following a single point marked on the disk. The evolution of the rotation of the disk is easily visualized in slow motion by looking at the top of the disk following an arrow drawn on the disk representing its radius.
As the disk releases the initial energy given by the user and approaches a halt, its rotation about the vertical axis slows, while its contact point oscillation increases. Lit from above, its contact point and nearby lower edge in shadow, the disk appears to levitate before halting.
Bendik named the toy after mathematician Leonhard Euler.
The commercial toy consists of a heavy, thick chrome-plated steel disk and a rigid, slightly concave, mirrored base. Included holographic magnetic stickers can be attached to the disk, to enhance the visual effect of wobbling. These attachments may make it harder to see and understand the processes at work, however.
When spun on a flat surface, the disk exhibits a spinning/rolling motion, slowly progressing through varying rates and types of motion before coming to rest. Most notably, the precession rate of the disk's axis of symmetry increases as the disk spins down. The mirror base provides a low-friction surface; its slight concavity keeps the disk from "wandering" off the surface.
Any disk, spun on a reasonably flat surface (such as a coin spun on a table), will exhibit essentially the same type of motion as an Euler Disk, but for a much shorter time. Commercial disks provide a more effective demonstration of the phenomenon, having an optimized aspect ratio and a precision polished, slightly rounded edge to maximize the spinning/rolling time.
Physics.
A spinning/rolling disk ultimately comes to rest quite abruptly, the final stage of motion being accompanied by a whirring sound of rapidly increasing frequency. As the disk rolls, the point of rolling contact describes a circle that oscillates with a constant angular velocity formula_0. If the motion is non-dissipative (frictionless), formula_0 is constant, and the motion persists forever; this is contrary to observation, since formula_0 is not constant in real life situations. In fact, the precession rate of the axis of symmetry approaches a finite-time singularity modeled by a power law with exponent approximately −1/3 (depending on specific conditions).
There are two conspicuous dissipative effects: rolling friction when the disk slips along the surface, and air drag from the resistance of air. Experiments show that rolling friction is mainly responsible for the dissipation and behavior—experiments in a vacuum show that the absence of air affects behavior only slightly, while the behavior (precession rate) depends systematically on coefficient of friction. In the limit of small angle (i.e. immediately before the disk stops spinning), air drag (specifically, viscous dissipation) is the dominant factor, but prior to this end stage, rolling friction is the dominant effect.
Steady motion with the disk center at rest.
The behavior of a spinning disk whose center is at rest can be described as follows. Let the line from the center of the disk to the point of contact with the plane be called axis formula_1. Since the center of the disk and the point of contact are instantaneously at rest (assuming there is no slipping) axis formula_1 is the instantaneous axis of
rotation. The angular momentum is formula_2 which holds for any thin, circularly symmetric disk with mass formula_3; formula_4 for a disk with mass concentrated at the rim, formula_5 for a uniform disk (like Euler disk), formula_6 is the radius of the disk, and formula_0 is the angular velocity along formula_1.
The contact force formula_7 is formula_8 where formula_9 is the gravitational acceleration and formula_10 is the vertical axis pointing upwards. The torque about the center of mass is formula_11 which we can rewrite as formula_12 where formula_13. We can conclude that both the angular momentum formula_14 and the disk are precessing about the vertical axis formula_10 at the rate formula_13; this relation is referred to below as equation (1). At the same time formula_15 is the angular velocity of the point of contact with the plane. Let us define axis formula_16 to lie along the symmetry axis of the disk, pointing downwards. Then it holds that formula_17, where formula_18 is the inclination angle of the disk with respect to the horizontal plane. The angular velocity can be thought of as composed of two parts formula_19, where formula_20 is the angular velocity of the disk along its symmetry axis. From the geometry we easily conclude that:
formula_21
Plugging formula_22 into equation (1), we finally get the precession rate Ω = (g/(ak sin α))^1/2; this result is referred to below as equation (2).
As formula_18 adiabatically approaches zero, the angular velocity of the point of contact formula_15 becomes very large, and one hears a high-frequency sound associated with the spinning disk. However, the rotation of the figure on the face of the coin, whose angular velocity is formula_23, approaches zero. The total angular velocity formula_24 also vanishes, as does the total energy
formula_25
as formula_18 approaches zero. Here we have used the equation (2).
As formula_18 approaches zero the disk finally loses contact with the table and then quickly settles onto the horizontal surface. One hears sound at the frequency formula_26, equal to formula_27, which becomes dramatically higher as the disk flattens, while the figure rotation rate formula_28 slows toward zero, until the sound abruptly ceases.
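The steady-motion relations above are easy to evaluate numerically. The sketch below tabulates the rolling (precession) frequency formula_26, the rotation rate of the figure on the disk's face, and the total energy as the inclination decreases, taking formula_5 (a uniform disk) and assumed, roughly toy-sized values for the radius and mass (illustrative numbers, not the manufacturer's specifications):
<syntaxhighlight lang="python">
import numpy as np

a = 0.038     # disk radius in m      (assumed, roughly toy-sized)
M = 0.44      # disk mass in kg       (assumed)
g = 9.81      # gravitational acceleration, m/s^2
k = 0.25      # k = 1/4 for a uniform disk (1/2 for mass concentrated at the rim)

for alpha_deg in (30.0, 10.0, 3.0, 1.0):
    alpha = np.radians(alpha_deg)
    Omega = np.sqrt(g / (a * k * np.sin(alpha)))    # precession rate, equation (2)
    figure_rate = Omega * (1.0 - np.cos(alpha))     # rotation rate of the face figure
    energy = 1.5 * M * g * a * np.sin(alpha)        # total energy in steady motion
    print(f"alpha = {alpha_deg:4.1f} deg: "
          f"rolling frequency = {Omega / (2.0 * np.pi):6.1f} Hz, "
          f"figure rate = {figure_rate / (2.0 * np.pi):5.2f} Hz, "
          f"E = {1000.0 * energy:5.1f} mJ")
</syntaxhighlight>
As the text describes, the rolling frequency grows while the figure rotation rate and the energy fall as the inclination shrinks.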
Levitation Illusion.
As a circularly symmetric disk settles, the separation between a fixed point
on the supporting surface and the moving disk above oscillates at increasing frequency, in sync with the rotation axis angle off vertical.
The levitation illusion results when the disk edge reflects light when tilted slightly up above the supporting surface, and in shadow when tilted slightly down in contact. The shadow is not perceived, and the rapidly flashing reflections from the edge above supporting surface are perceived as steady elevation. See persistence of vision.
The levitation illusion can be enhanced by optimizing the curve of the lower edge so the shadow line remains high as the disk settles. A mirror can further enhance the effect by hiding the support surface and showing separation between
moving disk surface and mirror image.
Disk imperfections, seen in shadow, that could hamper the illusion, can be hidden in a skin pattern that blurs under motion.
US Quarter example.
A clean US Quarter (minted 1970-2022), rotating on a flat hand mirror, viewed from the side near the mirror surface, demonstrates the phenomenon for a few seconds.
Lit by a point source directly over the center of the soon-to-settle quarter, the side ridges are illuminated when the rotation axis tilts away from the viewer, and in shadow when the rotation axis tilts toward the viewer. Vibration blurs the ridges, and the heads or tails face is too foreshortened to show rotation.
History of research.
Moffatt.
In the early 2000s, research was sparked by an article in the April 20, 2000 edition of "Nature", where Keith Moffatt showed that viscous dissipation in the thin layer of air between the disk and the table would be sufficient to account for the observed abruptness of the settling process. He also showed that the motion concluded in a finite-time singularity. His first theoretical hypothesis was contradicted by subsequent research, which showed that rolling friction is actually the dominant factor.
Moffatt showed that, as time formula_29 approaches a particular time formula_30 (which is mathematically a constant of integration), the viscous dissipation approaches infinity. The singularity that this implies is not realized in practice, because the magnitude of the vertical acceleration cannot exceed the acceleration due to gravity (the disk loses contact with its support surface). Moffatt goes on to show that the theory breaks down at a time formula_31 before the final settling time formula_30, given by:
formula_32
where formula_6 is the radius of the disk, formula_9 is the acceleration due to Earth's gravity, formula_33 the dynamic viscosity of air, and formula_3 the mass of the disk. For the commercially available Euler's Disk toy (see link in "External links" below), formula_31 is about formula_34 seconds, at which time the angle between the coin and the surface, formula_18, is approximately 0.005 radians and the rolling angular velocity, formula_15, is about 500 Hz.
Using the above notation, the total spinning/rolling time is:
formula_35
where formula_36 is the initial inclination of the disk, measured in radians. Moffatt also showed that, if formula_37, the finite-time singularity in formula_15 is given by
formula_38
Experimental results.
Moffatt's theoretical work inspired several other workers to experimentally investigate the dissipative mechanism of a spinning/rolling disk, with results that partially contradicted his explanation. These experiments used spinning objects and surfaces of various geometries (disks and rings), with varying coefficients of friction, both in air and in a vacuum, and used instrumentation such as high speed photography to quantify the phenomenon.
In the 30 November 2000 issue of "Nature", physicists Van den Engh, Nelson and Roach discuss experiments in which disks were spun in a vacuum. Van den Engh used a rijksdaalder, a Dutch coin, whose magnetic properties allowed it to be spun at a precisely determined rate. They found that slippage between the disk and the surface could account for observations, and the presence or absence of air only slightly affected the disk's behavior. They pointed out that Moffatt's theoretical analysis would predict a very long spin time for a disk in a vacuum, which was not observed.
Moffatt responded with a generalized theory that should allow experimental determination of which dissipation mechanism is dominant, and pointed out that the dominant dissipation mechanism would always be viscous dissipation in the limit of small formula_18 (i.e., just before the disk settles).
Later work at the University of Guelph by Petrie, Hunt and Gray showed that carrying out the experiments in a vacuum (pressure 0.1 pascal) did not significantly affect the energy dissipation rate. Petrie "et al." also showed that the rates were largely unaffected by replacing the disk with a ring shape, and that the no-slip condition was satisfied for angles greater than 10°. Another work by Caps, Dorbolo, Ponte, Croisier, and Vandewalle has concluded that the air is a minor source of energy dissipation. The major energy dissipation process is the rolling and slipping of the disk on the supporting surface. It was experimentally shown that the inclination angle, the precession rate, and the angular velocity follow the power law behavior.
On several occasions during the 2007–2008 Writers Guild of America strike, talk show host Conan O'Brien would spin his wedding ring on his desk, trying to spin the ring for as long as possible. The quest to achieve longer and longer spin times led him to invite MIT professor Peter Fisher onto the show to experiment with the problem. Spinning the ring in a vacuum had no identifiable effect, while a Teflon spinning support surface gave a record time of 51 seconds, corroborating the claim that rolling friction is the primary mechanism for kinetic energy dissipation.
Various kinds of rolling friction as primary mechanism for energy dissipation have been studied by Leine who confirmed experimentally that the frictional resistance of the movement of the contact point over the rim of the disk is most likely the primary dissipation mechanism on a time-scale of seconds.
In popular culture.
Euler's Disks appear in the 2006 film "Snow Cake" and in the TV show "The Big Bang Theory", season 10, episode 16, which aired February 16, 2017.
The sound team for the 2001 film "Pearl Harbor" used a spinning Euler's Disk as a sound effect for torpedoes. A short clip of the sound team playing with Euler's Disk was played during the Academy Awards presentations.
The principles of the Euler Disk were used with specially made rings on a table as a futuristic recording medium in the 1960 movie "The Time Machine".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "\\widehat{\\mathbf{3}}"
},
{
"math_id": 2,
"text": "\\mathbf{L} =kMa^2\\omega\\widehat{\\mathbf{3}}"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "k=1/2"
},
{
"math_id": 5,
"text": "k=1/4"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "\\mathbf{F}"
},
{
"math_id": 8,
"text": "M g \\widehat{\\mathbf{z}}"
},
{
"math_id": 9,
"text": "g"
},
{
"math_id": 10,
"text": "\\widehat{\\mathbf{z}}"
},
{
"math_id": 11,
"text": "\\mathbf{N}=a \\widehat{\\mathbf{3}} \\times Mg\\widehat{\\mathbf{z}}=\\frac{d\\mathbf{L}}{dt}"
},
{
"math_id": 12,
"text": "\\frac{d\\mathbf{L}}{dt}= \\boldsymbol{\\Omega}\\times\\mathbf{L}"
},
{
"math_id": 13,
"text": "\\boldsymbol{\\Omega} = - \\frac{g}{ak\\omega} \\widehat{\\mathbf{z}}"
},
{
"math_id": 14,
"text": "\\mathbf{L}"
},
{
"math_id": 15,
"text": "\\Omega"
},
{
"math_id": 16,
"text": "\\widehat{\\mathbf{1}}"
},
{
"math_id": 17,
"text": "\\widehat{\\mathbf{z}} = - \\cos \\alpha \\widehat{\\mathbf{1}} - \\sin \\alpha \\widehat{\\mathbf{3}}"
},
{
"math_id": 18,
"text": "\\alpha"
},
{
"math_id": 19,
"text": "\\omega\\widehat{\\mathbf{3}} = \\Omega \\widehat{\\mathbf{z}} + \\omega_\\text{rel} \\widehat{\\mathbf{1}} "
},
{
"math_id": 20,
"text": "\\omega_\\text{rel}"
},
{
"math_id": 21,
"text": "\\begin{align}\n \\omega &= -\\Omega \\sin \\alpha, \\\\\n \\omega_\\text{rel} &= \\Omega \\cos \\alpha\\\\\n\\end{align}"
},
{
"math_id": 22,
"text": "\\omega = -\\Omega \\sin \\alpha"
},
{
"math_id": 23,
"text": "\\Omega - \\omega_\\text{rel} = \\Omega(1 - \\cos \\alpha),"
},
{
"math_id": 24,
"text": "\\omega=-\\sqrt{\\frac{g \\sin \\alpha}{a k}}"
},
{
"math_id": 25,
"text": "E=Mga\\sin \\alpha + \\tfrac{1}{2} kMa^2 \\omega^2 = Mga\\sin \\alpha + \\tfrac{1}{2} M k a^2 \\frac{g \\sin \\alpha}{a k} = \\tfrac{3}{2} M g a \\sin \\alpha "
},
{
"math_id": 26,
"text": "\\frac{\\Omega}{2\\pi}"
},
{
"math_id": 27,
"text": "\\frac{1}{2\\pi} \\sqrt{\\frac{g}{ak}} \\sqrt{\\frac{1}{\\sin \\alpha}}"
},
{
"math_id": 28,
"text": "2 \\sqrt{\\frac{g}{ak}} \\frac{(\\sin \\frac{\\alpha}{2})^2}{\\sqrt{\\sin \\alpha}}"
},
{
"math_id": 29,
"text": "t"
},
{
"math_id": 30,
"text": "t_0"
},
{
"math_id": 31,
"text": "\\tau"
},
{
"math_id": 32,
"text": "\\tau \\simeq \\left[\\left(\\frac{2a}{9g}\\right)^3 \\frac{2\\pi\\mu a}{M}\\right]^{1/5}"
},
{
"math_id": 33,
"text": "\\mu"
},
{
"math_id": 34,
"text": "10^{-2}"
},
{
"math_id": 35,
"text": "t_0 = \\frac{\\alpha_0^3 M}{2\\pi\\mu a}"
},
{
"math_id": 36,
"text": "\\alpha_0"
},
{
"math_id": 37,
"text": "t_0-t>\\tau"
},
{
"math_id": 38,
"text": "\\Omega\\sim(t_0-t)^{-1/6}"
}
]
| https://en.wikipedia.org/wiki?curid=761055 |
76115030 | Feigenbaum's First Constant | Mathematical constant
The first Feigenbaum constant δ is the limiting ratio of each bifurcation interval to the next between every period doubling, of a one-parameter map
formula_0
where "f"("x") is a function parameterized by the bifurcation parameter a.
It is given by the limit
formula_1
where "an" are discrete values of a at the "n"th period doubling.
Illustration.
Non-linear maps.
To see how this number arises, consider the real one-parameter map
formula_2
Here "a" is the bifurcation parameter, "x" is the variable. The values of "a" for which the period doubles (e.g. the largest value for "a" with no period-2 orbit, or the largest "a" with no period-4 orbit), are "a"1, "a"2 etc. These are tabulated below:
The ratio in the last column converges to the first Feigenbaum constant. The same number arises for the logistic map
formula_3
with real parameter "a" and variable "x". Tabulating the bifurcation values again:
Fractals.
In the case of the Mandelbrot set for complex quadratic polynomial
formula_4
the Feigenbaum constant is the limiting ratio between the diameters of successive circles on the real axis in the complex plane (see animation on the right).
The bifurcation parameter at each stage is a root point of the period-2"n" component. This series converges to the Feigenbaum point "c" = −1.401155..., and the ratios of successive intervals again converge to the first Feigenbaum constant.
Other maps also reproduce this ratio; in this sense the Feigenbaum constant in bifurcation theory is analogous to π in geometry and "e" in calculus.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_{i+1} = f(x_i),"
},
{
"math_id": 1,
"text": "\\delta = \\lim_{n \\to \\infty} \\frac{a_{n-1} - a_{n-2}}{a_n - a_{n-1}} = 4.669\\,201\\,609\\,\\ldots,"
},
{
"math_id": 2,
"text": "f(x)=a-x^2."
},
{
"math_id": 3,
"text": " f(x) = a x (1 - x) "
},
{
"math_id": 4,
"text": " f(z) = z^2 + c "
}
]
| https://en.wikipedia.org/wiki?curid=76115030 |
76115113 | Feigenbaum's second constant | Mathematical constant
The second Feigenbaum constant or Feigenbaum's alpha constant (sequence in the OEIS),
formula_0
is the ratio between the width of a tine and the width of one of its two subtines (except the tine closest to the fold). A negative sign is applied to "α" when the ratio between the lower subtine and the width of the tine is measured.
These numbers apply to a large class of dynamical systems (for example, dripping faucets to population growth).
A simple rational approximation is (13/11) × (17/11) × (37/27) = 8177/3267 ≈ 2.502908.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha = 2.502\\,907\\,875\\,095\\,892\\,822\\,283\\,902\\,873\\,218...,"
}
]
| https://en.wikipedia.org/wiki?curid=76115113 |
7611764 | Hall algebra | In mathematics, the Hall algebra is an associative algebra with a basis corresponding to isomorphism classes of finite abelian "p"-groups. It was first discussed by but forgotten until it was rediscovered by Philip Hall (1959), both of whom published no more than brief summaries of their work. The Hall polynomials are the structure constants of the Hall algebra. The Hall algebra plays an important role in the theory of Masaki Kashiwara and George Lusztig regarding canonical bases in quantum groups. generalized Hall algebras to more general categories, such as the category of representations of a quiver.
Construction.
A finite abelian "p"-group "M" is a direct sum of cyclic "p"-power components formula_0 where
formula_1 is a partition of formula_2 called the "type" of "M". Let formula_3 be the number of subgroups "N" of "M" such that "N" has type formula_4 and the quotient "M/N" has type formula_5. Hall proved that the functions "g" are polynomial functions of "p" with integer coefficients. Thus we may replace "p" with an indeterminate "q", which results in the Hall polynomials
formula_6
Hall next constructs an associative ring formula_7 over formula_8, now called the Hall algebra. This ring has a basis consisting of the symbols formula_9 and the structure constants of the multiplication in this basis are given by the Hall polynomials:
formula_10
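For example (a small standard case, included here only as an illustration): take both formula_5 and formula_4 to be the partition (1) with a single part. A group of type (2), cyclic of order "p"², contains exactly one subgroup of type (1), with quotient again of type (1), while a group of type (1,1), elementary abelian of rank 2, contains "p" + 1 such subgroups. The corresponding Hall polynomials are therefore 1 and "q" + 1, and the multiplication rule gives u_{(1)} u_{(1)} = u_{(2)} + ("q" + 1) u_{(1,1)}.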
It turns out that "H" is a commutative ring, freely generated by the elements formula_11 corresponding to the elementary "p"-groups. The linear map from "H" to the algebra of symmetric functions defined on the generators by the formula
formula_12
(where "e""n" is the "n"th elementary symmetric function) uniquely extends to a ring homomorphism and the images of the basis elements formula_9 may be interpreted via the Hall–Littlewood symmetric functions. Specializing "q" to 0, these symmetric functions become Schur functions, which are thus closely connected with the theory of Hall polynomials. | [
{
"math_id": 0,
"text": "C_{p^{\\lambda_i}},"
},
{
"math_id": 1,
"text": "\\lambda=(\\lambda_1,\\lambda_2,\\ldots)"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "g^\\lambda_{\\mu,\\nu}(p)"
},
{
"math_id": 4,
"text": "\\nu"
},
{
"math_id": 5,
"text": "\\mu"
},
{
"math_id": 6,
"text": "g^\\lambda_{\\mu,\\nu}(q)\\in\\mathbb{Z}[q]. \\, "
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": "\\mathbb{Z}[q]"
},
{
"math_id": 9,
"text": "u_\\lambda"
},
{
"math_id": 10,
"text": " u_\\mu u_\\nu = \\sum_\\lambda g^\\lambda_{\\mu,\\nu}(q) u_\\lambda. \\, "
},
{
"math_id": 11,
"text": "u_{\\mathbf1^n}"
},
{
"math_id": 12,
"text": "u_{\\mathbf 1^n} \\mapsto q^{-n(n-1)/2}e_n \\, "
}
]
| https://en.wikipedia.org/wiki?curid=7611764 |
76121355 | Filon quadrature | Integration method for oscillatory integrals
In numerical analysis, Filon quadrature or Filon's method is a technique for numerical integration of oscillatory integrals. It is named after English mathematician Louis Napoleon George Filon, who first described the method in 1934.
Description.
The method is applied to oscillatory definite integrals in the form:
formula_0
where formula_1 is a relatively slowly-varying function and formula_2 is either sine or cosine or a complex exponential that causes the rapid oscillation of the integrand, particularly for high frequencies. In Filon quadrature, the integration interval is divided into formula_3 subintervals of length formula_4, over which formula_1 is interpolated by parabolas. Since the integral over each pair of subintervals is then a Fourier integral of a quadratic polynomial, it can be evaluated in closed form by integration by parts. For the case of formula_5, the integration formula is given as:
formula_6
where
formula_7
formula_8
formula_9
formula_10
formula_11
formula_12
Explicit Filon integration formulas for sine and complex exponential functions can be derived similarly. The formulas above fail for small formula_13 values due to catastrophic cancellation; Taylor series approximations must be used in such cases to mitigate numerical errors, with formula_14 recommended as a possible switchover point for a 44-bit mantissa.
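The cosine formula above translates directly into code. A minimal sketch is given below (illustrative only: it omits the small-formula_13 Taylor-series switchover discussed above, and the integrand, interval, and frequency in the example are arbitrary choices with a known closed-form answer for comparison):
<syntaxhighlight lang="python">
import numpy as np

def filon_cos(f, a, b, k, N):
    """Filon quadrature for the integral of f(x)*cos(k*x) over [a, b],
    using 2N subintervals of length h."""
    h = (b - a) / (2 * N)
    theta = k * h
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    # Weights alpha, beta, gamma from the formulas above; for very small
    # theta these should be replaced by their Taylor expansions.
    alpha = (theta**2 + theta * sin_t * cos_t - 2.0 * sin_t**2) / theta**3
    beta = 2.0 * (theta * (1.0 + cos_t**2) - 2.0 * sin_t * cos_t) / theta**3
    gamma = 4.0 * (sin_t - theta * cos_t) / theta**3

    x = a + h * np.arange(2 * N + 1)
    fc = f(x) * np.cos(k * x)
    c_even = fc[0] / 2.0 + fc[2:-1:2].sum() + fc[-1] / 2.0   # C_2n (endpoints halved)
    c_odd = fc[1::2].sum()                                   # C_2n-1

    return h * (alpha * (f(b) * np.sin(k * b) - f(a) * np.sin(k * a))
                + beta * c_even + gamma * c_odd)

# Check: integral of exp(x)*cos(k*x) on [0, 1] equals (e*(cos k + k sin k) - 1)/(1 + k^2).
k = 100.0
exact = (np.e * (np.cos(k) + k * np.sin(k)) - 1.0) / (1.0 + k**2)
print(filon_cos(np.exp, 0.0, 1.0, k, N=50), exact)
</syntaxhighlight>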
Modifications, extensions and generalizations of Filon quadrature have been reported in numerical analysis and applied mathematics literature; these are known as Filon-type integration methods. These include Filon-trapezoidal and Filon–Clenshaw–Curtis methods.
Applications.
Filon quadrature is widely used in physics and engineering for robust computation of Fourier-type integrals. Applications include evaluation of oscillatory Sommerfeld integrals for electromagnetic and seismic problems in layered media and numerical solution to steady incompressible flow problems in fluid mechanics, as well as various different problems in neutron scattering, quantum mechanics and metallurgy.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_a^b f(x) g(x) dx"
},
{
"math_id": 1,
"text": "f(x)"
},
{
"math_id": 2,
"text": "g(x)"
},
{
"math_id": 3,
"text": "2N"
},
{
"math_id": 4,
"text": "h"
},
{
"math_id": 5,
"text": "g(x)=\\cos(kx)"
},
{
"math_id": 6,
"text": "\\int_a^b f(x) \\cos(kx) dx \\approx h ( \\alpha \\left[ f(b) \\sin(kb)-f(a) \\sin(ka)\\right] + \\beta C_{2n} + \\gamma C_{2n-1} )"
},
{
"math_id": 7,
"text": "\\alpha=\\left(\\theta^2 + \\theta \\sin(\\theta)\\cos(\\theta)-2 \\sin^2(\\theta)\\right)/\\theta^3"
},
{
"math_id": 8,
"text": "\\beta=2\\left[\\theta (1+\\cos^2(\\theta)) - 2\\sin(\\theta)\\cos(\\theta) \\right]/\\theta^3"
},
{
"math_id": 9,
"text": "\\gamma=4(\\sin(\\theta)-\\theta \\cos(\\theta))/\\theta^3"
},
{
"math_id": 10,
"text": "C_{2n}=\\frac{1}{2}f(a)\\cos(ka) + f(a+2h)\\cos(k(a+2h)) + f(a+4h)\\cos(k(a+4h)) + \\ldots + \\frac{1}{2}f(b)\\cos(kb)"
},
{
"math_id": 11,
"text": "C_{2n-1}=f(a+h)\\cos(k(a+h)) + f(a+3h)\\cos(k(a+3h)) + \\ldots + f(b-h)\\cos(k(b-h))"
},
{
"math_id": 12,
"text": "\\theta=kh"
},
{
"math_id": 13,
"text": "\\theta"
},
{
"math_id": 14,
"text": "\\theta=1/6"
}
]
| https://en.wikipedia.org/wiki?curid=76121355 |
761221 | Disk algebra | In mathematics, specifically in functional and complex analysis, the disk algebra "A"(D) (also spelled disc algebra) is the set of holomorphic functions
"ƒ" : D → formula_0,
(where D is the open unit disk in the complex plane formula_0) that extend to a continuous function on the closure of D. That is,
formula_1
where "H"∞(D) denotes the Banach space of bounded analytic functions on the unit disc D (i.e. a Hardy space).
When endowed with the pointwise addition, ("ƒ" + "g")("z") = "ƒ"("z") + "g"("z"), and pointwise multiplication, ("ƒg")("z") = "ƒ"("z")"g"("z"), this set becomes an algebra over C, since if "ƒ" and "g" belong to the disk algebra then so do "ƒ" + "g" and "ƒg".
Given the uniform norm,
formula_2
by construction it becomes a uniform algebra and a commutative Banach algebra.
By construction the disc algebra is a closed subalgebra of the Hardy space "H"∞. In contrast to the stronger requirement that a continuous extension to the circle exists, it is a lemma of Fatou that a general element of "H"∞ can be radially extended to the circle almost everywhere.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C}"
},
{
"math_id": 1,
"text": "A(\\mathbf{D}) = H^\\infty(\\mathbf{D})\\cap C(\\overline{\\mathbf{D}}),"
},
{
"math_id": 2,
"text": "\\|f\\| = \\sup\\{|f(z)|\\mid z\\in \\mathbf{D}\\}=\\max\\{ |f(z)|\\mid z\\in \\overline{\\mathbf{D}}\\},"
}
]
| https://en.wikipedia.org/wiki?curid=761221 |
76122969 | Action principles | Fundamental mechanical principles
Action principles lie at the heart of fundamental physics, from classical mechanics through quantum mechanics, particle physics, and general relativity. Action principles start with an energy function called a Lagrangian describing the physical system. The accumulated value of this energy function between two states of the system is called the action. Action principles apply the calculus of variations to the action. The action depends on the energy function, and the energy function depends on the position, motion, and interactions in the system: variation of the action allows the derivation of the equations of motion without vectors or forces.
Several distinct action principles differ in the constraints on their initial and final conditions.
The names of action principles have evolved over time and differ in details of the endpoints of the paths and the nature of the variation. Quantum action principles generalize and justify the older classical principles. Action principles are the basis for Feynman's version of quantum mechanics, general relativity and quantum field theory.
The action principles have applications as broad as physics, including many problems in classical mechanics but especially in modern problems of quantum mechanics and general relativity. These applications built up over two centuries as the power of the method and its further mathematical development rose.
This article introduces the action principle concepts and summarizes other articles with more details on concepts and specific principles.
Common concepts.
Action principles are "integral" approaches rather than the "differential" approach of Newtonian mechanics.162 The core ideas are based on energy, paths, an energy function called the Lagrangian along paths, and selection of a path according to the "action", a continuous sum or integral of the Lagrangian along the path.
Energy, not force.
Introductory study of mechanics, the science of interacting objects, typically begins with Newton's laws based on the concept of force, defined by the acceleration it causes when applied to mass: formula_0 This approach to mechanics focuses on a single point in space and time, attempting to answer the question: "What happens next?". Mechanics based on action principles begin with the concept of action, an energy tradeoff between kinetic energy and potential energy, defined by the physics of the problem. These approaches answer questions relating starting and ending points: Which trajectory will place a basketball in the hoop? If we launch a rocket to the Moon today, how can it land there in 5 days? The Newtonian and action-principle forms are equivalent, and either one can solve the same problems, but selecting the appropriate form will make solutions much easier.
The energy function in the action principles is not the total energy (conserved in an isolated system), but the Lagrangian, the difference between kinetic and potential energy. The kinetic energy combines the energy of motion for all the objects in the system; the potential energy depends upon the instantaneous position of the objects and drives the motion of the objects. The motion of the objects places them in new positions with new potential energy values, giving a new value for the Lagrangian.125
Using energy rather than force gives immediate advantages as a basis for mechanics. Force mechanics involves 3-dimensional vector calculus, with 3 space and 3 momentum coordinates for each object in the scenario; energy is a scalar magnitude combining information from all objects, giving an immediate simplification in many cases. The components of force vary with coordinate systems; the energy value is the same in all coordinate systems.xxv Force requires an inertial frame of reference;65 once velocities approach the speed of light, special relativity profoundly affects mechanics based on forces. In action principles, relativity merely requires a different Lagrangian: the principle itself is independent of coordinate systems.
Paths, not points.
The explanatory diagrams in force-based mechanics usually focus on a single point, like the center of momentum, and show vectors of forces and velocities. The explanatory diagrams of action-based mechanics have two points with actual and possible paths connecting them. These diagrammatic conventions reiterate the different strong points of each method.
Depending on the action principle, the two points connected by paths in a diagram may represent two particle positions at different times, or the two points may represent values in a configuration space or in a phase space. The mathematical technology and terminology of action principles can be learned by thinking in terms of physical space, then applied in the more powerful and general abstract spaces.
Action along a path.
Action principles assign a number—the action—to each possible path between two points. This number is computed by adding an energy value for each small section of the path multiplied by the time spent in that section:
action formula_1
where the form of the kinetic (formula_2) and potential (formula_3) energy expressions depend upon the physics problem, and their value at each point on the path depends upon relative coordinates corresponding to that point. The energy function is called a Lagrangian; in simple problems it is the kinetic energy minus the potential energy of the system.
Path variation.
A system moving between two points takes one particular path; other similar paths are not taken. Each path corresponds to a value of the action.
An action principle predicts or explains that the particular path taken has a stationary value for the system's action: similar paths near the one taken have very similar action value. This variation in the action value is key to the action principles.
The symbol formula_4 is used to indicate the path variations so an action principle appears mathematically as
formula_5
meaning that at the stationary point, the variation of the action formula_6 with some fixed constraints formula_7 is zero.38
For action principles, the stationary point may be a minimum or a saddle point, but not a maximum. Elliptical planetary orbits provide a simple example of two paths with equal action – one in each direction around the orbit; neither can be the minimum or "least action".175 The path variation implied by formula_4 is not the same as a differential like formula_8. The action integral depends on the coordinates of the objects, and these coordinates depend upon the path taken. Thus the action integral is a functional, a function of a function.
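The stationarity can be checked numerically for a simple system. In the sketch below (an illustration only, for a unit mass moving vertically in uniform gravity with both endpoints fixed), the classical path gives the smallest action, and perturbing the path changes the action only at second order in the perturbation:
<syntaxhighlight lang="python">
import numpy as np

m, g, T = 1.0, 9.81, 1.0                      # assumed illustrative values
t = np.linspace(0.0, T, 2001)

def action(x):
    """S = integral of (kinetic - potential) energy along the path x(t)."""
    v = np.gradient(x, t)
    lagrangian = 0.5 * m * v**2 - m * g * x
    return np.sum(0.5 * (lagrangian[1:] + lagrangian[:-1]) * np.diff(t))

# Classical path: solves x'' = -g with fixed endpoints x(0) = x(T) = 0.
x_classical = 0.5 * g * t * (T - t)

for eps in (0.0, 0.01, 0.1, -0.1):
    x = x_classical + eps * np.sin(np.pi * t / T)   # perturbation vanishing at the endpoints
    print(f"eps = {eps:5.2f}  S = {action(x):.6f}")
</syntaxhighlight>
Perturbations of equal size and opposite sign raise the action by the same amount, so the first-order variation vanishes at the classical path and the increase scales with the square of the perturbation.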
Conservation principles.
An important result from geometry known as Noether's theorem states that any conserved quantities in a Lagrangian imply a continuous symmetry and conversely. For examples, a Lagrangian independent of time corresponds to a system with conserved energy; spatial translation independence implies momentum conservation; angular rotation invariance implies angular momentum conservation.489
These examples are global symmetries, where the independence is itself independent of space or time; more general "local" symmetries having a functional dependence on space or time lead to gauge theory. The observed conservation of isospin was used by Chen Ning Yang and Robert Mills in 1953 to construct a gauge theory for mesons, leading some decades later to modern particle physics theory.202
Distinct principles.
Action principles apply to a wide variety of physical problems, including all of fundamental physics. The only major exceptions are cases involving friction or when only the initial position and velocities are given. Different action principles have different meaning for the variations; each specific application of an action principle requires a specific Lagrangian describing the physics. A common name for any or all of these principles is "the principle of least action". For a discussion of the names and historical origin of these principles see action principle names.
Fixed endpoints with conserved energy.
When total energy and the endpoints are fixed, Maupertuis's least action principle applies. For example, to score points in basketball the ball must leave the shooter's hand and go through the hoop, but the time of the flight is not constrained. Maupertuis's least action principle is written mathematically as the stationary condition
formula_9
on the abbreviated action
formula_10
(sometimes written formula_11), where formula_12 are the particle momenta or the conjugate momenta of generalized coordinates, defined by the equation
formula_13
where formula_14 is the Lagrangian. Some textbooks write formula_15 as formula_16, to emphasize that the variation used in this form of the action principle differs from Hamilton's variation. Here the total energy formula_17 is fixed during the variation, but not the time, the reverse of the constraints on Hamilton's principle. Consequently, the same path and end points take different times and energies in the two forms. The solutions in the case of this form of Maupertuis's principle are orbits: functions relating coordinates to each other in which time is simply an index or a parameter.
Time-independent potentials; no forces.
For a time-invariant system, the action formula_18 relates simply to the abbreviated action formula_19 on the stationary path as
formula_20
for energy formula_17 and time difference formula_21. For a rigid body with no net force, the actions are identical, and the variational principles become equivalent to Fermat's principle of least time:
formula_22
Fixed events.
When the physics problem gives the two endpoints as a position and a time, that is as events, Hamilton's action principle applies. For example, imagine planning a trip to the Moon. During your voyage the Moon will continue its orbit around the Earth: it's a moving target. Hamilton's principle for objects at positions formula_23 is written mathematically as
formula_24
The constraint formula_21 means that we only consider paths taking the same time, as well as connecting the same two points formula_25 and formula_26. The Lagrangian formula_27 is the difference between kinetic energy and potential energy at each point on the path.62 Solution of the resulting equations gives the world line formula_23. Starting with Hamilton's principle, the local differential Euler–Lagrange equation can be derived for systems of fixed energy. The action formula_18 in Hamilton's principle is the Legendre transformation of the action in Maupertuis' principle.
Classical field theory.
The concepts and many of the methods useful for particle mechanics also apply to continuous fields. The action integral runs over a Lagrangian density, but the concepts are so close that the density is often simply called the Lagrangian.15
Quantum action principles.
For quantum mechanics, the action principles have significant advantages: only one mechanical postulate is needed, if a covariant Lagrangian is used in the action, the result is relativistically correct, and they transition clearly to classical equivalents.128
Both Richard Feynman and Julian Schwinger developed quantum action principles based on early work by Paul Dirac. Feynman's integral method was not a variational principle but reduces to the classical least action principle; it led to his Feynman diagrams. Schwinger's differential approach relates infinitesimal amplitude changes to infinitesimal action changes.138
Feynman's action principle.
When quantum effects are important, new action principles are needed. Instead of a particle following a path, quantum mechanics defines a probability amplitude formula_28 at one point formula_29 and time formula_30 related to a probability amplitude at a different point later in time:
formula_31
where formula_32 is the classical action.
Instead of single path with stationary action, all possible paths add (the integral over formula_29), weighted by a complex probability amplitude formula_33. The phase of the amplitude is given by the action divided by the Planck constant or quantum of action: formula_34. When the action of a particle is much larger than formula_35, formula_36, the phase changes rapidly along the path: the amplitude averages to a small number.
Thus the Planck constant sets the boundary between classical and quantum mechanics.
All of the paths contribute in the quantum action principle. At the end point, where the paths meet, the paths with similar phases add, and those with phases differing by formula_37 subtract. Close to the path expected from classical physics, phases tend to align; the tendency is stronger for more massive objects that have larger values of action. In the classical limit, one path dominates – the path of stationary action.
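A rough order-of-magnitude comparison (with loosely assumed, illustrative numbers) makes the classical limit concrete: for an everyday object the action is enormous compared with formula_35, while for an electron bound in an atom the two are comparable:
<syntaxhighlight lang="python">
hbar = 1.054571817e-34        # reduced Planck constant, J*s

# A thrown ball: ~0.1 kg at ~10 m/s for ~1 s, so the action is of order
# its kinetic energy times the flight time, ~5 J*s.
S_ball = 0.5 * 0.1 * 10.0**2 * 1.0

# An electron in an atom: energies of ~10 eV over orbital times of ~1e-16 s,
# so the action is of order 1e-34 J*s.
S_electron = 10 * 1.602e-19 * 1e-16

print(S_ball / hbar)          # ~ 5e34: phases vary wildly, the classical path dominates
print(S_electron / hbar)      # ~ 1.5: many paths contribute, fully quantum
</syntaxhighlight>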
Schwinger's action principle.
Schwinger's approach relates variations in the transition amplitudes formula_38 to variations in an action matrix element:
formula_39
where the action operator is
formula_40
The Schwinger form makes analysis of variation of the Lagrangian itself, for example, variation in potential source strength, especially transparent.138
The optico-mechanical analogy.
For every path, the action integral builds in value from zero at the starting point to its final value at the end. Any nearby path has similar values at similar distances from the starting point. Lines or surfaces of constant partial action value can be drawn across the paths, creating a wave-like view of the action. Analysis like this connects particle-like rays of geometrical optics with the wavefronts of Huygens–Fresnel principle.
<templatestyles src="Template:Blockquote/styles.css" />
[Maupertuis] ... thus pointed to that remarkable analogy between optical and mechanical phenomena which was observed much earlier by John Bernoulli and which was later fully developed in Hamilton's ingenious optico-mechanical theory. This analogy played a fundamental role in the development of modern wave-mechanics.
Applications.
Action principles are applied to derive differential equations like the Euler–Lagrange equations or as direct applications to physical problems.
Classical mechanics.
Action principles can be directly applied to many problems in classical mechanics, e.g. the shape of elastic rods under load, the shape of a liquid between two vertical plates (a capillary), or the motion of a pendulum when its support is in motion.
Chemistry.
Quantum action principles are used in the quantum theory of atoms in molecules (QTAIM), a way of decomposing the computed electron density of molecules into atoms as a way of gaining insight into chemical bonding.
General relativity.
Inspired by Einstein's work on general relativity, the renowned mathematician David Hilbert applied the principle of least action to derive the field equations of general relativity.186 His action, now known as the Einstein–Hilbert action,
formula_41
contained a relativistically invariant volume element formula_42 and the Ricci scalar curvature formula_43. The scale factor formula_44 is the Einstein gravitational constant.
Other applications.
The action principle is so central in modern physics and mathematics that it is widely applied including in thermodynamics, fluid mechanics, the theory of relativity, quantum mechanics, particle physics, and string theory.
History.
The action principle is preceded by earlier ideas in optics. In ancient Greece, Euclid wrote in his "Catoptrica" that, for the path of light reflecting from a mirror, the angle of incidence equals the angle of reflection. Hero of Alexandria later showed that this path has the shortest length and least time.
Building on the early work of Pierre Louis Maupertuis, Leonhard Euler, and Joseph Louis Lagrange, who defined versions of the principle of least action,
William Rowan Hamilton and, in tandem, Carl Gustav Jacobi developed a variational form for classical mechanics known as the Hamilton–Jacobi equation.
In 1915, David Hilbert applied the variational principle to derive Albert Einstein's equations of general relativity.
In 1933, the physicist Paul Dirac demonstrated how this principle can be used in quantum calculations by discerning the quantum mechanical underpinning of the principle in the quantum interference of amplitudes. Subsequently Julian Schwinger and Richard Feynman independently applied this principle in quantum electrodynamics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F = ma."
},
{
"math_id": 1,
"text": "S = \\int_{t_1}^{t_2} \\big( \\text{KE}(t) - \\text{PE}(t)\\big) \\,dt,"
},
{
"math_id": 2,
"text": "\\text{KE}"
},
{
"math_id": 3,
"text": "\\text{PE}"
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "(\\delta A)_C = 0,"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "dt"
},
{
"math_id": 9,
"text": "\n (\\delta W)_E = 0\n"
},
{
"math_id": 10,
"text": "\n W[\\mathbf{q}]\\ \\stackrel{\\text{def}}{=}\\ \\int_{q_1}^{q_2} \\mathbf{p} \\cdot \\mathbf{dq},\n"
},
{
"math_id": 11,
"text": "S_0"
},
{
"math_id": 12,
"text": "\\mathbf{p} = (p_1, p_2, \\ldots, p_N)"
},
{
"math_id": 13,
"text": "\n p_k\\ \\stackrel{\\text{def}}{=}\\ \\frac{\\partial L}{\\partial\\dot{q}_k},\n"
},
{
"math_id": 14,
"text": "L(\\mathbf{q}, \\dot{\\mathbf{q}}, t)"
},
{
"math_id": 15,
"text": "(\\delta W)_E = 0"
},
{
"math_id": 16,
"text": "\\Delta S_0"
},
{
"math_id": 17,
"text": "E"
},
{
"math_id": 18,
"text": "S"
},
{
"math_id": 19,
"text": "W"
},
{
"math_id": 20,
"text": "\n \\Delta S = \\Delta W - E\\Delta t\n"
},
{
"math_id": 21,
"text": "\\Delta t = t_2 - t_1"
},
{
"math_id": 22,
"text": "\n \\delta(t_2 - t_1) = 0.\n"
},
{
"math_id": 23,
"text": "\\mathbf{q}(t)"
},
{
"math_id": 24,
"text": "\n (\\delta \\mathcal{S})_{\\Delta t} = 0,\\ \\mathrm{where}\\ \\mathcal{S}[\\mathbf{q}]\\ \\stackrel{\\mathrm{def}}{=}\\ \\int_{t_1}^{t_2} L(\\mathbf{q}(t), \\dot{\\mathbf{q}}(t), t) \\,dt.\n"
},
{
"math_id": 25,
"text": "\\mathbf{q}(t_1)"
},
{
"math_id": 26,
"text": "\\mathbf{q}(t_2)"
},
{
"math_id": 27,
"text": "L = T - V"
},
{
"math_id": 28,
"text": "\\psi(x_k, t)"
},
{
"math_id": 29,
"text": "x_k"
},
{
"math_id": 30,
"text": "t"
},
{
"math_id": 31,
"text": "\\psi(x_{k+1}, t + \\varepsilon) = \\frac{1}{A} \\int e^{iS(x_{k+1}, x_k)/\\hbar} \\psi(x_k, t) \\,dx_k,"
},
{
"math_id": 32,
"text": "S(x_{k+1}, x_k)"
},
{
"math_id": 33,
"text": "e^{iS/\\hbar}"
},
{
"math_id": 34,
"text": "S/\\hbar"
},
{
"math_id": 35,
"text": "\\hbar"
},
{
"math_id": 36,
"text": "S/\\hbar \\gg 1"
},
{
"math_id": 37,
"text": "\\pi"
},
{
"math_id": 38,
"text": "(q_\\text{f}|q_\\text{i})"
},
{
"math_id": 39,
"text": "\\delta(q_{r_\\text{f}}|q_{r_\\text{i}}) = i(q_{r_\\text{f}}|\\delta S|q_{r_\\text{i}}),"
},
{
"math_id": 40,
"text": "S = \\int_{t_\\text{i}}^{t_\\text{f}} L \\,dt."
},
{
"math_id": 41,
"text": "S = \\frac{1}{2\\kappa} \\int R \\sqrt{-g} \\, \\mathrm{d}^4x,"
},
{
"math_id": 42,
"text": "\\sqrt{-g} \\, \\mathrm{d}^4x"
},
{
"math_id": 43,
"text": "R"
},
{
"math_id": 44,
"text": "\\kappa"
}
]
| https://en.wikipedia.org/wiki?curid=76122969 |
76126836 | Hellings-Downs curve | Gravitational wave detection tool
The Hellings-Downs curve (also known as the Hellings and Downs curve) is a theoretical tool used to establish the telltale signature that a galactic-scale pulsar timing array has detected gravitational waves, typically of wavelengths formula_0. The method entails searching for spatial correlations of the timing residuals from pairs of pulsars and comparing the data with the Hellings-Downs curve. When the data fit exceeds the standard "5 sigma" threshold, the pulsar timing array can declare detection of gravitational waves. More precisely, the Hellings-Downs curve is the expected correlations of the timing residuals from pairs of pulsars as a function of their angular separation on the sky as seen from Earth. This theoretical correlation function assumes Einstein's general relativity and a gravitational wave background that is isotropic.
Pulsar timing array residuals.
Albert Einstein's theory of general relativity predicts that a mass will deform spacetime causing gravitational waves to emanate outward from the source. These gravitational waves will affect the travel time of any light that interacts with them. A pulsar timing residual is the difference between the expected time of arrival and the observed time of arrival of light from pulsars. Because pulsars flash with such a consistent rhythm, it is hypothesised that if a gravitational wave is present, a specific pattern may be observed in the timing residuals from pairs of pulsars. The Hellings-Downs curve is used to infer the presence of gravitational waves by finding patterns of angular correlations in the timing residual data of different pulsar pairings. More precisely, the expected correlations on the vertical axis of the Hellings-Downs curve are the expected values of pulsar-pairs correlations averaged over all pulsar-pairs with the same angular separation and over gravitational-wave sources very far away with noninterfering random phases. Pulsar timing residuals are measured using pulsar timing arrays.
History.
Not long after the first suggestions in the late 1970s that pulsars could be used for gravitational wave detection, Donald Backer discovered the first millisecond pulsar in 1982. The following year Ron Hellings and George Downs published the foundations of the Hellings-Downs curve in their 1983 paper "Upper Limits on the Isotropic Gravitational Radiation Background from Pulsar Timing Analysis". Donald Backer would later go on to become one of the founders of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).
Examples in the scientific literature.
In 2023, NANOGrav used pulsar timing array data collected over 15 years in their latest publications supporting the existence of a gravitational wave background. A total of 2,211 millisecond pulsar pair combinations (67 individual pulsars) were used by the NANOGrav team to construct their Hellings-Downs plot comparison. The NANOGrav team wrote that "The observation of Hellings–Downs correlations points to the gravitational-wave origin of this signal." The Hellings-Downs curve has also been referred to as the "smoking gun" or "fingerprint" of the gravitational-wave background. These examples highlight the critical role that the Hellings-Downs curve plays in contemporary gravitational wave research.
Equation of the Hellings-Downs curve.
Reardon et al. (2023) from the Parkes pulsar timing array team give the following equation for the Hellings-Downs curve, which in the literature is also called the overlap reduction function:
formula_1
where:
formula_2,
formula_3 is the Kronecker delta function
formula_4 represents the angle of separation between the two pulsars formula_5 and formula_6 as seen from Earth
formula_7 is the expected angular correlation function.
This curve assumes an isotropic gravitational wave background that obeys Einstein's general relativity. It is valid for "long-arm" detectors like pulsar timing arrays, where the wavelengths of typical gravitational waves are much shorter than the "long-arm" distance between Earth and typical pulsars.
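The curve is straightforward to evaluate numerically. The following sketch (an illustration written from the equation above, not code from any pulsar timing collaboration) computes the expected correlation as a function of angular separation:

```python
import numpy as np

def hellings_downs(zeta_rad, same_pulsar=False):
    """Expected correlation Gamma_ab for angular separation zeta (radians)."""
    x = (1.0 - np.cos(zeta_rad)) / 2.0
    # x*log(x) -> 0 as x -> 0; guard against log(0) for coincident sky positions.
    xlogx = np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0)
    gamma = 0.5 - x / 4.0 + 1.5 * xlogx
    if same_pulsar:          # Kronecker delta term when a pulsar is paired with itself
        gamma += 0.5
    return gamma

angles = np.deg2rad([0.0, 30.0, 60.0, 90.0, 120.0, 180.0])
print(hellings_downs(angles))  # dips negative at intermediate separations, rises again toward 180 degrees
```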
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\lambda=1-10ly)"
},
{
"math_id": 1,
"text": "\\Gamma_{ab}=\\frac{1}{2}\\delta_{ab}+\\frac{1}{2}-\\frac{x_{ab}}{4}+\\frac{3}{2}x_{ab}lnx_{ab}"
},
{
"math_id": 2,
"text": "x_{ab}=(1-\\cos\\zeta_{ab})/2"
},
{
"math_id": 3,
"text": "\\delta_{ab}"
},
{
"math_id": 4,
"text": "\\zeta_{ab}"
},
{
"math_id": 5,
"text": "{a}"
},
{
"math_id": 6,
"text": "{b}"
},
{
"math_id": 7,
"text": "\\Gamma_{ab}"
}
]
| https://en.wikipedia.org/wiki?curid=76126836 |
76139545 | Anna Nevius | American biostatistician
Anna Bruce Nevius is a retired American biostatistician who worked for many years in the Center for Veterinary Medicine of the US Food and Drug Administration, and chaired the Biopharmaceutical Section of the American Statistical Association.
Early life and education.
Nevius is originally from North Carolina, and graduated from high school in 1961 in Murphy, North Carolina. As a mathematics major at Carson–Newman College in Tennessee, she discovered her interest in statistics by taking a course in the subject in her senior year.
She began a doctoral program in statistics at Kansas State University, but after marrying fellow statistics student S. Edward Nevius in 1968, she left with a master's degree to follow her husband to Alaska, where he was posted from 1968 to 1970 by the United States Public Health Service. In the early 1970s, she and her husband moved to Florida, where he earned a doctorate in statistics from Florida State University.
From 1975 to 1978 she worked as an instructor and statistical consultant in mathematics and statistics at the University of Nebraska, where her husband had a tenure-track faculty position. After her husband moved to the Food and Drug Administration in 1979, she returned to graduate study at the University of Maryland, College Park,
and completed a Ph.D. in applied statistics in 1984. Her dissertation, "Techniques for Combining formula_0 Contingency Tables: A Simulation Study", was supervised by C. Mitchell Dayton.
Career and later life.
Nevius entered government service working for the Internal Revenue Service, but soon shifted to the Center for Veterinary Medicine, where she remained for the rest of her career. She chaired the Biopharmaceutical Section of the American Statistical Association for 2009. She retired three times and was rehired twice; after her second retirement, in 2022, and her husband's death at the end of 2022, she returned to the center again in 2023 as a rehired annuitant.
Recognition.
Nevius was elected as a Fellow of the American Statistical Association in 2012.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\times 2"
}
]
| https://en.wikipedia.org/wiki?curid=76139545 |
76142154 | Vibrational spectroscopic map | Vibrational spectroscopic map
Vibrational spectroscopic maps are a series of ab initio, semiempirical, or empirical models tailored to specific IR probes to describe vibrational solvatochromic effects on molecular spectra quantitatively.
Coherent multidimensional spectroscopy, a nonlinear spectroscopy utilizing multiple time-delayed pulses, is a technique that enables the measurement of solvation-induced frequency shifts and the time-correlations of the fluctuating frequencies. Researchers employ various organic and biochemical methods to introduce small vibrational probes into a variety of molecular systems, such as chemicals, proteins, and nucleic acids. These probes, labeled with infrared (IR) markers, are subjected to spectroscopic investigation to obtain quantitative insights into various features of chemical and biological systems. In general, interpreting the experimental multidimensional spectra to get information on the underlying molecular processes requires theoretical modeling.
The vibrational frequency shifts observed due to complex intermolecular interactions of small IR probes with surroundings in the condensed phase are minute, often representing fractions of thermal energy. The numerical accuracy associated with advanced quantum mechanical calculations is not sufficient to accurately model these shifts. Consequently, researchers commonly resort to mapping procedures, which correlate certain physical variables calculated for the probe molecule with spectroscopic properties such as vibrational frequencies. These mapping procedures are referred to as vibrational spectroscopic maps within the field.
Typically, the physical variables employed in vibrational frequency maps include electric potentials, electric fields, distributed higher multipole moments, and other relevant factors evaluated at specific points surrounding the molecule.
As an example, the vibrational frequency associated with a localized vibrational mode is correlated with the electrostatic potential and electric field values at a designated set of points known as distributed sites within the infrared (IR) chromophore.
Theoretical foundation.
The vibrational frequency shift, denoted as formula_0, for the "j"th normal mode of a given probe molecule is defined as the difference between the actual vibrational frequency formula_1 of the mode in a solution and the frequency formula_2 in the gas phase.
formula_3
From an effective Hamiltonian for the solute in the presence of molecular environment, one can derive the effective vibrational force constant (or Hessian) matrix approximately as follows:
formula_4
where the subscript 0 means the quantity is evaluated at the gas-phase geometry.
In the limiting case that the vibrational couplings of the normal mode of interest with other vibrational modes are relatively weak, the vibrational frequency shift in solution relative to the gas-phase frequency under such a weak coupling approximation (WCA) is given by
formula_5
Here, formula_6 and formula_7 are the electric anharmonicity (EA) and mechanical anharmonicity (MA) operators, respectively. These operators are defined as
formula_8
and
formula_9
By substituting a relevant expression for the intermolecular interaction potential into the WCA expression for formula_10, one can derive the vibrational frequency shift based on the specific theoretical potential model under consideration.
Semiempirical approaches.
While several rigorous theories for vibrational solvatochromism based on physical approximations have been proposed, these sophisticated models often necessitate extensive quantum chemistry calculations performed at elevated levels of precision with a large basis set. Current electronic structure simulation methods fall short in providing vibrational frequencies directly comparable to experimentally measured frequency shifts, especially when they are on the order of a few wavenumbers.
To accurately calculate coefficients in vibrational solvatochromism expressions, researchers frequently turn to multivariate least-squares fitting. This technique involves fitting a sufficiently extensive set of training data obtained from quantum chemistry calculations of vibrational frequency shifts for numerous clusters containing a solute and multiple solvent molecules.
An early approach aimed to express the solvation-induced vibrational frequency shift in terms of the solvent electric potentials evaluated at distributed atomic sites on the target solute molecule. This method involves calculating the solvent electric potentials at these specific solute sites through the utilization of atomic partial charges from surrounding solvent molecules. The vibrational frequency shift of the solute molecule, denoted as formula_11, for the "j"th vibrational mode can be represented as
formula_12
Here, formula_13 represents the vibrational frequency of the "j"th normal mode in solution, formula_14 signifies the vibrational frequency in the gas phase, "N" denotes the number of distributed sites on the solute molecule, formula_15 denotes the solvent electric potential at the kth site of the solute molecule, and formula_16 are the parameters to be determined through least-square fitting to a training database comprising clusters containing a solute and multiple solvent molecules. This method provides a means to quantify the impact of solvation on the vibrational frequencies of the solute molecule.
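As an illustration of how such a map is evaluated in practice, the sketch below computes the site potentials generated by a set of solvent point charges and contracts them with map coefficients. All numerical values (charges, coordinates, and the coefficients formula_16) are placeholders invented for the example, not parameters of any published map:

```python
import numpy as np

# Hypothetical map: frequency shift = sum_k b_k * phi_k, with phi_k the solvent
# electric potential at the k-th solute site (schematic units throughout).
b = np.array([0.8, -1.2, 0.4])                # placeholder map coefficients b_jk
solute_sites = np.array([[0.0, 0.0, 0.0],     # placeholder distributed-site coordinates
                         [1.2, 0.0, 0.0],
                         [2.3, 0.0, 0.0]])

solvent_charges = np.array([-0.8, 0.4, 0.4])  # e.g. partial charges of one solvent molecule
solvent_coords = np.array([[3.5, 1.0, 0.0],
                           [4.1, 1.6, 0.0],
                           [3.9, 0.2, 0.0]])

def site_potentials(sites, charges, coords):
    """Coulomb potential phi_k = sum_a q_a / r_ak at each solute site."""
    r = np.linalg.norm(sites[:, None, :] - coords[None, :, :], axis=-1)
    return (charges[None, :] / r).sum(axis=1)

phi = site_potentials(solute_sites, solvent_charges, solvent_coords)
delta_omega = float(b @ phi)                  # frequency shift in the map's units
print(delta_omega)
```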
Another widely used model for characterizing vibrational solvatochromic frequency shifts involves expressing the frequency shift in terms of solvent electric fields evaluated at distributed sites on the target solute molecule.
Developments.
Vibrational spectroscopic maps have been developed for a diverse range of vibrational modes, including various molecular systems and functional groups. Some of the notable vibrational modes include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta\\omega_j"
},
{
"math_id": 1,
"text": "\\omega_j"
},
{
"math_id": 2,
"text": "\\omega_{j,0}"
},
{
"math_id": 3,
"text": "\\Delta\\omega\\equiv\\omega_j-\\omega_{j,0}"
},
{
"math_id": 4,
"text": "k_{jk}\\approx M_j\\omega_j^2\\delta_{jk}+\\frac{\\partial^2U(\\mathbf Q)}{\\partial Q_j\\partial Q_k} \\bigg\\vert_{0}\n- \\sum_i \\frac{q_{ijk}}{M_i\\omega_i^2} \n\\frac{\\partial U(Q)}{\\partial Q_i} \\bigg\\vert_0\n"
},
{
"math_id": 5,
"text": "\\Delta\\omega_j^{WCA}=[\\hat{F}_j^{EA}+\\hat{F}_j^{MA}]U(\\mathbf Q)\\bigg\\vert_0 "
},
{
"math_id": 6,
"text": "\\hat{F}_j^{EA}\n"
},
{
"math_id": 7,
"text": "\\hat{F}_j^{MA}\n"
},
{
"math_id": 8,
"text": "\\hat{F}_j^{EA}=\\frac{1}{2M_j\\omega_j} \\frac{\\partial^2}{\\partial Q_j^2} \n"
},
{
"math_id": 9,
"text": "\\hat{F}_j^{MA}=-\\frac{1}{2M_j\\omega_j}\\sum_i \\frac{g_{ijj}}{M_i\\omega_i^2} \\frac{\\partial}{\\partial Q_i} "
},
{
"math_id": 10,
"text": "\\Delta\\omega_j^{WCA} "
},
{
"math_id": 11,
"text": "\\Delta\\omega_j(\\mathbf Q) "
},
{
"math_id": 12,
"text": "\\Delta\\omega_j(\\mathbf Q)=\\omega_j(\\mathbf Q)-\\omega_{j0}=\\sum_{k=1}^N b_{jk}\\phi_k(\\mathbf Q) "
},
{
"math_id": 13,
"text": "\\omega_j(\\mathbf Q) "
},
{
"math_id": 14,
"text": "\\omega_{j0} "
},
{
"math_id": 15,
"text": "\\phi_k(\\mathbf Q) "
},
{
"math_id": 16,
"text": "b_{jk} "
}
]
| https://en.wikipedia.org/wiki?curid=76142154 |
76142988 | Category of matrices | Basic definition and properties of the category of matrices
In mathematics, the category of matrices, often denoted formula_0, is the category whose objects are natural numbers and whose morphisms are matrices, with composition given by matrix multiplication.
Construction.
Let formula_1 be an formula_2 real matrix, i.e. a matrix with formula_3 rows and formula_4 columns.
Given a formula_5 matrix formula_6, we can form the matrix multiplication formula_7 or formula_8 only when formula_9, and in that case the resulting matrix is of dimension formula_10.
In other words, we can only multiply matrices formula_1 and formula_6 when the number of rows of formula_1 matches the number of columns of formula_6.
One can keep track of this fact by declaring an formula_2 matrix to be of type formula_11, and similarly a formula_5 matrix to be of type formula_12. This way, when formula_9 the two arrows have matching source and target, formula_13, and can hence be composed to an arrow of type formula_14.
This is precisely captured by the mathematical concept of a category, where the arrows, or morphisms, are the matrices, and they can be composed only when their domain and codomain are compatible (similar to what happens with functions). In detail, the category formula_15 is constructed as follows: its objects are the natural numbers; a morphism formula_17 is an formula_2 real matrix, i.e. a matrix with formula_3 rows and formula_4 columns; the composite of morphisms formula_17 and formula_18 (that is, of an formula_2 matrix formula_1 and a formula_19 matrix formula_6) is the matrix product formula_8, an arrow of type formula_14; and the identity morphism formula_24 is the formula_16 identity matrix.
More generally, one can define the category formula_20 of matrices over a fixed field formula_21, such as the one of complex numbers.
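The source-and-target bookkeeping described above can be mimicked directly in code. The following informal sketch (not part of any category-theory library) treats an formula_2 NumPy array as an arrow formula_11 and composes arrows only when their types match:

```python
import numpy as np

def compose(B, A):
    """Compose arrows A: m -> n and B: n -> p, i.e. matrices of shape (n, m) and (p, n)."""
    n, m = A.shape
    p, n2 = B.shape
    if n2 != n:
        raise TypeError(f"cannot compose: A has target {n}, B has source {n2}")
    return B @ A           # result has shape (p, m), an arrow m -> p

identity = np.eye          # the identity arrow n -> n is the n x n identity matrix

A = np.random.rand(4, 3)   # an arrow 3 -> 4
B = np.random.rand(2, 4)   # an arrow 4 -> 2
C = compose(B, A)          # an arrow 3 -> 2, i.e. a matrix of shape (2, 3)

assert np.allclose(compose(identity(4), A), A)   # identity laws
assert np.allclose(compose(A, identity(3)), A)
```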
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{Mat}"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "n\\times m"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "p\\times q"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "BA"
},
{
"math_id": 8,
"text": "B\\circ A"
},
{
"math_id": 9,
"text": "q=n"
},
{
"math_id": 10,
"text": "p\\times m"
},
{
"math_id": 11,
"text": "m\\to n"
},
{
"math_id": 12,
"text": "q\\to p"
},
{
"math_id": 13,
"text": "m\\to n\\to p"
},
{
"math_id": 14,
"text": "m\\to p"
},
{
"math_id": 15,
"text": "\\mathbf{Mat}_\\mathbb{R}"
},
{
"math_id": 16,
"text": "n\\times n"
},
{
"math_id": 17,
"text": "A:m\\to n"
},
{
"math_id": 18,
"text": "B:n\\to p"
},
{
"math_id": 19,
"text": "p\\times n"
},
{
"math_id": 20,
"text": "\\mathbf{Mat}_\\mathbb{F}"
},
{
"math_id": 21,
"text": "\\mathbb{F}"
},
{
"math_id": 22,
"text": "\\mathbb{R}^n"
},
{
"math_id": 23,
"text": "\\mathbb{R}^m\\to\\mathbb{R}^n"
},
{
"math_id": 24,
"text": "n\\to n"
}
]
| https://en.wikipedia.org/wiki?curid=76142988 |
76149757 | Vibrational solvatochromism | Molecular changes due to solvent changes
Vibrational solvatochromism refers to changes in the vibrational frequencies of molecules due to variations in the solvent environment. Solvatochromism is a broader term that describes changes in the electronic or vibrational properties of a molecule in response to changes in the solvent polarity or composition. In the context of vibrational solvatochromism, researchers study how the vibrational spectra of a molecule, which represent the different vibrational modes of its chemical bonds, are influenced by the properties of the solvent.
Understanding vibrational solvatochromism helps researchers to characterize molecular environments and study molecular dynamics in different solvents and biological environments.
Dielectric continuum model.
By considering the intermolecular interaction of the solute molecule with a dielectric continuum solvent, one can obtain a general relationship between the vibrational frequency and intermolecular interaction potential. This relationship is given by the sum of three contributions: (1) a Coulombic term describing the interaction between the permanent dipole moment of the molecule and the electric field, (2) an induction term describing the interaction with the induced dipole moment, and (3) an electric-field correction term which arises from the change of the electric field along the normal coordinate of the vibration. When only the terms linear in the Onsager reaction field are considered, the frequency shift for the "j"th normal mode can be given as
formula_0
where formula_1 and formula_2 are the effective gas-phase and solvent-induced vibrational dipole moment, respectively. Despite the limited validity due to the approximate nature of the dielectric continuum solvent model, researchers still often use this theory for vibrational solvatochromism, especially when a more refined model is challenging to implement.
Electrostatic Effect: Distributed Multipole Analysis.
The solvent electric field experienced by a given solute molecule in solution is highly nonuniform in space. For a realistic description of vibrational solvatochromism, one should consider the local electric potential created by surrounding solvent molecules. Assuming that the solute-solvent intermolecular interaction potential can be fully described by the distributed charges, dipoles, and high-order multipoles interacting with solvent electric potential and its gradients, it was shown that the vibrational solvatochromic frequency shift is given as
formula_3
Here, the vibrational solvatochromic charge (formula_4), dipole (formula_5), quadrupole (formula_6), and octupole (formula_7) terms can be determined using any distributed multipole expansion method. The above equation can be interpreted as a type of vibrational spectroscopic map.
Quantum chemistry calculations conducted for various IR probes have revealed that terms up to vibrational solvatochromic quadrupoles are essential for adequately describing the vibrational frequency shift.
Electrostatic Effect: Semiempirical Approaches.
The vibrational frequency shift, denoted as formula_8, for the "j"th normal mode is defined as the difference between the actual vibrational frequency formula_9 of the mode in a solution and the frequency formula_10 in the gas phase.
An early approach aimed to express the solvation-induced vibrational frequency shift in terms of the solvent electric potentials evaluated at distributed atomic sites on the target solute molecule. This method involves calculating the solvent electric potentials at these specific solute sites through the utilization of atomic partial charges from surrounding solvent molecules. The vibrational frequency shift of the solute molecule, denoted as formula_11, for the "j"th vibrational mode with an atomic configuration formula_12 of the solvent molecules can be represented as
formula_13
Here, formula_14 represents the vibrational frequency of the "j"th normal mode in solution, formula_15 signifies the vibrational frequency in the gas phase, formula_16 denotes the number of distributed sites on the solute molecule, formula_17 denotes the solvent electric potential at the "k"th site of the solute molecule, and formula_18 are the parameters to be determined through least-square fitting to a training database comprising clusters containing a solute and multiple solvent molecules. This method provides a means to quantify the impact of solvation on the vibrational frequencies of the solute molecule.
Another widely used model for characterizing vibrational solvatochromic frequency shifts involves expressing the frequency shift in terms of solvent electric fields evaluated at distributed sites on the target solute molecule. This model is represented by the equation:
formula_19
Here, formula_20 is the "m"th Cartesian component of the solvent electric field at the "k"th site on the solute molecule, and formula_21 represent parameters to be determined through least-square fitting to a training database of clusters containing a solute and multiple solvent molecules. This approach provides a framework for quantifying the influence of solvent electric fields on the vibrational frequencies of the solute molecule.
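For the field-based form, the evaluation reduces to a contraction of the map coefficients formula_21 with the Cartesian components of the solvent electric field at each distributed site. A minimal sketch with invented placeholder numbers (not values from any published map):

```python
import numpy as np

# Hypothetical field map: delta_omega_j = sum_{k,m} mu_jk^m * E_k^m, i.e. an
# element-wise product of map coefficients and solvent electric fields at N sites.
mu = np.array([[ 2.0,  0.0, 0.0],        # placeholder coefficients, one row per site k
               [-1.5,  0.5, 0.0]])
E  = np.array([[ 0.010,  0.002, 0.000],  # placeholder solvent field components at the sites
               [ 0.004, -0.001, 0.000]])

delta_omega = float(np.sum(mu * E))      # frequency shift in the map's units
print(delta_omega)
```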
General solute-solvent interaction effects.
Buckingham developed the general theory describing the vibrational frequency shifts of a spatially localized normal mode in solution based on the intermolecular interaction potential. Cho later generalized this theory to any arbitrary normal mode. Solvation-induced vibrational frequencies and the resulting new set of normal modes of the solute molecule in solution can be directly obtained by diagonalizing the Hessian matrix derived from an effective Hamiltonian for the solute in the presence of a molecular environment. In the limiting case that the vibrational couplings of the normal mode of interest with other vibrational modes are relatively weak, the vibrational frequency shift is given by
formula_22
where formula_23 and formula_24 are the electric anharmonicity (EA) and mechanical anharmonicity (MA), respectively, defined as
formula_25
and
formula_26
where formula_27 is the cubic anharmonic constant. There are cases in which the weak coupling approximation is not acceptable, for example when normal modes are coupled and delocalized. In those cases, an additional term describing the mode-coupling contribution to the frequency shift should be included.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta\\omega_j=[-{\\mathbf \\mu}_j-\\frac{1}{2}{\\mathbf \\mu}_j^{Ind}]\\cdot{\\mathbf E}^{Ons}+\\Delta\\omega_j({\\mathbf E}^{Ons})"
},
{
"math_id": 1,
"text": "{\\mathbf \\mu}_j"
},
{
"math_id": 2,
"text": "{\\mathbf \\mu}_j^{Ind}"
},
{
"math_id": 3,
"text": "\\Delta\\omega_j=\\sum_x \\{l_{x,j}\\phi_x+{\\mathbf \\mu}_{x,j}\\cdot\\nabla\\phi_x+\n\\frac{1}{3} \\Theta_{x,j}:\\nabla\\otimes\\nabla\\phi_x\n+\\frac{1}{15}\\Omega_{x,j}:\\nabla\\otimes\\nabla\\otimes\\nabla\\phi_x+\\cdots\\}"
},
{
"math_id": 4,
"text": "l_{x,j}"
},
{
"math_id": 5,
"text": "{\\mathbf \\mu}_{x,j}"
},
{
"math_id": 6,
"text": "\\Theta_{x,j}"
},
{
"math_id": 7,
"text": "\\Omega_{x,j}"
},
{
"math_id": 8,
"text": "\\Delta\\omega_j"
},
{
"math_id": 9,
"text": "\\omega_j"
},
{
"math_id": 10,
"text": "\\omega_{j,0}"
},
{
"math_id": 11,
"text": "\\Delta\\omega_j({\\mathbf Q})"
},
{
"math_id": 12,
"text": "{\\mathbf Q}"
},
{
"math_id": 13,
"text": "\\Delta\\omega_j({\\mathbf Q})=\\omega_j({\\mathbf Q})-\\omega_{j0}=\\sum_{k=1}^N b_{jk}\\phi_k({\\mathbf Q})"
},
{
"math_id": 14,
"text": "\\omega_j({\\mathbf Q})"
},
{
"math_id": 15,
"text": "\\omega_{j0}"
},
{
"math_id": 16,
"text": "N"
},
{
"math_id": 17,
"text": "\\phi_k({\\mathbf Q})"
},
{
"math_id": 18,
"text": "b_{jk}"
},
{
"math_id": 19,
"text": "\\Delta\\omega_j({\\mathbf Q})=\\sum_{m=x,y,z}\\sum_{k=1}^N\\mu_{jk}^m\nE_k^m({\\mathbf Q})"
},
{
"math_id": 20,
"text": "E_k^m({\\mathbf Q})"
},
{
"math_id": 21,
"text": "\\mu_{jk}^m"
},
{
"math_id": 22,
"text": "\\Delta\\omega_j^{WCA}=[\\hat{F}_j^{EA}+\\hat{F}_j^{MA}]U(\\mathbf Q)\\bigg\\vert_0\n "
},
{
"math_id": 23,
"text": "\\hat{F}_j^{EA} "
},
{
"math_id": 24,
"text": "\\hat{F}_j^{MA} "
},
{
"math_id": 25,
"text": "\\hat{F}_j^{EA}=\\frac{1}{2M_j\\omega_j} \\frac{\\partial^2}{\\partial Q_j^2}\n "
},
{
"math_id": 26,
"text": "\\hat{F}_j^{MA}=-\\frac{1}{2M_j\\omega_j}\\sum_i \\frac{g_{ijj}}{M_i\\omega_i^2} \\frac{\\partial}{\\partial Q_i} "
},
{
"math_id": 27,
"text": "g_{ijj}"
}
]
| https://en.wikipedia.org/wiki?curid=76149757 |
76155246 | Taylor circle | Special circle in geometry
The Taylor circle is a special circle associated with a triangle. It is named after Henry Martyn Taylor (1842–1927), who discussed it in 1882. However, it had already been considered by Eugène Charles Catalan in 1879 and was first proposed by the French mathematician Eutaris in 1877.
Consider the feet of the three altitudes of a triangle. For each foot of an altitude, construct the perpendiculars onto the other two sides of the triangle. The intersections of those perpendiculars with the triangle's sides yield six points, and those six points lie on a common circle, the Taylor circle.
The radius of the Taylor circle can be computed from the three angles of the triangle and the radius of its circumcircle by the following formula:
formula_0
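For a numerical check, the radius can be evaluated directly from the three angles; the following short sketch uses arbitrary example values:

```python
import math

def taylor_circle_radius(alpha, beta, gamma, R=1.0):
    """Radius of the Taylor circle from the triangle's angles (radians) and circumradius R."""
    s = math.sin(alpha) * math.sin(beta) * math.sin(gamma)
    c = math.cos(alpha) * math.cos(beta) * math.cos(gamma)
    return R * math.sqrt(s**2 + c**2)

# Equilateral triangle (60-60-60 degrees) with unit circumradius:
print(taylor_circle_radius(math.pi/3, math.pi/3, math.pi/3))  # about 0.661
```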
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_T = R\\sqrt{\\sin^2\\alpha\\sin^2\\beta\\sin^2\\gamma+\\cos^2\\alpha\\cos^2\\beta\\cos^2\\gamma}"
}
]
| https://en.wikipedia.org/wiki?curid=76155246 |
76155257 | Bayes correlated equilibrium | Game theory solution
In game theory, a Bayes correlated equilibrium is a solution concept for static games of incomplete information. It is both a generalization of the correlated equilibrium solution concept for complete-information games to Bayesian games, and a broader solution concept than the usual Bayesian Nash equilibrium thereof. Additionally, it can be seen as a generalized multi-player solution of the Bayesian persuasion information design problem.
Intuitively, a Bayes correlated equilibrium allows for players to correlate their actions in a way such that no player has an incentive to deviate for every possible type they may have. It was first proposed by Dirk Bergemann and Stephen Morris.
Formal definition.
Preliminaries.
Let formula_0 be a set of players, and formula_1 a set of possible states of the world. A "game" is defined as a tuple formula_2, where formula_3 is the set of possible actions (with formula_4) and formula_5 is the utility function for each player, and formula_6 is a full support common prior over the states of the world.
An "information structure" is defined as a tuple formula_7, where formula_8 is a set of possible signals (or types) each player can receive (with formula_9), and formula_10 is a signal distribution function, informing the probability formula_11 of observing the joint signal formula_12 when the state of the world is formula_13.
By joining those two definitions, one can define formula_14 as an incomplete information game. A "decision rule" for the incomplete information game formula_14 is a mapping formula_15. Intuitively, the value of decision rule formula_16 can be thought of as a joint recommendation for players to play the joint mixed strategy formula_17 when the joint signal received is formula_12 and the state of the world is formula_13.
Definition.
A Bayes correlated equilibrium (BCE) is defined to be a decision rule formula_18 which is obedient: that is, one where no player has an incentive to unilaterally deviate from the recommended joint strategy, whatever their type. Formally, decision rule formula_18 is obedient (and a Bayes correlated equilibrium) for game formula_19 if, for every player formula_20, every signal formula_21 and every action formula_22, we have
formula_23
formula_24
for all formula_25.
That is, every player obtains a higher expected payoff by following the recommendation from the decision rule than by deviating to any other possible action.
Relation to other concepts.
Bayesian Nash equilibrium.
Every Bayesian Nash equilibrium (BNE) of an incomplete information game can be thought of as a BCE, where the recommended joint strategy is simply the equilibrium joint strategy.
Formally, let formula_14 be an incomplete information game, and let formula_26 be an equilibrium joint strategy, with each player formula_27 playing formula_28. Therefore, the definition of BNE implies that, for every formula_20, formula_21 and formula_22 such that formula_29, we have
formula_30
formula_31
for every formula_25.
If we define the decision rule formula_18 on formula_32 as formula_33 for all formula_12 and formula_13, we directly get a BCE.
Correlated equilibrium.
If there is no uncertainty about the state of the world (e.g., if formula_1 is a singleton), then the definition collapses to Aumann's correlated equilibrium solution. In this case, formula_34 is a BCE if, for every formula_20, we have
formula_35
for every formula_25, which is equivalent to the definition of a correlated equilibrium for such a setting.
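The obedience condition in this degenerate case can be checked numerically. The following sketch is purely illustrative: the game-of-chicken payoffs and the recommendation distribution are the usual textbook example, not values taken from the references here.

```python
import numpy as np

# Game of chicken: action 0 = "dare", action 1 = "swerve".
u = {0: np.array([[0.0, 7.0],    # player 0's payoffs u0[a0, a1]
                  [2.0, 6.0]]),
     1: np.array([[0.0, 2.0],    # player 1's payoffs u1[a0, a1]
                  [7.0, 6.0]])}

# Candidate correlated recommendation sigma(a0, a1): never recommend (dare, dare).
sigma = np.array([[0.0, 1/3],
                  [1/3, 1/3]])

def obedient(sigma, u):
    for i in (0, 1):                             # each player
        for rec in (0, 1):                       # recommended action for player i
            for dev in (0, 1):                   # candidate deviation
                gain = 0.0
                for a_other in (0, 1):           # opponent's recommended action
                    joint = (rec, a_other) if i == 0 else (a_other, rec)
                    dev_joint = (dev, a_other) if i == 0 else (a_other, dev)
                    gain += sigma[joint] * (u[i][dev_joint] - u[i][joint])
                if gain > 1e-12:                 # profitable deviation found
                    return False
    return True

print(obedient(sigma, u))   # True: the recommendation is a correlated equilibrium
```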
Bayesian persuasion.
Additionally, the problem of designing a BCE can be thought of as a multi-player generalization of the Bayesian persuasion problem from Emir Kamenica and Matthew Gentzkow. More specifically, let formula_36 be the information designer's objective function. Then her "ex-ante" expected utility from a BCE decision rule formula_18 is given by:
formula_37
If the set of players formula_0 is a singleton, then choosing an information structure to maximize formula_38 is equivalent to a Bayesian persuasion problem, where the information designer is called a Sender and the player is called a Receiver.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I"
},
{
"math_id": 1,
"text": "\\Theta"
},
{
"math_id": 2,
"text": "G = \\langle (A_i, u_i)_{i \\in I}, \\Theta, \\psi \\rangle"
},
{
"math_id": 3,
"text": "A_i"
},
{
"math_id": 4,
"text": "A = \\prod_{i \\in I} A_i"
},
{
"math_id": 5,
"text": "u_i : A\\times \\Theta \\rightarrow \\mathbb{R}"
},
{
"math_id": 6,
"text": " \\psi \\in \\Delta_{++} (\\Theta)"
},
{
"math_id": 7,
"text": "S = \\langle (T_i)_{i \\in I}, \\pi \\rangle"
},
{
"math_id": 8,
"text": "T_i"
},
{
"math_id": 9,
"text": "T = \\prod_{i \\in I} T_i"
},
{
"math_id": 10,
"text": "\\pi : \\Theta \\rightarrow \\Delta (T)"
},
{
"math_id": 11,
"text": "\\pi (t | \\theta)"
},
{
"math_id": 12,
"text": "t \\in T"
},
{
"math_id": 13,
"text": "\\theta \\in \\Theta"
},
{
"math_id": 14,
"text": "\\Gamma = (G, S)"
},
{
"math_id": 15,
"text": "\\sigma: T \\times \\Theta \\rightarrow \\Delta (A)"
},
{
"math_id": 16,
"text": "\\sigma (a | t, \\theta)"
},
{
"math_id": 17,
"text": "\\sigma (a) \\in \\Delta(A)"
},
{
"math_id": 18,
"text": "\\sigma"
},
{
"math_id": 19,
"text": " \\Gamma = (G, S)"
},
{
"math_id": 20,
"text": "i \\in I"
},
{
"math_id": 21,
"text": "t_i \\in T_i"
},
{
"math_id": 22,
"text": "a_i \\in A_i"
},
{
"math_id": 23,
"text": "\\sum_{a_{-i}, t_{-i}, \\theta} \\psi (\\theta) \\pi (t_i, t_{-i} | \\theta) \\sigma (a_i, a_{-i} | t_i, t_{-i} ,\\theta) u_i(a_i, a_{-i}, \\theta) "
},
{
"math_id": 24,
"text": " \\geq \\sum_{a_{-i}, t_{-i}, \\theta} \\psi (\\theta) \\pi (t_i, t_{-i} | \\theta) \\sigma (a_i, a_{-i} | t_i, t_{-i} ,\\theta) u_i(a'_i, a_{-i}, \\theta) "
},
{
"math_id": 25,
"text": "a'_i \\in A_i"
},
{
"math_id": 26,
"text": "s : T \\rightarrow \\Delta(A)"
},
{
"math_id": 27,
"text": "i "
},
{
"math_id": 28,
"text": "s_i (a_i | t_i) \\in \\Delta (A_i) "
},
{
"math_id": 29,
"text": "s_i (a_i | t_i) > 0"
},
{
"math_id": 30,
"text": "\\sum_{a_{-i}, t_{-i}, \\theta} \\psi (\\theta) \\pi (t_i, t_{-i} | \\theta) \\left(\\prod_{j \\neq i} s_j (a_j | t_j) \\right) u_i(a_i, a_{-i}, \\theta) "
},
{
"math_id": 31,
"text": " \\geq \\sum_{a_{-i}, t_{-i}, \\theta} \\psi (\\theta) \\pi (t_i, t_{-i} | \\theta) \\left(\\prod_{j \\neq i} s_j (a_j | t_j) \\right) u_i(a'_i, a_{-i}, \\theta) "
},
{
"math_id": 32,
"text": "\\Gamma"
},
{
"math_id": 33,
"text": "\\sigma (a | t, \\theta) = s(a | t) = \\prod_{i} s_i (a_i | t_i)"
},
{
"math_id": 34,
"text": "\\sigma \\in \\Delta (A)"
},
{
"math_id": 35,
"text": "\\sum_{a_{-i} \\in A{-i}} \\sigma (a_i, a_{-i}) u_i(a_i, a_{-i}) \\geq \\sum_{a_{-i} \\in A{-i}} \\sigma (a_i, a_{-i}) u_i(a'_i, a_{-i}) "
},
{
"math_id": 36,
"text": "v : A \\times \\Theta \\rightarrow \\mathbb R"
},
{
"math_id": 37,
"text": "V(\\sigma) = \\sum_{a, t, \\theta} \\psi (\\theta) \\pi(t | \\theta) \\sigma (a | t, \\theta) v(a, \\theta) "
},
{
"math_id": 38,
"text": "V(\\sigma)"
}
]
| https://en.wikipedia.org/wiki?curid=76155257 |
76155381 | Leiden algorithm | Clustering and community detection algorithm
The Leiden algorithm is a community detection algorithm developed by Traag "et al"
at Leiden University. It was developed as a modification of the
Louvain method to address the issues with disconnected communities.
Graph components.
Before defining the Leiden algorithm, it will be helpful to define some of the components of a graph.
Vertices and edges.
A graph is composed of vertices (nodes) and edges. Each edge is connected to two vertices, and each vertex may be connected to zero or more edges. Edges are typically represented by straight lines, while nodes are represented by circles or points. In set notation, let formula_0 be the set of vertices, and formula_1 be the set of edges:
formula_2
where formula_3 is the directed edge from vertex formula_4 to vertex formula_5. We can also write this as an ordered pair:
formula_6
Community.
A community is a subset of the vertex set, and distinct communities are disjoint:
formula_7
and the union of all communities must be the total set of vertices:
formula_8
Partition.
A partition is the set of all communities:
formula_9
Quality.
Similar to modularity, the quality function is used to assess how well the communities have been allocated. The Leiden algorithm uses the Constant Potts Model (CPM):
formula_10
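As a concrete illustration (not code from the reference implementation), the CPM quality of a given partition can be computed directly from its definition, for example with NetworkX; the resolution parameter γ is a free choice:

```python
import networkx as nx

def cpm_quality(G, partition, gamma=0.25):
    """Constant Potts Model quality of a partition (an iterable of node sets) of graph G."""
    quality = 0.0
    for community in partition:
        n_c = len(community)
        internal_edges = G.subgraph(community).number_of_edges()  # |E(C, C)|
        quality += internal_edges - gamma * n_c * (n_c - 1) / 2   # minus gamma * binom(n_c, 2)
    return quality

G = nx.karate_club_graph()
singletons = [{v} for v in G]          # Step 1 below: every node in its own community
whole = [set(G)]                       # everything in one community
print(cpm_quality(G, singletons), cpm_quality(G, whole))
```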
Algorithm.
The Leiden algorithm is similar to that of the Louvain method, with some important modifications.
Step 1: First, each node in the network is assigned to its own community.
Step 2: Next, nodes are moved between communities: each node is moved to the community for which the gain in the quality function is largest (if any positive gain is possible), and the partition formula_11 is updated.
Step 3: Assign each node in the graph to its own community in a new partition called formula_12.
Step 4: The goal of this step is to separate poorly-connected communities: each node of formula_12 may only be merged with nodes belonging to the same community of formula_11, so that a community of formula_11 can end up split into several well-connected sub-communities in the refined partition formula_12.
Step 5: Use the refined partition formula_12 to aggregate the graph. Each community in formula_12 becomes a node in the new graph formula_13.
"Example": Suppose that we have:
formula_14
Then our new set of nodes will be:
formula_15
Step 6: Update the partition formula_11 using the aggregated graph. We keep the communities from partition formula_11, but the communities can be separated into multiple nodes from the refined partition formula_12:
formula_16
"Example": Suppose that formula_17 is a poorly-connected community from the partition formula_11:
formula_18
Then suppose during the refinement step, it was separated into two communities, formula_19 and formula_20:
formula_21
When we aggregate the graph, the new nodes will be:
formula_22
but we will keep the old partition:
formula_23
Step 7: Repeat Steps 2 - 6 until each community consists of only one node.
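In practice the algorithm is usually run via an existing implementation rather than re-coded from scratch. Assuming the python-igraph and leidenalg packages are available, a typical invocation looks roughly like the following (the graph and resolution value are arbitrary choices for illustration):

```python
import igraph as ig
import leidenalg

g = ig.Graph.Famous("Zachary")                 # a small benchmark graph
partition = leidenalg.find_partition(
    g,
    leidenalg.CPMVertexPartition,              # use the CPM quality function
    resolution_parameter=0.05,
)
print(partition.membership)                    # community index assigned to each vertex
```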
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "\n\\begin{align}\nV &:= \\{v_1, v_2, \\dots, v_n \\} \\\\\nE &:= \\{e_{ij}, e_{ik}, \\dots, e_{kl} \\}\n\\end{align}\n"
},
{
"math_id": 3,
"text": "e_{ij}"
},
{
"math_id": 4,
"text": "v_i"
},
{
"math_id": 5,
"text": "v_j"
},
{
"math_id": 6,
"text": "\n\\begin{align}\ne_{ij} &:= (v_i, v_j)\n\\end{align}\n"
},
{
"math_id": 7,
"text": "\n\\begin{align}\nC_i &\\subseteq V \\\\\nC_i &\\bigcap C_j = \\emptyset ~ \\forall ~ i \\neq j\n\\end{align}\n"
},
{
"math_id": 8,
"text": "\n\\begin{align}\nV &= \\bigcup_{i=1} C_i\n\\end{align}\n"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n\\mathcal{P} &= \\{C_1, C_2, \\dots, C_n \\}\n\\end{align}\n"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n\\mathcal{H}(G, \\mathcal{P}) &= \\sum_{C \\in \\mathcal{P}}\n|E(C, C)| - \\gamma \\binom{||C||}{2}\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\\mathcal{P}"
},
{
"math_id": 12,
"text": "\\mathcal{P}_{\\text{refined}}"
},
{
"math_id": 13,
"text": "G_{\\text{agg}}"
},
{
"math_id": 14,
"text": "\n\\begin{align}\nV &= \\{ v_1, v_2, v_3, v_4, v_5, v_6, v_7 \\} \\\\\nC_1 &= \\{ v_1, v_2, v_3, v_4 \\} \\\\\nC_2 &= \\{ v_5, v_6, v_7 \\} \\\\\n\\mathcal{P} &= \\{ C_1, C_2 \\} \\\\\n\\mathcal{P}_{\\text{refined}} &= \\{ C_{1a}, C_{1b}, C_2 \\}\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\n\\begin{align}\nV_{agg} &= \\{ C_{1a} \\mapsto w_{1a} , C_{1b} \\mapsto w_{1b}, C_2 \\mapsto w_{2} \\}\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\n\\begin{align}\n\\mathcal{P} &= \\{ \\{v ~ | ~ v \\subseteq C, v \\in V(G_{\\text{agg}})\\} ~ | ~ C \\in \\mathcal{P} \\}\n\\end{align}\n"
},
{
"math_id": 17,
"text": "C"
},
{
"math_id": 18,
"text": "\n\\begin{align}\nC &= \\{ v_1, v_2, v_3, v_4, v_5 \\} \\\\\n\\mathcal{P} &= \\{ C \\}\n\\end{align}\n"
},
{
"math_id": 19,
"text": "C_1"
},
{
"math_id": 20,
"text": "C_2"
},
{
"math_id": 21,
"text": "\n\\begin{align}\nC_1 &= \\{ v_1, v_2, v_3 \\} \\\\\nC_2 &= \\{ v_4, v_5 \\} \\\\\n\\mathcal{P}_{\\text{refined}} &= \\{ C_1, C_2 \\}\n\\end{align}\n"
},
{
"math_id": 22,
"text": "\n\\begin{align}\nV(G_{\\text{agg}}) &= \\{ C_1, C_2 \\}\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\n\\begin{align}\n\\mathcal{P} &= \\{ \\{ C_1, C_2 \\} \\}\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=76155381 |
7615996 | Monotone likelihood ratio | Statistical property
Figure: a monotonic likelihood ratio in distributions formula_0 and formula_1; the ratio of the two density functions is monotone in the parameter formula_2 so formula_3 satisfies the monotone likelihood ratio property.
In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions formula_0 and formula_1 bear the property if
formula_4
that is, if the ratio is nondecreasing in the argument formula_5.
If the functions are first-differentiable, the property may sometimes be stated
formula_6
For two distributions that satisfy the definition with respect to some argument formula_2 we say they "have the MLRP in formula_7" For a family of distributions that all satisfy the definition with respect to some statistic formula_8 we say they "have the MLR in formula_9"
Intuition.
The MLRP is used to represent a data-generating process that enjoys a straightforward relationship between the magnitude of some observed variable and the distribution it draws from. If formula_0 satisfies the MLRP with respect to formula_1, the higher the observed value formula_10, the more likely it was drawn from distribution formula_11 rather than formula_12 As usual for monotonic relationships, the likelihood ratio's monotonicity comes in handy in statistics, particularly when using maximum-likelihood estimation. Also, distribution families with MLR have a number of well-behaved stochastic properties, such as first-order stochastic dominance and increasing hazard ratios. Unfortunately, as is also usual, the strength of this assumption comes at the price of realism. Many processes in the world do not exhibit a monotonic correspondence between input and output.
Example: Working hard or slacking off.
Suppose you are working on a project, and you can either work hard or slack off. Call your choice of effort formula_13 and the quality of the resulting project formula_14 If the MLRP holds for the distribution of formula_15 conditional on your effort formula_13, the higher the quality the more likely you worked hard. Conversely, the lower the quality the more likely you slacked off.
1: Choose effort formula_16 where formula_17 means high effort, and formula_18 means low effort.
2: Observe formula_15 drawn from formula_19 By Bayes' law with a uniform prior,
formula_20
3: Suppose formula_21 satisfies the MLRP. Rearranging, the probability the worker worked hard is
formula_22
which, thanks to the MLRP, is monotonically increasing in formula_15 (because formula_23 is decreasing in formula_15).
Hence if some employer is doing a "performance review" he can infer his employee's behavior from the merits of his work.
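A quick numerical illustration of this monotonicity, using Gaussian quality densities with effort-dependent means as a stand-in for formula_21 (the parameter values are invented for the example, and SciPy is assumed to be available):

```python
import numpy as np
from scipy.stats import norm

# Illustrative densities: project quality is Gaussian around a mean set by effort.
f_H = lambda q: norm.pdf(q, loc=2.0, scale=1.0)   # high effort
f_L = lambda q: norm.pdf(q, loc=0.0, scale=1.0)   # low effort

q = np.linspace(-3, 5, 9)
posterior_H = f_H(q) / (f_H(q) + f_L(q))          # P[e = H | q] with a uniform prior

print(np.round(posterior_H, 3))
print(np.all(np.diff(posterior_H) > 0))           # True: increasing in q, as the MLRP implies
```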
Families of distributions satisfying MLR.
Statistical models often assume that data are generated by a distribution from some family of distributions and seek to determine that distribution. This task is simplified if the family has the monotone likelihood ratio property (MLRP).
A family of density functions formula_24 indexed by a parameter formula_25 taking values in an ordered set formula_26 is said to have a monotone likelihood ratio (MLR) in the statistic formula_27 if for any formula_28
formula_29 is a non-decreasing function of formula_9
Then we say the family of distributions "has MLR in formula_27".
Hypothesis testing.
If the family of random variables has the MLRP in formula_8 a uniformly most powerful test can easily be determined for the hypothesis formula_30 versus formula_31
Example: Effort and output.
Example: Let formula_13 be an input into a stochastic technology – worker's effort, for instance – and formula_32 its output, the likelihood of which is described by a probability density function formula_33 Then the monotone likelihood ratio property (MLRP) of the family formula_11 is expressed as follows: For any formula_34 the fact that formula_35 implies that the ratio formula_36 is increasing in formula_37
Relation to other statistical properties.
Monotone likelihoods are used in several areas of statistical theory, including point estimation and hypothesis testing, as well as in probability models.
Exponential families.
One-parameter exponential families have monotone likelihood-functions. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with
formula_38
has a monotone non-decreasing likelihood ratio in the sufficient statistic formula_39 provided that formula_40 is non-decreasing.
Uniformly most powerful tests: The Karlin–Rubin theorem.
Monotone likelihood functions are used to construct uniformly most powerful tests, according to the Karlin–Rubin theorem.
Consider a scalar measurement having a probability density function parameterized by a scalar parameter formula_41 and define the likelihood ratio formula_42
If formula_43 is monotone non-decreasing in formula_2 for any pair formula_44 (meaning that the greater formula_10 is, the more likely formula_45 is), then the threshold test:
formula_46
where formula_47 is chosen so that formula_48
is the UMP test of size formula_49 for testing formula_50 vs. formula_51
Note that exactly the same test is also UMP for testing formula_52 vs. formula_51
Median unbiased estimation.
Monotone likelihood-functions are used to construct median-unbiased estimators, using methods specified by Johann Pfanzagl and others.
One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation but for a larger class of loss functions.
Lifetime analysis: Survival analysis and reliability.
If a family of distributions formula_53 has the monotone likelihood ratio property in formula_54, then higher parameter values shift the distribution upward in two senses: the distributions are ordered by first-order stochastic dominance, and their hazard rates are ordered as well (the hazard rate is lower for higher parameter values), as shown in the proofs below.
But not conversely: neither monotone hazard rates nor stochastic dominance imply the MLRP.
Proofs.
Let distribution family formula_55 satisfy MLR in formula_2 so that for formula_56 and formula_57
formula_58
or equivalently:
formula_59
Integrating this expression twice, first over the smaller argument below a given point and then over the larger argument above it, yields two inequalities: one comparing products of densities with cumulative distribution functions, and one comparing products of densities with survival functions. These are used below.
First-order stochastic dominance.
Combine the two inequalities above to get first-order dominance:
formula_60
Monotone hazard rate.
Use only the second inequality above to get a monotone hazard rate:
formula_61
Uses.
Economics.
The MLR is an important condition on the type distribution of agents in mechanism design and economics of information, where Paul Milgrom defined "favorableness" of signals (in terms of stochastic dominance) as a consequence of MLR.
Most solutions to mechanism design models assume type distributions that satisfy the MLR to take advantage of solution methods that may be easier to apply and interpret.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ f(x)\\ "
},
{
"math_id": 1,
"text": "\\ g(x)\\ "
},
{
"math_id": 2,
"text": "\\ x\\ ,"
},
{
"math_id": 3,
"text": "\\ \\frac{\\ f(x)\\ }{ g(x) }\\ "
},
{
"math_id": 4,
"text": "\\ \\text{for every }x_2 > x_1, \\quad \\frac{ f(x_2) }{\\ g(x_2)\\ } \\geq \\frac{ f(x_1) }{\\ g(x_1)\\ }\\ "
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "\\frac{\\ \\partial}{\\ \\partial x} \\left( \\frac{ f(x) }{\\ g(x)\\ } \\right) \\geq 0\\ "
},
{
"math_id": 7,
"text": "\\ x ~."
},
{
"math_id": 8,
"text": "\\ T(X)\\ ,"
},
{
"math_id": 9,
"text": "\\ T(X) ~."
},
{
"math_id": 10,
"text": "\\ x\\ "
},
{
"math_id": 11,
"text": "\\ f\\ "
},
{
"math_id": 12,
"text": "\\ g ~."
},
{
"math_id": 13,
"text": "\\ e\\ "
},
{
"math_id": 14,
"text": "\\ q ~."
},
{
"math_id": 15,
"text": "\\ q\\ "
},
{
"math_id": 16,
"text": "\\ e \\in \\{ H, L \\}\\ "
},
{
"math_id": 17,
"text": "\\ H\\ "
},
{
"math_id": 18,
"text": "\\ L\\ "
},
{
"math_id": 19,
"text": "\\ f( q\\ |\\ e) ~."
},
{
"math_id": 20,
"text": "\\ \\operatorname{\\mathbb P} \\bigl[\\ e = H\\ \\big|\\ q\\ \\bigr] = \\frac{ f(q\\ |\\ H) }{\\ f(q\\ |\\ H) + f(q\\ |\\ L)\\ }\\ "
},
{
"math_id": 21,
"text": "\\ f(q\\ |\\ e)\\ "
},
{
"math_id": 22,
"text": "\\ \\frac{ 1 }{\\ 1 + f(q\\ |\\ L) / f(q\\ |\\ H)\\ }\\ "
},
{
"math_id": 23,
"text": "\\ \\frac{ f(q\\ |\\ L) }{\\ f(q\\ |\\ H)\\ }\\ "
},
{
"math_id": 24,
"text": "\\ \\bigl\\{\\ f_\\theta (x)\\ \\big|\\ \\theta \\in \\Theta\\ \\bigr\\}\\ "
},
{
"math_id": 25,
"text": "\\ \\theta\\ "
},
{
"math_id": 26,
"text": "\\ \\Theta\\ "
},
{
"math_id": 27,
"text": "\\ T(X)\\ "
},
{
"math_id": 28,
"text": "\\ \\theta_1 < \\theta_2\\ ,"
},
{
"math_id": 29,
"text": "\\ \\frac{ f_{\\theta_2}( X = x_1,\\ x_2,\\ x_3,\\ \\ldots\\ ) }{\\ f_{\\theta_1}( X = x_1,\\ x_2,\\ x_3,\\ \\ldots\\ )\\ }\\ "
},
{
"math_id": 30,
"text": "\\ H_0\\ :\\ \\theta \\le \\theta_0\\ "
},
{
"math_id": 31,
"text": "\\ H_1\\ :\\ \\theta > \\theta_0 ~."
},
{
"math_id": 32,
"text": "\\ y\\ "
},
{
"math_id": 33,
"text": "\\ f(y;e) ~."
},
{
"math_id": 34,
"text": "\\ e_1, e_2\\ ,"
},
{
"math_id": 35,
"text": "e_2 > e_1"
},
{
"math_id": 36,
"text": "\\ \\frac{\\ f( y ; e_2 )\\ }{ f( y ; e_1 ) }\\ "
},
{
"math_id": 37,
"text": "\\ y ~."
},
{
"math_id": 38,
"text": "\\ f_\\theta(x) = c(\\theta)\\ h(x)\\ \\exp\\Bigl(\\ \\pi\\left( \\theta \\right)\\ T\\left( x \\right)\\ \\Bigr)\\ "
},
{
"math_id": 39,
"text": "\\ T(x)\\ ,"
},
{
"math_id": 40,
"text": "\\ \\pi(\\theta)\\ "
},
{
"math_id": 41,
"text": "\\ \\theta\\ ,"
},
{
"math_id": 42,
"text": "\\ \\ell(x) = \\frac{ f_{\\theta_1}(x) }{\\ f_{\\theta_0}(x)\\ } ~."
},
{
"math_id": 43,
"text": "\\ \\ell(x)\\ "
},
{
"math_id": 44,
"text": "\\ \\theta_1 \\geq \\theta_0\\ "
},
{
"math_id": 45,
"text": "\\ H_1\\ "
},
{
"math_id": 46,
"text": "\\ \\varphi(x) = \n\\begin{cases}\n1 & \\text{if } x > x_0 \\\\\n0 & \\text{if } x < x_0\n\\end{cases}\\ "
},
{
"math_id": 47,
"text": "\\ x_0\\ "
},
{
"math_id": 48,
"text": "\\ \\operatorname{\\mathbb E} \\bigl\\{\\ \\varphi(X)\\ \\big|\\ \\theta_0\\ \\bigr\\} = \\alpha\\ "
},
{
"math_id": 49,
"text": "\\ \\alpha\\ "
},
{
"math_id": 50,
"text": "\\ H_0\\ :\\ \\theta \\leq \\theta_0 ~~"
},
{
"math_id": 51,
"text": "~~ H_1: \\theta > \\theta_0 ~."
},
{
"math_id": 52,
"text": "\\ H_0\\ :\\ \\theta = \\theta_0 ~~"
},
{
"math_id": 53,
"text": "\\ f_\\theta(x)\\ "
},
{
"math_id": 54,
"text": "T(X)"
},
{
"math_id": 55,
"text": "\\ f_\\theta\\ "
},
{
"math_id": 56,
"text": "\\ \\theta_1 > \\theta_0\\ "
},
{
"math_id": 57,
"text": "\\ x_1 > x_0\\ :"
},
{
"math_id": 58,
"text": "\\ \\frac{ f_{\\theta_1}(x_1) }{\\ f_{\\theta_0}(x_1)\\ } \\geq \\frac{\\ f_{\\theta_1}(x_0)\\ }{ f_{\\theta_0}(x_0) }\\ ,"
},
{
"math_id": 59,
"text": "\\ f_{\\theta_1}(x_1)\\ f_{\\theta_0}(x_0) \\geq f_{\\theta_1}(x_0)\\ f_{\\theta_0}(x_1) ~."
},
{
"math_id": 60,
"text": "F_{\\theta_1}(x) \\leq F_{\\theta_0}(x) ~ \\forall x\\ "
},
{
"math_id": 61,
"text": "\\ \\frac{ f_{\\theta_1}(x) }{\\ 1 - F_{\\theta_1}(x)\\ } \\leq \\frac{ f_{\\theta_0}(x) }{\\ 1 - F_{\\theta_0}(x)\\ } ~ \\forall x\\ "
}
]
| https://en.wikipedia.org/wiki?curid=7615996 |
761731 | Quantity theory of money | Theory in monetary economics
The quantity theory of money (often abbreviated QTM) is a hypothesis within monetary economics which states that the general price level of goods and services is directly proportional to the amount of money in circulation (i.e., the money supply), and that the causality runs from money to prices. This implies that the theory potentially explains inflation. It originated in the 16th century and has been proclaimed the oldest surviving theory in economics.
According to some, the theory was originally formulated by Renaissance mathematician Nicolaus Copernicus in 1517, whereas others mention Martín de Azpilcueta and Jean Bodin as independent originators of the theory. It has later been discussed and developed by several prominent thinkers and economists including John Locke, David Hume, Irving Fisher and Alfred Marshall. Milton Friedman made a restatement of the theory in 1956 and made it into a cornerstone of monetarist thinking.
The theory is often stated in terms of the equation MV = PY, where M is the money supply, V is the velocity of money, and PY is the nominal value of output or nominal GDP (P itself being a price index and Y the amount of real output). This equation is known as the quantity equation or the equation of exchange and is itself uncontroversial, as it can be seen as an accounting identity, residually defining velocity as the ratio of nominal output to the supply of money. Assuming additionally that Y is exogenous, being independently determined by other factors, that V is constant, and that M is exogenous and under the control of the central bank, the equation is turned into a theory which says that inflation (the change in P over time) can be controlled by setting the growth rate of M. However, all three assumptions are arguable and have been challenged over time. Output is generally believed to be affected by monetary policy at least temporarily, velocity has historically changed in unanticipated ways because of shifts in the money demand function, and some economists believe the money supply to be endogenously determined and hence not controlled by the monetary authorities.
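In growth-rate form the quantity equation reads gM + gV = gP + gY, so under the assumptions above the inflation rate is approximately money growth minus real output growth. A minimal illustration with made-up numbers:

```python
# Quantity equation in growth rates: money + velocity = inflation + real output.
def implied_inflation(money_growth, output_growth, velocity_growth=0.0):
    return money_growth + velocity_growth - output_growth

# Made-up numbers: 7% money growth, 2% real growth, constant velocity -> ~5% inflation.
print(implied_inflation(0.07, 0.02))
# If velocity unexpectedly falls by 3 percentage points, the same money growth implies ~2% inflation.
print(implied_inflation(0.07, 0.02, -0.03))
```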
The QTM played an important role in the monetary policy of the 1970s and 1980s when several leading central banks (including the Federal Reserve, the Bank of England and Bundesbank) based their policies on a money supply target in accordance with the theory. However, the results were not satisfactory, and strategies focusing specifically on monetary aggregates were generally abandoned during the 1980s and 1990s. Today, most major central banks in practice follow inflation targeting by suitably changing interest rates, and monetary aggregates play little role in monetary policy considerations in most countries.
Origins and development.
Before 1900: Early contributions.
Economic historian Mark Blaug has called the quantity theory of money "the oldest surviving theory in economics", with its origins dating back to the 16th century. Nicolaus Copernicus noted in 1517 that money usually depreciates in value when it is too abundant, which is by some historians taken as the first mention of the theory. Robert Dimand in the chapter on the history of monetary economics in "The New Palgrave Dictionary of Economics" identified Martín de Azpilcueta (1536) and Jean Bodin (1568) as the originators of a proper theory usable for explaining the observed quadrupling of prices during the phenomenon known as the Price revolution following the influx of silver from the New World to Europe.
John Locke studied the velocity of circulation, and David Hume in 1752 used the quantity theory to develop his price–specie flow mechanism explaining balance of payments adjustments. Also Henry Thornton, John Stuart Mill and Simon Newcomb among others contributed to the development of the quantity theory.
During the 19th century, a main rival of the quantity theory was the real bills doctrine, which says that the issue of money does not raise prices, as long as the new money is issued in exchange for assets of sufficient value. According to proponents of the real bills doctrine, the money supply responded passively to money demand. Consequently, there could be no causal influence from money to prices; rather, the connection ran in the opposite direction: money demand was determined by income and prices, which in turn were affected by inflation, itself caused by various real (i.e., non-monetary) factors.
1900–1950: Fisher, Wicksell, Marshall and Keynes.
The eminent economist Irving Fisher, building upon work by Newcomb, developed the theory further in what has been called "The Golden Age of the quantity theory", formalizing the equation of exchange and attempting to measure the velocity of money empirically and independently. Fisher insisted on the long-run neutrality of money, but admitted that money was not neutral during transition periods of up to 10 years. Another renowned monetary economist, Knut Wicksell, criticized the quantity theory of money, citing the notion of a "pure credit economy". Wicksell instead emphasized real shocks as a cause of observed price movements and developed his theory of the natural rate of interest to explain why the monetary authority should stabilize prices by setting the interest rate rather than the quantity of money – a position that has received renewed attention during the 21st century, exemplified in the influential Taylor rule of monetary policy.
The extremely influential neoclassical economist Alfred Marshall, Professor at Cambridge, expounded the quantity theory in a version which stated that desired cash balances (i.e., money demand) were proportional to nominal income. The proposition is normally written M = kPY, where k is the proportionality factor. This is known as the Cambridge equation, a variant of the quantity theory. As the coefficient k is the reciprocal of V, the income velocity of circulation of money in the equation of exchange, the two versions of the quantity theory are formally equivalent, though the Cambridge variant focuses on money demand as an important element of the theory.
Marshall's disciple John Maynard Keynes extended his monetary analysis in several ways and eventually integrated it into his "General Theory of Employment, Interest and Money", published in 1936, which formed the cornerstone of the Keynesian Revolution. Keynes accepted the quantity theory in principle as accurate over the long run, but not over the short run, coining in his 1923 book "A Tract on Monetary Reform" the famous sentence, "In the long run, we are all dead". He emphasized that money demand (or, in his terminology, liquidity preference) depended on the interest rate as well as nominal income, and contended that contrary to contemporaneous thinking, velocity and output were not stable, but highly variable and as such, the quantity of money was of little importance in driving prices. Rather, changes in the money supply could have effects on real variables like output.
Although Keynes himself and his followers, who contributed to the resulting theoretical foundation of Keynesian economics, in principle recognized a role for monetary policy in stabilizing economic fluctuations over the business cycle, in practice they believed that fiscal policy was more efficient for this purpose, maintaining that changes in interest rates had little effect on demand and output. The Keynesian paradigm came to dominate macroeconomic thinking until the 1970s, assigning little attention to monetary policy.
Monetarism.
However, from the 1950s and increasingly during the 1960s, the Keynesian view was challenged by an initially small, but increasingly influential minority, the monetarists, the intellectual leader of which was Milton Friedman. In response to the Keynesian view of the world, he made a restatement of the quantity theory in 1956 and used it as a cornerstone for monetarist thinking.
Friedman agreed that money could affect output in the short run. Indeed, he believed that monetary policy was much more powerful in this respect than fiscal policy. Together with Anna Schwartz, he wrote in 1963 the influential book "A Monetary History of the United States", concluding that movements in money explained most of the fluctuations in output, and reinterpreted the Great Depression as the result of a major mistake in American monetary policy, failing to avoid a large contraction in the money supply during the 1930s.
At the same time, Friedman was sceptical as to the use of active monetary policy to stabilise output, believing that knowledge of the economy was too little to ensure that such policies would improve rather than worsen the situation. Instead, he advocated a simple monetary policy rule of maintaining a steady growth rate in money supply, which would not result in perfect short-run stabilisation, but in accordance with the quantity theory would ensure a steady long-run inflation rate. This came to be the main policy recommendation of the monetarists.
Consequently, the monetarist application of the quantity-theory approach aimed at removing monetary policy as a source of macroeconomic instability by targeting a constant, low growth rate of the money supply. The zenith of monetarist influence came during the late 1970s and the 1980s, after inflation had risen in many countries during the 1970s caused by the 1970s energy crisis, and the fixed exchange rate system among major Western economies known as the Bretton Woods system had been dissolved. In that situation several central banks turned to a money supply target in an attempt to reduce inflation. For instance the U.S. Federal Reserve System led by chairman Paul Volcker announced a money growth target, starting from October 1979.
The results were not satisfactory, however, because the relationship between monetary aggregates and other macroeconomic variables proved to be rather unstable. Similar results prevailed in other countries. Firstly, the relation between money growth and inflation turned out to be not very tight, even over 10-year periods, and secondly, the relation between the money supply and the interest rate in the short run turned out to be unreliable, too, making money growth an unreliable instrument to affect demand and output. The reason for both problems was frequent shifts in the demand for money during the period, partly because of changes in financial intermediation. This made velocity unpredictable and weakened the link between money and prices implied by the quantity theory. Milton Friedman later acknowledged that direct money supply targeting was less successful than he had hoped.
New classical economists.
For a third group of post-war macroeconomists besides Keynesians and monetarists, the new classical economists, the quantity theory of money was also a doctrine of fundamental importance, but Robert E. Lucas and other leading new classical economists made serious efforts to specify and refine its theoretical meaning. These theoretical considerations involved serious changes as to the scope of countercyclical economic policy. The new classical model held that even in the short run, monetary policy could not be used to stabilize output, as only unexpected changes in money could affect real variables. However, this view did not gain widespread support, failing to be confirmed by empirical tests. Empirically, evidence generally supports that there is a short-run linkage between money and economic activity.
After 1990: Decline of money supply targeting.
Following the difficulties of the 1980s in conducting a satisfactory monetary policy by money supply targeting, most central banks, including the U.S. Federal Reserve, turned away from focusing on monetary aggregates, instead implementing their policies by setting short-term interest rates. Among monetary researchers, the demise of the money supply as a policy variable was recognized and rationalized by Michael Woodford.
From 1990, the new principle of inflation targets as the basis for a country's monetary policy gained popularity, starting with New Zealand and eventually spreading to most developed countries. Inflation targeting countries set interest rates to influence economic activity via the monetary transmission mechanism, eventually affecting inflation to fulfill their inflation targets. The communication of inflation targets helps to anchor the public's inflation expectations, makes central banks more accountable for their actions, and reduces economic uncertainty among the participants in the economy.
Money supply (M2) for some time remained a leading economic indicator in the United States, but lost its status as such in the Conference Board Leading Economic Index in 2012, after it was ascertained that it had performed poorly as a leading indicator since 1989. Also in the policy making of the European Central Bank from 1999, monetary aggregates, which were initially officially assigned a prominent role as one of two pillars upon which the ECB monetary policy rested, were assigned a gradually more peripheral role among the indicators informing the bank's interest rate decisions.
The equation of exchange.
In its modern form, the quantity theory builds upon the following definitional relationship, formulated algebraically by Irving Fisher in 1911:
formula_0
where
formula_1 is the total amount of money in circulation on average in an economy during the period, say a year.
formula_2 is the transactions velocity of money, that is the average frequency across all transactions with which a unit of money is spent. This reflects availability of financial institutions, economic variables, and choices made as to how fast people turn over their money.
formula_3 and formula_4 are the price and quantity of the i-th transaction.
formula_5 is a column vector of the formula_3, and the superscript T is the transpose operator.
formula_6 is a column vector of the formula_4.
Mainstream economics accepts a simplification, the equation of exchange, also called the quantity "equation":
formula_7
where
formula_8 is the price level associated with transactions for the economy during the period,
formula_9 is an index of the real value of aggregate transactions.
The previous equation presents the difficulty that the associated data are not available for all transactions. With the development of national income and product accounts, emphasis shifted to national-income or final-product transactions, rather than gross transactions. Economists may alternatively use a specification where
formula_10 is the velocity of money in final expenditures or, equivalently, the income velocity of money,
formula_11 is an index of the real value of final expenditures or, equivalently, income.
As an example, formula_12 might represent currency plus deposits in checking and savings accounts held by the public, formula_11 real output (which equals real expenditure in macroeconomic equilibrium) with formula_13 the corresponding price level, and formula_14 the nominal (money) value of output. In one empirical formulation, velocity was taken to be "the ratio of net national product in current prices to the money stock".
From the quantity equation to the quantity theory.
The quantity equation itself as stated above is uncontroversial, as it amounts to an identity or, equivalently, simply a definition of velocity: From the equation, velocity can be defined residually as the ratio of nominal output to the stock of money: formula_15. Developing a theory out of the equation requires that assumptions be made about the causal relationships among the four variables in this one equation. The crucial question is to what extent each of these variables is dependent upon the others. Without further restrictions, the equation does not require that a change in the money supply would change the value of any or all of formula_13, formula_11, or formula_14. For example, a 10% increase in formula_12 could be accompanied by a change of 1/(1 + 10%) in formula_10, leaving formula_14 unchanged.
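As a purely arithmetical illustration of this residual definition, and of how a change in the money supply can in principle be absorbed entirely by velocity, consider the short Python sketch below; all figures are invented for illustration and are not real data.

M = 2.0e12                 # money supply (hypothetical)
nominal_output = 2.0e13    # P*Q, nominal output (hypothetical)

V = nominal_output / M     # velocity defined residually: V = (P*Q)/M
print(V)                   # 10.0

M_new = 1.10 * M           # a 10% increase in the money supply
V_new = nominal_output / M_new
print(V_new / V)           # 1/1.1, i.e. about 0.909: V falls so that P*Q is unchanged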
The quantity "theory" of money consequently goes further, resting in its basic form on three additional assumptions:
Under these three assumptions, there is a causal effect of "M" on "P", and the central bank, by controlling money supply, will be able to directly control the price level of the economy. Specifically, a constant growth rate in the money stock will lead to a constant inflation rate, as long as real output grows at a constant rate.
The realism of each of the three assumptions has been debated over time, though, making the prominent monetarist economist David Laidler declare in 1991 that the quantity theory "is always and everywhere controversial". Firstly, most economists think that output can be affected by e.g. changes in demand including those that originate from monetary (or fiscal) policy in the short run, i.e. at any point in the business cycle, though in the medium and long run the assumption is more warranted. Indeed, the possibility of influencing and mitigating short-run output fluctuations is the basis for the stabilization policies of most central banks in developed countries today. Secondly, there is general agreement that velocity does change over time, and sometimes in unpredictable ways, because of changes in the money demand function; this may e.g. be the consequence of changes in the infrastructure of payment systems. This was considered a major problem during the 1970s and 1980s when several major central banks including the Federal Reserve tried conducting monetary policy following a money supply target. Thirdly, the exogeneity of the money supply and its control by the monetary authority are questioned by some economists. James Tobin noted in 1970 that money might be correlated with output because money passively reacts to output. Central banks and consequently monetary bases can be said to react to events in the economy, and most typical money supply measures are created by private commercial banks, which may themselves be affected by general economic conditions when carrying out their banking activities.
Cambridge approach.
Economists Alfred Marshall, A.C. Pigou, and John Maynard Keynes (before he developed his own, eponymous school of thought) associated with Cambridge University, took a slightly different approach to the quantity equation, focusing on money demand instead of money supply. They argued that a certain portion of the money supply will not be used for transactions; instead, it will be held for the convenience and security of having cash on hand. This portion of cash is commonly represented as "k", a portion of nominal income (formula_16). The Cambridge economists also thought wealth would play a role, but wealth is often omitted for simplicity. The Cambridge equation is thus:
formula_17
Assuming that the economy is at equilibrium (formula_18), formula_19 is exogenous, and "k" is fixed in the short run, the Cambridge equation is equivalent to the equation of exchange with velocity equal to the inverse of "k":
formula_20
The Cambridge version of the quantity equation was used in both Keynes's attack on the quantity theory and the Monetarist revival of the theory.
Evidence.
As restated by Milton Friedman, the quantity theory emphasizes the following relationship of the nominal value of expenditures formula_21 and the price level formula_13 to the quantity of money formula_22:
formula_23
formula_24
The plus signs indicate that a change in the money supply is hypothesized to change nominal expenditures and the price level in the same direction (for other variables held constant).
Milton Friedman made an influential case for the theory in his 1956 paper "Studies in the quantity theory of money". Later, Friedman wrote in 1987 that the empirical regularity of a "connection between substantial changes in the quantity of money and in the level of prices" was perhaps the most-evidenced economic phenomenon on record, adding that "The statistical connection itself, however, tells nothing about direction of influence". According to Friedman, the short-run relation of a change in the money supply in the past has been relatively more associated with a change in real output formula_11 than the price level formula_13 in (1), but with much variation in the precision, timing, and size of the relation. For the "long"-run, there has been stronger support for (1) and (2) and no systematic association of formula_11 and formula_12.
In a more recent examination of data from 109 countries from 1991 onwards, it was found that inflation and money growth did not move in proportion; excess money growth did, however, act as a predictor of inflation, though the effect during the time period examined was relatively small.
In 2016, Professor Harald Uhlig and two coauthors examined a cross-section of countries in the years 1970-2005. They found that for moderate-inflation countries (defined as countries with average inflation rates below 12%), the direct relationship between average inflation and the growth rate of money was very tenuous at best, though the fit could be improved by correcting for variation in output growth and the opportunity cost of money. They also found that for countries following inflation targeting, the fit of a one-for-one relationship between money growth and inflation was considerably lower than for other countries.
Though more disputed in the 1970s, surveys of members of the American Economics Association since the 1990s have shown that most professional American economists generally agree with the statement: "Inflation is caused primarily by too much growth in the money supply."
Criticism by non-mainstream economists.
In the 1860s, Karl Marx modified the quantity theory by arguing that the labor theory of value requires that prices, under equilibrium conditions, are determined by the socially necessary labor time needed to produce the commodity, and that the quantity of money was a function of the quantity of commodities, the prices of commodities, and the velocity of circulation.
In 1912, Ludwig von Mises agreed that there was a core of truth in the quantity theory, but criticized its focus on the supply of money without adequately explaining the demand for money. He said the theory "fails to explain the mechanism of variations in the value of money".
In his 1976 book "The Denationalisation of Money", Friedrich Hayek described the quantity theory of money "as no more than a useful rough approximation to a really adequate explanation". According to him, the theory "becomes wholly useless where several concurrent distinct kinds of money are simultaneously in use in the same territory."
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M\\cdot V_T =\\sum_{i} (p_i\\cdot q_i)=\\mathbf{p}^\\mathrm{T}\\mathbf{q},"
},
{
"math_id": 1,
"text": "M\\,"
},
{
"math_id": 2,
"text": "V_T\\,"
},
{
"math_id": 3,
"text": "p_i\\,"
},
{
"math_id": 4,
"text": "q_i\\,"
},
{
"math_id": 5,
"text": "\\mathbf{p}"
},
{
"math_id": 6,
"text": "\\mathbf{q}"
},
{
"math_id": 7,
"text": "M\\cdot V_T = P_T\\cdot T,"
},
{
"math_id": 8,
"text": "P_T"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "V"
},
{
"math_id": 11,
"text": "Q"
},
{
"math_id": 12,
"text": "M"
},
{
"math_id": 13,
"text": "P"
},
{
"math_id": 14,
"text": "P\\cdot Q"
},
{
"math_id": 15,
"text": "V=(P\\cdot Q)/M"
},
{
"math_id": 16,
"text": "P \\cdot Y"
},
{
"math_id": 17,
"text": "M^{\\textit{d}}=\\textit{k} \\cdot P\\cdot Y."
},
{
"math_id": 18,
"text": "M^{\\textit{d}} = M"
},
{
"math_id": 19,
"text": "Y"
},
{
"math_id": 20,
"text": "M\\cdot\\frac{1}{k} = P\\cdot Y."
},
{
"math_id": 21,
"text": "PQ "
},
{
"math_id": 22,
"text": "M "
},
{
"math_id": 23,
"text": "(1) PQ={f}(\\overset{+}M)"
},
{
"math_id": 24,
"text": "(2) P={g}(\\overset{+}M)"
}
]
| https://en.wikipedia.org/wiki?curid=761731 |
76174 | Universal quantification | Mathematical use of "for all"
In mathematical logic, a universal quantification is a type of quantifier, a logical constant which is interpreted as "given any", "for all", or "for any". It expresses that a predicate can be satisfied by every member of a domain of discourse. In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate within the scope of a universal quantifier is true of every value of a predicate variable.
It is usually denoted by the turned A (∀) logical operator symbol, which, when used together with a predicate variable, is called a universal quantifier ("∀"x"", "∀("x")", or sometimes by "("x")" alone). Universal quantification is distinct from "existential" quantification ("there exists"), which only asserts that the property or relation holds for at least one member of the domain.
Quantification in general is covered in the article on quantification (logic). The universal quantifier is encoded as in Unicode, and as codice_0 in LaTeX and related formula editors.
Basics.
Suppose it is given that
2·0 = 0 + 0, and 2·1 = 1 + 1, and 2·2 = 2 + 2, etc.
This would seem to be a logical conjunction because of the repeated use of "and". However, the "etc." cannot be interpreted as a conjunction in formal logic. Instead, the statement must be rephrased:
For all natural numbers "n", one has 2·"n" = "n" + "n".
This is a single statement using universal quantification.
This statement can be said to be more precise than the original one. While the "etc." informally includes natural numbers, and nothing more, this was not rigorously given. In the universal quantification, on the other hand, the natural numbers are mentioned explicitly.
This particular example is true, because any natural number could be substituted for "n" and the statement "2·"n" = "n" + "n"" would be true. In contrast,
For all natural numbers "n", one has 2·"n" > 2 + "n"
is false, because if "n" is substituted with, for instance, 1, the statement "2·1 > 2 + 1" is false. It is immaterial that "2·"n" > 2 + "n"" is true for "most" natural numbers "n": even the existence of a single counterexample is enough to prove the universal quantification false.
On the other hand,
for all composite numbers "n", one has 2·"n" > 2 + "n"
is true, because none of the counterexamples are composite numbers. This indicates the importance of the "domain of discourse", which specifies which values "n" can take. In particular, note that if the domain of discourse is restricted to consist only of those objects that satisfy a certain predicate, then for universal quantification this requires a logical conditional. For example,
For all composite numbers "n", one has 2·"n" > 2 + "n"
is logically equivalent to
For all natural numbers "n", if "n" is composite, then 2·"n" > 2 + "n".
Here the "if ... then" construction indicates the logical conditional.
Notation.
In symbolic logic, the universal quantifier symbol formula_0 (a turned "A" in a sans-serif font, Unicode U+2200) is used to indicate universal quantification. It was first used in this way by Gerhard Gentzen in 1935, by analogy with Giuseppe Peano's formula_1 (turned E) notation for existential quantification and the later use of Peano's notation by Bertrand Russell.
For example, if "P"("n") is the predicate "2·"n" > 2 + "n"" and N is the set of natural numbers, then
formula_2
is the (false) statement
"for all natural numbers "n", one has 2·"n" > 2 + "n".
Similarly, if "Q"("n") is the predicate "n" is composite", then
formula_3
is the (true) statement
"for all natural numbers "n", if "n" is composite, then 2·"n" > 2 + n".
Several variations in the notation for quantification (which apply to all forms) can be found in the "Quantifier" article.
Properties.
Negation.
The negation of a universally quantified function is obtained by changing the universal quantifier into an existential quantifier and negating the quantified formula. That is,
formula_4
where formula_5 denotes negation.
For example, if "P"("x") is the propositional function ""x" is married", then, for the set X of all living human beings, the universal quantification
Given any living person "x", that person is married
is written
formula_6
This statement is false. What is true is the statement that
It is not the case that, given any living person "x", that person is married
or, symbolically:
formula_7.
If the function "P"("x") is not true for "every" element of X, then there must be at least one element for which the statement is false. That is, the negation of formula_6 is logically equivalent to "There exists a living person "x" who is not married", or:
formula_8
It is erroneous to confuse "all persons are not married" (i.e. "there exists no person who is married") with "not all persons are married" (i.e. "there exists a person who is not married"):
formula_9
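A small Python sketch of the negation rules above, using a hypothetical finite set of people: not all(...) coincides with any(not ...), while all(not ...) coincides with not any(...).

married = {"alice": True, "bob": False, "carol": True}     # hypothetical data

forall_married = all(married[x] for x in married)          # "every person is married"
exists_unmarried = any(not married[x] for x in married)    # "some person is not married"
print((not forall_married) == exists_unmarried)            # True

forall_unmarried = all(not married[x] for x in married)    # "every person is not married"
exists_married = any(married[x] for x in married)          # "some person is married"
print(forall_unmarried == (not exists_married))            # True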
Other connectives.
The universal (and existential) quantifier moves unchanged across the logical connectives ∧, ∨, →, and ↚, as long as the other operand is not affected; that is:
formula_11
Conversely, for the logical connectives ↑, ↓, ↛, and ←, the quantifiers flip:
formula_12
Rules of inference.
A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the universal quantifier.
"Universal instantiation" concludes that, if the propositional function is known to be universally true, then it must be true for any arbitrary element of the universe of discourse. Symbolically, this is represented as
formula_13
where "c" is a completely arbitrary element of the universe of discourse.
"Universal generalization" concludes the propositional function must be universally true if it is true for any arbitrary element of the universe of discourse. Symbolically, for an arbitrary "c",
formula_14
The element "c" must be completely arbitrary; else, the logic does not follow: if "c" is not arbitrary, and is instead a specific element of the universe of discourse, then P("c") only implies an existential quantification of the propositional function.
The empty set.
By convention, the formula formula_15 is always true, regardless of the formula "P"("x"); see vacuous truth.
Universal closure.
The universal closure of a formula φ is the formula with no free variables obtained by adding a universal quantifier for every free variable in φ. For example, the universal closure of
formula_16
is
formula_17.
As adjoint.
In category theory and the theory of elementary topoi, the universal quantifier can be understood as the right adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the existential quantifier is the left adjoint.
For a set formula_18, let formula_19 denote its powerset. For any function formula_20 between sets formula_18 and formula_21, there is an inverse image functor formula_22 between powersets, that takes subsets of the codomain of "f" back to subsets of its domain. The left adjoint of this functor is the existential quantifier formula_23 and the right adjoint is the universal quantifier formula_24.
That is, formula_25 is a functor that, for each subset formula_26, gives the subset formula_27 given by
formula_28
those formula_10 in the image of formula_29 under formula_30. Similarly, the universal quantifier formula_31 is a functor that, for each subset formula_26, gives the subset formula_32 given by
formula_33
those formula_10 whose preimage under formula_30 is contained in formula_29.
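A concrete finite sketch in Python of these two operators, written directly from the set-builder descriptions above; the sets and the function f are made up for illustration.

def exists_f(f, S, Y):
    # the elements of Y hit by some element of S
    return {y for y in Y if any(f(x) == y for x in S)}

def forall_f(f, S, X, Y):
    # the elements of Y whose whole preimage under f lies inside S
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

X = {0, 1, 2, 3}
Y = {"even", "odd"}
f = lambda x: "even" if x % 2 == 0 else "odd"
S = {0, 1, 2}

print(exists_f(f, S, Y))      # {'even', 'odd'}: both parities are hit by S
print(forall_f(f, S, X, Y))   # {'even'}: the preimage of 'odd' is {1, 3}, not contained in S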
The more familiar form of the quantifiers as used in first-order logic is obtained by taking the function "f" to be the unique function formula_34 so that formula_35 is the two-element set holding the values true and false, a subset "S" is that subset for which the predicate formula_36 holds, and
formula_37
formula_38
which is true if formula_29 is not empty, and
formula_39
which is false if S is not X.
The universal and existential quantifiers given above generalize to the presheaf category.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\forall "
},
{
"math_id": 1,
"text": "\\exists"
},
{
"math_id": 2,
"text": " \\forall n\\!\\in\\!\\mathbb{N}\\; P(n) "
},
{
"math_id": 3,
"text": " \\forall n\\!\\in\\!\\mathbb{N}\\; \\bigl( Q(n) \\rightarrow P(n) \\bigr) "
},
{
"math_id": 4,
"text": "\\lnot \\forall x\\; P(x)\\quad\\text {is equivalent to}\\quad \\exists x\\;\\lnot P(x) "
},
{
"math_id": 5,
"text": "\\lnot"
},
{
"math_id": 6,
"text": "\\forall x \\in X\\, P(x)"
},
{
"math_id": 7,
"text": "\\lnot\\ \\forall x \\in X\\, P(x)"
},
{
"math_id": 8,
"text": "\\exists x \\in X\\, \\lnot P(x)"
},
{
"math_id": 9,
"text": "\\lnot\\ \\exists x \\in X\\, P(x) \\equiv\\ \\forall x \\in X\\, \\lnot P(x) \\not\\equiv\\ \\lnot\\ \\forall x\\in X\\, P(x) \\equiv\\ \\exists x \\in X\\, \\lnot P(x)"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "\\begin{align}\nP(x) \\land (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\land Q(y)) \\\\\nP(x) \\lor (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\lor Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\nP(x) \\to (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\to Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\nP(x) \\nleftarrow (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\nleftarrow Q(y)) \\\\\nP(x) \\land (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\land Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\nP(x) \\lor (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\lor Q(y)) \\\\\nP(x) \\to (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\to Q(y)) \\\\\nP(x) \\nleftarrow (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) &\\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\nleftarrow Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset\n\\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align}\nP(x) \\uparrow (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\uparrow Q(y)) \\\\\nP(x) \\downarrow (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\downarrow Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\nP(x) \\nrightarrow (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\nrightarrow Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\nP(x) \\gets (\\exists{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\forall{y}{\\in}\\mathbf{Y}\\, (P(x) \\gets Q(y)) \\\\\nP(x) \\uparrow (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\uparrow Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\nP(x) \\downarrow (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\downarrow Q(y)) \\\\\nP(x) \\nrightarrow (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\nrightarrow Q(y)) \\\\\nP(x) \\gets (\\forall{y}{\\in}\\mathbf{Y}\\, Q(y)) & \\equiv\\ \\exists{y}{\\in}\\mathbf{Y}\\, (P(x) \\gets Q(y)),& \\text{provided that } \\mathbf{Y}\\neq \\emptyset \\\\\n\\end{align}"
},
{
"math_id": 13,
"text": " \\forall{x}{\\in}\\mathbf{X}\\, P(x) \\to P(c)"
},
{
"math_id": 14,
"text": " P(c) \\to\\ \\forall{x}{\\in}\\mathbf{X}\\, P(x)."
},
{
"math_id": 15,
"text": "\\forall{x}{\\in}\\emptyset \\, P(x)"
},
{
"math_id": 16,
"text": "P(y) \\land \\exists x Q(x,z)"
},
{
"math_id": 17,
"text": "\\forall y \\forall z ( P(y) \\land \\exists x Q(x,z))"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "\\mathcal{P}X"
},
{
"math_id": 20,
"text": "f:X\\to Y"
},
{
"math_id": 21,
"text": "Y"
},
{
"math_id": 22,
"text": "f^*:\\mathcal{P}Y\\to \\mathcal{P}X"
},
{
"math_id": 23,
"text": "\\exists_f"
},
{
"math_id": 24,
"text": "\\forall_f"
},
{
"math_id": 25,
"text": "\\exists_f\\colon \\mathcal{P}X\\to \\mathcal{P}Y"
},
{
"math_id": 26,
"text": "S \\subset X"
},
{
"math_id": 27,
"text": "\\exists_f S \\subset Y"
},
{
"math_id": 28,
"text": "\\exists_f S =\\{ y\\in Y \\;|\\; \\exists x\\in X.\\ f(x)=y \\quad\\land\\quad x\\in S \\},"
},
{
"math_id": 29,
"text": "S"
},
{
"math_id": 30,
"text": "f"
},
{
"math_id": 31,
"text": "\\forall_f\\colon \\mathcal{P}X\\to \\mathcal{P}Y"
},
{
"math_id": 32,
"text": "\\forall_f S \\subset Y"
},
{
"math_id": 33,
"text": "\\forall_f S =\\{ y\\in Y \\;|\\; \\forall x\\in X.\\ f(x)=y \\quad\\implies\\quad x\\in S \\},"
},
{
"math_id": 34,
"text": "!:X \\to 1"
},
{
"math_id": 35,
"text": "\\mathcal{P}(1) = \\{T,F\\}"
},
{
"math_id": 36,
"text": "S(x)"
},
{
"math_id": 37,
"text": "\\begin{array}{rl}\\mathcal{P}(!)\\colon \\mathcal{P}(1) & \\to \\mathcal{P}(X)\\\\ T &\\mapsto X \\\\ F &\\mapsto \\{\\}\\end{array}"
},
{
"math_id": 38,
"text": "\\exists_! S = \\exists x. S(x),"
},
{
"math_id": 39,
"text": "\\forall_! S = \\forall x. S(x),"
}
]
| https://en.wikipedia.org/wiki?curid=76174 |
76179196 | Neil Hindman | Neil Hindman (born April 14, 1943) is an American mathematician and Professor Emeritus at Howard University. His research focuses on various areas within mathematics, including topology, Stone-Čech compactification, discrete systems, and Ramsey theory.
Life and education.
Neil Hindman actively participated in civil rights work during his college years. In the summer of 1964, he served as a freedom school coordinator in Mississippi.
Hindman completed his Bachelor of Arts degree in mathematics and physics in 1965 at Westmar College. He then pursued a graduate degree, earning a Master of Arts in mathematics from the University of Massachusetts in 1967. Subsequently, Hindman continued his academic journey at Wesleyan University, where he received his Ph.D. in 1969. Under the supervision of W. W. Comfort, Hindman wrote his doctoral thesis on "P-like spaces and their product with P-spaces."
Academic career.
Neil Hindman began his academic career as a visiting assistant professor at Wesleyan University, serving from September 1969 to June 1970. Following this, he joined California State University, Los Angeles, as an assistant professor in September 1970. From September 1975 to August 1976, Hindman held a visiting associate professorship at SUNY (The State University of New York) at Binghamton. By December 1979, he had risen to the rank of Professor at California State University, Los Angeles.
In January 1980, Hindman transitioned to Howard University, where he assumed the role of associate professor, continuing to impart knowledge in mathematics. He dedicated several decades to teaching and research at Howard University, ultimately retiring as a Professor of Mathematics in June 2017.
Mathematical work.
One of Hindman's early contributions was his Ph.D. dissertation, conducted in collaboration with W. W. Comfort and S. Negrepontis. Their research explored conditions for defining F'-spaces and investigated concepts such as weakly Lindelöf spaces and P-spaces, shedding light on the structure of F-spaces in topology. This pioneering work significantly advanced theoretical models and analytical techniques within the field.
Hindman's Theorem, formulated and proven by Neil Hindman, addresses a conjecture originally proposed by Graham and Rothschild. The theorem asserts that any partition of the natural numbers formula_0 into a finite number of classes contains at least one class with a sequence such that all finite sums of distinct elements from this sequence also belong to the same class. Hindman's Theorem confirms the conjecture by Graham and Rothschild and establishes its equivalence with the existence of an ultrafilter on formula_0. This theorem highlights the relationship between the partition regularity of the natural numbers and ultrafilters, offering a fundamental result with broad implications across various mathematical domains.
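The finite-sums condition in the theorem can be made concrete with a short Python sketch; the particular partition (even versus odd numbers) and the chosen sequence are merely an illustration of the property the theorem guarantees, not a proof of it.

from itertools import combinations

def finite_sums(seq):
    # sums of all non-empty finite sets of distinct elements of seq
    return {sum(c) for r in range(1, len(seq) + 1) for c in combinations(seq, r)}

colour = lambda n: n % 2        # a two-class partition: even numbers get class 0, odd numbers class 1
seq = [2, 6, 20]                # a finite sequence drawn from the even class
fs = finite_sums(seq)
print(sorted(fs))               # [2, 6, 8, 20, 22, 26, 28]
print({colour(n) for n in fs})  # {0}: every finite sum stays in the same class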
Hindman remains active in the fields of Ramsey Theory and Topology, with a particular focus on the Stone–Čech compactification. | [
{
"math_id": 0,
"text": "\\mathbb{N}"
}
]
| https://en.wikipedia.org/wiki?curid=76179196 |
76187094 | Model-based clustering | Model-based clustering in statistics
In statistics, cluster analysis is the algorithmic grouping of objects into homogeneous
groups based on numerical measurements. Model-based clustering bases this on a statistical model for the data, usually a mixture model. This has several advantages, including a principled statistical basis for clustering,
and ways to choose the number of clusters, to choose the best clustering model, to assess the uncertainty of the clustering, and to identify outliers that do not belong to any group.
Model-based clustering.
Suppose that for each of formula_0 observations we have data on
formula_1 variables, denoted by formula_2
for observation formula_3. Then
model-based clustering expresses the probability density function of
formula_4 as a finite mixture, or weighted average of
formula_5 component probability density functions:
formula_6
where formula_7 is a probability density function with
parameter formula_8, formula_9 is the corresponding
mixture probability where formula_10.
Then in its simplest form, model-based clustering views each component
of the mixture model as a cluster, estimates the model parameters, and assigns
each observation to cluster corresponding to its most likely mixture component.
Gaussian mixture model.
The most common model for continuous data is that formula_7 is a multivariate normal distribution with mean vector formula_11
and covariance matrix formula_12, so that
formula_13.
This defines a Gaussian mixture model. The parameters of the model,
formula_9 and formula_8 for formula_14,
are typically estimated by maximum likelihood estimation using the
expectation-maximization algorithm (EM); see also
EM algorithm and GMM model.
Bayesian inference is also often used for inference about finite
mixture models. The Bayesian approach also allows for the case where the number of components, formula_5, is infinite, using a Dirichlet process prior, yielding a Dirichlet process mixture model for clustering.
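As a minimal illustration, the following Python sketch fits a two-component Gaussian mixture to synthetic data and assigns each observation to its most likely component; it uses scikit-learn's GaussianMixture, a library choice made here for illustration and not one of the packages discussed later in the article.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two synthetic Gaussian clusters in two dimensions
X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(100, 2)),
               rng.normal([3.0, 3.0], 1.0, size=(100, 2))])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)        # hard assignment to the most likely mixture component
probs = gmm.predict_proba(X)   # posterior membership probabilities, quantifying clustering uncertainty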
Choosing the number of clusters.
An advantage of model-based clustering is that it provides statistically
principled ways to choose the number of clusters. Each different choice of the number of groups formula_5 corresponds to a different mixture model. Then standard statistical model selection criteria such as the
Bayesian information criterion (BIC) can be used to choose formula_5. The integrated completed likelihood (ICL) is a different criterion designed to choose the number of clusters rather than the number of mixture components in the model; these will often be different if highly non-Gaussian clusters are present.
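Continuing the synthetic example above, the number of components and the covariance structure can be chosen jointly by comparing BIC values; note that scikit-learn defines BIC so that lower values are better, whereas the mclust convention is to maximize its BIC.

bic = {}
for g in range(1, 7):
    for cov in ("spherical", "diag", "tied", "full"):   # a subset of the constrained covariance families
        model = GaussianMixture(n_components=g, covariance_type=cov, random_state=0).fit(X)
        bic[(g, cov)] = model.bic(X)
best_g, best_cov = min(bic, key=bic.get)
print(best_g, best_cov)   # expected to select 2 components for these data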
Parsimonious Gaussian mixture model.
For data with high dimension, formula_1, using a full covariance matrix for each mixture component requires estimation of many parameters, which can result in a loss of precision, generalizability and interpretability. Thus it is common to use more parsimonious component covariance matrices exploiting their geometric interpretation. Gaussian clusters are ellipsoidal, with their volume, shape and orientation determined by the covariance matrix. Consider the eigendecomposition of a matrix
formula_15
where formula_16 is the matrix of eigenvectors of
formula_12,
formula_17
is a diagonal matrix whose elements are proportional to
the eigenvalues of formula_12 in descending order,
and formula_18 is the associated constant of proportionality.
Then formula_18 controls the volume of the ellipsoid,
formula_19 its shape, and formula_16 its orientation.
Each of the volume, shape and orientation of the clusters can be
constrained to be equal (E) or allowed to vary (V); the orientation can
also be spherical, with identical eigenvalues (I). This yields 14 possible clustering models, shown in this table:
It can be seen that many of these models are more parsimonious, with far fewer
parameters than the unconstrained model that has 90 parameters when
formula_20 and formula_21.
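The decomposition itself is easy to compute; the following numpy sketch factors an example covariance matrix into the volume, shape and orientation factors, adopting the common convention that det(A) = 1 so that the volume factor equals det(Sigma)^(1/d).

import numpy as np

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])               # an example component covariance matrix

eigvals, D = np.linalg.eigh(Sigma)           # eigenvalues (ascending) and orthonormal eigenvectors
eigvals, D = eigvals[::-1], D[:, ::-1]       # reorder so the eigenvalues are descending
d = len(eigvals)
lam = np.prod(eigvals) ** (1.0 / d)          # volume factor
A = np.diag(eigvals / lam)                   # shape matrix with det(A) = 1
print(np.allclose(Sigma, lam * D @ A @ D.T)) # True: Sigma = lambda * D * A * D^T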
Several of these models correspond to well-known heuristic clustering methods.
For example, k-means clustering is equivalent to estimation of the
EII clustering model using the classification EM algorithm. The Bayesian information criterion (BIC)
can be used to choose the best clustering model as well as the number of clusters. It can also be used as the basis for a method to choose the variables
in the clustering model, eliminating variables that are not useful for clustering.
Different Gaussian model-based clustering methods have been developed with
an eye to handling high-dimensional data. These include the pgmm method, which is based on the mixture of
factor analyzers model, and the HDclassif method, based on the idea of subspace clustering.
The mixture-of-experts framework extends model-based clustering to include covariates.
Example.
We illustrate the method with a dataset consisting of three measurements
(glucose, insulin, sspg) on 145 subjects for the purpose of diagnosing
diabetes and the type of diabetes present.
The subjects were clinically classified into three groups: normal,
chemical diabetes and overt diabetes, but we use this information only
for evaluating clustering methods, not for classifying subjects.
The BIC plot shows the BIC values for each combination of the number of
clusters, formula_5, and the clustering model from the Table.
Each curve corresponds to a different clustering model.
The BIC favors 3 groups, which corresponds to the clinical assessment.
It also favors the unconstrained covariance model, VVV.
This fits the data well, because the normal patients have low values of
both sspg and insulin, while the distributions of the chemical and
overt diabetes groups are elongated, but in different directions.
Thus the volumes, shapes and orientations of the three groups are clearly
different, and so the unconstrained model is appropriate, as selected
by the model-based clustering method.
The classification plot shows the classification of the subjects by model-based
clustering. The classification was quite accurate, with a 12% error rate
as defined by the clinical classification.
Other well-known clustering methods performed worse with higher
error rates, such as single-linkage clustering with 46%,
average link clustering with 30%, complete-linkage clustering
also with 30%, and k-means clustering with 28%.
Outliers in clustering.
An outlier in clustering is a data point that does not belong to any of
the clusters. One way of modeling outliers in model-based clustering is
to include an additional mixture component that is very dispersed, with
for example a uniform distribution. Another approach is to replace the multivariate
normal densities by formula_22-distributions, with the idea that the long tails of the
formula_22-distribution would ensure robustness to outliers.
However, this is not breakdown-robust.
A third approach is the "tclust" or data trimming approach
which excludes observations identified as
outliers when estimating the model parameters.
Non-Gaussian clusters and merging.
Sometimes one or more clusters deviate strongly from the Gaussian assumption.
If a Gaussian mixture is fitted to such data, a strongly non-Gaussian
cluster will often be represented by several mixture components rather than
a single one. In that case, cluster merging can be used to find a better
clustering. A different approach is to use mixtures
of complex component densities to represent non-Gaussian clusters.
Non-continuous data.
Categorical data.
Clustering multivariate categorical data is most often done using the
latent class model. This assumes that the data arise from a finite
mixture model, where within each cluster the variables are independent.
Mixed data.
These arise when variables are of different types, such
as continuous, categorical or ordinal data. A latent class model for
mixed data assumes local independence between the variables. The location model relaxes the local independence
assumption. The clustMD approach assumes that
the observed variables are manifestations of underlying continuous Gaussian
latent variables.
Count data.
The simplest model-based clustering approach for multivariate
count data is based on finite mixtures with locally independent Poisson
distributions, similar to the latent class model.
More realistic approaches allow for dependence and overdispersion in the
counts.
These include methods based on the multivariate Poisson distribution,
the multivariate Poisson-log normal distribution, the integer-valued
autoregressive (INAR) model and the Gaussian Cox model.
Sequence data.
These consist of sequences of categorical values from a finite set of
possibilities, such as life course trajectories.
Model-based clustering approaches include group-based trajectory and
growth mixture models and a distance-based
mixture model.
Rank data.
These arise when individuals rank objects in order of preference. The data
are then ordered lists of objects, arising in voting, education, marketing
and other areas. Model-based clustering methods for rank data include
mixtures of Plackett-Luce models and mixtures of Benter models,
and mixtures of Mallows models.
Network data.
These consist of the presence, absence or strength of connections between
individuals or nodes, and are widespread in the social sciences and biology.
The stochastic blockmodel carries out model-based clustering of the nodes
in a network by assuming that there is a latent clustering and that
connections are formed independently given the clustering. The latent position cluster model
assumes that each node occupies a position in an unobserved latent space,
that these positions arise from a mixture of Gaussian distributions,
and that presence or absence of a connection is associated with distance
in the latent space.
Software.
Much of the model-based clustering software is in the form of a publicly
and freely available R package. Many of these are listed in the
CRAN Task View on Cluster Analysis and Finite Mixture Models.
The most used such package is
mclust,
which is used to cluster continuous data and has been downloaded over
8 million times.
The poLCA package clusters
categorical data using the latent class model.
The clustMD package clusters
mixed data, including continuous, binary, ordinal and nominal variables.
The flexmix package
does model-based clustering for a range of component distributions.
The mixtools package can cluster
different data types. Both flexmix and mixtools
implement model-based clustering with covariates.
History.
Model-based clustering was first invented in 1950 by Paul Lazarsfeld
for clustering multivariate discrete data, in the form of the
latent class model.
In 1959, Lazarsfeld gave a lecture on latent structure analysis
at the University of California-Berkeley, where John H. Wolfe was an M.A. student.
This led Wolfe to think about how to do the same thing for continuous
data, and in 1965 he did so, proposing the Gaussian mixture model for
clustering.
He also produced the first software for estimating it, called NORMIX.
Day (1969), working independently, was the first to publish a journal
article on the approach.
However, Wolfe deserves credit as the inventor of model-based clustering
for continuous data.
Murtagh and Raftery (1984) developed a model-based clustering method
based on the eigenvalue decomposition of the component covariance matrices.
McLachlan and Basford (1988) was the first book on the approach,
advancing methodology and sparking interest.
Banfield and Raftery (1993) coined the term "model-based clustering",
introduced the family of parsimonious models,
described an information criterion for
choosing the number of clusters, proposed the uniform model for outliers,
and introduced the mclust software.
Celeux and Govaert (1995) showed how to perform maximum likelihood estimation
for the models.
Thus, by 1995 the core components of the methodology were in place,
laying the groundwork for extensive development since then.
Further reading.
Free download: https://math.univ-cotedazur.fr/~cbouveyr/MBCbook/
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "y_i = (y_{i,1},\\ldots,y_{i,d})"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "y_i"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "p(y_i) = \\sum_{g=1}^G \\tau_g f_g (y_i \\mid \\theta_g), "
},
{
"math_id": 7,
"text": "f_g"
},
{
"math_id": 8,
"text": "\\theta_g"
},
{
"math_id": 9,
"text": "\\tau_g"
},
{
"math_id": 10,
"text": "\\sum_{g=1}^G \\tau_g = 1"
},
{
"math_id": 11,
"text": "\\mu_g"
},
{
"math_id": 12,
"text": "\\Sigma_g"
},
{
"math_id": 13,
"text": "\\theta_g = (\\mu_g, \\Sigma_g)"
},
{
"math_id": 14,
"text": "g=1,\\ldots,G"
},
{
"math_id": 15,
"text": " \\Sigma_g = \\lambda_g D_g A_g D_g^T , "
},
{
"math_id": 16,
"text": "D_g"
},
{
"math_id": 17,
"text": "A_g = \\mbox{diag} \\{ A_{1,g},\\ldots,A_{d,g} \\}"
},
{
"math_id": 18,
"text": "\\lambda_g"
},
{
"math_id": 19,
"text": "A_g"
},
{
"math_id": 20,
"text": "G=4"
},
{
"math_id": 21,
"text": "d=9"
},
{
"math_id": 22,
"text": "t"
}
]
| https://en.wikipedia.org/wiki?curid=76187094 |
76188460 | Entropy-vorticity wave | Entropy-vorticity waves (or sometimes entropy-vortex waves) refer to small-amplitude waves carried by the gas within which entropy, vorticity, density but not pressure perturbations are propagated. Entropy-vortivity waves are essentially isobaric, incompressible, rotational perturbations along with entropy perturbations. This wave differs from the other well-known small-amplitude wave that is a sound wave, which propagates with respect to the gas within which density, pressure but not entropy perturbations are propagated. The classification of small disturbances into acoustic, entropy and vortex modes were introduced by Leslie S. G. Kovasznay.
Entropy-vorticity waves are ubiquitous in supersonic problems, particularly those involving shock waves. Since these perturbations are carried by the gas, they are convected by the flow downstream of the shock wave, but they cannot propagate in the upstream direction (behind the shock wave), unlike the acoustic wave, which can propagate upstream and can catch up with the shock wave. As such, they are useful in understanding many high-speed flows and are important in many applications such as solid-propellant rockets and detonations.
Mathematical description.
Consider a gas flow with a uniform velocity field formula_0 having a pressure formula_1, density formula_2, entropy formula_3 and sound speed formula_4. Now we add small perturbations to these variables, which are denoted with the symbol formula_5. The perturbed variables, being small quantities, satisfy the linearized form of the Euler equations, which is given by
formula_6
where in the continuity equation we have used the relation formula_7 (since formula_8 and formula_9) and used the entropy equation to simplify it. Taking perturbations of the plane-wave form formula_10, the linearised equations reduce to the algebraic equations
formula_11
The last equation shows that either formula_12, which corresponds to sound waves in which entropy does not change, or formula_13. The latter condition, indicating that the perturbations are carried by the gas, corresponds to the entropy-vortex wave. In this case, we have
formula_14
where formula_15 is the vorticity perturbation. As we can see, the entropy perturbation formula_16 and the vorticity perturbation formula_17 are independent meaning that one can have entropy waves without vorticity waves or vorticity waves with entropy waves or both entropy and vorticity waves.
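A quick numerical check of this solution, using illustrative values in Python/numpy: any perturbation with the frequency equal to the wavevector projected onto the base flow, zero pressure perturbation and a velocity perturbation orthogonal to the wavevector makes all three linearized algebraic equations vanish, while the entropy perturbation remains arbitrary.

import numpy as np

rho, c = 1.2, 340.0                     # illustrative base-state density and sound speed
v = np.array([100.0, 0.0, 0.0])         # uniform base flow
k = np.array([2.0, 1.0, 0.0])           # wavevector

omega = v @ k                           # dispersion relation of the entropy-vortex wave
dp = 0.0                                # isobaric perturbation
dv = np.array([1.0, 1.0, 3.0]) * 1e-3
dv = dv - (dv @ k) / (k @ k) * k        # enforce k . dv = 0 (transverse velocity perturbation)
ds = 0.02                               # entropy perturbation, arbitrary

r1 = (v @ k - omega) * dp + rho * c**2 * (k @ dv)   # pressure equation residual
r2 = (v @ k - omega) * dv + k * dp / rho            # momentum equation residual
r3 = (v @ k - omega) * ds                           # entropy equation residual
print(np.isclose(r1, 0.0), np.allclose(r2, 0.0), np.isclose(r3, 0.0))   # True True True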
In a non-reacting multicomponent gas, we can also have compositional perturbations since in this case formula_18, where formula_19 is the mass fraction of the ith species out of formula_20 chemical species in total. In the entropy-vorticity wave, we then have
formula_21
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf v"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "s"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "\\delta"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\frac{\\partial\\delta p}{\\partial t} + \\mathbf v\\cdot \\nabla \\delta p + \\rho c^2 \\nabla\\cdot \\delta\\mathbf v &= 0,\\\\\n\\frac{\\partial\\delta\\mathbf v}{\\partial t} + (\\mathbf v\\cdot \\nabla)\\delta\\mathbf v + \\frac{1}{\\rho}\\nabla\\delta p &=0,\\\\\n\\frac{\\partial\\delta s}{\\partial t} + \\mathbf v\\cdot \\nabla \\delta s &=0,\n\\end{align}"
},
{
"math_id": 7,
"text": "\\delta\\rho = \\delta p/c^2+(\\partial \\rho/\\partial s)_p \\delta s"
},
{
"math_id": 8,
"text": "\\rho=\\rho(p,s)"
},
{
"math_id": 9,
"text": "c^2=(\\partial p/\\partial \\rho)_s"
},
{
"math_id": 10,
"text": "e^{i\\mathbf k\\cdot \\mathbf r-i\\omega t}"
},
{
"math_id": 11,
"text": "\\begin{align}\n(\\mathbf v\\cdot\\mathbf k-\\omega)\\delta p + \\rho c^2 \\mathbf k\\cdot\\delta\\mathbf v&=0,\\\\\n(\\mathbf v\\cdot\\mathbf k-\\omega)\\delta\\mathbf v +\\mathbf k\\delta p/\\rho &=0,\\\\\n(\\mathbf v\\cdot\\mathbf k-\\omega)\\delta s &=0.\n\\end{align}"
},
{
"math_id": 12,
"text": "\\delta s=0"
},
{
"math_id": 13,
"text": "\\mathbf v\\cdot\\mathbf k-\\omega=0"
},
{
"math_id": 14,
"text": "\\omega = \\mathbf v\\cdot\\mathbf k, \\quad \\delta s\\neq 0, \\quad \\delta p =0, \\quad \\delta \\rho = \\left(\\frac{\\partial \\rho}{\\partial s}\\right)_p \\delta s, \\quad \\mathbf k\\cdot\\delta\\mathbf v=0, \\quad \\delta\\boldsymbol\\omega=i\\mathbf k\\times\\delta \\mathbf v\\neq 0,"
},
{
"math_id": 15,
"text": "\\delta\\boldsymbol\\omega=\\nabla\\times\\delta\\mathbf v"
},
{
"math_id": 16,
"text": "\\delta s"
},
{
"math_id": 17,
"text": "\\delta\\boldsymbol\\omega"
},
{
"math_id": 18,
"text": "\\rho=\\rho(p,s,Y_i)"
},
{
"math_id": 19,
"text": "Y_i"
},
{
"math_id": 20,
"text": "N"
},
{
"math_id": 21,
"text": "\\delta\\rho = \\left(\\frac{\\partial \\rho}{\\partial s}\\right)_{p,Y_i} \\delta s + \\sum_{i=1}^N \\left(\\frac{\\partial \\rho}{\\partial Y_i}\\right)_{s,p,Y_j(j\\neq i)} \\delta Y_i."
}
]
| https://en.wikipedia.org/wiki?curid=76188460 |
761886 | Arthur Amos Noyes | Arthur Amos Noyes (September 13, 1866 – June 3, 1936) was an American chemist, inventor and educator, born in Newburyport, Massachusetts, son of Amos and Anna Page Noyes, née Andrews. He received a PhD in 1890 from Leipzig University under the guidance of Wilhelm Ostwald.
He served as the acting president of MIT between 1907 and 1909 and as professor of chemistry at the California Institute of Technology from 1919 to 1936. "Although [the Noyes] laboratory at MIT was like an institute in its intramural funding (from Carnegie Institute of Washington and Noyes's patent royalties), Noyes recruited many of his disciples as undergraduates and took a deep interest in undergraduate engineering education, both at MIT and later at Caltech." Roscoe Gilkey Dickinson was one of his famous students.
Noyes was a major influence both on the educational philosophy of the core curriculum of Caltech as well as in the negotiations leading to the creation of the National Research Council along with George Ellery Hale and Robert Millikan. He also served on the board of trustees for Science Service, now known as Society for Science & the Public, between 1921 and 1927.
Noyes was an elected member of the American Academy of Arts and Sciences, the United States National Academy of Sciences, and the American Philosophical Society.
Noyes–Whitney equation.
Along with Willis Rodney Whitney, he formulated the Noyes–Whitney equation in 1897, which relates the rate of dissolution of solids to the properties of the solid and the dissolution medium. It is an important equation in pharmaceutical science. The relation is given by:
formula_0
Where:
formula_1 is the rate of dissolution of the solid,
"D" is the diffusion coefficient of the dissolving substance,
"A" is the surface area of the solid,
formula_2 is the saturation solubility (the concentration at the surface of the solid),
"C" is the concentration of the substance in the bulk dissolution medium, and
"L" is the thickness of the diffusion layer.
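A direct Python transcription of the equation, with purely hypothetical input values chosen only to illustrate the units and behaviour (the rate falls to zero as the bulk concentration approaches the saturation solubility):

def dissolution_rate(D, A, Cs, C, L):
    # Noyes-Whitney equation: dW/dt = D * A * (Cs - C) / L
    return D * A * (Cs - C) / L

rate = dissolution_rate(D=5e-10,  # diffusion coefficient, m^2/s (hypothetical)
                        A=1e-4,   # surface area of the solid, m^2
                        Cs=2.0,   # saturation solubility, kg/m^3
                        C=0.5,    # bulk concentration, kg/m^3
                        L=3e-5)   # diffusion layer thickness, m
print(rate)                       # dissolution rate in kg/s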
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{dW}{dt} = \\frac{DA(C_{s}-C)}{L}"
},
{
"math_id": 1,
"text": "\\frac{dW}{dt}"
},
{
"math_id": 2,
"text": "C_{s}"
}
]
| https://en.wikipedia.org/wiki?curid=761886 |
7618877 | Primon gas | Model from mathematical physics
In mathematical physics, the primon gas or Riemann gas discovered by Bernard Julia is a model illustrating correspondences between number theory and methods in quantum field theory, statistical mechanics and dynamical systems such as the Lee-Yang theorem. It is a quantum field theory of a set of non-interacting particles, the primons; it is called a gas or a "free model" because the particles are non-interacting. The idea of the primon gas was independently discovered by Donald Spector. Later works by Ioannis Bakas and Mark Bowick, and Spector explored the connection of such systems to string theory.
The model.
State space.
Consider a Hilbert space H with an orthonormal basis of states formula_0 labelled by the prime numbers "p". Second quantization gives a new Hilbert space K, the bosonic Fock space on H, where states describe collections of primes - which we can call primons if we think of them as analogous to particles in quantum field theory. This Fock space has an orthonormal basis given by finite multisets of primes. In other words, to specify one of these basis elements we can list the number formula_1 of primons for each prime formula_2:
formula_3
where the total formula_4 is finite. Since any positive natural number formula_5 has a unique factorization into primes:
formula_6
we can also denote the basis elements of the Fock space as simply formula_7 where formula_8
In short, the Fock space for primons has an orthonormal basis given by the positive natural numbers, but we think of each such number formula_5 as a collection of primons: its prime factors, counted with multiplicity.
Identifying the Hamiltonian via the Koopman operator.
Given the state formula_9, we may use the Koopman operator formula_10 to lift dynamics from the space of states to the space of observables:
formula_11
where formula_12 is an algorithm for integer factorisation, analogous to the discrete logarithm, and formula_13 is the successor function. Thus, we have:
formula_14
A precise motivation for defining the Koopman operator formula_10 is that it represents a global linearisation of formula_13, which views linear combinations of eigenstates as
integer partitions. In fact, the reader may easily check that the successor function is
not a linear function:
formula_15
Hence, formula_10 is canonical.
Energies.
If we take a simple quantum Hamiltonian "H" to have eigenvalues proportional to log "p", that is,
formula_16
with
formula_17
for some positive constant formula_18, we are naturally led to
formula_19
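As a minimal numerical sketch of this additivity (with the energy scale E set to 1), the following Python snippet factors a number into its primon occupation numbers and checks that the sum of k_p log p over the prime factors equals log n, as stated above. The helper function is written for the example and uses plain trial division:

```python
import math

def prime_factorization(n):
    """Return {prime: exponent} for an integer n >= 2, by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

E = 1.0                                      # overall energy scale
n = 360                                      # the state |360> = |2^3 * 3^2 * 5^1>
occupation = prime_factorization(n)          # {2: 3, 3: 2, 5: 1}
energy = E * sum(k * math.log(p) for p, k in occupation.items())
print(occupation, energy, E * math.log(n))   # the two energies agree
```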
Statistics of the phase-space dimension.
Let's suppose we would like to know the average time, suitably-normalised, that the Riemann gas spends in a particular subspace. How might this frequency be related to the dimension of this subspace?
If we characterize distinct linear subspaces as Erdős-Kac data which have the form of sparse binary vectors, using the Erdős-Kac theorem we may actually demonstrate that this frequency depends upon nothing more than the dimension of the subspace. In fact, if formula_20 counts the number of unique prime divisors of formula_21 then the Erdős-Kac law tells us that for large formula_5:
formula_22
has the standard normal distribution.
What is even more remarkable is that although the Erdős-Kac theorem has the form of a statistical observation, it could not have been discovered using statistical methods. Indeed, for formula_23 the normal order of formula_24 only begins to emerge
for formula_25.
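A rough numerical sketch of this slow convergence: the snippet below sieves the number of distinct prime factors up to one million, standardises it as in the theorem, and reports the sample mean and variance, which at this range are still only loosely close to 0 and 1:

```python
import math

N = 10**6
omega = [0] * (N + 1)                # omega[n] = number of distinct prime factors of n
for p in range(2, N + 1):
    if omega[p] == 0:                # p is prime: no smaller prime has marked it yet
        for m in range(p, N + 1, p):
            omega[m] += 1

# Standardise as in the Erdos-Kac theorem (start at n = 3 so that ln ln n is positive)
z = [(omega[n] - math.log(math.log(n))) / math.sqrt(math.log(math.log(n)))
     for n in range(3, N + 1)]
mean = sum(z) / len(z)
var = sum((x - mean) ** 2 for x in z) / len(z)
print(f"sample mean {mean:.2f}, sample variance {var:.2f}")  # slowly approaching 0 and 1
```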
Statistical mechanics.
The partition function "Z" of the primon gas is given by the Riemann zeta function:
formula_26
with "s" = "E"/"k"B"T" where "k"B is the Boltzmann constant and "T" is the absolute temperature.
The divergence of the zeta function at "s" = 1 corresponds to the divergence of the partition function at a Hagedorn temperature of "T"H = "E"/"k"B.
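A direct numerical check, in Python, that the truncated Boltzmann sum over primon states reproduces the Riemann zeta function; the temperature is chosen here so that s = E/(k_B T) = 2, where ζ(2) = π²/6:

```python
import math

def primon_partition(s, n_max=100_000):
    """Truncated partition sum Z = sum_{n=1}^{n_max} n^(-s), with s = E / (k_B T)."""
    return sum(n ** (-s) for n in range(1, n_max + 1))

s = 2.0                        # corresponds to k_B T = E / 2
print(primon_partition(s))     # ~1.64492...
print(math.pi ** 2 / 6)        # zeta(2) = 1.64493...
```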
Supersymmetric model.
The above second-quantized model takes the particles to be bosons. If the particles are taken to be fermions, then the Pauli exclusion principle prohibits multi-particle states which include squares of primes. By the spin–statistics theorem, field states with an even number of particles are bosons, while those with an odd number of particles are fermions. The fermion operator (−1)F has a very concrete realization in this model as the Möbius function formula_27, in that the Möbius function is positive for bosons, negative for fermions, and zero on exclusion-principle-prohibited states.
More complex models.
The connections between number theory and quantum field theory can be somewhat further extended into connections between topological field theory and K-theory, where, corresponding to the example above, the spectrum of a ring takes the role of the spectrum of energy eigenvalues, the prime ideals take the role of the prime numbers, the group representations take the role of integers, group characters taking the place the Dirichlet characters, and so on. | [
{
"math_id": 0,
"text": "|p\\rangle"
},
{
"math_id": 1,
"text": "k_p = 0 , 1, 2, \\dots"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "|k_2, k_3, k_5, k_7, k_{11}, \\ldots, k_p, \\ldots\\rangle"
},
{
"math_id": 4,
"text": "\\sum_p k_p"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "n = 2^{k_2} \\cdot 3^{k_3} \\cdot 5^{k_5} \\cdot 7^{k_7} \\cdot 11^{k_{11}} \\cdots p^{k_p} \\cdots"
},
{
"math_id": 7,
"text": "|n\\rangle"
},
{
"math_id": 8,
"text": "n = 1,2,3, \\dots. "
},
{
"math_id": 9,
"text": "x_n = n"
},
{
"math_id": 10,
"text": "\\Phi"
},
{
"math_id": 11,
"text": "\\Phi \\circ \\textbf{log} \\circ x_n = \\textbf{log} \\circ F \\circ x_n = \\textbf{log} \\circ x_{n+1} "
},
{
"math_id": 12,
"text": "\\textbf{log}"
},
{
"math_id": 13,
"text": "F"
},
{
"math_id": 14,
"text": "\\textbf{log} \\circ x_n = \\bigoplus_k a_k \\cdot \\ln p_k "
},
{
"math_id": 15,
"text": "\\forall n \\in \\mathbb{N}, F(n) = n+1 \\implies \\forall x,y \\in \\mathbb{N}^*, F(x+y) \\neq F(x)+F(y)"
},
{
"math_id": 16,
"text": "H|p\\rangle = E_p |p\\rangle"
},
{
"math_id": 17,
"text": "E_p=E \\log p "
},
{
"math_id": 18,
"text": "E"
},
{
"math_id": 19,
"text": "E_n = \\sum_p k_p E_p = E \\cdot \\sum_p k_p \\log p = E \\log n"
},
{
"math_id": 20,
"text": "\\omega(n)"
},
{
"math_id": 21,
"text": "n \\in \\mathbb{N}"
},
{
"math_id": 22,
"text": " \\frac{\\omega(n)-\\ln \\ln n}{\\sqrt{\\ln \\ln n}} \\sim \\mathcal{N}(0,1)\n"
},
{
"math_id": 23,
"text": "X \\sim U([1,N])"
},
{
"math_id": 24,
"text": "\\omega(X)"
},
{
"math_id": 25,
"text": "N \\geq 10^{100}"
},
{
"math_id": 26,
"text": "Z(T) := \\sum_{n=1}^\\infty \\exp \\left(\\frac{-E_n}{k_\\text{B} T}\\right) = \\sum_{n=1}^\\infty \\exp \\left(\\frac{-E \\log n}{k_\\text{B} T}\\right) = \\sum_{n=1}^\\infty \\frac{1}{n^s} = \\zeta (s) "
},
{
"math_id": 27,
"text": "\\mu(n)"
}
]
| https://en.wikipedia.org/wiki?curid=7618877 |
76196534 | Final-over-Final Constraint | Proposed Constraint in Theoretical Linguistics
In Linguistics, specifically in Generative Syntax, the Final-over-Final Constraint (FOFC) is a proposed constraint on word-order variation in natural language concerning the hierarchical structure seen in Extended Projections, which asserts that a Head-Final phrase cannot immediately dominate a Head-Initial phrase if they are in the same extended projection. The Final-over-Final constraint has been suggested as a potential Linguistic Universal, following the Chomskyan research program in which the existence of linguistic universals is assumed to arise from an innate biological component of the language faculty that allows humans to learn language. Specifically, it is defined as: Final-over-Final Constraint: If formula_0 and formula_1 are members of the same extended projection, then a Head-Final formula_2 cannot immediately dominate a Head-Initial formula_3, as below:
This effect was first noticed by Anders Holmberg in Finnish, when comparing it with the similarly disharmonic Head-Initial over Head-Final structure.
Accounting for the FOFC with the Linear Correspondence Axiom (LCA).
Biberauer, Holmberg and Roberts (2014) propose an account of the FOFC derived from Kayne's Antisymmetry Theory and the Linear Correspondence Axiom (LCA), in which all maximal projections follow the 'specifier head-complement template' as below, and all variation in word-order arises due to movement.
Biberauer et al. assume that all movement is triggered by the presence of a movement diacritic formula_4 with no semantic content, such that movement to the specifier of a head formula_5 is triggered by the presence of formula_6 on formula_5. Functional heads cannot introduce formula_6, though they may inherit it from the head of their complement. From this, they propose that the following, more formally defined, constraint holds. Final-over-Final Constraint: If a head formula_7 in the extended projection EP of a lexical head L, EP(L), has formula_6 associated with its formula_8-feature, then so does formula_9, where formula_10 is c-selected by formula_7 in EP(L).
Other accounts of the FOFC.
There have been attempts, notably by Carlo Cecchetto and Hedde Zeijlstra, to account for the FOFC asymmetry without making use of the LCA, instead basing their accounts as coming from restrictions in parsing on rightward-dependencies.
Cecchetto proposes that if backward dependencies cannot cross phrase structure boundaries, then the Right-roof constraint (a locality condition on rightward movement) and FOFC are 'two faces of the same coin': both constrain the generation of structures that involve backward localisation, namely a trace in the case of the Right-roof constraint, and the selected head of a selecting head in the case of FOFC. Because backward localisation is costly for the parser and is only possible if it is very local, the FOFC-violating configuration will only be possible if formula_1 is a movement target for formula_11 rather than formula_5.
Zeijlstra's account, meanwhile, derives largely from Abels & Neeleman's account of Greenberg's Universal 20, which observes that head movement within an extended projection cannot be rightward unless the movement is string-vacuous, which not only circumvents the theoretical and empirical challenges to LCA, but also accounts for particles which often form counter-examples to FOFC.
Counterexamples and Challenges to the FOFC.
It seems to be the case that clause-final particles in VO languages form a natural class of counterexamples to the FOFC. Thus, it must be then investigated whether such counterexamples do indeed violate FOFC, and if so, then any account of FOFC must be revised to account for such counterexamples. For example, sentence-final Tense-Aspect-Mood particles appear in many East Asian and Central African languages (Examples from Mumuye; Shimizu 1983: 107 & 112)
Notably, none of these particles exhibit inflectional morphology and as such do not show any φ-agreement, and so a theory of the FOFC should account for the fact that particles that exhibit inflection seem to be pervasively FOFC-compliant, whereas non-inflected particles often are not.
Further reading.
[1] M. Sheehan, T. Biberauer, I. Roberts, and A. Holmberg, "The Final-Over-Final Condition: A Syntactic Universal". The MIT Press, 2017. doi: 10.7551/mitpress/8687.001.0001.
[2] M. Sheehan, ‘Explaining the Final-over-Final Constraint: Formal and Functional Approaches*’, in "Theoretical Approaches to Disharmonic Word Order", T. Biberauer and M. Sheehan, Eds., Oxford University Press, 2013, pp. 407–444. doi: 10.1093/acprof:oso/9780199684359.003.0015. | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\beta "
},
{
"math_id": 2,
"text": "\\beta P "
},
{
"math_id": 3,
"text": "\\alpha P"
},
{
"math_id": 4,
"text": "\\wedge"
},
{
"math_id": 5,
"text": "\\alpha "
},
{
"math_id": 6,
"text": "\\wedge "
},
{
"math_id": 7,
"text": "\\alpha_i"
},
{
"math_id": 8,
"text": "[\\plusmn V] "
},
{
"math_id": 9,
"text": "\\alpha_{i + 1}"
},
{
"math_id": 10,
"text": "\\alpha_{i+1}"
},
{
"math_id": 11,
"text": "\\alpha P "
}
]
| https://en.wikipedia.org/wiki?curid=76196534 |
761974 | Wave power | Transport of energy by wind waves, and the capture of that energy to do useful work
Wave power is the capture of energy of wind waves to do useful work – for example, electricity generation, water desalination, or pumping water. A machine that exploits wave power is a wave energy converter (WEC).
Waves are generated primarily by wind passing over the sea's surface and also by tidal forces, temperature variations, and other factors. As long as the waves propagate slower than the wind speed just above, energy is transferred from the wind to the waves. Air pressure differences between the windward and leeward sides of a wave crest and surface friction from the wind cause shear stress and wave growth.
Wave power as a descriptive term is different from tidal power, which seeks to primarily capture the energy of the current caused by the gravitational pull of the Sun and Moon. However, wave power and tidal power are not fundamentally distinct and have significant cross-over in technology and implementation. Other forces can create currents, including breaking waves, wind, the Coriolis effect, cabbeling, and temperature and salinity differences.
As of 2023, wave power is not widely employed for commercial applications, after a long series of trial projects. Attempts to use this energy began in 1890 or earlier, mainly due to its high power density. Just below the ocean's water surface the wave energy flow, in time-average, is typically five times denser than the wind energy flow 20 m above the sea surface, and 10 to 30 times denser than the solar energy flow.
In 2000 the world's first commercial wave power device, the Islay LIMPET was installed on the coast of Islay in Scotland and connected to the UK national grid. In 2008, the first experimental multi-generator wave farm was opened in Portugal at the Aguçadoura wave park. Both projects have since ended.
Wave energy converters can be classified based on their working principle as either:
<templatestyles src="Template:TOC limit/styles.css" />
History.
The first known patent to extract energy from ocean waves was in 1799, filed in Paris by Pierre-Simon Girard and his son. An early device was constructed around 1910 by Bochaux-Praceique to power his house in Royan, France. It appears that this was the first oscillating water-column type of wave-energy device. From 1855 to 1973 there were 340 patents filed in the UK alone.
Modern pursuit of wave energy was pioneered by Yoshio Masuda's 1940s experiments. He tested various concepts, constructing hundreds of units used to power navigation lights. Among these was the concept of extracting power from the angular motion at the joints of an articulated raft, which Masuda proposed in the 1950s.
The oil crisis in 1973 renewed interest in wave energy. Substantial wave-energy development programmes were launched by governments in several countries, in particular in the UK, Norway and Sweden. Researchers re-examined waves' potential to extract energy, notably Stephen Salter, Johannes Falnes, Kjell Budal, Michael E. McCormick, David Evans, Michael French, Nick Newman, and C. C. Mei.
Salter's 1974 invention became known as Salter's duck or "nodding duck", officially the Edinburgh Duck. In small-scale tests, the Duck's curved cam-like body can stop 90% of wave motion and can convert 90% of that to electricity, giving 81% efficiency. In the 1980s, several other first-generation prototypes were tested, but as oil prices ebbed, wave-energy funding shrank. Climate change later reenergized the field.
The world's first wave energy test facility was established in Orkney, Scotland in 2003 to kick-start the development of a wave and tidal energy industry. The European Marine Energy Centre (EMEC) has supported the deployment of more wave and tidal energy devices than any other single site. Following its establishment, test facilities were also set up in many other countries around the world, providing services and infrastructure for device testing.
The £10 million Saltire prize challenge was to be awarded to the first to be able to generate 100 GWh from wave power over a continuous two-year period by 2017 (about 5.7 MW average). The prize was never awarded. A 2017 study by Strathclyde University and Imperial College focused on the failure to develop "market ready" wave energy devices – despite a UK government investment of over £200 million over 15 years.
Public bodies have continued and in many countries stepped up the research and development funding for wave energy during the 2010s. This includes both EU, US and UK where the annual allocation has typically been in the range 5-50 million USD. Combined with private funding, this has led to a large number of ongoing wave energy projects (see List of wave power projects).
Physical concepts.
Like most fluid motion, the interaction between ocean waves and energy converters is a high-order nonlinear phenomenon. It is described using the incompressible Navier-Stokes equations
formula_0 where formula_1 is the fluid velocity, formula_2 is the pressure, formula_3 the density, formula_4 the viscosity, and formula_5 the net external force on each fluid particle (typically gravity). Under typical conditions, however, the movement of waves is described by Airy wave theory, which posits that the flow is irrotational, that the fluid is ideal (inviscid) with gravity the only external force, and that the waves have small amplitude over a flat, impermeable seabed.
In situations relevant for energy harvesting from ocean waves these assumptions are usually valid.
Airy equations.
The first condition implies that the motion can be described by a velocity potential formula_6: formula_7 which must satisfy the Laplace equation, formula_8 In an ideal flow, the viscosity is negligible and the only external force acting on the fluid is the Earth's gravity formula_9. In those circumstances, the Navier-Stokes equations reduce to formula_10 which integrates (spatially) to the Bernoulli conservation law: formula_11
Linear potential flow theory.
When considering small-amplitude waves and motions, the quadratic term formula_12 can be neglected, giving the linear Bernoulli equation, formula_13 and the remaining Airy assumptions then imply the boundary conditions formula_14 These constraints entirely determine sinusoidal wave solutions of the form formula_15 where formula_16 determines the wavenumber of the solution and formula_17 and formula_18 are determined by the boundary constraints (and formula_16). Specifically, formula_19 The surface elevation formula_20 can then be simply derived as formula_21 a plane wave progressing along the x-axis direction.
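A brief numerical sketch of these relations: given a wavelength and water depth (illustrative values only), the snippet below evaluates the dispersion relation and phase speed, and shows the familiar deep- and shallow-water limits:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def airy_wave(wavelength, depth):
    """Angular frequency and phase speed from the Airy dispersion relation."""
    k = 2 * math.pi / wavelength
    omega = math.sqrt(g * k * math.tanh(k * depth))   # omega^2 = g k tanh(k h)
    return omega, omega / k

# Deep water (depth >> wavelength / 2): phase speed tends to sqrt(g / k)
print(airy_wave(wavelength=100.0, depth=1000.0))      # ~(0.79 rad/s, 12.5 m/s)
# Shallow water (wavelength >> 20 * depth): phase speed tends to sqrt(g * depth)
print(airy_wave(wavelength=1000.0, depth=5.0))        # ~(0.044 rad/s, 7.0 m/s)
print(math.sqrt(g * 5.0))                             # ~7.0 m/s, matching the limit
```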
Consequences.
Oscillatory motion is highest at the surface and diminishes exponentially with depth. However, for standing waves (clapotis) near a reflecting coast, wave energy is also present as pressure oscillations at great depth, producing microseisms. Pressure fluctuations at greater depth are too small to be interesting for wave power conversion.
The behavior of Airy waves offers two interesting regimes: water deeper than half the wavelength, as is common in the sea and ocean, and shallow water, with wavelengths larger than about twenty times the water depth. Deep-water waves are dispersive: waves of long wavelengths propagate faster and tend to outpace those with shorter wavelengths. Deep-water group velocity is half the phase velocity. Shallow-water waves are dispersionless: group velocity is equal to phase velocity, and wavetrains propagate undisturbed.
The following table summarizes the behavior of waves in the various regimes:
Wave power formula.
In deep water where the water depth is larger than half the wavelength, the wave energy flux is
formula_22
with "P" the wave energy flux per unit of wave-crest length, "H""m0" the significant wave height, "T""e" the wave energy period, "ρ" the water density and "g" the acceleration by gravity. The above formula states that wave power is proportional to the wave energy period and to the square of the wave height. When the significant wave height is given in metres, and the wave period in seconds, the result is the wave power in kilowatts (kW) per metre of wavefront length.
For example, consider moderate ocean swells, in deep water, a few km off a coastline, with a wave height of 3 m and a wave energy period of 8 s. Solving for power produces
formula_23
or 36 kilowatts of power potential per meter of wave crest.
In major storms, the largest offshore sea states have significant wave height of about 15 meters and energy period of about 15 seconds. According to the above formula, such waves carry about 1.7 MW of power across each meter of wavefront.
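Both figures can be reproduced with a few lines of Python; using the exact prefactor rather than the rounded 0.5 gives about 35 kW/m for the swell example and about 1.7 MW/m for the storm sea state (water density is taken as 1025 kg/m³ for illustration):

```python
import math

def wave_power_flux(Hm0, Te, rho=1025.0, g=9.81):
    """Deep-water wave energy flux P = rho * g^2 / (64*pi) * Hm0^2 * Te, in W per metre of crest."""
    return rho * g**2 / (64 * math.pi) * Hm0**2 * Te

print(wave_power_flux(3.0, 8.0) / 1e3)    # ~35 kW/m  (moderate swell example above)
print(wave_power_flux(15.0, 15.0) / 1e6)  # ~1.7 MW/m (major storm sea state)
```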
An effective wave power device captures a significant portion of the wave energy flux. As a result, wave heights diminish in the region behind the device.
Energy and energy flux.
In a sea state, the mean energy density per unit area of gravity waves on the water surface is proportional to the wave height squared, according to linear wave theory:
formula_24
where "E" is the mean wave energy density per unit horizontal area (J/m2), the sum of kinetic and potential energy density per unit horizontal area. The potential energy density is equal to the kinetic energy, both contributing half to the wave energy density "E", as can be expected from the equipartition theorem.
The waves propagate on the surface, where crests travel with the phase velocity while the energy is transported horizontally with the group velocity. The mean transport rate of the wave energy through a vertical plane of unit width, parallel to a wave crest, is the energy flux (or wave power, not to be confused with the output produced by a device), and is equal to:
formula_25 with "cg" the group velocity (m/s).
Due to the dispersion relation for waves under gravity, the group velocity depends on the wavelength "λ", or equivalently, on the wave period "T".
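In deep water the group velocity works out to gT/(4π), half the phase velocity, so multiplying the energy density by it recovers the deep-water power formula of the previous section. A short numerical cross-check with illustrative values:

```python
import math

rho, g = 1025.0, 9.81
H, T = 3.0, 8.0                    # wave height (m) and energy period (s)

E = rho * g * H**2 / 16            # mean energy density per unit area, J/m^2
cg = g * T / (4 * math.pi)         # deep-water group velocity, m/s
P = E * cg                         # energy flux per metre of wave crest, W/m

print(E, cg, P / 1e3)                                  # ~5657 J/m^2, ~6.2 m/s, ~35 kW/m
print(rho * g**2 / (64 * math.pi) * H**2 * T / 1e3)    # same ~35 kW/m as the earlier formula
```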
Wave height is determined by wind speed, the length of time the wind has been blowing, fetch (the distance over which the wind excites the waves) and by the bathymetry (which can focus or disperse the energy of the waves). A given wind speed has a matching practical limit over which time or distance do not increase wave size. At this limit the waves are said to be "fully developed". In general, larger waves are more powerful but wave power is also determined by wavelength, water density, water depth and acceleration of gravity.
Wave energy converters.
Wave energy converters (WECs) are generally categorized by the method, by location and by the power take-off system. Locations are shoreline, nearshore and offshore. Types of power take-off include: hydraulic ram, elastomeric hose pump, pump-to-shore, hydroelectric turbine, air turbine, and linear electrical generator.
The four most common approaches are:
Point absorber buoy.
This device floats on the surface, held in place by cables connected to the seabed. The point-absorber has a device width much smaller than the incoming wavelength λ. Energy is absorbed by radiating a wave with destructive interference to the incoming waves. Buoys use the swells' rise and fall to generate electricity directly via linear generators, generators driven by mechanical linear-to-rotary converters, or hydraulic pumps. Energy extracted from waves may affect the shoreline, implying that sites should remain well offshore.
One point absorber design tested at commercial scale by CorPower features a negative spring that improves performance and protects the buoy in very large waves. It also has an internal pneumatic cylinder that keeps the buoy at a fixed distance from the seabed regardless of the state of the tide. Under normal operating conditions, the buoy bobs up and down at double the wave amplitude by adjusting the phase of its movements. It rises with a slight delay from the wave, which allows it to extract more energy. The firm claimed a 300% increase (600 kW) in power generation compared to a buoy without phase adjustments in tests completed in 2024.
Surface attenuator.
These devices use multiple floating segments connected to one another. They are oriented perpendicular to incoming waves. A flexing motion is created by swells, and that motion drives hydraulic pumps to generate electricity. The Pelamis Wave Energy Converter is one of the more well-known attenuator concepts, although this is no longer being developed.
Oscillating wave surge converter.
These devices typically have one end fixed to a structure or the seabed while the other end is free to move. Energy is collected from the relative motion of the body compared to the fixed point. Converters often come in the form of floats, flaps, or membranes. Some designs incorporate parabolic reflectors to focus energy at the point of capture. These systems capture energy from the rise and fall of waves.
Oscillating water column.
Oscillating water column devices can be located onshore or offshore. Swells compress air in an internal chamber, forcing air through a turbine to create electricity. Significant noise is produced as air flows through the turbines, potentially affecting nearby birds and marine organisms. Marine life could possibly become trapped or entangled within the air chamber. It draws energy from the entire water column.
Overtopping device.
Overtopping devices are long structures that use wave velocity to fill a reservoir to a greater water level than the surrounding ocean. The potential energy in the reservoir height is captured with low-head turbines. Devices can be on- or offshore.
Submerged pressure differential.
Submerged pressure differential based converters use flexible (typically reinforced rubber) membranes to extract wave energy. These converters use the difference in pressure at different locations below a wave to produce a pressure difference within a closed power take-off hydraulic system. This pressure difference is usually used to produce flow, which drives a turbine and electrical generator. Submerged pressure differential converters typically use flexible membranes as the working surface between the water and the power take-off. Membranes are pliant and low mass, which can strengthen coupling with the wave's energy. Their pliancy allows large changes in the geometry of the working surface, which can be used to tune the converter for specific wave conditions and to protect it from excessive loads in extreme conditions.
A submerged converter may be positioned either on the seafloor or in midwater. In both cases, the converter is protected from water impact loads which can occur at the free surface. Wave loads also diminish in non-linear proportion to the distance below the free surface. This means that by optimizing depth, protection from extreme loads and access to wave energy can be balanced.
Floating in-air converters.
Floating in-air converters potentially offer increased reliability because the device is located above the water, which also eases inspection and maintenance. Examples of different concepts of floating in-air converters include:
Submerged wave energy converters.
In early 2024, a fully submerged wave energy converter using point absorber-type wave energy technology was approved in Spain. The converter includes a buoy that is moored to the bottom and situated below the surface, out of sight of people and away from storm waves.
Environmental effects.
Common environmental concerns associated with marine energy include:
Potential.
Wave energy's worldwide theoretical potential has been estimated to be greater than 2 TW. Locations with the most potential for wave power include the western seaboard of Europe, the northern coast of the UK, and the Pacific coastlines of North and South America, Southern Africa, Australia, and New Zealand. The north and south temperate zones have the best sites for capturing wave power. The prevailing westerlies in these zones blow strongest in winter.
The National Renewable Energy Laboratory (NREL) estimated the theoretical wave energy potential for various countries. It estimated that the US' potential was equivalent to 1170 TWh per year or almost 1/3 of the country's electricity consumption. The Alaska coastline accounted for ~50% of the total.
The technical and economical potential will be lower than the given values for the theoretical potential.
Challenges.
Environmental impacts must be addressed. Socio-economic challenges include the displacement of commercial and recreational fishermen, and wave farms may present navigation hazards. Supporting infrastructure, such as grid connections, must be provided. Commercial WECs have not always been successful. In 2019, for example, Seabased Industries AB in Sweden was liquidated due to "extensive challenges in recent years, both practical and financial".
Current wave power generation technology is subject to many technical limitations. These limitations stem from the complex and dynamic nature of ocean waves, which require robust and efficient technology to capture the energy. Challenges include designing and building wave energy devices that can withstand the corrosive effects of saltwater, harsh weather conditions, and extreme wave forces. Additionally, optimizing the performance and efficiency of wave energy converters, such as oscillating water column (OWC) devices, point absorbers, and overtopping devices, requires overcoming engineering complexities related to the dynamic and variable nature of waves. Furthermore, developing effective mooring and anchoring systems to keep wave energy devices in place in the harsh ocean environment, and developing reliable and efficient power take-off mechanisms to convert the captured wave energy into electricity, are also technical challenges in wave power generation. Wave energy dissipation by a submerged flexible mound breakwater is greater than that of a rigid submerged structure, because the highly deformed shape of the flexible structure dissipates more of the incoming wave energy.
Wave farms.
A wave farm (wave power farm or wave energy park) is a group of colocated wave energy devices. The devices interact hydrodynamically and electrically, according to the number of machines, spacing and layout, wave climate, coastal and benthic geometry, and control strategies. The design process is a multi-optimization problem seeking high power production, low costs and limited power fluctuations. Nearshore wave farms have substantial impact on beach dynamics. For instance, wave farms significantly reduce erosion which demonstrates that this synergy between coastal protection and energy production enhances the economic viability of wave energy. Additional research finds that wave farms located near lagoons can potentially provide effective coastal protection during maritime spatial planning.
Patents.
A UK-based company has developed a Waveline Magnet that can achieve a levelized cost of electricity of £0.01/kWh with minimal levels of maintenance.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\frac{\\partial\\vec{u}}{\\partial t}+(\\vec{u}\\cdot\\vec{\\nabla})\\vec{u}&=\\nu\\Delta\\vec{u}+\\frac{\\vec{F_\\text{ext}}-\\vec{\\nabla}p}{\\rho} \\\\\n\\vec{\\nabla}\\cdot\\vec{u}&=0\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\vec u(t, x, y, z)"
},
{
"math_id": 2,
"text": "p\n\n"
},
{
"math_id": 3,
"text": "\\rho\n\n"
},
{
"math_id": 4,
"text": "\\nu\n\n"
},
{
"math_id": 5,
"text": "\\vec{F_\\text{ext}}\n\n"
},
{
"math_id": 6,
"text": " \\phi(t,x,y,z)"
},
{
"math_id": 7,
"text": " {\\vec{\\nabla}\\times\\vec{u}=\\vec{0}}\\Leftrightarrow{\\vec{u}=\\vec{\\nabla}\\phi}\\text{,}"
},
{
"math_id": 8,
"text": " \\nabla^2\\phi=0\\text{.}"
},
{
"math_id": 9,
"text": " \\vec{F_\\text{ext}}=(0,0,-\\rho g)"
},
{
"math_id": 10,
"text": "{\\partial\\vec\\nabla\\phi \\over\\partial t}+{1 \\over2}\\vec \\nabla\\bigl(\\vec\\nabla\\phi\\bigr)^2=\n-{1 \\over \\rho}\\cdot\\vec\\nabla p -{1 \\over \\rho}\\vec\\nabla\\bigl(\\rho gz\\bigr),\n\n"
},
{
"math_id": 11,
"text": "{\\partial\\phi \\over\\partial t}+{1 \\over2}\\bigl(\\vec\\nabla\\phi\\bigr)^2\n+{1 \\over \\rho} p + gz=(\\text{const})\\text{.}\n\n"
},
{
"math_id": 12,
"text": "\\left(\\vec{\\nabla}\\phi\\right)^2\n\n"
},
{
"math_id": 13,
"text": "{\\partial\\phi \\over\\partial t}+{1 \\over \\rho} p + gz=(\\text{const})\\text{.}\n\n"
},
{
"math_id": 14,
"text": "\\begin{align}\n&{\\partial^2\\phi \\over\\partial t^2} + g{\\partial\\phi \\over\\partial z}=0\\quad\\quad\\quad(\\text{surface}) \\\\\n&{\\partial\\phi \\over\\partial z}=0\\phantom{{\\partial^2\\phi \\over\\partial t^2}+{}}\\,\\,\\quad\\quad\\quad(\\text{seabed})\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\\phi=A(z)\\sin{\\!(kx-\\omega t)}\\text{,}\n\n"
},
{
"math_id": 16,
"text": "k\n\n"
},
{
"math_id": 17,
"text": "A(z)\n\n"
},
{
"math_id": 18,
"text": "\\omega\n\n"
},
{
"math_id": 19,
"text": "\\begin{align}\n&A(z)={gH \\over 2\\omega}{\\cosh(k(z+h)) \\over \\cosh(kh)} \\\\\n&\\omega^2=gk\\tanh(kh)\\text{.}\n\\end{align}\n"
},
{
"math_id": 20,
"text": "\\eta\n\n"
},
{
"math_id": 21,
"text": "\\eta=-{1 \\over g}{\\partial \\phi \\over \\partial t}={H \\over 2}\\cos(kx-\\omega t)\\text{:}\n"
},
{
"math_id": 22,
"text": "\n P = \\frac{\\rho g^2}{64\\pi} H_{m0}^2 T_e\n \\approx \\left(0.5 \\frac{\\text{kW}}{\\text{m}^3 \\cdot \\text{s}} \\right) H_{m0}^2\\; T_e,\n"
},
{
"math_id": 23,
"text": "\n P \\approx 0.5 \\frac{\\text{kW}}{\\text{m}^3 \\cdot \\text{s}} (3 \\cdot \\text{m})^2 (8 \\cdot \\text{s}) \\approx 36 \\frac{\\text{kW}}{\\text{m}},\n"
},
{
"math_id": 24,
"text": "E=\\frac{1}{16}\\rho g H_{m0}^2,"
},
{
"math_id": 25,
"text": "P = E\\, c_g, "
}
]
| https://en.wikipedia.org/wiki?curid=761974 |
7620308 | Elementary reaction | Chemical reaction with a single step and transition state
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the products(s)
formula_0
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A
formula_1
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s)
formula_2
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B
formula_3
The rate expression for an elementary bimolecular reaction is sometimes referred to as the law of mass action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
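A minimal numerical sketch of the two rate laws, integrating each with simple forward-Euler steps; the rate constants and initial concentrations are arbitrary illustrative values:

```python
def integrate(rate, conc, k, dt=1e-3, steps=5000):
    """Advance the concentrations with forward-Euler steps using the given rate law."""
    c = dict(conc)
    for _ in range(steps):
        d = rate(c, k)                     # instantaneous d[X]/dt for each species
        for species, dc in d.items():
            c[species] += dc * dt
    return c

# Unimolecular  A -> products:        d[A]/dt = -k[A]
uni = integrate(lambda c, k: {"A": -k * c["A"]}, {"A": 1.0}, k=1.0)
# Bimolecular   A + B -> products:    d[A]/dt = d[B]/dt = -k[A][B]
bi = integrate(lambda c, k: {"A": -k * c["A"] * c["B"], "B": -k * c["A"] * c["B"]},
               {"A": 1.0, "B": 0.5}, k=1.0)

print(uni)   # [A] after t = 5 is close to exp(-5) ~ 0.0067
print(bi)    # [A] and [B] fall together, so their difference stays at 0.5
```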
According to collision theory, the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
{
"math_id": 0,
"text": "\\mbox{A} \\rightarrow \\mbox{products.}"
},
{
"math_id": 1,
"text": "\\frac{d[\\mbox{A}]}{dt}=-k[\\mbox{A}]."
},
{
"math_id": 2,
"text": "\\mbox{A + B} \\rightarrow \\mbox{products.}"
},
{
"math_id": 3,
"text": "\\frac{d[\\mbox{A}]}{dt}=\\frac{d[\\mbox{B}]}{dt}=-k[\\mbox{A}][\\mbox{B}]."
}
]
| https://en.wikipedia.org/wiki?curid=7620308 |
762043 | Marginal product | Change in output resulting from employing one more unit of a particular input
In economics and in particular neoclassical economics, the marginal product or marginal physical productivity of an input (factor of production) is the change in output resulting from employing one more unit of a particular input (for instance, the change in output when a firm's labor is increased from five to six units), assuming that the quantities of other inputs are kept constant.
The marginal product of a given input can be expressed
as:
formula_0
where formula_1 is the change in the firm's use of the input (conventionally a one-unit change) and formula_2 is the change in the quantity of output produced (resulting from the change in the input). Note that the quantity formula_3 of the "product" is typically defined ignoring external costs and benefits.
If the output and the input are infinitely divisible, so the marginal "units" are infinitesimal, the marginal product is the mathematical derivative of the production function with respect to that input. Suppose a firm's output "Y" is given by the production function:
formula_4
where "K" and "L" are inputs to production (say, capital and labor, respectively). Then the marginal product of capital ("MPK") and marginal product of labor ("MPL") are given by:
formula_5
formula_6
In the "law" of diminishing marginal returns, the marginal product initially increases when more of an input (say labor) is employed, keeping the other input (say capital) constant. Here, labor is the variable input and capital is the fixed input (in a hypothetical two-inputs model). As more and more of variable input (labor) is employed, marginal product starts to fall. Finally, after a certain point, the marginal product becomes negative, implying that the additional unit of labor has "decreased" the output, rather than increasing it. The reason behind this is the diminishing marginal productivity of labor.
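A small Python sketch of this behaviour for a Cobb–Douglas production function Y = K^0.3 · L^0.7 (an illustrative functional form, not one prescribed above), comparing the one-unit marginal product with the analytic derivative as labor grows while capital is held fixed:

```python
def output(K, L, alpha=0.3):
    """Cobb-Douglas production function Y = K^alpha * L^(1 - alpha)."""
    return K**alpha * L**(1 - alpha)

K = 100.0                                                # capital held fixed
for L in (5, 10, 20, 40, 80):
    mpl_unit = output(K, L + 1) - output(K, L)           # marginal product of one more worker
    mpl_deriv = 0.7 * output(K, L) / L                   # analytic dY/dL for this function
    print(L, round(mpl_unit, 3), round(mpl_deriv, 3))    # both fall as L increases
```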
The marginal product of labor is the slope of the total product curve, which is the production function plotted against labor usage for a fixed level of usage of the capital input.
In the neoclassical theory of competitive markets, the marginal product of labor equals the real wage. In aggregate models of perfect competition, in which a single good is produced and that good is used both in consumption and as a capital good, the marginal product of capital equals its rate of return. As was shown in the Cambridge capital controversy, this proposition about the marginal product of capital cannot generally be sustained in multi-commodity models in which capital and consumption goods are distinguished.
Relationship of marginal product (MPP) with the total product (TPP).
The relationship can be explained in three phases-
(1) Initially, as the quantity of variable input is increased, TPP rises at an increasing rate. In this phase, MPP also rises.
(2) As more and more quantities of the variable inputs are employed, TPP increases at a diminishing rate. In this phase, MPP starts to fall.
(3) When the TPP reaches its maximum, MPP is zero. Beyond this point, TPP starts to fall and MPP becomes negative.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "MP = \\frac{\\Delta Y}{\\Delta X}"
},
{
"math_id": 1,
"text": "\\Delta X"
},
{
"math_id": 2,
"text": "\\Delta Y"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "Y=F(K,L)"
},
{
"math_id": 5,
"text": "MPK=\\frac{\\partial F}{\\partial K}"
},
{
"math_id": 6,
"text": "MPL=\\frac{\\partial F}{\\partial L}"
}
]
| https://en.wikipedia.org/wiki?curid=762043 |
762048 | Diminishing returns | Economic theory
In economics, diminishing returns are the decrease in marginal (incremental) output of a production process as the amount of a single factor of production is incrementally increased, holding all other factors of production equal ("ceteris paribus"). The law of diminishing returns (also known as the law of diminishing marginal productivity) states that in productive processes, increasing a factor of production by one unit, while holding all other production factors constant, will at some point return a lower unit of output per incremental unit of input. The law of diminishing returns does not cause a decrease in overall production capabilities, rather it defines a point on a production curve whereby producing an additional unit of output will result in a loss and is known as negative returns. Under diminishing returns, output remains positive, but productivity and efficiency decrease.
The modern understanding of the law adds the dimension of holding other outputs equal, since a given process is understood to be able to produce co-products. An example would be a factory increasing its saleable product, but also increasing its CO2 production, for the same input increase. The law of diminishing returns is a fundamental principle of both micro and macro economics and it plays a central role in production theory.
The concept of diminishing returns can be explained by considering other theories such as the concept of exponential growth. It is commonly understood that growth will not continue to rise exponentially, rather it is subject to different forms of constraints such as limited availability of resources and capitalisation which can cause economic stagnation. This example of production holds true to this common understanding as production is subject to the four factors of production which are land, labour, capital and enterprise. These factors have the ability to influence economic growth and can eventually limit or inhibit continuous exponential growth. Therefore, as a result of these constraints the production process will eventually reach a point of maximum yield on the production curve and this is where marginal output will stagnate and move towards zero. Innovation in the form of technological advances or managerial progress can minimise or eliminate diminishing returns to restore productivity and efficiency and to generate profit.
This idea can be understood outside of economic theory, for example in population growth. The population of Earth is growing rapidly, but this growth will not continue exponentially forever. Constraints such as limited resources will see population growth stagnate at some point and begin to decline. Similarly, growth will decline towards zero without actually becoming negative, the same idea as the diminishing rate of return that is inevitable in the production process.
History.
The concept of diminishing returns can be traced back to the concerns of early economists such as Johann Heinrich von Thünen, Jacques Turgot, Adam Smith, James Steuart, Thomas Robert Malthus, and David Ricardo. The law of diminishing returns can be traced back to the 18th century, in the work of Jacques Turgot. He argued that "each increase [in an input] would be less and less productive." In 1815, David Ricardo, Thomas Malthus, Edward West, and Robert Torrens applied the concept of diminishing returns to land rent. These works were relevant to the committees of Parliament in England, who were investigating why grain prices were so high, and how to reduce them. The four economists concluded that the prices of the products had risen due to the Napoleonic Wars, which affected international trade and caused farmers to move to lands which were undeveloped and further away. In addition, at the end of the Napoleonic Wars, grain imports were restored which caused a decline in prices because the farmers needed to attract customers and sell their products faster.
Classical economists such as Malthus and Ricardo attributed the successive diminishment of output to the decreasing quality of the inputs whereas Neoclassical economists assume that each "unit" of labor is identical. Diminishing returns are due to the disruption of the entire production process as additional units of labor are added to a fixed amount of capital. The law of diminishing returns remains an important consideration in areas of production such as farming and agriculture.
Proposed on the cusp of the First Industrial Revolution, it was motivated with single outputs in mind. In recent years, economists since the 1970s have sought to redefine the theory to make it more appropriate and relevant in modern economic societies. Specifically, it looks at what assumptions can be made regarding number of inputs, quality, substitution and complementary products, and output co-production, quantity and quality.
The origin of the law of diminishing returns was developed primarily within the agricultural industry. In the early 19th century, David Ricardo as well as other English economists previously mentioned, adopted this law as the result of the lived experience in England after the war. It was developed by observing the relationship between prices of wheat and corn and the quality of the land which yielded the harvests. The observation was that at a certain point, that the quality of the land kept increasing, but so did the cost of produce etc. Therefore, each additional unit of labour on agricultural fields, actually provided a diminishing or marginally decreasing return.
Example.
A common example of diminishing returns is choosing to hire more people on a factory floor to alter current manufacturing and production capabilities. Given that the capital on the floor (e.g. manufacturing machines, pre-existing technology, warehouses) is held constant, increasing from one employee to two employees is, theoretically, going to more than double production possibilities and this is called increasing returns.
If 50 people are employed, at some point, increasing the number of employees by two percent (from 50 to 51 employees) would increase output by two percent and this is called constant returns.
Further along the production curve at, for example 100 employees, floor space is likely getting crowded, there are too many people operating the machines and in the building, and workers are getting in each other's way. Increasing the number of employees by two percent (from 100 to 102 employees) would increase output by less than two percent and this is called "diminishing returns."
After achieving the point of maximum output, employing additional workers, this will give negative returns.
Through each of these examples, the floor space and capital of the factor remained constant, i.e., these inputs were held constant. By only increasing the number of people, eventually the productivity and efficiency of the process moved from increasing returns to diminishing returns.
To understand this concept thoroughly, acknowledge the importance of marginal output or marginal returns. Returns eventually diminish because economists measure productivity with regard to additional units (marginal). Additional inputs significantly impact efficiency or returns more in the initial stages. The point in the process before returns begin to diminish is considered the optimal level. Being able to recognize this point is beneficial, as other variables in the production function can be altered rather than continually increasing labor.
Further, consider the Human Development Index, which would presumably continue to rise so long as GDP per capita (in purchasing power parity terms) was increasing. This is a reasonable assumption because HDI is partly a function of GDP per capita. Even so, GDP per capita will reach a point where it has a diminishing rate of return on HDI. Consider a low-income family: an average increase in income will likely make a large impact on the family's wellbeing, since parents could provide substantially more food and healthcare essentials. That is a significantly increasing rate of return. But if the same increase were given to a wealthy family, the impact on their life would be minor. Therefore, the rate of return provided by that average increase in income is diminishing.
Mathematics.
Define formula_0
Increasing Returns: formula_1
Constant Returns: formula_2
Diminishing Returns: formula_3
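These three cases can be checked mechanically for simple single-input production functions; the functional forms below are illustrative examples only:

```python
import math

def classify(f, I=10.0):
    """Compare 2*f(I) with f(2*I) to classify the returns to the input at level I."""
    doubled_output, output_of_doubled_input = 2 * f(I), f(2 * I)
    if doubled_output < output_of_doubled_input:
        return "increasing returns"
    if doubled_output == output_of_doubled_input:
        return "constant returns"
    return "diminishing returns"

print(classify(lambda I: I ** 2))         # increasing returns
print(classify(lambda I: 3 * I))          # constant returns (equality is exact here)
print(classify(lambda I: math.sqrt(I)))   # diminishing returns
```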
Production function.
There is a widely recognised production function in economics: "Q = f(NR, L, K, t, E)", where "Q" is the quantity of output, "NR" is natural resources, "L" is labour, "K" is capital, "t" is the level of technology, and "E" is entrepreneurship (enterprise).
Link with output elasticity.
Start from the equation for the marginal product: formula_4
To demonstrate diminishing returns, two conditions are satisfied; marginal product is positive, and marginal product is decreasing.
Elasticity, a function of input and output, formula_5, can be taken for small input changes. If the above two conditions are satisfied, then formula_6.
This works intuitively: formula_7 is positive, since both input and output are positive; formula_8 is the marginal product, which is positive by the first condition, so formula_9 holds; and because the marginal product is decreasing, it lies below the average product, so the proportional change in output formula_10 is smaller than the proportional change in input formula_11, giving formula_12
Returns and costs.
There is an inverse relationship between returns of inputs and the cost of production, although other features such as input market conditions can also affect production costs. Suppose that a kilogram of seed costs one dollar, and this price does not change. Assume for simplicity that there are no fixed costs. One kilogram of seeds yields one ton of crop, so the first ton of the crop costs one dollar to produce. That is, for the first ton of output, the marginal cost as well as the average cost of the output is $1 per ton. If there are no other changes, then if the second kilogram of seeds applied to land produces only half the output of the first (showing diminishing returns), the marginal cost would equal $1 per half ton of output, or $2 per ton, and the average cost is $2 per 3/2 tons of output, or $4/3 per ton of output. Similarly, if the third kilogram of seeds yields only a quarter ton, then the marginal cost equals $1 per quarter ton or $4 per ton, and the average cost is $3 per 7/4 tons, or $12/7 per ton of output. Thus, diminishing marginal returns imply increasing marginal costs and increasing average costs.
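The arithmetic of this seed example can be reproduced directly; a short sketch using exact fractions:

```python
from fractions import Fraction

seed_price = 1                                             # dollars per kilogram of seed
yields = [Fraction(1), Fraction(1, 2), Fraction(1, 4)]     # tons added by each successive kg

total_cost, total_output = 0, Fraction(0)
for kg, extra_tons in enumerate(yields, start=1):
    total_cost += seed_price
    total_output += extra_tons
    marginal_cost = Fraction(seed_price) / extra_tons      # dollars per ton of the extra output
    average_cost = Fraction(total_cost) / total_output     # dollars per ton overall
    print(f"kg {kg}: marginal cost {marginal_cost}, average cost {average_cost} ($/ton)")
# kg 1: 1 and 1;  kg 2: 2 and 4/3;  kg 3: 4 and 12/7, matching the figures above
```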
Cost is measured in terms of opportunity cost. In this case the law also applies to societies – the opportunity cost of producing a single unit of a good generally increases as a society attempts to produce more of that good. This explains the bowed-out shape of the production possibilities frontier.
Justification.
"Ceteris paribus".
Part of the reason one input is altered "ceteris paribus" is the idea of the disposability of inputs. Under this assumption, some inputs are essentially above the efficient level, meaning they can be decreased without a perceivable impact on output, after the manner of excessive fertiliser on a field.
If input disposability is assumed, then increasing the principal input while decreasing those excess inputs could result in the same "diminished return" as if the principal input were changed "ceteris paribus". For "hard" inputs such as labour and assets, diminishing returns would still hold true. In the modern accounting era, where inputs can be traced back to movements of financial capital, the same case may instead reflect constant or increasing returns.
It is therefore necessary to be clear about the "fine structure" of the inputs before proceeding. In this respect, "ceteris paribus" is disambiguating.
See also.
<templatestyles src="Div col/styles.css"/>
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "Output = O \\ ,\\ Input = I \\ ,\\ O = f(I)"
},
{
"math_id": 1,
"text": "2\\cdot f(I)<f(2\\cdot I)"
},
{
"math_id": 2,
"text": "2\\cdot f(I)=f(2\\cdot I)"
},
{
"math_id": 3,
"text": "2\\cdot f(I)>f(2\\cdot I)"
},
{
"math_id": 4,
"text": "{\\Delta Out \\over \\Delta In_1}= {{f(In_2, In_1 +\\Delta In_1)-f(In_1,In_2)} \\over \\Delta In_1}"
},
{
"math_id": 5,
"text": "\\epsilon ={In\\over Out}\\cdot{\\delta Out\\over \\delta In}"
},
{
"math_id": 6,
"text": "0<\\epsilon <1"
},
{
"math_id": 7,
"text": "{In\\over Out}"
},
{
"math_id": 8,
"text": "{\\delta Out\\over \\delta In}"
},
{
"math_id": 9,
"text": "0<\\epsilon"
},
{
"math_id": 10,
"text": "{\\delta Out \\over Out}"
},
{
"math_id": 11,
"text": "{\\delta In \\over In}"
},
{
"math_id": 12,
"text": "{\\delta Out \\over Out}/{\\delta In \\over In}={In\\over Out}\\cdot{\\delta Out\\over \\delta In}=\\epsilon < 1\n"
}
]
| https://en.wikipedia.org/wiki?curid=762048 |
7620568 | Computation history | In computer science, a computation history is a sequence of steps taken by an abstract machine in the process of computing its result. Computation histories are frequently used in proofs about the capabilities of certain machines, and particularly about the undecidability of various formal languages.
Formally, a computation history is a (normally finite) sequence of configurations of a formal automaton. Each configuration fully describes the status of the machine at a particular point. To be valid, certain conditions must hold: the first configuration must be a valid initial configuration of the automaton, and each transition between adjacent configurations must be valid according to the automaton's transition rules.
In addition, to be complete, a computation history must be finite and its final configuration must be a valid terminal configuration.
The definitions of "valid initial configuration", "valid transition", and "valid terminal configuration" vary for different kinds of formal machines.
A deterministic automaton has exactly one computation history for a given initial configuration, though the history may be infinite and therefore incomplete.
Finite State Machines.
For a finite state machine formula_0, a configuration is simply
the current state of the machine, together with the remaining input. The first configuration must be the initial state of formula_0 and the complete input. A transition from a configuration formula_1 to
a configuration formula_2 is allowed if formula_3 for
some input symbol formula_4 and if formula_0 has a transition from
formula_5 to formula_6 on input formula_4. The final
configuration must have the empty string formula_7 as its remaining
input; whether formula_0 has accepted or rejected the input depends
on whether the final state is an accepting state.
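As a small illustration, the Python sketch below builds a deterministic finite automaton accepting binary strings with an even number of 1s (an invented example machine) and produces its computation history, i.e. the sequence of (state, remaining input) configurations:

```python
# DFA over {0, 1} that accepts strings containing an even number of 1s.
transitions = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd",   ("odd", "1"): "even"}
start_state, accept_states = "even", {"even"}

def computation_history(word):
    """Return the list of configurations (current state, remaining input)."""
    state, history = start_state, [(start_state, word)]
    for i, symbol in enumerate(word):
        state = transitions[(state, symbol)]
        history.append((state, word[i + 1:]))
    return history

hist = computation_history("1011")
for config in hist:
    print(config)
# The final configuration has empty remaining input; the word is accepted
# exactly when that configuration's state is an accepting state.
print("accepted" if hist[-1][0] in accept_states else "rejected")
```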
Turing Machines.
Computation histories are more commonly used in reference to Turing machines. The configuration of a single-tape Turing machine consists of the contents of the tape, the position of the read/write head on the tape, and the current state of the associated state machine; this is usually written
formula_8
where formula_9 is the current state of the machine, represented in some
way that's distinguishable from the tape language, and where formula_9 is
positioned immediately before the position of the read/write head.
Consider a Turing machine formula_0 on input formula_10. The first
configuration must be formula_11, where formula_12
is the initial state of the Turing machine. The machine's state in the final
configuration must be either formula_13 (the accept state) or formula_14
(the reject state). A configuration formula_15 is a valid successor
to configuration formula_16 if there's a transition from the state in
formula_16 to the state in formula_15 which manipulates the
tape and moves the read/write head in a way that produces the result in
formula_15.
Decidability results.
Computation histories can be used to show that certain problems for
pushdown automata are undecidable. This is because the language of
non-accepting computation histories of a Turing machine formula_0
on input formula_10 is a context-free language recognizable by a
non-deterministic pushdown automaton.
We encode a Turing computation history formula_17 as the
string formula_18, where formula_19
is the encoding of configuration formula_16, as discussed above, and where
every other configuration is written in reverse. Before reading a particular
configuration, the pushdown automaton makes a non-deterministic choice
to either ignore the configuration or read it completely onto the stack.
In addition, the automaton verifies that the first configuration is the correct
initial configuration (if not, it accepts) and that the state of the final
configuration of the history is the accept state (if not, it accepts). Since
a non-deterministic automaton accepts if there's any valid way for it to accept,
the automaton described here will discover if the history is not a valid
accepting history and will accept if so, and reject if not.
This same trick cannot be used to recognize "accepting" computation histories
with an NPDA, since non-determinism could be used to skip past a test that would
otherwise fail. A linear-bounded Turing machine is sufficient to recognize
accepting computation histories.
This result allows us to prove that formula_20, the language
of pushdown automata which accept all input, is undecidable. Suppose
we have a decider for it, formula_21. For any Turing machine
formula_0 and input formula_10, we can form the pushdown automaton
formula_22 which accepts non-accepting computation histories for that
machine. formula_23 will accept if and only if there are no
accepting computation histories for formula_0 on formula_10; this
would allow us to decide formula_24, which we know to be undecidable.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "(S,I)"
},
{
"math_id": 2,
"text": "(T,J)"
},
{
"math_id": 3,
"text": "I=aJ"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "\\epsilon"
},
{
"math_id": 8,
"text": "...0011010101q00110101010..."
},
{
"math_id": 9,
"text": "q"
},
{
"math_id": 10,
"text": "w"
},
{
"math_id": 11,
"text": "q_0 w_0 w_1 ..."
},
{
"math_id": 12,
"text": "q_0"
},
{
"math_id": 13,
"text": "q_a"
},
{
"math_id": 14,
"text": "q_r"
},
{
"math_id": 15,
"text": "c_{i+1}"
},
{
"math_id": 16,
"text": "c_i"
},
{
"math_id": 17,
"text": "c_0,c_1,...,c_n"
},
{
"math_id": 18,
"text": "C_0 \\# C^r_1 \\# C_2 \\# C^r_3 \\# ... \\# C_n"
},
{
"math_id": 19,
"text": "C_i"
},
{
"math_id": 20,
"text": "ALL_{PDA}"
},
{
"math_id": 21,
"text": "D"
},
{
"math_id": 22,
"text": "P"
},
{
"math_id": 23,
"text": "D(P)"
},
{
"math_id": 24,
"text": "A_{TM}"
}
]
| https://en.wikipedia.org/wiki?curid=7620568 |
762203 | Order embedding | In order theory, a branch of mathematics, an order embedding is a special kind of monotone function, which provides a way to include one partially ordered set into another. Like Galois connections, order embeddings constitute a notion which is strictly weaker than the concept of an order isomorphism. Both of these weakenings may be understood in terms of category theory.
Formal definition.
Formally, given two partially ordered sets (posets) formula_0 and formula_1, a function formula_2 is an "order embedding" if formula_3 is both order-preserving and order-reflecting, i.e. for all formula_4 and formula_5 in formula_6, one has
formula_7
Such a function is necessarily injective, since formula_8 implies formula_9 and formula_10. If an order embedding between two posets formula_6 and formula_11 exists, one says that formula_6 can be embedded into formula_11.
Properties.
An order isomorphism can be characterized as a surjective order embedding. As a consequence, any order embedding "f" restricts to an isomorphism between its domain "S" and its image "f"("S"), which justifies the term "embedding". On the other hand, it might well be that two (necessarily infinite) posets are mutually order-embeddable into each other without being order-isomorphic.
An example is provided by the open interval formula_12 of real numbers and the corresponding closed interval formula_13. The function formula_15 maps the former to the subset formula_16 of the latter and the latter to the subset formula_17 of the former, see picture. Ordering both sets in the natural way, formula_3 is both order-preserving and order-reflecting (because it is an affine function). Yet, no isomorphism between the two posets can exist, since e.g. formula_13 has a least element while formula_12 does not.
For a similar example using arctan to order-embed the real numbers into an interval, and the identity map for the reverse direction, see e.g. Just and Weese (1996).
A retract is a pair formula_18 of order-preserving maps whose composition formula_19 is the identity. In this case, formula_3 is called a coretraction, and must be an order embedding. However, not every order embedding is a coretraction. As a trivial example, the unique order embedding formula_20 from the empty poset to a nonempty poset has no retract, because there is no order-preserving map formula_21. More illustratively, consider the set formula_6 of divisors of 6, partially ordered by "x" divides "y", see picture. Consider the embedded sub-poset formula_22. A retract of the embedding formula_14 would need to send formula_23 to somewhere in formula_22 above both formula_24 and formula_25, but there is no such place.
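For finite posets the defining condition can be checked exhaustively. The Python sketch below (with helper names chosen here for illustration) tests both the order-preserving and the order-reflecting half of the definition, using the divisibility order from the example above.

```python
from itertools import product

def divides(x, y):
    """Partial order on positive integers: x <= y iff x divides y."""
    return y % x == 0

def is_order_embedding(f, le_S, le_T, S):
    """Check that x <= y in S if and only if f(x) <= f(y) in T, for all x, y in S."""
    preserving = all(le_T(f(x), f(y)) for x, y in product(S, S) if le_S(x, y))
    reflecting = all(le_S(x, y) for x, y in product(S, S) if le_T(f(x), f(y)))
    return preserving and reflecting

# The inclusion of {1, 2, 3} into the divisors of 6 is an order embedding.
print(is_order_embedding(lambda x: x, divides, divides, [1, 2, 3]))              # True

# x -> x + 1 into the integers with the usual order preserves divisibility but does
# not reflect it (f(2)=3 <= f(3)=4, yet 2 does not divide 3), so it is not an embedding.
print(is_order_embedding(lambda x: x + 1, divides, lambda a, b: a <= b, [1, 2, 3]))  # False
```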
Additional perspectives.
Posets can straightforwardly be viewed from many perspectives, and order embeddings are basic enough that they tend to be visible from everywhere. For example:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(S, \\leq)"
},
{
"math_id": 1,
"text": "(T, \\preceq)"
},
{
"math_id": 2,
"text": "f: S \\to T"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "x\\leq y \\text{ if and only if } f(x)\\preceq f(y)."
},
{
"math_id": 8,
"text": "f(x) = f(y)"
},
{
"math_id": 9,
"text": "x \\leq y"
},
{
"math_id": 10,
"text": "y \\leq x"
},
{
"math_id": 11,
"text": "T"
},
{
"math_id": 12,
"text": "(0,1)"
},
{
"math_id": 13,
"text": "[0,1]"
},
{
"math_id": 14,
"text": "id: \\{ 1,2,3 \\} \\to S"
},
{
"math_id": 15,
"text": "f(x) = (94x+3) / 100"
},
{
"math_id": 16,
"text": "(0.03,0.97)"
},
{
"math_id": 17,
"text": "[0.03,0.97]"
},
{
"math_id": 18,
"text": "(f,g)"
},
{
"math_id": 19,
"text": "g \\circ f"
},
{
"math_id": 20,
"text": "f: \\emptyset \\to \\{1\\}"
},
{
"math_id": 21,
"text": "g: \\{1\\} \\to \\emptyset"
},
{
"math_id": 22,
"text": "\\{ 1,2,3 \\}"
},
{
"math_id": 23,
"text": "6"
},
{
"math_id": 24,
"text": "2"
},
{
"math_id": 25,
"text": "3"
}
]
| https://en.wikipedia.org/wiki?curid=762203 |
7622892 | Point source | Single, negligibly-sized object from which light, sound, energy, etc. emanates
A point source is a single identifiable "localised" source of something. A point source has negligible extent, distinguishing it from other source geometries. Sources are called point sources because in mathematical modeling, these sources can usually be approximated as a mathematical point to simplify analysis.
The actual source need not be physically small, if its size is negligible relative to other length scales in the problem. For example, in astronomy, stars are routinely treated as point sources, even though they are in actuality much larger than the Earth.
In three dimensions, the density of something leaving a point source decreases in proportion to the inverse square of the distance from the source, if the distribution is isotropic, and there is no absorption or other loss.
Mathematics.
In mathematics, a point source is a singularity from which flux or flow is emanating. Although singularities such as this do not exist in the observable universe, mathematical point sources are often used as approximations to reality in physics and other fields.
Visible electromagnetic radiation (light).
Generally, a source of light can be considered a point source if the resolution of the imaging instrument is too low to resolve the source's apparent size. Two types of light source are distinguished: point sources and extended sources.
Mathematically an object may be considered a point source if its angular size, formula_0, is much smaller than the resolving power of the telescope:
formula_1, where formula_2 is the wavelength of light and formula_3 is the telescope diameter.
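As a rough numerical illustration of this criterion, the following Python sketch compares the angular size of a Sun-like star at 10 parsecs with the diffraction limit of a 2.4 m telescope at 550 nm; the input values are representative and are not tied to any particular instrument.

```python
def angular_size(diameter_m, distance_m):
    """Small-angle approximation of the angular size in radians."""
    return diameter_m / distance_m

def diffraction_limit(wavelength_m, aperture_m):
    """Approximate resolving power lambda / D in radians."""
    return wavelength_m / aperture_m

star_diameter = 1.4e9        # roughly a solar diameter, in metres (illustrative)
distance = 10 * 3.086e16     # 10 parsecs in metres
theta = angular_size(star_diameter, distance)
limit = diffraction_limit(550e-9, 2.4)

print(f"theta = {theta:.2e} rad, lambda/D = {limit:.2e} rad, ratio = {theta / limit:.3f}")
# the ratio is about 0.02, so the star is far below the resolution limit
# and can be treated as a point source
```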
Examples:
Other electromagnetic radiation.
Radio wave sources which are smaller than one radio wavelength are also generally treated as point sources. Radio emissions generated by a fixed electrical circuit are usually polarized, producing anisotropic radiation. If the propagating medium is lossless, however, the radiant power in the radio waves at a given distance will still vary as the inverse square of the distance, provided the angle relative to the source polarization remains constant.
Gamma ray and X-ray sources may be treated as a point source if sufficiently small. Radiological contamination and nuclear sources are often point sources. This has significance in health physics and radiation protection.
Examples:
Sound.
Sound is an oscillating pressure wave. As the pressure oscillates up and down, an audio point source acts in turn as a fluid point source and then a fluid point sink. (Such an object does not exist physically, but is often a good simplified model for calculations.)
Examples:
A coaxial loudspeaker is designed to work as a point source to allow a wider field for listening.
Ionizing radiation.
Point sources are used as a means of calibrating ionizing radiation instruments. They are usually a sealed capsule and are most commonly used for gamma, x-ray and beta measuring instruments.
Heat.
In vacuum, heat escapes as radiation isotropically. If the source remains stationary in a compressible fluid such as air, flow patterns can form around the source due to convection, leading to an anisotropic pattern of heat loss. The most common form of anisotropy is the formation of a thermal plume above the heat source.
Examples:
Fluid.
Fluid point sources are commonly used in fluid dynamics and aerodynamics. A point source of fluid is the inverse of a fluid point sink (a point where fluid is removed). Whereas fluid sinks exhibit complex rapidly changing behaviour such as is seen in vortices (for example water running into a plug-hole or tornadoes generated at points where air is rising), fluid sources generally produce simple flow patterns, with stationary isotropic point sources generating an expanding sphere of new fluid. If the fluid is moving (such as wind in air or currents in water) a plume is generated from the point source.
Examples:
Pollution.
Sources of various types of pollution are often considered as point sources in large-scale studies of pollution.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta"
},
{
"math_id": 1,
"text": "\\theta << \\lambda / D"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "D"
}
]
| https://en.wikipedia.org/wiki?curid=7622892 |
7622915 | Parry–Daniels map | In mathematics, the Parry–Daniels map is a function studied in the context of dynamical systems. Typical questions concern the existence of an invariant or ergodic measure for the map.
It is named after the English mathematician Bill Parry and the British statistician Henry Daniels, who independently studied the map in papers published in 1962.
Definition.
Given an integer "n" ≥ 1, let Σ denote the "n"-dimensional simplex in R"n"+1 given by
formula_0
Let "π" be a permutation such that
formula_1
Then the Parry–Daniels map
formula_2
is defined by
formula_3
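A direct transcription of this definition is shown below (function names chosen here for illustration); sorting the coordinates plays the role of the permutation π, ties being resolved arbitrarily, and the image again sums to 1, so it lies in the simplex.

```python
def parry_daniels(x):
    """Apply the Parry-Daniels map to a point x of the n-dimensional simplex."""
    y = sorted(x)                    # x_{pi(0)} <= x_{pi(1)} <= ... <= x_{pi(n)}
    largest = y[-1]                  # positive, since the coordinates sum to 1
    image = [y[0] / largest]
    image += [(y[i] - y[i - 1]) / largest for i in range(1, len(y))]
    return image

point = [0.2, 0.5, 0.3]
image = parry_daniels(point)
print(image, sum(image))             # [0.4, 0.2, 0.4] 1.0 -- the image is again in the simplex
```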
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma := \\{ x = (x_0, x_1, \\dots, x_n) \\in \\mathbb{R}^{n + 1} | 0 \\leq x_i \\leq 1 \\mbox{ for each } i \\mbox{ and } x_0 + x_1 + \\dots + x_n = 1 \\}."
},
{
"math_id": 1,
"text": "x_{\\pi(0)} \\leq x_{\\pi (1)} \\leq \\dots \\leq x_{\\pi (n)}."
},
{
"math_id": 2,
"text": "T_{\\pi} : \\Sigma \\to \\Sigma"
},
{
"math_id": 3,
"text": "T_\\pi (x_0, x_1, \\dots, x_n) := \\left( \\frac{x_{\\pi (0)}}{x_{\\pi (n)}} , \\frac{x_{\\pi (1)} - x_{\\pi (0)}}{x_{\\pi (n)}}, \\dots, \\frac{x_{\\pi (n)} - x_{\\pi (n - 1)}}{x_{\\pi (n)}} \\right)."
}
]
| https://en.wikipedia.org/wiki?curid=7622915 |
7623862 | Rubber elasticity | Property of crosslinked rubber
Rubber elasticity refers to a property of crosslinked rubber, namely that it can be stretched up to a factor of 10 from its original length, and returns very nearly to its original length upon release. This can be repeated many times with no apparent degradation to the rubber.
Rubber is a member of a larger class of materials called elastomers. Elastomers have played a key role in the development of new technologies in the 20th century, and made a substantial contribution to the global economy.
Rubber elasticity is produced by many complex molecular processes, and its complicated explanation requires a knowledge base consisting of advanced mathematics, chemistry, statistical physics, and the concept of entropy. Entropy may be thought of as a measure of the thermal energy that is stored in a molecule.
Common rubbers, such as polybutadiene and polyisoprene (also called natural rubber), are produced by a process called polymerization. The process starts off with very long molecules (polymers) that are built up sequentially by adding short molecular backbone units through chemical reactions. A rubber polymer follows a random, zigzag path in three dimensions, intermingling with many other rubber molecules. An elastomer is created by the addition of a small amount of a cross linking molecule such as sulfur.
When heated, the crosslinking molecule causes a reaction that chemically joins (bonds) two of the rubber molecules together at some point (a crosslink). Because each rubber polymer is very long, each one participates in many crosslinks with many other rubber molecules forming a continuous network.
History.
Following its introduction to Europe from America in the late 15th century, natural rubber (polyisoprene) was regarded mostly as a curiosity. Its most useful application was its ability to erase pencil marks on paper by rubbing, hence its name. One of its most peculiar properties is a slight (but detectable) increase in temperature that occurs when a sample of rubber is stretched. If it is allowed to quickly retract, an equal amount of cooling is observed. This phenomenon caught the attention of the English physicist John Gough. In 1805 he published some qualitative observations on this characteristic as well as how the required stretching force increased with temperature.<br>
By the mid-nineteenth century, the theory of thermodynamics was being developed and within this framework, the English mathematician and physicist Lord Kelvin showed that the change in mechanical energy required to stretch a rubber sample should be proportional to the increase in temperature. This would later be associated with a change in entropy. The connection to thermodynamics was firmly established in 1859 when the English physicist James Joule published the first careful measurements of the temperature increase that occurred as a rubber sample was stretched. This work confirmed the theoretical predictions of Lord Kelvin.<br>
In 1838 the American inventor Charles Goodyear found that natural rubber's elastic properties could be immensely improved by adding a small amount of sulfur to produce chemical cross-links between adjacent polyisoprene molecules.
Before it is cross-linked, the liquid natural rubber consists of very long polymer molecules, containing thousands of isoprene backbone units, connected head-to-tail (commonly referred to as chains). Every chain follows a random, three-dimensional path through the polymer liquid and is in contact with thousands of other nearby chains. When heated to about 150 °C, reactive cross-linker molecules, such as sulfur or dicumyl peroxide, can decompose and the subsequent chemical reactions produce a chemical bond between adjacent chains. A crosslink can be visualized as the letter 'X' but with some of its arms pointing out of the plane. The result is a three dimensional molecular network.
All of the polyisoprene molecules are connected together at multiple points by these chemical bonds (network nodes) resulting in a single giant molecule and all information about the original long polymers is lost. A rubber band is a single molecule, as is a latex glove. The sections of polyisoprene between two adjacent cross-links are called network chains and can contain up to several hundred isoprene units. In natural rubber, each cross-link produces a network node with four chains emanating from it. It is the network that gives rise to these elastic properties.
Because of the enormous economic and technological importance of rubber, predicting how a molecular network responds to mechanical strains has been of enduring interest to scientists and engineers. To understand the elastic properties of rubber, theoretically, it is necessary to know both the physical mechanisms that occur at the molecular level and how the random-walk nature of the polymer chain defines the network. The physical mechanisms that occur within short sections of the polymer chains produce the elastic forces and the network morphology determines how these forces combine to produce the macroscopic stress that is observed when a rubber sample is deformed (e.g. subjected to tensile strain).
Molecular-level models.
There are actually several physical mechanisms that produce the elastic forces within the network chains as a rubber sample is stretched. Two of these arise from entropy changes and one is associated with the distortion of the molecular bond angles along the chain backbone. These three mechanisms are immediately apparent when a moderately thick rubber sample is stretched manually.
Initially, the rubber feels quite stiff (i.e. the force must be increased at a high rate with respect to the strain). At intermediate strains, the required increase in force is much lower to cause the same amount of stretch. Finally, as the sample approaches the breaking point, its stiffness increases markedly. What the observer is noticing are the changes in the modulus of elasticity that are due to the different molecular mechanisms. These regions can be seen in Fig. 1, a typical stress vs. strain measurement for natural rubber. The three mechanisms (labelled Ia, Ib, and II) predominantly correspond to the regions shown on the plot.
The concept of entropy comes to us from the area of mathematical physics called statistical mechanics which is concerned with the study of large thermal systems, e.g. rubber networks at room temperature. Although the detailed behavior of the constituent chains are random and far too complex to study individually, we can obtain very useful information about their "average" behavior from a statistical mechanics analysis of a large sample. There are no other examples of how entropy changes can produce a force in our everyday experience. One may regard the entropic forces in polymer chains as arising from the thermal collisions that their constituent atoms experience with the surrounding material. It is this constant jostling that produces a resisting (elastic) force in the chains as they are forced to become straight.
While stretching a rubber sample is the most common example of elasticity, it also occurs when rubber is compressed. Compression may be thought of as a two dimensional expansion as when a balloon is inflated. The molecular mechanisms that produce the elastic force are the same for all types of strain.
When these elastic force models are combined with the complex morphology of the network, it is not possible to obtain simple analytic formulae to predict the macroscopic stress. It is only via numerical simulations on computers that it is possible to capture the complex interaction between the molecular forces and the network morphology to predict the stress and ultimate failure of a rubber sample as it is strained.
The Molecular Kink Paradigm for rubber elasticity.
The Molecular Kink Paradigm proceeds from the intuitive notion that molecular chains that make up a natural rubber (polyisoprene) network are constrained by surrounding chains to remain within a "tube." Elastic forces produced in a chain, as a result of some applied strain, are propagated along the chain contour within this tube. Fig. 2 shows a representation of a four-carbon isoprene backbone unit with an extra carbon atom at each end to indicate its connections to adjacent units on a chain. It has three single C-C bonds and one double bond. It is principally by rotating about the C-C single bonds that a polyisoprene chain randomly explores its possible conformations.
Sections of chain containing between two and three isoprene units have sufficient flexibility that they may be considered statistically de-correlated from one another. That is, there is no directional correlation along the chain for distances greater than this distance, referred to as a Kuhn length. These non-straight regions evoke the concept of "kinks" and are in fact a manifestation of the random-walk nature of the chain.
Since a kink is composed of several isoprene units, each having three carbon-carbon single bonds, there are many possible conformations available to a kink, each with a distinct energy and end-to-end distance. Over time scales of seconds to minutes, only these relatively short sections of the chain (i.e. kinks) have sufficient volume to move freely amongst their possible rotational conformations. The thermal interactions tend to keep the kinks in a state of constant flux, as they make transitions between all of their possible rotational conformations. Because the kinks are in thermal equilibrium, the probability that a kink resides in any rotational conformation is given by a Boltzmann distribution and we may associate an entropy with its end-to-end distance. The probability distribution for the end-to-end distance of a Kuhn length is approximately Gaussian and is determined by the Boltzmann probability factors for each state (rotational conformation). As a rubber network is stretched, some kinks are forced into a restricted number of more extended conformations having a greater end-to-end distance and it is the resulting decrease in entropy that produces an elastic force along the chain.
There are three distinct molecular mechanisms that produce these forces, two of which arise from changes in entropy that is referred to as the low chain extension regime, Ia and the moderate chain extension regime, Ib. The third mechanism occurs at high chain extension, as it is extended beyond its initial equilibrium contour length by the distortion of the chemical bonds along its backbone. In this case, the restoring force is spring-like and is referred to as regime II. The three force mechanisms are found to roughly correspond to the three regions observed in tensile stress vs. strain experiments, shown in Fig. 1.
The initial morphology of the network, immediately after chemical cross-linking, is governed by two random processes: (1) The probability for a cross-link to occur at any isoprene unit and, (2) the random walk nature of the chain conformation. The end-to-end distance probability distribution for a fixed chain length (i.e. fixed number of isoprene units) is described by a random walk. It is the joint probability distribution of the network chain lengths and the end-to-end distances between their cross-link nodes that characterizes the network morphology. Because both the molecular physics mechanisms that produce the elastic forces and the complex morphology of the network must be treated simultaneously, simple analytic elasticity models are not possible; an explicit 3-dimensional numerical model is required to simulate the effects of strain on a representative volume element of a network.
Low chain extension regime, Ia.
The Molecular Kink Paradigm envisions a representative network chain as a series of vectors that follow the chain contour within its tube. Each vector represents the equilibrium end-to-end distance of a kink. The actual 3-dimensional path of the chain is not pertinent, since all elastic forces are assumed to operate along the chain contour. In addition to the chain's contour length, the only other important parameter is its tortuosity, the ratio of its contour length to its end-to-end distance. As the chain is extended, in response to an applied strain, the induced elastic force is assumed to propagate uniformly along its contour. Consider a network chain whose end points (network nodes) are more or less aligned with the tensile strain axis. As the initial strain is applied to the rubber sample, the network nodes at the ends of the chain begin to move apart and all of the kink vectors along the contour are stretched simultaneously. Physically, the applied strain forces the kinks to stretch beyond their thermal equilibrium end-to-end distances, causing a decrease in their entropy. The increase in free energy associated with this change in entropy, gives rise to a (linear) elastic force that opposes the strain. The force constant for the low strain regime can be estimated by sampling molecular dynamics (MD) trajectories of a kink (i.e. short chains) composed of 2–3 isoprene units, at relevant temperatures (e.g. 300K). By taking many samples of the coordinates over the course of the simulations, the probability distributions of end-to-end distance for a kink can be obtained. Since these distributions (which turn out to be approximately Gaussian) are directly related to the number of states, they may be associated with the entropy of the kink at any end-to-end distance. By numerically differentiating the probability distribution, the change in entropy, and hence free energy, with respect to the kink end-to-end distance can be found. The force model for this regime is found to be linear and proportional to the temperature divided by the chain tortuosity.
Moderate chain extension regime, Ib.
At some point in the low extension regime (i.e. as all of the kinks along the chain are being extended simultaneously) it becomes energetically more favorable to have one kink transition to an extended conformation in order to stretch the chain further. The applied strain can force a single isoprene unit within a kink into an extended conformation, slightly increasing the end-to-end distance of the chain, and the energy required to do this is less than that needed to continue extending all of the kinks simultaneously. Numerous experiments strongly suggest that stretching a rubber network is accompanied by a decrease in entropy. As shown in Fig. 2, an isoprene unit has three single C-C bonds and there are two or three preferred rotational angles (orientations) about these bonds that have energy minima. Of the 18 allowed rotational conformations, only 6 have extended end-to-end distances and forcing the isoprene units in a chain to reside in some subset of the extended states must reduce the number of rotational conformations available for thermal motion. It is this reduction in the number of available states that causes the entropy to decrease. As the chain continues to straighten, all of the isoprene units in the chain are eventually forced into extended conformations and the chain is considered to be "taut." A force constant for chain extension can be estimated from the resulting change in free energy associated with this entropy change. As with regime Ia, the force model for this regime is linear and proportional to the temperature divided by the chain tortuosity.
High chain extension regime, II.
When all of the isoprene units in a network chain have been forced to reside in just a few extended rotational conformations, the chain becomes taut. It may be regarded as sensibly straight, except for the zigzag path that the C-C bonds make along the chain contour. However, further extension is still possible by bond distortions (e.g. bond angle increases), bond stretches, and dihedral angle rotations. These forces are spring-like and are not associated with entropy changes. A taut chain can be extended by only about 40%. At this point the force along the chain is sufficient to mechanically rupture the C-C covalent bond. This tensile force limit has been calculated via quantum chemistry simulations and it is approximately 7 nN, about a factor of a thousand greater than the entropic chain forces at low strain. The angles between adjacent backbone C-C bonds in an isoprene unit vary between about 115–120 degrees and the forces associated with maintaining these angles are quite large, so within each unit, the chain backbone always follows a zigzag path, even at bond rupture. This mechanism accounts for the steep upturn in the elastic stress, observed at high strains (Fig. 1).
Network morphology.
Although the network is completely described by only two parameters (the number of network nodes per unit volume and the statistical de-correlation length of the polymer, the Kuhn length), the way in which the chains are connected is actually quite complicated. There is a wide variation in the lengths of the chains and most of them are not connected to the nearest neighbor network node. Both the chain length and its end-to-end distance are described by probability distributions. The term "morphology" refers to this complexity. If the cross-linking agent is thoroughly mixed, there is an equal probability for any isoprene unit to become a network node. For dicumyl peroxide, the cross linking efficiency in natural rubber is unity, but this is not the case for sulfur. The initial morphology of the network is dictated by two random processes: the probability for a cross-link to occur at any isoprene unit and the Markov random walk nature of a chain conformation. The probability distribution function for how far one end of a chain end can ‘wander’ from the other is generated by a Markov sequence. This conditional probability density function relates the chain length formula_0 in units of the Kuhn length formula_1 to the end-to-end distance formula_2:
The probability that any isoprene unit becomes part of a cross-link node is proportional to the ratio of the concentrations of the cross-linker molecules (e.g., dicumyl-peroxide) to the isoprene units: formula_3 The factor of two comes about because two isoprene units (one from each chain) participate in the cross-link. The probability for finding a chain containing formula_4 isoprene units is given by:
where formula_5.
The equation can be understood as simply the probability that an isoprene unit is NOT a cross-link (1−"px") in "N"−1 successive units along a chain. Since "P"("N") decreases with "N", shorter chains are more probable than longer ones. Note that the number of statistically independent backbone segments is not the same as the number of isoprene units. For natural rubber networks, the Kuhn length contains about 2.2 isoprene units, so formula_6. The product of equations (1) and (3) (the joint probability distribution) relates the network chain length (formula_4) and end-to-end distance (formula_2) between its terminating cross-link nodes:
The complex morphology of a natural rubber network can be seen in Fig. 3, which shows the probability density vs. end-to-end distance (in units of mean node spacing) for an "average" chain. For the common experimental cross-link density of 4x1019 cm−3, an average chain contains about 116 isoprene units (52 Kuhn lengths) and has a contour length of about 50 nm. Fig. 3 shows that a significant fraction of chains span several node spacings, i.e., the chain ends overlap other network chains. Natural rubber, cross-linked with dicumyl peroxide, has tetra-functional cross-links (i.e. each cross-link node has 4 network chains emanating from it). Depending on their initial tortuosity and the orientation of their endpoints with respect to the strain axis, each chain associated with an active cross-link node can have a different elastic force constant as it resists the applied strain. To preserve force equilibrium (zero net force) on each cross-link node, a node may be forced to move in tandem with the chain having the highest force constant for chain extension. It is this complex node motion, arising from the random nature of the network morphology, that makes the study of the mechanical properties of rubber networks so difficult. As the network is strained, paths composed of these more extended chains emerge that span the entire sample, and it is these paths that carry most of the stress at high strains.
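The chain-length statistics described above can be explored numerically. The Python sketch below samples chain lengths from the geometric form implied in the text (each successive isoprene unit independently fails to be a cross-link with probability 1 − px); the cross-link probability used is an illustrative stand-in chosen so that the mean chain length is about 116 isoprene units, as quoted above, and is not taken from the article's figures.

```python
import random

def sample_chain_length(p_x, rng=random):
    """Sample N, the number of isoprene units between cross-links,
    from P(N) = p_x * (1 - p_x)**(N - 1) (a geometric distribution)."""
    n = 1
    while rng.random() > p_x:        # this unit is not a cross-link with probability 1 - p_x
        n += 1
    return n

random.seed(0)
p_x = 1.0 / 116                      # illustrative value giving a mean chain of ~116 units
lengths = [sample_chain_length(p_x) for _ in range(100000)]
print("mean chain length ~", sum(lengths) / len(lengths))   # close to 1 / p_x, i.e. ~116
```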
Numerical network simulation model.
To calculate the elastic response of a rubber sample, the three chain force models (regimes Ia, Ib and II) and the network morphology must be combined in a micro-mechanical network model. Using the joint probability distribution in equation (4) and the force extension models, it is possible to devise numerical algorithms to both construct a faithful representative volume element of a network and to simulate the resulting mechanical stress as it is subjected to strain. An iterative relaxation algorithm is used to maintain approximate force equilibrium at each network node as strain is imposed. When the force constant obtained for kinks having 2 or 3 isoprene units (approximately one Kuhn length) is used in numerical simulations, the predicted stress is found to be consistent with experiments. The results of such a calculation are shown in Fig. 1 (dashed red line) for sulfur cross-linked natural rubber and compared with experimental data (solid blue line). These simulations also predict a steep upturn in the stress as network chains become taut and, ultimately, material failure due to bond rupture. In the case of sulfur cross-linked natural rubber, the S-S bonds in the cross-link are much weaker than the C-C bonds on the chain backbone and are the network failure points. The plateau in the simulated stress, starting at a strain of about 7, is the limiting value for the network. Stresses greater than about 7 MPa cannot be supported and the network fails. Near this stress limit, the simulations predict that less than 10% of the chains are taut, i.e. in the high chain extension regime and less than 0.1% of the chains have ruptured. While the very low rupture fraction may seem surprising, it is not inconsistent with the common experience of stretching a rubber band until it breaks. The elastic response of the rubber after breaking is not noticeably different from the original.
Experiments.
Variation of tensile stress with temperature.
For molecular systems in thermal equilibrium, the addition of energy (e.g. by mechanical work) can cause a change in entropy. This is known from the theories of thermodynamics and statistical mechanics. Specifically, both theories assert that the change in energy must be proportional to the entropy change times the absolute temperature. This rule is only valid so long as the energy is restricted to thermal states of molecules. If a rubber sample is stretched far enough, energy may reside in nonthermal states such as the distortion of chemical bonds and the rule does not apply. At low to moderate strains, theory predicts that the required stretching force is due to a change in entropy in the network chains.
It is therefore expected that the force necessary to stretch a sample to some value of strain should be proportional to the temperature of the sample. Measurements showing how the tensile stress in a stretched rubber sample varies with temperature are shown in Fig. 4. In these experiments, the strain of a stretched rubber sample was held fixed as the temperature was varied between 10 and 70 degrees Celsius. For each value of fixed strain, it is seen that the tensile stress varied linearly (to within experimental error). These experiments provide the most compelling evidence that entropy changes are the fundamental mechanism for rubber elasticity.
The positive linear behavior of the stress with temperature sometimes leads to the mistaken notion that rubber has a negative coefficient of thermal expansion (i.e. the length of a sample shrinks when heated). Experiments have shown conclusively that, like almost all other materials, the coefficient of thermal expansion of natural rubber is positive.
Snap-back velocity.
When stretching a piece of rubber (e.g. a rubber band) it will deform lengthwise in a uniform manner. When one end of the sample is released, it snaps back to its original length too quickly for the naked eye to resolve the process. An intuitive expectation is that it returns to its original length in the same manner as when it was stretched (i.e. uniformly). Experimental observations by Mrowca et al. suggest that this expectation is inaccurate. To capture the extremely fast retraction dynamics, they utilized an experimental method devised by Exner and Stefan in 1874. Their method consisted of a rapidly rotating glass cylinder which, after being coated with lamp black, was placed next to the stretched rubber sample. Styli, attached to the mid-point and free end of the rubber sample, were held in contact with the glass cylinder. Then, as the free end of the rubber snapped back, the styli traced out helical paths in the lamp black coating of the rotating cylinder. By adjusting the rotation speed of the cylinder, they could record the position of the styli in less than one complete rotation. The trajectories were transferred to a graph by rolling the cylinder on a piece of damp blotter paper. The mark left by a stylus appeared as a white line (no lamp black) on the paper.
Their data, plotted as the graph in Fig. 5, shows the position of end and midpoint styli as the sample rapidly retracts to its original length. The sample was initially stretched 9.5 in (~24 cm) beyond its unstrained length and then released. The styli returned to their original positions (i.e. a displacement of 0 in) in a little over 6 ms. The linear behavior of the displacement vs. time indicates that, after a brief acceleration, both the end and the midpoint of the sample snapped back at a constant velocity of about 50 m/s or 112 mph. However, the midpoint stylus did not start to move until about 3 ms after the end was released. Evidently, the retraction process travels as a wave, starting at the free end.
At high extensions some of the energy stored in the stretched network chain is due to a change in its entropy, but most of the energy is stored in bond distortions (regime II, above) which do not involve an entropy change. If one assumes that all of the stored energy is converted to kinetic energy, the retraction velocity may be calculated directly from the familiar conservation equation "E" = <templatestyles src="Fraction/styles.css" />1⁄2 "mv"2. Numerical simulations, based on the molecular kink paradigm, predict velocities consistent with this experiment.
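The estimate "E" = ½"mv"2 can be turned into numbers directly. In the Python sketch below the stored elastic energy per unit mass is a stand-in value chosen only to reproduce the observed order of magnitude; it is not taken from the measurements described above.

```python
import math

def snapback_velocity(energy_per_kg):
    """Retraction speed if all stored elastic energy per unit mass becomes kinetic energy."""
    return math.sqrt(2.0 * energy_per_kg)

# Stand-in value: ~1.3 kJ of stored elastic energy per kilogram of stretched rubber.
print(f"{snapback_velocity(1.3e3):.0f} m/s")   # ~51 m/s, the same order as the measured ~50 m/s
```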
Historical approaches to elasticity theory.
Eugene Guth and Hubert M. James proposed the entropic origins of rubber elasticity in 1941.
Thermodynamics.
Temperature affects the elasticity of elastomers in an unusual way. When the elastomer is assumed to be in a stretched state, heating causes them to contract. Vice versa, cooling can cause expansion.
This can be observed with an ordinary rubber band. Stretching a rubber band will cause it to release heat, while releasing it after it has been stretched will lead it to absorb heat, causing its surroundings to become cooler. This phenomenon can be explained with the Gibbs free energy. Rearranging Δ"G"=Δ"H"−"T"Δ"S", where "G" is the free energy, "H" is the enthalpy, and "S" is the entropy, we obtain "T" Δ"S" = Δ"H" − Δ"G". Since stretching is nonspontaneous, as it requires external work, "T"Δ"S" must be negative. Since "T" is always positive (it can never reach absolute zero), the Δ"S" must be negative, implying that the rubber in its natural state is more entangled (with more microstates) than when it is under tension. Thus, when the tension is removed, the reaction is spontaneous, leading Δ"G" to be negative. Consequently, the cooling effect must result in a positive ΔH, so Δ"S" will be positive there.
The result is that an elastomer behaves somewhat like an ideal monatomic gas, inasmuch as (to good approximation) elastic polymers do "not" store any potential energy in stretched chemical bonds or elastic work done in stretching molecules, when work is done upon them. Instead, all work done on the rubber is "released" (not stored) and appears immediately in the polymer as thermal energy. In the same way, all work that the elastic does on the surroundings results in the disappearance of thermal energy in order to do the work (the elastic band grows cooler, like an expanding gas). This last phenomenon is the critical clue that the ability of an elastomer to do work depends (as with an ideal gas) only on entropy-change considerations, and not on any stored (i.e. potential) energy within the polymer bonds. Instead, the energy to do work comes entirely from thermal energy, and (as in the case of an expanding ideal gas) only the positive entropy change of the polymer allows its internal thermal energy to be converted efficiently into work.
Polymer chain theories.
Invoking the theory of rubber elasticity, a polymer chain in a cross-linked network may be seen as an entropic spring. When the chain is stretched, the entropy is reduced by a large margin because there are fewer conformations available. As such there is a restoring force which causes the polymer chain to return to its equilibrium or unstretched state, such as a high entropy random coil configuration, once the external force is removed. This is the reason why rubber bands return to their original state. Two common models for rubber elasticity are the freely-jointed chain model and the worm-like chain model.
Freely-jointed chain model.
The freely jointed chain, also called an ideal chain, follows the random walk model. Microscopically, the 3D random walk of a polymer chain assumes the overall end-to-end distance is expressed in terms of the x, y and z directions:
formula_7
In the model, formula_8 is the length of a rigid segment, formula_9 is the number of segments of length formula_8, formula_10 is the distance between the fixed and free ends, and formula_11 is the "contour length" or formula_12. Above the glass transition temperature, the polymer chain oscillates and formula_2 changes over time. The probability distribution of the chain is the product of the probability distributions of the individual components, given by the following Gaussian distribution:
formula_13
Therefore, the ensemble average end-to-end distance is simply the standard integral of the probability distribution over all space. Note that the movement could be backwards or forwards, so the net average formula_14 will be zero. However, the root mean square can be a useful measure of the distance.
formula_15
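The root-mean-square result can be verified with a short Monte Carlo simulation. The following Python sketch generates freely jointed chains with randomly oriented segments and compares the simulated RMS end-to-end distance with √N·b; the segment count and segment length are arbitrary illustrative choices.

```python
import math
import random

def random_chain_end(N, b, rng):
    """End-to-end vector of a 3D freely jointed chain of N segments of length b."""
    x = y = z = 0.0
    for _ in range(N):
        # draw a uniformly random direction on the unit sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        cos_theta = rng.uniform(-1.0, 1.0)
        sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
        x += b * sin_theta * math.cos(phi)
        y += b * sin_theta * math.sin(phi)
        z += b * cos_theta
    return x, y, z

rng = random.Random(1)
N, b, samples = 50, 1.0, 2000
r2 = [sum(c * c for c in random_chain_end(N, b, rng)) for _ in range(samples)]
rms = math.sqrt(sum(r2) / samples)
print(rms, math.sqrt(N) * b)   # the two values agree to within a few per cent
```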
The Flory theory of rubber elasticity suggests that rubber elasticity has primarily entropic origins. By using the following basic equations for the Helmholtz free energy and its discussion of entropy, the force generated from the deformation of a rubber chain from its original unstretched conformation can be derived. Here formula_16 is the number of conformations of the polymer chain. Since the deformation does not involve enthalpy change, the change in free energy can simply be calculated as the change in entropy, formula_17. Note that the force equation resembles the behavior of a spring and follows Hooke's law: formula_18, where "F" is the force, "k" is the spring constant and "x" is the distance. Usually, the neo-Hookean model can be used on cross-linked polymers to predict their stress-strain relations:
formula_19
formula_20
formula_21
formula_22
Note that the elastic coefficient formula_23 is temperature dependent. If rubber temperature increases, the elastic coefficient increases as well. This is the reason why rubber under constant stress shrinks when its temperature increases.
We can further expand the Flory theory into a macroscopic view, where bulk rubber material is discussed. Assume the original dimensions of the rubber material are formula_24, formula_25 and formula_26; a deformed shape can then be expressed by applying an individual extension ratio formula_27 to each length (formula_28, formula_29, formula_30). So microscopically, the deformed polymer chain can also be expressed with the extension ratios: formula_31, formula_32, formula_33. The free energy change due to deformation can then be expressed as follows:
formula_34
Assuming that the rubber is cross-linked and isotropic, the random walk model gives that formula_35, formula_36 and formula_37 are each distributed according to a normal distribution. Therefore, the three components are statistically equivalent, and each contributes one third of the overall mean-square end-to-end distance of the chain: formula_38. Plugging this into the change-of-free-energy equation above, it is easy to obtain:
formula_39
The free energy change per volume is just:
formula_40
where formula_41 is the number of strands in the network, the subscript "def" denotes "deformation", formula_42 is the number density of polymer chains per unit volume, and formula_43 is the ratio of the mean-square end-to-end distance of the chain to the value predicted by random-walk statistics. If we assume incompressibility, the product of the extension ratios is 1, implying no change in volume: formula_44.
Case study: Uniaxial deformation:
In a uniaxially deformed rubber, because formula_44, it is assumed that formula_45. So the previous free energy per volume equation becomes:
formula_46
The engineering stress (by definition) is the first derivative of the energy with respect to the extension ratio, which is equivalent to the concept of strain:
formula_47
and the Young's modulus formula_48 is defined as the derivative of the stress with respect to strain, which measures the stiffness of the rubber in laboratory experiments.
formula_49 where formula_50, formula_51 is the mass density of the chains, and formula_52 is the number-average molecular weight of a network strand between crosslinks. This type of analysis links the thermodynamic theory of rubber elasticity to experimentally measurable parameters. In addition, it gives insight into the cross-linking condition of the material.
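The uniaxial expressions above are straightforward to evaluate. In the following Python sketch the strand density, temperature and β are illustrative inputs (β is set to 1, and the strand density is simply taken equal to the cross-link density quoted earlier, converted to m⁻³), so the numbers indicate only the order of magnitude.

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K

def engineering_stress(lam, v_s, T, beta=1.0):
    """Neo-Hookean engineering stress  k_B T v_s beta (lambda - 1/lambda^2), in Pa."""
    return k_B * T * v_s * beta * (lam - 1.0 / lam**2)

def small_strain_modulus(v_s, T, beta=1.0):
    """Young's modulus  E = 3 k_B T v_s beta  evaluated at lambda = 1, in Pa."""
    return 3.0 * k_B * T * v_s * beta

v_s = 4.0e25    # strands per m^3 (illustrative; 4e19 cm^-3 converted to m^-3)
T = 300.0       # temperature, K
print(f"E ~ {small_strain_modulus(v_s, T) / 1e6:.2f} MPa")
for lam in (1.5, 2.0, 4.0):
    print(lam, f"{engineering_stress(lam, v_s, T) / 1e6:.2f} MPa")
```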
Worm-like chain model.
The worm-like chain model (WLC) takes the energy required to bend a molecule into account. The variables are the same except that formula_53, the persistence length, replaces formula_8. Then, the force follows this equation:
formula_54
Therefore, when there is no distance between the chain ends ("r"=0), no force is required, and to fully extend the polymer chain (formula_55) an infinite force is required, which is intuitive. Graphically, the force begins at the origin and initially increases linearly with formula_2. The force then plateaus but eventually increases again and approaches infinity as the end-to-end distance formula_2 approaches the contour length formula_56.
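The interpolation formula above can be tabulated directly. In the Python sketch below the persistence length and contour length are hypothetical values chosen only to display the qualitative behaviour: zero force at r = 0 and a divergence as r approaches the contour length.

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K

def wlc_force(r, L_c, L_p, T=300.0):
    """Worm-like-chain interpolation force, in newtons."""
    x = r / L_c
    return (k_B * T / L_p) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)

L_p = 0.5e-9    # hypothetical persistence length, 0.5 nm
L_c = 50e-9     # hypothetical contour length, 50 nm
for frac in (0.0, 0.25, 0.5, 0.75, 0.9, 0.99):
    print(f"r/Lc = {frac:4.2f}   F = {wlc_force(frac * L_c, L_c, L_p):.2e} N")
# the force is zero at r = 0 and grows without bound as r -> L_c
```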
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "p_x = 2 \\frac \\text{[cross-link]} \\text{[isoprene]}"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "N\\geq 1"
},
{
"math_id": 6,
"text": "N \\sim 2.2 n"
},
{
"math_id": 7,
"text": "\\vec{R} = R_x\\hat{x} + R_y\\hat{y} + R_z\\hat{z}"
},
{
"math_id": 8,
"text": "b "
},
{
"math_id": 9,
"text": "N "
},
{
"math_id": 10,
"text": "R "
},
{
"math_id": 11,
"text": "L_\\text{c} "
},
{
"math_id": 12,
"text": "Nb"
},
{
"math_id": 13,
"text": "P(\\vec{R}) = P(R_x) P(R_y) P(R_z) = \\left( \\frac{2 n b^2 \\pi}{3}\\right)^{-{3}/{2}} \\exp \\left( \\frac{-3R^2}{2Nb^2} \\right)"
},
{
"math_id": 14,
"text": "\\langle R\\rangle"
},
{
"math_id": 15,
"text": "\\begin{align}\n \\langle R\\rangle &= 0 \\\\\n \\langle R^2\\rangle &= \\int_0^\\infty R^24\\pi R^2 P(\\vec{R})dR = Nb^2 \\\\\n \\langle R^2\\rangle^\\frac{1}{2} &= \\sqrt{N} b\n\\end{align}"
},
{
"math_id": 16,
"text": "\\Omega"
},
{
"math_id": 17,
"text": "-T\\Delta S"
},
{
"math_id": 18,
"text": "F = kx"
},
{
"math_id": 19,
"text": "\\Omega = C \\exp \\left ( \\frac{-3\\vec{R}^2}{2Nb^2} \\right )\n"
},
{
"math_id": 20,
"text": "S = k_\\text{B} \\ln \\Omega \\, \\approx \\frac{-3k_\\text{B} \\vec{R}^2}{2Nb^2} "
},
{
"math_id": 21,
"text": "\\Delta F(\\vec{R}) \\approx -T\\Delta S_d(\\vec{R}^2) = C+\\frac{3 k_\\text{B} T}{N b^2} \\vec{R}^2"
},
{
"math_id": 22,
"text": "f =\\frac{dF(\\vec{R})}{d\\vec{R}} = \\frac{d}{d\\vec{R}}\\left(\\frac{3k_\\text{B}T\\vec{R}^2}{2Nb^2}\\right) = \\frac{3k_\\text{B}T}{Nb^2} \\vec{R}"
},
{
"math_id": 23,
"text": "3 k_\\text{B} T/N b"
},
{
"math_id": 24,
"text": "L_x"
},
{
"math_id": 25,
"text": "L_y"
},
{
"math_id": 26,
"text": "L_z"
},
{
"math_id": 27,
"text": "\\lambda_i"
},
{
"math_id": 28,
"text": "\\lambda_x L_x"
},
{
"math_id": 29,
"text": "\\lambda_y L_y"
},
{
"math_id": 30,
"text": "\\lambda_z L_z"
},
{
"math_id": 31,
"text": "\\lambda_x R_x"
},
{
"math_id": 32,
"text": "\\lambda_y R_y"
},
{
"math_id": 33,
"text": "\\lambda_z R_z"
},
{
"math_id": 34,
"text": "\\begin{align}\n \\Delta F_\\text{def} (\\vec{R}) &= - \\frac{3k_\\text{B}T\\vec{R}^2}{2Nb^2} = -\\frac{3k_\\text{B}T\\left(\\left(R_x^2-R_{x0}^2\\right)+\\left(R_y^2-R_{y0}^2\\right)+\\left(R_z^2-R_{z0}^2\\right)\\right)}\n{2Nb^2}\\\\\n &=-\\frac{3k_\\text{B} T\\left(\\left(\\lambda_x^2-1\\right) R_{x0}^2 + \\left(\\lambda_y^2-1\\right) R_{y0}^2 + \\left(\\lambda_z^2-1\\right) R_{z0}^2\\right)}{2Nb^2}\n\\end{align}"
},
{
"math_id": 35,
"text": "R_x"
},
{
"math_id": 36,
"text": "R_y"
},
{
"math_id": 37,
"text": "R_z"
},
{
"math_id": 38,
"text": "\\langle R_{x0}^2 \\rangle=\\langle R_{y0}^2 \\rangle = \\langle R_{z0}^2 \\rangle = \\langle R^2 \\rangle/3"
},
{
"math_id": 39,
"text": "\\begin{align}\n\\Delta F_\\text{def} (\\vec{R})&=-\\frac{k_\\text{B} T n_s \\langle R^2 \\rangle \\left(\\lambda_x^2 + \\lambda_y^2 + \\lambda_z^2 - 3\\right)}\n{2Nb^2}\\\\\n &=-\\frac{k_\\text{B}T n_s \\langle R^2 \\rangle \\left(\\lambda_x^2+\\lambda_y^2+\\lambda_z^2-3\\right)}{2R_0^2}\n\\end{align}"
},
{
"math_id": 40,
"text": "\\Delta f_\\text{def} = \\frac{\\Delta F_\\text{def}(\\vec{R})}{V} = -\\frac{k_\\text{B}T v_s \\beta \\left(\\lambda_x^2 + \\lambda_y^2 + \\lambda_z^2 - 3\\right)}{2}"
},
{
"math_id": 41,
"text": "n_s"
},
{
"math_id": 42,
"text": "v_s = n_s / V"
},
{
"math_id": 43,
"text": "\\beta = \\langle R^2\\rangle / R_0^2"
},
{
"math_id": 44,
"text": "\\lambda_x \\lambda_y \\lambda_z = 1"
},
{
"math_id": 45,
"text": "\\lambda_x = \\lambda_y = \\lambda_z^{-1/2}"
},
{
"math_id": 46,
"text": "\\Delta f_\\text{def} = \\frac{\\Delta F_\\text{def}(\\vec{R})}{V} = -\\frac{k_\\text{B}T v_s \\beta \\left(\\lambda_x^2 + \\lambda_y^2 + \\lambda_z^2 - 3\\right)}{2}\n= \\frac{k_\\text{B}T v_s\\beta}{2} \\left(\\lambda_z^2 + \\frac{2}{\\lambda_z}-3\\right)"
},
{
"math_id": 47,
"text": "\\sigma_\\text{eng}=\\frac{d(\\Delta f_\\text{def})}{\\lambda_z} = k_\\text{B}T v_s \\beta\\left(\\lambda_z-\\frac{1}{\\lambda_z^2}\\right)"
},
{
"math_id": 48,
"text": "E"
},
{
"math_id": 49,
"text": "E=\\frac{d(\\sigma_\\text{eng})}{d\\lambda_z}=k_\\text{B}T v_s \\beta \\left.\\left(1+ \\frac{2}{\\lambda_z^3}\\right)\\right|_{\\lambda_z=1}\n= 3 k_\\text{B}T v_s\\beta = \\frac{3\\rho \\beta RT}{M_s}"
},
{
"math_id": 50,
"text": "v_s = \\rho N_a / M_s"
},
{
"math_id": 51,
"text": "\\rho"
},
{
"math_id": 52,
"text": "M_s"
},
{
"math_id": 53,
"text": "L_\\text{p}"
},
{
"math_id": 54,
"text": "F \\approx \\frac{k_\\text{B} T}{L_\\text{p}} \\left ( \\frac{1}{4 \\left( 1- \\frac{r}{L_{\\rm c}} \\right )^2} - \\frac{1}{4} + \\frac{r}{L_\\text{c}} \\right ) "
},
{
"math_id": 55,
"text": " r = L_\\text{c} "
},
{
"math_id": 56,
"text": "L_\\text{c}"
}
]
| https://en.wikipedia.org/wiki?curid=7623862 |
7624108 | Standard linear solid model | The standard linear solid (SLS), also known as the Zener model after Clarence Zener, is a method of modeling the behavior of a viscoelastic material using a linear combination of springs and dashpots to represent elastic and viscous components, respectively. Often, the simpler Maxwell model and the Kelvin–Voigt model are used. These models often prove insufficient, however; the Maxwell model does not describe creep or recovery, and the Kelvin–Voigt model does not describe stress relaxation. SLS is the simplest model that predicts both phenomena.
Definition.
Materials undergoing strain are often modeled with mechanical components, such as springs (restorative force component) and dashpots (damping component).
Connecting a spring and damper in series yields a model of a Maxwell material while connecting a spring and damper in parallel yields a model of a Kelvin–Voigt material. In contrast to the Maxwell and Kelvin–Voigt models, the SLS is slightly more complex, involving elements both in series and in parallel. Springs, which represent the elastic component of a viscoelastic material, obey Hooke's law:
formula_0
where σ is the applied stress, E is the Young's modulus of the material, and ε is the strain. The spring represents the elastic component of the model's response.
Dashpots represent the viscous component of a viscoelastic material. In these elements, the applied stress varies with the time rate of change of the strain:
formula_1
where η is viscosity of the dashpot component.
Solving the model.
In order to model this system, the following physical relations must be realized:
For parallel components: formula_2, and formula_3.
For series components: formula_4, and formula_5.
Maxwell representation.
This model consists of two systems in parallel. The first, referred to as the Maxwell arm, contains a spring (formula_6) and dashpot (viscosity formula_7) in series. The other system contains only a spring (formula_8).
These relationships help relate the various stresses and strains in the overall system and the Maxwell arm:
formula_9
formula_10
formula_11
formula_12
where the subscripts formula_13, formula_14, formula_15 and formula_16 refer to Maxwell, dashpot, spring one and spring two, respectively.
Using these relationships, their time derivatives, and the above stress-strain relationships for the spring and dashpot elements, the system can be modeled as follows:
formula_17
The equation can also be expressed as:
formula_18
or, in dot notation:
formula_19
The relaxation time, formula_20, is different for each material and is equal to
formula_21
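The relaxation behaviour predicted by this equation can be illustrated numerically. The following Python sketch integrates the Maxwell-representation equation for a strain step that is then held constant, using a simple explicit Euler scheme; the material parameters are arbitrary illustrative values.

```python
def sls_relaxation(E1, E2, eta, eps0, t_end, dt=1e-4):
    """Stress response of the standard linear solid (Maxwell representation)
    to a step strain eps0 held constant, integrated with explicit Euler."""
    sigma = (E1 + E2) * eps0            # instantaneous elastic response to the step
    t, out = 0.0, [(0.0, sigma)]
    while t < t_end:
        # sigma + (eta/E2) dsigma/dt = E1*eps0   (the strain rate is zero after the step)
        dsigma = (E2 / eta) * (E1 * eps0 - sigma)
        sigma += dsigma * dt
        t += dt
        out.append((t, sigma))
    return out

E1, E2, eta, eps0 = 1.0e6, 2.0e6, 1.0e5, 0.01   # illustrative values (Pa, Pa, Pa*s, -)
tau = eta / E2                                   # relaxation time, 0.05 s here
for t, s in sls_relaxation(E1, E2, eta, eps0, 5 * tau)[::1000]:
    print(f"t = {t:6.3f} s   sigma = {s:9.1f} Pa")
# the stress decays from (E1+E2)*eps0 = 30 kPa toward E1*eps0 = 10 kPa with time constant tau
```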
Kelvin-Voigt representation.
This model consists of two systems in series. The first, referred to as the Kelvin arm, contains a spring (formula_6) and dashpot (viscosity formula_7) in parallel. The other system contains only a spring (formula_8).
These relationships help relate the various stresses and strains in the overall system and the Kelvin arm:
formula_22
formula_23
formula_24
formula_25
where the subscripts formula_26, formula_14, formula_15,and formula_16 refer to Kelvin, dashpot, spring one, and spring two, respectively.
Using these relationships, their time derivatives, and the above stress-strain relationships for the spring and dashpot elements, the system can be modeled as follows:
formula_27
or, in dot notation:
formula_28
The retardation time, formula_29, is different for each material and is equal to
formula_30
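A complementary illustration is the creep response under constant stress, using the Kelvin–Voigt representation above. Again the parameters are arbitrary illustrative values and the integration is a plain explicit Euler sketch.

```python
def sls_creep(E1, E2, eta, sigma0, t_end, dt=1e-4):
    """Strain response of the standard linear solid (Kelvin-Voigt representation)
    to a constant stress sigma0, integrated with explicit Euler."""
    eps = sigma0 / E1                   # instantaneous strain of the lone spring E1
    t, out = 0.0, [(0.0, eps)]
    while t < t_end:
        # sigma0 = E1*E2/(E1+E2) * eps + E1*eta/(E1+E2) * deps/dt   (stress rate is zero)
        deps = ((E1 + E2) * sigma0 - E1 * E2 * eps) / (E1 * eta)
        eps += deps * dt
        t += dt
        out.append((t, eps))
    return out

E1, E2, eta, sigma0 = 1.0e6, 2.0e6, 1.0e5, 1.0e4   # illustrative values (Pa, Pa, Pa*s, Pa)
for t, e in sls_creep(E1, E2, eta, sigma0, 0.3)[::1000]:
    print(f"t = {t:5.2f} s   strain = {e:.4f}")
# the strain creeps from sigma0/E1 = 0.010 toward sigma0*(1/E1 + 1/E2) = 0.015
```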
Model characteristics.
The standard linear solid model combines aspects of the Maxwell and Kelvin–Voigt models to accurately describe the overall behavior of a system under a given set of loading conditions. When an instantaneous stress is applied, the material exhibits an instantaneous component in its response. Instantaneous release of the stress likewise produces a discontinuous decrease in strain, as expected. The shape of the time-dependent strain curve reflects the type of equation that characterizes the behavior of the model over time, depending upon how the model is loaded.
Although this model can be used to accurately predict the general shape of the strain curve, as well as behavior for long time and instantaneous loads, the model lacks the ability to accurately model material systems numerically.
The fluid model equivalent to the standard linear solid model includes a dashpot in series with the Kelvin–Voigt model and is called the Jeffreys model.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_{s} = E \\varepsilon"
},
{
"math_id": 1,
"text": " \\sigma_D = \\eta \\frac {d\\varepsilon} {dt} "
},
{
"math_id": 2,
"text": "\\sigma_{tot} = \\sigma_1 + \\sigma_2"
},
{
"math_id": 3,
"text": "\\varepsilon_{tot} = \\varepsilon_1 = \\varepsilon_2"
},
{
"math_id": 4,
"text": "\\sigma_{tot} = \\sigma_1 = \\sigma_2"
},
{
"math_id": 5,
"text": "\\varepsilon_{tot} = \\varepsilon_1 + \\varepsilon_2"
},
{
"math_id": 6,
"text": "E = E_2"
},
{
"math_id": 7,
"text": "\\eta"
},
{
"math_id": 8,
"text": "E = E_1"
},
{
"math_id": 9,
"text": "\\sigma_{tot} = \\sigma_{m} + \\sigma_{S_1}"
},
{
"math_id": 10,
"text": "\\varepsilon_{tot} = \\varepsilon_{m} = \\varepsilon_{S_1}"
},
{
"math_id": 11,
"text": "\\sigma_{m} = \\sigma_{D} = \\sigma_{S_2}"
},
{
"math_id": 12,
"text": "\\varepsilon_{m} = \\varepsilon_{D} + \\varepsilon_{S_2}"
},
{
"math_id": 13,
"text": "m"
},
{
"math_id": 14,
"text": "D"
},
{
"math_id": 15,
"text": "S_1"
},
{
"math_id": 16,
"text": "S_2 "
},
{
"math_id": 17,
"text": " \\frac {d\\varepsilon(t)} {dt} = \\frac { \\frac {E_2} {\\eta} \\left ( \\frac {\\eta} {E_2}\\frac {d\\sigma(t)} {dt} + \\sigma(t) - E_1 \\varepsilon(t) \\right )}{E_1 + E_2} "
},
{
"math_id": 18,
"text": "\\sigma(t) + \\frac {\\eta} {E_2} \\frac{d\\sigma(t)}{dt} = E_1 \\varepsilon(t) + \\frac {\\eta (E_1 + E_2)} {E_2} \\frac{d\\varepsilon(t)}{dt}"
},
{
"math_id": 19,
"text": "\\sigma + \\frac {\\eta} {E_2} \\dot {\\sigma} = E_1 \\varepsilon + \\frac {\\eta (E_1 + E_2)} {E_2} \\dot {\\varepsilon}"
},
{
"math_id": 20,
"text": " \\tau "
},
{
"math_id": 21,
"text": " \\tau = \\frac {\\eta} {E_2} "
},
{
"math_id": 22,
"text": "\\sigma_{tot} = \\sigma_{k} = \\sigma_{S_1}"
},
{
"math_id": 23,
"text": "\\varepsilon_{tot} = \\varepsilon_{k} + \\varepsilon_{S_1}"
},
{
"math_id": 24,
"text": "\\sigma_{k} = \\sigma_{D} + \\sigma_{S_2}"
},
{
"math_id": 25,
"text": "\\varepsilon_{k} = \\varepsilon_{D} = \\varepsilon_{S_2}"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "\\sigma(t) + \\frac{\\eta}{E_1+E_2}\\frac{d\\sigma(t)}{dt} = \\frac{E_1E_2}{E_1+E_2}\\varepsilon(t) + \\frac{E_1\\eta}{E_1+E_2} \\frac{d\\varepsilon(t)}{dt}"
},
{
"math_id": 28,
"text": "\\sigma + \\frac{\\eta}{E_1+E_2} \\dot \\sigma = \\frac{E_1E_2}{E_1+E_2}\\varepsilon + \\frac{E_1\\eta}{E_1+E_2} \\dot \\varepsilon"
},
{
"math_id": 29,
"text": " \\bar{\\tau} "
},
{
"math_id": 30,
"text": " \\bar{\\tau} = \\frac {\\eta} {E_2} "
}
]
| https://en.wikipedia.org/wiki?curid=7624108 |
7624304 | Parity game | Mathematical game played on a directed graph
A parity game is played on a colored directed graph, where each node has been colored by a priority – one of (usually) finitely many natural numbers. Two players, 0 and 1, move a (single, shared) token along the edges of the graph. The owner of the node that the token falls on selects the successor node (does the next move). The players keep moving the token, resulting in a (possibly infinite) path, called a play.
The winner of a finite play is the player whose opponent is unable to move. The winner of an infinite play is determined by the priorities appearing in the play. Typically, player 0 wins an infinite play if the largest priority that occurs infinitely often in the play is even. Player 1 wins otherwise. This explains the word "parity" in the title.
Parity games lie in the third level of the Borel hierarchy, and are consequently determined.
Games related to parity games were implicitly used in Rabin's
proof of decidability of the monadic second-order theory of "n" successors (S2S for "n" = 2), where determinacy of such games was
proven. The Knaster–Tarski theorem leads to a relatively simple proof of determinacy of parity games.
Moreover, parity games are history-free determined. This means that if a player has a winning strategy then that player has a winning strategy that depends only on the current board position, and not on the history of the play.
Solving a game.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in computer science:
Can parity games be solved in polynomial time?
"Solving" a parity game played on a finite graph means deciding, for a given starting position, which of the two players has a winning strategy. It has been shown that this problem is in NP and co-NP, more precisely UP and co-UP, as well as in QP (quasipolynomial time). It remains an open question whether this decision problem is solvable in PTime.
Given that parity games are history-free determined, solving a given parity game is equivalent to solving the following simple-looking graph-theoretic problem. Given a finite colored directed bipartite graph with "n" vertices formula_0, and "V" colored with colors from "1" to "m", is there a choice function selecting a single outgoing edge from each vertex of formula_1, such that the resulting subgraph has the property that in each cycle the largest occurring color is even?
Recursive algorithm for solving parity games.
Zielonka outlined a recursive algorithm that solves parity games. Let formula_2 be a parity game, where formula_1 resp. formula_3 are the sets of nodes belonging to player 0 resp. 1, formula_0 is the set of all nodes, formula_4 is the total set of edges, and formula_5 is the priority assignment function.
Zielonka's algorithm is based on the notion of attractors. Let formula_6 be a set of nodes and formula_7 be a player. The i-attractor of U is the least set of nodes formula_8 containing U such that player i can force a visit to U from every node in formula_8. It can be defined by a fix-point computation:
formula_9
In other words, one starts with the initial set U. Then, at each step (formula_10) one adds all nodes belonging to player i that can reach the previous set (formula_11) with a single edge, and all nodes belonging to player 1−i whose every edge leads into the previous set (formula_11), no matter which edge that player takes.
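As an illustration, this fix-point computation can be written directly in Python. The representation below is only an assumption made for the sketch: the game is given by a set of nodes, a dictionary owner mapping each node to 0 or 1, and a dictionary edges mapping each node to a list of successors; successors outside the given node set are ignored, so the same function can later be reused on subgames.
def attractor(nodes, owner, edges, U, i):
    # least set containing U from which player i can force a visit to U
    attr = set(U) & set(nodes)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in attr:
                continue
            succs = [w for w in edges[v] if w in nodes]
            if owner[v] == i and any(w in attr for w in succs):
                attr.add(v)   # player i has an edge into the attractor
                changed = True
            elif owner[v] != i and all(w in attr for w in succs):
                attr.add(v)   # every edge of player 1-i leads into the attractor
                changed = True
    return attr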
Zielonka's algorithm is based on a recursive descent on the number of priorities. If the maximal priority is 0, it is immediate to see that player 0 wins the whole game (with an arbitrary strategy). Otherwise, let p be the largest one and let formula_12 be the player associated with the priority. Let formula_13 be the set of nodes with priority p and let formula_14 be the corresponding attractor of player i.
Player i can now ensure that every play that visits A infinitely often is won by player i.
Consider the game formula_15 in which all nodes and affected edges of A are removed. We can now solve the smaller game formula_16 by recursion and obtain a pair of winning sets formula_17. If formula_18 is empty, then so is formula_19 for the game G, because player formula_20 can only decide to escape from formula_21 to A which also results in a win for player i.
Otherwise, if formula_18 is not empty, we only know for sure that player formula_20 can win on formula_18 as player i cannot escape from formula_18 to A (since A is an i-attractor). We therefore compute the attractor formula_22 and remove it from G to obtain the smaller game formula_23. We again solve it by recursion and obtain a pair of winning sets formula_24. It follows that formula_25 and formula_26.
In simple pseudocode, the algorithm might be expressed as this:
function formula_27
p := maximal priority in G
if formula_28
return formula_29
else
U := nodes in G with priority p
formula_30
formula_31
formula_32
if formula_33
return formula_34
formula_35
formula_36
return formula_37
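The pseudocode can be mirrored in Python as follows. This is a sketch rather than an optimized implementation: it reuses the attractor function from the earlier sketch (assumed to be defined alongside it), takes an additional dictionary priority mapping each node to its priority, and forms subgames simply by passing a smaller node set.
def solve(nodes, owner, edges, priority):
    # returns the pair of winning regions (W0, W1) for players 0 and 1
    if not nodes:
        return set(), set()
    p = max(priority[v] for v in nodes)
    if p == 0:
        return set(nodes), set()          # player 0 wins everywhere
    i = p % 2
    U = {v for v in nodes if priority[v] == p}
    A = attractor(nodes, owner, edges, U, i)
    W = solve(set(nodes) - A, owner, edges, priority)    # winning regions of G \ A
    if not W[1 - i]:
        return (set(nodes), set()) if i == 0 else (set(), set(nodes))
    B = attractor(nodes, owner, edges, W[1 - i], 1 - i)
    W2 = solve(set(nodes) - B, owner, edges, priority)   # winning regions of G \ B
    win_i, win_other = W2[i], W2[1 - i] | B
    return (win_i, win_other) if i == 0 else (win_other, win_i)
For example, in a two-node game where node "a" (owned by player 0, priority 2) and node "b" (owned by player 1, priority 1) point at each other, the only play alternates between the two nodes and the largest priority seen infinitely often is 2, so the call solve({"a", "b"}, {"a": 0, "b": 1}, {"a": ["b"], "b": ["a"]}, {"a": 2, "b": 1}) returns the whole node set as the winning region of player 0.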
Related games and their decision problems.
A slight modification of the above game, and the related graph-theoretic problem, makes solving the game NP-hard. The modified game has the Rabin acceptance condition, and thus every vertex is colored by a set of colors instead of a single color. Accordingly, we say a vertex "v" has color "j" if the color "j" belongs to the color set of "v". An infinite play is winning for player 0 if there exists "i" such that infinitely many vertices in the play have color "2i", yet finitely many have color "2i+1".
Parity is the special case where every vertex has a single color.
Specifically, in the above bipartite graph scenario, the problem now is to determine if there
is a choice function selecting a single outgoing edge from each vertex of "V"0, such that the resulting subgraph has the property that in each cycle (and hence each strongly connected component) there exists an "i" such that the cycle contains a node with color 2"i" and no node with color 2"i" + 1.
Note that as opposed to parity games, this game is no longer symmetric with respect to players 0 and 1.
Relation with logic and automata theory.
Despite its interesting complexity theoretic status, parity game solving can be seen as the algorithmic backend to problems in automated verification and controller synthesis. The model-checking problem for the modal μ-calculus for instance is known to be equivalent to parity game solving. Also, decision problems like validity or satisfiability for modal logics can be reduced to parity game solving.
External links.
Two state-of-the-art parity game solving toolsets are the following: | [
{
"math_id": 0,
"text": "V = V_0 \\cup V_1"
},
{
"math_id": 1,
"text": "V_0"
},
{
"math_id": 2,
"text": "G=(V, V_0,V_1,E,\\Omega)"
},
{
"math_id": 3,
"text": "V_1"
},
{
"math_id": 4,
"text": "E \\subseteq V \\times V"
},
{
"math_id": 5,
"text": "\\Omega: V \\rightarrow \\mathbb{N}"
},
{
"math_id": 6,
"text": "U \\subseteq V"
},
{
"math_id": 7,
"text": "i=0,1"
},
{
"math_id": 8,
"text": "Attr_i(U)"
},
{
"math_id": 9,
"text": "\\begin{align}\nAttr_i(U)^0 &:= U\n\\\\\nAttr_i(U)^{j+1} &:= Attr_i(U)^j \\cup \\{v \\in V_i \\mid \\exists (v,w) \\in E: w \\in Attr_i(U)^j \\} \\cup \\{v \\in V_{1-i} \\mid \\forall (v,w) \\in E: w \\in Attr_i(U)^j \\}\n\\\\\nAttr_i(U) &:= \\bigcup_{j=0}^\\infty Attr_i(U)^j\n\\end{align}"
},
{
"math_id": 10,
"text": "Attr_i(U)^{j+1}"
},
{
"math_id": 11,
"text": "Attr_i(U)^{j}"
},
{
"math_id": 12,
"text": "i = p \\bmod 2"
},
{
"math_id": 13,
"text": "U = \\{v \\mid \\Omega(v) = p\\}"
},
{
"math_id": 14,
"text": "A = Attr_i(U)"
},
{
"math_id": 15,
"text": "G' = G \\setminus A"
},
{
"math_id": 16,
"text": "G'"
},
{
"math_id": 17,
"text": "W'_i, W'_{1-i}"
},
{
"math_id": 18,
"text": "W'_{1-i}"
},
{
"math_id": 19,
"text": "W_{1-i}"
},
{
"math_id": 20,
"text": "1-i"
},
{
"math_id": 21,
"text": "W_i'"
},
{
"math_id": 22,
"text": "B = Attr_{1-i}(W'_{1-i})"
},
{
"math_id": 23,
"text": "G'' = G \\setminus B"
},
{
"math_id": 24,
"text": "W''_i, W''_{1-i}"
},
{
"math_id": 25,
"text": "W_i = W''_i"
},
{
"math_id": 26,
"text": "W_{1-i} = W''_{1-i} \\cup B"
},
{
"math_id": 27,
"text": "solve(G)"
},
{
"math_id": 28,
"text": "p = 0"
},
{
"math_id": 29,
"text": "W_0, W_1 := V, \\{\\}"
},
{
"math_id": 30,
"text": "i := p \\bmod 2"
},
{
"math_id": 31,
"text": "A := Attr_i(U)"
},
{
"math_id": 32,
"text": "W_0', W_1' := solve(G \\setminus A)"
},
{
"math_id": 33,
"text": "W_{1-i}' = \\{\\}"
},
{
"math_id": 34,
"text": "W_i , W_{1-i} := V, \\{\\}"
},
{
"math_id": 35,
"text": "B := Attr_{1-i}(W_{1-i}')"
},
{
"math_id": 36,
"text": "W_0'', W_1'' := solve(G \\setminus B)"
},
{
"math_id": 37,
"text": "W_i , W_{1-i} := W_i'', W_{1-i}'' \\cup B"
}
]
| https://en.wikipedia.org/wiki?curid=7624304 |
7625671 | Tetralemma | The tetralemma is a figure that features prominently in the logic of India.
Definition.
It states that with reference to any logical proposition X, there are four possibilities:
formula_0 (affirmation)
formula_1 (negation)
formula_2 (both)
formula_3 (neither)
Catuskoti.
The history of fourfold negation, the Catuskoti (Sanskrit), is evident in the logico-epistemological tradition of India, given the categorical nomenclature Indian logic in Western discourse. Subsumed within the auspices of Indian logic, 'Buddhist logic' has been particularly focused in its employment of the fourfold negation, as evidenced by the traditions of Nagarjuna and the Madhyamaka, particularly the school of Madhyamaka given the retroactive nomenclature of Prasangika by the Tibetan Buddhist logico-epistemological tradition. The tetralemma was also used as a form of inquiry, rather than of logic, in the Nasadiya Sukta of the Rigveda (the creation hymn), though it seems to have been rarely used as a tool of logic before Buddhism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X "
},
{
"math_id": 1,
"text": "\\neg X"
},
{
"math_id": 2,
"text": "X \\land\\neg X"
},
{
"math_id": 3,
"text": "\\neg (X \\lor \\neg X) "
}
]
| https://en.wikipedia.org/wiki?curid=7625671 |
76257936 | Early translations of the New Testament | Early translations of the New Testament – translations of the New Testament created in the 1st millennium. Among them, the ancient translations are highly regarded. They play a crucial role in modern criticism of New Testament's text. These translations reached the hands of scholars in copies and also underwent changes, but the subsequent history of their text was independent of the Greek text-type and are therefore helpful in reconstructing it. Three of them – Syriac, Latin, Coptic – date from the late 2nd century and are older than the surviving full Greek manuscripts of the New Testament. They were written before the first revisions of the Greek New Testament and are therefore the most highly regarded. They are obligatorily cited in all critical editions of the Greek text-type. Translations produced after 300 (Armenian, Georgian, Ethiopic) are already dependent on the reviews, but are nevertheless important and are generally cited in the critical apparatus. The Gothic and Slavic translations are rarely cited in critical editions. Omitted are those of the translations of the first millennium that were not translated directly from the Greek original, but based on another translation (based on the "Vulgate", Peshitta and others).
Translations from the second half of the first millennium are less important than the ancient translations for reconstructing the original text of the New Testament, because they were written later. Nevertheless, they are taken into account; it may always happen that they preserve some reading of Scripture better than the ancient translations. Textual critics are primarily interested in which family of the Greek text they support. Therefore, they cannot be ignored when reconstructing the history of the New Testament. Among the translations of the first millennium, the Persian and Caucasian Albanian translations are completely lost.
In the 27th edition of the Nestle–Aland Greek New Testament (NA27), the critical apparatus cites translations into the following languages: Latin (Old Latin and "Vulgate"), Syriac, Coptic dialects (Sahidic, Bohairic, Achmimic, Subachmimic, Middle Egyptian, Middle Egyptian Fayumic, Proto-Bohairic), Armenian, Georgian, Gothic, Ethiopian, and Church Slavonic. Omitted are translations into Arabic, Nubian, Sogdian, Old English, Old Low German, Old High German, and Old French.
Syriac translations.
Old Syriac translations.
Syria played a great role in the beginnings of Christianity. In the 1st century, Antioch was an important missionary center, while in the 2nd century it served as a bridge between Palestine and "Western" Christianity. The disciples of Jesus were first called Christians in Antioch (Acts 11:26). This is probably where the Gospels of Matthew and Luke, the "Didache", the letters of Ignatius (around 107), and, in the late 2nd century, the Gospel of Thomas were written. In Syria, Greek and Syriac language influences crossed. In Antioch itself, Greek was dominant, but outside of it (Damascus, Edessa) there was much less knowledge of Greek. As time went on, the influence of Greek diminished. In Palestine, a dialect of Syriac was spoken, close to the Aramaic language, i.e. the one spoken by Jesus and the apostles, while the first Syriac translations were produced at a time when the oral tradition about Jesus and the apostles was still alive. The key terminology of apostolic teaching could not yet have been forgotten, and could to some extent be preserved in the earliest translations. The Syriac translations provide a better understanding of the New Testament's authors, who wrote in Greek but thought in Semitic idiom. This is why textual critics have a special respect for the Syriac translations.
The history of Syriac translations has been the subject of a lot of research and still seems very complicated. The oldest translation of the New Testament into Syriac is probably the "Diatessaron" (Harmony of the Four Gospels), made by Tatian around 170. Tatian created his own chronological order, in some places radically diverging from the chronology of each of the Gospels. Repetitive texts were discarded, resulting in the "Diatessaron" accounting for 72% of the total volume of the four Gospels. Tatian made the greatest use of the Gospel of John, the least use of the Gospel of Mark. The 56 canonical verses of the Gospels do not find their counterparts in the "Diatessaron". The genealogies of Jesus are omitted, as well as the texts that speak of Christ's humanity, and Joseph is not called Mary's husband. Nor is there a "Pericope adulterae" (John 7:53-8:11). It represents a Western text-type. Old Testament citations follow the Peshitta text-type. It is preserved in Arabic and Latin translations; only fragments are preserved in Greek.
Another translation – this time of the entire New Testament – was made around 180 (or not much earlier). It is quoted by Ephrem the Syrian. It is called the Old Syriac translation, and was made from an old Greek text-type representing the Western text-type. It is preserved only in two early manuscripts: the Curetonian (4th century) and the Sinaitic (5th century). The former was published by William Cureton in 1858 and is marked syrcur, while the latter was discovered by A.S. Lewis in 1892 in Sinai, is marked syrsin and is a palimpsest. The manuscripts probably transmit a text close to that of about 200. Both manuscripts contain the Gospels themselves, and in them some gaps. The Old Syriac translation of Paul's Epistles has not survived. Science knows it only from the quotations of the Eastern Church Fathers.
Peshitta.
At the beginning of the fifth century, all the NT books (with the exception of 2 Peter, 2 John, 3 John, Jude, Revelation) were re-translated, perhaps to blur the differences that existed between the various existing translations. The Peshitta is designated by the symbol syrp.
Until the late 19th century, it was thought that the Peshitta was written in the 2nd century. After Burkitt's publication, most biblical scholars agreed that this work was done by Bishop Rabbula of Edessa (d. 436). Rabbula's authorship is questionable, however, because the quotations that appear in his writings do not always agree with the Peshitta. It was not until the 10th century that the translation was called the Peshitta ("pešitto" – simple, ordinary), by Moshe bar Kefa, because it was translated into colloquial, ordinary language (to make it accessible to everyone). The Peshitta contained 22 books of the NT (lacking 2 Peter, 2 and 3 John, Jude, Revelation), and did not include the texts of Luke 22:17-18; John 7:53 – 8:11. The Peshitta became the translation in force in the Syriac Church, both Eastern and Western, and this means that it was written before the schism of 431. The text-type of the Peshitta was published in print in 1555 in Venice. An edition of the critical text of the Peshitta is being compiled in Leiden.
The text-type of the Peshitta is heterogeneous, the Gospels generally represent the Byzantine text-type, and some parts of Acts represent the Western text-type, with numerous infiltrations of the Alexandrian text-type (e.g., Matthew 14:12; 15:4; Mark 1:2; John 1:18) and the Caesarean text-type. G.H. Gwilliam has shown that in Matthew 1-14 the Peshitta agrees with the "Textus Receptus" only 108 times, with the Codex Vaticanus 65 times, in 137 cases it differs from both and supports either Old Syriac or Old Latin translations, and in 31 cases it has its own variants. In the Gospel of Mark, the text does not follow either the Alexandrian, Western or Caesarean traditions. Hope Broome Dows examined 135 selected variants of the Gospel of Mark. 48.9% of these variants were consistent with the Byzantine text, 29.1% with the Western text. Many of the variants in Mark's Gospel are consistent with the Old Syriac translation. This research refuted the circulating opinion that Peshitta conveys the Byzantine text. In the Acts of the Apostles, it transmits many Western variants.
More than 300 Peshitta manuscripts have survived, almost half of which are in the British Library. The other significant collection of manuscripts is in Cambridge. Most of these manuscripts are written in Syriac alphabet. More important manuscripts:
Subsequent Syriac translations.
One of the still unsolved mysteries in the history of the New Testament is the Philoxenian (syrph) and Harkleian (syrh) translations. According to one version, they are the Peshitta's revisions. First, in 508 Philoxenus, Bishop of Manbij on the Euphrates River, was said to have revised the Peshitta (Philoxenian), while in 616 Thomas of Harkel (Harkelian) revised the Philoxenian translation. According to the second version, these are two separate translations. About 35 manuscripts representing the Harkel translation have survived; they date from the seventh century upward and show some similarity to the Western text-type represented by the Codex Bezae. The Philoxenian translation is known only from the markings of Thomas' "critical apparatus" in the Harkleian manuscripts. It already contained the four small Universal Epistles and the Apocalypse, which Peshitta did not have. The most important Harkleian manuscript is kept at Trinity College in Dublin. Around 500, a translation was made into Syro-Palestinian (the Aramaic-Galilean dialect spoken by Jesus). It contains 2 Peter, 2 John, 3 John, Jude and Revelation, and represents the Caesarean text-type and is completely independent of other Syriac translations. It is preserved in the Lectionaries and other fragmentary manuscripts. The three most important manuscripts date from 1030, 1104 and 1118. It is designated by the symbol syrpal.
Limitations of Syriac translations.
Syriac, as a Semitic language, has no endings and could not afford to take as much liberty in changing the word order as Greek did. The verb is conjugated in a completely different way than in Greek. The Syriac language had the so-called "status emphaticus," the use of which does not always correspond to the Greek genitive.
In transcriptions of proper names, the consonant "ξ" is rendered with two Syriac consonants "ܭܣ". The letter τ was transcribed by "ܛ", while "θ" was transcribed by "ܬ" (e.g., the name "Τιμοθεε", Timothy, "ܛܡܬܐܘܣ"). In the case of Semitisms, efforts were made to reproduce the original Semitic sound, but not all Semitic names were recognized (e.g., Aretas in 2 Corinthians 11:32). The Greek NT text-type renders the name of the city "jirušalaim" in two ways: "Ιερουσαλημ/Ιεροσολυμα", Syriac translations revert to a unified form: "Urišlem". "Διαβολος" is regularly rendered by "satana". "Σιμων/Πετρος/Κηφας". "Σιμων Πετρος" is almost always rendered by "šem‛un kepa". "Πετρος", however, is sometimes rendered by "Ptrws". The differences in verb conjugation are the most difficult.
Latin translations.
Old Latin translations.
The Western Church originally used Greek, so the need to translate the Bible into Latin did not immediately arise. The first Latin translations appeared first in North Africa (around 170) and then in Rome and Gaul. Their number steadily increased and by the middle of the fourth century had reached forty. All these translations were based on the Septuagint and slavishly adhere to the Greek text. Translations produced before the "Vulgate" were called Old Latin translations – "Vetus latina". The most important and respected were "Aphra" and "Itala", but neither of them gained widespread recognition throughout the Church. Both translations represent the Western text-type. "Aphra" deviated more from the Greek text-type, while "Itala" had a bit of Byzantine influence, the number of which increased over time. The translations underwent constant transformations, and their textual variants multiplied.
Not a single Old Latin manuscript transmitting the full text of the NT has survived to our day. However, 32 manuscripts containing the Gospels, 12 Acts, 4 Paul's epistles and 1 Revelation, plus a number of fragments have survived, making a total of 89 manuscripts. They date from the 4th to 13th centuries. Most of them represent "Itala", while a small number of manuscripts of "Aphra" have survived. The most valuable manuscripts are:
"Vulgate".
"Itala" was disordered, and the disorder increased as time passed. In this situation, Pope Damasus commissioned Jerome to make a new translation of the entire Scriptures into Latin. The work on the Gospels took about a year and was completed in 383. It was a revision of the "Itala", which Jerome confronted with Greek manuscripts. He worked out the Gospels most thoroughly, and this translation was called the "Vulgate" ("vulgus" – common, ordinary); it met with criticism and was revised many times, but by the end of the 6th century it already enjoyed authority until it became the official Bible of the Western Church.
Since the "Vulgate" functioned alongside Old Latin translations for some time, this influenced Old Latin reminiscences in the "Vulgate" text-type and the correction of Old Latin text-types in the "Vulgate" fashion. In addition, due to the carelessness of copyists, the text-type was further distorted. In this situation, Alcuin (735–804) and Theodulf (750–821) attempted to revise and purify the "Vulgate" text-type, but their efforts contributed to the growth of mixed versions. Lanfrank of Bec (1005–1089) and Stephen Harding (d. 1134) later worked on revising the text of the "Vulgate", also fruitlessly, so in the late Middle Ages "correctoria" were created, among them the Paris Bible. After the invention of printing in Europe, the "Vulgate" became the first printed book – the Gutenberg Bible (1452–1456) was created.
The first critical edition of the "Vulgate" text-type was the work of Robert Estienne in 1528. In 1546, the Council of Trent passed a resolution on the need to prepare a revised "Vulgate". This was accomplished by Pope Sixtus V in 1590 (the Sixtine Vulgate), but was critically imperfect, so Sixtus' successor Clement VIII led to the publication of the revised Sixto-Clementine Vulgate, corrected in 4900 places (1592, 1593, 1598).
In 1907 Pope Pius X appointed a commission to revise the "Vulgate". The premise was to correct the text-type in the spirit of the achievements of modern linguistics, to cleanse the spelling of medieval trappings and to translate anew what Jerome had departed too far from the original. The work took several decades, with the entire work published between 1926 and 1969.
The "Vulgata Stuttgartiana" is more similar to the Sixto-Clementine than to the "editio nova", and represents an attempt to bring the text as close to Jerome's original "Vulgate" as possible. It is based mainly on the 8th century Codex Amiatinus.
More than 10,000 manuscripts of the "Vulgate" have survived to modern times. Their exact number is not known. Some of the more famous and valued manuscripts of the "Vulgate" include:
Limitations of Latin translations.
A limitation of the Latin translations are the transcriptions of Semitic names and terms: "Caiphas/Caiaphas", "Scarioth/Iscariotes", "Istrahel/Israhel", "Isac/Isaac". Another type of limitation is caused by Latinisms (e.g., ἑκατονταρχης/κεντυριων → "centurio"). Due to the limitations of Latin grammar, the "aorist" and "perfectum" tenses cannot be distinguished. Both "ελαλησα" and "λελαληκα" must be rendered by "locutus sum". The Latin language does not have a genitive. Sometimes the genitive is rendered by the demonstrative pronoun in the expressions "hic" "mundus" or "hoc saeculum". Greek has many forms of the negation participle (οὐ, οὐκ, οὐχ, οὐχί, μή, οὐ μή, μὴ οὐ), however, Latin has far fewer of these and as a result they are translated either by "non" or "nonne". The participle οὐδε/μηδε is translated as "neque" and the adjective οὐδεiς/μηδεις translated as "nemo". Greek synonyms such as καταγγέλλω and ἁναγγγέλλω, οἰκέω and κατοικέω are not precisely distinguished in Latin. Sometimes there is a problem with the prepositions ἐκ and ἁπό, ἁπό and ὐπό, ἐν and ἐπί, rendered by the Latin "a", "de" and "ex". Also, the prepositions εἰς and ἐν in Hellenistic Greek may have been used interchangeably. This problem applies to all synonyms.
In general, Latin translations were very literal and tried to render the same Greek word with the same Latin word. But this was not always the case, especially with "Versio Aphra", which rendered the same Greek words with different Latin terms. The term αὐτου is rendered by eius or "illius", ην by "erat" or "fuit".
Coptic translations.
Initially, the Church in Egypt used the Greek language. The transition to Coptic – the last form of the Egyptian language – took place between 180 and 200. However, the Coptic language functioned in as many as seven dialects. The New Testament was translated into five of them. The Coptic translations represent the Alexandrian text-type, while the Sahidic (copsa) and Bohairic (copbo) translations have traces of the Western text-type. The Sahidic translation was quite free, while the Bohairic translation was very slavish, tending to translate every word, even using grammatical borrowings. 52 manuscripts are bilingual and they contain – in addition to the Coptic text-type – the Greek text-type; 2 manuscripts are trilingual and they contain the following text-types: Greek, Coptic, Arabic.
Sahidic dialect.
At first, a partial translation was made into the Sahidic (copsa) dialect spoken in Upper Egypt (knowledge of Greek was not common here). Later it was supplemented with missing books. The exact date of the translation is unknown, with scholars giving dates from the mid-2nd century to the early 4th century. This translation generally represents the Alexandrian textual tradition. However, it contains quite a few Western readings in the texts of the Gospel of John and Acts. These are sporadic, and it is difficult to find any regularity in them. The text of the Sahidic dialect stands between the text of the Greek codices A and B; it is closest to formula_0, and it is also close to Codex T, which is a bilingual Greek-Sahidic codex. Based on later manuscripts, a slow process of text revision can be observed. In the 9th century, the Sahidic dialect began to give way to Bohairic. Manuscripts representing this translation were discovered in the 18th century. There are 560 known Sahidic manuscripts cataloged by the Münster institute, none of them complete.
In Acts, essentially the Alexandrian text-type passes on, with a small number of lessons from the Western text-type. In the "apostolic decree" of Acts 15:19 n there was a superimposition of Alexandrian and Western lessons. In Paul's Epistles, the text is Alexandrian with a Western tinge, close to the formula_1 and the Vatican Codex. In the catholic epistles, the Sahidic translation represents the classical Alexandrian text-type and is distant for all other text-types, very often resonating with the Codex B. The Sahidic manuscripts omit the same verses as the Greek manuscripts representing the Alexandrian tradition. The order of the Gospels is John, Matthew, Mark, and Luke. The Letter to the Hebrews is placed after 2 Corinthians and before Galatians. In many manuscripts, the Book of Revelation is missing.
More important manuscripts.
Source:
George Horner prepared a critical edition of the Sahidic text between 1911 and 1924.
Bohairic dialect.
The translation into the Bohairic dialect (copbo), used in the Nile Delta, was made in the 3rd century, or at the latest in the early 4th century. The history of the Bohairic translation is the most complicated among all Coptic dialects. It was previously assumed that the Bohairic translation was based on a Greek manuscript representing the late Alexandrian text-type. However, two fragments discovered in the 20th century, dating from the 4th to 5th centuries, changed scholars' views on the history of the Bohairic text-type. The text-type transmitted by them is so different from later known manuscripts that it has been designated as proto-Bohairic text (copbo). It even differs in language. The order of the books is: the Gospels (John, Matthew, Mark, and Luke), Paul's Letters (Hebrews placed after 2 Thessalonians and before 1 Timothy), Catholic Epistles, Acts, Revelation. Only a few manuscripts contain the Book of Revelation.
In this translation, the Sahidic translation was used, as evidenced in some parts of the text-type. Influences from the Western text-type are also visible, while the Byzantine text-type is difficult to discern. By the 11th century, when the patriarchate was moved from Alexandria to Cairo, the Bohairic dialect had already become the dominant language in the Coptic Church. Consequently, the translation of the New Testament into the Bohairic dialect (copbo) became the official text of the Coptic Church in Egypt. It is uncertain whether the Bohairic text-type was ever revised.
More than a hundred manuscripts of the Bohairic dialect have been preserved, but they are of late origin. The oldest complete set of the Gospels dates back to 1174, followed by one from 1178 to 1180, and another from 1192. The remaining manuscripts date from the 13th century onwards. Bodmer discovered a papyrus – Papyrus Bodmer III – containing the majority of the Gospel of John, dated to the 4th century (possibly also the 5th century). There is also a fragment of the Epistle to the Philippians, which presents the text in Sahidic form. All manuscripts contain Mark 16:9–20, while the texts of John 5:4 and John 7:53–8:11 have been omitted in all major manuscripts. George Horner prepared a critical edition of the Bohairic text between 1898 and 1905.
Other Coptic dialects.
Later, the New Testament was translated into the dialects of Middle Egypt: Fayumic (copfay), Achmimic (copach), and Subachmimic (copach2). These translations were based partly on the Greek text-type and partly on earlier Coptic translations, primarily in the Sahidic dialect. The exact time of their creation is difficult to determine, but it is known that they existed in the 4th century. These translations, for the most part, represent the Alexandrian text-type, and their dependence on the Western text-type is noticeable, with occasional parallels to the Old Latin translations. The manuscripts that have survived to our times do not transmit the complete New Testament in the Middle Egyptian dialects. One of them, in the Fayumic dialect, contains only John 6:11–15:11 (with gaps). A manuscript with the text-type of the Gospel of John in the Subachmimic dialect is dated to the years 350–375. It is closer to the Sahidic translation than the Bohairic. The Schøyen Codex contains the Gospel of Matthew and is dated to the 4th century.
Limitations of the Coptic dialects.
The Coptic alphabet was based on the 24 letters of the Greek alphabet, to which 7 letters borrowed from the Demotic script were added (Ϣ, Ϥ, Ϧ, Ϩ, Ϫ, Ϭ, Ϯ). Five letters were used only in words of Greek origin. However, the Coptic language does not distinguish between "d" and "t". This is evident in transcriptions of terms such as σκανδαλον or ενδυμα. On the other hand, it distinguishes sounds that were not known to the Greeks. The Coptic language has only two genders and lacks an equivalent for the μεν particle, although it was sometimes transcribed (along with the δε particle). Many terms have been borrowed from the Greek language (e.g., αλλα, χαρις, σκανδαλον, δικαιοσυνη, κοινονεια, σωμα, ψυχη, αγαθος, πονηρος, προφητης, μαθητης, μαρτυρια, σταυρος, γραμμαρ, σοφος, χρονος, εξουσια, θαλασσα, Σατανας, and many others). Due to itacism, many words are written in two ways: αρχιερευς/αρχηερευς, μαθητης/μαθιτης, Ιταλια/Ηταλια, Δαειδ/Δαυιδ. Abbreviations for "nomina sacra" are made according to the same principle as in Greek texts (e.g., ΘΣ, ΙΗΣ, ΙΣΗΛ, ΠΝΑ). The Coptic translation, due to its literalness, is more useful than the Syriac and Latin translations in reconstructing the Greek text.
Gothic translation.
The Gothic translation stands out among ancient translations because its date, translator, and circumstances of its creation are known. Ulfilas, Urphilas, or Wulfila (310–383), the "Apostle of the Goths," worked in the regions of Dacia and the Bosphorus, converting the Ostrogoths to the Arian Christian faith. As the bishop of Taurida, he participated in the proceedings of the First Council of Nicaea (325). Before embarking on the translation work, he first created the Gothic alphabet, based on Greek, as he did not want to use Old Germanic runes. Another problem was the lack of terminology, so he expanded the vocabulary through borrowings from Greek and Latin. The sentence structure was based on Greek syntax. Ulfilas' translation was used in the Ostrogothic Kingdom in Italy, which lasted briefly (488–554). Shortly after their conversion to Catholicism, the Gothic language disappeared, and there was no one left interested in reading Ulfilas' translation.
The text of the New Testament represents the Byzantine textual tradition (Family E) and is close to the quotations of Chrysostom. The text of Paul's Letters is close to the Peshitta. The surviving manuscripts contain a considerable amount of Western textual elements, perhaps added later during the Ostrogothic Kingdom period.
The most important manuscript is the Codex Argenteus, also known as the "Silver Bible," written in silver and gold letters on parchment soaked in purple dye. It contains the four Gospels, in the order: Matthew, John, Luke, Mark. It consists of 188 pages today (originally there were 336). It was prepared for Theodoric the Great (455–526), the king of the Ostrogoths, shortly after his coronation in Ravenna or Brescia. After Theodoric's death, the codex was forgotten. It was not listed in any catalogs or book lists. It was discovered in the 16th century. During the Thirty Years' War, it was taken by the Swedes and has since been kept in Uppsala. In 1970, in Speyer, one of the missing pages of the codex, known as the Speyer Fragment, was discovered, which concludes the Gospel of Mark. Since then, the codex consists of 188 pages.
The remaining manuscripts of the Gothic New Testament, with one exception, are palimpsests and are fragmentary. The Codex Carolinus contains Romans 11–15, with a bilingual Latin-Gothic text; it is a palimpsest and is kept in Wolfenbüttel. The Codex Ambrosianus A and Codex Ambrosianus B contain fragments of all of Paul's Letters, but only 2 Corinthians has survived in its entirety. Codex Ambrosianus C contains fragments of Matthew 25–27. All three date from the 5th/6th century and are kept in Milan. Codex Taurinensis contains 4 pages with fragments of Galatians and Colossians. There are no manuscripts preserving Acts, the General Epistles, and the Apocalypse. These books have been completely lost.
In 1908, Wilhelm Streitberg prepared an edition of the Gothic Bible based on the manuscripts available to him. In 1919, the second revised edition was published. In 1965, E.A. Ebbinghaus released the fifth revised edition of the Gothic Bible.
Armenian translation.
Between 410 and 414, Mesrop and Isaac translated the entire Bible into Armenian. The New Testament contained 22 books (influenced by the Peshitta). The translation was likely made from Greek, but the influence of the Syriac translation is noticeable. The translator consulted the Syriac translation. According to another explanation, the initial translation was made from Syriac and was later revised based on Greek manuscripts. The original Armenian translation has not survived.
Erroll F. Rhodes enumerated 1244 Armenian manuscripts in 1959. However, Rhodes did not take into account lectionaries and commentaries. The total number of manuscripts exceeds 1600, with 100 containing the entire Bible. This number is greater than any other ancient translation except for the "Vulgate". A significant portion of these manuscripts is preserved in the Bodleian Library. The oldest manuscripts date from the 9th to 10th centuries. The oldest dated manuscript, MS. 991, dates back to 887 and contains the four Gospels written in majuscule. MS. 2374, from the year 989, contains an explanation that Mark 16:9–20 was written by the presbyter Aristion. The significance of this note is minimal because it does not come from the original scribe; it was added in the 14th century. The earliest manuscripts of Acts, the General Epistles, Paul's Letters, and the Apocalypse are bilingual Greek-Armenian codices (Arm. 27, Arm. 9, Gregory–Åland 301; Rhodes 151) and are preserved in the French National Library in Paris.
The Armenian translation includes Third Epistle to the Corinthians and Corinthians to Paul among the books of the New Testament. A characteristic of Armenian manuscripts is commenting on the longer ending of the Gospel of Mark. Out of 220 manuscripts examined by Colwell, only 88 contain the ending without commentary, 99 manuscripts end at Mark 16:8, and the remaining manuscripts include the ending along with a scholion questioning its authenticity.
The translation likely originally represented the Caesarean text-type. Still, by the 5th century, the translation had been revised based on the Byzantine standard, retaining only some remnants of the Caesarean text-type.
The Armenian translation was printed in 1666 in Amsterdam. In 1789, Zohrab published the first critical edition of the New Testament.
Proper names are usually transcribed, and only in a few cases are they rendered using traditional Armenian terms. Greek letters θ and τ are usually rendered differently (t' or t), double consonants ξ and ψ are rendered as k's and p's (never as ks and ps), and diacritics are usually ignored. Armenian nouns decline through seven cases but do not have a vocative case and are not gendered. Verbs have fewer participial forms and lack the subjunctive mood. Greek synonyms are usually not distinguished.
Georgian translation.
At the end of the 5th century, a translation into Georgian was made, likely from Armenian, before the creation of the Armenian "Vulgate". Conybeare claimed that the translation was made directly from Greek; however, later, due to an indeterminate number of Syriacisms, he concluded that it was translated from Syriac. Nevertheless, Conybeare based his opinion only on two manuscripts (from 913 and 995). The text of the translation represents a mixed textual tradition, with the older manuscripts showing a Caesarean element predominance, while in later manuscripts, the Byzantine tradition is prevalent.
Among the oldest manuscripts containing the Gospels are: Adysh from 897, Opiza from 913, and Tbet from 995. In the critical apparatus of many editions, the manuscript Adysh is designated by the symbol geo1, while the other two manuscripts are designated by the symbol geo2. Adysh represents the Caesarean text-type close to Codices Θ, 565, and 700, while geo2 exhibits textual affinity with "f"1 and "f"13. The oldest manuscripts do not contain the text of John 7:53–8:11. The oldest manuscripts containing Acts and the Letters date from the second half of the 10th century. Gregory counted 17 Georgian manuscripts of the New Testament in 1902.
According to the research of J. Molitor, who examined the Letter of James, the Georgian translation presents 53 variants that he identified as Syrian (i.e., Byzantine), 51 Armenian variants, and 59 Syrian-Armenian variants. There are 163 non-Greek variants, and 66 are unusual for Oriental translations. Based on these findings, Molitor concluded that the Letter of James was translated from a Syrian-Armenian translation.
The Georgian translation was revised in the 10th century by Euthymius. Euthymius used Greek manuscripts representing the Byzantine standard text-type. The Apocalypse, translated by Euthymius, was added. The "Pericope de Adultera" (John 7:51–8:11) was added to the Gospel of John.
The text of the Gospels was printed in 1709 in Tbilisi. The complete Bible was published in 1743 in Moscow.
Ethiopian translation.
After the Council of Chalcedon in 451, the Monophysites were persecuted in Byzantium. A significant portion of them found refuge in Ethiopia. Among them were nine active Syrian monks who, due to their zeal and piety, attained saintly status. Besides founding monasteries and propagating Monophysite theology, they were also said to have translated the holy scriptures into the Ethiopian language. However, it is also possible that the translation was not completed until the 6th or 7th century. To this day, it has not been conclusively determined whether the Gospels were translated from Greek or Syriac. There is no doubt, however, that the other books were translated from Greek. Paul's Letters exhibit surprising statistical agreement with formula_1 and the Codex Vaticanus, especially in those passages where these two manuscripts are not supported by any Greek manuscript. In other parts of the New Testament, the Ethiopian translation represents an early Byzantine text-type. In the 12th to 14th centuries, the Ethiopian translation was harmonized with the Arabic text-type.
Over three hundred manuscripts containing one or more books of the New Testament have been preserved. Twenty-six of them were created before the end of the 15th century, while the rest are from the 16th to 19th centuries. The oldest manuscript is Abba Garima I, which contains the Gospels of Mark and Matthew. Radiocarbon dating has shown that it dates from 330 to 540 (previously thought to be from the 9th or 10th century). Abba Garima II contains the Gospels of Luke and John and dates from 430 to 650 (previously thought to be from the 11th century). Other manuscripts are considerably later; for example, Abba Garima III, containing the Gospels, dates from the 11th century. Lalibela contains the four Gospels and dates from 1181 to 1221 AD. Abba Garima B. 20 from the 14th century contains Paul's Letters written in five languages (in columns from left to right): Ethiopian, Syriac, Coptic (Bohairic), Arabic, and Armenian, as well as the Catholic Epistles and Acts in four languages (without the Armenian column). The Ethiopian translation was printed in 1548 in Rome.
Persian translation.
The origin, authorship, and initial parts of the New Testament translated into Persian remain uncertain. According to the testimony of John Chrysostom in the 4th century, there existed a translation into the Persian language. In the 5th century, Theodoret wrote that the Persians "venerate the writings of Peter, Paul, John, Matthew, Luke, and Mark as those that descended from heaven". The translation was probably made from the Peshitta. However, no fragment of the New Testament text-type from that translation has survived to our times. Several fragmentary pages with the text of the Book of Psalms in archaic Pahlavi have survived. Two translations of the Gospels into New Persian, although not strictly classified as early translations, are sometimes cited in critical apparatus. The first of these translations was made from the Peshitta, and its manuscript dates from 1341, while the second was made from Greek, with its manuscript probably dating from the 14th century. In the early 20th century, C.R. Gregory described 37 manuscripts of the Gospels from the 14th to 19th centuries. Kirsopp and Silva Lake suggested that the Persian translation contains traces of the Caesarean textual tradition. The "Diatessaron" in the Persian language has been preserved, translated from the Syriac language. The oldest manuscript dates from the 13th century.
Arabic translation.
The translator of the Arabic version remains unknown, with various traditions attributing it to different individuals. What is certain is that by the 7th century, the translation already existed. There were several translations, some from Greek, others from the Old Syriac translation, and still others from Coptic.
More than 75 manuscripts of the Arabic translation have survived. The oldest manuscript, Sinai Arabicus 151, dates back to 867 and contains Acts and the General Epistles. MS. Borg. Arab. 95, from the 9th century, contains the text of the four Gospels on 173 pages. Sinai Arabicus 72 contains the four Gospels and dates to 897. Several late copies of the "Diatessaron" in Arabic have also been preserved. 16 manuscripts present a bilingual Greek-Arabic text-type (including 0136, 0137, 211, 609). There are also trilingual manuscripts – two of them contain the text in Greek, Coptic, and Arabic, and one in Greek, Latin, and Arabic (minuscule 460).
The text of the four Gospels was printed in Rome in 1590–1591. The complete text of the New Testament was published in the Paris Polyglot and London Polyglot.
All printed editions of the text of the four Gospels represent the Alexandrian textual tradition. Robert Boyd thoroughly examined 63 textual variants from 1 Corinthians in the manuscript Sinai Arab. 155 and concluded that they represent the Alexandrian textual tradition with few Byzantine interpolations. Considering the quality of the text-type, Boyd concluded that the translation was made before the 7th century. Metzger noted that the matter is not settled because the translator could have used an old Greek manuscript, and the translation could have been made in the 7th century. In the Arabic translation, the Greek letter χ is often rendered as ﺵ, corresponding to "sh" (e.g., Tyshikus instead of Tychikus). Veria in Macedonia is called Aleppo (with the clarification "West").
Sogdian translation.
At the beginning of the Middle Ages in Central Asia, the most influential language was Sogdian. It is unknown who and when made the translation, and only small fragments of Matthew, Luke, John, 1 Corinthians, and Galatians have survived. Generally, these are interlinear Syriac-Sogdian text-types dating from the 10th to the 11th centuries. The text of these fragments shows close textual affinity with the Syriac Peshitta. The most important partially preserved manuscript is a lectionary with fragments of the mentioned three Gospels (no fragment of Mark has been found), which besides the influence of the Peshitta also contains elements indicating Old Syriac or "Diatessaron" sources. All Sogdian translation fragments come from one place and were discovered in 1905 in the ruins of the former Nestorian monastery in Bulayiq near Turpan by the expedition of Albert von Le Coq.
Caucasian Albanian translation.
One of the peoples living in the Caucasus were the Albanians. During the 5th to 11th centuries, they were followers of the Christian faith and had their own literature. Unfortunately, all literature in their own language has been lost. According to Armenian tradition, St. Mesrop, in addition to creating the Armenian and Georgian alphabets, also created the Albanian alphabet and is said to have evangelized the Albanians through his two disciples. Somewhat later, according to the same tradition, Bishop Jeremiah was supposed to have translated the Scriptures into the Caucasian Albanian language. It is unknown how much of the Bible was translated, as the translation has been entirely lost.
Nubian translation.
Between Egypt and Ethiopia, there were three Nubian kingdoms. The first Christians arrived in Nubia during the persecutions of Diocletian. However, Christianity spread in this country only in the 6th century. It is not known when the translation into the Nubian language was made; the oldest manuscript fragments date back to the 8th century. The textual character deviates from the classical division of Greek manuscripts. It has some features of the Byzantine text-type as well as all other text-types, including Family 1739.
The first manuscript in the Nubian language was discovered in 1906. It contains a fragment of a lectionary. In the 20th century, other fragments in this language were also discovered.
Church Slavonic translation.
The first translation into the Slavic language was initiated before the year 863 by St. Cyril (d. 869). The work was undertaken with the Balkan Slavs in mind and initially was limited only to the Gospels and liturgical passages (from Acts, the Epistles, and the Psalms). The translation was made based on an early form of the Byzantine text-type – the same one used in the Peshitta. However, significant differences exist between the manuscripts. Some of them contain a large number of Western readings, likely resulting from a revision based on the "Vulgate" in Moravia (after 863). After Cyril's death, his brother, St. Methodius, continued the work and completed the translation of the entire Bible except for the Books of Maccabees. However, his work has been lost.
In the critical editions of the Greek New Testament of Nestle-Åland, the following manuscripts are cited:
Three manuscripts convey bilingual Greek-Slavonic text-type (manuscripts 525, 2136, 2137).
Church Slavonic manuscripts were first cited in the edition of the Greek New Testament by Franz Karl Alter in 1786–1787. Josef Vais prepared a critical edition of the text of the four Gospels in 1935–1936. The edition records approximately 2500 variants.
Other translations.
At the end of the first millennium, translations into: Old English (8th/9th century), Old Low German, Old High German, and Old French (Provençal) emerged. All four translations were made from the "Vulgate", whose text-type had already been influenced by "Itala", and therefore, for research on the Greek text-type of the New Testament, these translations are of lesser significance. However, the Old English translation is important for reconstructing the history of the Latin Bible.
The significance for textual criticism.
In contemporary critical editions of the Greek New Testament, the greatest importance is attributed to translations into Latin, Syriac, and Coptic dialects. Each of these translations was made directly from the Greek language in the early period and has been examined in great detail. For 19th-century textual criticism, these translations were important because they were from a period for which no known Greek manuscript existed (the oldest being from the 4th century). In the 20th century, many Greek manuscripts from the early period were discovered, reducing the significance of these translations. While Westcott and Hort attempted to reconstruct the Greek text-type of the 2nd century based on the Old Latin and Old Syriac translations (in conjunction with Codex Bezae), today it is no longer necessary due to the large number of Greek manuscripts from the 2nd century. Nevertheless, these translations are still important, primarily for linking Greek textual families to their region of origin and reconstructing local textual traditions, aided by patristic citations.
In the 27th edition of the Nestle–Aland Greek New Testament (NA27), the critical apparatus also cites translations into the Armenian, Georgian, Gothic, Ethiopian, and Church Slavonic languages. These translations are cited rarely and only when they have special significance for particular variants (e.g., Mk 16:8). A drawback of these translations is that over time they underwent certain modifications under the influence of other translations.
Translations into Arabic, Nubian, Sogdian, Old English, Old Low German, Old High German, and Old French are omitted in critical editions.
Only those variants attested by Greek manuscripts or independently attested by another translation are cited in the critical apparatus. There are a few exceptions, such as Jas 1:17, where besides the translation, only the patristic testimony is referenced. Before including any variant in the critical apparatus, the difference between the Greek language and the language of the translation is examined, and any variants resulting from limitations in the structure of the given language or stylistic differences are disregarded. In situations where translation variants are doubtful, they are not taken into account. | [
{
"math_id": 0,
"text": "\\mathfrak{P}^{75}"
},
{
"math_id": 1,
"text": "\\mathfrak{P}^{46}"
}
]
| https://en.wikipedia.org/wiki?curid=76257936 |
7625922 | Violin acoustics | Area of study within musical acoustics
Violin acoustics is an area of study within musical acoustics concerned with how the sound of a violin is created as the result of interactions between its many parts. These acoustic qualities are similar to those of other members of the violin family, such as the viola.
The energy of a vibrating string is transmitted through the bridge to the body of the violin, which allows the sound to radiate into the surrounding air. Both ends of a violin string are effectively stationary, allowing for the creation of standing waves. A range of simultaneously produced harmonics each affect the timbre, but only the fundamental frequency is heard. The frequency of a note can be raised by increasing the string's tension, or decreasing its length or mass. The number of harmonics present in the tone can be reduced, for instance by using the left hand to shorten the string length. The loudness and timbre of each of the strings are not the same, and the material used affects sound quality and ease of articulation. Violin strings were originally made from catgut but are now usually made of steel or a synthetic material. Most strings are wound with metal to increase their mass while avoiding excess thickness.
During a bow stroke, the string is pulled until the string's tension causes it to return, after which it receives energy again from the bow. Violin players can control bow speed, the force used, the position of the bow on the string, and the amount of hair in contact with the string. The static forces acting on the bridge, which supports one end of the strings' playing length, are large: dynamic forces acting on the bridge force it to rock back and forth, which causes the vibrations from the strings to be transmitted. A violin's body is strong enough to resist the tension from the strings, but also light enough to vibrate properly. It is made of two arched wooden plates with ribs around the sides and has two f-holes on either side of the bridge. It acts as a sound box to couple the vibration of strings to the surrounding air, with the different parts of the body all responding differently to the notes that are played, and every part (including the bass bar concealed inside) contributing to the violin's characteristic sound. Compared with a bowed string, a plucked string damps out more quickly.
The other members of the violin family have different, but similar timbres. The characteristics of the viola and the double bass contribute to them being used less in the orchestra as solo instruments, in contrast to the cello (violoncello), whose timbre is not adversely affected even though its dimensions do not correspond to the pitch of its open strings.
Historical background.
The nature of vibrating strings was studied by the ancient Ionian Greek philosopher Pythagoras, who is thought to have been the first to observe the relationship between the lengths of vibrating strings and the consonant sounds they make. In the sixteenth century, the Italian lutenist and composer Vincenzo Galilei pioneered the systematic testing and measurement of stretched strings, using lute strings. He discovered that, while musical intervals correspond to simple ratios of string length, the pitch of a string varies only with the square root of its tension. His son Galileo Galilei published the relationship between frequency, length, tension and diameter in "Two New Sciences" (1638). The earliest violin makers, though highly skilled, did not advance any scientific knowledge of the acoustics of stringed instruments.
During the nineteenth century, the multi-harmonic sound from a bowed string was first studied in detail by the French physicist Félix Savart. The German physicist Hermann von Helmholtz investigated the physics of the plucked string, and showed that the bowed string travelled in a triangular shape with the apex moving at a constant speed.
The violin's modes of vibration were researched in Germany during the 1930s by Hermann Backhaus and his student Hermann Meinel, whose work included the investigation of frequency responses of violins. Understanding of the acoustical properties of violins was developed by F.A. Saunders in the 1930s and 40s, work that was continued over the following decades by Saunders and his assistant Carleen Hutchins, and also Werner Lottermoser, Jürgen Meyer, and Simone Sacconi. Hutchins' work dominated the field of violin acoustics for twenty years from the 1960s onwards, until it was superseded by the use of modal analysis, a technique that was, according to the acoustician George Bissinger, "of enormous importance for understanding [the] acoustics of the violin".
Strings.
The open strings of a violin are of the same length from the bridge to the nut of the violin, but vary in pitch because they have different masses per unit length. Both ends of a violin string are essentially stationary when it vibrates, allowing for the creation of standing waves (eigenmodes), caused by the superposition of two sine waves travelling past each other.
A vibrating string does not produce a single frequency. The sound may be described as a combination of a fundamental frequency and its overtones, which cause the sound to have a quality that is individual to the instrument, known as the timbre. The timbre is affected by the number and comparative strength of the overtones (harmonics) present in a tone. Even though they are produced at the same time, only the fundamental frequency—which has the greatest amplitude—is heard. The violin is unusual in that it produces frequencies beyond the upper audible limit for humans.
The fundamental frequency and overtones of the resulting sound depend on the material properties of the string: tension, length, and mass, as well as damping effects and the stiffness of the string. Violinists stop a string with a left-hand fingertip, shortening its playing length. Most often the string is stopped against the violin's fingerboard, but in some cases a string lightly touched with the fingertip is enough, causing an artificial harmonic to be produced. Stopping the string at a shorter length has the effect of raising its pitch, and since the fingerboard is unfretted, any frequency on the length of the string is possible. There is a difference in timbre between notes made on an 'open' string and those produced by placing the left hand fingers on the string, as the finger acts to reduce the number of harmonics present. Additionally, the loudness and timbre of the four strings are not the same.
The fingering positions for a particular interval vary according to the length of the vibrating part of the string. For a violin, the whole tone interval on an open string is about —at the other end of the string, the same interval is less than a third of this size. The equivalent numbers are successively larger for a viola, a cello (violoncello) and a double bass.
When the violinist is directed to pluck a string (Ital. "pizzicato"), the sound produced dies away, or dampens, quickly: the dampening is more striking for a violin compared with the other members of the violin family because of its smaller dimensions, and the effect is greater if an open string is plucked. During a "pizzicato" note, the decaying higher harmonics diminish more quickly than the lower ones.
The vibrato effect on a violin is achieved when muscles in the arm, hand and wrist act to cause the pitch of a note to oscillate. A typical vibrato has a frequency of 6 Hz and causes the pitch to vary by a quarter of a tone.
Tension.
The tension (T) in a stretched string is given by
formula_0
where E is the Young's modulus, S is the cross-sectional area, ΔL is the extension, and L is the string length. For vibrations with a large amplitude, the tension is not constant.
Increasing the tension on a string results in a higher frequency note: the frequency of the vibrating string, which is directly proportional to the square root of the tension, can be represented by the following equation:
formula_1
where f is the fundamental frequency of the string, T is the tension force, L is the string length and M is the mass.
The strings of a violin are attached to adjustable tuning pegs and (with some strings) finer tuners. Tuning each string is done by loosening or tightening it until the desired pitch is reached. The tension of a violin string ranges from .
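A minimal sketch of the frequency formula above is given below; the numerical values for tension, length and mass are assumptions chosen only to be plausible for a violin A string, not figures taken from this article.

```python
# Illustrative sketch: fundamental frequency f = (1/2) * sqrt(T / (L * M)),
# where T is the tension, L the vibrating length and M the string's mass.
# The values below are assumed, order-of-magnitude numbers for an A string.
from math import sqrt

def fundamental_frequency(tension_n, length_m, mass_kg):
    return 0.5 * sqrt(tension_n / (length_m * mass_kg))

# Assumed values: 64 N tension, 0.33 m vibrating length, 0.25 g string mass
print(round(fundamental_frequency(64.0, 0.33, 0.25e-3), 1), "Hz")
```

With these assumed values the result comes out close to 440 Hz, the pitch of the open A string.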
Length.
For any wave travelling at a speed v, travelling a distance λ in one period T,
formula_2.
For a frequency f
formula_3
For the fundamental frequency of a vibrating string on a violin, the string length is λ/2, where λ is the associated wavelength, so
formula_4.
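The relation formula_4 can be illustrated with a short sketch; the wave speed and open-string length used below are assumptions chosen for illustration only.

```python
# Illustrative sketch: f = v / (2 * L).  Stopping the string shortens the
# vibrating length L and so raises the pitch; the numbers are assumed.
def fundamental(v, length):
    return v / (2 * length)

v = 290.0            # assumed transverse wave speed on the string, in m/s
open_length = 0.33   # assumed vibrating length of the open string, in m
print(round(fundamental(v, open_length), 1))          # open-string pitch
print(round(fundamental(v, open_length * 8 / 9), 1))  # stopped a 9:8 whole tone higher
```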
Materials.
String material influences the overtone mix and affects the quality of the sound. Response and ease of articulation are also affected by choice of string materials.
Violin strings were originally made from catgut, which is still available and used by some professional musicians, although strings made of other materials are less expensive to make and are not as sensitive to temperature. Modern strings are made of steel-core, stranded steel-core, or a synthetic material such as Perlon. Violin strings (with the exception of most E strings) are helically wound with metal chosen for its density and cost. The winding on a string increases the mass of the string, alters the tone (quality of sound produced) to make it sound brighter or warmer, and affects the response. A plucked steel string sounds duller than one made of gut, as the action does not deform steel into a pointed shape as easily, and so does not produce as many higher frequency harmonics.
The bridge.
The bridge, which is placed on the top of the body of the violin where the soundboard is highest, supports one end of the strings' playing length. The static forces acting on the bridge are large, and dependent on the tension in the strings: passes down through the bridge as a result of a tension in the strings of . The string 'break' angle made by the string across the bridge affects the downward force, and is typically 13 to 15° to the horizontal.
The bridge transfers energy from the strings to the body of the violin. As a first approximation, it is considered to act as a node, as otherwise the fundamental frequencies and their related harmonics would not be sustained when a note is played, but its motion is critical in determining how energy is transmitted from the strings to the body, and the behaviour of the strings themselves.
One component of its motion is side-to-side rocking as it moves with the string. It may be usefully viewed as a mechanical filter, or an arrangement of masses and "springs" that filters and shapes the timbre of the sound. The bridge is shaped to emphasize a singer's formant at about 3000 Hz.
Since the early 1980s it has been known that high quality violins vibrate better at frequencies around 2–3 kHz because of an effect attributed to the resonance properties of the bridge, now referred to as the 'bridge-hill' effect.
Muting is achieved by fitting a clip onto the bridge, which absorbs a proportion of the energy transmitted to the body of the instrument. Both a reduction in sound intensity and a different timbre are produced, so that using a mute is not seen by musicians as the main method to use when wanting to play more quietly.
The bow.
A violin can sustain its tone by the process of bowing, when friction causes the string to be pulled sideways by the bow until an opposing force caused by the string's tension becomes great enough to cause the string to slip back. The string returns to its equilibrium position and then moves sideways past this position, after which it receives energy again from the moving bow. The bow consists of a flat ribbon of parallel horse hairs stretched between the ends of a stick, which is generally made of Pernambuco wood, used because of its particular elastic properties. The hair is coated with rosin to provide a controlled 'stick-slip oscillation' as it moves at right angles to the string. In 2004, Jim Woodhouse and Paul Galluzzo of Cambridge University described the motion of a bowed string as being "the only stick-slip oscillation which is reasonably well understood".
The length, weight, and balance point of modern bows are standardized. Players may notice variations in sound and handling from bow to bow, based on these parameters as well as stiffness and moment of inertia. A violinist or violist would naturally tend to play louder when pushing the bow across the string (an 'up-bow'), as the leverage is greater. At its quietest, the instrument has a power of 0.0000038 watts, compared with 0.09 watts for a small orchestra: the range of sound pressure levels of the instrument is from 25 to 30 dB.
Physics of bowing.
Violinists generally bow between the bridge and the fingerboard, and are trained to keep the bow perpendicular to the string. In bowing, the three most prominent factors under the player's immediate control are bow speed, force, and the place where the hair crosses the string (known as the 'sounding point'): a vibrating string with a shorter length causes the sounding point to be positioned closer to the bridge. The player may also vary the amount of hair in contact with the string, by tilting the bow stick more or less away from the bridge. The string twists as it is bowed, which adds a 'ripple' to the waveform: this effect is increased if the string is more massive.
Bowing directly above the fingerboard (Ital. "sulla tastiera") produces what the 20th century American composer and author Walter Piston described as a "very soft, floating quality", caused by the string being forced to vibrate with a greater amplitude. "Sul ponticello"—when the bow is played close to the bridge—is the opposite technique, and produces what Piston described as a "glassy and metallic" sound, due to normally unheard harmonics becoming able to affect the timbre.
Helmholtz motion.
<templatestyles src="Template:Quote_box/styles.css" />
"...The foot d of the ordinate of its highest point moves backwards and forwards with a constant velocity on the horizontal line ab, while the highest point of the string describes in succession the two parabolic arcs ac1b and bc2a, and the string itself is always stretched in the two lines ac1 and bc1 or ac2 and bc2."
Hermann von Helmholtz, "On the Sensations of Tone" (1865).
Modern research on the physics of violins began with Helmholtz, who showed that the shape of the string as it is bowed is in the form of a 'V', with an apex (known as the 'Helmholtz corner') that moves along the main part of the string at a constant speed. Here, the nature of the friction between bow and string changes, and slipping or sticking occurs, depending on the direction the corner is moving. The wave produced rotates as the Helmholtz corner moves along a plucked string, which causes a reduced amount of energy to be transmitted to the bridge when the plane of rotation is not parallel to the fingerboard. Less energy still is supplied when the string is bowed, as a bow tends to dampen any oscillations that are at an angle to the bow hair, an effect enhanced if an uneven bow pressure is applied, e.g. by a novice player.
The Indian physicist C. V. Raman was the first to obtain an accurate model for describing the mechanics of the bowed string, publishing his research in 1918. His model was able to predict the motion described by Helmholtz (known nowadays as Helmholtz motion), but he had to assume that the vibrating string was perfectly flexible, and lost energy when the wave was reflected with a reflection coefficient that depended upon the bow speed. Raman's model was later developed by the mathematicians Joseph Keller and F.G. Friedlander.
Helmholtz and Raman produced models that included sharp cornered waves: the study of smoother corners was undertaken by Cremer and Lazarus in 1968, who showed that significant smoothing occurs (i.e. there are fewer harmonics present) only when normal bowing forces are applied. The theory was further developed during the 1970s and 1980s to produce a digital waveguide model, based on the complex relationship between the bow's velocity and the frictional forces present. The model was a success in simulating Helmholtz motion (including the 'flattening' effect of the motion caused by larger forces), and was later extended to take into account the string's bending stiffness, its twisting motion, and the effect on the string of body vibrations and the distortion of the bow hair. However, the model assumed that the coefficient of friction due to the rosin was solely determined by the bow's speed, and ignored the possibility that the coefficient could depend on other variables. By the early 2000s, the importance of variables such as the energy supplied by friction to the rosin on the bow, and the player's input into the action of the bow were recognised, showing the need for an improved model.
The body.
The body of a violin is oval and hollow, and has two f-shaped holes, called sound holes, located on either side of the bridge. The body must be strong enough to support the tension from the strings, but also light and thin enough to vibrate properly. It is made of two arched wooden plates known as the belly and the backplate, whose sides are formed by thin curved ribs. It acts as a sound box to couple the vibration of strings to the surrounding air, making it audible. In comparison, the strings, which move almost no air, are silent.
The existence of expensive violins is dependent on small differences in their physical behaviour in comparison with cheaper ones. Their construction, and especially the arching of the belly and the backplate, has a profound effect on the overall sound quality of the instrument, and its many different resonant frequencies are caused by the nature of the wooden structure. The different parts all respond differently to the notes that are played, displaying what Carleen Hutchins described as 'wood resonances'. The response of the string can be tested by detecting the motion produced by the current through a metal string when it is placed in an oscillating magnetic field.
Such tests have shown that the optimum 'main wood resonance' (the wood resonance with the lowest frequency) occurs between 392 and 494 Hz, equivalent to a tone below and above A4.
The ribs are reinforced at their edges with lining strips, which provide extra gluing surface where the plates are attached. The wooden structure is filled, glued and varnished using materials which all contribute to a violin's characteristic sound. The air in the body also acts to enhance the violin's resonating properties, which are affected by the volume of enclosed air and the size of the f-holes.
The belly and the backplate can display modes of vibration when they are forced to vibrate at particular frequencies. The many modes that exist can be found using fine dust or sand, sprinkled on the surface of a violin-shaped plate. When a mode is found, the dust accumulates at the (stationary) nodes: elsewhere on the plate, where it is oscillating, the dust fails to appear. The patterns produced are named after the German physicist Ernst Chladni, who first developed this experimental technique.
Modern research has used sophisticated techniques such as holographic interferometry, which enables analysis of the motion of the violin surface to be measured, a method first developed by scientists in the 1960s, and the finite element method, where discrete parts of the violin are studied with the aim of constructing an accurate simulation. The British physicist Bernard Richardson has built virtual violins using these techniques. At East Carolina University, the American acoustician George Bissinger has used laser technology to produce frequency responses that have helped him to determine how the efficiency and damping of the violin's vibrations depend on frequency. Another technique, known as modal analysis, involves the use of 'tonal copies' of old instruments to compare a new instrument with an older one. The effects of changing the new violin in the smallest way can be identified, with the aim of replicating the tonal response of the older model.
The bass bar and the sound post.
A bass bar and a sound post concealed inside the body both help transmit sound to the back of the violin, with the sound post also serving to support the structure. The bass bar is glued to the underside of the top, whilst the sound post is held in place by friction. The bass bar was invented to strengthen the structure, and is positioned directly below one of the bridge's feet. Near the foot of the bridge, but not directly below it, is the sound post.
When the bridge receives energy from the strings, it rocks, with the sound post acting as a pivot and the bass bar moving with the plate as the result of leverage. This behaviour enhances the violin tone quality: if the sound post's position is adjusted, or if the forces acting on it are changed, the sound produced by the violin can be adversely affected. Together they make the shape of the violin body asymmetrical, which allows different vibrations to occur, causing the timbre to become more complex.
In addition to the normal modes of the body structure, the enclosed air in the body exhibits Helmholtz resonance modes as it vibrates.
Wolf tones.
Bowing is an example of resonance where maximum amplification occurs at the natural frequency of the system, and not the forcing frequency, as the bow has no periodic force. A wolf tone is produced when small changes in the fundamental frequency—caused by the motion of the bridge—become too great, and the note becomes unstable. A sharp resonance response from the body of a cello (and occasionally a viola or a violin) produces a wolf tone, an unsatisfactory sound that repeatedly appears and disappears. A correctly positioned suppressor can remove the tone by reducing the resonance at that frequency, without dampening the sound of the instrument at other frequencies.
Comparison with other members of the violin family.
The physics of the viola are the same as that of the violin, and the construction and acoustics of the cello and the double bass are similar.
The viola is a larger version of the violin, and has on average a total body length of , with strings tuned a fifth lower than a violin (with a length of about ). The viola's larger size is not proportionally great enough to correspond to the strings being pitched as they are, which contributes to its different timbre. Violists need to have hands large enough to be able to accomplish fingering comfortably. The C string has been described by Piston as having a timbre that is "powerful and distinctive", but perhaps in part because the sound it produces is easily covered, the viola is not so frequently used in the orchestra as a solo instrument. According to the American physicist John Rigden, the lower notes of the viola (along with the cello and the double bass) suffer in strength and quality. This is because typical resonant frequencies for a viola lie between the natural frequencies of the middle open strings, and are too high to reinforce the frequencies of the lower strings. To correct this problem, Rigden calculated that a viola would need strings that were half as long again as on a violin, which would make the instrument inconvenient to play.
The cello, with an overall length of , is pitched an octave below the viola. The proportionally greater thickness of its body means that its timbre is not adversely affected by having dimensions that do not correspond to the pitch of its open strings, as is the case with the viola.
The double bass, in comparison with the other members of the family, is more pointed where the belly is joined by the neck, possibly to compensate for the strain caused by the tension of the strings, and is fitted with cogs for tuning the strings. The average overall length of an orchestral bass is . The back can be arched or flat. The bassist's fingers have to stretch twice as far as a cellist's, and greater force is required to press them against the finger-board. The pizzicato tone, which is 'rich' sounding due to the slow speed of vibrations, is changeable according to which of the associated harmonies are more dominant. The technical capabilities of the double bass are limited. Quick passages are seldom written for it; they lack clarity because of the time required for the strings to vibrate. The double bass is the foundation of the whole orchestra and therefore musically of great importance. According to John Rigden, a double bass would need to be twice as large as its present size for its bowed notes to sound powerful enough to be heard over an orchestra.
{
"math_id": 0,
"text": " T = ES {\\frac {{\\Delta}L}{L}}"
},
{
"math_id": 1,
"text": " f = {1 \\over 2} \\sqrt{\\frac T{LM}}"
},
{
"math_id": 2,
"text": " v = {{\\lambda} \\over T}"
},
{
"math_id": 3,
"text": " f = {\\frac 1{T}}= {\\frac v{{\\lambda}}}"
},
{
"math_id": 4,
"text": " f = {\\frac v{2L}}"
}
]
| https://en.wikipedia.org/wiki?curid=7625922 |
76264 | Winamp | Media player for Microsoft Windows
Winamp is a media player for Microsoft Windows originally developed by Justin Frankel and Dmitry Boldyrev by their company Nullsoft, which they later sold to AOL in 1999 for $80 million. It was then acquired in 2014 by Radionomy, now known as the Llama Group. Since version 2 it has been distributed as freemium software and supports extensibility with plug-ins and skins, and features music visualization, playlists and a media library, supported by a large online community.
Version 1 of Winamp was released in 1997, and quickly grew popular with over 3 million downloads, paralleling the developing trend of MP3 (music) file sharing. Winamp 2.0 was released on September 8, 1998. The 2.x versions were widely used and made Winamp one of the most downloaded Windows applications. By 2000, Winamp had over 25 million registered users and by 2001 it had 60 million users. A poor reception to the 2002 rewrite, Winamp3, was followed by the release of Winamp 5 in 2003, and a later release of version 5.5 in 2007. A now-discontinued version for Android was also released, along with early counterparts for MS-DOS and Macintosh.
After a five-year hiatus, Winamp 5.8 (written as Winamp 5.formula_0) was leaked to the public in 2018 before its eventual release by Radionomy; development has since resumed with the latest version 5.9.2 released on April 26, 2023. Its developer Radionomy has since rebranded as Llama Group and launched a streaming service that allowed users to support artists by buying perks or NFTs. The service launched on the web in April 2023, followed by beta apps for Android and iOS in July 2023. On May 16, 2024, Llama Group announced that Winamp would be going partially open source on September 24, 2024.
*"Input": decodes specific file formats.
*"Output": sends data to specific devices or files.
*"Visualization": provides sound activated graphics.
*"DSP/Effect": manipulates audio for special effects.
*"General Purpose" plug-ins add convenience or UI features ("Media Library", "alarm clock", or "pause when logged out").
*"Media Library" plug-ins add functions to the Media Library plug-in.
*"Portables" plug-ins support portable media players.
Plug-in development support increased Winamp's flexibility – for example, the creation of specialized plug-ins for game console music files such as NSF, USF, GBS, GSF, SID, VGM, SPC, PSF, and PSF2.
History.
Initial releases.
Winamp was first released in 1997, when Justin Frankel and Dmitry Boldyrev, formerly students at the University of Utah, integrated their Windows user interface with the Advanced Multimedia Products ("AMP") MP3 file playback engine. The name Winamp (originally spelled WinAMP) was a portmanteau of "Windows" and "AMP". The minimalist WinAMP 0.20a was released as freeware on April 21, 1997.
Its windowless, menu bar-only interface showed only play (open), stop, pause, and unpause functions. A file specified on the command line or dropped onto its icon would be played. MP3 decoding was performed by the AMP decoding engine developed by Advanced Multimedia Products co-founder Tomislav Uzelac, which was free for non-commercial use. It was compatible with Windows 95 and Windows NT 4.0. Winamp was the second real-time MP3 player for Windows, the first being WinPlay3.
WinAMP 0.92 was released as freeware in May 1997. Within the standard Windows frame and menu bar, it had the beginnings of the "classic" Winamp GUI: a dark gray rectangle with silver 3D-effect transport buttons, a red/green volume slider, time displayed in a green LED font, with track name, MP3 bitrate, and "mixrate" in green. Overlength titles appeared as slowly scrolling text (or "marquee"). The skeuomorphic design somewhat resembles shelf stereos. There was no position bar, and a blank space where the spectrum analyzer and waveform analyzer would later appear. Multiple files on the command line or dropped onto its icon were enqueued in the playlist.
Winamp 1.
Version 1.006 was released June 7, 1997, renamed "Winamp", i.e., with "amp" now in lowercase. It showed a spectrum analyzer and color-changing volume slider, but no waveform display. The AMP non-commercial license was included in its help menu.
According to Tomislav Uzelac, Frankel licensed the AMP 0.7 engine on June 1, 1997. Frankel formally founded Nullsoft Inc. in January 1998 and continued development of Winamp, which changed from freeware to $10 shareware. Although paying the $10 fee unlocked no extra features, Winamp's popularity and warm reception brought Nullsoft $100,000 a month that year from $10 paper checks mailed in by paying users.
In March, Brian Litman, managing co-founder with Uzelac of Advanced Multimedia Products, which by then had been merged into PlayMedia Systems, sent a cease-and-desist letter to Nullsoft, claiming unlawful use of AMP. Nullsoft responded that they had replaced AMP with Nitrane, Nullsoft's proprietary decoder, but PlayMedia disputed this. Third-party reviews found that Nitrane had bugs that caused MP3s to play back incorrectly, adding unstable tones to the playback, and that it therefore violated the ISO standard. This also suggests that Nitrane was unlikely to have been based on the AMP software, and was more likely a hastily written MP3 decoder that did not concern itself with standards compliance.
Version 1.90, released March 31, 1998, was the first release as a general-purpose audio player, and documented on the Winamp website as supporting plugins, of which it included two input plugins ("MOD" and "MP3") and a visualization plugin.
The installer for Version 1.91, released 18 days later, included "wave", "cdda", and "Windows tray handling" plugins, as well as the famous Wesley Willis-inspired DEMO.MP3 file "Winamp, it really whips the llama's ass". "Mike the Llama" is the company mascot.
By July 1998, Winamp's various versions had been downloaded over three million times.
Winamp 2.
Winamp 2.0 was released on September 8, 1998. The new version improved the usability of the playlist, made the equalizer more accurate, and introduced more plug-ins. The modular windows for playlist and equalizer now matched the player's skin and could be moved around and be separated or "docked" to each other anywhere in any order.
The 2.x versions were widely used and made Winamp one of the most downloaded pieces of software for Windows. By the end of 1998, there were already over 60 plugins and hundreds of skins made for the software.
PlayMedia filed a federal lawsuit against Nullsoft in March 1999. In May 1999, PlayMedia was granted an injunction by Federal Judge A. Howard Matz against distribution of Nitrane by Nullsoft, and the same month the lawsuit was settled out-of-court with licensing and confidentiality agreements. Soon after, Nullsoft switched to an ISO decoder from the Fraunhofer-Gesellschaft, the developers of the MP3 format.
Winamp 2.10, released March 24, 1999, included a new version of the "Llama" "demo.mp3" featuring a musical sting and bleating.
Nullsoft was purchased by AOL in June 1999 for $80 million in stock, with Nullsoft becoming a subsidiary. AOL itself merged with Time Warner in 2000.
Nullsoft relaunched the Winamp-specific winamp.com in December 1999 to provide easier access to skins, plug-ins, streaming audio, song downloads, forums, and developer resources.
As of June 22, 2000, Winamp surpassed 25 million registrants.
Winamp3.
The next major Winamp version, Winamp3 (so spelled to include "mp3" in the name and to mark its separation from the Winamp 2 codebase), was released on August 9, 2002. It was a complete rewrite of version 2, newly based on the Wasabi application framework, which offered additional functionality and flexibility. Winamp3 was developed parallel to Winamp 2, but "many users found it consumed too many system resources and was unstable (or even lacked some valued functionality, such as the ability to count or find the total duration of tracks in a playlist)". Winamp3 had no backward compatibility with Winamp 2 plugins, and the SHOUTcast sourcing plugin was not supported. No Winamp3 version of SHOUTcast was ever released.
In response to users reverting to Winamp 2, Nullsoft continued the development of Winamp 2 to versions 2.9 and 2.91 in 2003, even alluding to it humorously. The beta versions 2.92 and 2.95 were released with the inclusion of some of the functionality of the upcoming Winamp 5. During this period the Wasabi cross-platform application framework and skinnable GUI toolkit was derived from parts of the Winamp3 source code. For Linux, Nullsoft released an alpha version of Winamp3 on October 9, 2001, but has not updated it despite continued user interest.
During this time Winamp faced stiff competition from Apple's iTunes.
Winamp 5.
Winamp 5 was based on the Winamp 2 codebase, but with Winamp3 features such as modern skins incorporated via a plugin, thus incorporating the main advantages of both products. Regarding the omission of a version 4, Nullsoft joked that "nobody wants to see a Winamp 4 skin" ("4 skin" being a pun on foreskin). It was also joked that "Winamp 5 is so good they skipped a number" and "Winamp 2+3=5". Winamp 5.0 was released in December 2003. A blue themed "Modern" skin became the default interface. The media library was improved, and CD burning and ripping were introduced, among other additions.
The original Nullsoft team quit in 2004. As of version 5.1, Winamp development is credited to Ben Allison (Benski) and Maksim Tyrtyshny.
From version 5.2 onwards, support for synchronizing with an iPod and other portable music players is built-in. This was developed by Will Fisher, as a re-write of the open source ml_ipod plug-in.
Winamp 5.5.
Winamp 5.5: The 10th Anniversary Edition was released on October 10, 2007, ten years after the first release of Winamp (a preview version had been released on September 10, 2007). New features to the player included album art support, improved localization support (with several officially localized Winamp releases, including German, Polish, Russian, and French), and a new default interface skin called "Bento" which unlike the previous skins is a unified player and media library in one window as opposed to a multi-window interface. This version dropped support for Windows 9x.
Winamp 5.6.
Winamp 5.6 was released in November 2010 and features Android Wi-Fi support and direct mouse wheel support. Fraunhofer AAC codec with VBR encoding support was implemented. Moreover, the option to write ratings to tags (for MP3, WMA/WMV, Ogg, and FLAC) was added. Hungarian and Indonesian installer translations and language packs were added.
With the release of Winamp version 5.66 on November 20, 2013, AOL announced that Winamp.com would shut down on December 20, 2013, and Winamp would cease to be offered for download after that date.
Five days later, version 5.666 was released with the "Pro" and "Full" installers being one and the same, in the process removing OpenCandy, Emusic, AOL Search, and AOL Toolbar from the installation bundle. This was announced to be the last release of Winamp from AOL/Nullsoft.
Winamp 5.7.
There was a Winamp 5.7 beta program for an invitation-based Winamp Cloud feature, which would let Winamp play a user's entire cloud-stored music library across all supported devices. This feature would have allowed AOL to provide a music locker service that would essentially compete with other online music lockers. The beta program was cancelled months before the announcement to shut down the Winamp project.
Acquisition by Radionomy.
On November 20, 2013, AOL announced that it would shut down Winamp.com on December 20, 2013, and the software would no longer be available for download nor supported by the company after that date. The following day, an unofficial report surfaced that Microsoft was in talks with AOL to acquire Nullsoft. Despite AOL's announcement, the Winamp site was not shut down as planned, and on January 14, 2014, it was officially announced that Belgian online radio aggregator Radionomy had bought the Nullsoft brand, which includes Winamp and SHOUTcast. No financial details were publicly announced. However, TechCrunch has reported that the sale of Winamp and Shoutcast is worth between $5 and $10 million, with AOL taking a 12% stake (a financial, not strategic, investment) in Radionomy in the process.
Radionomy relaunched the Winamp website, and it was available for download again. In December 2015, Vivendi bought a majority stake in Radionomy.
Following Radionomy's acquisition, no new releases would officially surface until Winamp 5.8 in 2018.
Winamp 5.8.
In September 2018, it was reported that a Winamp 5.8 beta build 3563 was leaked to various file-sharing sites. The leaked build, bearing a build date of October 26, 2016, would be the first public build under Radionomy's umbrella, with changes including compatibility with Windows 8.1, 10 and 11, and the removal of the paid Winamp Pro.
Following the leak, Radionomy officially released Winamp 5.8 build 3660 on October 18, 2018.
Winamp 5.9.
Winamp 5.9 was released on September 9, 2022, with mostly under-the-hood improvements. The development team migrated the project from Visual Studio 2008 to Visual Studio 2019, in addition to improving support for Windows 11, high-resolution audio, and playback of HTTPS streams. The minimum supported operating system was increased to Windows 7 SP1.
On December 6, 2022, Winamp 5.9.1 was released, adding a music NFT playback feature. Users are able to add music NFTs on Ethereum and Polygon to the media library by connecting to the Metamask wallet.
In April 2023, Winamp 5.9.2 was officially released, which, according to the developers themselves, is a minor update to the previous version.
Winamp service.
On October 15, 2018, Radionomy's CEO Alexandre Saboundjian announced that a new version of Winamp – then called Winamp 6 – would be released in 2019. The new version launched on April 13, 2023 as an online service. The platform features Winamp Player, a music streaming service with plans to integrate with other music platforms such as Spotify and to play local audio files. Another feature of the new platform is Winamp Fanzone, where artists can upload and license their music for commercial use, and listeners can support artists directly by buying perks, such as early access to new songs or NFTs.
On other platforms.
Android.
"Winamp for Android" is a mobile version for the Android (version 2.1) operating system, released in beta in October 2010 with a stable release in December 2010. It includes syncing with Winamp desktop (ver. 5.59 beta+) over USB or Wi-Fi. It was received with some enthusiasm in the consumer blog press. The app was removed from the Play Store in 2014.
It was reported in 2018 by TechCrunch that a redesigned Android app was planned alongside the announcement of the development of Winamp 6.
An app for the Winamp service was released in beta for Android in July 2023.
Macintosh.
In 1997, Nullsoft also released MacAmp, an Apple Macintosh equivalent of Winamp.
In October 2011, "Winamp Sync for Mac" was introduced as a beta release. It is the first Winamp version for the Mac OS X platform and runs under version 10.6 and above. Its focus is on syncing the Winamp Library to Winamp for Android and the iTunes Music Library (hence the name, "Winamp Sync for Mac"). Nonetheless, a full Winamp Library and player features are included. The developer's blog stated that the "Winamp Sync for Mac Beta" would pave the way for future Winamp-related development on Mac and a fully featured media player as Winamp on Windows. However no further development occurred.
Linux.
An early alpha preview of Winamp3 for desktop Linux was developed in October 2001, but the project was not pursued. Nonetheless some versions of Winamp for Windows are functional using Wine.
MS-DOS.
DOSamp for MS-DOS operating systems was released in 1997. The software was soon abandoned by Nullsoft to focus on the Windows version (Winamp).
iOS.
In July 2023, a beta version of a Winamp service was released via TestFlight for the iOS mobile platform.
Easter eggs.
Winamp has historically included a number of Easter eggs: hidden features that are accessible via undocumented operations. One example is an image of Justin Frankel, one of Winamp's original authors, hidden in Winamp's About dialog box. The included Easter eggs have changed with versions of Winamp, and over thirty have been documented elsewhere.
Derivative works.
"Unagi" is the codename for the media playback engine derived from Winamp core technologies. AOL announced in 2004 that Unagi would be incorporated into "AOL Media Player (AMP)", in development. After beta testing, "AMP" was discontinued in 2005, but portions lived on in AOL's Web-based player.
XMMS, xmms2, qmmp and Audacious are free and open source music players created as clones of Winamp. Some of these even support skins and plug-ins designed for Winamp.
An HTML5 and JavaScript-based web player resembling the graphical user interface of Winamp 2 was developed by programmer Jordan Eldredge in 2018.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\infin"
}
]
| https://en.wikipedia.org/wiki?curid=76264 |
7626743 | Atomic ratio | Measure of the ratio of atoms of one kind (i) to another kind (j)
The atomic ratio is a measure of the ratio of atoms of one kind (i) to another kind (j). A closely related concept is the atomic percent (or at.%), which gives the percentage of one kind of atom relative to the total number of atoms. The molecular equivalents of these concepts are the molar fraction, or molar percent.
Atoms.
Mathematically, the "atomic percent" is
formula_0 %
where "N"i are the number of atoms of interest and "N"tot are the total number of atoms, while the "atomic ratio" is
formula_1
For example, the "atomic percent" of hydrogen in water (H2O) is at.%H2O
2/3 x 100 ≈ 66.67%, while the "atomic ratio" of hydrogen to oxygen is "A"H:O
2:1.
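A minimal sketch of these two definitions, using the water example above (the function names are ad hoc choices, not standard notation):

```python
# Illustrative sketch of atomic percent and atomic ratio for a composition
# given as counts of atoms, e.g. water H2O = {"H": 2, "O": 1}.
def atomic_percent(composition, element):
    total = sum(composition.values())
    return composition[element] / total * 100

def atomic_ratio(composition, element_i, element_j):
    return composition[element_i] / composition[element_j]

water = {"H": 2, "O": 1}
print(atomic_percent(water, "H"))     # ~66.67
print(atomic_ratio(water, "H", "O"))  # 2.0, i.e. H:O = 2:1
```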
Isotopes.
Another application is in radiochemistry, where this may refer to isotopic ratios or isotopic abundances. Mathematically, the "isotopic abundance" is
formula_2
where "N"i are the number of atoms of the isotope of interest and "N"tot is the total number of atoms, while the "atomic ratio" is
formula_3
For example, the "isotopic ratio" of deuterium (D) to hydrogen (H) in heavy water is roughly D:H
1:7000 (corresponding to an "isotopic abundance" of 0.00014%).
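Converting an isotopic ratio into an isotopic abundance is a one-line calculation; the sketch below applies it to the D:H figure quoted above.

```python
# Illustrative sketch: a ratio i:j of 1:7000 corresponds to an abundance of
# N_i / (N_i + N_j), expressed here as a percentage (about 0.014%).
def abundance_from_ratio(n_i, n_j):
    return n_i / (n_i + n_j) * 100

print(round(abundance_from_ratio(1, 7000), 4), "%")
```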
Doping in laser physics.
In laser physics, however, the "atomic ratio" may refer to the doping ratio or the doping fraction.
formula_4
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{atomic \\ percent} \\ (\\mathrm{i}) = \\frac{N_\\mathrm{i}}{N_\\mathrm{tot}} \\times 100 \\ "
},
{
"math_id": 1,
"text": " \\mathrm{atomic \\ ratio} \\ (\\mathrm{i:j}) = \\mathrm{atomic \\ percent} \\ (\\mathrm{i}) : \\mathrm{atomic \\ percent} \\ (\\mathrm{j}) \\ ."
},
{
"math_id": 2,
"text": " \\mathrm{isotopic \\ abundance} \\ (\\mathrm{i}) = \\frac{N_\\mathrm{i}}{N_\\mathrm{tot}} \\ ,"
},
{
"math_id": 3,
"text": " \\mathrm{isotopic \\ ratio} \\ (\\mathrm{i:j}) = \\mathrm{isotopic \\ percent} \\ (\\mathrm{i}) : \\mathrm{isotopic \\ percent} \\ (\\mathrm{j}) \\ ."
},
{
"math_id": 4,
"text": "\\mathrm \\frac{N_\\mathrm{atoms \\ of \\ dopant}}{N_\\mathrm{atoms \\ of \\ solution \\ which \\ can \\ be \\ substituted \\ with \\ the \\ dopant}}"
}
]
| https://en.wikipedia.org/wiki?curid=7626743 |
762696 | Poincaré–Birkhoff–Witt theorem | Explicitly describes the universal enveloping algebra of a Lie algebra
In mathematics, more specifically in the theory of Lie algebras, the Poincaré–Birkhoff–Witt theorem (or PBW theorem) is a result giving an explicit description of the universal enveloping algebra of a Lie algebra. It is named after Henri Poincaré, Garrett Birkhoff, and Ernst Witt.
The terms "PBW type theorem" and "PBW theorem" may also refer to various analogues of the original theorem, comparing a filtered algebra to its associated graded algebra, in particular in the area of quantum groups.
Statement of the theorem.
Recall that any vector space "V" over a field has a basis; this is a set "S" such that any element of "V" is a unique (finite) linear combination of elements of "S". In the formulation of Poincaré–Birkhoff–Witt theorem we consider bases of which the elements are totally ordered by some relation which we denote ≤.
If "L" is a Lie algebra over a field K, let "h" denote the canonical K-linear map from "L" into the universal enveloping algebra "U"("L").
Theorem. Let "L" be a Lie algebra over K and "X" a totally ordered basis of "L". A "canonical monomial" over "X" is a finite sequence ("x"1, "x"2 ..., "x""n") of elements of "X" which is non-decreasing in the order ≤, that is, "x"1 ≤"x"2 ≤ ... ≤ "x""n". Extend "h" to all canonical monomials as follows: if ("x"1, "x"2, ..., "x""n") is a canonical monomial, let
formula_0
Then "h" is injective on the set of canonical monomials and the image of this set formula_1 forms a basis for "U"("L") as a K-vector space.
Stated somewhat differently, consider "Y" = "h"("X"). "Y" is totally ordered by the induced ordering from "X". The set of monomials
formula_2
where "y"1 <"y"2 < ... < "y""n" are elements of "Y", and the exponents are "non-negative", together with the multiplicative unit 1, form a basis for "U"("L"). Note that the unit element 1 corresponds to the empty canonical monomial. The theorem then asserts that these monomials form a basis for "U"("L") as a vector space. It is easy to see that these monomials span "U"("L"); the content of the theorem is that they are linearly independent.
The multiplicative structure of "U"("L") is determined by the structure constants in the basis "X", that is, the coefficients formula_3 such that
formula_4
This relation allows one to reduce any product of "y"'s to a linear combination of canonical monomials: The structure constants determine "yiyj – yjyi", i.e. what to do in order to change the order of two elements of "Y" in a product. This fact, modulo an inductive argument on the degree of (non-canonical) monomials, shows one can always achieve products where the factors are ordered in a non-decreasing fashion.
The Poincaré–Birkhoff–Witt theorem can be interpreted as saying that the end result of this reduction is "unique" and does not depend on the order in which one swaps adjacent elements.
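The reduction process just described can be made concrete for a small example. The sketch below is illustrative only and not part of the theorem's formal development: it straightens words of generators in the universal enveloping algebra of sl2, using the ordered basis e < h < f with brackets [e, f] = h, [h, e] = 2e, [h, f] = -2f; all names and data structures are ad hoc choices.

```python
# Illustrative sketch: straightening a word of Lie algebra generators into
# PBW canonical (non-decreasing) monomials.  Elements of U(L) are stored as
# dicts mapping words (tuples of generator names) to rational coefficients,
# and an out-of-order adjacent pair is rewritten via x*y = y*x + [x, y].
from fractions import Fraction

ORDER = {"e": 0, "h": 1, "f": 2}

# BRACKET[(x, y)] gives [x, y] as {basis element: coefficient} for sl2
BRACKET = {
    ("e", "f"): {"h": 1}, ("f", "e"): {"h": -1},
    ("h", "e"): {"e": 2}, ("e", "h"): {"e": -2},
    ("h", "f"): {"f": -2}, ("f", "h"): {"f": 2},
}

def add(dst, word, coeff):
    dst[word] = dst.get(word, Fraction(0)) + coeff
    if dst[word] == 0:
        del dst[word]

def straighten(element):
    """Rewrite {word: coeff} so every word is non-decreasing in ORDER."""
    result = {}
    work = dict(element)
    while work:
        word, coeff = work.popitem()
        for i in range(len(word) - 1):
            x, y = word[i], word[i + 1]
            if ORDER[x] > ORDER[y]:          # out of order: x y = y x + [x, y]
                add(work, word[:i] + (y, x) + word[i + 2:], coeff)
                for z, c in BRACKET.get((x, y), {}).items():
                    add(work, word[:i] + (z,) + word[i + 2:], coeff * c)
                break
        else:                                 # word is already canonical
            add(result, word, coeff)
    return result

# Example: f*e = e*f - h in U(sl2), i.e. coefficient +1 on ('e','f'), -1 on ('h',)
print(straighten({("f", "e"): Fraction(1)}))
```

Each rewriting step either shortens a word (the bracket term) or reduces the number of out-of-order pairs at the same length, so the procedure terminates; the theorem is precisely the statement that the resulting canonical expression is independent of the order in which the swaps are performed.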
Corollary. If "L" is a Lie algebra over a field, the canonical map "L" → "U"("L") is injective. In particular, any Lie algebra over a field is isomorphic to a Lie subalgebra of an associative algebra.
More general contexts.
Already at its earliest stages, it was known that K could be replaced by any commutative ring, provided that "L" is a free K-module, i.e., has a basis as above.
To extend to the case when "L" is no longer a free K-module, one needs to make a reformulation that does not use bases. This involves replacing the space of monomials in some basis with the symmetric algebra, "S"("L"), on "L".
In the case that K contains the field of rational numbers, one can consider the natural map from "S"("L") to "U"("L"), sending a monomial formula_5, for formula_6, to the element
formula_7
Then, one has the theorem that this map is an isomorphism of K-modules.
Still more generally and naturally, one can consider "U"("L") as a filtered algebra, equipped with the filtration given by specifying that formula_5 lies in filtered degree formula_8. The map "L" → "U"("L") of K-modules canonically extends to a map "T"("L") → "U"("L") of algebras, where "T"("L") is the tensor algebra on "L" (for example, by the universal property of tensor algebras), and this is a filtered map equipping "T"("L") with the filtration putting "L" in degree one (actually, "T"("L") is graded). Then, passing to the associated graded, one gets a canonical morphism "T"("L") → gr"U"("L"), which kills the elements "vw" - "wv" for "v, w" ∈ "L", and hence descends to a canonical morphism "S"("L") → gr"U"("L"). Then, the (graded) PBW theorem can be reformulated as the statement that, under certain hypotheses, this final morphism is an isomorphism "of commutative algebras".
This is not true for all K and "L" (see, for example, the last section of Cohn's 1961 paper), but is true in many cases. These include the aforementioned ones, where either "L" is a free K-module (hence whenever K is a field), or K contains the field of rational numbers. More generally, the PBW theorem as formulated above extends to cases such as where (1) "L" is a flat K-module, (2) "L" is torsion-free as an abelian group, (3) "L" is a direct sum of cyclic modules (or all its localizations at prime ideals of K have this property), or (4) K is a Dedekind domain. See, for example, the 1969 paper by Higgins for these statements.
Finally, it is worth noting that, in some of these cases, one also obtains the stronger statement that the canonical morphism "S"("L") → gr"U"("L") lifts to a K-module isomorphism "S"("L") → "U"("L"), without taking associated graded. This is true in the first cases mentioned, where "L" is a free K-module, or K contains the field of rational numbers, using the construction outlined here (in fact, the result is a coalgebra isomorphism, and not merely a K-module isomorphism, equipping both "S"("L") and "U"("L") with their natural coalgebra structures such that formula_9 for "v" ∈ "L"). This stronger statement, however, might not extend to all of the cases in the previous paragraph.
History of the theorem.
In four papers from the 1880s Alfredo Capelli proved, in different terminology, what is now known as the Poincaré–Birkhoff–Witt theorem in the case of formula_10, the general linear Lie algebra, while Poincaré later stated it more generally in 1900. Armand Borel says that these results of Capelli were "completely forgotten for almost a century", and he does not suggest that Poincaré was aware of Capelli's result.
Ton-That and Tran have investigated the history of the theorem. They found that the majority of the sources before Bourbaki's 1960 book call it the Birkhoff–Witt theorem. Following this old tradition, Fofanova in her encyclopaedic entry says that Poincaré obtained the first variant of the theorem. She further says that the theorem was subsequently completely demonstrated by Witt and Birkhoff. It appears that pre-Bourbaki sources were not familiar with Poincaré's paper.
Birkhoff and Witt do not mention Poincaré's work in their 1937 papers. Cartan and Eilenberg call the theorem "Poincaré-Witt Theorem" and attribute the complete proof to Witt. Bourbaki were the first to use all three names in their 1960 book. Knapp presents a clear illustration of the shifting tradition. In his 1986 book he calls it "Birkhoff-Witt Theorem", while in his later 1996 book he switches to "Poincaré-Birkhoff-Witt Theorem".
It is not clear whether Poincaré's result was complete. Ton-That and Tran conclude that "Poincaré had discovered and completely demonstrated this theorem at least thirty-seven years before Witt and Birkhoff". On the other hand, they point out that "Poincaré makes several statements without bothering to prove them". Their own proofs of all the steps are rather long according to their admission. Borel states that Poincaré "more or less proved the Poincaré-Birkhoff-Witt theorem" in 1900.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " h(x_1, x_2, \\ldots, x_n) = h(x_1) \\cdot h(x_2) \\cdots h(x_n). "
},
{
"math_id": 1,
"text": " \\{h(x_1, \\ldots, x_n) | x_1 \\leq ... \\leq x_n \\} "
},
{
"math_id": 2,
"text": " y_1^{k_1} y_2^{k_2} \\cdots y_\\ell^{k_\\ell} "
},
{
"math_id": 3,
"text": "c_{u,v}^x"
},
{
"math_id": 4,
"text": " [u,v] = \\sum_{x \\in X} c_{u,v}^x\\; x. "
},
{
"math_id": 5,
"text": " v_1 v_2 \\cdots v_n"
},
{
"math_id": 6,
"text": "v_i \\in L"
},
{
"math_id": 7,
"text": "\\frac{1}{n!} \\sum_{\\sigma \\in S_n} v_{\\sigma(1)} v_{\\sigma(2)} \\cdots v_{\\sigma(n)}."
},
{
"math_id": 8,
"text": "\\leq n"
},
{
"math_id": 9,
"text": "\\Delta(v) = v \\otimes 1 + 1 \\otimes v"
},
{
"math_id": 10,
"text": "L=\\mathfrak{gl}_n,"
}
]
| https://en.wikipedia.org/wiki?curid=762696 |
76276121 | Model collapse | Degradation of AI models trained on synthetic data
Model collapse refers to a phenomenon where machine learning models gradually degrade due to errors coming from uncurated training on synthetic data, meaning the outputs of another model including prior versions of itself.
Shumailov et al. coined the term and described two specific stages to the degradation: early model collapse and late model collapse. In early model collapse, the model begins losing information about the tails of the distribution – mostly affecting minority data. Later work highlighted that early model collapse is hard to notice, since overall performance may appear to improve, while the model loses performance on minority data. In late model collapse, the model loses a significant proportion of its performance, confusing concepts and losing most of its variance.
Mechanism.
Synthetic data, although theoretically indistinguishable from real data, is almost always biased, inaccurate, not well representative of the real data, harmful, or presented out-of-context. Using such data as training data leads to issues with quality and reliability of the trained model.
Model collapse occurs for three main reasons – "functional approximation errors", "sampling errors", and "learning errors". Importantly, it happens in even the simplest of models, where not all of the error sources are present. In more complex models the errors often compound, leading to faster collapse.
Disagreement over real-world impact.
Some researchers and commentators on model collapse warn that the phenomenon could fundamentally threaten future generative AI development: as AI-generated data is shared on the Internet, it will inevitably end up in future training datasets, which are often crawled from the Internet. If training on synthetic data inevitably leads to model collapse, this could therefore pose a difficult problem.
However, recently, other researchers have disagreed with this argument, showing that if synthetic data accumulates alongside human-generated data, model collapse is avoided. The researchers argue that data accumulating over time is a more realistic description of reality than deleting all existing data every year, and that the real-world impact of model collapse may not be as catastrophic as feared.
An alternative branch of the literature investigates the use of machine learning detectors and watermarking to identify model generated data and filter it out.
Mathematical models of the phenomenon.
1D Gaussian model.
A first attempt has been made at illustrating collapse for the simplest possible model - a single dimensional normal distribution fit using unbiased estimators of mean and variance, computed on samples from the previous generation.
To make this more precise, we say that the original data follow a normal distribution formula_0, and we possess formula_1 samples formula_2 for formula_3. Denoting a general sample formula_4 as sample formula_5 at generation formula_6, the next generation model is estimated using the sample mean and variance:
formula_7
This leads to a conditionally normal next generation model formula_8. In theory, this is enough to calculate the full distribution of formula_4. However, even after the first generation, the full distribution is no longer normal: it follows a variance-gamma distribution.
To continue the analysis, instead of writing the probability density function at each generation, it is possible to explicitly construct them in terms of independent random variables using Cochran's theorem. To be precise, formula_9 and formula_10 are independent, with formula_11 and formula_12, following a Gamma distribution. Denoting with formula_13 Gaussian random variables distributed with formula_14 and with formula_15 random variables distributed with formula_16, it turns out to be possible to write samples at each generation as
formula_17
formula_18
and more generally
formula_19
Note that these are not joint distributions, as formula_20 and formula_21 depend directly on formula_22, but when considering formula_23 on its own the formula above provides all the information about the full distribution.
To analyse the model collapse, we can first calculate variance and mean of samples at generation formula_24. This would tell us what kind of distributions we expect to arrive at after formula_25 generations. It is possible to find its exact value in closed form, but the mean and variance of the square root of a gamma distribution are expressed in terms of gamma functions, making the result quite unwieldy. It is nonetheless possible to expand all results to second order in each of formula_26, assuming each sample size to be large. It is then possible to show that
formula_27
If all sample sizes formula_28 are constant, this diverges linearly as formula_29:
formula_30
This is the same scaling as for a single dimensional Gaussian random walk. However, divergence of the variance of formula_23 does not directly provide any information about the corresponding estimates of formula_31 and formula_32, particularly how different they are from the original formula_33 and formula_34. It turns out to be possible to calculate the distance between the true distribution and the approximated distribution at step formula_35, using the Wasserstein-2 distance (which is also sometimes referred to as risk):
formula_36
formula_37
This directly shows why model collapse occurs in this simple model. Due to errors from re-sampling the approximated distribution, each generation ends up corresponding to a new step in a random walk of model parameters. For a constant sample size at each generation, the average distance from the starting point diverges, and in order for the end distribution approximation to be accurate, or for the distance to be finite, the sampling rate formula_38 needs to increase superlinearly, i.e. one needs to collect increasingly more samples over time, perhaps quadratically. However, even in that case the expected distance after formula_24 steps remains non-zero and the only case in which it does in fact end up being zero is when sampling is infinite at each step. Overall, this only shows us how far on average one ends up from the original distribution, but the process can only "terminate" if the estimated variance at a certain generation becomes small enough, effectively turning the distribution into a delta function. This is shown to occur for a general Gaussian model in the subsection below.
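The random-walk behaviour described above can be observed directly in a small simulation. The sketch below is an illustration under assumed parameter values, not code from the cited work: it repeatedly refits a one-dimensional Gaussian to samples drawn from the previous generation's fit and tracks the squared Wasserstein-2 distance to the original distribution.

```python
# Illustrative sketch of the 1-D Gaussian model-collapse loop.  Each
# generation draws M samples from the current fit, then re-estimates the
# mean and (unbiased) variance from those samples.  With constant M the
# fitted mean drifts like a random walk and the fitted variance can shrink
# towards zero, collapsing the distribution.
import numpy as np

def simulate(mu=0.0, sigma=1.0, M=100, generations=2000, seed=0):
    rng = np.random.default_rng(seed)
    mu_i, sigma_i = mu, sigma
    history = []
    for _ in range(generations):
        samples = rng.normal(mu_i, sigma_i, size=M)
        mu_i = samples.mean()            # sample mean
        sigma_i = samples.std(ddof=1)    # square root of the unbiased variance
        # squared Wasserstein-2 distance between two 1-D Gaussians
        w2 = (mu - mu_i) ** 2 + (sigma - sigma_i) ** 2
        history.append((mu_i, sigma_i, w2))
    return history

final_mu, final_sigma, final_w2 = simulate()[-1]
print(final_mu, final_sigma, final_w2)
```

With a constant sample size the accumulated squared distance grows roughly linearly in the number of generations, while letting M grow with the generation index (for example quadratically) keeps it bounded, in line with the scaling discussed above.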
N-D Gaussian model.
Furthermore, in the case of multidimensional model with fully synthetic data, exact collapse can be shown.
Linear regression.
In the case of a linear regression model, scaling laws and bounds on learning can be found.
Statistical language model.
In the case of a linear softmax classifier for next token prediction, exact bounds on learning with even a partially synthetic dataset can be found.
Impact on large language models.
In the context of large language models, research has found that training LLMs on predecessor-generated text (that is, on synthetic data produced by previous models) causes a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, an effect that is especially pronounced for tasks demanding high levels of creativity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X^0 \\sim \\mathcal{N}(\\mu,\\sigma^2)"
},
{
"math_id": 1,
"text": "M_0"
},
{
"math_id": 2,
"text": "X^0_j"
},
{
"math_id": 3,
"text": "j = 1, \\dots , M_0"
},
{
"math_id": 4,
"text": "X^i_j"
},
{
"math_id": 5,
"text": "j = 1, \\dots, M_i"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\mu_{i+1} = \\frac{1}{M_i}\\sum_j X^i_j; \\quad \\sigma_{i+1}^2 = \\frac{1}{M_i-1}\\sum _j(X^i_j-\\mu_{i+1})^2."
},
{
"math_id": 8,
"text": "X^{i+1}_j|\\mu_{i+1},\\;\\sigma_{i+1}\\sim \\mathcal{N}(\\mu_{i+1},\\sigma_{i+1}^2)"
},
{
"math_id": 9,
"text": "\\mu_1"
},
{
"math_id": 10,
"text": "\\sigma_1"
},
{
"math_id": 11,
"text": "\\mu_1 \\sim \\mathcal{N}(\\mu, \\frac{\\sigma^2}{M_0})"
},
{
"math_id": 12,
"text": "(M_0-1)\\sigma_1^2 \\sim \\sigma^2\\Gamma\\left(\\frac{M_0-1}{2}, \\frac12\\right)"
},
{
"math_id": 13,
"text": "Z"
},
{
"math_id": 14,
"text": "\\mathcal{N}(0, 1)"
},
{
"math_id": 15,
"text": "S^i"
},
{
"math_id": 16,
"text": "\\frac{1}{M_{i-1}-1}\\Gamma\\left(\\frac{M_{i-1}-1}{2}, \\frac12\\right)"
},
{
"math_id": 17,
"text": "X^0_j = \\mu + \\sigma Z^0_j,\n"
},
{
"math_id": 18,
"text": "X^1_j = \\mu + \\frac{\\sigma}{\\sqrt{M_0}}Z^1 + \\sigma\\sqrt{S^1}Z^1_j,\n "
},
{
"math_id": 19,
"text": "X^n_j = \\mu + \\frac{\\sigma}{\\sqrt{M_0}}Z^1 + \\frac{\\sigma}{\\sqrt{M_1}}\\sqrt{S^1}Z^2 + \\dots \n+ \\frac{\\sigma}{\\sqrt{M_{n-1}}}\\sqrt{S^1\\times\\dots\\times S^{n-1}}Z^n+\\sigma\\sqrt{S^1\\times\\dots\\times S^{n}}Z^n_j."
},
{
"math_id": 20,
"text": "Z^n"
},
{
"math_id": 21,
"text": "S^n"
},
{
"math_id": 22,
"text": "Z^{n-1}_j"
},
{
"math_id": 23,
"text": "X^n_j"
},
{
"math_id": 24,
"text": "n"
},
{
"math_id": 25,
"text": "n\n"
},
{
"math_id": 26,
"text": "1/M_i"
},
{
"math_id": 27,
"text": "\\frac{1}{\\sigma^2}\\operatorname{Var}(X^n_j) = \\frac{1}{M_0}+\\frac{1}{M_1}+ \\dots + \\frac{1}{M_{n-1}}+1 + \\mathcal{O}\\left(M_i^{-2}\\right)."
},
{
"math_id": 28,
"text": "M_i = M"
},
{
"math_id": 29,
"text": "n\\to\\infty"
},
{
"math_id": 30,
"text": "\\operatorname{Var}(X^n_j) = \\sigma^2\\left(1+\\frac{n}{M}\\right); \\quad \\mathbb{E}(X^n_j) = \\mu."
},
{
"math_id": 31,
"text": "\\mu_{n+1}"
},
{
"math_id": 32,
"text": "\\sigma_{n+1}"
},
{
"math_id": 33,
"text": "\\mu"
},
{
"math_id": 34,
"text": "\\sigma"
},
{
"math_id": 35,
"text": "n+1"
},
{
"math_id": 36,
"text": "\\mathbb{E}\\left[\\mathbb{W}^2_2\\left(\\mathcal{N}(\\mu,\\sigma^2),\\mathcal{N}(\\mu_{n+1},\\sigma^2_{n+1})\\right)\\right]=\\frac{3}{2}\\sigma^2\\left(\\frac{1}{M_0}+\\frac{1}{M_1}+ \\dots + \\frac{1}{M_{n}}\\right)+\\mathcal{O}\\left(M_i^{-2}\\right),"
},
{
"math_id": 37,
"text": "\\operatorname{Var}\\left[\\mathbb{W}^2_2\\left(\\mathcal{N}(\\mu,\\sigma^2),\\mathcal{N}(\\mu_{n+1},\\sigma^2_{n+1})\\right)\\right]=\\frac{1}{2}\\sigma^4\\left(\\frac{3}{M_0^2}+\\frac{3}{M_1^2}+ \\dots + \\frac{3}{M_{n}^2} + \\sum_{i\\neq j}\\frac{4}{M_iM_j}\\right)+\\mathcal{O}\\left(M_i^{-3}\\right).\n\n"
},
{
"math_id": 38,
"text": "M_i"
}
]
| https://en.wikipedia.org/wiki?curid=76276121 |
762954 | Barycentric coordinate system | Coordinate system that is defined by points instead of vectors
In geometry, a barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex (a triangle for points in a plane, a tetrahedron for points in three-dimensional space, etc.). The barycentric coordinates of a point can be interpreted as masses placed at the vertices of the simplex, such that the point is the center of mass (or "barycenter") of these masses. These masses can be zero or negative; they are all positive if and only if the point is inside the simplex.
Every point has barycentric coordinates, and their sum is never zero. Two tuples of barycentric coordinates specify the same point if and only if they are proportional; that is to say, if one tuple can be obtained by multiplying the elements of the other tuple by the same non-zero number. Therefore, barycentric coordinates are either considered to be defined up to multiplication by a nonzero constant, or normalized for summing to unity.
Barycentric coordinates were introduced by August Möbius in 1827. They are special homogeneous coordinates. Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, to affine coordinates (see below).
Barycentric coordinates are particularly useful in triangle geometry for studying properties that do not depend on the angles of the triangle, such as Ceva's theorem, Routh's theorem, and Menelaus's theorem. In computer-aided design, they are useful for defining some kinds of Bézier surfaces.
Definition.
Let formula_0 be "n" + 1 points in a Euclidean space, a flat or an affine space formula_1 of dimension n that are affinely independent; this means that there is no affine subspace of dimension "n" − 1 that contains all the points, or, equivalently that the points define a simplex. Given any point formula_2 there are scalars formula_3 that are not all zero, such that
formula_4
for any point O. (As usual, the notation formula_5 represents the translation vector or free vector that maps the point A to the point B.)
The elements of a ("n" + 1) tuple formula_6 that satisfies this equation are called "barycentric coordinates" of P with respect to formula_7 The use of colons in the notation of the tuple means that barycentric coordinates are a sort of homogeneous coordinates, that is, the point is not changed if all coordinates are multiplied by the same nonzero constant. Moreover, the barycentric coordinates are also not changed if the auxiliary point O, the origin, is changed.
The barycentric coordinates of a point are unique up to a scaling. That is, two tuples formula_6 and formula_8 are barycentric coordinates of the same point if and only if there is a nonzero scalar formula_9 such that formula_10 for every i.
In some contexts, it is useful to constrain the barycentric coordinates of a point so that they are unique. This is usually achieved by imposing the condition
formula_11
or equivalently by dividing every formula_12 by the sum of all formula_13 These specific barycentric coordinates are called normalized or absolute barycentric coordinates. Sometimes, they are also called affine coordinates, although this term refers commonly to a slightly different concept.
Sometimes, it is the normalized barycentric coordinates that are called "barycentric coordinates". In this case the above defined coordinates are called "homogeneous barycentric coordinates".
With the above notation, the homogeneous barycentric coordinates of Ai are all zero, except the one of index i. When working over the real numbers (the above definition is also used for affine spaces over an arbitrary field), the points all of whose normalized barycentric coordinates are nonnegative form the convex hull of formula_14 which is the simplex that has these points as its vertices.
With the above notation, a tuple formula_15 such that
formula_16
does not define any point, but the vector
formula_17
is independent from the origin O. As the direction of this vector is not changed if all formula_12 are multiplied by the same scalar, the homogeneous tuple formula_6 defines a direction of lines, that is a point at infinity. See below for more details.
Relationship with Cartesian or affine coordinates.
Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, affine coordinates. For a space of dimension n, these coordinate systems are defined relative to a point O, the origin, whose coordinates are zero, and n points formula_18 whose coordinates are zero except that of index i that equals one.
A point has coordinates
formula_19
for such a coordinate system if and only if its normalized barycentric coordinates are
formula_20
relatively to the points formula_21
The main advantage of barycentric coordinate systems is to be symmetric with respect to the "n" + 1 defining points. They are therefore often useful for studying properties that are symmetric with respect to "n" + 1 points. On the other hand, distances and angles are difficult to express in general barycentric coordinate systems, and when they are involved, it is generally simpler to use a Cartesian coordinate system.
Relationship with projective coordinates.
Homogeneous barycentric coordinates are also strongly related with some projective coordinates. However this relationship is more subtle than in the case of affine coordinates, and, for being clearly understood, requires a coordinate-free definition of the projective completion of an affine space, and a definition of a projective frame.
The "projective completion" of an affine space of dimension n is a projective space of the same dimension that contains the affine space as the complement of a hyperplane. The projective completion is unique up to an isomorphism. The hyperplane is called the hyperplane at infinity, and its points are the points at infinity of the affine space.
Given a projective space of dimension n, a "projective frame" is an ordered set of "n" + 2 points that are not contained in the same hyperplane. A projective frame defines a projective coordinate system such that the coordinates of the ("n" + 2)th point of the frame are all equal, and, otherwise, all coordinates of the ith point are zero, except the ith one.
When constructing the projective completion from an affine coordinate system, one commonly defines it with respect to a projective frame consisting of the intersections with the hyperplane at infinity of the coordinate axes, the origin of the affine space, and the point that has all its affine coordinates equal to one. This implies that the points at infinity have their last coordinate equal to zero, and that the projective coordinates of a point of the affine space are obtained by completing its affine coordinates by one as ("n" + 1)th coordinate.
When one has "n" + 1 points in an affine space that define a barycentric coordinate system, this is another projective frame of the projective completion that is convenient to choose. This frame consists of these points and their centroid, that is the point that has all its barycentric coordinates equal. In this case, the homogeneous barycentric coordinates of a point in the affine space are the same as the projective coordinates of this point. A point is at infinity if and only if the sum of its coordinates is zero. This point is in the direction of the vector defined at the end of the Definition section above.
Barycentric coordinates on triangles.
In the context of a triangle, barycentric coordinates are also known as area coordinates or areal coordinates, because the coordinates of "P" with respect to triangle "ABC" are equivalent to the (signed) ratios of the areas of "PBC", "PCA" and "PAB" to the area of the reference triangle "ABC". Areal and trilinear coordinates are used for similar purposes in geometry.
Barycentric or areal coordinates are extremely useful in engineering applications involving triangular subdomains. These make analytic integrals often easier to evaluate, and Gaussian quadrature tables are often presented in terms of area coordinates.
Consider a triangle formula_22 with vertices formula_23, formula_24, formula_25 in the x,y-plane, formula_26. One may regard points in formula_26 as vectors, so it makes sense to add or subtract them and multiply them by scalars.
Each triangle formula_22 has a "signed area" or "sarea", which is plus or minus its area:
formula_27
The sign is plus if the path from formula_28 to formula_29 to formula_30 then back to formula_28 goes around the triangle in a counterclockwise direction. The sign is minus if the path goes around in a clockwise direction.
Let formula_31 be a point in the plane, and let formula_32 be its "normalized barycentric coordinates" with respect to the triangle formula_22, so
formula_33
and
formula_34
Normalized barycentric coordinates formula_32 are also called "areal coordinates" because they represent ratios of signed areas of triangles:
formula_35
One may prove these ratio formulas based on the facts that a triangle is half of a parallelogram, and the area of a parallelogram is easy to compute using a determinant.
Specifically, let
formula_36
formula_37 is a parallelogram because its pairs of opposite sides, represented by the pairs of displacement vectors formula_38, and formula_39, are parallel and congruent.
Triangle formula_22 is half of the parallelogram formula_40, so twice its signed area is equal to the signed area of the parallelogram, which is given by the formula_41 determinant formula_42whose "columns" are the displacement vectors formula_43 and formula_44:
formula_45
Expanding the determinant, using its "alternating" and "multilinear" properties, one obtains
formula_46
so
formula_47
Similarly,
formula_48,
To obtain the ratio of these signed areas, express formula_31 in the second formula in terms of its barycentric coordinates:
formula_49
The barycentric coordinates are normalized so formula_50, hence formula_51 . Plug that into the previous line to obtain
formula_52
Therefore
formula_53.
Similar calculations prove the other two formulas
formula_54
formula_55.
Trilinear coordinates formula_56 of formula_31 are signed distances from formula_31 to the lines BC, AC, and AB, respectively. The sign of formula_57 is positive if formula_31 and formula_28 lie on the same side of BC, negative otherwise. The signs of formula_58 and formula_59 are assigned similarly. Let
formula_60, formula_61, formula_62.
Then
formula_63
where, as above, sarea stands for signed area. All three signs are plus if triangle ABC is positively oriented, minus otherwise. The relations between trilinear and barycentric coordinates are obtained by substituting these formulas into the above formulas that express barycentric coordinates as ratios of areas.
Switching back and forth between the barycentric coordinates and other coordinate systems makes some problems much easier to solve.
Conversion between barycentric and Cartesian coordinates.
Edge approach.
Given a point formula_64 in a triangle's plane one can obtain the barycentric coordinates formula_65, formula_66 and formula_67 from the Cartesian coordinates formula_68 or vice versa.
We can write the Cartesian coordinates of the point formula_64 in terms of the Cartesian components of the triangle vertices formula_69, formula_70, formula_71 where formula_72 and in terms of the barycentric coordinates of formula_64 as
formula_73
That is, the Cartesian coordinates of any point are a weighted average of the Cartesian coordinates of the triangle's vertices, with the weights being the point's barycentric coordinates summing to unity.
To find the reverse transformation, from Cartesian coordinates to barycentric coordinates, we first substitute formula_74 into the above to obtain
formula_75
Rearranging, this is
formula_76
This linear transformation may be written more succinctly as
formula_77
where formula_9 is the vector of the first two barycentric coordinates, formula_64 is the vector of Cartesian coordinates, and formula_78 is a matrix given by
formula_79
Now the matrix formula_78 is invertible, since formula_80 and formula_81 are linearly independent (if this were not the case, then formula_69, formula_70, and formula_71 would be collinear and would not form a triangle). Thus, we can rearrange the above equation to get
formula_82
Finding the barycentric coordinates has thus been reduced to finding the 2×2 inverse matrix of formula_78, an easy problem.
Explicitly, the formulae for the barycentric coordinates of point formula_64 in terms of its Cartesian coordinates ("x, y") and in terms of the Cartesian coordinates of the triangle's vertices are:
formula_83 In the last line of this equation, note the identity formula_84.
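The explicit formulas above translate directly into code. The following sketch (function and variable names are chosen here for illustration) computes the normalized barycentric coordinates of a point from its Cartesian coordinates:

```python
def barycentric_from_cartesian(p, a, b, c):
    """Return (l1, l2, l3) of point p with respect to triangle (a, b, c).

    All arguments are 2-D points given as (x, y) tuples.
    """
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det_t = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)   # det(T)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det_t
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det_t
    return l1, l2, 1.0 - l1 - l2

# The centroid of any triangle should give (1/3, 1/3, 1/3):
print(barycentric_from_cartesian((1/3, 1/3), (0, 0), (1, 0), (0, 1)))
```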
Vertex approach.
Another way to solve the conversion from Cartesian to barycentric coordinates is to write the relation in the matrix form formula_85 with formula_86 and formula_87 i.e. formula_88 To get the unique normalized solution we need to add the condition formula_89. The barycentric coordinates are thus the solution of the linear system formula_90 which is formula_91 where formula_92 is twice the signed area of the triangle. The area interpretation of the barycentric coordinates can be recovered by applying Cramer's rule to this linear system.
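Equivalently, the 3×3 linear system of the vertex approach can be solved numerically; a minimal sketch using NumPy (assumed available) is:

```python
import numpy as np

def barycentric_vertex_approach(p, r1, r2, r3):
    """Solve [[1,1,1],[x1,x2,x3],[y1,y2,y3]] @ lam = [1, x, y] for lam."""
    A = np.array([[1.0, 1.0, 1.0],
                  [r1[0], r2[0], r3[0]],
                  [r1[1], r2[1], r3[1]]])
    rhs = np.array([1.0, p[0], p[1]])
    return np.linalg.solve(A, rhs)       # (lambda_1, lambda_2, lambda_3)

print(barycentric_vertex_approach((0.25, 0.25), (0, 0), (1, 0), (0, 1)))
# -> [0.5  0.25 0.25]
```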
Conversion between barycentric and trilinear coordinates.
A point with trilinear coordinates "x" : "y" : "z" has barycentric coordinates "ax" : "by" : "cz" where "a", "b", "c" are the side lengths of the triangle. Conversely, a point with barycentrics formula_93 has trilinears formula_94
Equations in barycentric coordinates.
The three sides "a, b, c" respectively have equations
formula_95
The equation of a triangle's Euler line is
formula_96
Using the previously given conversion between barycentric and trilinear coordinates, the various other equations given in Trilinear coordinates#Formulas can be rewritten in terms of barycentric coordinates.
Distance between points.
The displacement vector of two normalized points formula_97 and formula_98 is
formula_99
The distance d between P and Q, or the length of the displacement vector formula_100 is
formula_101
where "a, b, c" are the sidelengths of the triangle. The equivalence of the last two expressions follows from formula_102 which holds because
formula_103
The barycentric coordinates of a point can be calculated based on distances "d""i" to the three triangle vertices by solving the equation
formula_104
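The distance formula above can be checked numerically. In the following sketch the triangle and the two points are arbitrary illustrative choices, and the result is compared with the ordinary Euclidean distance between the corresponding Cartesian points:

```python
import math

# Triangle with vertices A, B, C and side lengths a = |BC|, b = |CA|, c = |AB|.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)

def to_cartesian(lam):
    """Map normalized barycentric coordinates to a Cartesian point."""
    return tuple(lam[0]*A[i] + lam[1]*B[i] + lam[2]*C[i] for i in range(2))

P = (0.2, 0.3, 0.5)                         # normalized barycentric coordinates
Q = (0.6, 0.1, 0.3)
x, y, z = (P[i] - Q[i] for i in range(3))   # displacement vector, x + y + z = 0
d2 = -a*a*y*z - b*b*z*x - c*c*x*y           # the formula above
print(math.sqrt(d2), math.dist(to_cartesian(P), to_cartesian(Q)))  # should agree
```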
Applications.
Determining location with respect to a triangle.
Although barycentric coordinates are most commonly used to handle points inside a triangle, they can also be used to describe a point outside the triangle. If the point is not inside the triangle, then we can still use the formulas above to compute the barycentric coordinates. However, since the point is outside the triangle, at least one of the coordinates will violate our original assumption that formula_105. In fact, given any point in Cartesian coordinates, we can use this fact to determine where this point is with respect to a triangle.
If a point lies in the interior of the triangle, all of the barycentric coordinates lie in the open interval formula_106 If a point lies on an edge of the triangle but not at a vertex, one of the area coordinates formula_107 (the one associated with the opposite vertex) is zero, while the other two lie in the open interval formula_106 If the point lies on a vertex, the coordinate associated with that vertex equals 1 and the others equal zero. Finally, if the point lies outside the triangle at least one coordinate is negative.
Summarizing,
Point formula_64 lies inside the triangle if and only if formula_108.
formula_109 lies on the edge or corner of the triangle if formula_110 and formula_111.
Otherwise, formula_64 lies outside the triangle.
In particular, if a point lies on the far side of a line the barycentric coordinate of the point in the triangle that is not on the line will have a negative value.
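A sketch of the resulting point-location test follows (the conversion helper from the earlier sketch is repeated so the example is self-contained; the tolerance eps is an arbitrary choice):

```python
def barycentric(p, a, b, c):
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / d
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / d
    return l1, l2, 1.0 - l1 - l2

def locate(p, a, b, c, eps=1e-12):
    """Classify p relative to triangle (a, b, c) by the signs of its coordinates."""
    lam = barycentric(p, a, b, c)
    if all(l > eps for l in lam):
        return "inside"
    if all(l >= -eps for l in lam):
        return "on an edge or vertex"
    return "outside"

tri = ((0, 0), (1, 0), (0, 1))
print([locate(p, *tri) for p in [(0.2, 0.2), (0.5, 0.5), (2.0, 2.0)]])
# -> ['inside', 'on an edge or vertex', 'outside']
```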
Interpolation on a triangular unstructured grid.
If formula_112 are known quantities, but the values of f inside the triangle defined by formula_113 are unknown, they can be approximated using linear interpolation. Barycentric coordinates provide a convenient way to compute this interpolation. If formula_64 is a point inside the triangle with barycentric coordinates formula_65, formula_66, formula_67, then
formula_114
In general, given any unstructured grid or polygon mesh, this kind of technique can be used to approximate the value of f at all points, as long as the function's value is known at all vertices of the mesh. In this case, we have many triangles, each corresponding to a different part of the space. To interpolate a function f at a point formula_64, first a triangle must be found that contains formula_64. To do so, formula_64 is transformed into the barycentric coordinates of each triangle. If some triangle is found such that the coordinates satisfy formula_115, then the point lies in that triangle or on its edge (explained in the previous section). Then the value of formula_116 can be interpolated as described above.
These methods have many applications, such as the finite element method (FEM).
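A minimal sketch of this interpolation on a single triangle (the vertex values of f are made-up illustrative numbers; NumPy is assumed available):

```python
import numpy as np

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # r1, r2, r3
f_verts = np.array([1.0, 3.0, 5.0])                       # f(r1), f(r2), f(r3)

def interpolate(p):
    """Linear interpolation of f at p via barycentric coordinates."""
    A = np.vstack([np.ones(3), verts.T])                  # [[1,1,1],[x1,x2,x3],[y1,y2,y3]]
    lam = np.linalg.solve(A, np.array([1.0, p[0], p[1]]))
    return lam @ f_verts                                  # lam1*f1 + lam2*f2 + lam3*f3

print(interpolate((0.25, 0.25)))   # 0.5*1 + 0.25*3 + 0.25*5 = 2.5
```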
Integration over a triangle or tetrahedron.
The integral of a function over the domain of the triangle can be awkward to compute in a Cartesian coordinate system. One generally has to split the triangle up into two halves, and great messiness follows. Instead, it is often easier to make a change of variables to any two barycentric coordinates, e.g. formula_117. Under this change of variables,
formula_118
where A is the area of the triangle. This result follows from the fact that a rectangle in barycentric coordinates corresponds to a quadrilateral in Cartesian coordinates, and the ratio of the areas of the corresponding shapes in the corresponding coordinate systems is given by formula_119. Similarly, for integration over a tetrahedron, instead of breaking up the integral into two or three separate pieces, one could switch to 3D tetrahedral coordinates under the change of variables
formula_120 where V is the volume of the tetrahedron.
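The change of variables can be checked with a simple Monte Carlo estimate; in the following sketch the triangle and integrand are arbitrary illustrative choices:

```python
import random

# The area element is dr = 2A dl1 dl2, and the (l1, l2)-simplex has area 1/2,
# so averaging f over uniform samples of the simplex and multiplying by A
# estimates the integral of f over the triangle.
r1, r2, r3 = (0.0, 0.0), (2.0, 0.0), (0.0, 1.0)
A = 0.5 * abs((r2[0]-r1[0])*(r3[1]-r1[1]) - (r3[0]-r1[0])*(r2[1]-r1[1]))

def f(x, y):
    return x + y                               # arbitrary integrand

random.seed(0)
total, N = 0.0, 200_000
for _ in range(N):
    l1, l2 = random.random(), random.random()
    if l1 + l2 > 1.0:                          # fold back into the simplex
        l1, l2 = 1.0 - l1, 1.0 - l2
    l3 = 1.0 - l1 - l2
    x = l1*r1[0] + l2*r2[0] + l3*r3[0]
    y = l1*r1[1] + l2*r2[1] + l3*r3[1]
    total += f(x, y)
print(A * total / N)   # exact value for this triangle and integrand is 1.0
```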
Examples of special points.
In the homogeneous barycentric coordinate system defined with respect to a triangle formula_22, the following statements about special points of formula_22 hold.
The three vertices A, B, and C have coordinates
formula_121
The centroid has coordinates formula_122
If a, b, c are the edge lengths formula_123, formula_124, formula_125 respectively, formula_126, formula_127, formula_128 are the angle measures formula_129, formula_130, and formula_131 respectively, and s is the semiperimeter of formula_22, then the following statements about special points of formula_22 hold in addition.
The circumcenter has coordinates
formula_132
The orthocenter has coordinates
formula_133
The incenter has coordinates formula_134
The excenters have coordinates
formula_135
The nine-point center has coordinates
formula_136
The Gergonne point has coordinates formula_137.
The Nagel point has coordinates formula_138.
The symmedian point has coordinates formula_139.
Barycentric coordinates on tetrahedra.
Barycentric coordinates may be easily extended to three dimensions. The 3D simplex is a tetrahedron, a polyhedron having four triangular faces and four vertices. Once again, the four barycentric coordinates are defined so that the first vertex formula_69 maps to barycentric coordinates formula_140, formula_141, etc.
This is again a linear transformation, and we may extend the above procedure for triangles to find the barycentric coordinates of a point formula_64 with respect to a tetrahedron:
formula_142
where formula_78 is now a 3×3 matrix:
formula_143
and formula_144 with the corresponding Cartesian coordinates: formula_145 Once again, the problem of finding the barycentric coordinates has been reduced to inverting a 3×3 matrix.
3D barycentric coordinates may be used to decide if a point lies inside a tetrahedral volume, and to interpolate a function within a tetrahedral mesh, in an analogous manner to the 2D procedure. Tetrahedral meshes are often used in finite element analysis because the use of barycentric coordinates can greatly simplify 3D interpolation.
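A sketch of the analogous 3D computation (NumPy assumed; the standard tetrahedron is chosen here purely for illustration):

```python
import numpy as np

def tetra_barycentric(p, r1, r2, r3, r4):
    """lambda_1..3 from the 3x3 system T lam = p - r4; lambda_4 = 1 - sum."""
    T = np.column_stack([np.subtract(r1, r4),
                         np.subtract(r2, r4),
                         np.subtract(r3, r4)])
    lam123 = np.linalg.solve(T, np.subtract(p, r4))
    return (*lam123, 1.0 - lam123.sum())

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
lam = tetra_barycentric((0.1, 0.2, 0.3), *tet)
print(lam, all(l >= 0 for l in lam))   # all non-negative: the point is inside
```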
Generalized barycentric coordinates.
Barycentric coordinates formula_146 of a point formula_147 that are defined with respect to a finite set of "k" points formula_148 instead of a simplex are called generalized barycentric coordinates. For these, the equation
formula_149
is still required to hold. Usually one uses normalized coordinates, formula_150. As for the case of a simplex, the points with nonnegative normalized generalized coordinates (formula_151) form the convex hull of "x"1, ..., "x""k". If there are more points than in a full simplex (formula_152), the generalized barycentric coordinates of a point are "not" unique, as the defining linear system (here for n = 2) formula_153 is underdetermined. The simplest example is a quadrilateral in the plane. Various kinds of additional restrictions can be used to define unique barycentric coordinates.
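As a sketch of the non-uniqueness, one can compute one particular solution of the underdetermined system, here the minimum-norm least-squares solution for a unit square; this is only one of many possible additional restrictions, and it need not give non-negative coordinates:

```python
import numpy as np

# Square with 4 vertices: the system below (n = 2, k = 4) is underdetermined,
# so the generalized barycentric coordinates of a point are not unique.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
p = np.array([0.25, 0.5])

A = np.vstack([np.ones(len(verts)), verts.T])        # rows: 1, x, y
b = np.array([1.0, p[0], p[1]])
lam, *_ = np.linalg.lstsq(A, b, rcond=None)          # one particular (min-norm) solution
print(lam, A @ lam)                                  # A @ lam reproduces (1, x, y)
```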
Abstraction.
More abstractly, generalized barycentric coordinates express a convex polytope with "n" vertices, regardless of dimension, as the "image" of the standard formula_154-simplex, which has "n" vertices – the map is onto: formula_155 The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; this corresponds to a point not having "unique" generalized barycentric coordinates except when P is a simplex.
Dual to generalized barycentric coordinates are slack variables, which measure by how much margin a point satisfies the linear constraints, and gives an embedding formula_156 into the "f"-orthant, where "f" is the number of faces (dual to the vertices). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized).
This use of the standard formula_154-simplex and "f"-orthant as standard objects that map to a polytope or that a polytope maps into should be contrasted with the use of the standard vector space formula_157 as the standard object for vector spaces, and the standard affine hyperplane formula_158 as the standard object for affine spaces, where in each case choosing a linear basis or affine basis provides an "isomorphism," allowing all vector spaces and affine spaces to be thought of in terms of these standard spaces, rather than an onto or one-to-one map (not every polytope is a simplex). Further, the "n"-orthant is the standard object that maps "to" cones.
Applications.
Generalized barycentric coordinates have applications in computer graphics and more specifically in geometric modelling. Often, a three-dimensional model can be approximated by a polyhedron such that the generalized barycentric coordinates with respect to that polyhedron have a geometric meaning. In this way, the processing of the model can be simplified by using these meaningful coordinates. Barycentric coordinates are also used in geophysics. | [
{
"math_id": 0,
"text": "A_0, \\ldots, A_n"
},
{
"math_id": 1,
"text": "\\mathbf A"
},
{
"math_id": 2,
"text": "P\\in \\mathbf A,"
},
{
"math_id": 3,
"text": "a_0, \\ldots, a_n"
},
{
"math_id": 4,
"text": " ( a_0 + \\cdots + a_n ) \\overset{}\\overrightarrow{OP} = a_0 \\overset{}\\overrightarrow {OA_0} + \\cdots + a_n \\overset{}\\overrightarrow {OA_n}, "
},
{
"math_id": 5,
"text": "\\overset{}\\overrightarrow {AB}"
},
{
"math_id": 6,
"text": "(a_0: \\dotsc: a_n)"
},
{
"math_id": 7,
"text": "A_0, \\ldots, A_n."
},
{
"math_id": 8,
"text": "(b_0: \\dotsc: b_n)"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "b_i=\\lambda a_i"
},
{
"math_id": 11,
"text": "\\sum a_i = 1,"
},
{
"math_id": 12,
"text": "a_i"
},
{
"math_id": 13,
"text": "a_i."
},
{
"math_id": 14,
"text": "\\{A_0, \\ldots, A_n\\},"
},
{
"math_id": 15,
"text": "(a_1, \\ldots, a_n)"
},
{
"math_id": 16,
"text": "\\sum_{i=0}^n a_i=0"
},
{
"math_id": 17,
"text": " a_0 \\overset{}\\overrightarrow {OA_0} + \\cdots + a_n \\overset{}\\overrightarrow {OA_n}"
},
{
"math_id": 18,
"text": "A_1, \\ldots, A_n,"
},
{
"math_id": 19,
"text": "(x_1, \\ldots, x_n)"
},
{
"math_id": 20,
"text": "(1-x_1-\\cdots - x_n,x_1, \\ldots, x_n)"
},
{
"math_id": 21,
"text": "O, A_1, \\ldots, A_n."
},
{
"math_id": 22,
"text": "ABC"
},
{
"math_id": 23,
"text": "A=(a_1,a_2)"
},
{
"math_id": 24,
"text": "B=(b_1,b_2)"
},
{
"math_id": 25,
"text": "C=(c_1,c_2)"
},
{
"math_id": 26,
"text": "\\mathbb{R}^2"
},
{
"math_id": 27,
"text": "\\operatorname{sarea}(ABC) = \\pm \\operatorname{area}(ABC)."
},
{
"math_id": 28,
"text": "A"
},
{
"math_id": 29,
"text": "B"
},
{
"math_id": 30,
"text": "C"
},
{
"math_id": 31,
"text": "P"
},
{
"math_id": 32,
"text": "(\\lambda_1,\\lambda_2,\\lambda_3)"
},
{
"math_id": 33,
"text": "P = \\lambda_1 A + \\lambda_2 B + \\lambda_3 C"
},
{
"math_id": 34,
"text": "1 = \\lambda_1 + \\lambda_2 + \\lambda_3."
},
{
"math_id": 35,
"text": "\\begin{align}\\lambda_1 &= \\operatorname{sarea}(PBC)/\\operatorname{sarea}(ABC)\\\\\n \\lambda_2 &= \\operatorname{sarea}(APC)/\\operatorname{sarea}(ABC)\\\\\n \\lambda_3 &= \\operatorname{sarea}(ABP)/\\operatorname{sarea}(ABC).\\end{align}"
},
{
"math_id": 36,
"text": "D = -A+B+C."
},
{
"math_id": 37,
"text": "ABCD"
},
{
"math_id": 38,
"text": "D-C=B-A"
},
{
"math_id": 39,
"text": "D-B=C-A"
},
{
"math_id": 40,
"text": "ABDC"
},
{
"math_id": 41,
"text": "2\\times 2"
},
{
"math_id": 42,
"text": "\\det(B-A,C-A)"
},
{
"math_id": 43,
"text": "B-A"
},
{
"math_id": 44,
"text": "C-A"
},
{
"math_id": 45,
"text": "\\operatorname{sarea}(ABCD)=\\det\\begin{pmatrix}b_1-a_1 & c_1-a_1 \\\\ b_2-a_2 & c_2-a_2\\end{pmatrix}"
},
{
"math_id": 46,
"text": "\\begin{align}\\det(B-A,C-A) &= \\det(B,C)-\\det(A,C)-\\det(B,A)+\\det(A,A) \\\\ \n &= \\det(A,B)+\\det(B,C)+\\det(C,A) \\end{align}"
},
{
"math_id": 47,
"text": "2 \\operatorname{sarea}(ABC) = \\det(A,B)+\\det(B,C)+\\det(C,A)."
},
{
"math_id": 48,
"text": "2 \\operatorname{sarea}(PBC) = \\det(P,B)+\\det(B,C)+\\det(C,P) "
},
{
"math_id": 49,
"text": "\\begin{align}2 \\operatorname{sarea}(PBC) \n &= \\det(\\lambda_1 A + \\lambda_2 B + \\lambda_3 C, B) + \\det(B,C) \n + \\det(C,\\lambda_1 A + \\lambda_2 B + \\lambda_3 C)\\\\\n &= \\lambda_1 \\det(A,B) + \\lambda_3 \\det(C,B) + \\det(B,C) \n + \\lambda_1 \\det(C,A) + \\lambda_2 \\det(C,B)\\\\\n &= \\lambda_1 \\det(A,B) + \\lambda_1 \\det(C,A) \n + (1-\\lambda_2 - \\lambda_3) \\det(B,C) \\end{align}."
},
{
"math_id": 50,
"text": "1 = \\lambda_1 + \\lambda_2 + \\lambda_3"
},
{
"math_id": 51,
"text": "\\lambda_1 = (1-\\lambda_2 - \\lambda_3)"
},
{
"math_id": 52,
"text": "\\begin{align}2 \\operatorname{sarea}(PBC) &= \\lambda_1 (\\det(A,B)+\\det(B,C)+\\det(C,A)) \\\\ \n &= (\\lambda_1)(2 \\operatorname{sarea}(ABC)).\\end{align}"
},
{
"math_id": 53,
"text": "\\lambda_1 = \\operatorname{sarea}(PBC)/\\operatorname{sarea}(ABC)"
},
{
"math_id": 54,
"text": "\\lambda_2 = \\operatorname{sarea}(APC)/\\operatorname{sarea}(ABC)"
},
{
"math_id": 55,
"text": "\\lambda_3 = \\operatorname{sarea}(ABP)/\\operatorname{sarea}(ABC)"
},
{
"math_id": 56,
"text": "(\\gamma_1,\\gamma_2,\\gamma_3)"
},
{
"math_id": 57,
"text": "\\gamma_1"
},
{
"math_id": 58,
"text": "\\gamma_2"
},
{
"math_id": 59,
"text": "\\gamma_3"
},
{
"math_id": 60,
"text": "a = \\operatorname{length}(BC)"
},
{
"math_id": 61,
"text": "b = \\operatorname{length}(CA)"
},
{
"math_id": 62,
"text": "c = \\operatorname{length}(AB)"
},
{
"math_id": 63,
"text": "\\begin{align}\\gamma_1 a &= \\pm 2\\operatorname{sarea}(PBC)\\\\\n \\gamma_2 b &= \\pm 2\\operatorname{sarea}(APC)\\\\\n \\gamma_3 c &= \\pm 2\\operatorname{sarea}(ABP)\\end{align}"
},
{
"math_id": 64,
"text": "\\mathbf{r}"
},
{
"math_id": 65,
"text": "\\lambda_1"
},
{
"math_id": 66,
"text": "\\lambda_2"
},
{
"math_id": 67,
"text": "\\lambda_3"
},
{
"math_id": 68,
"text": "(x, y)"
},
{
"math_id": 69,
"text": "\\mathbf{r}_1"
},
{
"math_id": 70,
"text": "\\mathbf{r}_2"
},
{
"math_id": 71,
"text": "\\mathbf{r}_3"
},
{
"math_id": 72,
"text": "\\mathbf{r}_i = (x_i, y_i)"
},
{
"math_id": 73,
"text": "\\begin{align}\n x &= \\lambda_1 x_1 + \\lambda_2 x_2 + \\lambda_3 x_3 \\\\[2pt]\n y &= \\lambda_1 y_1 + \\lambda_2 y_2 + \\lambda_3 y_3\n\\end{align}"
},
{
"math_id": 74,
"text": "\\lambda_3 = 1 - \\lambda_1 - \\lambda_2"
},
{
"math_id": 75,
"text": "\\begin{align}\n x &= \\lambda_1 x_1 + \\lambda_2 x_2 + (1 - \\lambda_1 - \\lambda_2) x_3 \\\\[2pt]\n y &= \\lambda_1 y_1 + \\lambda_2 y_2 + (1 - \\lambda_1 - \\lambda_2) y_3\n\\end{align}"
},
{
"math_id": 76,
"text": "\\begin{align}\n \\lambda_1(x_1 - x_3) + \\lambda_2(x_2 - x_3) + x_3 - x &= 0 \\\\[2pt]\n \\lambda_1(y_1 - y_3) + \\lambda_2(y_2 -\\, y_3) + y_3 - \\, y &= 0 \n\\end{align}"
},
{
"math_id": 77,
"text": "\n\\mathbf{T} \\cdot \\lambda = \\mathbf{r}-\\mathbf{r}_3\n"
},
{
"math_id": 78,
"text": "\\mathbf{T}"
},
{
"math_id": 79,
"text": "\n\\mathbf{T} = \\left(\\begin{matrix}\nx_1-x_3 & x_2-x_3 \\\\\ny_1-y_3 & y_2-y_3\n\\end{matrix}\\right)\n"
},
{
"math_id": 80,
"text": "\\mathbf{r}_1-\\mathbf{r}_3"
},
{
"math_id": 81,
"text": "\\mathbf{r}_2-\\mathbf{r}_3"
},
{
"math_id": 82,
"text": "\n\\left(\\begin{matrix}\\lambda_1 \\\\ \\lambda_2\\end{matrix}\\right) = \\mathbf{T}^{-1} ( \\mathbf{r}-\\mathbf{r}_3 )\n"
},
{
"math_id": 83,
"text": "\\begin{align}\n \\lambda_1 =&\\ \\frac{(y_2-y_3)(x-x_3) + (x_3-x_2)(y-y_3)}{\\det(\\mathbf T)} \\\\[4pt]\n &= \\frac{(y_2-y_3)(x-x_3) + (x_3-x_2)(y-y_3)}{(y_2-y_3)(x_1-x_3) + (x_3-x_2)(y_1-y_3)} \\\\[4pt]\n&= \\frac{(\\mathbf{r}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_3})}{(\\mathbf{r_1}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_3})} \\\\[12pt]\n \\lambda_2 =&\\ \\frac{(y_3-y_1)(x-x_3) + (x_1-x_3)(y-y_3)}{\\det(\\mathbf T)} \\\\[4pt]\n &= \\frac{(y_3-y_1)(x-x_3) + (x_1-x_3)(y-y_3)}{(y_2-y_3)(x_1-x_3) + (x_3-x_2)(y_1-y_3)} \\\\[4pt]\n&= \\frac{(\\mathbf{r}-\\mathbf{r_3})\\times(\\mathbf{r_3}-\\mathbf{r_1})}{(\\mathbf{r_1}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_3})} \\\\[12pt]\n \\lambda_3 =&\\ 1 - \\lambda_1 - \\lambda_2 \\\\[4pt]\n&= 1-\\frac{(\\mathbf{r}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_1})}{(\\mathbf{r_1}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_3})} \\\\[4pt]\n&= \\frac{(\\mathbf{r}-\\mathbf{r_1})\\times(\\mathbf{r_1}-\\mathbf{r_2 })}{(\\mathbf{r_1}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_3})} \n\\end{align}"
},
{
"math_id": 84,
"text": "(\\mathbf{r_1}-\\mathbf{r_3})\\times(\\mathbf{r_2}-\\mathbf{r_3})=(\\mathbf{r_3}-\\mathbf{r_1})\\times(\\mathbf{r_1}-\\mathbf{r_2})"
},
{
"math_id": 85,
"text": "\n\\mathbf{R} \\boldsymbol{\\lambda} = \\mathbf{r}"
},
{
"math_id": 86,
"text": "\\mathbf{R} = \\left(\\, \\mathbf{r}_1 \\,|\\, \\mathbf{r}_2 \\,|\\, \\mathbf{r}_3 \\right)"
},
{
"math_id": 87,
"text": "\\boldsymbol{\\lambda} = \\left(\\lambda_1,\\lambda_2,\\lambda_3\\right)^\\top,"
},
{
"math_id": 88,
"text": "\n\\begin{pmatrix}\nx_1 & x_2 & x_3\\\\\ny_1 & y_2 & y_3\n\\end{pmatrix}\n\\begin{pmatrix}\n\\lambda_1 \\\\ \\lambda_2 \\\\ \\lambda_3\n\\end{pmatrix} =\n\\begin{pmatrix}x\\\\y\\end{pmatrix}\n"
},
{
"math_id": 89,
"text": "\\lambda_1 + \\lambda_2 + \\lambda_3 = 1"
},
{
"math_id": 90,
"text": "\n\\left(\\begin{matrix}\n1 & 1 & 1 \\\\\nx_1 & x_2 & x_3\\\\\ny_1 & y_2 & y_3\n\\end{matrix}\\right)\n\\begin{pmatrix}\n\\lambda_1 \\\\ \\lambda_2 \\\\ \\lambda_3\n\\end{pmatrix} =\n\\left(\\begin{matrix}\n1\\\\x\\\\y\n\\end{matrix}\\right)\n"
},
{
"math_id": 91,
"text": "\n\\begin{pmatrix}\n\\lambda_1 \\\\ \\lambda_2 \\\\ \\lambda_3\n\\end{pmatrix} = \\frac{1}{2A}\n\\begin{pmatrix}\nx_2y_3-x_3y_2 & y_2-y_3 & x_3-x_2 \\\\\nx_3y_1-x_1y_3 & y_3-y_1 & x_1-x_3 \\\\\nx_1y_2-x_2y_1 & y_1-y_2 & x_2-x_1 \n\\end{pmatrix}\\begin{pmatrix}\n1\\\\x\\\\y\n\\end{pmatrix}\n"
},
{
"math_id": 92,
"text": "\n2A = \\det(1|R) = x_1(y_2-y_3) + x_2(y_3-y_1) + x_3(y_1-y_2)"
},
{
"math_id": 93,
"text": "\\lambda_1 : \\lambda_2 : \\lambda_3"
},
{
"math_id": 94,
"text": "\\lambda_1/a:\\lambda_2/b:\\lambda_3/c."
},
{
"math_id": 95,
"text": "\\lambda_1=0, \\quad \\lambda_2=0, \\quad \\lambda_3=0."
},
{
"math_id": 96,
"text": " \\begin{vmatrix} \\lambda_1 & \\lambda_2 & \\lambda_3 \\\\1 & 1 & 1\\\\\\tan A & \\tan B & \\tan C \\end{vmatrix} =0."
},
{
"math_id": 97,
"text": "P=(p_1,p_2,p_3)"
},
{
"math_id": 98,
"text": "Q=(q_1,q_2,q_3)"
},
{
"math_id": 99,
"text": "\\overset{}\\overrightarrow{P Q}=(p_1-q_1,p_2-q_2,p_3-q_3)."
},
{
"math_id": 100,
"text": "\\overset{}\\overrightarrow{P Q}=(x,y,z),"
},
{
"math_id": 101,
"text": "\\begin{align}\n d^2 &= |PQ|^2 \\\\[2pt]\n &= -a^2yz - b^2zx - c^2xy \\\\[4pt]\n &= \\frac{1}{2} \\left[x^2(b^2+c^2-a^2) + y^2(c^2+a^2-b^2) + z^2(a^2+b^2-c^2)\\right].\n\\end{align}"
},
{
"math_id": 102,
"text": "x+y+z=0,"
},
{
"math_id": 103,
"text": "\\begin{align}\n x+y+z &= (p_1-q_1) + (p_2-q_2) + (p_3-q_3) \\\\[2pt]\n &= (p_1+p_2+p_3) - (q_1+q_2+q_3) \\\\[2pt]\n &= 1 - 1 = 0.\n\\end{align}"
},
{
"math_id": 104,
"text": "\n\\left(\\begin{matrix}\n -c^2 & c^2 & b^2-a^2 \\\\\n -b^2 & c^2-a^2 & b^2 \\\\\n 1 & 1 & 1\n\\end{matrix}\\right)\\boldsymbol{\\lambda} = \\left(\\begin{matrix}\n d^2_A - d^2_B \\\\\n d^2_A - d^2_C \\\\\n 1\n\\end{matrix}\\right)."
},
{
"math_id": 105,
"text": "\\lambda_{1...3}\\geq 0"
},
{
"math_id": 106,
"text": "(0,1)."
},
{
"math_id": 107,
"text": "\\lambda_{1...3}"
},
{
"math_id": 108,
"text": "0 < \\lambda_i < 1 \\;\\forall\\; i \\text{ in } {1,2,3}"
},
{
"math_id": 109,
"text": "\\mathbf{r}"
},
{
"math_id": 110,
"text": "0 \\leq \\lambda_i \\leq 1 \\;\\forall\\; i \\text{ in } {1,2,3}"
},
{
"math_id": 111,
"text": "\\lambda_i = 0\\; \\text {, for some i in } {1, 2, 3}"
},
{
"math_id": 112,
"text": "f(\\mathbf{r}_1),f(\\mathbf{r}_2),f(\\mathbf{r}_3)"
},
{
"math_id": 113,
"text": "\\mathbf{r}_1,\\mathbf{r}_2,\\mathbf{r}_3"
},
{
"math_id": 114,
"text": "f(\\mathbf{r}) \\approx \\lambda_1 f(\\mathbf{r}_1) + \\lambda_2 f(\\mathbf{r}_2) + \\lambda_3 f(\\mathbf{r}_3)"
},
{
"math_id": 115,
"text": "0 \\leq \\lambda_i \\leq 1 \\;\\forall\\; i \\text{ in } 1,2,3"
},
{
"math_id": 116,
"text": "f(\\mathbf{r})"
},
{
"math_id": 117,
"text": "\\lambda_1,\\lambda_2"
},
{
"math_id": 118,
"text": "\n\\int_{T} f(\\mathbf{r}) \\ d\\mathbf{r} = 2A \\int_{0}^{1} \\int_{0}^{1 - \\lambda_2} f(\\lambda_1 \\mathbf{r}_1 + \\lambda_2 \\mathbf{r}_2 +\n(1 - \\lambda_1 - \\lambda_2) \\mathbf{r}_3) \\ d\\lambda_1 \\ d\\lambda_2\n"
},
{
"math_id": 119,
"text": "2A"
},
{
"math_id": 120,
"text": "\n\\int\\int_{T} f(\\mathbf{r}) \\ d\\mathbf{r} \n= 6V \\int_{0}^{1} \\int_{0}^{1 - \\lambda_3} \\int_ {0}^{1-\\lambda_2-\\lambda_3} \nf(\\lambda_1\\mathbf{r}_1 + \\lambda_2\\mathbf{r}_2 +\n\\lambda_3\\mathbf{r}_3 + (1-\\lambda_1-\\lambda_2-\\lambda_3)\\mathbf{r}_4)\n\\ d\\lambda_1 \\ d\\lambda_2 \\ d\\lambda_3\n"
},
{
"math_id": 121,
"text": "\\begin{array}{rccccc}\n A = & 1 &:& 0 &:& 0 \\\\\n B = & 0 &:& 1 &:& 0 \\\\\n C = & 0 &:& 0 &:& 1\n\\end{array}"
},
{
"math_id": 122,
"text": "1:1:1."
},
{
"math_id": 123,
"text": "BC"
},
{
"math_id": 124,
"text": "CA"
},
{
"math_id": 125,
"text": "AB"
},
{
"math_id": 126,
"text": "\\alpha"
},
{
"math_id": 127,
"text": "\\beta"
},
{
"math_id": 128,
"text": "\\gamma"
},
{
"math_id": 129,
"text": "\\angle CAB"
},
{
"math_id": 130,
"text": "\\angle ABC"
},
{
"math_id": 131,
"text": "\\angle BCA"
},
{
"math_id": 132,
"text": "\\begin{array}{rccccc}\n & \\sin 2\\alpha &:& \\sin 2\\beta &:& \\sin 2\\gamma \\\\[2pt]\n =& 1-\\cot\\beta\\cot\\gamma &:& 1-\\cot\\gamma\\cot\\alpha &:& 1-\\cot\\alpha\\cot\\beta \\\\[2pt]\n =& a^2(-a^2+b^2+c^2) &:& b^2(a^2-b^2+c^2) &:& c^2(a^2+b^2-c^2)\n\\end{array}"
},
{
"math_id": 133,
"text": "\\begin{array}{rccccc}\n & \\tan\\alpha &:& \\tan\\beta &:& \\tan\\gamma \\\\[2pt]\n =& a\\cos\\beta\\cos\\gamma &:& b\\cos\\gamma\\cos\\alpha &:& c\\cos\\alpha\\cos\\beta \\\\[2pt]\n =& (a^2+b^2-c^2)(a^2-b^2+c^2) &:& (-a^2+b^2+c^2)(a^2+b^2-c^2) &:& (a^2-b^2+c^2)(-a^2+b^2+c^2)\n\\end{array}"
},
{
"math_id": 134,
"text": "a:b:c=\\sin \\alpha:\\sin \\beta:\\sin \\gamma."
},
{
"math_id": 135,
"text": "\\begin{array}{rrcrcr}\n J_A = & -a &:& b &:& c \\\\\n J_B = & a &:& -b &:& c \\\\ \n J_C = & a &:& b &:& -c \n\\end{array}"
},
{
"math_id": 136,
"text": "\\begin{array}{rccccc}\n & a\\cos(\\beta-\\gamma) &:& b\\cos(\\gamma-\\alpha) &:& c\\cos(\\alpha-\\beta) \\\\[4pt]\n =& 1+\\cot\\beta\\cot\\gamma &:& 1+\\cot\\gamma\\cot\\alpha &:& 1+\\cot\\alpha\\cot\\beta \\\\[4pt]\n =& a^2(b^2+c^2) - (b^2-c^2)^2 &:& b^2(c^2+a^2) - (c^2-a^2)^2 &:& c^2(a^2+b^2) - (a^2-b^2)^2\n\\end{array}"
},
{
"math_id": 137,
"text": "(s-b)(s-c):(s-c)(s-a):(s-a)(s-b)"
},
{
"math_id": 138,
"text": "s-a:s-b:s-c"
},
{
"math_id": 139,
"text": "a^2:b^2:c^2"
},
{
"math_id": 140,
"text": "\\lambda = (1,0,0,0)"
},
{
"math_id": 141,
"text": "\\mathbf{r}_2 \\to (0,1,0,0)"
},
{
"math_id": 142,
"text": "\n\\left(\\begin{matrix}\\lambda_1 \\\\ \\lambda_2 \\\\ \\lambda_3\\end{matrix}\\right) = \\mathbf{T}^{-1} ( \\mathbf{r}-\\mathbf{r}_4 )\n"
},
{
"math_id": 143,
"text": "\n\\mathbf{T} = \\left(\\begin{matrix}\nx_1-x_4 & x_2-x_4 & x_3-x_4\\\\\ny_1-y_4 & y_2-y_4 & y_3-y_4\\\\\nz_1-z_4 & z_2-z_4 & z_3-z_4\n\\end{matrix}\\right)\n"
},
{
"math_id": 144,
"text": "\\lambda_4 = 1 - \\lambda_1 - \\lambda_2 - \\lambda_3"
},
{
"math_id": 145,
"text": "\\begin{align}\nx &= \\lambda_1 x_1 + \\lambda_2 x_2 + \\lambda_3 x_3 + (1-\\lambda_1-\\lambda_2-\\lambda_3)x_4 \\\\\ny &= \\lambda_1 y_1 + \\,\\lambda_2 y_2 + \\lambda_3 y_3 + (1-\\lambda_1-\\lambda_2-\\lambda_3)y_4 \\\\\nz &= \\lambda_1 z_1 + \\,\\lambda_2 z_2 + \\lambda_3 z_3 + (1-\\lambda_1-\\lambda_2-\\lambda_3)z_4\n\\end{align}"
},
{
"math_id": 146,
"text": "(\\lambda_1, \\lambda_2, ..., \\lambda_k)"
},
{
"math_id": 147,
"text": "p \\in \\mathbb{R}^n"
},
{
"math_id": 148,
"text": "x_1, x_2, ..., x_k \\in \\mathbb{R}^n"
},
{
"math_id": 149,
"text": "(\\lambda_1 + \\lambda_2 + \\cdots + \\lambda_k)p = \\lambda_1 x_1 + \\lambda_2 x_2 + \\cdots + \\lambda_k x_k"
},
{
"math_id": 150,
"text": "\\lambda_1 + \\lambda_2 + \\cdots + \\lambda_k = 1"
},
{
"math_id": 151,
"text": "0 \\le \\lambda_i \\le 1"
},
{
"math_id": 152,
"text": "k > n + 1"
},
{
"math_id": 153,
"text": "\n\\left(\\begin{matrix}\n1 & 1 & 1 & ... \\\\\nx_1 & x_2 & x_3 & ... \\\\\ny_1 & y_2 & y_3 & ...\n\\end{matrix}\\right)\n\\begin{pmatrix}\n\\lambda_1 \\\\ \\lambda_2 \\\\ \\lambda_3 \\\\ \\vdots\n\\end{pmatrix} =\n\\left(\\begin{matrix}\n1\\\\x\\\\y\n\\end{matrix}\\right)\n"
},
{
"math_id": 154,
"text": "(n-1)"
},
{
"math_id": 155,
"text": "\\Delta^{n-1} \\twoheadrightarrow P."
},
{
"math_id": 156,
"text": "P \\hookrightarrow (\\mathbf{R}_{\\geq 0})^f"
},
{
"math_id": 157,
"text": "K^n"
},
{
"math_id": 158,
"text": "\\{(x_0,\\ldots,x_n) \\mid \\sum x_i = 1\\} \\subset K^{n+1}"
}
]
| https://en.wikipedia.org/wiki?curid=762954 |
762970 | Morse potential | Model for the potential energy of a diatomic molecule
The Morse potential, named after physicist Philip M. Morse, is a convenient
interatomic interaction model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the quantum harmonic oscillator because it explicitly includes the effects of bond breaking, such as the existence of unbound states. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions such as the interaction between an atom and a surface. Due to its simplicity (only three fitting parameters), it is not used in modern spectroscopy. However, its mathematical form inspired the MLR (Morse/Long-range) potential, which is the most popular potential energy function used for fitting spectroscopic data.
Potential energy function.
The Morse potential energy function is of the form
formula_0
Here formula_1 is the distance between the atoms, formula_2 is the equilibrium bond distance, formula_3 is the well depth (defined relative to the dissociated atoms), and formula_4 controls the 'width' of the potential (the smaller formula_4 is, the larger the well). The dissociation energy of the bond can be calculated by subtracting the zero point energy formula_5 from the depth of the well. The force constant (stiffness) of the bond can be found by Taylor expansion of formula_6 around formula_7 to the second derivative of the potential energy function, from which it can be shown that the parameter, formula_4, is
formula_8
where formula_9 is the force constant at the minimum of the well.
Since the zero of potential energy is arbitrary, the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value. When it is used to model the atom-surface interaction, the energy zero can be redefined so that the Morse potential becomes
formula_10
which is usually written as
formula_11
where formula_1 is now the coordinate perpendicular to the surface. This form approaches zero at infinite formula_1 and equals formula_12 at its minimum, i.e. formula_7. It clearly shows that the Morse potential is the combination of a short-range repulsion term (the former) and a long-range attractive term (the latter), analogous to the Lennard-Jones potential.
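For concreteness, a small sketch evaluating the Morse potential and recovering a from the force constant; the parameter values are arbitrary illustrations in consistent units, not data for any particular molecule:

```python
import math

D_e = 4.7    # well depth (illustrative value, arbitrary units)
r_e = 0.74   # equilibrium bond distance (same length unit as r)
k_e = 36.0   # force constant at the minimum (consistent units)

a = math.sqrt(k_e / (2.0 * D_e))          # a = sqrt(k_e / (2 D_e))

def morse(r):
    """V(r) = D_e * (1 - exp(-a (r - r_e)))^2, zero at the minimum."""
    return D_e * (1.0 - math.exp(-a * (r - r_e)))**2

# At r_e the potential vanishes, near r_e it behaves like (1/2) k_e (r - r_e)^2,
# and far from r_e it approaches the well depth D_e.
print(morse(r_e), morse(r_e + 1e-3), morse(10.0))
```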
Vibrational states and energies.
Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods.
One approach involves applying the factorization method to the Hamiltonian.
To write the stationary states on the Morse potential, i.e. solutions formula_13 and formula_14 of the following Schrödinger equation:
formula_15
it is convenient to introduce the new variables:
formula_16
Then, the Schrödinger equation takes the simple form:
formula_17
formula_18
Its eigenvalues (reduced by formula_3) and eigenstates can be written as:
formula_19
where
formula_20
with formula_21 denoting the largest integer smaller than formula_22, and
formula_23
where
formula_24
(which satisfies the normalization condition
formula_25
) and formula_26 is a generalized Laguerre polynomial:
formula_27
There also exists the following analytical expression for matrix elements of the coordinate operator:
formula_28
which is valid for formula_29 and formula_30. The eigenenergies in the initial variables have the form:
formula_31
where formula_32 is the vibrational quantum number and formula_33 has units of frequency. The latter is mathematically related to the particle mass, formula_34, and the Morse constants via
formula_35
Whereas the energy spacing between vibrational levels in the quantum harmonic oscillator is constant at formula_36, the energy between adjacent levels decreases with increasing formula_37 in the Morse oscillator. Mathematically, the spacing of Morse levels is
formula_38
This trend matches the anharmonicity found in real molecules. However, this equation fails above some value of formula_39 where formula_40 is calculated to be zero or negative. Specifically,
formula_41 (integer part).
This failure is due to the "finite" number of bound levels in the Morse potential, and some maximum formula_39 that remains bound. For energies above formula_39, all the possible energy levels are allowed and the equation for formula_14 is no longer valid.
Below formula_39, formula_14 is a good approximation for the true vibrational structure in non-rotating diatomic molecules. In fact, real molecular spectra are generally fit to the form
formula_42
in which the constants formula_43 and formula_44 can be directly related to the parameters for the Morse potential.
As is clear from dimensional analysis, for historical reasons the last equation uses spectroscopic notation in which formula_43 represents a wavenumber obeying formula_45, and not an angular frequency given by formula_46.
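A sketch computing the level energies, the shrinking level spacing, and the number of bound levels from the formulas above (working in units with h = 1 and with illustrative parameter values):

```python
import math

# Units with h = 1, so energies are expressed in units of frequency.
D_e = 40.0       # well depth in units of h * frequency (illustrative value)
nu0 = 4.0        # harmonic frequency nu_0 (illustrative value)

def E(n):
    """Morse level energy E_n = h nu0 (n + 1/2) - [h nu0 (n + 1/2)]^2 / (4 D_e)."""
    return nu0 * (n + 0.5) - (nu0 * (n + 0.5))**2 / (4.0 * D_e)

n_max = int((2.0 * D_e - nu0) / nu0)      # highest bound level (integer part)
levels = [E(n) for n in range(n_max + 1)]
spacings = [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]
print(n_max, spacings[:3], spacings[-1])  # spacing decreases linearly with n
```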
Morse/Long-range potential.
An extension of the Morse potential that made the Morse form useful for modern (high-resolution) spectroscopy is the MLR (Morse/Long-range) potential. The MLR potential is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve. It has been used on N2, Ca2, KLi, MgH, several electronic states of Li2, Cs2, Sr2, ArXe, LiCa, LiNa, Br2, Mg2, HF, HCl, HBr, HI, MgD, Be2, BeH, and NaH. More sophisticated versions are used for polyatomic molecules.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(r) = D_e ( 1-e^{-a(r-r_e)} )^2"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "r_e"
},
{
"math_id": 3,
"text": "D_e"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "E_0"
},
{
"math_id": 6,
"text": "V'(r)"
},
{
"math_id": 7,
"text": "r=r_e"
},
{
"math_id": 8,
"text": "a=\\sqrt{k_e/2D_e},"
},
{
"math_id": 9,
"text": "k_e"
},
{
"math_id": 10,
"text": "V(r)= V'(r)-D_e = D_e ( 1-e^{-a(r-r_e)} )^2 -D_e "
},
{
"math_id": 11,
"text": "V(r) = D_e ( e^{-2a(r-r_e)}-2e^{-a(r-r_e)} )"
},
{
"math_id": 12,
"text": "-D_e"
},
{
"math_id": 13,
"text": "\\Psi_n(r)"
},
{
"math_id": 14,
"text": "E_n"
},
{
"math_id": 15,
"text": "\\left(-\\frac{\\hbar ^2 }{2 m }\\frac{\\partial ^2}{\\partial r^2}+V(r)\\right)\\Psi_n(r)=E_n\\Psi_n(r),"
},
{
"math_id": 16,
"text": "x=a r\n\n\\text{; }\n\nx_e=a r_e\n\n\\text{; }\n\n\\lambda =\\frac{\\sqrt{2 m D_e}}{a \\hbar }\n\n\\text{; }\n\n\\varepsilon _n=\\frac{2 m }{a^2\\hbar ^2}E_n = \\frac{\\lambda^2}{D_e}E_n.\n"
},
{
"math_id": 17,
"text": "\n\\left(-\\frac{\\partial ^2}{\\partial x^2}+V(x)\\right)\\Psi _n(x)=\\varepsilon _n\\Psi _n(x),\n"
},
{
"math_id": 18,
"text": "\nV(x)=\\lambda ^2\\left(1-e^{-\\left(x-x_e\\right)}\\right)^2.\n"
},
{
"math_id": 19,
"text": "\n\\varepsilon _n= \\lambda^2 - \\left(\\lambda -n-\\frac{1}{2}\\right)^2 = 2\\lambda \\left( n+\\frac{1}{2}\\right) - \\left(n+\\frac{1}{2}\\right)^2,\n"
},
{
"math_id": 20,
"text": "\nn=0,1,\\ldots,\\lfloor \\lambda-\\frac{1}{2} \\rfloor,\n"
},
{
"math_id": 21,
"text": "\\lfloor x \\rfloor"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "\n\\Psi _n(z)=N_nz^{\\lambda -n-\\frac{1}{2}}e^{-\\frac{1}{2}z}L_n^{(2\\lambda -2n-1)}(z),\n"
},
{
"math_id": 24,
"text": "\nz=2\\lambda e^{-\\left(x-x_e\\right)}\n\\text{; }\nN_n=\\left[\\frac{n!\\left(2\\lambda-2n-1\\right) a}{\\Gamma (2\\lambda - n)}\\right]^{\\frac{1}{2}}\n"
},
{
"math_id": 25,
"text": "\n\\int \\mathrm{d}r \\, \\Psi_n^{*}(r) \\Psi_n(r) = 1\n"
},
{
"math_id": 26,
"text": "L_n^{(\\alpha) }(z)"
},
{
"math_id": 27,
"text": "L_n^{(\\alpha) }(z) = \\frac{z^{-\\alpha }e^z}{n!} \\frac{d^n}{d z^n}\\left(z^{n + \\alpha } e^{-z}\\right)=\\frac{\\Gamma (\\alpha + n + 1)/\\Gamma (\\alpha +1)}{n!} \\, _1F_1(-n,\\alpha +1,z).\n"
},
{
"math_id": 28,
"text": "\n\\left\\langle \\Psi _m|x|\\Psi _n\\right\\rangle =\\frac{2(-1)^{m-n+1}}{(m-n)(2N-n-m)} \\sqrt{\\frac{(N-n)(N-m)\\Gamma (2N-m+1)m!}{\\Gamma (2N-n+1)n!}}.\n"
},
{
"math_id": 29,
"text": "m>n"
},
{
"math_id": 30,
"text": "N=\\lambda - 1/2"
},
{
"math_id": 31,
"text": "E_n = h\\nu_0 (n+1/2) - \\frac{\\left[h\\nu_0(n+1/2)\\right]^2}{4D_e}"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "\\nu_0"
},
{
"math_id": 34,
"text": "m"
},
{
"math_id": 35,
"text": "\\nu_0 = \\frac{a}{2\\pi} \\sqrt{2D_e/m}."
},
{
"math_id": 36,
"text": "h\\nu_0"
},
{
"math_id": 37,
"text": "v"
},
{
"math_id": 38,
"text": "E_{n+1} - E_n = h\\nu_0 - (n+1) (h\\nu_0)^2/2D_e.\\,"
},
{
"math_id": 39,
"text": "n_m"
},
{
"math_id": 40,
"text": "E(n_m + 1) - E(n_m)"
},
{
"math_id": 41,
"text": "n_m = \\frac{2D_e-h\\nu_0}{h\\nu_0}"
},
{
"math_id": 42,
"text": " E_n / hc = \\omega_e (n+1/2) - \\omega_e\\chi_e (n+1/2)^2\\,"
},
{
"math_id": 43,
"text": "\\omega_e"
},
{
"math_id": 44,
"text": "\\omega_e\\chi_e"
},
{
"math_id": 45,
"text": "E=hc\\omega"
},
{
"math_id": 46,
"text": "E=\\hbar\\omega"
}
]
| https://en.wikipedia.org/wiki?curid=762970 |
762977 | Complex projective space | Mathematical concept
In mathematics, complex projective space is the projective space with respect to the field of complex numbers. By analogy, whereas the points of a real projective space label the lines through the origin of a real Euclidean space, the points of a complex projective space label the "complex" lines through the origin of a complex Euclidean space (see below for an intuitive account). Formally, a complex projective space is the space of complex lines through the origin of an ("n"+1)-dimensional complex vector space. The space is denoted variously as P(C"n"+1), P"n"(C) or CP"n". When "n" = 1, the complex projective space CP1 is the Riemann sphere, and when "n" = 2, CP2 is the complex projective plane (see there for a more elementary discussion).
Complex projective space was first introduced as an instance of what was then known as the "geometry of position", a notion originally due to Lazare Carnot, a kind of synthetic geometry that included other projective geometries as well. Subsequently, near the turn of the 20th century it became clear to the Italian school of algebraic geometry that the complex projective spaces were the most natural domains in which to consider the solutions of polynomial equations – algebraic varieties. In modern times, both the topology and geometry of complex projective space are well understood and closely related to that of the sphere. Indeed, in a certain sense the (2"n"+1)-sphere can be regarded as a family of circles parametrized by CP"n": this is the Hopf fibration. Complex projective space carries a (Kähler) metric, called the Fubini–Study metric, in terms of which it is a Hermitian symmetric space of rank 1.
Complex projective space has many applications in both mathematics and quantum physics. In algebraic geometry, complex projective space is the home of projective varieties, a well-behaved class of algebraic varieties. In topology, the complex projective space plays an important role as a classifying space for complex line bundles: families of complex lines parametrized by another space. In this context, the infinite union of projective spaces (direct limit), denoted CP∞, is the classifying space K(Z,2). In quantum physics, the wave function associated to a pure state of a quantum mechanical system is a probability amplitude, meaning that it has unit norm, and has an inessential overall phase: that is, the wave function of a pure state is naturally a point in the projective Hilbert space of the state space.
Introduction.
The notion of a projective plane arises out of the idea of perspective in geometry and art: that it is sometimes useful to include in the Euclidean plane an additional "imaginary" line that represents the horizon that an artist, painting the plane, might see. Following each direction from the origin, there is a different point on the horizon, so the horizon can be thought of as the set of all directions from the origin. The Euclidean plane, together with its horizon, is called the real projective plane, and the horizon is sometimes called a line at infinity. By the same construction, projective spaces can be considered in higher dimensions. For instance, the real projective 3-space is a Euclidean space together with a plane at infinity that represents the horizon that an artist (who must, necessarily, live in four dimensions) would see.
These real projective spaces can be constructed in a slightly more rigorous way as follows. Here, let R"n"+1 denote the real coordinate space of "n"+1 dimensions, and regard the landscape to be painted as a hyperplane in this space. Suppose that the eye of the artist is the origin in R"n"+1. Then along each line through his eye, there is a point of the landscape or a point on its horizon. Thus the real projective space is the space of lines through the origin in R"n"+1. Without reference to coordinates, this is the space of lines through the origin in an ("n"+1)-dimensional real vector space.
To describe the complex projective space in an analogous manner requires a generalization of the idea of vector, line, and direction. Imagine that instead of standing in a real Euclidean space, the artist is standing in a complex Euclidean space C"n"+1 (which has real dimension 2"n"+2) and the landscape is a "complex" hyperplane (of real dimension 2"n"). Unlike the case of real Euclidean space, in the complex case there are directions in which the artist can look which do not see the landscape (because it does not have high enough dimension). However, in a complex space, there is an additional "phase" associated with the directions through a point, and by adjusting this phase the artist can guarantee that he typically sees the landscape. The "horizon" is then the space of directions, but such that two directions are regarded as "the same" if they differ only by a phase. The complex projective space is then the landscape (C"n") with the horizon attached "at infinity". Just like the real case, the complex projective space is the space of directions through the origin of C"n"+1, where two directions are regarded as the same if they differ by a phase.
Construction.
Complex projective space is a complex manifold that may be described by "n" + 1 complex coordinates as
formula_0
where the tuples differing by an overall rescaling are identified:
formula_1
That is, these are homogeneous coordinates in the traditional sense of projective geometry. The point set CP"n" is covered by the patches formula_2. In "U""i", one can define a coordinate system by
formula_3
The coordinate transitions between two different such charts "U""i" and "U""j" are holomorphic functions (in fact they are fractional linear transformations). Thus CP"n" carries the structure of a complex manifold of complex dimension "n", and "a fortiori" the structure of a real differentiable manifold of real dimension 2"n".
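As an illustration of the chart construction (not part of the formal development), the following Python sketch computes the affine coordinates of a point given in homogeneous coordinates and checks that they are unchanged when the representative is rescaled; the function name is chosen here purely for exposition.

import numpy as np

def chart_coordinates(Z, i):
    # affine coordinates of the point [Z] in the chart U_i = {Z_i != 0} (0-based index i)
    Z = np.asarray(Z, dtype=complex)
    assert Z[i] != 0, "the point does not lie in U_i"
    w = Z / Z[i]                 # rescale the representative so the i-th coordinate is 1
    return np.delete(w, i)       # the remaining n entries are the chart coordinates

Z = np.array([1.0 + 2.0j, 3.0 - 1.0j, 0.5j])        # a point of CP^2 in homogeneous coordinates
print(chart_coordinates(Z, 0))                      # coordinates in the chart U_0
print(chart_coordinates(Z, 1))                      # coordinates in the chart U_1
print(np.allclose(chart_coordinates(3.0j * Z, 0),   # rescaling the representative does not
                  chart_coordinates(Z, 0)))         # change the chart coordinates -> True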
One may also regard CP"n" as a quotient of the unit 2"n" + 1 sphere in C"n"+1 under the action of U(1):
CP"n" = "S"2"n"+1/U(1).
This is because every line in C"n"+1 intersects the unit sphere in a circle. By first projecting to the unit sphere and then identifying under the natural action of U(1) one obtains CP"n". For "n" = 1 this construction yields the classical Hopf bundle formula_4. From this perspective, the differentiable structure on CP"n" is induced from that of "S"2"n"+1, being the quotient of the latter by a compact group that acts properly.
Topology.
The topology of CP"n" is determined inductively by the following cell decomposition. Let "H" be a fixed hyperplane through the origin in C"n"+1. Under the projection map C"n"+1\{0} → CP"n", "H" goes into a subspace that is homeomorphic to CP"n"−1. The complement of the image of "H" in CP"n" is homeomorphic to C"n". Thus CP"n" arises by attaching a 2"n"-cell to CP"n"−1:
formula_5
Alternatively, if the 2"n"-cell is regarded instead as the open unit ball in C"n", then the attaching map is the Hopf fibration of the boundary. An analogous inductive cell decomposition is true for all of the projective spaces; see .
CW-decomposition.
One useful way to construct the complex projective spaces formula_6 is through a recursive construction using CW-complexes. Recall that there is a homeomorphism formula_7 to the 2-sphere, giving the first space. We can then induct on the cells to get a pushout map formula_8 where formula_9 is the four-ball, and formula_10 represents the generator in formula_11 (hence it is homotopy equivalent to the Hopf map). We can then inductively construct the spaces as pushout diagrams formula_12 where formula_13 represents an element in formula_14 The first of these isomorphisms of homotopy groups is described below, and the second is a standard calculation in stable homotopy theory (which can be done with the Serre spectral sequence, the Freudenthal suspension theorem, and the Postnikov tower). The map comes from the fiber bundle formula_15 giving a non-contractible map, hence it represents the generator in formula_16. Otherwise, there would be a homotopy equivalence formula_17, but then it would be homotopy equivalent to formula_18, a contradiction which can be seen by looking at the homotopy groups of the space.
Point-set topology.
Complex projective space is compact and connected, being a quotient of a compact, connected space.
Homotopy groups.
From the fiber bundle
formula_19
or more suggestively
formula_20
CP"n" is simply connected. Moreover, by the long exact homotopy sequence, the second homotopy group is π2(CP"n") ≅ Z, and all the higher homotopy groups agree with those of "S"2"n"+1: π"k"(CP"n") ≅ π"k"("S"2"n"+1) for all "k" > 2.
Homology.
In general, the algebraic topology of CP"n" is based on the rank of the homology groups being zero in odd dimensions; also "H"2"i"(CP"n", Z) is infinite cyclic for "i" = 0 to "n". Therefore, the Betti numbers run
1, 0, 1, 0, ..., 0, 1, 0, 0, 0, ...
That is, 0 in odd dimensions, 1 in even dimensions 0 through 2n. The Euler characteristic of CP"n" is therefore "n" + 1. By Poincaré duality the same is true for the ranks of the cohomology groups. In the case of cohomology, one can go further, and identify the graded ring structure, for cup product; the generator of "H"2(CPn, Z) is the class associated to a hyperplane, and this is a ring generator, so that the ring is isomorphic with
Z["T"]/("T""n"+1),
with "T" a degree two generator. This implies also that the Hodge number "h""i","i" = 1, and all the others are zero. See .
"K"-theory.
It follows from induction and Bott periodicity that
formula_21
The tangent bundle satisfies
formula_22
where formula_23 denotes the trivial line bundle, from the Euler sequence. From this, the Chern classes and characteristic numbers can be calculated explicitly.
Classifying space.
There is a space formula_24 which, in a sense, is the inductive limit of formula_6 as formula_25. It is BU(1), the classifying space of U(1), the circle group, in the sense of homotopy theory, and so classifies complex line bundles. Equivalently it accounts for the first Chern class. This can be seen heuristically by looking at the fiber bundle maps formula_26 and formula_25. This gives a fiber bundle (called the universal circle bundle) formula_27 constructing this space. Note using the long exact sequence of homotopy groups, we have formula_28 hence formula_24 is an Eilenberg–MacLane space, a formula_29. Because of this fact, and Brown's representability theorem, we have the following isomorphism formula_30 for any nice CW-complex formula_31. Moreover, from the theory of Chern classes, every complex line bundle formula_32 can be represented as a pullback of the universal line bundle on formula_24, meaning there is a pullback square formula_33 where formula_34 is the associated vector bundle of the principal formula_35-bundle formula_36. See, for instance, and .
Differential geometry.
The natural metric on CP"n" is the Fubini–Study metric, and its holomorphic isometry group is the projective unitary group PU("n"+1), where the stabilizer of a point is
formula_37
It is a Hermitian symmetric space , represented as a coset space
formula_38
The geodesic symmetry at a point "p" is the unitary transformation that fixes "p" and is the negative identity on the orthogonal complement of the line represented by "p".
Geodesics.
Through any two points "p", "q" in complex projective space, there passes a unique "complex" line (a CP1). A great circle of this complex line that contains "p" and "q" is a geodesic for the Fubini–Study metric. In particular, all of the geodesics are closed (they are circles), and all have equal length. (This is always true of Riemannian globally symmetric spaces of rank 1.)
The cut locus of any point "p" is equal to a hyperplane CP"n"−1. This is also the set of fixed points of the geodesic symmetry at "p" (less "p" itself). See .
Sectional curvature pinching.
It has sectional curvature ranging from 1/4 to 1, and is the roundest manifold that is not a sphere (or covered by a sphere): by the 1/4-pinched sphere theorem, any complete, simply connected Riemannian manifold with curvature strictly between 1/4 and 1 is diffeomorphic to the sphere. Complex projective space shows that 1/4 is sharp. Conversely, if a complete simply connected Riemannian manifold has sectional curvatures in the closed interval [1/4,1], then it is either diffeomorphic to the sphere, or isometric to the complex projective space, the quaternionic projective space, or else the Cayley plane F4/Spin(9); see .
Spin structure.
The odd-dimensional projective spaces can be given a spin structure, the even-dimensional ones cannot.
Algebraic geometry.
Complex projective space is a special case of a Grassmannian, and is a homogeneous space for various Lie groups. It is a Kähler manifold carrying the Fubini–Study metric, which is essentially determined by symmetry properties. It also plays a central role in algebraic geometry; by Chow's theorem, any compact complex submanifold of CP"n" is the zero locus of a finite number of polynomials, and is thus a projective algebraic variety. See
Zariski topology.
In algebraic geometry, complex projective space can be equipped with another topology known as the Zariski topology. Let "S" = C["Z"0,...,"Z""n"] denote the commutative ring of polynomials in the ("n"+1) variables "Z"0,...,"Z""n". This ring is graded by the total degree of each polynomial:
formula_39
Define a subset of CP"n" to be "closed" if it is the simultaneous solution set of a collection of homogeneous polynomials. Declaring the complements of the closed sets to be open, this defines a topology (the Zariski topology) on CP"n".
Structure as a scheme.
Another construction of CP"n" (and its Zariski topology) is possible. Let "S"+ ⊂ "S" be the ideal spanned by the homogeneous polynomials of positive degree:
formula_40
Define Proj "S" to be the set of all homogeneous prime ideals in "S" that do not contain "S"+. Call a subset of Proj "S" closed if it has the form
formula_41
for some ideal "I" in "S". The complements of these closed sets define a topology on Proj "S". The ring "S", by localization at a prime ideal, determines a sheaf of local rings on Proj "S". The space Proj "S", together with its topology and sheaf of local rings, is a scheme. The subset of closed points of Proj "S" is homeomorphic to CP"n" with its Zariski topology. Local sections of the sheaf are identified with the rational functions of total degree zero on CP"n".
Line bundles.
All line bundles on complex projective space can be obtained by the following construction. A function "f" : C"n"+1\{0} → C is called homogeneous of degree "k" if
formula_42
for all λ ∈ C\{0} and "z" ∈ C"n"+1\{0}. More generally, this definition makes sense in cones in C"n"+1\{0}. A set "V" ⊂ C"n"+1\{0} is called a cone if, whenever "v" ∈ "V", then "λv" ∈ "V" for all λ ∈ C\{0}; that is, a subset is a cone if it contains the complex line through each of its points. If "U" ⊂ CP"n" is an open set (in either the analytic topology or the Zariski topology), let "V" ⊂ C"n"+1\{0} be the cone over "U": the preimage of "U" under the projection C"n"+1\{0} → CP"n". Finally, for each integer "k", let "O"("k")("U") be the set of functions that are homogeneous of degree "k" in "V". This defines a sheaf of sections of a certain line bundle, denoted by "O"("k").
In the special case "k" = −1, the bundle "O"(−1) is called the tautological line bundle. It is equivalently defined as the subbundle of the product
formula_43
whose fiber over "L" ∈ CP"n" is the set
formula_44
These line bundles can also be described in the language of divisors. Let "H" = CP"n"−1 be a given complex hyperplane in CP"n". The space of meromorphic functions on CP"n" with at most a simple pole along "H" (and nowhere else) is a one-dimensional space, denoted by "O"("H"), and called the hyperplane bundle. The dual bundle is denoted "O"(−"H"), and the "k"th tensor power of "O"("H") is denoted by "O"("kH"). This is the sheaf generated by holomorphic multiples of a meromorphic function with a pole of order "k" along "H". It turns out that
formula_45
Indeed, if "L"("z") = 0 is a linear defining function for "H", then "L"−"k" is a meromorphic section of "O"("k"), and locally the other sections of "O"("k") are multiples of this section.
Since "H"1(CP"n",Z) = 0, the line bundles on CP"n" are classified up to isomorphism by their Chern classes, which are integers: they lie in "H"2(CP"n",Z) = Z. In fact, the first Chern classes of complex projective space are generated under Poincaré duality by the homology class associated to a hyperplane "H". The line bundle "O"("kH") has Chern class "k". Hence every holomorphic line bundle on CP"n" is a tensor power of "O"("H") or "O"(−"H"). In other words, the Picard group of CP"n" is generated as an abelian group by the hyperplane class ["H"].
{
"math_id": 0,
"text": "Z=(Z_1,Z_2,\\ldots,Z_{n+1}) \\in \\mathbb{C}^{n+1},\n\\qquad (Z_1,Z_2,\\ldots,Z_{n+1})\\neq (0,0,\\ldots,0)"
},
{
"math_id": 1,
"text": "(Z_1,Z_2,\\ldots,Z_{n+1}) \\equiv \n(\\lambda Z_1,\\lambda Z_2, \\ldots,\\lambda Z_{n+1});\n\\quad \\lambda\\in \\mathbb{C},\\qquad \\lambda \\neq 0."
},
{
"math_id": 2,
"text": "U_i=\\{ Z \\mid Z_i\\ne0\\}"
},
{
"math_id": 3,
"text": "z_1 = Z_1/Z_i, \\quad z_2=Z_2/Z_i, \\quad \\dots, \\quad z_{i-1}=Z_{i-1}/Z_i, \\quad z_i = Z_{i+1}/Z_i, \\quad \\dots, \\quad z_n=Z_{n+1}/Z_i."
},
{
"math_id": 4,
"text": "S^3\\to S^2"
},
{
"math_id": 5,
"text": "\\mathbf{CP}^n = \\mathbf{CP}^{n-1}\\cup \\mathbf{C}^n."
},
{
"math_id": 6,
"text": "\\mathbf{CP}^n"
},
{
"math_id": 7,
"text": "\\mathbf{CP}^1 \\cong S^2"
},
{
"math_id": 8,
"text": "\\begin{matrix}\nS^3 & \\hookrightarrow & D^4 \\\\\n\\downarrow & & \\downarrow \\\\\n\\mathbf{CP}^1 & \\to & \\mathbf{CP}^2\n\\end{matrix}"
},
{
"math_id": 9,
"text": "D^4"
},
{
"math_id": 10,
"text": "S^3 \\to \\mathbf{CP}^1"
},
{
"math_id": 11,
"text": "\\pi_3(S^2)"
},
{
"math_id": 12,
"text": "\\begin{matrix}\nS^{2n-1} & \\hookrightarrow & D^{2n} \\\\\n\\downarrow & & \\downarrow \\\\\n\\mathbf{CP}^{n-1} & \\to & \\mathbf{CP}^n\n\\end{matrix}"
},
{
"math_id": 13,
"text": "S^{2n-1} \\to \\mathbf{CP}^{n-1}"
},
{
"math_id": 14,
"text": "\\begin{align}\n\\pi_{2n-1}(\\mathbf{CP}^{n-1}) &\\cong \\pi_{2n-1}(S^{2n-2}) \\\\\n&\\cong \\mathbb{Z}/2\n\\end{align}"
},
{
"math_id": 15,
"text": "S^1 \\hookrightarrow S^{2n-1} \\twoheadrightarrow \\mathbf{CP}^{n-1}"
},
{
"math_id": 16,
"text": "\\mathbb{Z}/2"
},
{
"math_id": 17,
"text": "\\mathbf{CP}^n \\simeq \\mathbf{CP}^{n-1}\\times D^n"
},
{
"math_id": 18,
"text": "S^2"
},
{
"math_id": 19,
"text": "S^1 \\hookrightarrow S^{2n+1} \\twoheadrightarrow \\mathbf{CP}^n"
},
{
"math_id": 20,
"text": "U(1) \\hookrightarrow S^{2n+1} \\twoheadrightarrow \\mathbf{CP}^n"
},
{
"math_id": 21,
"text": "K_\\mathbf{C}^*(\\mathbf{CP}^n) = K_\\mathbf{C}^0(\\mathbf{CP}^n) = \\mathbf{Z}[H]/(H-1)^{n+1}."
},
{
"math_id": 22,
"text": "T\\mathbf{CP}^n \\oplus \\vartheta^1 = H^{\\oplus n+1},"
},
{
"math_id": 23,
"text": "\\vartheta^1"
},
{
"math_id": 24,
"text": "\\mathbf{CP}^\\infty"
},
{
"math_id": 25,
"text": "n \\to \\infty"
},
{
"math_id": 26,
"text": "S^1 \\hookrightarrow S^{2n+1} \\twoheadrightarrow \\mathbf{CP}^n"
},
{
"math_id": 27,
"text": "S^1 \\hookrightarrow S^\\infty \\twoheadrightarrow \\mathbf{CP}^\\infty"
},
{
"math_id": 28,
"text": "\\pi_2(\\mathbf{CP}^\\infty) = \\pi_1(S^1)"
},
{
"math_id": 29,
"text": "K(\\mathbb{Z},2)"
},
{
"math_id": 30,
"text": "H^2(X;\\mathbb{Z}) \\cong [X,\\mathbf{CP}^\\infty]"
},
{
"math_id": 31,
"text": "X"
},
{
"math_id": 32,
"text": "L \\to X"
},
{
"math_id": 33,
"text": "\\begin{matrix}\nL & \\to & \\mathcal{L} \\\\\n\\downarrow & &\\downarrow \\\\\nX & \\to & \\mathbf{CP}^\\infty\n\\end{matrix}"
},
{
"math_id": 34,
"text": "\\mathcal{L} \\to \\mathbf{CP}^\\infty"
},
{
"math_id": 35,
"text": "U(1)"
},
{
"math_id": 36,
"text": "S^\\infty \\to \\mathbf{CP}^\\infty"
},
{
"math_id": 37,
"text": "\\mathrm{P}(1\\times \\mathrm{U}(n)) \\cong \\mathrm{PU}(n)."
},
{
"math_id": 38,
"text": "U(n+1)/(U(1) \\times U(n)) \\cong SU(n+1)/S(U(1) \\times U(n))."
},
{
"math_id": 39,
"text": "S = \\bigoplus_{n=0}^\\infty S_n."
},
{
"math_id": 40,
"text": "\\bigoplus_{n>0}S_n."
},
{
"math_id": 41,
"text": "V(I) = \\{ p\\in \\operatorname{Proj} S\\mid p\\supseteq I\\}"
},
{
"math_id": 42,
"text": "f(\\lambda z) = \\lambda^k f(z)"
},
{
"math_id": 43,
"text": "\\mathbf{C}^{n+1}\\times\\mathbf{CP}^n\\to \\mathbf{CP}^n"
},
{
"math_id": 44,
"text": "\\{(x,L)\\mid x\\in L\\}."
},
{
"math_id": 45,
"text": "O(kH) \\cong O(k)."
}
]
| https://en.wikipedia.org/wiki?curid=762977 |
76304731 | Deficiency (statistics) | Quantitative way to compare statistical models
In statistics, the deficiency is a measure used to compare one statistical model with another. The concept was introduced in the 1960s by the French mathematician Lucien Le Cam, who used it to prove an approximate version of the Blackwell–Sherman–Stein theorem. Closely related is the Le Cam distance, a pseudometric for the maximum deficiency between two statistical models. If the deficiency of a model formula_0 in relation to formula_1 is zero, then one says formula_0 is "better" or "more informative" or "stronger" than formula_1.
Introduction.
Le Cam defined the statistical model more abstractly than as a probability space with a family of probability measures. He also did not use the term "statistical model"; instead he used the term "experiment". In his 1964 publication he introduced the statistical experiment with parameter set formula_2 as a triple formula_3 consisting of a set formula_4, a vector lattice formula_5 with unit formula_6 and a family of normalized positive functionals formula_7 on formula_5. In his 1986 book he omitted formula_5 and formula_4.
This article follows his definition from 1986 and uses his terminology to emphasize the generalization.
Formulation.
Basic concepts.
Let formula_2 be a parameter space. Given an abstract L1-space formula_8 (i.e. a Banach lattice such that formula_10 holds for elements formula_9) consisting of linear positive functionals formula_11, an "experiment" formula_0 is a map formula_12 of the form formula_13 such that formula_14. formula_15 is the band induced by formula_11, and therefore we use the notation formula_16. For formula_17, write formula_18. The topological dual formula_19 of an L-space with the conjugate norm formula_20 is called an "abstract M-space". It is also a lattice, with unit defined through formula_21 for formula_22.
Let formula_23 and formula_24 be the L-spaces of two experiments formula_25 and formula_26. A positive, norm-preserving linear map, i.e. a map satisfying formula_27 for all formula_28, is called a transition. The adjoint of a transition is a positive linear map from the dual space formula_29 of formula_24 into the dual space formula_30 of formula_23, such that the unit of formula_30 is the image of the unit of formula_29.
Deficiency.
Let formula_2 be a parameter space and formula_31 and formula_32 be two experiments indexed by formula_2. Let formula_16 and formula_33 denote the corresponding L-spaces and let formula_34 be the set of all transitions from formula_16 to formula_33.
The deficiency formula_35 of formula_0 in relation to formula_1 is the number defined in terms of inf sup:
formula_36
where formula_37 denotes the total variation norm formula_38. The factor formula_39 is included only for computational convenience and is sometimes omitted.
Le Cam distance.
The Le Cam distance is the following pseudometric
formula_40
This induces an equivalence relation: when formula_41, one says formula_0 and formula_1 are "equivalent". The equivalence class formula_42 of formula_0 is also called the "type of formula_0".
Often one is interested in families of experiments formula_43 with formula_44 and formula_45 with formula_46. If formula_47 as formula_48, then one says formula_49 and formula_50 are "asymptotically equivalent".
Let formula_2 be a parameter space and let formula_51 be the set of all types that are induced by formula_2; then formula_51 is complete with respect to the Le Cam distance formula_52. The condition formula_53 induces a partial order on formula_51; one says formula_0 is "better" or "more informative" or "stronger" than formula_1.
{
"math_id": 0,
"text": "\\mathcal{E}"
},
{
"math_id": 1,
"text": "\\mathcal{F}"
},
{
"math_id": 2,
"text": "\\Theta"
},
{
"math_id": 3,
"text": "(X,E,(P_\\theta)_{\\theta\\in\\Theta})"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "E"
},
{
"math_id": 6,
"text": "I"
},
{
"math_id": 7,
"text": "(P_\\theta)_{\\theta \\in \\Theta}"
},
{
"math_id": 8,
"text": "(L,\\|\\cdot\\|)"
},
{
"math_id": 9,
"text": "x,y\\geq 0"
},
{
"math_id": 10,
"text": "\\|x+y\\|=\\|x\\|+\\|y\\|"
},
{
"math_id": 11,
"text": "\\{P_{\\theta}:\\theta\\in\\Theta\\}"
},
{
"math_id": 12,
"text": "\\mathcal{E}:\\Theta \\to L"
},
{
"math_id": 13,
"text": "\\theta \\mapsto P_{\\theta}"
},
{
"math_id": 14,
"text": "\\|P_{\\theta}\\|=1"
},
{
"math_id": 15,
"text": "L"
},
{
"math_id": 16,
"text": "L(\\mathcal{E})"
},
{
"math_id": 17,
"text": "\\mu\\in L(\\mathcal{E})"
},
{
"math_id": 18,
"text": "\\mu^{+}=\\mu \\vee 0=\\max(\\mu,0)"
},
{
"math_id": 19,
"text": "M"
},
{
"math_id": 20,
"text": "\\|u\\|_M=\\sup\\{|\\langle u,\\mu\\rangle|; \\|\\mu\\|_L\\leq 1\\}"
},
{
"math_id": 21,
"text": "I \\mu=\\|\\mu^+\\|_L-\\|\\mu^-\\|_L"
},
{
"math_id": 22,
"text": "\\mu\\in L"
},
{
"math_id": 23,
"text": "L(A)"
},
{
"math_id": 24,
"text": "L(B)"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "B"
},
{
"math_id": 27,
"text": "\\|T\\mu^{+}\\|=\\|\\mu^{+}\\|"
},
{
"math_id": 28,
"text": "\\mu\\in L(A)"
},
{
"math_id": 29,
"text": "M_B"
},
{
"math_id": 30,
"text": "M_A"
},
{
"math_id": 31,
"text": "\\mathcal{E}:\\theta \\to P_\\theta"
},
{
"math_id": 32,
"text": "\\mathcal{F}:\\theta \\to Q_\\theta"
},
{
"math_id": 33,
"text": "L(\\mathcal{F})"
},
{
"math_id": 34,
"text": "\\mathcal{T}"
},
{
"math_id": 35,
"text": "\\delta(\\mathcal{E},\\mathcal{F})"
},
{
"math_id": 36,
"text": "\\delta(\\mathcal{E},\\mathcal{F}):=\\inf\\limits_{T\\in \\mathcal{T}}\\sup\\limits_{\\theta \\in \\Theta} \\tfrac{1}{2}\\|Q_{\\theta}-TP_{\\theta}\\|_{\\text{TV}},"
},
{
"math_id": 37,
"text": "\\|\\cdot\\|_{\\text{TV}}"
},
{
"math_id": 38,
"text": "\\|\\mu\\|_{\\text{TV}}=\\mu^{+}+\\mu^{-}"
},
{
"math_id": 39,
"text": "\\tfrac{1}{2}"
},
{
"math_id": 40,
"text": "\\Delta(\\mathcal{E},\\mathcal{F}):= \\operatorname{max}\\left(\\delta(\\mathcal{E},\\mathcal{F}),\\delta(\\mathcal{F},\\mathcal{E})\\right). "
},
{
"math_id": 41,
"text": "\\Delta(\\mathcal{E},\\mathcal{F})=0"
},
{
"math_id": 42,
"text": "C_{\\mathcal{E}}"
},
{
"math_id": 43,
"text": "(\\mathcal{E}_n)_{n}"
},
{
"math_id": 44,
"text": "\\{P_{n,\\theta}\\colon \\theta \\in \\Theta_{n}\\}"
},
{
"math_id": 45,
"text": "(\\mathcal{F}_n)_{n}"
},
{
"math_id": 46,
"text": "\\{Q_{n,\\theta}\\colon \\theta \\in \\Theta_{n}\\}"
},
{
"math_id": 47,
"text": "\\Delta(\\mathcal{E}_n,\\mathcal{F}_n)=0"
},
{
"math_id": 48,
"text": "n\\to \\infty"
},
{
"math_id": 49,
"text": "(\\mathcal{E}_n)"
},
{
"math_id": 50,
"text": "(\\mathcal{F}_n)"
},
{
"math_id": 51,
"text": "E(\\Theta)"
},
{
"math_id": 52,
"text": "\\Delta"
},
{
"math_id": 53,
"text": "\\delta(\\mathcal{E},\\mathcal{F})=0"
}
]
| https://en.wikipedia.org/wiki?curid=76304731 |
76305775 | MicroPDF417 | MicroPDF417 is two-dimensional (2D) stacked barcode symbology invented in 1996, by Frederick Schuessler, Kevin Hunter, Sundeep Kumar and Cary Chu from Symbol Technologies company. MicroPDF417 consists from specially encoded Row Address Patterns (RAP) columns and aligned to them Data columns encoded in "417" sequence which was invented in 1990. In 2006, the standard was registered as ISO/IEC 24728:2006.
The MicroPDF417 barcode can be read with both main barcode reader technologies: laser scanners and camera-based readers. Like most 2D barcodes, the MicroPDF417 standard provides Reed–Solomon error correction, the ability to read corrupted images, and high data density. However, MicroPDF417 can encode only 150 bytes or 250 alphanumeric characters in the biggest 4-column version. Also, because of its design, the MicroPDF417 barcode can be used only on high-quality documents and images.
In its common modes, MicroPDF417 can encode text, numeric and binary data, as well as Unicode text with Extended Channel Interpretation. Additionally, MicroPDF417 contains special modes which encode text and numeric data in special formats, used for example in the GS1 Composite bar code symbology.
History and standards.
The MicroPDF417 barcode was patented in 1996 by Frederick Schuessler, Kevin Hunter, Sundeep Kumar and Cary Chu of the Symbol Technologies company. MicroPDF417 is an extension of the PDF417 barcode and uses the same principles of data encoding. Before 2006, the standard could be obtained only from the AIM store as the ITS MicroPDF417 standard, and at that time it was used as part of the ITS - EAN.UCC Composite Symbology. In 2006, the MicroPDF417 standard was published as ISO/IEC 24728:2006 and can be used independently or as part of the GS1 Composite barcode symbology.
Application.
MicroPDF417 is mostly used to add extended data to linear barcodes. Because of its high encoding density, it can add more data in less space. At this time, it is used in inventory management and goods labeling as part of the EAN.UCC Composite Symbology and the GS1 Composite barcode symbology. Most barcode printers and barcode scanners support MicroPDF417.
Barcode design.
A MicroPDF417 barcode symbol consists of at least two Row Address Pattern (RAP) columns, which are used to detect row numbers, and Data columns aligned to them. The MicroPDF417 barcode symbol has four versions with 1, 2, 3 and 4 data columns. The barcode can be split into the following elements:
Every MicroPDF417 data column version can be split into a predefined number of rows, which differs for every version. The row height should be 2 to 5 times the minimal module (bar or space) width.
RAP columns structure.
MicroPDF417 Row Address Patterns (RAP) are stacked into columns. Each RAP is used as an indicator of the row number, but the RAP value is not the same as the row number. Every MicroPDF417 RAP consists of 10 modules, split into 3 black bars and 3 white spaces. Bar and space widths can vary from 1 to 5 modules. Each RAP row starts with a black bar and ends with a white space. The right RAP has an additional closing black bar.
MicroPDF417 Row Address Patterns have 52 values which are used for the Left and Right columns and 52 other values which are used only for the Center column. RAP values range from 1 to 52. The one- and two-data-column MicroPDF417 versions use only Left and Right RAP columns; the three- and four-column versions additionally use a Center RAP column.
All Row Address Patterns in MicroPDF417, from the Left, Right and Center columns, use special sequences which are called Row Number Assignments (RNA). The unique combination of RNAs defines the MicroPDF417 version and the mapping of the current RAP value to the row number.
As an example, the MicroPDF417 version with 4 columns and 4 rows has a Left RAP sequence that starts at 47 and ends at 50, a Center RAP sequence from 19 to 22, and a Right RAP sequence from 43 to 46. The combination of these 3 sequences in the same area of the image identifies the 4-column, 4-row MicroPDF417 version and determines which RAP value identifies the current row.
Data codewords.
MicroPDF417 data codeword encoding is similar to that of the PDF417 barcode. Every data codeword has a width of 17 modules, split into 4 black bars and 4 white spaces of variable size from 1 to 6 modules. Each codeword represents a number from 0 to 928. The set of codewords is represented in each of three clusters, numbered 0, 3 and 6.
The codeword cluster number can be computed from the Left RAP value (from 1 to 52) of the current row with the following formula:
formula_0
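For illustration only, the cluster formula can be evaluated with a short Python function (a sketch written for this description, not reference code from the standard):

def codeword_cluster(left_rap: int) -> int:
    # cluster number (0, 3 or 6) of the data codewords in a row,
    # derived from the row's Left RAP value (1 to 52) via the formula above
    if not 1 <= left_rap <= 52:
        raise ValueError("Left RAP values range from 1 to 52")
    return ((left_rap - 1) % 3) * 3

# Left RAP values 47, 48, 49, 50 from the 4-column, 4-row example above
print([codeword_cluster(rap) for rap in (47, 48, 49, 50)])   # [3, 6, 0, 3]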
Error correction.
MicroPDF417 uses Reed–Solomon error correction. The number of error correction codewords is fixed for each barcode version; from 28% to 67% of the symbol capacity is filled by error correction codewords. MicroPDF417 error correction can recover erasures and substitution errors, where:
Example of MicroPDF417 codewords placement.
Here is an example of how all of these codewords are assembled into a MicroPDF417 symbol:
<br>LR(x) - Left Row Address Patterns (RAP) identifier.
<br>D(x) - Data codeword.
<br>CR(x) - Center Row Address Patterns (RAP) identifier.
<br>RR(x) - Right Row Address Patterns (RAP) identifier.
<br>E(x) - Error correction codeword.
Encoding.
The MicroPDF417 barcode has 929 codewords, of which 900 (0 – 899) are available in each mode for data encoding and 29 (900 – 928) are assigned to specific functions, most of which define data encoding modes. The encoding modes can be split into two types: common modes for ordinary binary or text data encoding and special modes which encode data in special industrial formats.
Common modes.
The MicroPDF417 common encoding modes are similar to the PDF417 encoding modes and include:
These modes can be combined in mixed mode to obtain better data compaction and reduce the MicroPDF417 symbol size.
Special modes.
MicroPDF417 can encode data in special industrial modes, which include:
Structured append.
The MicroPDF417 barcode allows metadata to be added to the barcode symbol to describe the current symbol. However, because MicroPDF417 has a restricted capacity, this feature is used rarely. Some structured append fields are mandatory and must be set whenever structured append is added to the symbol, while other fields are optional. The possible structured append fields are shown in the following table:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "cluster = ((LeftRAP - 1)\\pmod{3}) * 3"
}
]
| https://en.wikipedia.org/wiki?curid=76305775 |
7630895 | Spouge's approximation | In mathematics, Spouge's approximation is a formula for computing an approximation of the gamma function. It was named after John L. Spouge, who defined the formula in a 1994 paper. The formula is a modification of Stirling's approximation, and has the form
formula_0
where "a" is an arbitrary positive integer and the coefficients are given by
formula_1
Spouge has proved that, if Re("z") > 0 and "a" > 2, the relative error in discarding "ε""a"("z") is bounded by
formula_2
The formula is similar to the Lanczos approximation, but has some distinct features. Whereas the Lanczos formula exhibits faster convergence, Spouge's coefficients are much easier to calculate and the error can be set arbitrarily low. The formula is therefore feasible for arbitrary-precision evaluation of the gamma function. However, special care must be taken to use sufficient precision when computing the sum due to the large size of the coefficients "ck", as well as their alternating sign. For example, for "a" = 49, one must compute the sum using about 65 decimal digits of precision in order to obtain the promised 40 decimal digits of accuracy.
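A straightforward double-precision implementation for real "z" > 0 might look as follows (an illustrative Python sketch; as noted above, double precision limits the achievable accuracy, and an arbitrary-precision library would be needed to benefit from large "a"):

import math

def spouge_gamma(z: float, a: int = 12) -> float:
    # approximates Gamma(z + 1) using Spouge's formula with parameter a > 2
    result = math.sqrt(2 * math.pi)                      # c_0
    for k in range(1, a):
        c_k = ((-1) ** (k - 1) / math.factorial(k - 1)
               * (a - k) ** (k - 0.5) * math.exp(a - k))
        result += c_k / (z + k)
    return (z + a) ** (z + 0.5) * math.exp(-z - a) * result

print(spouge_gamma(5.0))    # Gamma(6) = 5! = 120, so this should be close to 120
print(math.gamma(6.0))      # reference value from the standard library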
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Gamma(z+1) = (z+a)^{z+\\frac12} e^{-z-a} \\left( c_0 + \\sum_{k=1}^{a-1} \\frac{c_k}{z+k} + \\varepsilon_a(z) \\right)"
},
{
"math_id": 1,
"text": "\\begin{align} c_0 &= \\sqrt{2 \\pi}\\\\\nc_k &= \\frac{(-1)^{k-1}}{(k-1)!} (-k+a)^{k-\\frac12} e^{-k+a} \\qquad k\\in\\{1,2,\\dots, a-1\\}. \\end{align}"
},
{
"math_id": 2,
"text": "a^{-\\frac12} (2 \\pi)^{-a-\\frac12}."
}
]
| https://en.wikipedia.org/wiki?curid=7630895 |
7632450 | P-matrix | Complex square matrix for which every principal minor is positive
In mathematics, a P-matrix is a complex square matrix in which every principal minor is positive. A closely related class is that of formula_0-matrices, the closure of the class of P-matrices, in which every principal minor is formula_1 0.
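The definition can be checked directly by brute force. The following Python sketch (illustrative only, restricted to real matrices, and exponential in the matrix dimension) tests whether every principal minor is positive:

import itertools
import numpy as np

def is_p_matrix(A, tol=1e-12):
    # a P-matrix has every principal minor (determinant of the submatrix on a
    # nonempty index set) strictly positive
    n = A.shape[0]
    for r in range(1, n + 1):
        for idx in itertools.combinations(range(n), r):
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True

A = np.array([[2.0, -1.0],
              [1.0,  1.0]])   # principal minors: 2, 1 and det(A) = 3, all positive
print(is_p_matrix(A))         # True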
Spectra of P-matrices.
By a theorem of Kellogg, the eigenvalues of P- and formula_0- matrices are bounded away from a wedge about the negative real axis as follows:
If formula_2 are the eigenvalues of an n-dimensional P-matrix, where formula_3, then
formula_4
If formula_2, formula_5, formula_6 are the eigenvalues of an n-dimensional formula_0-matrix, then
formula_7
Remarks.
The class of nonsingular "M"-matrices is a subset of the class of P-matrices. More precisely, all matrices that are both P-matrices and "Z"-matrices are nonsingular M-matrices. The class of sufficient matrices is another generalization of P-matrices.
The linear complementarity problem formula_8 has a unique solution for every vector q if and only if M is a P-matrix. This implies that if M is a P-matrix, then M is a Q-matrix.
If the Jacobian of a function is a P-matrix, then the function is injective on any rectangular region of formula_9.
A related class of interest, particularly with reference to stability, is that of formula_10-matrices, sometimes also referred to as formula_11-matrices. A matrix A is a formula_10-matrix if and only if formula_12 is a P-matrix (similarly for formula_0-matrices). Since formula_13, the eigenvalues of these matrices are bounded away from the positive real axis. | [
{
"math_id": 0,
"text": "P_0"
},
{
"math_id": 1,
"text": "\\geq"
},
{
"math_id": 2,
"text": "\\{u_1,...,u_n\\}"
},
{
"math_id": 3,
"text": "n>1"
},
{
"math_id": 4,
"text": "|\\arg(u_i)| < \\pi - \\frac{\\pi}{n},\\ i = 1,...,n"
},
{
"math_id": 5,
"text": "u_i \\neq 0"
},
{
"math_id": 6,
"text": "i = 1,...,n"
},
{
"math_id": 7,
"text": "|\\arg(u_i)| \\leq \\pi - \\frac{\\pi}{n},\\ i = 1,...,n"
},
{
"math_id": 8,
"text": "\\mathrm{LCP}(M,q)"
},
{
"math_id": 9,
"text": "\\mathbb{R}^n"
},
{
"math_id": 10,
"text": "P^{(-)}"
},
{
"math_id": 11,
"text": "N-P"
},
{
"math_id": 12,
"text": "(-A)"
},
{
"math_id": 13,
"text": "\\sigma(A) = -\\sigma(-A)"
}
]
| https://en.wikipedia.org/wiki?curid=7632450 |
76340 | Principal component analysis | Method of data analysis
Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization and data preprocessing.
The data is linearly transformed onto a new coordinate system such that the directions (principal components) capturing the largest variation in the data can be easily identified.
The principal components of a collection of points in a real coordinate space are a sequence of formula_0 unit vectors, where the formula_1-th vector is the direction of a line that best fits the data while being orthogonal to the first formula_2 vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions (i.e., principal components) constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.
Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science.
Overview.
When performing PCA, the first principal component of a set of formula_0 variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through formula_0 iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to an independent set.
The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The formula_1-th principal component can be taken as a direction orthogonal to the first formula_2 principal components that maximizes the variance of the projected data.
For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed.
History.
PCA was invented in 1901 by Karl Pearson, as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s. Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (invented in the last quarter of the 19th century), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's "Principal Component Analysis"), Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science (Lorenz, 1956), empirical eigenfunction decomposition (Sirovich, 1987), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.
Intuition.
PCA can be thought of as fitting a "p"-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.
To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.
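The steps just described (centering, forming the covariance matrix, and extracting and sorting its eigenvalues and eigenvectors) can be carried out in a few lines of Python with NumPy; the following is an illustrative sketch on synthetic data, not a reference implementation:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.5, 0.2, 0.1]])   # correlated toy data

Xc = X - X.mean(axis=0)                 # 1. center each variable on 0
C = np.cov(Xc, rowvar=False)            # 2. covariance matrix (rows are observations)
eigvals, eigvecs = np.linalg.eigh(C)    # 3. eigen-decomposition of the symmetric matrix
order = np.argsort(eigvals)[::-1]       # 4. sort axes by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()     # proportion of variance along each axis
scores = Xc @ eigvecs                   # data expressed in the principal axes
print(explained)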
Biplots and scree plots (degree of explained variance) are used to interpret findings of the PCA.
Details.
PCA is defined as an orthogonal linear transformation on a real inner product space that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.
Consider an formula_3 data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the "n" rows represents a different repetition of the experiment, and each of the "p" columns gives a particular kind of feature (say, the results from a particular sensor).
Mathematically, the transformation is defined by a set of size formula_4 of "p"-dimensional vectors of weights or coefficients formula_5 that map each row vector formula_6 of X to a new vector of principal component "scores" formula_7, given by
formula_8
in such a way that the individual variables formula_9 of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where formula_4 is usually selected to be strictly less than formula_0 to reduce dimensionality).
The above may equivalently be written in matrix form as
formula_10
where
formula_11,
formula_12, and
formula_13.
First component.
In order to maximize variance, the first weight vector w(1) thus has to satisfy
formula_14
Equivalently, writing this in matrix form gives
formula_15
Since w(1) has been defined to be a unit vector, it equivalently also satisfies
formula_16
The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector.
With w(1) found, the first principal component of a data vector x("i") can then be given as a score "t"1("i") = x("i") ⋅ w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x("i") ⋅ w(1)} w(1).
Further components.
The "k"-th component can be found by subtracting the first "k" − 1 principal components from X:
formula_17
and then finding the weight vector which extracts the maximum variance from this new data matrix
formula_18
It turns out that this gives the remaining eigenvectors of XTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XTX.
The "k"-th principal component of a data vector x("i") can therefore be given as a score "t""k"("i") = x("i") ⋅ w("k") in the transformed coordinates, or as the corresponding vector in the space of the original variables, {x("i") ⋅ w("k")} w("k"), where w("k") is the "k"th eigenvector of XTX.
The full principal components decomposition of X can therefore be given as
formula_10
where W is a "p"-by-"p" matrix of weights whose columns are the eigenvectors of XTX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called "loadings" in PCA or in Factor analysis.
Covariances.
XTX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset XT.
The sample covariance "Q" between two of the different principal components over the dataset is given by:
formula_19
where the eigenvalue property of w("k") has been used to move from line 2 to line 3. However eigenvectors w("j") and w("k") corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.
In matrix form, the empirical covariance matrix for the original variables can be written
formula_20
The empirical covariance matrix between the principal components becomes
formula_21
where Λ is the diagonal matrix of eigenvalues "λ"("k") of XTX. "λ"("k") is equal to the sum of the squares over the dataset associated with each component "k", that is, "λ"("k") = Σ"i" "t""k"2("i") = Σ"i" (x("i") ⋅ w("k"))2.
Dimensionality reduction.
The transformation T = X W maps a data vector x("i") from an original space of "p" variables to a new space of "p" variables which are uncorrelated over the dataset. However, not all the principal components need to be kept. Keeping only the first "L" principal components, produced by using only the first "L" eigenvectors, gives the truncated transformation
formula_22
where the matrix TL now has "n" rows but only "L" columns. In other words, PCA learns a linear transformation formula_23 where the columns of "p" × "L" matrix formula_24 form an orthogonal basis for the "L" features (the components of representation "t") that are decorrelated. By construction, of all the transformed data matrices with only "L" columns, this score matrix maximises the variance in the original data that has been preserved, while minimising the total squared reconstruction error formula_25 or formula_26.
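A minimal Python sketch of this truncation (keeping only the first "L" eigenvectors of the sample covariance matrix; the helper name is chosen here for exposition, and this is not a reference implementation):

import numpy as np

def truncate_pca(X, L):
    # returns the n x L score matrix T_L and the p x L weight matrix W_L,
    # so that the centered data is approximated by T_L @ W_L.T
    Xc = X - X.mean(axis=0)
    eigvals, W = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    W_L = W[:, order[:L]]
    return Xc @ W_L, W_L

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))
T2, W2 = truncate_pca(X, L=2)
X_hat = T2 @ W2.T                                    # rank-2 reconstruction of the centered data
err = np.linalg.norm((X - X.mean(axis=0)) - X_hat)   # Frobenius norm of the reconstruction residual
print(T2.shape, err)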
Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting "L" = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains clusters these too may be most spread out, and therefore most visible to be plotted out in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.
Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.
Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix W, which can be thought of as a high-dimensional rotation of the co-ordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less—the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using parametric bootstrap, as an aid in determining how many principal components to retain.
Singular value decomposition.
The principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X,
formula_27
Here Σ is an "n"-by-"p" rectangular diagonal matrix of positive numbers "σ"("k"), called the singular values of X; U is an "n"-by-"n" matrix, the columns of which are orthogonal unit vectors of length "n" called the left singular vectors of X; and W is a "p"-by-"p" matrix whose columns are orthogonal unit vectors of length "p" and called the right singular vectors of X.
In terms of this factorization, the matrix XTX can be written
formula_28
where formula_29 is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies formula_30. Comparison with the eigenvector factorization of XTX establishes that the right singular vectors W of X are equivalent to the eigenvectors of XTX, while the singular values "σ"("k") of formula_31 are equal to the square-root of the eigenvalues "λ"("k") of XTX.
Using the singular value decomposition the score matrix T can be written
formula_32
so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T.
Efficient algorithms exist to calculate the SVD of X without having to form the matrix XTX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix, unless only a handful of components are required.
As with the eigen-decomposition, a truncated "n" × "L" score matrix TL can be obtained by considering only the first L largest singular values and their singular vectors:
formula_33
The truncation of a matrix M or T using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of rank "L" to the original matrix, in the sense of the difference between the two having the smallest possible Frobenius norm, a result known as the Eckart–Young theorem [1936].
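The equivalence of the SVD route and the eigendecomposition route can be verified numerically. The following Python sketch (illustrative only) computes the scores as UΣ and compares the eigenvalues obtained from the singular values with those of the sample covariance matrix:

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
Xc = X - X.mean(axis=0)

U, s, Wt = np.linalg.svd(Xc, full_matrices=False)   # X = U Sigma W^T
T = U * s                                           # scores: T = X W = U Sigma
lam = s**2                                          # eigenvalues of X^T X
cov_eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))

# the eigenvalues of the sample covariance are lam / (n - 1)
print(np.allclose(np.sort(lam / (Xc.shape[0] - 1)), np.sort(cov_eigvals)))   # True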
Further considerations.
The singular values (in Σ) are the square roots of the eigenvalues of the matrix XTX. Each eigenvalue is proportional to the portion of the "variance" (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see below). PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements if compared, for example, and when applicable, to the discrete cosine transform, and in particular to the DCT-II which is simply known as the "DCT". Nonlinear dimensionality reduction techniques tend to be more computationally demanding than PCA.
PCA is sensitive to the scaling of the variables. If we have just two variables and they have the same sample variance and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius for example.) Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence use the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.
Mean subtraction (a.k.a. "mean centering") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data.
Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: "Pearson Product-Moment Correlation"). Also see the article by Kromrey & Foster-Johnson (1998) on "Mean-centering in Moderated Regression: Much Ado About Nothing". Since covariances are correlations of normalized variables (Z- or standard-scores) a PCA based on the correlation matrix of X is equal to a PCA based on the covariance matrix of Z, the standardized version of X.
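Numerically, this identity can be checked as follows (an illustrative Python sketch): the eigenvalues of the correlation matrix of X coincide with those of the covariance matrix of the standardized data Z.

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4)) * np.array([1.0, 10.0, 100.0, 0.01])   # very different scales

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)     # standardize each variable (Z-scores)
corr_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
covZ_eig = np.linalg.eigvalsh(np.cov(Z, rowvar=False))
print(np.allclose(np.sort(corr_eig), np.sort(covZ_eig)))   # True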
PCA is a popular primary technique in pattern recognition. It is not, however, optimized for class separability. However, it has been used to quantify the distance between two or more classes by calculating center of mass for each class in principal component space and reporting Euclidean distance between center of mass of two or more classes. The linear discriminant analysis is an alternative which is optimized for class separability.
Properties and limitations.
Properties.
Some properties of PCA include:
"Property 1": For any integer "q", 1 ≤ "q" ≤ "p", consider the orthogonal linear transformation
formula_34
where formula_35 is a "q"-element vector and formula_36 is a ("q" × "p") matrix, and let formula_37 be the variance-covariance matrix for formula_35. Then the trace of formula_38, denoted formula_39, is maximized by taking formula_40, where formula_41 consists of the first "q" columns of formula_42; formula_43 is the transpose of formula_44. (formula_42 is not defined here.)
"Property 2": Consider again the orthonormal transformation
formula_45
with formula_46 and formula_38 defined as before. Then formula_47 is minimized by taking formula_48 where formula_49 consists of the last "q" columns of formula_42.
The statistical implication of this property is that the last few PCs are not simply unstructured left-overs after removing the important PCs. Because these last PCs have variances as small as possible they are useful in their own right. They can help to detect unsuspected near-constant linear relationships between the elements of x, and they may also be useful in regression, in selecting a subset of variables from x, and in outlier detection.
"Property 3": (Spectral decomposition of Σ)
formula_50
Before we look at its usage, we first look at diagonal elements,
formula_51
Then, perhaps the main statistical implication of the result is that not only can we decompose the combined variances of all the elements of x into decreasing contributions due to each PC, but we can also decompose the whole covariance matrix into contributions formula_52 from each PC. Although not strictly decreasing, the elements of formula_52 will tend to become smaller as formula_53 increases, as formula_52 is nonincreasing for increasing formula_53, whereas the elements of formula_54 tend to stay about the same size because of the normalization constraints: formula_55.
Limitations.
As noted above, the results of PCA depend on the scaling of the variables. This can be cured by scaling each feature by its standard deviation, so that one ends up with dimensionless features with unital variance.
The applicability of PCA as described above is limited by certain (tacit) assumptions made in its derivation. In particular, PCA can capture linear correlations between the features but fails when this assumption is violated (see Figure 6a in the reference). In some cases, coordinate transformations can restore the linearity assumption and PCA can then be applied (see kernel PCA).
Another limitation is the mean-removal process before constructing the covariance matrix for PCA. In fields such as astronomy, all the signals are non-negative, and the mean-removal process will force the mean of some astrophysical exposures to be zero, which consequently creates unphysical negative fluxes, and forward modeling has to be performed to recover the true magnitude of the signals. As an alternative method, non-negative matrix factorization focusing only on the non-negative elements in the matrices, which is well-suited for astrophysical observations. See more at Relation between PCA and Non-negative Matrix Factorization.
PCA is at a disadvantage if the data has not been standardized before applying the algorithm to it. PCA transforms original data into data that is relevant to the principal components of that data, which means that the new data variables cannot be interpreted in the same ways that the originals were. They are linear interpretations of the original variables. Also, if PCA is not performed properly, there is a high likelihood of information loss.
PCA relies on a linear model. If a dataset has a pattern hidden inside it that is nonlinear, then PCA can actually steer the analysis in the complete opposite direction of progress. Researchers at Kansas State University discovered that the sampling error in their experiments impacted the bias of PCA results. "If the number of subjects or blocks is smaller than 30, and/or the researcher is interested in PC's beyond the first, it may be better to first correct for the serial correlation, before PCA is conducted". The researchers at Kansas State also found that PCA could be "seriously biased if the autocorrelation structure of the data is not correctly handled".
PCA and information theory.
Dimensionality reduction results in a loss of information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models.
Under the assumption that
formula_56
that is, that the data vector formula_57 is the sum of the desired information-bearing signal formula_58 and a noise signal formula_59 one can show that PCA can be optimal for dimensionality reduction, from an information-theoretic point-of-view.
In particular, Linsker showed that if formula_58 is Gaussian and formula_59 is Gaussian noise with a covariance matrix proportional to the identity matrix, the PCA maximizes the mutual information formula_60 between the desired information formula_58 and the dimensionality-reduced output formula_61.
If the noise is still Gaussian and has a covariance matrix proportional to the identity matrix (that is, the components of the vector formula_59 are iid), but the information-bearing signal formula_58 is non-Gaussian (which is a common scenario), PCA at least minimizes an upper bound on the "information loss", which is defined as
formula_62
The optimality of PCA is also preserved if the noise formula_59 is iid and at least more Gaussian (in terms of the Kullback–Leibler divergence) than the information-bearing signal formula_58. In general, even if the above signal model holds, PCA loses its information-theoretic optimality as soon as the noise formula_59 becomes dependent.
Computation using the covariance method.
The following is a detailed description of PCA using the covariance method as opposed to the correlation method.
The goal is to transform a given data set X of dimension "p" to an alternative data set Y of smaller dimension "L". Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X:
formula_63
Derivation using the covariance method.
Let X be a "d"-dimensional random vector expressed as column vector. Without loss of generality, assume X has zero mean.
We want to find formula_64 a "d" × "d" orthonormal transformation matrix P so that PX has a diagonal covariance matrix (that is, PX is a random vector with all its distinct components pairwise uncorrelated).
A quick computation, assuming formula_65 to be unitary, yields:
formula_66
Hence formula_64 holds if and only if formula_67 is diagonalisable by formula_65.
This is very constructive, as cov(X) is guaranteed to be a non-negative definite matrix and thus is guaranteed to be diagonalisable by some unitary matrix.
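A hedged NumPy sketch of the covariance method described above (the data are simulated and the variable names are illustrative; in practice a numerically stabler SVD-based routine is often preferred):

import numpy as np

X = np.random.default_rng(1).normal(size=(100, 5))   # illustrative data, observations in rows
Xc = X - X.mean(axis=0)                              # subtract the empirical mean
C = np.cov(Xc, rowvar=False)                         # d x d covariance matrix (non-negative definite)

eigvals, eigvecs = np.linalg.eigh(C)                 # diagonalise the symmetric matrix cov(X)
order = np.argsort(eigvals)[::-1]                    # sort eigenvalues in decreasing order
P = eigvecs[:, order].T                              # rows of P are the orthonormal principal directions

Y = Xc @ P.T                                         # transformed data; cov(Y) is (approximately) diagonal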
Covariance-free computation.
In practical implementations, especially with high-dimensional data (large p), the naive covariance method is rarely used because it is not efficient due to the high computational and memory costs of explicitly determining the covariance matrix. The covariance-free approach avoids the np^2 operations of explicitly calculating and storing the covariance matrix X^T X, instead utilizing one of the matrix-free methods, for example, based on the function evaluating the product X^T(X r) at the cost of 2np operations.
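A sketch of the matrix-free product such methods rely on (NumPy; nothing beyond the product itself is implied):

import numpy as np

def cov_matvec(X, r):
    # Evaluate (X^T X) r with two matrix-vector products (cost about 2np),
    # without ever forming or storing the p x p matrix X^T X.
    return X.T @ (X @ r)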
Iterative computation.
One way to compute the first principal component efficiently is shown in the following pseudo-code, for a data matrix X with zero mean, without ever computing its covariance matrix.
r = a random vector of length p
r = r / norm(r)
do c times:
      s = 0 (a vector of length p)
      for each row x in X
            s = s + (x ⋅ r) x
      λ = r^T s // λ is the eigenvalue
      error = |λ ⋅ r − s|
      r = s / norm(s)
      exit if error < tolerance
return λ, r
This power iteration algorithm simply calculates the vector X^T(X r), normalizes, and places the result back in r. The eigenvalue is approximated by r^T (X^T X) r, which is the Rayleigh quotient on the unit vector r for the covariance matrix X^T X. If the largest singular value is well separated from the next largest one, the vector r gets close to the first principal component of X within the number of iterations c, which is small relative to p, at a total cost of 2cnp operations. The power iteration convergence can be accelerated without noticeably sacrificing the small cost per iteration by using more advanced matrix-free methods, such as the Lanczos algorithm or the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.
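A direct NumPy translation of the pseudo-code above, offered as a hedged sketch (the seed, iteration cap and tolerance are arbitrary choices; X is assumed to be mean-centred with observations in rows):

import numpy as np

def first_principal_component(X, c=100, tol=1e-10):
    p = X.shape[1]
    r = np.random.default_rng(2).normal(size=p)
    r /= np.linalg.norm(r)                   # random unit start vector
    lam = 0.0
    for _ in range(c):
        s = X.T @ (X @ r)                    # matrix-free evaluation of (X^T X) r
        lam = r @ s                          # Rayleigh quotient estimate of the eigenvalue
        err = np.linalg.norm(lam * r - s)
        r = s / np.linalg.norm(s)
        if err < tol:
            break
    return lam, r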
Subsequent principal components can be computed one-by-one via deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate principal components additively affect the accuracy of the subsequently computed principal components, thus increasing the error with every new computation. The latter approach in the block power method replaces single-vectors r and s with block-vectors, matrices R and S. Every column of R approximates one of the leading principal components, while all columns are iterated simultaneously. The main calculation is evaluation of the product XT(X R). Implemented, for example, in LOBPCG, efficient blocking eliminates the accumulation of the errors, allows using high-level BLAS matrix-matrix product functions, and typically leads to faster convergence, compared to the single-vector one-by-one technique.
The NIPALS method.
"Non-linear iterative partial least squares (NIPALS)" is a variant the classical power iteration with matrix deflation by subtraction implemented for computing the first few components in a principal component or partial least squares analysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example, genomics, metabolomics) it is usually only necessary to compute the first few PCs. The non-linear iterative partial least squares (NIPALS) algorithm updates iterative approximations to the leading scores and loadings t1 and r1T by the power iteration multiplying on every iteration by X on the left and on the right, that is, calculation of the covariance matrix is avoided, just as in the matrix-free implementation of the power iterations to XTX, based on the function evaluating the product XT(X r) = ((X r)TX)T.
The matrix deflation by subtraction is performed by subtracting the outer product t_1 r_1^T from X, leaving the deflated residual matrix used to calculate the subsequent leading PCs.
For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of PCs due to machine precision round-off errors accumulated in each iteration and matrix deflation by subtraction. A Gram–Schmidt re-orthogonalization algorithm is applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality. NIPALS reliance on single-vector multiplications cannot take advantage of high-level BLAS and results in slow convergence for clustered leading singular values—both these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method.
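A hedged sketch of NIPALS with deflation by subtraction (NumPy; X is assumed mean-centred, and the initialisation, iteration cap and tolerance are illustrative choices rather than part of any published specification):

import numpy as np

def nipals_pca(X, n_components=2, iters=500, tol=1e-9):
    X = X.copy()
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()                   # initial guess for the score vector t
        for _ in range(iters):
            r = X.T @ t / (t @ t)            # loading update: multiply by X on the left
            r /= np.linalg.norm(r)
            t_new = X @ r                    # score update: multiply by X on the right
            converged = np.linalg.norm(t_new - t) < tol
            t = t_new
            if converged:
                break
        X -= np.outer(t, r)                  # deflation: subtract the rank-one product t r^T
        scores.append(t)
        loadings.append(r)
    return np.column_stack(scores), np.column_stack(loadings)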
Online/sequential estimation.
In an "online" or "streaming" situation with data arriving piece by piece rather than being stored in a single batch, it is useful to make an estimate of the PCA projection that can be updated sequentially. This can be done efficiently, but requires different algorithms.
Qualitative variables.
In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs. These data were subjected to PCA for quantitative variables. When analyzing the results, it is natural to connect the principal components to the qualitative variable "species".
For this, the following results are produced.
These results are what is called "introducing a qualitative variable as supplementary element". This procedure is detailed in Husson, Lê & Pagès (2009) and Pagès (2013).
Few software packages offer this option in an "automatic" way. This is the case for SPAD which, historically, following the work of Ludovic Lebart, was the first to propose this option, and for the R package FactoMineR.
Applications.
Intelligence.
The earliest application of factor analysis was in locating and measuring components of human intelligence. It was believed that intelligence had various uncorrelated components such as spatial intelligence, verbal intelligence, induction, deduction etc and that scores on these could be adduced by factor analysis from results on various tests, to give a single index known as the Intelligence Quotient (IQ). The pioneering statistical psychologist Spearman actually developed factor analysis in 1904 for his two-factor theory of intelligence, adding a formal technique to the science of psychometrics. In 1924 Thurstone looked for 56 factors of intelligence, developing the notion of Mental Age. Standard IQ tests today are based on this early work.
Residential differentiation.
In 1949, Shevky and Williams introduced the theory of factorial ecology, which dominated studies of residential differentiation from the 1950s to the 1970s. Neighbourhoods in a city were recognizable or could be distinguished from one another by various characteristics which could be reduced to three by factor analysis. These were known as 'social rank' (an index of occupational status), 'familism' or family size, and 'ethnicity'; Cluster analysis could then be applied to divide the city into clusters or precincts according to values of the three key factor variables. An extensive literature developed around factorial ecology in urban geography, but the approach went out of fashion after 1980 as being methodologically primitive and having little place in postmodern geographical paradigms.
One of the problems with factor analysis has always been finding convincing names for the various artificial factors. In 2000, Flood revived the factorial ecology approach to show that principal components analysis actually gave meaningful answers directly, without resorting to factor rotation. The principal components were actually dual variables or shadow prices of 'forces' pushing people together or apart in cities. The first component was 'accessibility', the classic trade-off between demand for travel and demand for space, around which classical urban economics is based. The next two components were 'disadvantage', which keeps people of similar status in separate neighbourhoods (mediated by planning), and ethnicity, where people of similar ethnic backgrounds try to co-locate.
About the same time, the Australian Bureau of Statistics defined distinct indexes of advantage and disadvantage taking the first principal component of sets of key variables that were thought to be important. These SEIFA indexes are regularly published for various jurisdictions, and are used frequently in spatial analysis.
Development indexes.
PCA has been the only formal method available for the development of indexes, which are otherwise a hit-or-miss "ad hoc" undertaking.
The City Development Index was developed by PCA from about 200 indicators of city outcomes in a 1996 survey of 254 global cities. The first principal component was subject to iterative regression, adding the original variables singly until about 90% of its variation was accounted for. The index ultimately used about 15 indicators but was a good predictor of many more variables. Its comparative value agreed very well with a subjective assessment of the condition of each city. The coefficients on items of infrastructure were roughly proportional to the average costs of providing the underlying services, suggesting the Index was actually a measure of effective physical and social investment in the city.
The country-level Human Development Index (HDI) from UNDP, which has been published since 1990 and is very extensively used in development studies, has very similar coefficients on similar indicators, strongly suggesting it was originally constructed using PCA.
Population genetics.
In 1978 Cavalli-Sforza and others pioneered the use of principal components analysis (PCA) to summarise data on variation in human gene frequencies across regions. The components showed distinctive patterns, including gradients and sinusoidal waves. They interpreted these patterns as resulting from specific ancient migration events.
Since then, PCA has been ubiquitous in population genetics, with thousands of papers using PCA as a display mechanism. Genetics varies largely according to proximity, so the first two principal components actually show spatial distribution and may be used to map the relative geographical location of different population groups, thereby showing individuals who have wandered from their original locations.
PCA in genetics has been technically controversial, in that the technique has been performed on discrete non-normal variables and often on binary allele markers. The lack of any measures of standard error in PCA is also an impediment to more consistent usage. In August 2022, the molecular biologist Eran Elhaik published a theoretical paper in Scientific Reports analyzing 12 PCA applications. He concluded that it was easy to manipulate the method, which, in his view, generated results that were 'erroneous, contradictory, and absurd.' Specifically, he argued, the results achieved in population genetics were characterized by cherry-picking and circular reasoning.
Market research and indexes of attitude.
Market research has been an extensive user of PCA. It is used to develop customer satisfaction or customer loyalty scores for products, and with clustering, to develop market segments that may be targeted with advertising campaigns, in much the same way as factorial ecology will locate geographical areas with similar characteristics.
PCA rapidly transforms large amounts of data into smaller, easier-to-digest variables that can be more rapidly and readily analyzed. In any consumer questionnaire, there are series of questions designed to elicit consumer attitudes, and principal components seek out latent variables underlying these attitudes. For example, the Oxford Internet Survey in 2013 asked 2000 people about their attitudes and beliefs, and from these analysts extracted four principal component dimensions, which they identified as 'escape', 'social networking', 'efficiency', and 'problem creating'.
Another example from Joe Flood in 2008 extracted an attitudinal index toward housing from 28 attitude questions in a national survey of 2697 households in Australia. The first principal component represented a general attitude toward property and home ownership. The index, or the attitude questions it embodied, could be fed into a General Linear Model of tenure choice. The strongest determinant of private renting by far was the attitude index, rather than income, marital status or household type.
Quantitative finance.
In quantitative finance, PCA is used
in financial risk management, and has been applied to other problems such as portfolio optimization.
PCA is commonly used in problems involving fixed income securities and portfolios, and interest rate derivatives.
Valuations here depend on the entire yield curve, comprising numerous highly correlated instruments, and PCA is used to define a set of components or factors that explain rate movements,
thereby facilitating the modelling.
One common risk management application is calculating value at risk, VaR, applying PCA to the Monte Carlo simulation.
Here, for each simulation-sample, the components are stressed, and rates, and in turn option values, are then reconstructed;
with VaR calculated, finally, over the entire run.
PCA is also used in hedging exposure to interest rate risk, given partial durations and other sensitivities.
Under both, typically the first three principal components of the system are of interest (representing "shift", "twist", and "curvature").
These principal components are derived from an eigen-decomposition of the covariance matrix of yields at predefined maturities;
and where the variance of each component is its eigenvalue (and as the components are orthogonal, no correlation need be incorporated in subsequent modelling).
For equity, an optimal portfolio is one where the expected return is maximized for a given level of risk, or alternatively, where risk is minimized for a given return; see Markowitz model for discussion.
Thus, one approach is to reduce portfolio risk, where allocation strategies are applied to the "principal portfolios" instead of the underlying stocks.
A second approach is to enhance portfolio return, using the principal components to select companies' stocks with upside potential.
PCA has also been used to understand relationships between international equity markets, and within markets between groups of companies in industries or sectors.
PCA may also be applied to stress testing, essentially an analysis of a bank's ability to endure a hypothetical adverse economic scenario. Its utility is in "distilling the information contained in [several] macroeconomic variables into a more manageable data set, which can then [be used] for analysis." Here, the resulting factors are linked to e.g. interest rates – based on the largest elements of the factor's eigenvector – and it is then observed how a "shock" to each of the factors affects the implied assets of each of the banks.
Neuroscience.
A variant of principal components analysis is used in neuroscience to identify the specific properties of a stimulus that increases a neuron's probability of generating an action potential. This technique is known as spike-triggered covariance analysis. In a typical application an experimenter presents a white noise process as a stimulus (usually either as a sensory input to a test subject, or as a current injected directly into the neuron) and records a train of action potentials, or spikes, produced by the neuron as a result. Presumably, certain features of the stimulus make the neuron more likely to spike. In order to extract these features, the experimenter calculates the covariance matrix of the "spike-triggered ensemble", the set of all stimuli (defined and discretized over a finite time window, typically on the order of 100 ms) that immediately preceded a spike. The eigenvectors of the difference between the spike-triggered covariance matrix and the covariance matrix of the "prior stimulus ensemble" (the set of all stimuli, defined over the same length time window) then indicate the directions in the space of stimuli along which the variance of the spike-triggered ensemble differed the most from that of the prior stimulus ensemble. Specifically, the eigenvectors with the largest positive eigenvalues correspond to the directions along which the variance of the spike-triggered ensemble showed the largest positive change compared to the variance of the prior. Since these were the directions in which varying the stimulus led to a spike, they are often good approximations of the sought after relevant stimulus features.
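A schematic sketch of the spike-triggered covariance computation just described (NumPy; the array names and shapes are assumptions made for illustration, with one stimulus window per time bin and a 0/1 spike indicator):

import numpy as np

def stc_directions(stimuli, spikes):
    prior_cov = np.cov(stimuli, rowvar=False)          # covariance of the prior stimulus ensemble
    triggered = stimuli[spikes.astype(bool)]           # stimuli that immediately preceded a spike
    triggered_cov = np.cov(triggered, rowvar=False)    # covariance of the spike-triggered ensemble
    diff = triggered_cov - prior_cov                   # change in covariance associated with spiking
    eigvals, eigvecs = np.linalg.eigh(diff)
    order = np.argsort(eigvals)[::-1]                  # largest positive eigenvalues first
    return eigvals[order], eigvecs[:, order]           # leading eigenvectors = candidate relevant features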
In neuroscience, PCA is also used to discern the identity of a neuron from the shape of its action potential. Spike sorting is an important procedure because extracellular recording techniques often pick up signals from more than one neuron. In spike sorting, one first uses PCA to reduce the dimensionality of the space of action potential waveforms, and then performs clustering analysis to associate specific action potentials with individual neurons.
PCA as a dimension reduction technique is particularly suited to detect coordinated activities of large neuronal ensembles. It has been used in determining collective variables, that is, order parameters, during phase transitions in the brain.
Relation with other methods.
Correspondence analysis.
Correspondence analysis (CA)
was developed by Jean-Paul Benzécri
and is conceptually similar to PCA, but scales the data (which should be non-negative) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables.
CA decomposes the chi-squared statistic associated to this table into orthogonal factors.
Because CA is a descriptive technique, it can be applied to tables whether or not the chi-squared statistic is appropriate.
Several variants of CA are available including detrended correspondence analysis and canonical correspondence analysis. One special extension is multiple correspondence analysis, which may be seen as the counterpart of principal component analysis for categorical data.
Factor analysis.
Principal component analysis creates variables that are linear combinations of the original variables. The new variables have the property that the variables are all orthogonal. The PCA transformation can be helpful as a pre-processing step before clustering. PCA is a variance-focused approach seeking to reproduce the total variable variance, in which components reflect both common and unique variance of the variable. PCA is generally preferred for purposes of data reduction (that is, translating variable space into optimal factor space) but not when the goal is to detect the latent construct or factors.
Factor analysis is similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors "represent the common variance of variables, excluding unique variance". In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal. However, as a side result, when trying to reproduce the on-diagonal terms, PCA also tends to fit relatively well the off-diagonal correlations. Results given by PCA and factor analysis are very similar in most situations, but this is not always the case, and there are some problems where the results are significantly different. Factor analysis is generally used when the research purpose is detecting data structure (that is, latent constructs or factors) or causal modeling. If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results.
K-means clustering.
It has been asserted that the relaxed solution of k-means clustering, specified by the cluster indicators, is given by the principal components, and the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace. However, that PCA is a useful relaxation of k-means clustering was not a new result, and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.
Non-negative matrix factorization.
Non-negative matrix factorization (NMF) is a dimension reduction method where only non-negative elements in the matrices are used, which is therefore a promising method in astronomy, in the sense that astrophysical signals are non-negative. The PCA components are orthogonal to each other, while the NMF components are all non-negative and therefore constructs a non-orthogonal basis.
In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. For NMF, its components are ranked based only on the empirical FRV curves. The residual fractional eigenvalue plots, that is, formula_68 as a function of component number formula_53 given a total of formula_69 components, for PCA have a flat plateau, where no data is captured to remove the quasi-static noise, then the curves drop quickly as an indication of over-fitting (random noise). The FRV curves for NMF decrease continuously when the NMF components are constructed sequentially, indicating the continuous capturing of quasi-static noise; they then converge to higher levels than PCA, indicating the less over-fitting property of NMF.
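For concreteness, the residual fractional eigenvalue used in such plots can be computed from the eigenvalues as in this small sketch (NumPy; variable names are illustrative):

import numpy as np

def frv(eigenvalues, k):
    # Residual fraction of variance after keeping the first k of n components,
    # i.e. 1 - sum_{i<=k} lambda_i / sum_{j<=n} lambda_j.
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    return 1.0 - lam[:k].sum() / lam.sum()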
Iconography of correlations.
It is often difficult to interpret the principal components when the data include many variables of various origins, or when some variables are qualitative. This leads the PCA user to a delicate elimination of several variables. If observations or variables have an excessive impact on the direction of the axes, they should be removed and then projected as supplementary elements. In addition, it is necessary to avoid interpreting the proximities between the points close to the center of the factorial plane.
The iconography of correlations, on the contrary, which is not a projection on a system of axes, does not have these drawbacks. We can therefore keep all the variables.
The principle of the diagram is to underline the "remarkable" correlations of the correlation matrix, by a solid line (positive correlation) or dotted line (negative correlation).
A strong correlation is not "remarkable" if it is not direct, but caused by the effect of a third variable. Conversely, weak correlations can be "remarkable". For example, if a variable Y depends on several independent variables, the correlations of Y with each of them are weak and yet "remarkable".
Generalizations.
Sparse PCA.
A particular disadvantage of PCA is that the principal components are usually linear combinations of all input variables. Sparse PCA overcomes this disadvantage by finding linear combinations that contain just a few input variables. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by adding sparsity constraint on the input variables.
Several approaches have been proposed, including
The methodological and theoretical developments of Sparse PCA as well as its applications in scientific studies were recently reviewed in a survey paper.
Nonlinear PCA.
Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. Pearson's original idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. Trevor Hastie expanded on this concept by proposing Principal curves as the natural extension for the geometric interpretation of PCA, which explicitly constructs a manifold for data approximation followed by projecting the points onto it. See also the elastic map algorithm and principal geodesic analysis. Another popular generalization is kernel PCA, which corresponds to PCA performed in a reproducing kernel Hilbert space associated with a positive definite kernel.
In multilinear subspace learning, PCA is generalized to multilinear PCA (MPCA) that extracts features directly from tensor representations. MPCA is solved by performing PCA in each mode of the tensor iteratively. MPCA has been applied to face recognition, gait recognition, etc. MPCA is further extended to uncorrelated MPCA, non-negative MPCA and robust MPCA.
"N"-way principal component analysis may be performed with models such as Tucker decomposition, PARAFAC, multiple factor analysis, co-inertia analysis, STATIS, and DISTATIS.
Robust PCA.
While PCA finds the mathematically optimal method (as in minimizing the squared error), it is still sensitive to outliers in the data that produce large errors, something that the method tries to avoid in the first place. It is therefore common practice to remove outliers before computing PCA. However, in some contexts, outliers can be difficult to identify. For example, in data mining algorithms like correlation clustering, the assignment of points to clusters and outliers is not known beforehand.
A recently proposed generalization of PCA based on a weighted PCA increases robustness by assigning different weights to data objects based on their estimated relevancy.
Outlier-resistant variants of PCA have also been proposed, based on L1-norm formulations (L1-PCA).
Robust principal component analysis (RPCA) via decomposition in low-rank and sparse matrices is a modification of PCA that works well with respect to grossly corrupted observations.
Similar techniques.
Independent component analysis.
Independent component analysis (ICA) is directed to similar problems as principal component analysis, but finds additively separable components rather than successive approximations.
Network component analysis.
Given a matrix formula_70, it tries to decompose it into two matrices such that formula_71. A key difference from techniques such as PCA and ICA is that some of the entries of formula_72 are constrained to be 0. Here formula_65 is termed the regulatory layer. While in general such a decomposition can have multiple solutions, they prove that if the following conditions are satisfied :
then the decomposition is unique up to multiplication by a scalar.
Discriminant analysis of principal components.
Discriminant analysis of principal components (DAPC) is a multivariate method used to identify and describe clusters of genetically related individuals. Genetic variation is partitioned into two components: variation between groups and within groups, and it maximizes the former. Linear discriminants are linear combinations of alleles which best separate the clusters. Alleles that most contribute to this discrimination are therefore those that are the most markedly different across groups. The contributions of alleles to the groupings identified by DAPC can allow identifying regions of the genome driving the genetic divergence among groups
In DAPC, data is first transformed using a principal components analysis (PCA) and subsequently clusters are identified using discriminant analysis (DA).
A DAPC can be realized on R using the package Adegenet. (more info: adegenet on the web)
Directional component analysis.
Directional component analysis (DCA) is a method used in the atmospheric sciences for analysing multivariate datasets.
Like PCA, it allows for dimension reduction, improved visualization and improved interpretability of large data-sets.
Also like PCA, it is based on a covariance matrix derived from the input dataset.
The difference between PCA and DCA is that DCA additionally requires the input of a vector direction, referred to as the impact.
Whereas PCA maximises explained variance, DCA maximises probability density given impact.
The motivation for DCA is to find components of a multivariate dataset that are both likely (measured using probability density) and important (measured using the impact).
DCA has been used to find the most likely and most serious heat-wave patterns in weather prediction ensembles
, and the most likely and most impactful changes in rainfall due to climate change.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "i-1"
},
{
"math_id": 3,
"text": "n \\times p"
},
{
"math_id": 4,
"text": "l"
},
{
"math_id": 5,
"text": "\\mathbf{w}_{(k)} = (w_1, \\dots, w_p)_{(k)} "
},
{
"math_id": 6,
"text": "\\mathbf{x}_{(i)} = (x_1, \\dots, x_p)_{(i)}"
},
{
"math_id": 7,
"text": "\\mathbf{t}_{(i)} = (t_1, \\dots, t_l)_{(i)}"
},
{
"math_id": 8,
"text": "{t_{k}}_{(i)} = \\mathbf{x}_{(i)} \\cdot \\mathbf{w}_{(k)} \\qquad \\mathrm{for} \\qquad i = 1,\\dots,n \\qquad k = 1,\\dots,l "
},
{
"math_id": 9,
"text": "t_1, \\dots, t_l"
},
{
"math_id": 10,
"text": "\\mathbf{T} = \\mathbf{X} \\mathbf{W}"
},
{
"math_id": 11,
"text": "{\\mathbf{T}}_{ik} = {t_{k}}_{(i)}"
},
{
"math_id": 12,
"text": "{\\mathbf{X}}_{ij} = {x_{j}}_{(i)}"
},
{
"math_id": 13,
"text": "{\\mathbf{W}}_{jk} = {w_{j}}_{(k)}"
},
{
"math_id": 14,
"text": "\\mathbf{w}_{(1)}\n = \\arg\\max_{\\Vert \\mathbf{w} \\Vert = 1} \\,\\left\\{ \\sum_i (t_1)^2_{(i)} \\right\\}\n = \\arg\\max_{\\Vert \\mathbf{w} \\Vert = 1} \\,\\left\\{ \\sum_i \\left(\\mathbf{x}_{(i)} \\cdot \\mathbf{w} \\right)^2 \\right\\}"
},
{
"math_id": 15,
"text": "\\mathbf{w}_{(1)}\n = \\arg\\max_{\\left\\| \\mathbf{w} \\right\\| = 1} \\left\\{ \\left\\| \\mathbf{Xw} \\right\\|^2 \\right\\}\n = \\arg\\max_{\\left\\| \\mathbf{w} \\right\\| = 1} \\left\\{ \\mathbf{w}^\\mathsf{T} \\mathbf{X}^\\mathsf{T} \\mathbf{X w} \\right\\}"
},
{
"math_id": 16,
"text": "\\mathbf{w}_{(1)} = \\arg\\max \\left\\{ \\frac{\\mathbf{w}^\\mathsf{T} \\mathbf{X}^\\mathsf{T} \\mathbf{X w}}{\\mathbf{w}^\\mathsf{T} \\mathbf{w}} \\right\\}"
},
{
"math_id": 17,
"text": "\\mathbf{\\hat{X}}_k = \\mathbf{X} - \\sum_{s = 1}^{k - 1} \\mathbf{X} \\mathbf{w}_{(s)} \\mathbf{w}_{(s)}^{\\mathsf{T}} "
},
{
"math_id": 18,
"text": "\\mathbf{w}_{(k)}\n= \\mathop{\\operatorname{arg\\,max}}_{\\left\\| \\mathbf{w} \\right\\| = 1} \\left\\{ \\left\\| \\mathbf{\\hat{X}}_{k} \\mathbf{w} \\right\\|^2 \\right\\}\n= \\arg\\max \\left\\{ \\tfrac{\\mathbf{w}^\\mathsf{T} \\mathbf{\\hat{X}}_{k}^\\mathsf{T} \\mathbf{\\hat{X}}_{k} \\mathbf{w}}{\\mathbf{w}^T \\mathbf{w}} \\right\\}"
},
{
"math_id": 19,
"text": "\\begin{align}\nQ(\\mathrm{PC}_{(j)}, \\mathrm{PC}_{(k)})\n& \\propto (\\mathbf{X}\\mathbf{w}_{(j)})^\\mathsf{T} (\\mathbf{X}\\mathbf{w}_{(k)}) \\\\\n& = \\mathbf{w}_{(j)}^\\mathsf{T} \\mathbf{X}^\\mathsf{T} \\mathbf{X} \\mathbf{w}_{(k)} \\\\\n& = \\mathbf{w}_{(j)}^\\mathsf{T} \\lambda_{(k)} \\mathbf{w}_{(k)} \\\\\n& = \\lambda_{(k)} \\mathbf{w}_{(j)}^\\mathsf{T} \\mathbf{w}_{(k)}\n\\end{align}"
},
{
"math_id": 20,
"text": "\\mathbf{Q} \\propto \\mathbf{X}^\\mathsf{T} \\mathbf{X} = \\mathbf{W} \\mathbf{\\Lambda} \\mathbf{W}^\\mathsf{T}"
},
{
"math_id": 21,
"text": "\\mathbf{W}^\\mathsf{T} \\mathbf{Q} \\mathbf{W}\n\\propto \\mathbf{W}^\\mathsf{T} \\mathbf{W} \\, \\mathbf{\\Lambda} \\, \\mathbf{W}^\\mathsf{T} \\mathbf{W}\n= \\mathbf{\\Lambda} "
},
{
"math_id": 22,
"text": "\\mathbf{T}_L = \\mathbf{X} \\mathbf{W}_L"
},
{
"math_id": 23,
"text": " t = W_L^\\mathsf{T} x, x \\in \\mathbb{R}^p, t \\in \\mathbb{R}^L,"
},
{
"math_id": 24,
"text": "W_L"
},
{
"math_id": 25,
"text": "\\|\\mathbf{T}\\mathbf{W}^T - \\mathbf{T}_L\\mathbf{W}^T_L\\|_2^2"
},
{
"math_id": 26,
"text": "\\|\\mathbf{X} - \\mathbf{X}_L\\|_2^2"
},
{
"math_id": 27,
"text": "\\mathbf{X} = \\mathbf{U}\\mathbf{\\Sigma}\\mathbf{W}^T"
},
{
"math_id": 28,
"text": "\\begin{align}\n\\mathbf{X}^T\\mathbf{X}\n& = \\mathbf{W}\\mathbf{\\Sigma}^\\mathsf{T} \\mathbf{U}^\\mathsf{T} \\mathbf{U}\\mathbf{\\Sigma}\\mathbf{W}^\\mathsf{T} \\\\\n& = \\mathbf{W}\\mathbf{\\Sigma}^\\mathsf{T} \\mathbf{\\Sigma} \\mathbf{W}^\\mathsf{T} \\\\\n& = \\mathbf{W}\\mathbf{\\hat{\\Sigma}}^2 \\mathbf{W}^\\mathsf{T}\n\\end{align}"
},
{
"math_id": 29,
"text": " \\mathbf{\\hat{\\Sigma}} "
},
{
"math_id": 30,
"text": " \\mathbf{\\hat{\\Sigma}^2}=\\mathbf{\\Sigma}^\\mathsf{T} \\mathbf{\\Sigma} "
},
{
"math_id": 31,
"text": " \\mathbf{{X}}"
},
{
"math_id": 32,
"text": "\\begin{align}\n\\mathbf{T}\n& = \\mathbf{X} \\mathbf{W} \\\\\n& = \\mathbf{U}\\mathbf{\\Sigma}\\mathbf{W}^\\mathsf{T} \\mathbf{W} \\\\\n& = \\mathbf{U}\\mathbf{\\Sigma}\n\\end{align}"
},
{
"math_id": 33,
"text": "\\mathbf{T}_L = \\mathbf{U}_L\\mathbf{\\Sigma}_L = \\mathbf{X} \\mathbf{W}_L "
},
{
"math_id": 34,
"text": "y =\\mathbf{B'}x"
},
{
"math_id": 35,
"text": "y"
},
{
"math_id": 36,
"text": "\\mathbf{B'}"
},
{
"math_id": 37,
"text": "\\mathbf{{\\Sigma}}_y = \\mathbf{B'}\\mathbf{\\Sigma}\\mathbf{B}"
},
{
"math_id": 38,
"text": "\\mathbf{\\Sigma}_y"
},
{
"math_id": 39,
"text": "\\operatorname{tr} (\\mathbf{\\Sigma}_y)"
},
{
"math_id": 40,
"text": "\\mathbf{B} = \\mathbf{A}_q"
},
{
"math_id": 41,
"text": "\\mathbf{A}_q"
},
{
"math_id": 42,
"text": "\\mathbf{A}"
},
{
"math_id": 43,
"text": "(\\mathbf{B'}"
},
{
"math_id": 44,
"text": "\\mathbf{B})"
},
{
"math_id": 45,
"text": "y = \\mathbf{B'}x"
},
{
"math_id": 46,
"text": "x, \\mathbf{B}, \\mathbf{A}"
},
{
"math_id": 47,
"text": "\\operatorname{tr}(\\mathbf{\\Sigma}_y)"
},
{
"math_id": 48,
"text": "\\mathbf{B} = \\mathbf{A}_q^*,"
},
{
"math_id": 49,
"text": "\\mathbf{A}_q^*"
},
{
"math_id": 50,
"text": "\\mathbf{{\\Sigma}} = \\lambda_1\\alpha_1\\alpha_1' + \\cdots + \\lambda_p\\alpha_p\\alpha_p'"
},
{
"math_id": 51,
"text": "\\operatorname{Var}(x_j) = \\sum_{k=1}^P \\lambda_k\\alpha_{kj}^2"
},
{
"math_id": 52,
"text": "\\lambda_k\\alpha_k\\alpha_k'"
},
{
"math_id": 53,
"text": "k"
},
{
"math_id": 54,
"text": "\\alpha_k"
},
{
"math_id": 55,
"text": "\\alpha_{k}'\\alpha_{k}=1, k=1, \\dots, p"
},
{
"math_id": 56,
"text": "\\mathbf{x}=\\mathbf{s}+\\mathbf{n},"
},
{
"math_id": 57,
"text": "\\mathbf{x}"
},
{
"math_id": 58,
"text": "\\mathbf{s}"
},
{
"math_id": 59,
"text": "\\mathbf{n}"
},
{
"math_id": 60,
"text": "I(\\mathbf{y};\\mathbf{s})"
},
{
"math_id": 61,
"text": "\\mathbf{y}=\\mathbf{W}_L^T\\mathbf{x}"
},
{
"math_id": 62,
"text": "I(\\mathbf{x};\\mathbf{s}) - I(\\mathbf{y};\\mathbf{s})."
},
{
"math_id": 63,
"text": " \\mathbf{Y} = \\mathbb{KLT} \\{ \\mathbf{X} \\} "
},
{
"math_id": 64,
"text": "(\\ast)"
},
{
"math_id": 65,
"text": "P"
},
{
"math_id": 66,
"text": "\\begin{align}\n\\operatorname{cov}(PX) &= \\operatorname{E}[PX~(PX)^{*}]\\\\\n &= \\operatorname{E}[PX~X^{*}P^{*}]\\\\\n &= P\\operatorname{E}[XX^{*}]P^{*}\\\\\n &= P\\operatorname{cov}(X)P^{-1}\\\\\n\\end{align}"
},
{
"math_id": 67,
"text": "\\operatorname{cov}(X)"
},
{
"math_id": 68,
"text": " 1-\\sum_{i=1}^k \\lambda_i\\Big/\\sum_{j=1}^n \\lambda_j"
},
{
"math_id": 69,
"text": "n"
},
{
"math_id": 70,
"text": "E"
},
{
"math_id": 71,
"text": "E=AP\n"
},
{
"math_id": 72,
"text": "A"
},
{
"math_id": 73,
"text": "L-1"
},
{
"math_id": 74,
"text": "L"
}
]
| https://en.wikipedia.org/wiki?curid=76340 |
7634908 | Intertemporal CAPM | Within mathematical finance, the intertemporal capital asset pricing model, or ICAPM, is an alternative to the CAPM provided by Robert Merton. It is a linear factor model with wealth as state variable that forecasts changes in the distribution of future returns or income.
In the ICAPM investors are solving lifetime consumption decisions when faced with more than one uncertainty. The main difference between ICAPM and standard CAPM is the additional state variables that acknowledge the fact that investors hedge against shortfalls in consumption or against changes in the future investment opportunity set.
Continuous time version.
Merton considers a continuous time market in equilibrium.
The state variable (X) follows a Brownian motion:
formula_0
The investor maximizes his Von Neumann–Morgenstern utility:
formula_1
where T is the time horizon and B[W(T),T] the utility from wealth (W).
The investor has the following constraint on wealth (W).
Let formula_2 be the weight invested in the asset i. Then:
formula_3
where formula_4 is the return on asset i.
The change in wealth is:
formula_5
We can use dynamic programming to solve the problem. For instance, if we consider a series of discrete time problems:
formula_6
Then, a Taylor expansion gives:
formula_7
where formula_8 is a value between t and t+dt.
Assuming that returns follow a Brownian motion:
formula_9
with:
formula_10
Then canceling out terms of second and higher order:
formula_11
Using the Bellman equation, we can restate the problem:
formula_12
subject to the wealth constraint previously stated.
Using Itô's lemma, we can rewrite:
formula_13
and the expected value:
formula_14
After some algebra, we have the following objective function:
formula_15
where formula_16 is the risk-free return.
First order conditions are:
formula_17
In matrix form, we have:
formula_18
where formula_19 is the vector of expected returns, formula_20 the covariance matrix of returns, formula_21 a unity vector, and formula_22 the covariance between returns and the state variable. The optimal weights are:
formula_23
Notice that the intertemporal model provides the same weights as the CAPM. Expected returns can be expressed as follows:
formula_24
where m is the market portfolio and h a portfolio to hedge the state variable.
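A minimal numerical sketch of the optimal-weight expression above (NumPy; every input, including the value-function coefficients, is an assumed illustrative figure rather than an estimate):

import numpy as np

alpha = np.array([0.08, 0.10, 0.12])        # expected returns of three risky assets
rf = 0.03                                   # risk-free rate
Omega = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])      # covariance matrix of returns
cov_rX = np.array([0.005, -0.002, 0.010])   # covariances of returns with the state variable X

A = 0.5   # stands in for -J_W / (J_WW * W), the risk-tolerance coefficient
H = 0.2   # stands in for -J_WX / (J_WW * W), the intertemporal hedging coefficient

Omega_inv = np.linalg.inv(Omega)
w_myopic = A * Omega_inv @ (alpha - rf)     # mean-variance (CAPM-like) component
w_hedging = H * Omega_inv @ cov_rX          # hedging demand against changes in X
w_star = w_myopic + w_hedging

The first term is the familiar mean-variance demand; the second is the hedging demand induced by the state variable.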
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " dX = \\mu dt + s dZ "
},
{
"math_id": 1,
"text": "E_o \\left\\{\\int_o^T U[C(t),t]dt + B[W(T),T] \\right\\} "
},
{
"math_id": 2,
"text": " w_i "
},
{
"math_id": 3,
"text": " W(t+dt) = [W(t) -C(t) dt]\\sum_{i=0}^n w_i[1+ r_i(t+ dt)] "
},
{
"math_id": 4,
"text": " r_i "
},
{
"math_id": 5,
"text": " dW=-C(t)dt +[W(t)-C(t)dt]\\sum w_i(t)r_i(t+dt) "
},
{
"math_id": 6,
"text": "\\max E_0 \\left\\{\\sum_{t=0}^{T-dt}\\int_t^{t+dt} U[C(s),s]ds + B[W(T),T] \\right\\} "
},
{
"math_id": 7,
"text": " \\int_t^{t+dt}U[C(s),s]ds= U[C(t),t]dt + \\frac{1}{2} U_t [C(t^*),t^*]dt^2 \\approx U[C(t),t]dt "
},
{
"math_id": 8,
"text": "t^*"
},
{
"math_id": 9,
"text": " r_i(t+dt) = \\alpha_i dt + \\sigma_i dz_i"
},
{
"math_id": 10,
"text": " E(r_i) = \\alpha_i dt \\quad ;\\quad E(r_i^2)=var(r_i)=\\sigma_i^2dt \\quad ;\\quad cov(r_i,r_j) = \\sigma_{ij}dt "
},
{
"math_id": 11,
"text": " dW \\approx [W(t) \\sum w_i \\alpha_i - C(t)]dt+W(t) \\sum w_i \\sigma_i dz_i"
},
{
"math_id": 12,
"text": " J(W,X,t) = max \\; E_t\\left\\{\\int_t^{t+dt} U[C(s),s]ds + J[W(t+dt),X(t+dt),t+dt]\\right\\}"
},
{
"math_id": 13,
"text": " dJ = J[W(t+dt),X(t+dt),t+dt]-J[W(t),X(t),t+dt]= J_t dt + J_W dW + J_X dX + \\frac{1}{2}J_{XX} dX^2 + \\frac{1}{2}J_{WW} dW^2 + J_{WX} dX dW"
},
{
"math_id": 14,
"text": " E_t J[W(t+dt),X(t+dt),t+dt]=J[W(t),X(t),t]+J_t dt + J_W E[dW]+ J_X E(dX) + \\frac{1}{2} J_{XX} var(dX)+\\frac{1}{2} J_{WW} var[dW] + J_{WX} cov(dX,dW)"
},
{
"math_id": 15,
"text": " max \\left\\{ U(C,t) + J_t + J_W W [\\sum_{i=1}^n w_i(\\alpha_i-r_f)+r_f] - J_WC + \\frac{W^2}{2} J_{WW}\\sum_{i=1}^n\\sum_{j=1}^n w_i w_j \\sigma_{ij} + J_X \\mu + \\frac{1}{2}J_{XX} s^2 + J_{WX} W \\sum_{i=1}^n w_i \\sigma_{iX} \\right\\} "
},
{
"math_id": 16,
"text": "r_f"
},
{
"math_id": 17,
"text": " J_W(\\alpha_i-r_f)+J_{WW}W \\sum_{j=1}^n w^*_j \\sigma_{ij} + J_{WX} \\sigma_{iX}=0 \\quad i=1,2,\\ldots,n"
},
{
"math_id": 18,
"text": " (\\alpha - r_f {\\mathbf 1}) = \\frac{-J_{WW}}{J_W} \\Omega w^* W + \\frac{-J_{WX}}{J_W} cov_{rX} "
},
{
"math_id": 19,
"text": "\\alpha"
},
{
"math_id": 20,
"text": " \\Omega "
},
{
"math_id": 21,
"text": " {\\mathbf 1}"
},
{
"math_id": 22,
"text": " cov_{rX} "
},
{
"math_id": 23,
"text": " {\\mathbf w^*} = \\frac{-J_W}{J_{WW} W}\\Omega^{-1}(\\alpha - r_f {\\mathbf 1}) - \\frac{J_{WX}}{J_{WW}W}\\Omega^{-1} cov_{rX}"
},
{
"math_id": 24,
"text": " \\alpha_i = r_f + \\beta_{im} (\\alpha_m - r_f) + \\beta_{ih}(\\alpha_h - r_f)"
}
]
| https://en.wikipedia.org/wiki?curid=7634908 |
7635201 | Invariant measure | In mathematics, an invariant measure is a measure that is preserved by some function. The function may be a geometric transformation. For examples, circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping, and a difference of slopes is invariant under shear mapping.
Ergodic theory is the study of invariant measures in dynamical systems. The Krylov–Bogolyubov theorem proves the existence of invariant measures under certain conditions on the function and space under consideration.
Definition.
Let formula_0 be a measurable space and let formula_1 be a measurable function from formula_2 to itself. A measure formula_3 on formula_0 is said to be invariant under formula_4 if, for every measurable set formula_5 in formula_6
formula_7
In terms of the pushforward measure, this states that formula_8
The collection of measures (usually probability measures) on formula_2 that are invariant under formula_4 is sometimes denoted formula_9 The collection of ergodic measures, formula_10 is a subset of formula_9 Moreover, any convex combination of two invariant measures is also invariant, so formula_11 is a convex set; formula_12 consists precisely of the extreme points of formula_9
In the case of a dynamical system formula_13 where formula_0 is a measurable space as before, formula_14 is a monoid and formula_15 is the flow map, a measure formula_3 on formula_0 is said to be an invariant measure if it is an invariant measure for each map formula_16 Explicitly, formula_3 is invariant if and only if
formula_17
Put another way, formula_3 is an invariant measure for a sequence of random variables formula_18 (perhaps a Markov chain or the solution to a stochastic differential equation) if, whenever the initial condition formula_19 is distributed according to formula_20 so is formula_21 for any later time formula_22
When the dynamical system can be described by a transfer operator, then the invariant measure is an eigenvector of the operator, corresponding to an eigenvalue of formula_23 this being the largest eigenvalue as given by the Frobenius–Perron theorem.
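For a finite-state Markov chain this statement becomes concrete: an invariant probability measure is a left eigenvector of the transition matrix with eigenvalue 1. A small NumPy sketch (the transition matrix is an arbitrary example):

import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])            # P[i, j] = probability of moving from state i to state j

eigvals, eigvecs = np.linalg.eig(P.T)      # left eigenvectors of P
k = np.argmin(np.abs(eigvals - 1.0))       # locate the eigenvalue 1
mu = np.real(eigvecs[:, k])
mu = mu / mu.sum()                         # normalise to a probability measure

print(mu)                                  # the invariant measure
print(mu @ P)                              # equals mu: the measure is preserved by the dynamics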
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X, \\Sigma)"
},
{
"math_id": 1,
"text": "f : X \\to X"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "\\Sigma,"
},
{
"math_id": 7,
"text": "\\mu\\left(f^{-1}(A)\\right) = \\mu(A)."
},
{
"math_id": 8,
"text": "f_*(\\mu) = \\mu."
},
{
"math_id": 9,
"text": "M_f(X)."
},
{
"math_id": 10,
"text": "E_f(X),"
},
{
"math_id": 11,
"text": "M_f(X)"
},
{
"math_id": 12,
"text": "E_f(X)"
},
{
"math_id": 13,
"text": "(X, T, \\varphi),"
},
{
"math_id": 14,
"text": "T"
},
{
"math_id": 15,
"text": "\\varphi : T \\times X \\to X"
},
{
"math_id": 16,
"text": "\\varphi_t : X \\to X."
},
{
"math_id": 17,
"text": "\\mu\\left(\\varphi_{t}^{-1}(A)\\right) = \\mu(A) \\qquad \\text{ for all } t \\in T, A \\in \\Sigma."
},
{
"math_id": 18,
"text": "\\left(Z_t\\right)_{t \\geq 0}"
},
{
"math_id": 19,
"text": "Z_0"
},
{
"math_id": 20,
"text": "\\mu,"
},
{
"math_id": 21,
"text": "Z_t"
},
{
"math_id": 22,
"text": "t."
},
{
"math_id": 23,
"text": "1,"
},
{
"math_id": 24,
"text": "\\R"
},
{
"math_id": 25,
"text": "a \\in \\R"
},
{
"math_id": 26,
"text": "T_a : \\R \\to \\R"
},
{
"math_id": 27,
"text": "T_a(x) = x + a."
},
{
"math_id": 28,
"text": "\\lambda"
},
{
"math_id": 29,
"text": "T_a."
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "\\R^n"
},
{
"math_id": 32,
"text": "\\lambda^n"
},
{
"math_id": 33,
"text": "T : \\R^n \\to \\R^n"
},
{
"math_id": 34,
"text": "T(x) = A x + b"
},
{
"math_id": 35,
"text": "n \\times n"
},
{
"math_id": 36,
"text": "A \\in O(n)"
},
{
"math_id": 37,
"text": "b \\in \\R^n."
},
{
"math_id": 38,
"text": "\\mathbf{S} = \\{A,B\\}"
},
{
"math_id": 39,
"text": "T = \\operatorname{Id}"
},
{
"math_id": 40,
"text": "\\mu : \\mathbf{S} \\to \\R"
},
{
"math_id": 41,
"text": "\\mathbf{S}"
},
{
"math_id": 42,
"text": "\\{A\\}"
},
{
"math_id": 43,
"text": "\\{B\\}."
},
{
"math_id": 44,
"text": "\\operatorname{SL}(2, \\R)"
},
{
"math_id": 45,
"text": "2 \\times 2"
},
{
"math_id": 46,
"text": "1."
}
]
| https://en.wikipedia.org/wiki?curid=7635201 |
7635266 | Krylov–Bogolyubov theorem | In mathematics, the Krylov–Bogolyubov theorem (also known as the existence of invariant measures theorem) may refer to either of the two related fundamental theorems within the theory of dynamical systems. The theorems guarantee the existence of invariant measures for certain "nice" maps defined on "nice" spaces and were named after Russian-Ukrainian mathematicians and theoretical physicists Nikolay Krylov and Nikolay Bogolyubov who proved the theorems.
Formulation of the theorems.
Invariant measures for a single map.
Theorem (Krylov–Bogolyubov). Let ("X", "T") be a compact, metrizable topological space and "F" : "X" → "X" a continuous map. Then "F" admits an invariant Borel probability measure.
That is, if Borel("X") denotes the Borel σ-algebra generated by the collection "T" of open subsets of "X", then there exists a probability measure "μ" : Borel("X") → [0, 1] such that for any subset "A" ∈ Borel("X"),
formula_0
In terms of the push forward, this states that
formula_1
Invariant measures for a Markov process.
Let "X" be a Polish space and let formula_2 be the transition probabilities for a time-homogeneous Markov semigroup on "X", i.e.
formula_3
Theorem (Krylov–Bogolyubov). If there exists a point formula_4 for which the family of probability measures { "P""t"("x", ·) | "t" > 0 } is uniformly tight and the semigroup ("P""t") satisfies the Feller property, then there exists at least one invariant measure for ("P""t"), i.e. a probability measure "μ" on "X" such that
formula_5
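The standard proof averages the empirical measures of an orbit and takes a weak-* limit point. The following NumPy sketch is only a numerical illustration of that idea (the map, the starting point and the bin count are arbitrary choices, and floating-point iteration is not a proof of convergence):

import numpy as np

F = lambda x: 4.0 * x * (1.0 - x)          # a continuous map of the compact space [0, 1] to itself

x, orbit = 0.3, []
for _ in range(100_000):                   # long orbit of a single starting point
    orbit.append(x)
    x = F(x)

# The histogram of the orbit approximates the density of an invariant measure for F.
hist, edges = np.histogram(orbit, bins=50, range=(0.0, 1.0), density=True)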
Notes.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from Krylov-Bogolubov theorem on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\\mu \\left( F^{-1} (A) \\right) = \\mu (A)."
},
{
"math_id": 1,
"text": "F_{*} (\\mu) = \\mu."
},
{
"math_id": 2,
"text": "P_t, t\\ge 0,"
},
{
"math_id": 3,
"text": "\\Pr [ X_{t} \\in A | X_{0} = x ] = P_{t} (x, A)."
},
{
"math_id": 4,
"text": "x\\in X"
},
{
"math_id": 5,
"text": "(P_{t})_{\\ast} (\\mu) = \\mu \\mbox{ for all } t > 0."
}
]
| https://en.wikipedia.org/wiki?curid=7635266 |
763555 | Cogeneration | Simultaneous generation of electricity and useful heat
Cogeneration or combined heat and power (CHP) is the use of a heat engine or power station to generate electricity and useful heat at the same time.
Cogeneration is a more efficient use of fuel or heat, because otherwise-wasted heat from electricity generation is put to some productive use. Combined heat and power (CHP) plants recover otherwise wasted thermal energy for heating. This is also called combined heat and power district heating. Small CHP plants are an example of decentralized energy. By-product heat at moderate temperatures can also be used in absorption refrigerators for cooling.
The supply of high-temperature heat first drives a gas or steam turbine-powered generator. The resulting low-temperature waste heat is then used for water or space heating. At smaller scales (typically below 1 MW), a gas engine or diesel engine may be used. Cogeneration is also common with geothermal power plants as they often produce relatively low grade heat. Binary cycles may be necessary to reach acceptable thermal efficiency for electricity generation at all. Cogeneration is less commonly employed in nuclear power plants as NIMBY and safety considerations have often kept them further from population centers than comparable chemical power plants and district heating is less efficient in lower population density areas due to transmission losses.
Cogeneration was practiced in some of the earliest installations of electrical generation. Before central stations distributed power, industries generating their own power used exhaust steam for process heating. Large office and apartment buildings, hotels, and stores commonly generated their own power and used waste steam for building heat. Due to the high cost of early purchased power, these CHP operations continued for many years after utility electricity became available.
Overview.
Many process industries, such as chemical plants, oil refineries and pulp and paper mills, require large amounts of process heat for such operations as chemical reactors, distillation columns, steam driers and other uses. This heat, which is usually used in the form of steam, can be generated at the typically low pressures used in heating, or can be generated at much higher pressure and passed through a turbine first to generate electricity. In the turbine the steam pressure and temperature is lowered as the internal energy of the steam is converted to work. The lower-pressure steam leaving the turbine can then be used for process heat.
Steam turbines at thermal power stations are normally designed to be fed high-pressure steam, which exits the turbine at a condenser operating a few degrees above ambient temperature and at a few millimeters of mercury absolute pressure. (This is called a "condensing" turbine.) For all practical purposes this steam has negligible useful energy before it is condensed. Steam turbines for cogeneration are designed for "extraction" of some steam at lower pressures after it has passed through a number of turbine stages, with the un-extracted steam going on through the turbine to a condenser. In this case, the extracted steam causes a mechanical power loss in the downstream stages of the turbine. Or they are designed, with or without extraction, for final exhaust at "back pressure" (non-condensing). The extracted or exhaust steam is used for process heating. Steam at ordinary process heating conditions still has a considerable amount of enthalpy that could be used for power generation, so cogeneration has an opportunity cost.
A typical power generation turbine in a paper mill may extract steam at one or more intermediate pressures, with a back pressure chosen to suit the process; in practice these pressures are custom designed for each facility. Conversely, simply generating process steam for industrial purposes, instead of at a pressure high enough to generate power at the top end, also has an opportunity cost (see Steam supply and exhaust conditions). The capital and operating cost of high-pressure boilers, turbines, and generators is substantial. This equipment is normally operated continuously, which usually limits self-generated power to large-scale operations.
A combined cycle (in which several thermodynamic cycles produce electricity), may also be used to extract heat using a heating system as condenser of the power plant's bottoming cycle. For example, the RU-25 MHD generator in Moscow heated a boiler for a conventional steam powerplant, whose condensate was then used for space heat. A more modern system might use a gas turbine powered by natural gas, whose exhaust powers a steam plant, whose condensate provides heat. Cogeneration plants based on a combined cycle power unit can have thermal efficiencies above 80%.
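As a rough illustration of how such an overall figure is arrived at (all numbers below are assumed for illustration, not taken from any particular plant), useful heat is added to the electrical output before dividing by the fuel input:

fuel_input_mw = 100.0                      # rate of fuel energy supplied to the plant
electric_output_mw = 40.0                  # electrical output
useful_heat_mw = 45.0                      # recovered heat delivered to users

electrical_efficiency = electric_output_mw / fuel_input_mw                    # 0.40
overall_efficiency = (electric_output_mw + useful_heat_mw) / fuel_input_mw    # 0.85, i.e. above 80%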
The viability of CHP (sometimes termed utilisation factor), especially in smaller CHP installations, depends on a good baseload of operation, both in terms of an on-site (or near site) electrical demand and heat demand. In practice, an exact match between the heat and electricity needs rarely exists. A CHP plant can either meet the need for heat ("heat driven operation") or be run as a power plant with some use of its waste heat, the latter being less advantageous in terms of its utilisation factor and thus its overall efficiency. The viability can be greatly increased where opportunities for trigeneration exist. In such cases, the heat from the CHP plant is also used as a primary energy source to deliver cooling by means of an absorption chiller.
CHP is most efficient when heat can be used on-site or very close to it. Overall efficiency is reduced when the heat must be transported over longer distances. This requires heavily insulated pipes, which are expensive and inefficient; whereas electricity can be transmitted along a comparatively simple wire, and over much longer distances for the same energy loss.
A car engine becomes a CHP plant in winter when the reject heat is useful for warming the interior of the vehicle. The example illustrates the point that deployment of CHP depends on heat uses in the vicinity of the heat engine.
Thermally enhanced oil recovery (TEOR) plants often produce a substantial amount of excess electricity. After generating electricity, these plants pump leftover steam into heavy oil wells so that the oil will flow more easily, increasing production.
Cogeneration plants are commonly found in district heating systems of cities, central heating systems of larger buildings (e.g. hospitals, hotels, prisons) and are commonly used in the industry in thermal production processes for process water, cooling, steam production or CO2 fertilization.
"Trigeneration" or "combined cooling, heat and power" ("CCHP") refers to the simultaneous generation of electricity and useful heating and cooling from the combustion of a fuel or a solar heat collector. The terms "cogeneration" and "trigeneration" can also be applied to the power systems simultaneously generating electricity, heat, and industrial chemicals (e.g., syngas). Trigeneration differs from cogeneration in that the waste heat is used for both heating and cooling, typically in an absorption refrigerator. Combined cooling, heat, and power systems can attain higher overall efficiencies than cogeneration or traditional power plants. In the United States, the application of trigeneration in buildings is called building cooling, heating, and power. Heating and cooling output may operate concurrently or alternately depending on need and system construction.
Types of plants.
Topping cycle plants primarily produce electricity from a steam turbine. The partly expanded steam is then condensed in a heating condenser at a temperature level that is suitable for, for example, district heating or water desalination.
Bottoming cycle plants produce high temperature heat for industrial processes, then a waste heat recovery boiler feeds an electrical plant. Bottoming cycle plants are only used in industrial processes that require very high temperatures such as furnaces for glass and metal manufacturing, so they are less common.
Large cogeneration systems provide heating water and power for an industrial site or an entire town. Common CHP plant types are:
Smaller cogeneration units may use a reciprocating engine or Stirling engine. The heat is removed from the exhaust and radiator. The systems are popular in small sizes because small gas and diesel engines are less expensive than small gas- or oil-fired steam-electric plants.
Some cogeneration plants are fired by biomass, or industrial and municipal solid waste (see incineration). Some CHP plants use waste gas as the fuel for electricity and heat generation. Waste gases can be gas from animal waste, landfill gas, gas from coal mines, sewage gas, and combustible industrial waste gas.
Some cogeneration plants combine gas and solar photovoltaic generation to further improve technical and environmental performance. Such hybrid systems can be scaled down to the building level and even individual homes.
MicroCHP.
Micro combined heat and power or "micro cogeneration" is a so-called distributed energy resource (DER). The installation is usually less than 5 kWe in a house or small business. Instead of burning fuel to merely heat space or water, some of the energy is converted to electricity in addition to heat. This electricity can be used within the home or business or, if permitted by the grid management, sold back into the electric power grid.
Delta-ee consultants stated in 2013 that, with 64% of global sales, fuel cell micro-combined heat and power passed conventional systems in sales in 2012. 20,000 units were sold in Japan in 2012 overall within the Ene Farm project, with a lifetime of around 60,000 hours. For PEM fuel cell units, which shut down at night, this equates to an estimated lifetime of between ten and fifteen years, for a price of $22,600 before installation. For 2013 a state subsidy for 50,000 units was in place.
MicroCHP installations use five different technologies: microturbines, internal combustion engines, Stirling engines, closed-cycle steam engines, and fuel cells. One author indicated in 2008 that MicroCHP based on Stirling engines is the most cost-effective of the so-called microgeneration technologies in abating carbon emissions. A 2013 UK report from Ecuity Consulting stated that MCHP is the most cost-effective method of using gas to generate energy at the domestic level. However, advances in reciprocating engine technology are adding efficiency to CHP plants, particularly in the biogas field. As both MiniCHP and CHP have been shown to reduce emissions, they could play a large role in the field of CO2 reduction from buildings, where more than 14% of emissions can be saved using CHP. The University of Cambridge reported a cost-effective steam engine MicroCHP prototype in 2017 which has the potential to be commercially competitive in the following decades. Fuel cell micro-CHP plants can now be found in some private homes; these can operate on hydrogen or on other fuels such as natural gas or LPG. When running on natural gas, such a plant relies on steam reforming of natural gas to convert the natural gas to hydrogen prior to use in the fuel cell, and hence still emits CO2 (see reaction); but running on natural gas can be a good interim solution until hydrogen begins to be distributed through the (natural gas) piping system.
Another MicroCHP example is a natural gas or propane fueled Electricity Producing Condensing Furnace. It applies the fuel-saving technique of cogeneration, producing electric power and useful heat from a single source of combustion. The condensing furnace is a forced-air gas system with a secondary heat exchanger that allows heat to be extracted from the combustion products down to the ambient temperature, along with recovering heat from the water vapor. The chimney is replaced by a water drain and a vent to the side of the building.
Trigeneration.
A plant producing electricity, heat and cold is called a trigeneration or polygeneration plant. Cogeneration systems linked to absorption chillers or adsorption chillers use waste heat for refrigeration.
Combined heat and power district heating.
In the United States, Consolidated Edison distributes 66 billion kilograms of steam each year through its seven cogeneration plants to 100,000 buildings in Manhattan—the biggest steam district in the United States. The peak delivery is 10 million pounds per hour (or approximately 2.5 GW).
Industrial CHP.
Cogeneration is still common in pulp and paper mills, refineries and chemical plants. In this "industrial cogeneration/CHP", the heat is typically recovered at higher temperatures (above 100 °C) and used for process steam or drying duties. This is more valuable and flexible than low-grade waste heat, but there is a slight loss of power generation. The increased focus on sustainability has made industrial CHP more attractive, as it substantially reduces carbon footprint compared to generating steam or burning fuel on-site and importing electric power from the grid.
Smaller industrial co-generation units have an output capacity of 5–25 MW and represent a viable off-grid option for a variety of remote applications to reduce carbon emissions.
Utility pressures versus self generated industrial.
Industrial cogeneration plants normally operate at much lower boiler pressures than utilities. Among the reasons are:
Heat recovery steam generators.
A heat recovery steam generator (HRSG) is a steam boiler that uses hot exhaust gases from the gas turbines or reciprocating engines in a CHP plant to heat up water and generate steam. The steam, in turn, drives a steam turbine or is used in industrial processes that require heat.
HRSGs used in the CHP industry are distinguished from conventional steam generators by the following main features:
Cogeneration using biomass.
Biomass refers to any plant or animal matter that can be reused as a source of heat or electricity, such as sugarcane, vegetable oils, wood, organic waste and residues from the food or agricultural industries. Brazil is now considered a world reference in terms of energy generation from biomass.
A growing sector in the use of biomass for power generation is the sugar and alcohol sector, which mainly uses sugarcane bagasse as fuel for thermal and electric power generation.
Power cogeneration in the sugar and alcohol sector.
In the sugarcane industry, cogeneration is fueled by the bagasse residue of sugar refining, which is burned to produce steam. Some steam can be sent through a turbine that turns a generator, producing electric power.
Energy cogeneration in sugarcane industries located in Brazil is a practice that has been growing in recent years. With the adoption of energy cogeneration in the sugar and alcohol sector, the sugarcane industries are able to supply the electric energy demand needed to operate, and generate a surplus that can be commercialized.
Advantages of the cogeneration using sugarcane bagasse.
In comparison with the electric power generation by means of fossil fuel-based thermoelectric plants, such as natural gas, the energy generation using sugarcane bagasse has environmental advantages due to the reduction of CO2 emissions.
In addition to the environmental advantages, cogeneration using sugarcane bagasse presents advantages in terms of efficiency compared to thermoelectric generation, through the final destination of the energy produced. While in thermoelectric generation part of the heat produced is lost, in cogeneration this heat can be used in the production processes, increasing the overall efficiency of the process.
Disadvantages of the cogeneration using sugarcane bagasse.
Sugarcane cultivation usually uses potassium sources containing high concentrations of chlorine, such as potassium chloride (KCl). Since KCl is applied in huge quantities, sugarcane ends up absorbing high concentrations of chlorine.
Due to this absorption, when the sugarcane bagasse is burned for power cogeneration, dioxins and methyl chloride end up being emitted. Dioxins are considered very toxic and carcinogenic.
Methyl chloride, once emitted and having reached the stratosphere, is very harmful to the ozone layer, since chlorine combined with the ozone molecule generates a catalytic reaction leading to the breakdown of ozone bonds.
After each reaction, chlorine starts a destructive cycle with another ozone molecule. In this way, a single chlorine atom can destroy thousands of ozone molecules. As these molecules are broken, they are unable to absorb ultraviolet rays. As a result, UV radiation is more intense on Earth and there is a worsening of global warming.
Comparison with a heat pump.
A heat pump may be compared with a CHP unit as follows. If, to supply thermal energy, the exhaust steam from the turbo-generator must be taken at a higher temperature than that at which the system would produce the most electricity, the lost electrical generation is "as if" a heat pump were used to provide the same heat by taking electrical power from the generator running at lower output temperature and higher efficiency. Typically, for every unit of electrical power lost, about 6 units of heat are made available at about . Thus CHP has an effective Coefficient of Performance (COP), compared to a heat pump, of 6. However, for a remotely operated heat pump, losses in the electrical distribution network would need to be considered, of the order of 6%. Because the losses are proportional to the square of the current, losses during peak periods are much higher than this, and it is likely that widespread (i.e. citywide) application of heat pumps would cause overloading of the distribution and transmission grids unless they were substantially reinforced.
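As a rough numerical sketch of this comparison (the COP and loss figures are the illustrative values quoted above; the heat-pump COP itself is an additional assumption, not from the text):

```python
# Illustrative comparison of CHP versus a remotely driven heat pump.
chp_heat_per_unit_elec = 6.0   # heat gained per unit of electricity forgone (effective COP ~ 6)
grid_loss_fraction = 0.06      # average electrical distribution losses (~6%)
heat_pump_cop = 4.0            # assumed COP of the remote heat pump (not from the text)

# One unit of electricity sent through the grid arrives attenuated,
# and is then multiplied by the heat pump's COP at the far end:
heat_via_heat_pump = (1.0 - grid_loss_fraction) * heat_pump_cop
print(chp_heat_per_unit_elec, heat_via_heat_pump)   # 6.0 versus about 3.76
```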
It is also possible to run a heat driven operation combined with a heat pump, where the excess electricity (as heat demand is the defining factor) is used to drive a heat pump. As heat demand increases, more electricity is generated to drive the heat pump, with the waste heat also heating the heating fluid.
As the efficiency of heat pumps depends on the difference between hot end and cold end temperature (efficiency rises as the difference decreases) it may be worthwhile to combine even relatively low grade waste heat otherwise unsuitable for home heating with heat pumps. For example, a large enough reservoir of cooling water at can significantly improve efficiency of heat pumps drawing from such a reservoir compared to air source heat pumps drawing from cold air during a night. In the summer when there's both demand for air conditioning and warm water, the same water may even serve as both a "dump" for the waste heat rejected by a/c units and as a "source" for heat pumps providing warm water. Those considerations are behind what is sometimes called "cold district heating" using a "heat" source whose temperature is well below those usually employed in district heating.
Distributed generation.
Most industrial countries generate the majority of their electrical power needs in large centralized facilities with capacity for large electrical power output. These plants benefit from economies of scale, but may need to transmit electricity across long distances, causing transmission losses. Cogeneration or trigeneration production is subject to limitations in the local demand and thus may sometimes need to reduce production (e.g., of heat or cooling) to match the demand. An example of cogeneration with trigeneration applications in a major city is the New York City steam system.
Thermal efficiency.
Every heat engine is subject to the theoretical efficiency limits of the Carnot cycle or subset Rankine cycle in the case of steam turbine power plants or Brayton cycle in gas turbine with steam turbine plants. Most of the efficiency loss with steam power generation is associated with the latent heat of vaporization of steam that is not recovered when a turbine exhausts its low temperature and pressure steam to a condenser. (Typical steam to condenser would be at a few millimeters absolute pressure and on the order of hotter than the cooling water temperature, depending on the condenser capacity.) In cogeneration this steam exits the turbine at a higher temperature where it may be used for process heat, building heat or cooling with an absorption chiller. The majority of this heat is from the latent heat of vaporization when the steam condenses.
Thermal efficiency in a cogeneration system is defined as:
formula_0
Where:
Heat output may also be used for cooling (for example, in summer), thanks to an absorption chiller.
If cooling is achieved at the same time, thermal efficiency in a trigeneration system is defined as:
formula_4
Where:
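As a numerical illustration of the two definitions (the energy flows below are assumed figures, not taken from the text):

```python
# Assumed energy flows over one hour of operation, in MWh:
heat_input = 100.0    # total heat input
electricity = 35.0    # electrical power output
useful_heat = 45.0    # recovered heat output
cooling = 10.0        # cooling output delivered via an absorption chiller

eta_cogeneration = (electricity + useful_heat) / heat_input
eta_trigeneration = (electricity + useful_heat + cooling) / heat_input
print(f"cogeneration thermal efficiency:  {eta_cogeneration:.0%}")   # 80%
print(f"trigeneration thermal efficiency: {eta_trigeneration:.0%}")  # 90%
```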
Typical cogeneration models have losses as in any system. The energy distribution below is represented as a percent of total input energy:
Conventional central coal- or nuclear-powered power stations convert about 33–45% of their input heat to electricity. Brayton cycle power plants operate at up to 60% efficiency. In the case of conventional power plants, approximately 10-15% of this heat is lost up the stack of the boiler. Most of the remaining heat emerges from the turbines as low-grade waste heat with no significant local uses, so it is usually rejected to the environment, typically to cooling water passing through a condenser. Because turbine exhaust is normally just above ambient temperature, some potential power generation is sacrificed in rejecting higher-temperature steam from the turbine for cogeneration purposes.
For cogeneration to be practical, power generation and end use of heat must be in relatively close proximity (<2 km typically).
Even though the efficiency of a small distributed electrical generator may be lower than a large central power plant, the use of its waste heat for local heating and cooling can result in an overall use of the primary fuel supply as great as 80%. This provides substantial financial and environmental benefits.
Costs.
Typically, for a gas-fired plant the fully installed cost per kW electrical is around £400/kW (US$577), which is comparable with large central power stations.
History.
Cogeneration in Europe.
The EU has actively incorporated cogeneration into its energy policy via the CHP Directive. In September 2008 at a hearing of the European Parliament's Urban Lodgment Intergroup, Energy Commissioner Andris Piebalgs is quoted as saying, “security of supply really starts with energy efficiency.” Energy efficiency and cogeneration are recognized in the opening paragraphs of the European Union's Cogeneration Directive 2004/08/EC. This directive intends to support cogeneration and establish a method for calculating cogeneration abilities per country. The development of cogeneration has been very uneven over the years and has been dominated throughout the last decades by national circumstances.
The European Union generates 11% of its electricity using cogeneration. However, there are large differences between Member States, with energy savings varying between 2% and 60%. Europe has the three countries with the world's most intensive cogeneration economies: Denmark, the Netherlands and Finland. Of the 28.46 TWh of electrical power generated by conventional thermal power plants in Finland in 2012, 81.80% was cogeneration.
Other European countries are also making great efforts to increase efficiency. Germany reported that at present, over 50% of the country's total electricity demand could be provided through cogeneration. So far, Germany has set the target to double its electricity cogeneration from 12.5% of the country's electricity to 25% of the country's electricity by 2020 and has passed supporting legislation accordingly. The UK is also actively supporting combined heat and power. In light of UK's goal to achieve a 60% reduction in carbon dioxide emissions by 2050, the government has set the target to source at least 15% of its government electricity use from CHP by 2010. Other UK measures to encourage CHP growth are financial incentives, grant support, a greater regulatory framework, and government leadership and partnership.
According to the IEA 2008 modeling of cogeneration expansion for the G8 countries, the expansion of cogeneration in France, Germany, Italy and the UK alone would effectively double the existing primary fuel savings by 2030. This would increase Europe's savings from today's 155.69 TWh to 465 TWh in 2030. It would also result in a 16% to 29% increase in each country's total cogenerated electricity by 2030.
Governments are being assisted in their CHP endeavors by organizations like COGEN Europe who serve as an information hub for the most recent updates within Europe's energy policy. COGEN is Europe's umbrella organization representing the interests of the cogeneration industry.
The European public–private partnership Fuel Cells and Hydrogen Joint Undertaking Seventh Framework Programme project ene.field deploys up to 1,000 residential fuel cell Combined Heat and Power (micro-CHP) installations in 12 states by 2017. As of 2012, the first 2 installations had taken place.
Cogeneration in the United Kingdom.
In the United Kingdom, the "Combined Heat and Power Quality Assurance" scheme regulates the combined production of heat and power. It was introduced in 1996. It defines, through calculation of inputs and outputs, "Good Quality CHP" in terms of the achievement of primary energy savings against conventional separate generation of heat and electricity. Compliance with Combined Heat and Power Quality Assurance is required for cogeneration installations to be eligible for government subsidies and tax incentives.
Cogeneration in the United States.
Perhaps the first modern use of energy recycling was done by Thomas Edison. His 1882 Pearl Street Station, the world's first commercial power plant, was a combined heat and power plant, producing both electricity and thermal energy while using waste heat to warm neighboring buildings. Recycling allowed Edison's plant to achieve approximately 50 percent efficiency.
By the early 1900s, regulations emerged to promote rural electrification through the construction of centralized plants managed by regional utilities. These regulations not only promoted electrification throughout the countryside, but they also discouraged decentralized power generation, such as cogeneration.
By 1978, Congress recognized that efficiency at central power plants had stagnated and sought to encourage improved efficiency with the Public Utility Regulatory Policies Act (PURPA), which encouraged utilities to buy power from other energy producers.
Cogeneration plants proliferated, soon producing about 8% of all energy in the United States. However, the bill left implementation and enforcement up to individual states, resulting in little or nothing being done in many parts of the country.
The United States Department of Energy has an aggressive goal of having CHP constitute 20% of generation capacity by 2030. Eight Clean Energy Application Centers have been established across the nation. Their mission is to develop the required technology application knowledge and educational infrastructure necessary to lead "clean energy" (combined heat and power, waste heat recovery, and district energy) technologies as viable energy options and reduce any perceived risks associated with their implementation. The focus of the Application Centers is to provide an outreach and technology deployment program for end users, policymakers, utilities, and industry stakeholders.
High electric rates in New England and the Middle Atlantic make these areas of the United States the most beneficial for cogeneration.
Applications in power generation systems.
Fossil.
Any of the following conventional power plants may be converted to a combined cooling, heat and power system:
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta_{th} \\equiv \\frac{W_{out}}{Q_{in}} \\equiv \\frac{\\text{Electrical power output + Heat output}}{\\text{Total heat input}}"
},
{
"math_id": 1,
"text": "\\eta_{th}"
},
{
"math_id": 2,
"text": "W_{out}"
},
{
"math_id": 3,
"text": "Q_{in}"
},
{
"math_id": 4,
"text": "\\eta_{th} \\equiv \\frac{W_{out}}{Q_{in}} \\equiv \\frac{\\text{Electrical power output + Heat output + Cooling output}}{\\text{Total heat input}}"
}
]
| https://en.wikipedia.org/wiki?curid=763555 |
7635648 | Dickman function | Mathematical function
In analytic number theory, the Dickman function or Dickman–de Bruijn function "ρ" is a special function used to estimate the proportion of smooth numbers up to a given bound.
It was first studied by actuary Karl Dickman, who defined it in his only mathematical publication, which is not easily available, and later studied by the Dutch mathematician Nicolaas Govert de Bruijn.
Definition.
The Dickman–de Bruijn function formula_0 is a continuous function that satisfies the delay differential equation
formula_1
with initial conditions formula_2 for 0 ≤ "u" ≤ 1.
Properties.
Dickman proved that, when formula_3 is fixed, we have
formula_4
where formula_5 is the number of "y"-smooth (or "y"-friable) integers below "x".
Ramaswami later gave a rigorous proof that for fixed "a", formula_6 was asymptotic to formula_7, with the error bound
formula_8
in big O notation.
Applications.
The main purpose of the Dickman–de Bruijn function is to estimate the frequency of smooth numbers at a given size. This can be used to optimize various number-theoretical algorithms such as P–1 factoring and can be useful in its own right.
It can be shown that
formula_9
which is related to the estimate formula_10 below.
The Golomb–Dickman constant has an alternate definition in terms of the Dickman–de Bruijn function.
Estimation.
A first approximation might be formula_11 A better estimate is
formula_12
where Ei is the exponential integral and "ξ" is the positive root of
formula_13
A simple upper bound is formula_14
Computation.
For each interval ["n" − 1, "n"] with "n" an integer, there is an analytic function formula_15 such that formula_16. For 0 ≤ "u" ≤ 1, formula_2. For 1 ≤ "u" ≤ 2, formula_17. For 2 ≤ "u" ≤ 3,
formula_18
with Li2 the dilogarithm. Other formula_15 can be calculated using infinite series.
An alternate method is computing lower and upper bounds with the trapezoidal rule; a mesh of progressively finer sizes allows for arbitrary accuracy. For high precision calculations (hundreds of digits), a recursive series expansion about the midpoints of the intervals is superior.
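The computation can also be illustrated with a short numerical sketch (an illustration, not taken from the cited literature): it integrates the delay differential equation on a uniform grid with the trapezoidal rule and compares the result with the exact value 1 − log 2 at "u" = 2 and with the rough estimate formula_10.

```python
import numpy as np

def dickman_rho(u_max=10.0, h=1e-4):
    """Integrate rho'(u) = -rho(u-1)/u by the trapezoidal rule,
    starting from rho(u) = 1 on [0, 1]."""
    n = int(round(u_max / h))
    u = np.linspace(0.0, u_max, n + 1)
    rho = np.ones(n + 1)
    one = int(round(1.0 / h))        # grid offset corresponding to u - 1
    for i in range(one, n):
        f_lo = -rho[i - one] / u[i]
        f_hi = -rho[i + 1 - one] / u[i + 1]
        rho[i + 1] = rho[i] + 0.5 * h * (f_lo + f_hi)
    return u, rho

u, rho = dickman_rho()
at = lambda t: rho[int(round(t / (u[1] - u[0])))]
print(at(2.0), 1 - np.log(2))     # both are ~0.306853
print(at(3.0), 3.0 ** -3.0)       # ~0.0486 versus the crude estimate 0.037
print(at(10.0))                   # ~2.77e-11
```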
Extension.
Friedlander defines a two-dimensional analog formula_19 of formula_0. This function is used to estimate a function formula_20 similar to de Bruijn's, but counting the number of "y"-smooth integers with at most one prime factor greater than "z". Then
formula_21 | [
{
"math_id": 0,
"text": "\\rho(u)"
},
{
"math_id": 1,
"text": "u\\rho'(u) + \\rho(u-1) = 0\\,"
},
{
"math_id": 2,
"text": "\\rho(u) = 1"
},
{
"math_id": 3,
"text": " a "
},
{
"math_id": 4,
"text": "\\Psi(x, x^{1/a})\\sim x\\rho(a)\\,"
},
{
"math_id": 5,
"text": "\\Psi(x,y)"
},
{
"math_id": 6,
"text": "\\Psi(x,x^{1/a})"
},
{
"math_id": 7,
"text": "x \\rho(a)"
},
{
"math_id": 8,
"text": "\\Psi(x,x^{1/a})=x\\rho(a)+O(x/\\log x)"
},
{
"math_id": 9,
"text": "\\Psi(x,y)=xu^{O(-u)}"
},
{
"math_id": 10,
"text": "\\rho(u)\\approx u^{-u}"
},
{
"math_id": 11,
"text": "\\rho(u)\\approx u^{-u}.\\,"
},
{
"math_id": 12,
"text": "\\rho(u)\\sim \\frac 1 {\\xi\\sqrt{2\\pi u}} \\cdot \\exp(-u\\xi+\\operatorname{Ei}(\\xi)) "
},
{
"math_id": 13,
"text": "e^\\xi-1=u\\xi.\\,"
},
{
"math_id": 14,
"text": "\\rho(x)\\le1/x!."
},
{
"math_id": 15,
"text": "\\rho_n"
},
{
"math_id": 16,
"text": "\\rho_n(u)=\\rho(u)"
},
{
"math_id": 17,
"text": "\\rho(u) = 1-\\log u"
},
{
"math_id": 18,
"text": "\\rho(u) = 1-(1-\\log(u-1))\\log(u) + \\operatorname{Li}_2(1 - u) + \\frac{\\pi^2}{12}. "
},
{
"math_id": 19,
"text": "\\sigma(u,v)"
},
{
"math_id": 20,
"text": "\\Psi(x,y,z)"
},
{
"math_id": 21,
"text": "\\Psi(x,x^{1/a},x^{1/b})\\sim x\\sigma(b,a).\\,"
},
{
"math_id": 22,
"text": "e^{-\\gamma}"
}
]
| https://en.wikipedia.org/wiki?curid=7635648 |
7635675 | Bubble raft | Array of bubbles
A bubble raft is an array of bubbles. It demonstrates materials' microstructural and atomic length-scale behavior by modelling the {111} plane of a close-packed crystal. A material's observable and measurable mechanical properties strongly depend on its atomic and microstructural configuration and characteristics. This fact is intentionally ignored in continuum mechanics, which assumes a material to have no underlying microstructure and be uniform and semi-infinite throughout.
Bubble rafts assemble bubbles on a water surface, often with the help of amphiphilic soaps. These assembled bubbles act like atoms, diffusing, slipping, ripening, straining, and otherwise deforming in a way that models the behavior of the {111} plane of a close-packed crystal. The ideal (lowest energy) state of the assembly would undoubtedly be a perfectly regular single crystal, but just as in metals, the bubbles often form defects, grain boundaries, and multiple crystals.
History of bubble rafts.
The concept of bubble raft modelling was first presented in 1947 by Nobel Laureate Sir William Lawrence Bragg and John Nye of Cambridge University's Cavendish Laboratory in Proceedings of the Royal Society A. Legend claims that Bragg conceived of bubble raft models while pouring oil into his lawn mower. He noticed that bubbles on the surface of the oil assembled into rafts resembling the {111} plane of close-packed crystals. Nye and Bragg later presented a method of generating and controlling bubbles on the surface of a glycerine-water-oleic acid-triethanolamine solution, in assemblies of 100,000 or more sub-millimeter sized bubbles. In their paper, they go on at length about the microstructural phenomena observed in bubble rafts and hypothesized in metals.
Dynamics.
Bubble rafts exhibit complex dynamics, as illustrated in the video. This is triggered by rupture of a first bubble, driven by thermal fluctuations and a cascade of subsequent bursting bubbles, which can give rise to self-organized criticality, and a power-law distribution of avalanches.
Relation to crystal lattices.
In deforming a crystal lattice, one changes the energy and the interatomic potential felt by the atoms of the lattice. This interatomic potential is popularly (and mostly qualitatively) modeled using the Lennard-Jones potential, which consists of a balance between attractive and repulsive forces between atoms.
The "atoms" in Bubble Rafts also exhibit such attractive and repulsive forces:
formula_0
The portion of the equation to the left of the plus sign is the attractive force, and the portion to the right represents the repulsive force.
formula_1 is the interbubble potential
formula_2 is the average bubble radius
formula_3 is the density of the solution from which the bubbles are formed
formula_4 is the gravitational constant
formula_5 is the ratio of the distance between bubbles to the bubble radius
formula_6 is the radius of ring contact
formula_7 is the ratio R/a of the bubble radius to the Laplace constant a, where
formula_8
formula_9 is the surface tension
formula_10 is a constant dependent upon the boundary conditions of the calculation
formula_11 is a zeroth-order modified Bessel function of the second kind.
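The potential can be evaluated directly. In the sketch below, all parameter values (bubble radius, surface tension, the ring-contact ratio formula_6 and the constant formula_10) are assumed purely for illustration and are not taken from the text:

```python
import numpy as np
from scipy.special import k0

# Assumed, purely illustrative parameter values:
R = 1.0e-3        # average bubble radius [m]
T = 0.073         # surface tension of the solution [N/m]
rho_sol = 1000.0  # solution density [kg/m^3]
g = 9.81          # gravitational acceleration [m/s^2]
a = np.sqrt(T / (rho_sol * g))   # Laplace constant
alpha = R / a                    # ratio of bubble radius to Laplace constant
beta_contact = 0.2               # radius of ring contact (assumed)
A = 1.0                          # boundary-condition constant (assumed)

def interbubble_potential(rho):
    """U(rho): long-range attraction (Bessel K0 term) plus a
    short-range repulsion that switches on for rho <= 2."""
    prefactor = np.pi * R**4 * rho_sol * g
    attraction = -prefactor * (beta_contact / alpha)**2 * A * k0(alpha * rho)
    repulsion = np.where(rho <= 2.0, prefactor * (2.0 - rho)**2 / alpha**2, 0.0)
    return attraction + repulsion

for r in (1.5, 2.0, 3.0, 5.0):
    print(r, interbubble_potential(r))
```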
Bubble rafts can display numerous phenomena seen in the crystal lattice. This includes such things as point defects (vacancies, substitutional impurities, interstitial atoms), edge dislocations and grains. A screw dislocation can't be modeled in a 2D bubble raft because it extends outside the plane. It is even possible to replicate some microstructural traits such as annealing. The annealing process is simulated by stirring the bubble raft. This anneals out the dislocations (recovery) and promotes recrystallization.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U ( \\rho ) = - \\pi R^4 \\rho_{solution} g \t\\left ( \\frac{\\Beta}{\\alpha} \\right )^2 \\mathit{A} K_0 (\\alpha \\rho) +\n\n \\begin{cases} 0~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\rho \\ge \\ 2 \\\\ \\pi R^4 \\rho_{solution} g \\left ( \\frac{(2-\\rho)^2}{\\alpha^2} \\right ) ~~~ \\rho \\le\\ 2 \\end{cases}\n\n"
},
{
"math_id": 1,
"text": "U ( \\rho )"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\rho_{solution}"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "\\rho"
},
{
"math_id": 6,
"text": "\\Beta"
},
{
"math_id": 7,
"text": " \\alpha"
},
{
"math_id": 8,
"text": " a^2 = \\frac {T}{\\rho_{solution} g}"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "K_0"
}
]
| https://en.wikipedia.org/wiki?curid=7635675 |
76358565 | Arrow–Debreu exchange market | In theoretical economics, an Arrow–Debreu exchange market is a special case of the Arrow–Debreu model in which there is no production - there is only an exchange of already-existing goods. An Arrow–Debreu exchange market has the following ingredients:
Each product formula_4 has a price formula_5; the prices are determined by methods described below. The price of a "bundle" of products is the sum of the prices of the products in the bundle. A bundle is represented by a vector formula_6, where formula_7 is the quantity of product formula_4. So the price of a bundle formula_8 is formula_9.
Given a price-vector, the "budget" of an agent is the total price of his endowment, formula_10.
A bundle is "affordable" for a buyer if the price of that bundle is at most the buyer's budget. I.e, a bundle formula_8 is affordable for buyer formula_11 if formula_12.
Each buyer has a preference relation over bundles, which can be represented by a utility function. The utility function of buyer formula_11 is denoted by formula_13. The "demand set" of a buyer is the set of affordable bundles that maximize the buyer's utility among all affordable bundles, i.e.:
formula_14.
A competitive equilibrium (CE) is a price-vector formula_15 in which it is possible to allocate, to each agent, a bundle from his demand-set, such that the total allocation exactly equals the supply of products. The corresponding prices are called "market-clearing prices". A CE always exists, even in the more general Arrow–Debreu model. The main challenge is to find a CE.
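The definitions can be made concrete with a tiny two-agent, two-good market with linear utilities (an illustrative construction, not taken from the cited papers). Each linear-utility agent spends its whole budget on the good(s) with the best utility per unit of price, and a price vector is a CE exactly when the resulting excess demand vanishes:

```python
import numpy as np

# Illustrative market: agent 1 owns good 1 but prefers good 2, and vice versa.
endowments = np.array([[1.0, 0.0],
                       [0.0, 1.0]])
utilities = np.array([[1.0, 2.0],    # u1(x) = x1 + 2*x2
                      [2.0, 1.0]])   # u2(x) = 2*x1 + x2

def demand(prices, e, u):
    """Demand of a linear-utility agent: spend the entire budget on the
    good(s) with maximal utility per unit of price (ties split evenly)."""
    budget = prices @ e
    bang = u / prices
    best = np.isclose(bang, bang.max())
    x = np.zeros_like(prices)
    x[best] = (budget / best.sum()) / prices[best]
    return x

def excess_demand(prices):
    total = sum(demand(prices, e, u) for e, u in zip(endowments, utilities))
    return total - endowments.sum(axis=0)

for ratio in (0.5, 0.75, 1.0, 1.5, 2.0):       # scan the price ratio p1/p2
    p = np.array([ratio, 1.0])
    print(p, excess_demand(p))
# Excess demand vanishes at p = (1, 1): a competitive equilibrium in which
# the two agents simply swap their endowments.
```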
Computing an equilibrium.
Approximate CE.
Kakade, Kearns and Ortiz gave algorithms for approximate CE in a generalized Arrow-Debreu market in which agents are located on a graph and trade may occur only between neighboring agents. They considered non-linear utilities.
Exact CE.
Jain presented the first polynomial-time algorithm for computing an exact CE when all agents have linear utilities. His algorithm is based on solving a convex program using the ellipsoid method and simultaneous diophantine approximation. He also proved that the set of assignments at equilibrium is convex, and the equilibrium prices themselves are log-convex.
Based on Jain's algorithm, Ye developed a more practical interior-point method for finding a CE.
Devanur and Kannan gave algorithms for exchange markets with concave utility functions, where all resources are goods (the utilities are positive):
Codenotti, McCune, Penumatcha and Varadarajan gave an algorithm for Arrow-Debreu markets with CES utilities where the elasticity of substitution is at least 1/2.
Chaudhury, Garg, McGlaughlin and Mehta prove that, when the products are bads, computing an equilibrium is PPAD-hard even when utilities are linear, and even under a certain condition that guarantees CE existence.
CE for markets with production.
Newman and Primak studied two variants of the ellipsoid method for finding an approximate CE in an Arrow-Debreu market "with production", when all agents have linear utilities. They proved that the inscribed ellipsoid method is more computationally efficient than the circumscribed ellipsoid method.
Related models.
A Fisher market is a simpler market in which agents are only buyers - not sellers. Each agent comes with a pre-specified budget, and can use it to buy goods at the given price.
In a Fisher market, increasing prices always decreases the agents' demand, as they can buy less with their fixed budget. However, in an Arrow-Debreu exchange market, increasing prices also increases the agents' budgets, which means that the demand is not a monotone function of the prices. This makes computing a CE in an Arrow-Debreu exchange market much more challenging.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "i=1,\\dots,n"
},
{
"math_id": 3,
"text": "e_i"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "p_j"
},
{
"math_id": 6,
"text": "x = x_1,\\dots,x_m"
},
{
"math_id": 7,
"text": "x_j"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "p\\cdot x =\\sum_{j=1}^m p_j\\cdot x_j"
},
{
"math_id": 10,
"text": "p\\cdot e_i"
},
{
"math_id": 11,
"text": "i"
},
{
"math_id": 12,
"text": "p\\cdot x\\leq p\\cdot e_i"
},
{
"math_id": 13,
"text": "u_i"
},
{
"math_id": 14,
"text": "\\text{Demand}_i(p) := \\arg\\max_{p\\cdot x\\leq p\\cdot e_i} u_i(x)"
},
{
"math_id": 15,
"text": "p_1,\\dots,p_m"
}
]
| https://en.wikipedia.org/wiki?curid=76358565 |
76359420 | Kármán–Moore theory | Supersonic flow past a slender body
Kármán–Moore theory is a linearized theory for supersonic flows over a slender body, named after Theodore von Kármán and Norton B. Moore, who developed the theory in 1932. The theory, in particular, provides an explicit formula for the wave drag, which converts the kinetic energy of the moving body into outgoing sound waves behind the body.
Mathematical description.
Consider a slender body with pointed edges at the front and back. The supersonic flow past this body will be nearly parallel to the formula_0-axis everywhere since the shock waves formed (one at the leading edge and one at the trailing edge) will be weak; as a consequence, the flow will be potential everywhere, which can be described using the velocity potential formula_1, where formula_2 is the incoming uniform velocity and formula_3 characterises the small deviation from the uniform flow. In the linearized theory, formula_3 satisfies
formula_4
where formula_5, formula_6 is the sound speed in the incoming flow and formula_7 is the Mach number of the incoming flow. This is just the two-dimensional wave equation and formula_3 is a disturbance propagated with an apparent time formula_8 and with an apparent velocity formula_9.
Let the origin formula_10 be located at the leading end of the pointed body. Further, let formula_11 be the cross-sectional area (perpendicular to the formula_0-axis) and formula_12 be the length of the slender body, so that formula_13 for formula_14 and for formula_15. Of course, in supersonic flows, disturbances (i.e., formula_3) can be propagated only into the region behind the Mach cone. The weak Mach cone for the leading-edge is given by formula_16, whereas the weak Mach cone for the trailing edge is given by formula_17, where formula_18 is the squared radial distance from the formula_0-axis.
The disturbance far away from the body is just like a cylindrical wave propagation. In front of the cone formula_16, the solution is simply given by formula_19. Between the cones formula_20 and formula_17, the solution is given by
formula_21
whereas behind the cone formula_17, the solution is given by
formula_22
The solution described above is exact for all formula_23 when the slender body is a solid of revolution. If this is not the case, the solution is valid only at large distances and will have a correction associated with the non-linear distortion of the shock profile, whose strength is proportional to formula_24 and to a factor depending on the shape function formula_11.
The drag force formula_25 is just the formula_0-component of the momentum per unit time. To calculate this, consider a cylindrical surface with a large radius and with an axis along the formula_0-axis. The momentum flux density crossing through this surface is simply given by formula_26. Integrating formula_27 over the cylindrical surface gives the drag force. Due to symmetry, the first term in formula_27 upon integration gives zero since the net mass flux formula_28 is zero on the cylindrical surface considered. The second term gives the non-zero contribution,
formula_29
At large distances, the values formula_30 (the wave region) are the most important in the solution for formula_3; this is because, as mentioned earlier, formula_3 is like a disturbance propagating with a speed formula_9 and an apparent time formula_8. This means that we can approximate the expression in the denominator as formula_31 Then we can write, for example,
formula_32
From this expression, we can calculate formula_33, which is also equal to formula_34 since we are in the wave region. The factor formula_35 appearing in front of the integral need not be differentiated since this gives rise only to a small correction proportional to formula_36. Effecting the differentiation and returning to the original variables, we find
formula_37
Substituting this in the drag force formula gives us
formula_38
This can be simplified by carrying out the integration over formula_39. When the integration order is changed, the limit for formula_39 ranges from formula_40 to formula_41. Upon integration, we have
formula_42
The integral containing the term formula_43 is zero because formula_44 (of course, in addition to formula_45).
The final formula for the wave drag force may be written as
formula_46
or
formula_47
The drag coefficient is then given by
formula_48
Since formula_49, as follows from the formula derived above, we have formula_50, indicating that the drag coefficient is proportional to the square of the cross-sectional area and inversely proportional to the fourth power of the body length.
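The final drag formula can be evaluated numerically for an assumed shape. The sketch below is only an illustration: the body shape (cross-sectional area proportional to x^2 (l - x)^2, which vanishes together with its first derivative at both ends, as required), the dimensions and the flow values are all assumptions, not values from the text.

```python
import numpy as np
from scipy.integrate import dblquad

# Assumed slender body of revolution: S(x) = c * x**2 * (l - x)**2,
# so that S and S' vanish at x = 0 and x = l.
l, c = 1.0, 0.04          # body length [m] and shape constant [1/m^2]
rho1, v1 = 1.2, 500.0     # free-stream density [kg/m^3] and speed [m/s]

def S2(x):                # second derivative S''(x) of the assumed shape
    return c * (2.0 * l**2 - 12.0 * l * x + 12.0 * x**2)

# Double integral of S''(x1) * S''(x2) * ln|x1 - x2|; the logarithmic
# singularity on the diagonal is integrable (softened here by a tiny epsilon,
# so the adaptive quadrature may still warn about slow convergence).
integrand = lambda x1, x2: S2(x1) * S2(x2) * np.log(abs(x1 - x2) + 1e-12)
I, _ = dblquad(integrand, 0.0, l, 0.0, l)

F = -rho1 * v1**2 / (2.0 * np.pi) * I      # wave drag force
Cd = F / (0.5 * rho1 * v1**2 * l**2)       # drag coefficient
print(F, Cd)
```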
The shape with smallest wave drag for a given volume formula_51 and length formula_12 can be obtained from the wave drag force formula. This shape is known as the Sears–Haack body.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "\\varphi = xv_1 + \\phi"
},
{
"math_id": 2,
"text": "v_1"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\frac{\\partial^2\\phi}{\\partial y^2} + \\frac{\\partial^2\\phi}{\\partial z^2} - \\beta^2 \\frac{\\partial^2\\phi}{\\partial x^2} =0,"
},
{
"math_id": 5,
"text": "\\beta^2=(v_1^2-c_1^2)/c_1^2=M_1^2-1"
},
{
"math_id": 6,
"text": "c_1"
},
{
"math_id": 7,
"text": "M_1"
},
{
"math_id": 8,
"text": "x/v_1"
},
{
"math_id": 9,
"text": "v_1/\\beta"
},
{
"math_id": 10,
"text": "(x,y,z)=(0,0,0)"
},
{
"math_id": 11,
"text": "S(x)"
},
{
"math_id": 12,
"text": "l"
},
{
"math_id": 13,
"text": "S(x)=0"
},
{
"math_id": 14,
"text": "x<0"
},
{
"math_id": 15,
"text": "x>1"
},
{
"math_id": 16,
"text": "x-\\beta r=0"
},
{
"math_id": 17,
"text": "x-\\beta r = l"
},
{
"math_id": 18,
"text": "r^2=y^2+z^2"
},
{
"math_id": 19,
"text": "\\phi=0"
},
{
"math_id": 20,
"text": "x-\\beta r = 0"
},
{
"math_id": 21,
"text": "\\phi(x,r) = - \\frac{v_1}{2\\pi}\\int_0^{x-\\beta r} \\frac{S'(\\xi)d\\xi}{\\sqrt{(x-\\xi)^2-\\beta^2r^2}}"
},
{
"math_id": 22,
"text": "\\phi(x,r) = - \\frac{v_1}{2\\pi}\\int_0^{l} \\frac{S'(\\xi)d\\xi}{\\sqrt{(x-\\xi)^2-\\beta^2r^2}}."
},
{
"math_id": 23,
"text": "r"
},
{
"math_id": 24,
"text": "(M_1-1)^{1/8}r^{-3/4}"
},
{
"math_id": 25,
"text": "F"
},
{
"math_id": 26,
"text": "\\Pi_{xr}=\\rho v_r (v_1+v_x)\\approx \\rho_1 (\\partial\\phi/\\partial r)(v_1+\\partial\\phi/\\partial x)"
},
{
"math_id": 27,
"text": "\\Pi_{xr}"
},
{
"math_id": 28,
"text": "\\rho v_r"
},
{
"math_id": 29,
"text": "F = -2\\pi r \\rho_1 \\int_{-\\infty}^\\infty \\frac{\\partial \\phi}{\\partial r}\\frac{\\partial\\phi}{\\partial x} dx."
},
{
"math_id": 30,
"text": "x-\\xi \\sim \\beta r"
},
{
"math_id": 31,
"text": "(x-\\xi)^2-\\beta^2r^2\\approx 2\\beta r (x-\\xi-\\beta r)."
},
{
"math_id": 32,
"text": "\\phi(x,r) = - \\frac{v_1}{2\\pi\\sqrt{2\\beta r}}\\int_0^{x-\\beta r} \\frac{S'(\\xi)d\\xi}{\\sqrt{x-\\xi-\\beta r}} = - \\frac{v_1}{2\\pi\\sqrt{2\\beta r}}\\int_0^{\\infty} \\frac{S'(x-\\beta r-s)ds}{\\sqrt{s}}, \\quad s=x-\\xi-\\beta r, \\,\\,r\\gg 1."
},
{
"math_id": 33,
"text": "\\partial\\phi/\\partial r"
},
{
"math_id": 34,
"text": "-\\beta\\partial\\phi/\\partial x"
},
{
"math_id": 35,
"text": "1/\\sqrt r"
},
{
"math_id": 36,
"text": "1/r"
},
{
"math_id": 37,
"text": "\\frac{\\partial \\phi}{\\partial r} = -\\beta \\frac{\\partial \\phi}{\\partial x}= \\frac{v_1}{2\\pi}\\sqrt{\\frac{\\beta}{2r}}\\int_0^{x-\\beta r} \\frac{S''(\\xi)d\\xi}{\\sqrt{x-\\xi-\\beta r}}."
},
{
"math_id": 38,
"text": "F = \\frac{\\rho_1 v_1^2}{4\\pi} \\int_{-\\infty}^\\infty \\int_0^X \\int_0^X \\frac{S''(\\xi_1)S''(\\xi_2) d\\xi_1d\\xi_2dX}{\\sqrt{(X-\\xi_1)(X-\\xi_2)}}, \\quad X=x-\\beta r."
},
{
"math_id": 39,
"text": "X"
},
{
"math_id": 40,
"text": "\\mathrm{max}(\\xi_1,\\xi_2)"
},
{
"math_id": 41,
"text": "L\\to\\infty"
},
{
"math_id": 42,
"text": "F = - \\frac{\\rho_1 v_1^2}{2\\pi} \\int_0^l \\int_0^{\\xi_2} S''(\\xi_1)S''(\\xi_2)[\\ln(\\xi_2-\\xi_1)-\\ln 4L]d\\xi_1d\\xi_2."
},
{
"math_id": 43,
"text": "L"
},
{
"math_id": 44,
"text": "S'(0)=S'(l)=0"
},
{
"math_id": 45,
"text": "S(0)=S(l)=0"
},
{
"math_id": 46,
"text": "F = - \\frac{\\rho_1 v_1^2}{2\\pi} \\int_0^l \\int_0^{\\xi_2} S''(\\xi_1)S''(\\xi_2)\\ln(\\xi_2-\\xi_1)d\\xi_1d\\xi_2,"
},
{
"math_id": 47,
"text": "F = - \\frac{\\rho_1 v_1^2}{2\\pi} \\int_0^l \\int_0^{l} S''(\\xi_1)S''(\\xi_2)\\ln|\\xi_2-\\xi_1|d\\xi_1d\\xi_2."
},
{
"math_id": 48,
"text": "C_d = \\frac{F}{\\rho_1^2 v_1^2 l^2/2}."
},
{
"math_id": 49,
"text": "F\\sim \\rho_1 v_1^2 S^2/l^2"
},
{
"math_id": 50,
"text": "C_d \\sim S^2/l^4"
},
{
"math_id": 51,
"text": "V"
}
]
| https://en.wikipedia.org/wiki?curid=76359420 |
76359810 | Classifying space for SO(n) | In mathematics, the classifying space formula_0 for the special orthogonal group formula_1 is the base space of the universal formula_1 principal bundle formula_2. This means that formula_1 principal bundles over a CW complex up to isomorphism are in bijection with homotopy classes of its continuous maps into formula_0. The isomorphism is given by pullback.
Definition.
There is a canonical inclusion of real oriented Grassmannians given by formula_3. Its colimit is:
formula_4
Since real oriented Grassmannians can be expressed as a homogeneous space by:
formula_5
the group structure carries over to formula_0.
Classification of principal bundles.
Given a topological space formula_10 the set of formula_1 principal bundles on it up to isomorphism is denoted formula_11. If formula_10 is a CW complex, then the map:
formula_12
is bijective.
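For instance (a standard special case, stated here only as an illustration): BSO(2) is homotopy equivalent to the infinite complex projective space, which is an Eilenberg–MacLane space K(Z,2), so principal SO(2)-bundles over a CW complex X are classified by their Euler class in second integral cohomology:

```latex
\operatorname{Prin}_{\operatorname{SO}(2)}(X)
\;\cong\; [X,\operatorname{BSO}(2)]
\;\cong\; [X,K(\mathbb{Z},2)]
\;\cong\; H^2(X;\mathbb{Z}).
```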
Cohomology ring.
The cohomology ring of formula_0 with coefficients in the field formula_13 of two elements is generated by the Stiefel–Whitney classes:
formula_14
The result holds more generally for every ring with characteristic formula_15.
The cohomology ring of formula_0 with coefficients in the field formula_16 of rational numbers is generated by Pontrjagin classes and Euler class:
formula_17
formula_18
The result holds more generally for every ring with characteristic formula_19.
Infinite classifying space.
The canonical inclusions formula_20 induce canonical inclusions formula_21 on their respective classifying spaces. Their respective colimits are denoted as:
formula_22
formula_23
formula_24 is indeed the classifying space of formula_25. | [
{
"math_id": 0,
"text": "\\operatorname{BSO}(n)"
},
{
"math_id": 1,
"text": "\\operatorname{SO}(n)"
},
{
"math_id": 2,
"text": "\\operatorname{ESO}(n)\\rightarrow\\operatorname{BSO}(n)"
},
{
"math_id": 3,
"text": "\\widetilde\\operatorname{Gr}_n(\\mathbb{R}^k)\\hookrightarrow\\widetilde\\operatorname{Gr}_n(\\mathbb{R}^{k+1}),\nV\\mapsto V\\times\\{0\\}"
},
{
"math_id": 4,
"text": "\\operatorname{BSO}(n)\n:=\\operatorname{Gr}_n(\\mathbb{R}^\\infty)\n:=\\lim_{k\\rightarrow\\infty}\\widetilde\\operatorname{Gr}_n(\\mathbb{R}^k)."
},
{
"math_id": 5,
"text": "\\widetilde\\operatorname{Gr}_n(\\mathbb{R}^k)\n=\\operatorname{SO}(n+k)/(\\operatorname{SO}(n)\\times\\operatorname{SO}(k))"
},
{
"math_id": 6,
"text": "\\operatorname{SO}(1)\n\\cong 1"
},
{
"math_id": 7,
"text": "\\operatorname{BSO}(1)\n\\cong\\{*\\}"
},
{
"math_id": 8,
"text": "\\operatorname{SO}(2)\n\\cong\\operatorname{U}(1)"
},
{
"math_id": 9,
"text": "\\operatorname{BSO}(2)\n\\cong\\operatorname{BU}(1)\n\\cong\\mathbb{C}P^\\infty"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\operatorname{Prin}_{\\operatorname{SO}(n)}(X)"
},
{
"math_id": 12,
"text": "[X,\\operatorname{BSO}(n)]\\rightarrow\\operatorname{Prin}_{\\operatorname{SO}(n)}(X),\n[f]\\mapsto f^*\\operatorname{ESO}(n)"
},
{
"math_id": 13,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 14,
"text": "H^*(\\operatorname{BSO}(n);\\mathbb{Z}_2)\n=\\mathbb{Z}_2[w_2,\\ldots,w_n]."
},
{
"math_id": 15,
"text": "\\operatorname{char}=2"
},
{
"math_id": 16,
"text": "\\mathbb{Q}"
},
{
"math_id": 17,
"text": "H^*(\\operatorname{BSO}(2n);\\mathbb{Q})\n\\cong\\mathbb{Q}[p_1,\\ldots,p_n,e]/(p_n-e^2),"
},
{
"math_id": 18,
"text": "H^*(\\operatorname{BSO}(2n+1);\\mathbb{Q})\n\\cong\\mathbb{Q}[p_1,\\ldots,p_n]."
},
{
"math_id": 19,
"text": "\\operatorname{char}\\neq 2"
},
{
"math_id": 20,
"text": "\\operatorname{SO}(n)\\hookrightarrow\\operatorname{SO}(n+1)"
},
{
"math_id": 21,
"text": "\\operatorname{BSO}(n)\\hookrightarrow\\operatorname{BSO}(n+1)"
},
{
"math_id": 22,
"text": "\\operatorname{SO}\n:=\\lim_{n\\rightarrow\\infty}\\operatorname{SO}(n);"
},
{
"math_id": 23,
"text": "\\operatorname{BSO}\n:=\\lim_{n\\rightarrow\\infty}\\operatorname{BSO}(n)."
},
{
"math_id": 24,
"text": "\\operatorname{BSO}"
},
{
"math_id": 25,
"text": "\\operatorname{SO}"
}
]
| https://en.wikipedia.org/wiki?curid=76359810 |
76359866 | Classifying space for SU(n) | In mathematics, the classifying space formula_0 for the special unitary group formula_1 is the base space of the universal formula_1 principal bundle formula_2. This means that formula_1 principal bundles over a CW complex up to isomorphism are in bijection with homotopy classes of its continuous maps into formula_0. The isomorphism is given by pullback.
Definition.
There is a canonical inclusion of complex oriented Grassmannians given by formula_3. Its colimit is:
formula_4
Since complex oriented Grassmannians can be expressed as a homogeneous space by:
formula_5
the group structure carries over to formula_0.
Classification of principal bundles.
Given a topological space formula_10 the set of formula_1 principal bundles on it up to isomorphism is denoted formula_11. If formula_10 is a CW complex, then the map:
formula_12
is bijective.
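For instance (a standard special case, stated here only as an illustration): principal SU(2)-bundles over the four-sphere are classified by an integer, recovered as the second Chern number, via

```latex
\operatorname{Prin}_{\operatorname{SU}(2)}(S^4)
\;\cong\; [S^4,\operatorname{BSU}(2)]
\;\cong\; \pi_4(\operatorname{BSU}(2))
\;\cong\; \pi_3(\operatorname{SU}(2))
\;\cong\; \mathbb{Z}.
```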
Cohomology ring.
The cohomology ring of formula_0 with coefficients in the ring formula_13 of integers is generated by the Chern classes:
formula_14
Infinite classifying space.
The canonical inclusions formula_15 induce canonical inclusions formula_16 on their respective classifying spaces. Their respective colimits are denoted as:
formula_17
formula_18
formula_19 is indeed the classifying space of formula_20. | [
{
"math_id": 0,
"text": "\\operatorname{BSU}(n)"
},
{
"math_id": 1,
"text": "\\operatorname{SU}(n)"
},
{
"math_id": 2,
"text": "\\operatorname{ESU}(n)\\rightarrow\\operatorname{BSU}(n)"
},
{
"math_id": 3,
"text": "\\widetilde\\operatorname{Gr}_n(\\mathbb{C}^k)\\hookrightarrow\\widetilde\\operatorname{Gr}_n(\\mathbb{C}^{k+1}),\nV\\mapsto V\\times\\{0\\}"
},
{
"math_id": 4,
"text": "\\operatorname{BSU}(n)\n:=\\widetilde\\operatorname{Gr}_n(\\mathbb{C}^\\infty)\n:=\\lim_{n\\rightarrow\\infty}\\widetilde\\operatorname{Gr}_n(\\mathbb{C}^k)."
},
{
"math_id": 5,
"text": "\\widetilde\\operatorname{Gr}_n(\\mathbb{C}^k)\n=\\operatorname{SU}(n+k)/(\\operatorname{SU}(n)\\times\\operatorname{SU}(k))"
},
{
"math_id": 6,
"text": "\\operatorname{SU}(1)\n\\cong 1"
},
{
"math_id": 7,
"text": "\\operatorname{BSU}(1)\n\\cong\\{*\\}"
},
{
"math_id": 8,
"text": "\\operatorname{SU}(2)\n\\cong\\operatorname{Sp}(1)"
},
{
"math_id": 9,
"text": "\\operatorname{BSU}(2)\n\\cong\\operatorname{BSp}(1)\n\\cong\\mathbb{H}P^\\infty"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\operatorname{Prin}_{\\operatorname{SU}(n)}(X)"
},
{
"math_id": 12,
"text": "[X,\\operatorname{BSU}(n)]\\rightarrow\\operatorname{Prin}_{\\operatorname{SU}(n)}(X),\n[f]\\mapsto f^*\\operatorname{ESU}(n)"
},
{
"math_id": 13,
"text": "\\mathbb{Z}"
},
{
"math_id": 14,
"text": "H^*(\\operatorname{BSU}(n);\\mathbb{Z})\n=\\mathbb{Z}[c_2,\\ldots,c_n]."
},
{
"math_id": 15,
"text": "\\operatorname{SU}(n)\\hookrightarrow\\operatorname{SU}(n+1)"
},
{
"math_id": 16,
"text": "\\operatorname{BSU}(n)\\hookrightarrow\\operatorname{BSU}(n+1)"
},
{
"math_id": 17,
"text": "\\operatorname{SU}\n:=\\lim_{n\\rightarrow\\infty}\\operatorname{SU}(n);"
},
{
"math_id": 18,
"text": "\\operatorname{BSU}\n:=\\lim_{n\\rightarrow\\infty}\\operatorname{BSU}(n)."
},
{
"math_id": 19,
"text": "\\operatorname{BSU}"
},
{
"math_id": 20,
"text": "\\operatorname{SU}"
}
]
| https://en.wikipedia.org/wiki?curid=76359866 |
76359878 | Agnibaan SOrTeD | Indian sub-orbital sounding rocket, developed by Agnikul Cosmos
Agnibaan "SOrTeD" (short form of "SubOrbital Technological Demonstrator") is a suborbital technological demonstrator of the Agnibaan launch vehicle, manufactured by Indian space startup Agnikul Cosmos.
Description.
The SOrTeD mission is a single-stage launch vehicle demonstration that is powered by a semi-cryogenic engine called the Agnilet. The 6.2 meter-tall vehicle has an elliptical nose cone at the top to protect the package from harsh conditions during the flight.
Unlike traditional sounding rockets that typically launch from guide rails, Agnibaan SOrTeD lifts off vertically and follows a predetermined trajectory while executing a precisely coordinated series of maneuvers during flight. This innovative approach sets Agnibaan apart and highlights the advanced technology and capabilities employed by Agnikul Cosmos for its maiden sub-orbital flight.
For flight control, the vehicle is equipped with four carbon composite fins to provide passive control. Agnikul has said that the active pitch and yaw control is achieved through two-plane gimballing, and together, these systems enable controlled vertical ascent. The company has integrated Agnibaan SOrTeD with the flight termination system developed by ISRO.
Agnikul had previously received authorization to establish a unique launch pad near the sea on Sriharikota island, alongside its dedicated control room. The pad has received the name "Dhanush" and is referred to as ALP-01. Agnikul is the second Indian private spaceflight company to test its orbital launch system, following Skyroot Aerospace, who launched their Vikram-S rocket.
Agnibaan SOrTeD-01.
Delays.
On 21 March 2024, a day before launch, the official handle for Agnikul Cosmos posted on X that they had deferred the launch based on certain minor observations from the full countdown rehearsals the previous night. The pre-launch procedures began ten hours before liftoff, with the filling of the fuel tanks, deployment of balloons for assessing winds at various altitudes, uploading the programme to the flight computer and getting a final clearance from the launch directors. The company postponed another test on 6 April 2024 while conducting pre-launch checks. Another launch attempt on 28 May 2024 was also called off, less than a minute before lift-off. The launch date was then fixed to 30 May 2024 for a fifth launch attempt.
Launch.
The mission successfully lifted off on 30 May 2024 at 7:15 AM IST from India's first private launchpad ALP-01 located close to ISRO facilities near SDSC.
The 580 kilogram rocket, with a thrust of 6.25 kN and a propellant flow rate of 3.3 kg/s formula_0, lifted off from Sriharikota on its first flight and travelled as high as 20 kilometers above the Earth before plunging down into the Bay of Bengal, carrying about 7 kg of payloads. The data that it provides would help engineers fine-tune and shape the development of the Agnibaan launch vehicle, which is expected to fly by the last quarter of 2025.
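The quoted figures are mutually consistent to within a few percent, as a quick check of the standard relation between thrust, mass flow rate and specific impulse shows (the relation itself, Isp = F/(ṁ g0), is a textbook formula, not something stated in the sources above):

```python
g0 = 9.80665      # standard gravity [m/s^2]
thrust = 6.25e3   # quoted thrust [N]
mdot = 3.3        # quoted propellant flow rate [kg/s]

isp = thrust / (mdot * g0)
print(f"implied specific impulse ~ {isp:.0f} s")   # ~193 s, versus the quoted 187 s
```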
Flight description.
Following lift-off, the vehicle performed a pitch-over manoeuvre nearly four seconds into flight. This manoeuvre involves the controlled rotation of the vehicle to change its orientation from vertical to a predetermined angle with respect to the ground or its flight path.
The vehicle then went into the wind biasing manoeuvre at just over 39 seconds, which is introduced in rockets to compensate for the effects of wind on the trajectory during ascent. At about 1 minute and 29 seconds into the flight, the rocket reached apogee, the point at which it is farthest from the launch site, before splashing down at just over two minutes into flight, marking the completion of the mission.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(I_{sp} = 187 s)"
}
]
| https://en.wikipedia.org/wiki?curid=76359878 |
763707 | Output (economics) | Quantity or quality of goods or services produced in a given time period
In economics, output is the quantity and quality of goods or services produced in a given time period, within a given economic network, whether consumed or used for further production. The economic network may be a firm, industry, or nation. The concept of national output is essential in the field of macroeconomics. It is national output that makes a country rich, not large amounts of money.
Definition.
Output is the result of an economic process that has used inputs to produce a product or service that is available for sale or use somewhere else.
"Net output", sometimes called "netput" is a quantity, in the context of production, that is positive if the quantity is output by the production process and negative if it is an input to the production process.
Microeconomics.
Output condition.
The profit-maximizing output condition for producers equates the relative marginal cost of any two goods to the relative selling price of those goods; i.e.
formula_0
One may also deduce the ratio of marginal costs as the slope of the production–possibility frontier, which would give the rate at which society can transform one good into another.
Macroeconomics.
Relation to income.
When a particular quantity of output is produced, an identical quantity of income is generated because the output belongs to someone. Thus we have the identity that output equals income (where an identity is an equation that is always true regardless of the values of any variables).
Output can be sub-divided into components based on whose demand has generated it – total consumption "C" by members of the public (including on imported goods) minus imported goods "M" (the difference being consumption of domestic output), spending "G" by the government, domestically produced goods "X" bought by foreigners, planned inventory accumulation "Iplanned inven", unplanned inventory accumulation "Iunplanned inven" resulting from incorrect predictions of consumer and government demand, and fixed investment "If" on machinery and the like.
Likewise, income can be sub-divided according to the uses to which it is put – consumption spending, taxes "T" paid, and the portion of income neither taxed nor spent (saving "S").
Since output identically equals income, the above leads to the following identity:
formula_1
where the triple-bar sign denotes an identity. This identity is distinct from the goods market equilibrium condition, which is satisfied when unplanned inventory investment equals zero:
formula_2
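A quick numerical check with made-up figures shows how the identity forces unplanned inventory accumulation to be the residual item; the goods market is in equilibrium exactly when that residual is zero:

```python
# Made-up national-accounts figures (all in the same units):
C, G, X, M = 600.0, 200.0, 150.0, 130.0        # consumption, government, exports, imports
I_planned_inven, I_fixed = 20.0, 160.0         # planned inventory and fixed investment
T, S = 180.0, 220.0                            # taxes and saving

income = C + S + T                                               # uses of income
planned_spending = C + I_planned_inven + I_fixed + G + X - M     # demand for output
I_unplanned_inven = income - planned_spending                    # residual, by the identity
print(I_unplanned_inven)   # 0.0 with these figures, so the goods market clears
```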
Fluctuations in output.
In macroeconomics, the question of why national output fluctuates is a very critical one. And though no consensus has developed, there are some factors which economists agree make output go up and down.
If we take growth into consideration, then most economists agree that there are three basic sources for economic growth: an increase in labour usage, an increase in capital usage and an increase in effectiveness of the factors of production.
Just as increases in usage or effectiveness of factors of production can cause output to go up, anything that causes labour, capital or their effectiveness to go down will cause a decline in output or at least a decline in its rate of growth.
International economics.
Exchange of output among nations.
Exchange of output between two countries is a very common occurrence, as there is always trade taking place between different nations of the world. For example, Japan may trade its electronics with Germany for German-made cars. If the value of the trades made by the two countries is equal at that point in time, then their trade accounts would be balanced: exports would exactly equal imports in both countries.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac {MC_1}{MC_2} = \\frac {P_1}{P_2}"
},
{
"math_id": 1,
"text": "C+I_{\\text{planned inven}}+I_{\\text{unplanned inven}}+I_f+G+X-M \\equiv C+S+T,"
},
{
"math_id": 2,
"text": "C+I_{\\text{planned inven}}+I_f+G +X-M= C+S+T."
}
]
| https://en.wikipedia.org/wiki?curid=763707 |
7637545 | Non-exact solutions in general relativity | Non-exact solutions in general relativity are solutions of Albert Einstein's field equations of general relativity which hold only approximately. These solutions are typically found by treating the gravitational field formula_0 as a background space-time formula_1 (which is usually an exact solution) plus some small perturbation formula_2. One is then able to solve the Einstein field equations as a series in formula_2, dropping higher-order terms for simplicity.
A common example of this method results in the linearised Einstein field equations. In this case we expand the full space-time metric about the flat Minkowski metric, formula_3:
formula_4,
and dropping all terms which are of second or higher order in formula_2.
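As a small illustration of what dropping second- and higher-order terms buys, the following symbolic sketch (not from the article) checks that, to first order in the perturbation, the matrix eta^(-1) - eta^(-1) h eta^(-1) inverts g = eta + h, i.e. that indices of the perturbation may be raised with the background metric at this order. The bookkeeping parameter epsilon simply marks powers of h.

```python
import sympy as sp

eps = sp.symbols('epsilon')   # bookkeeping parameter marking powers of the perturbation
eta = sp.diag(-1, 1, 1, 1)    # flat Minkowski background metric
# a generic symmetric perturbation h_{mu nu}
h = sp.Matrix(4, 4, lambda a, b: sp.Symbol(f'h{min(a, b)}{max(a, b)}'))

g = eta + eps * h                                         # full metric
g_inv_lin = eta.inv() - eps * eta.inv() * h * eta.inv()   # proposed inverse, first order only

residual = (g * g_inv_lin - sp.eye(4)).expand()
assert residual.subs(eps, 0) == sp.zeros(4, 4)            # exact at zeroth order
assert residual.diff(eps).subs(eps, 0) == sp.zeros(4, 4)  # exact at first order
# everything left over is O(epsilon**2), i.e. second order in h, and is dropped
```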
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "h"
},
{
"math_id": 3,
"text": "\\eta_{\\mu\\nu}"
},
{
"math_id": 4,
"text": "g_{\\mu\\nu} = \\eta_{\\mu\\nu} + h_{\\mu\\nu} +\\mathcal{O}(h^2)"
}
]
| https://en.wikipedia.org/wiki?curid=7637545 |
76380163 | Taylor–Maccoll flow | Flow behind a conical shock wave
Taylor–Maccoll flow refers to the steady flow behind a conical shock wave that is attached to a solid cone. The flow is named after G. I. Taylor and J. W. Maccoll, who described the flow in 1933, guided by an earlier work of Theodore von Kármán.
Mathematical description.
Consider a steady supersonic flow past a solid cone that has a semi-vertical angle formula_0. A conical shock wave can form in this situation, with the vertex of the shock wave lying at the vertex of the solid cone. If it were a two-dimensional problem, i.e., a supersonic flow past a wedge, then the incoming stream would deflect through an angle formula_0 upon crossing the shock wave, so that the streamlines behind the shock wave would be parallel to the wedge sides. Such a simple turning of the streamlines is not possible in the three-dimensional case. After passing through the shock wave, the streamlines are curved and approach the generators of the cone only asymptotically. The curving of the streamlines is accompanied by a gradual increase in density and decrease in velocity, in addition to the increments/decrements effected at the shock wave itself.
The direction and magnitude of the velocity immediately behind the oblique shock wave is given by the weak branch of the shock polar. This particularly suggests that, for each value of the incoming Mach number formula_1, there exists a maximum value formula_2 of formula_0 beyond which the shock polar does not provide a solution, in which case the conical shock wave will have detached from the solid surface (see Mach reflection). These detached cases are not considered here. The flow immediately behind the oblique conical shock wave is typically supersonic, although when formula_0 is close to formula_2 it can be subsonic. The supersonic flow behind the shock wave will become subsonic as it evolves downstream.
Since all incident streamlines intersect the conical shock wave at the same angle, the intensity of the shock wave is constant. This particularly means that the entropy jump across the shock wave is also constant throughout. In this case, the flow behind the shock wave is a potential flow. Hence we can introduce the velocity potential formula_3 such that formula_4. Since the problem has no length scale and is clearly axisymmetric, the velocity field formula_5 and the pressure field formula_6 turn out to be functions of the polar angle formula_7 only (the origin of the spherical coordinates formula_8 is taken to be located at the vertex). This means that we have
formula_9
The steady potential flow is governed by the equation
formula_10
where the sound speed formula_11 is expressed as a function of the velocity magnitude formula_12 only. Substituting the above form for the velocity field into the governing equation, we obtain the general Taylor–Maccoll equation
formula_13
The equation is simplified greatly for a polytropic gas for which formula_14, i.e.,
formula_15
where formula_16 is the specific heat ratio and formula_17 is the stagnation enthalpy. Introducing this formula into the general Taylor–Maccoll equation and introducing the non-dimensional function formula_18, where formula_19 (the maximum speed, attained when the flow expands into a vacuum), we obtain, for the polytropic gas, the Taylor–Maccoll equation,
formula_20
The solution must satisfy the condition formula_21 (no penetration on the solid surface) and must also match the conditions behind the shock wave at formula_22, where formula_23 is the half-angle of the shock cone, which must be determined as part of the solution for a given incoming flow Mach number formula_24 and formula_16. The Taylor–Maccoll equation has no known explicit solution and is integrated numerically.
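A minimal numerical sketch of such an integration is shown below. It uses the standard oblique-shock relations (not stated in this article) to set the state just behind an assumed conical shock of half-angle beta, and then marches the equation inward in the polar angle until the polar velocity component vanishes, which locates the cone surface; in practice one iterates on beta until the resulting cone angle matches the prescribed one. The values gamma = 1.4, M1 = 3 and beta = 30 degrees, the tolerances, and all function names are illustrative assumptions, and the shock must be attached (normal upstream Mach number greater than one).

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 1.4

def post_shock_state(M1, beta):
    """(F, F') = (v_r, v_theta)/v_max just behind an oblique shock of angle beta."""
    Mn1 = M1 * np.sin(beta)                       # must exceed 1 for an attached shock
    delta = np.arctan(2 / np.tan(beta) * (Mn1**2 - 1)
                      / (M1**2 * (GAMMA + np.cos(2 * beta)) + 2))   # flow deflection
    Mn2 = np.sqrt((1 + 0.5 * (GAMMA - 1) * Mn1**2)
                  / (GAMMA * Mn1**2 - 0.5 * (GAMMA - 1)))
    M2 = Mn2 / np.sin(beta - delta)
    V = (2 / ((GAMMA - 1) * M2**2) + 1) ** -0.5   # |v| / v_max behind the shock
    return V * np.cos(beta - delta), -V * np.sin(beta - delta)

def taylor_maccoll(theta, y):
    """Right-hand side for y = [F, F'], with F = v_r / v_max."""
    F, Fp = y
    c2 = 0.5 * (GAMMA - 1) * (1 - F**2 - Fp**2)   # (c / v_max)^2 for a polytropic gas
    Fpp = (F * Fp**2 - c2 * (2 * F + Fp / np.tan(theta))) / (c2 - Fp**2)
    return [Fp, Fpp]

def cone_surface(theta, y):                       # event: v_theta = 0 at the cone
    return y[1]
cone_surface.terminal = True

def cone_half_angle(M1, beta):
    """March from the shock (theta = beta) toward the axis; stop at the cone."""
    sol = solve_ivp(taylor_maccoll, (beta, 1e-3), post_shock_state(M1, beta),
                    events=cone_surface, max_step=1e-3, rtol=1e-8)
    return sol.t_events[0][0]

if __name__ == "__main__":
    chi = cone_half_angle(M1=3.0, beta=np.radians(30.0))
    print(f"cone semi-vertical angle ~ {np.degrees(chi):.1f} deg")
```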
Kármán–Moore solution.
When the cone angle is very small, the flow is nearly parallel everywhere in which case, an exact solution can be found, as shown by Theodore von Kármán and Norton B. Moore in 1932. The solution is more apparent in the cylindrical coordinates formula_25 (the formula_26 here is the radial distance from the formula_27-axis, and not the density). If formula_28 is the speed of the incoming flow, then we write formula_29, where formula_30 is a small correction and satisfies
formula_31
where formula_32 is the Mach number of the incoming flow. We expect the velocity components to depend only on formula_7, i.e., formula_33 in cylindrical coordinates, which means that we must have formula_34, where formula_35 is a self-similar coordinate. The governing equation reduces to
formula_36
On the surface of the cone formula_37, we must have formula_38 and consequently formula_39.
In the small-angle approximation, the weak shock cone is given by formula_40. The trivial solution for formula_41 describes the uniform flow upstream of the shock cone, whereas the non-trivial solution satisfying the boundary condition on the solid surface behind the shock wave is given by
formula_42
We therefore have
formula_43
exhibiting a logarithmic singularity as formula_44 The velocity components are given by
formula_45
The pressure on the surface of the cone formula_46 is found to be formula_47 (in this formula, formula_48 is the density of the incoming gas).
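As a quick numerical reading of the surface-pressure result, the sketch below evaluates formula_47 for one illustrative flight condition (sea-level air density, Mach 2, a 5-degree half-angle cone); all of these numbers are assumptions chosen for the example, not values from the text.

```python
import numpy as np

rho_inf = 1.225           # kg/m^3, incoming air density (assumed)
c_inf = 340.0             # m/s, speed of sound (assumed)
M = 2.0                   # incoming Mach number
chi = np.radians(5.0)     # cone semi-vertical angle (small, as the theory requires)

U = M * c_inf
beta = np.sqrt(M**2 - 1)
dp = rho_inf * U**2 * chi**2 * (np.log(2 / (beta * chi)) - 0.5)
print(f"surface overpressure ~ {dp:.0f} Pa")   # ~9 kPa for these illustrative numbers
```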
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\chi"
},
{
"math_id": 1,
"text": "M_1"
},
{
"math_id": 2,
"text": "\\chi_{\\mathrm{max}}"
},
{
"math_id": 3,
"text": "\\varphi"
},
{
"math_id": 4,
"text": "\\mathbf v = \\nabla\\varphi"
},
{
"math_id": 5,
"text": "\\mathbf v"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "\\theta"
},
{
"math_id": 8,
"text": "(r,\\theta,\\phi)"
},
{
"math_id": 9,
"text": "\\varphi=rf(\\theta), \\quad v_r = f(\\theta), \\quad v_\\theta=f'(\\theta), \\quad v_\\phi=0, \\quad p = g(\\theta)."
},
{
"math_id": 10,
"text": "c^2\\nabla\\cdot\\mathbf v - \\mathbf v\\cdot (\\mathbf v \\cdot \\nabla)\\mathbf v=0,"
},
{
"math_id": 11,
"text": "c=c(v)"
},
{
"math_id": 12,
"text": "v^2=(\\nabla\\phi)^2"
},
{
"math_id": 13,
"text": "(c^2-f'^2) f'' + c^2 \\cot\\theta f' + (2c^2-f'^2) f = 0, \\quad c = c(f^2+f'^2)."
},
{
"math_id": 14,
"text": "c^2 = (\\gamma-1)(h_0-v^2/2)"
},
{
"math_id": 15,
"text": "c^2 = (\\gamma-1)h_0 \\left(1-\\frac{f^2+f'^2}{2h_0}\\right),"
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "h_0"
},
{
"math_id": 18,
"text": "F(\\theta) = f(\\theta)/v_{\\mathrm{max}}"
},
{
"math_id": 19,
"text": "v_{\\mathrm{max}}= \\sqrt{2h_0}"
},
{
"math_id": 20,
"text": "\\left[\\frac{\\gamma+1}{2}F'^2-\\frac{\\gamma-1}{2}(1-F^2)\\right]F'' = (\\gamma-1) (1-F^2) F + \\frac{\\gamma-1}{2}\\cot\\theta(1-F^2)F' - \\gamma F F'^2 - \\frac{\\gamma-1}{2}\\cot\\theta F'^3."
},
{
"math_id": 21,
"text": "F'(\\chi)=0"
},
{
"math_id": 22,
"text": "\\chi=\\psi"
},
{
"math_id": 23,
"text": "\\psi"
},
{
"math_id": 24,
"text": "M"
},
{
"math_id": 25,
"text": "(\\rho,\\varpi,z)"
},
{
"math_id": 26,
"text": "\\rho"
},
{
"math_id": 27,
"text": "z"
},
{
"math_id": 28,
"text": "U"
},
{
"math_id": 29,
"text": "\\varphi = Uz + \\phi"
},
{
"math_id": 30,
"text": "\\phi"
},
{
"math_id": 31,
"text": "\\frac{1}{\\rho}\\frac{\\partial }{\\partial \\rho}\\left(\\rho\\frac{\\partial \\phi}{\\partial \\rho}\\right) -\\beta^2 \\frac{\\partial^2 \\phi}{\\partial z^2}=0, \\quad \\beta^2 = M^2-1"
},
{
"math_id": 32,
"text": "M=U/c_\\infty"
},
{
"math_id": 33,
"text": "\\rho/z=\\tan\\theta"
},
{
"math_id": 34,
"text": "\\phi = zg(\\xi)"
},
{
"math_id": 35,
"text": "\\xi = \\rho/z"
},
{
"math_id": 36,
"text": "\\xi(1-\\beta^2\\xi^2) g'' + g'=0."
},
{
"math_id": 37,
"text": "\\xi = \\tan\\chi \\approx \\chi"
},
{
"math_id": 38,
"text": "v_\\rho/v_z=(\\partial\\phi/\\partial \\rho)/(U+\\partial\\phi/\\partial z)\\approx (1/U)\\partial\\phi/\\partial \\rho=\\chi"
},
{
"math_id": 39,
"text": "g'=U\\chi"
},
{
"math_id": 40,
"text": "z=\\beta \\rho"
},
{
"math_id": 41,
"text": "g"
},
{
"math_id": 42,
"text": "g(\\xi) = U \\chi^2 \\left(\\sqrt{1-\\beta^2\\xi^2}-\\cosh^{-1}\\frac{1}{\\beta\\xi}\\right)."
},
{
"math_id": 43,
"text": "\\varphi = Uz +U \\chi^2 \\left(\\sqrt{z^2-\\beta^2\\rho^2}-z\\cosh^{-1}\\frac{z}{\\beta \\rho}\\right)"
},
{
"math_id": 44,
"text": "\\rho\\to 0."
},
{
"math_id": 45,
"text": "v_z = U - U\\chi^2 \\cosh^{-1}\\frac{z}{\\beta \\rho}, \\quad v_\\rho = \\frac{U\\chi^2}{\\rho} \\sqrt{z^2-\\beta^2 \\rho^2}."
},
{
"math_id": 46,
"text": "p_s"
},
{
"math_id": 47,
"text": "p_s -p_\\infty = \\rho_\\infty U^2\\chi^2[\\ln (2/\\beta\\chi)-1/2]"
},
{
"math_id": 48,
"text": "\\rho_\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=76380163 |
76383140 | Closure of tidal inlets | Man-made coastal barriers against tides
In coastal and environmental engineering, the closure of tidal inlets entails the deliberate prevention of the entry of seawater into inland areas through the use of fill material and the construction of barriers. The aim of such closures is usually to safeguard inland regions from flooding, thereby protecting ecological integrity and reducing potential harm to human settlements and agricultural areas.
The complexity of inlet closure varies significantly with the size of the estuary involved. For smaller estuaries, which may naturally dry out at low tide, the process can be relatively straightforward. However, the management of larger estuaries demands a sophisticated blend of technical expertise, encapsulating hydrodynamics, sediment transport, as well as mitigation of the potential ecological consequences of such interventions. The development of knowledge around such closures over time reflects a concerted effort to balance flood defence mechanisms with environmental stewardship, leading to the development of both traditional and technologically advanced solutions.
In situations where rivers and inlets pose significant flood risk across large areas, providing protection along the entire length of both banks can be prohibitively expensive. In London, this issue has been addressed by construction of the Thames Barrier, which is only closed during forecasts of extreme water levels in the southern North Sea. In the Netherlands, a number of inlets were closed by fully damming their entrances. Since such dams take many months or years to complete, water exchange between the sea and the inlet continues throughout the construction period. It is only during the final stages that the gap is sufficiently narrowed to limit this exchange, presenting unique construction challenges. As the gap diminishes, significant differences in water levels between the sea and the inlet create very strong currents, potentially reaching several metres per second, through the remaining narrow opening.
Special techniques are required during this critical closure phase to prevent severe erosion of existing defences. Two primary methods are used: the abrupt or sudden closure method, which involves positioning prefabricated caissons during a brief period of slack water, and the gradual closure method, which involves progressively building up the last section of the dam, keeping the crest nearly horizontal to prevent strong currents and erosion along any specific section.
Purpose of a tidal inlet closure.
The closure of tidal inlets serves various primary purposes:
Historically, the closure of inlets was primarily aimed at land reclamation and water level control in marshy areas, facilitating agricultural development. Such activities necessitated effective management of river and storm surge levels, often requiring ongoing dike maintenance. Secondary purposes, such as tidal energy generation, harbour and construction docks, dams for transportation infrastructure, and fish farming, also emerged but had lesser environmental impact.
In contemporary times, driven by a growing emphasis on quality of life, particularly in industrialised nations, inlet closure projects encompass a broader spectrum of objectives. These may include creating freshwater storage facilities, mitigating water pollution in designated zones, providing recreational amenities, and combating saltwater intrusion or groundwater contamination.
Side effects.
Depending on circumstances, various hydrological, environmental, ecological, and economic side effects can be realised by the implementation of a tidal inlet closure, including:
Examples of closure works.
Historical closures in the Netherlands.
Several towns in the Netherlands bear names ending in "dam," indicating their origin at the site of a dam in a tidal river. Prominent examples include Amsterdam (located at a dam in the Amstel) and Rotterdam (situated at a dam in the Rotte). However, some locations, like Maasdam, have less clear origins. Maasdam, a village situated at the site of a dam on the Maas dating back to before 1300, was the site of the construction of the Grote Hollandse Waard, which was subsequently lost during the devastating St. Elizabeth's Flood of 1421. As a result of the flood, the Maas river is now located far from the village of Maasdam.
One technique widely employed in historical closures was known by a Dutch term meaning "sinking up". This method involved sinking fascine mattresses, filling them with sand, and stabilising them with ballast stone. Successive sections were then sunk on top until the dam reached a height where no further mattresses could be placed. This process effectively reduced the flow, allowing the completion of the dam with sand and clay. For instance, the construction of the Sloedam in 1879, as part of the railway to Middelburg, utilised this technique.
Early observations revealed that during closures, the flow velocity within the closure gap increased, leading to soil erosion. Consequently, measures such as bottom protection around the closing gap were implemented, guided primarily by experiential knowledge rather than precise calculations. Until 1953, closing dike breaches in tidal areas posed challenges due to high current velocities. In such instances, new dikes were constructed further inland, albeit a lengthier process, to mitigate closure difficulties. An extreme example occurred after the devastating North Sea flood of 1953, necessitating the closure of breaches at Schelphoek, marking the last major closure in the Netherlands.
Modern dam construction in the Netherlands.
In recent times, the construction of larger dams in the Netherlands has been driven by both the necessity to protect the hinterlands and the ambition to create new agricultural lands.
The formation of currents at the mouth of an inlet arises from the tidal actions of filling (high tide) and emptying (ebb tide) of the basin. The speed of these currents is influenced by the tidal range, the tidal curve, the volume of the tidal basin (also known as the "storage area"), and the size of the flow profile at that location. The tidal range varies along the Dutch coast, being minimal near Den Helder (about 1.5 metres) and maximal off the coast of Zeeland (2 to 3 metres), with the range expanding to 4 to 5 metres in the areas behind the Oosterschelde and Westerschelde.
In tidal basins with loosely packed seabeds, current channels emerge and may shift due to the constantly changing directions and speeds of currents. The strongest flows cause scour in the deepest channels, such as in the Oosterschelde where depths can reach up to 45 metres, while sandbanks form between these channels, occasionally becoming exposed at low tide.
The channel systems that naturally develop in tidal areas are generally in a state of approximate equilibrium, balancing flow velocity and the total flow profile. Conversely, when dike breaches are sealed, this equilibrium is often not yet achieved at the time of closure. For instance, rapid intervention in closing numerous breaches following the 1953 storm surge helped limit erosion. For the construction of a dam at the mouth of an inlet, activities are undertaken to reduce the flow profile, potentially leading to increased flow velocities and subsequent scouring unless pre-emptive measures are taken, such as reinforcing the beds and sides of channels with bottom protection. An exception occurs when the surface area of the tidal basin is preliminarily reduced by compartmentalisation dams.
The procedure for closing a tidal channel can generally be segmented into four phases:
Under specific circumstances, alternative construction methods may be applied; for instance, during a sand closure, dumping capacity is utilised in such a manner that more material is added per tide than can be removed by the current, typically negating the need for soil protection.
When the Zuiderzee was enclosed in 1932, it was still possible to manage the current with boulder clay, as the tidal difference there was only about 1 metre, preventing excessively high flow velocities in the closure gap that would require alternative materials. Numerous closure methods have been implemented in the Delta area, on both small and large scales, highly dependent on a variety of preconditions. These include hydraulic and soil mechanical prerequisites, as well as available resources such as materials, equipment, labour, finances, and expertise. Post-World War II, the experiences gained from dike repairs in Walcheren in 1945, the closure of the Brielse Maas in 1950, the Braakman in 1952, and the repair of the breaches after the 1953 storm surge significantly influenced the choice of closure methods for the first Delta dams.
Up until the completion of the Brouwersdam in 1971, the choice of closure method was almost entirely based on technical factors. However, environmental and fisheries considerations became equally vital in the selection of closure methods for the Markiezaatskade near Bergen op Zoom, the Philipsdam, Oesterdam, and the storm surge barrier in the Oosterschelde, taking into account factors like the timing of tidal organism mortality and salinity control during closures, which are critical for determining the initial conditions of the newly formed basin.
Closures in Germany.
In the north-west of Germany, a series of closure works have been implemented. Initially, the primary aims of these closures were land reclamation and protection against flooding. Subsequently, the focus shifted towards safety and ecological conservation. Closures took place in Meldorf (1978), Nordstrander Bucht (Husum, 1987), and Leyhörn (Greetsiel, 1991).
Around 1975, evolving global perspectives on ecological significance led to a change in the approach to closures. As a result, in northern Germany, several closures were executed differently from their original designs. For instance, while there were plans to completely dam the Leybucht near Greetsiel, only a minor portion was ultimately closed—just enough to meet safety and water management requirements. This made the closure of the remaining area no longer a technical challenge. A discharge sluice and navigation lock were constructed, providing adequate capacity to mitigate currents in the closure gap of the dam.
Closures in South Korea.
In the 1960s, South Korea faced a significant shortage of agricultural land, prompting plans for large reclamation projects, including the construction of closure dams. These projects were carried out between 1975 and 1995, incorporating the expertise and experience from the Netherlands. Over time, attitudes towards closure works in South Korea evolved, leading to considerable delays and modifications in the plans for the Hwaong and Saemangeum projects.
Closures in Bangladesh.
Creeks have been closed to facilitate the creation of agricultural land and provide protection against floods in Bangladesh for many years. The combination of safeguarding against flooding, the need for agricultural land, and the availability of irrigation water served as the driving forces behind these initiatives. Prior to 1975, such closure works were relatively modest in scale. Some early examples include:
The approach to closures in Bangladesh did not significantly differ from practices elsewhere. However, due to the country's low labour costs and high unemployment rates, methods employing extensive local manpower were preferred.
These works primarily utilised a type of locally developed fascine rolls known as "mata". The final gaps were closed swiftly within a single tidal cycle. Notably, the Gangrail closure failed twice.
In the years 1977/78, the Madargong creek was closed, safeguarding an agricultural area of 20,000 hectares. At the closure site, the creek spanned a width of 150 metres with a depth of 6 metres below mean sea level. The following year, 1978/79, saw the closure of the Chakamaya Khal, featuring a tidal prism of 10 million cubic metres, a tidal range of 3.3 metres, spanning 210 metres in width and 5 metres in depth.
In 1985, the Feni River was dammed to create an irrigation reservoir covering 1,200 hectares. The project was distinctive in its explicit request for the utilisation of local products and manual labour. The 1,200-metre-wide gap needed to be sealed during a neap tide. On the day of the closure, 12,000 workers placed 10,000 bags within the gap.
In 2020, the Nailan dam, originally constructed in the 1960s, experienced a breach that necessitated repair. At the time, the basin covered an area of 480 hectares, with a tidal range varying from 2.5 to 4 metres (neap tide to spring tide). The breach spanned a width of 500 metres, with a tidal prism of 7 million cubic metres. The closure was accomplished by deploying a substantial quantity of geobags, weighing up to 250 kg, though the majority of the bags in the core were 50 kg. The gap was progressively narrowed to 75 metres, the width of the final closure gap, which was sealed in one tidal cycle during a neap tide. To facilitate this, two rows of palisades were erected in the gap, and bags were used to fill the space between them, effectively creating a cofferdam.
Types of closures.
Closing methods can be categorized into two principal groups: gradual closures and sudden closures. Within gradual closures, four distinct methods are identified: horizontal closure without a significant sill (a), vertical closure (b), horizontal closure with a sill (c), and sand closures. Sand closures further differentiate into horizontal and vertical types. Sudden closures are typically achieved through the deployment of (sluice) caissons, often positioned on a sill (d).
The technology of closure works.
The challenge in sealing a sea inlet lies in the phenomenon that as the flow area of the closure gap decreases due to the construction of the dam, the flow speed within this gap increases. This acceleration can become so significant that the material deposited into the gap is immediately washed away, leading to the failure of the closure. Therefore, accurately calculating the flow rate is crucial. Given that the length of the basin is usually small relative to the length of the tidal wave, this calculation can typically be performed using a "storage area approach" (for more details, see the end of this page). This methodology enables the creation of straightforward graphs depicting the velocities within a closure gap throughout the closure process.
Stone closures.
Horizontal stone closures.
In the technique of horizontal stone closures, stone is deployed from both sides into the closing gap. The stone must be heavy enough to counter the increased velocity that results from the reduced flow profile. An added complication is the creation of turbulent eddies, which lead to further scouring of the seabed. It is therefore critical to lay a foundation of stone prior to commencing the closure. The closure of the Zuiderzee in 1932, as depicted in the attached photograph, vividly illustrates the downstream turbulence at the closing gap. Notably, during the Afsluitdijk closure, boulder clay was utilised in a manner akin to stone, which circumvented the need for costly imports of armourstone.
In the Netherlands, horizontal stone closures have been relatively uncommon due to the high costs associated with armourstone and the prerequisite soil protection. Conversely, in countries where stone is more affordable and soils are less prone to erosion, horizontal stone closures are more frequently employed. A notable instance of this method was the closure of the Saemangeum estuary in South Korea, where a scarcity of heavy stone led to the innovative use of stone packed in steel nets as dumping material. The logistical challenges of transporting and deploying stone, especially within the constraints of a tight timeframe to prevent excessive bottom erosion, often pose significant challenges.
Vertical stone closures.
From a hydraulic perspective, vertical closures are preferable due to their reduced turbulence and consequent minimisation of soil erosion issues. However, their implementation is more complex. For parts of the dam submerged underwater, stone dumpers (either bottom or side dumpers) can be employed. Yet, this becomes impractical for the final segments due to insufficient navigational depth. Two alternatives exist: the construction of an auxiliary bridge or the use of a cable car.
Auxiliary bridge.
An auxiliary bridge allows armourstone to be directly deposited into the closing gap. This method was contemplated for the Delta Works' Oesterdam closure but was ultimately deemed more expensive than sand closure. In the Netherlands, such a technique was applied during the closure of the dike around De Biesbosch polder in 1926, where a temporary bridge facilitated the dumping of materials into the gap using tipping carts propelled by a steam locomotive.
Cable car.
Constructing an auxiliary bridge for larger and deeper closing gaps can be exceedingly cumbersome, leading to the preference for cable cars in the Delta Works closures. The first application of a cable car was for the northern gap of the Grevelingendam, serving as a trial to gather insights for subsequent larger closures like the Brouwershavense Gat and the Oosterschelde.
Stone transport via cable involved wagons with independent propulsion, enhancing transport capacity through one-way traffic. The system's design, a collaboration between Rijkswaterstaat and French company Neyrpic, minimized malfunction risks across the network. The 'blondin automoteur continu' type cable car spanned approximately 1200 m, with a continuous track supported by two carrying cables and terminal turntables for wagon transfer. Initially, stone was transported in steel bottom-unloading containers, later supplemented by steel nets, allowing for a dumping rate of 360 tons per hour.
However, the system's loading capacity proved insufficient, prompting a switch to 1 m3 (2500 kg) concrete blocks for subsequent closures (Haringvliet and Brouwersdam). Although planned for the Oosterschelde closure, a policy shift led to the construction of a storm surge barrier instead, foregoing the use of the cable car for this purpose.
Sand closure.
Beyond the use of armourstone, closures can also be achieved solely with sand. This method necessitates a substantial dredging capacity. In the Netherlands, sand closures have been successfully implemented in various projects, including the Oesterdam, the Philipsdam, and the construction of the Second Maasvlakte.
Principles of a sand closure.
Sand closures involve employing a dumping capacity within the closure gap that introduces more material per tidal cycle than can be removed by the current. Unlike stone closures, the material used here is inherently unstable under the flow velocities encountered. Typically, sand closures do not necessitate soil protection. This, among other reasons, makes sand closure a cost-effective solution when locally sourced sand is utilized. Since 1965, numerous tidal channels have been effectively sealed using sand, aided by the rapidly increasing capabilities of modern sand suction dredgers.
These advancements have enabled quick and voluminous sand delivery for larger closures, tolerating sand losses during the closing phase of up to 20 to 50%. The initial sand closures of tidal channels — including the Ventjagersgatje in 1959 and the southern entrance to the Haringvliet bridge in 1961 — contributed to the development of a basic calculation method for sand closures. Subsequent sand closures provided practical validation for this method, refining predictions of sand losses.
Selected sand closures.
The following table outlines several channels that have been closed using sand, illustrating the technique's application and effectiveness.
"Note: Several compartments did not encompass fully enclosed basins, making a surface area metric inapplicable."
During the closure in 1972 of the Geul, a channel at the mouth of the Oosterschelde with a tidal capacity of roughly 30 million cubic metres and a maximum depth of 10 metres below mean sea level (MSL), forming part of the Oosterschelde dam between the working islands of Noordland and Neeltje Jans, sand losses were minimised thanks to the employment of high-capacity suction dredging. This strategy achieved a sand extraction rate exceeding 500,000 cubic metres per week, distributed across three suction dredgers. It was also demonstrated that initiating the closure from one side and progressing towards the shallowest part of the gap effectively reduces sand losses. This approach ensured the shortest possible distance for the sand to be deposited towards the closure's culmination, particularly during periods of maximum flow velocity.
This technique partly accounts for the significant sand losses, approximately 45%, observed during the closure of the Brielse Gat, which has a maximum depth of 2 metres below MSL and where sand was deposited from both sides towards the centre. Opting for a single sand deposit site, while reducing sand losses, necessitates substantial suction capacity and results in a notably wider closure dam to accommodate all discharge pipelines.
Designing sand closures.
A defining feature of sand closures is the movement and subsequent loss of the construction material. The principle underpinning a sand closure relies on the production of more sand than what is lost during the process. Sand losses occur daily under average flow conditions through the closing gap, contingent upon the flow dynamics. In the context of "strength and load," the "strength" of a sand closure is represented by its production capacity, while the "load" is the resultant loss. A closure is deemed successful when the production exceeds the loss, leading to a gradual narrowing of the closing gap.
The production capacity, which includes a sufficiently large extraction site for the sand, must surpass the maximum anticipated loss during the closure operation. Consequently, the feasibility study for a (complete) sand closure must initially concentrate on identifying the phase associated with maximum losses. Employing hydraulic boundary conditions, the sand loss for each closure phase can be calculated and depicted graphically as illustrated. The horizontal axis in the diagram represents the closing gap's size, indicating that the depicted capacity is insufficient for a sand closure under these conditions.
A sand closure becomes viable if sufficient sand production can be sustained near the closure gap to overcome the phase with the highest losses. The essential criterion is that the average tidal loss remains lower than the production. However, considerable uncertainties exist in both the calculated losses and the anticipated production, necessitating careful attention. The loss curve, as a function of the closing gap area, typically exhibits a single peak. The maximal loss is usually found when the closing gap area is between 0 and 30% of its initial size. Hence, initial loss calculations can be restricted to this range of closing gap sizes.
Interestingly, the peak sand loss does not coincide with the near completion of the closure gap. Despite potentially high flow velocities, the eroded width of the closing hole is minimal, thus keeping overall sand losses low. Hydraulic boundary conditions can be determined using a storage/area approach.
In general, sand closures are theoretically feasible for maximum flow velocities up to approximately 2.0 to 2.5 m/s. Beyond these velocities, achieving a sand closure becomes virtually impossible due to the resulting flow rates, which are influenced by the reference flow rate U0 and the discharge coefficient μ. The discharge coefficient μ is affected by both friction and deceleration losses within the closing gap, with friction losses being notably significant due to the large dimensions of the sand dams. Consequently, the choice of gradient measurement distance significantly impacts the discharge coefficient, which exhibits considerable variability. However, this variability diminishes during the crucial final phase of the closure, where a value of 0.9 is recommended as a reasonable upper limit for the discharge coefficient. The actual flow velocity within the closing gap is determined by applying the storage area approach, adjusted by the discharge coefficient.
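The feasibility criterion described above (production per tide must exceed the largest loss per tide over all intermediate gap sizes) can be organised as a simple check. The loss curve in the sketch below is entirely hypothetical, since the text gives no transport formula; only the structure of the comparison is meant to be illustrative.

```python
import numpy as np

gap_fraction = np.linspace(0.0, 1.0, 101)      # remaining gap area / initial gap area

def loss_per_tide(a):
    """Hypothetical single-peaked loss curve (m^3 per tide), peaking near a = 0.2."""
    return 60_000 * a * np.exp(-((a - 0.2) / 0.15) ** 2)

production_per_tide = 35_000                   # m^3 per tide, assumed dredging capacity

max_loss = loss_per_tide(gap_fraction).max()
print(f"max loss {max_loss:,.0f} m3/tide vs production {production_per_tide:,} m3/tide")
print("sand closure feasible" if production_per_tide > max_loss else "not feasible")
```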
Sudden closures (caissons).
A sudden closure involves the rapid sealing of a tidal inlet or breach in a dike. This is typically prepared in such a manner that the gap can be entirely closed in one swift action during slack tide. The use of caissons or sluice caissons is common, though other unique methods, such as sandbags or ships, have also been employed. Caissons were initially utilized as an emergency response for sealing dike breaches post the Allied Battle of Walcheren in 1944 and subsequently after the 1953 North Sea flood. This technique has since been refined and applied in the Delta Works projects.
Caisson closure.
A caisson closure involves sealing the gap with a caisson, essentially a large concrete box. This method was first applied in the Netherlands for mending dike breaches resulting from Allied assaults on Walcheren in 1944. The following year, at Rammekens, surplus caissons (Phoenix caissons) sourced from England, originally used for constructing the Mulberry harbours post-Normandy landings by Allied troops, were repurposed for dike repairs.
In the aftermath of the 1953 storm disaster, the closure of numerous breaches with caissons was contemplated. Given the uncertainty surrounding the final sizes of the gaps and the time-consuming nature of caisson construction, a decision was made shortly after February 1, 1953, to pre-fabricate a considerable quantity of relatively small caissons. These were strategically employed across various sites, and later, within the Delta Works.
A limited supply of larger Phoenix caissons from the Mulberry harbours was also utilized for sealing a few extensive dike breaches, notably at Ouwerkerk and Schelphoek.
Placing a caisson.
To successfully sink a caisson, it is imperative that the flow velocity within the closing gap is minimised; thus, the operation is conducted during slack water. Given the extremely brief period during which the current is genuinely still, the sinking process must commence while the tidal flow is still at a manageably low speed. Past experience with caisson closures has demonstrated that this speed should not exceed 0.3 m/s, which guides the timing of the various phases of the operation as follows:
This schedule dictates that flow speeds must reduce to 0.30 m/s at most 13 minutes before slack water and to 0.75 m/s at most 30 minutes before. Considering the sinusoidal nature of tides in the Netherlands, with a cycle of 12.5 hours, the maximum velocity in the closing gap should not surpass 2.5 m/s. This velocity threshold can be ascertained through a storage/basin analysis. The accompanying diagram illustrates outcomes for sill heights at MSL -10 m and MSL -12 m, indicating that a sill at MSL -12 m is necessary as the sinking time at MSL -10 m is insufficient. Consequently, caisson closures are feasible only at considerable channel depths.
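The timing argument can be checked directly if the velocity in the gap is assumed to vary sinusoidally about slack water with the 12.5-hour cycle mentioned above; the 2.5 m/s figure is taken from the text, and the sinusoidal shape is the simplifying assumption.

```python
import numpy as np

T = 12.5 * 60          # tidal period in minutes
V_max = 2.5            # m/s, peak velocity in the closure gap (the stated limit)

def minutes_below(threshold):
    """Time before slack water at which the speed falls to `threshold`."""
    return np.arcsin(threshold / V_max) * T / (2 * np.pi)

print(f"{minutes_below(0.30):.0f} min below 0.30 m/s")   # ~14 min, more than the 13 min needed
print(f"{minutes_below(0.75):.0f} min below 0.75 m/s")   # ~36 min, more than the 30 min needed
```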
Sluice caissons.
The challenge in sealing larger gaps with caissons lies in the diminishing flow area as more caissons are placed, resulting in significantly increased flow speeds (exceeding the aforementioned 2.5 m/s), complicating the final caisson's proper placement. This issue is addressed through the use of sluice caissons, essentially a box equipped with gates on one side. During installation, these gates are shut to maintain buoyancy, and the opposite side is sealed with wooden boards.
Once each caisson is positioned, the boards are removed, and the gates opened, allowing the tidal current to pass with minimal impedance. This approach ensures that the flow area doesn't drastically reduce, and flow velocities remain manageable, facilitating the placement of subsequent caissons. After all caissons are set, the gates are closed at slack water, completing the closure. Subsequently, sand is sprayed in front of the dam, and the gates along with other movable mechanisms are removed, available for reuse in future closures.
Sluice caissons were first employed in closing the Veerse Gat, and subsequently utilized at the Brouwersdam and the Volkerak. They were also deployed in the closure of the Lauwerszee.
Design of Sluice Caissons.
For caisson closures, it is crucial to maintain the largest effective flow profile possible during installation. Additionally, the discharge coefficient must be as high as possible, indicating the degree to which flow is obstructed by the caisson's shape.
Flow Area.
The flow area of each caisson should be maximised. This can be achieved by:
Discharge Coefficient.
Besides the flow area, the discharge coefficient is of paramount importance. Measures to improve the discharge coefficient include:
The table below provides the discharge coefficients for various sluice caissons designed in the Netherlands.
Special closures.
Closure by sinking ships.
In exceptional circumstances, typically during emergencies such as dike breaches, efforts are made to seal the breach by manoeuvring a ship into it. Often, this method fails due to the mismatch between the dimensions of the ship and the breach. Instances have been recorded where the ship, once directed into the breach, was then dislodged by the powerful current. Another frequent issue is the incompatibility of the ship's bottom with the seabed of the breach, leading to undermining. The ensuing strong current further erodes the seabed beneath the ship, rendering the closure attempt unsuccessful. A notable exception occurred in 1953, when a dike breach along the Hollandse IJssel was successfully sealed in this way; a monument was later erected to commemorate the event.
In Korea, an attempt was made in 1980 to close a tidal inlet using an old oil tanker. Little information is available about the outcome of this attempt, suggesting it may not have been notably successful, especially considering the numerous subsequent closures in Korea that have utilized stone. Later Google Earth imagery indicates that the ship was eventually removed following the dam's closure.
Closure with sandbags.
Utilizing sandbags and a significant workforce represents another unique closure method. This approach was employed during the construction of the dam across the Feni river in Bangladesh. At low tide, the riverbed at the closure site was almost completely exposed.
Twelve depots, each containing 100,000 sandbags, were established along the 1,200 m wide closure gap. On the day of the closure, 12,000 workers deployed these bags into the gap over a span of six hours, outpacing the rising tide. By the day's end, the tidal inlet was sealed, albeit only to the water levels typical of neap tides. In the ensuing days, the dam was further augmented with sand to withstand spring tides and, over the next three months, reinforced to resist storm surges up to 10 metres above the dam's base.
Storage area approach.
Utilising the tidal prism for velocity calculations in the neck of a tidal inlet
If a tidal basin is relatively short (i.e., its length is small compared to the length of the tidal wave), it is assumed that the basin's water level remains uniform across the basin, merely rising and falling with the tide. Under this assumption, the basin's storage (tidal prism) equals its surface area times the tidal range.
The formula for basin storage then simplifies to:
formula_0, in which:
* formula_1 represents the tidal prism (m3),
* formula_2 signifies the basin area (m2),
* formula_3 denotes the tidal range at the basin's entrance (m).
This methodology facilitates a reliable estimation of current velocities within the tidal inlet, essential for its eventual closure. Termed the "storage area approach," this technique provides a straightforward means to gauge local hydraulic conditions essential for barrier construction.
Within this approach, the water movement in the estuary is modelled without friction or inertia effects, leading to:
formula_4,
in which formula_5 is the flow rate in the inlet, formula_2 is the basin storage area, and formula_6 is the water level's rate of change.
The depicted basin storage system assumes:
For an imperfect weir:
formula_11
And for a perfect weir:
formula_12
Symbol meanings are as follows:
Combining these yields the basin storage equation, facilitating velocity graphs within the closure gap. An example graph for a tidal amplitude of 2.5 m (therefore a total range of 5 metres) shows velocities as functions of the tidal storage area (B) to closure gap width (Wg) ratio and sill depth (d'). Red indicates vertical closures, orange horizontal, and green a combination, highlighting the speed differences between closure types.
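A sketch of how such a computation can be set up is given below: the sea level is prescribed as a sine, the closure gap is treated as a weir over a sill using the discharge relation above (with the flow depth capped at two-thirds of the upstream head once the flow becomes critical, which reproduces the standard free-flow discharge), and the basin level is stepped forward in time from the storage relation. The basin area, gap width, sill depth, tidal amplitude and discharge coefficient are all assumed values, and river inflow is neglected.

```python
import numpy as np

g = 9.81
B = 50e6                   # basin storage area (m^2), assumed
Wg = 300.0                 # closure gap width (m), assumed
d_sill = 5.0               # sill depth below mean sea level (m), assumed
mu = 0.9                   # discharge coefficient, assumed
a, T = 1.5, 12.42 * 3600   # tidal amplitude (m) and period (s), assumed

def weir_flow(H1, h_low):
    """Discharge and flow depth over the sill: Q = mu*h2*Wg*sqrt(2g(H1 - h2)),
    with h2 the downstream depth (submerged flow) or 2/3*H1 (critical flow)."""
    h2 = max(h_low, (2.0 / 3.0) * H1)
    return mu * h2 * Wg * np.sqrt(2 * g * max(H1 - h2, 0.0)), h2

dt = 30.0                                       # time step (s)
h3 = a + d_sill                                 # basin level above the sill, start at high water
u_max = 0.0
for ti in np.arange(0.0, 2 * T, dt):
    sea = d_sill + a * np.sin(2 * np.pi * ti / T)          # sea level above the sill
    Q, h2 = weir_flow(max(sea, h3), min(sea, h3))
    h3 += (Q if sea > h3 else -Q) / B * dt                  # storage relation B*dh3/dt = +/- Q
    u_max = max(u_max, Q / (Wg * h2))
print(f"peak velocity in the closure gap ~ {u_max:.1f} m/s")   # several m/s for this narrow gap
```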
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P = B \\Delta H"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "\\Delta H"
},
{
"math_id": 4,
"text": "Q = B \\frac{dh_3}{dt}"
},
{
"math_id": 5,
"text": "Q"
},
{
"math_id": 6,
"text": "\\frac{dh_3}{dt}"
},
{
"math_id": 7,
"text": "Q_r(t)"
},
{
"math_id": 8,
"text": "Q_s(t)"
},
{
"math_id": 9,
"text": "H_1(t)"
},
{
"math_id": 10,
"text": "h_2(t)"
},
{
"math_id": 11,
"text": "Q_s = \\mu h_2 W_g \\sqrt{2g (H_1 - h_2)} \\quad \\text{and} \\quad h_2 = h_3 \\quad \\text{for} \\quad h_3 > \\frac{2}{3} H_1"
},
{
"math_id": 12,
"text": "Q_s = m \\frac{2}{3} h_2 W_g \\sqrt{\\frac{2}{3}g H_1} \\quad \\text{and} \\quad h_2 = \\frac{2}{3}H_1 \\quad \\text{for} \\quad h_3 < \\frac{2}{3} H_1"
}
]
| https://en.wikipedia.org/wiki?curid=76383140 |
7638813 | Rhodanese | Mitochondrial enzyme which breaks down cyanide
Rhodanese is a mitochondrial enzyme that detoxifies cyanide (CN−) by converting it to thiocyanate (SCN−, also known as "rhodanate"). In enzymatology, the common name is listed as thiosulfate sulfurtransferase (EC 2.8.1.1). The diagram on the right shows the crystallographically-determined structure of rhodanese.
It catalyzes the following reaction:
thiosulfate + cyanide formula_0 sulfite + thiocyanate
Structure and mechanism.
This reaction takes place in two steps. In the first step, thiosulfate is reduced by the thiol group on cysteine-247 to form a persulfide and a sulfite. In the second step, the persulfide reacts with cyanide to produce thiocyanate, regenerating the cysteine thiol.
Rhodanese shares an evolutionary relationship with a large family of proteins, including:
Rhodanese has an internal duplication. This domain is found as a single copy in other proteins, including phosphatases and ubiquitin C-terminal hydrolases.
Clinical relevance.
This reaction is important for the treatment of exposure to cyanide, since the thiocyanate formed is only around 1/200th as toxic. The use of thiosulfate solution as an antidote for cyanide poisoning is based on the activation of this enzymatic cycle.
Human proteins.
The human mitochondrial rhodanese gene is TST.
The following other human genes match the "Rhodanese-like" domain on InterPro, but are not "the" rhodanese with its catalytic activity (see also the list of related families in #Structure and mechanism):
Nomenclature.
Although the standard nomenclature rules for enzymes indicate that their names are to end with the letters "-ase", rhodanese was first described in 1933, prior to the 1955 establishment of the Enzyme Commission; as such, the older name had already attained widespread usage.
The systematic name of this enzyme class is "thiosulfate:cyanide sulfurtransferase". Other names in common use include "thiosulfate cyanide transsulfurase", "thiosulfate thiotransferase", "rhodanese", and "rhodanase".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=7638813 |
763902 | Projective frame | In projective geometry, points that define coordinates
In mathematics, and more specifically in projective geometry, a projective frame or projective basis is a tuple of points in a projective space that can be used for defining homogeneous coordinates in this space. More precisely, in a projective space of dimension "n", a projective frame is a "n" + 2-tuple of points such that no hyperplane contains "n" + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension "n" has at most "n" + 1 vertices.
In this article, only projective spaces over a field "K" are considered, although most results can be generalized to projective spaces over a division ring.
Let P("V") be a projective space of dimension "n", where "V" is a "K"-vector space of dimension "n" + 1. Let formula_0 be the canonical projection that maps a nonzero vector v to the corresponding point of P("V"), which is the vector line that contains v.
Every frame of P("V") can be written as formula_1 for some vectors formula_2 of V. The definition implies the existence of nonzero elements of "K" such that formula_3. Replacing formula_4 by formula_5 for formula_6 and formula_7 by formula_8, one gets the following characterization of a frame:
"n" + 2 points of P("V") form a frame if and only if they are the image by p of a basis of V and the sum of its elements.
Moreover, two bases define the same frame in this way, if and only if the elements of the second one are the products of the elements of the first one by a fixed nonzero element of "K".
As homographies of P("V") are induced by linear endomorphisms of V, it follows that, given two frames, there is exactly one homography mapping the first one onto the second one. In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms). It is sometimes called the "first fundamental theorem of projective geometry".
Every frame can be written as formula_9 where formula_10 is basis of V. The "projective coordinates" or homogeneous coordinates of a point "p"("v") over this frame are the coordinates of the vector v on the basis formula_11 If one changes the vectors representing the point "p"("v") and the frame elements, the coordinates are multiplied by a fixed nonzero scalar.
Commonly, the projective space P"n"("K") = P("K""n"+1) is considered. It has a "canonical frame" consisting of the image by "p" of the canonical basis of "K""n"+1 (consisting of the elements having only one nonzero entry, which is equal to 1), and (1, 1, ..., 1). On this basis, the homogeneous coordinates of "p"("v") are simply the entries (coefficients) of "v".
Given another projective space P("V") of the same dimension n, and a frame "F" of it, there is exactly one homography "h" mapping "F" onto the canonical frame of P("K""n"+1). The projective coordinates of a point "a" on the frame "F" are the homogeneous coordinates of "h"("a") on the canonical frame of P"n"("K").
In the case of a projective line, a frame consists of three distinct points. If P1("K") is identified with K with a point at infinity ∞ added, then its canonical frame is (∞, 0, 1). Given any frame ("a"0, "a"1, "a"2), the projective coordinates of a point "a" ≠ "a"0 are ("r", 1), where r is the cross-ratio ("a", "a"2; "a"1, "a"0). If "a" = "a"0, the cross-ratio is infinite, and the projective coordinates are (1, 0).
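Over K = R, the characterization above translates into a small numerical procedure: rescale the vectors representing the first "n" + 1 frame points so that their sum represents the last point; the rescaled vectors form a basis, and the projective coordinates of any point are its coordinates in that basis, defined up to a common nonzero factor. The sketch below does this with NumPy; the function names and the example frames are illustrative choices.

```python
import numpy as np

def adapted_basis(frame):
    """frame: (n+2) x (n+1) array of homogeneous vectors, no n+1 of them dependent."""
    *f, unit = frame
    F = np.column_stack(f)            # columns are f_0, ..., f_n
    lam = np.linalg.solve(F, unit)    # write the unit point as sum_i lam_i f_i
    return F * lam                    # columns lam_i * f_i form the adapted basis

def projective_coordinates(frame, point):
    """Homogeneous coordinates of `point` relative to `frame`, up to a common factor."""
    return np.linalg.solve(adapted_basis(frame), point)

canonical = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
print(projective_coordinates(canonical, np.array([2., 3., 5.])))    # [2. 3. 5.]

other = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [3, 2, 1]], float)
print(projective_coordinates(other, np.array([1., 1., 0.])))        # [0. 1. 0.]
```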
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p:V\\setminus\\{0\\}\\to \\mathbf P(V)"
},
{
"math_id": 1,
"text": "\\left(p(e_0), \\ldots, p(e_{n+1})\\right),"
},
{
"math_id": 2,
"text": "e_0, \\dots, e_{n+1}"
},
{
"math_id": 3,
"text": "\\lambda_0e_0 + \\cdots + \\lambda_{n+1}e_{n+1}=0"
},
{
"math_id": 4,
"text": "e_i"
},
{
"math_id": 5,
"text": "\\lambda_ie_i"
},
{
"math_id": 6,
"text": "i\\le n"
},
{
"math_id": 7,
"text": "e_{n+1}"
},
{
"math_id": 8,
"text": "-\\lambda_{n+1}e_{n+1}"
},
{
"math_id": 9,
"text": "(p(e_0), \\ldots, p(e_n), p(e_0+\\cdots+e_n)),"
},
{
"math_id": 10,
"text": "(e_0, \\dots, e_n)"
},
{
"math_id": 11,
"text": "(e_0, \\dots, e_n)."
}
]
| https://en.wikipedia.org/wiki?curid=763902 |
7639504 | BIT predicate | Test of a specified bit in a binary number
In mathematics and computer science, the BIT predicate, sometimes written formula_0, is a predicate that tests whether the formula_1th bit of the number formula_2 (starting from the least significant digit) is 1, when formula_2 is written as a binary number. Its mathematical applications include modeling the membership relation of hereditarily finite sets, and defining the adjacency relation of the Rado graph. In computer science, it is used for efficient representations of set data structures using bit vectors, in defining the private information retrieval problem from communication complexity, and in descriptive complexity theory to formulate logical descriptions of complexity classes.
History.
The BIT predicate was first introduced in 1937 by Wilhelm Ackermann to define the Ackermann coding, which encodes hereditarily finite sets as natural numbers. The BIT predicate can be used to perform membership tests for the encoded sets: formula_0 is true if and only if the set encoded by formula_1 is a member of the set encoded by formula_2.
Ackermann denoted the predicate formula_0 as formula_3, using a Fraktur font to distinguish it from the notation formula_4 that he used for set membership (short for "formula_1 is an element of formula_2" in German). The notation formula_0, and the name "the BIT predicate", come from the work of Ronald Fagin and Neil Immerman, who applied this predicate in computational complexity theory as a way to encode and decode information in the late 1980s and early 1990s.
Description and implementation.
The binary representation of a number formula_2 is an expression for formula_2 as a sum of distinct powers of two,
formula_5
where each bit formula_6 in this expression is either 0 or 1. It is commonly written in binary notation as just the sequence of these bits, formula_7. Given this expansion for formula_2, the BIT predicate formula_0 is defined to equal formula_6. It can be calculated from the formula
formula_8
where formula_9 is the floor function and mod is the modulo function.
The BIT predicate is a primitive recursive function. As a binary relation (producing true and false values rather than 1 and 0 respectively), the BIT predicate is asymmetric: there do not exist two numbers formula_2 and formula_1 for which both formula_0 and formula_10 are true.
In programming languages such as C, C++, Java, or Python that provide a right shift operator codice_0 and a bitwise Boolean and operator codice_1, the BIT predicate formula_0 can be implemented by the expression
codice_2. The subexpression codice_3 shifts the bits in the binary representation of formula_2 so that bit formula_6 is shifted to position 0, and the subexpression codice_4 masks off the remaining bits, leaving only the bit in position 0. As with the modular arithmetic formula above, the value of the expression is 1 or 0, according to whether formula_0 is true or false.
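A direct Python transcription of the two definitions above (the floor-and-modulo formula and the shift-and-mask expression), together with a small consistency check, reads:

```python
def bit_arithmetic(i, j):
    return (i // 2**j) % 2        # floor(i / 2^j) mod 2

def bit_shift(i, j):
    return (i >> j) & 1           # shift bit j to position 0, mask off the rest

# the two definitions agree on a small range of inputs
assert all(bit_arithmetic(i, j) == bit_shift(i, j)
           for i in range(256) for j in range(12))
print(bin(22), [j for j in range(5) if bit_shift(22, j)])   # 0b10110 -> bits 1, 2, 4 are set
```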
Applications.
Set data structures.
For a set represented as a bit array, the BIT predicate can be used to test set membership. For instance, subsets of the non-negative integers formula_11 may be represented by a bit array with a one in position formula_2 when formula_2 is a member of the subset, and a zero in that position when it is not a member. When such a bit array is interpreted as a binary number, the set formula_12 for distinct formula_13 is represented as the binary number formula_14. If formula_15 is a set, represented in this way, and formula_2 is a number that may or may not be an element of formula_15, then formula_16 returns a nonzero value when formula_2 is a member and zero when it is not.
The same technique may be used to test membership in subsets of any sequence formula_17 of distinct values, encoded using powers of two whose exponents are the positions of the elements in this sequence, rather than their values. For instance, in the Java collections framework, codice_5 uses this technique to implement a set data structure for enumerated types. Ackermann's encoding of the hereditarily finite sets is an example of this technique, for the recursively-generated sequence of hereditarily finite sets.
Private information retrieval.
In the mathematical study of computer security, the private information retrieval problem can be modeled as one in which a client, communicating with a collection of servers that store a binary number formula_2, wishes to determine the result of a BIT predicate formula_0 without divulging the value of formula_1 to the servers. A method has been described for replicating formula_2 across two servers in such a way that the client can solve the private information retrieval problem using a substantially smaller amount of communication than would be necessary to recover the complete value of formula_2.
Complexity and logic.
The BIT predicate is often examined in the context of first-order logic, where systems of logic result from adding the BIT predicate to first-order logic. In descriptive complexity, the complexity class FO describes the class of formal languages that can be described by a formula in first-order logic with a comparison operation on totally ordered variables (interpreted as the indexes of characters in a string) and with predicates that test whether this string has a given character at a given numerical index. A formula in this logic defines a language consisting of its finite models. However, with these operations, only a very restricted class of languages, the star-free regular languages, can be described. Adding the BIT predicate to the repertoire of operations used in these logical formulas results in a more robust complexity class, FO[BIT], meaning that it is less sensitive to minor variations in its definition.
The class FO[BIT] is the same as the class FO[+,×], of first-order logic with addition and multiplication predicates.
It is also the same as the circuit complexity class DLOGTIME-uniform AC0. Here, AC0 describes the problems that can be computed by circuits of AND gates and OR gates with polynomial size, bounded height, and unbounded fanout. "Uniform" means that the circuits of all problem sizes must be described by a single algorithm. More specifically, it must be possible to index the gates of each circuit by numbers in such a way that the type of each gate and the adjacency between any two gates can be computed by a deterministic algorithm whose time is logarithmic in the size of the circuit (DLOGTIME).
Construction of the Rado graph.
In 1964, German–British mathematician Richard Rado used the BIT predicate to construct the infinite Rado graph. Rado's construction is just the symmetrization of Ackermann's 1937 construction of the hereditarily finite sets from the BIT predicate: two vertices numbered formula_2 and formula_1 are adjacent in the Rado graph when either formula_0 or formula_10 is nonzero.
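As an illustration, the symmetrized adjacency test used in Rado's construction can be sketched as follows (a hypothetical helper, not library code):

    def rado_adjacent(i, j):
        """Distinct vertices i and j are adjacent when BIT(i, j) or BIT(j, i) is nonzero."""
        return bool((i >> j) & 1) or bool((j >> i) & 1)

    # Vertex 0 is adjacent to every odd-numbered vertex, since bit 0 of an odd number is set.
    assert rado_adjacent(0, 5) and not rado_adjacent(0, 4)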
The resulting graph has many important properties: it contains every finite undirected graph as an induced subgraph, and any isomorphism of its induced subgraphs can be extended to a symmetry of the whole graph.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{BIT}(i,j)"
},
{
"math_id": 1,
"text": "j"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "\\mathfrak{El}(j,i)"
},
{
"math_id": 4,
"text": "\\mathrm{El}(j,i)"
},
{
"math_id": 5,
"text": "i = \\cdots b_3 2^3 + b_2 2^2 + b_1 2^1 + b_0 2^0"
},
{
"math_id": 6,
"text": "b_j"
},
{
"math_id": 7,
"text": "\\cdots b_3b_2b_1b_0"
},
{
"math_id": 8,
"text": "\\text{BIT}(i,j) = \\left\\lfloor \\frac{i}{2^j} \\right\\rfloor \\bmod 2,"
},
{
"math_id": 9,
"text": "\\lfloor \\cdot \\rfloor"
},
{
"math_id": 10,
"text": "\\text{BIT}(j,i)"
},
{
"math_id": 11,
"text": "\\{0, 1, \\ldots\\}"
},
{
"math_id": 12,
"text": "\\{i,j,k,\\dots\\}"
},
{
"math_id": 13,
"text": "i,j,k,\\dots"
},
{
"math_id": 14,
"text": "2^i+2^j+2^k+\\cdots"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "\\text{BIT}(S,i)"
},
{
"math_id": 17,
"text": "x_0,x_1,\\dots"
}
]
| https://en.wikipedia.org/wiki?curid=7639504 |
763951 | Simultaneous localization and mapping | Computational navigational technique used by robots and autonomous vehicles
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
SLAM algorithms are tailored to the available resources and are not aimed at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots and even inside the human body.
Mathematical description of the problem.
Given a series of controls formula_0 and sensor observations formula_1 over discrete time steps formula_2, the SLAM problem is to compute an estimate of the agent's state formula_3 and a map of the environment formula_4. All quantities are usually probabilistic, so the objective is to compute
formula_5
Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function formula_6,
formula_7
Similarly the map can be updated sequentially by
formula_8
Like many inference problems, the solutions to inferring the two variables together can be found, to a local optimum solution, by alternating updates of the two beliefs in a form of an expectation–maximization algorithm.
Algorithms.
Statistical techniques used to approximate the above equations include Kalman filters and particle filters (the algorithm behind Monte Carlo Localization). They provide an estimation of the posterior probability distribution for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using covariance intersection are able to avoid reliance on statistical independence assumptions to reduce algorithmic complexity for large-scale applications. Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty.
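A hedged sketch of the particle-filter idea behind Monte Carlo localization, with stand-in motion and measurement models (the function names and models are placeholders, not any particular library's API):

    import numpy as np

    def particle_filter_step(particles, weights, u, z, motion_model, likelihood):
        """One predict/weight/resample cycle over a set of pose hypotheses."""
        # Predict: propagate each particle through the (noisy) motion model under control u.
        particles = np.array([motion_model(p, u) for p in particles])
        # Weight: score each particle by the likelihood of the observation z given that pose.
        weights = weights * np.array([likelihood(z, p) for p in particles])
        weights = weights / weights.sum()
        # Resample: draw particles in proportion to their weights, then reset the weights.
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))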
Set-membership techniques are mainly based on interval constraint propagation.
They provide a set which encloses the pose of the robot and a set approximation of the map. Bundle adjustment, and more generally maximum a posteriori estimation (MAP), is another popular technique for SLAM using image data, which jointly estimates poses and landmark positions, increasing map fidelity, and is used in commercialized SLAM systems such as Google's ARCore, which replaced its prior augmented reality computing platform named Tango, formerly "Project Tango". MAP estimators compute the most likely explanation of the robot poses and the map given the sensor data, rather than trying to estimate the entire posterior probability.
New SLAM algorithms remain an active research area, and are often driven by differing requirements and assumptions about the types of maps, sensors and models as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.
Mapping.
Topological maps are a method of environment representation which capture the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.
In contrast, grid maps use arrays (typically square or hexagonal) of discretized cells to represent a topological world, and make inferences about which cells are occupied. Typically the cells are assumed to be statistically independent to simplify computation. Under such assumption, formula_9 are set to 1 if the new map's cells are consistent with the observation formula_1 at location formula_3 and 0 if inconsistent.
Modern self driving cars mostly simplify the mapping problem to almost nothing, by making extensive use of highly detailed map data collected in advance. This can include map annotations to the level of marking locations of individual white line segments and curbs on the road. Location-tagged visual data such as Google's StreetView may also be used as part of maps. Essentially such systems simplify the SLAM problem to a simpler localization only task, perhaps allowing for moving objects such as cars and people only to be updated in the map at runtime.
Sensing.
SLAM will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms. Statistical independence is the mandatory requirement to cope with metric bias and with noise in measurements. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to the sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite extreme, tactile sensors are extremely sparse as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose location can be estimated by a sensor, such as Wi-Fi access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model formula_10 directly as a function of the location.
Optical sensors may be one-dimensional (single beam) or 2D- (sweeping) laser rangefinders, 3D high definition light detection and ranging (lidar), 3D flash lidar, 2D or 3D sonar sensors, and one or more 2D cameras. Since the invention of local features, such as SIFT, there has been intense research into visual SLAM (VSLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.
Follow-up research has extended these visual SLAM approaches. Both visual and lidar sensors are informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include tactile SLAM (sensing by local touch only), radar SLAM, acoustic SLAM, and Wi-Fi-SLAM (sensing by strengths of nearby Wi-Fi access points). Recent approaches apply quasi-optical wireless ranging for multi-lateration (real-time locating system (RTLS)) or multi-angulation in conjunction with SLAM, to account for the erratic nature of wireless measurements. A kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main sensor and relies on the fact that pedestrians are able to avoid walls to automatically build floor plans of buildings by an indoor positioning system.
For some outdoor applications, the need for SLAM has been almost entirely removed due to high-precision differential GPS sensors. From a SLAM perspective, these may be viewed as location sensors whose likelihoods are so sharp that they completely dominate the inference. However, GPS sensors may occasionally degrade or fail entirely, e.g. during times of military conflict, which are of particular interest to some robotics applications.
Kinematics modeling.
The formula_6 term represents the kinematics of the model, which usually includes information about action commands given to a robot. As a part of the model, the kinematics of the robot is included, to improve estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the contributions from various sensors and various partial error models, and finally produces a sharp virtual depiction as a map, with the location and heading of the robot represented as a cloud of probability. Mapping is the final depiction of such a model; the map is either that depiction or the abstract term for the model.
For 2D robots, the kinematics are usually given by a mixture of rotation and "move forward" commands, which are implemented with additional motor noise. Unfortunately the distribution formed by independent noise in angular and linear directions is non-Gaussian, but is often approximated by a Gaussian. An alternative approach is to ignore the kinematic term and read odometry data from robot wheels after each command—such data may then be treated as one of the sensors rather than as kinematics.
Moving objects.
Non-static environments, such as those containing other vehicles or pedestrians, continue to present research challenges. SLAM with DATMO is a model which tracks moving objects in a similar way to the agent itself.
Loop closure.
Loop closure is the problem of recognizing a previously-visited location and updating beliefs accordingly. This can be a problem because model or algorithm errors can assign low priors to the location. Typical loop closure methods apply a second algorithm to compute some type of sensor measure similarity, and reset the location priors when a match is detected. For example, this can be done by storing and comparing bag of words vectors of scale-invariant feature transform (SIFT) features from each previously visited location.
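For instance, a loop-closure check based on comparing bag-of-words descriptors can be sketched as a simple cosine-similarity test (the threshold and the descriptor construction are illustrative assumptions, not a specific published system):

    import numpy as np

    def is_loop_closure(bow_current, bow_stored, threshold=0.8):
        """Flag a revisit when two bag-of-words histograms are similar enough."""
        a = np.asarray(bow_current, dtype=float)
        b = np.asarray(bow_stored, dtype=float)
        cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return cosine >= threshold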
Exploration.
"Active SLAM" studies the combined problem of SLAM with deciding where to move next to build the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing regimes such as tactile SLAM. Active SLAM is generally performed by approximating the entropy of the map under hypothetical actions. "Multi agent SLAM" extends this problem to the case of multiple robots coordinating themselves to explore optimally.
Biological inspiration.
In neuroscience, the hippocampus appears to be involved in SLAM-like computations, giving rise to place cells, and has formed the basis for bio-inspired SLAM systems such as RatSLAM.
Collaborative SLAM.
"Collaborative SLAM" combines sensors from multiple robots or users to generate 3D maps. This capability was demonstrated by a number of teams in the 2021 DARPA Subterranean Challenge.
Specialized SLAM methods.
Acoustic SLAM.
An extension of the common SLAM problem has been applied to the acoustic domain, where environments are represented by the three-dimensional (3D) position of sound sources, termed aSLAM (Acoustic Simultaneous Localization and Mapping). Early implementations of this technique have used direction-of-arrival (DoA) estimates of the sound source location, and rely on principal techniques of sound localization to determine source locations. An observer, or robot, must be equipped with a microphone array to enable use of Acoustic SLAM, so that DoA features are properly estimated. Acoustic SLAM has paved foundations for further studies in acoustic scene mapping, and can play an important role in human-robot interaction through speech. To map multiple, and occasionally intermittent, sound sources, an acoustic SLAM system uses foundations in random finite set theory to handle the varying presence of acoustic landmarks. However, the nature of acoustically derived features leaves Acoustic SLAM susceptible to problems of reverberation, inactivity, and noise within an environment.
Audiovisual SLAM.
Originally designed for human–robot interaction, Audio-Visual SLAM is a framework that provides the fusion of landmark features obtained from both the acoustic and visual modalities within an environment. Human interaction is characterized by features perceived in not only the visual modality, but the acoustic modality as well; as such, SLAM algorithms for human-centered robots and machines must account for both sets of features. An Audio-Visual framework estimates and maps positions of human landmarks through use of visual features like human pose, and audio features like human speech, and fuses the beliefs for a more robust map of the environment. For applications in mobile robotics (e.g., drones, service robots), it is valuable to use low-power, lightweight equipment such as monocular cameras, or microelectronic microphone arrays. Audio-Visual SLAM can also allow for complementary function of such sensors, by compensating for the narrow field of view, feature occlusions, and optical degradations common to lightweight visual sensors with the full field of view and unobstructed feature representations inherent to audio sensors. The susceptibility of audio sensors to reverberation, sound source inactivity, and noise can also be accordingly compensated through fusion of landmark beliefs from the visual modality. Complementary function between the audio and visual modalities in an environment can prove valuable for the creation of robotics and machines that fully interact with human speech and human movement.
Implementation methods.
Various SLAM algorithms are implemented in the open-source software Robot Operating System (ROS) libraries, often used together with the Point Cloud Library for 3D maps or visual features from OpenCV.
EKF SLAM.
In robotics, "EKF SLAM" is a class of algorithms which uses the extended Kalman filter (EKF) for SLAM. Typically, EKF SLAM algorithms are feature based, and use the maximum likelihood algorithm for data association. In the 1990s and 2000s, EKF SLAM had been the de facto method for SLAM, until the introduction of FastSLAM.
Associated with the EKF is the Gaussian noise assumption, which significantly impairs EKF SLAM's ability to deal with uncertainty. With a greater amount of uncertainty in the posterior, the linearization in the EKF fails.
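A minimal linear-Gaussian predict/update step, illustrating the filtering machinery that EKF SLAM applies to a joint state of robot pose and landmark positions; in a real EKF the matrices F and H would be Jacobians of nonlinear motion and measurement models, and all names here are illustrative assumptions:

    import numpy as np

    def ekf_step(mu, Sigma, u, z, F, B, H, Q, R):
        """One predict/update cycle of a (linearized) Kalman filter."""
        # Predict with the motion model.
        mu_pred = F @ mu + B @ u
        Sigma_pred = F @ Sigma @ F.T + Q
        # Update with the measurement z.
        S = H @ Sigma_pred @ H.T + R                # innovation covariance
        K = Sigma_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
        mu_new = mu_pred + K @ (z - H @ mu_pred)
        Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
        return mu_new, Sigma_new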
GraphSLAM.
In robotics, GraphSLAM is a SLAM algorithm which uses sparse information matrices produced by generating a factor graph of observation interdependencies (two observations are related if they contain data about the same landmark). It is based on optimization algorithms.
History.
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986. Other pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s, which showed that solutions to SLAM exist in the infinite data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution. The acronym SLAM was coined within the paper, "Localization of Autonomous Guided Vehicles", which first appeared in ISR in 1995.
The self-driving STANLEY and JUNIOR cars, led by Sebastian Thrun, won the DARPA Grand Challenge and came second in the DARPA Urban Challenge in the 2000s, and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners and virtual reality headsets such as the Meta Quest 2 and PICO 4 for markerless inside-out tracking.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u_t"
},
{
"math_id": 1,
"text": "o_t"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "x_t"
},
{
"math_id": 4,
"text": "m_t"
},
{
"math_id": 5,
"text": " P(m_{t+1},x_{t+1}|o_{1:t+1},u_{1:t}) "
},
{
"math_id": 6,
"text": "P(x_t|x_{t-1})"
},
{
"math_id": 7,
"text": "P(x_t | o_{1:t},u_{1:t},m_t) = \\sum_{m_{t-1} } P(o_{t}|x_t, m_t,u_{1:t}) \\sum_{x_{t-1}} P(x_t|x_{t-1}) P(x_{t-1}|m_t, o_{1:t-1},u_{1:t}) /Z"
},
{
"math_id": 8,
"text": "P(m_t | x_t,o_{1:t},u_{1:t}) = \\sum_{x_t} \\sum_{m_t} P(m_t | x_t, m_{t-1}, o_t,u_{1:t} ) P(m_{t-1},x_t | o_{1:t-1},m_{t-1},u_{1:t})"
},
{
"math_id": 9,
"text": "P(m_t | x_t, m_{t-1}, o_t )"
},
{
"math_id": 10,
"text": "P(o_t|x_t)"
}
]
| https://en.wikipedia.org/wiki?curid=763951 |
76401012 | Topological deep learning | Topological Deep Learning (TDL) is a research field that extends deep learning to handle complex, non-Euclidean data structures. Traditional deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), excel in processing data on regular grids and sequences. However, scientific and real-world data often exhibit more intricate data domains encountered in scientific computations, including point clouds, meshes, time series, scalar fields, graphs, or general topological spaces like simplicial complexes and CW complexes. TDL addresses this by incorporating topological concepts to process data with higher-order relationships, such as interactions among multiple entities and complex hierarchies. This approach leverages structures like simplicial complexes and hypergraphs to capture global dependencies and qualitative spatial properties, offering a more nuanced representation of data. TDL also encompasses methods from computational and algebraic topology that permit studying properties of neural networks and their training process, such as their predictive performance or generalization properties.
History and motivation.
Traditional techniques from deep learning often operate under the assumption that a dataset is residing in a highly-structured space (like images, where convolutional neural networks exhibit outstanding performance over alternative methods) or a Euclidean space. The prevalence of new types of data, in particular graphs, meshes, and molecules, resulted in the development of new techniques, culminating in the field of geometric deep learning, which originally proposed a signal-processing perspective for treating such data types. While originally confined to graphs, where connectivity is defined based on nodes and edges, follow-up work extended concepts to a larger variety of data types, including simplicial complexes and CW complexes, with recent work proposing a unified perspective of message-passing on general combinatorial complexes.
An independent perspective on different types of data originated from topological data analysis, which proposed a new framework for describing structural information of data, i.e., their "shape," that is inherently aware of multiple scales in data, ranging from "local" information to "global" information. While at first restricted to smaller datasets, subsequent work developed new descriptors that efficiently summarized topological information of datasets to make them available for traditional machine-learning techniques, such as support vector machines or random forests. Such descriptors ranged from new techniques for feature engineering over new ways of providing suitable coordinates for topological descriptors, or the creation of more efficient dissimilarity measures.
Contemporary research in this field is largely concerned with either integrating information about the underlying data topology into existing deep-learning models or obtaining novel ways of training on topological domains.
Learning on topological spaces.
Focusing broadly on topology in the sense of point set topology, an active branch of TDL is concerned with learning "on" topological spaces, or, put differently, on certain topological domains.
An introduction to topological domains.
One of the core concepts in topological deep learning is the domain upon which this data is defined and supported. In the case of Euclidean data, such as images, this domain is a grid, upon which the pixel values of the image are supported. In a more general setting, this domain might be a topological domain. Next, we introduce the most common topological domains encountered in a deep learning setting. These domains include, but are not limited to, graphs, simplicial complexes, cell complexes, combinatorial complexes and hypergraphs.
Given a finite set S of abstract entities, a neighborhood function formula_0 on S is an assignment that attaches to every point formula_1 in S a subset of S or a relation. Such a function can be induced by equipping S with an "auxiliary structure". Edges provide one way of defining relations among the entities of S. More specifically, edges in a graph allow one to define the notion of neighborhood using, for instance, the one-hop neighborhood notion. Edges, however, are limited in their modeling capacity, as they can only model "binary relations" among entities of S, since every edge typically connects two entities. In many applications, it is desirable to permit relations that incorporate more than two entities. The idea of using relations that involve more than two entities is central to topological domains. Such higher-order relations allow for a broader range of neighborhood functions to be defined on S to capture multi-way interactions among entities of S.
Next we review the main properties, advantages, and disadvantages of some commonly studied topological domains in the context of deep learning, including (abstract) simplicial complexes, regular cell complexes, hypergraphs, and combinatorial complexes.
Comparisons among topological domains.
Each of the enumerated topological domains has its own characteristics, advantages, and limitations:
Hierarchical structure and set-type relations.
The properties of simplicial complexes, cell complexes, and hypergraphs give rise to two main features of relations on higher-order domains, namely hierarchies of relations and set-type relations.
Rank function.
A rank function on a higher-order domain X is an order-preserving function rk: X → Z, where rk("x") attaches a non-negative integer value to each relation "x" in X, preserving set inclusion in X. Cell and simplicial complexes are common examples of higher-order domains equipped with rank functions and therefore with hierarchies of relations.
Set-type relations.
Relations in a higher-order domain are called set-type relations if the existence of a relation is not implied by another relation in the domain. Hypergraphs constitute examples of higher-order domains equipped with set-type relations. Given the modeling limitations of simplicial complexes, cell complexes, and hypergraphs, the combinatorial complex has been introduced as a higher-order domain that features both hierarchies of relations and set-type relations.
The learning tasks in TDL can be broadly classified into three categories:
In practice, to perform the aforementioned tasks, deep learning models designed for specific topological spaces must be constructed and implemented. These models, known as topological neural networks, are tailored to operate effectively within these spaces.
Topological neural networks.
Central to TDL are topological neural networks (TNNs), specialized architectures designed to operate on data structured in topological domains. Unlike traditional neural networks tailored for grid-like structures, TNNs are adept at handling more intricate data representations, such as graphs, simplicial complexes, and cell complexes. By harnessing the inherent topology of the data, TNNs can capture both local and global relationships, enabling nuanced analysis and interpretation.
Message passing topological neural networks.
In a general topological domain, higher-order message passing involves exchanging messages among entities and cells using a set of neighborhood functions.
Definition: Higher-Order Message Passing on a General Topological Domain
Let formula_2 be a topological domain. We define a set of neighborhood functions formula_3 on formula_2. Consider a cell formula_1 and let formula_4 for some formula_5. A message formula_6 between cells formula_1 and formula_7 is a computation dependent on these two cells or the data supported on them. Denote formula_8 as the multi-set formula_9, and let formula_10 represent some data supported on cell formula_1 at layer formula_11. Higher-order message passing on formula_2, induced by formula_0, is defined by the following four update rules:
Equation 1: formula_12
Equation 2: formula_13
Equation 3: formula_15
Equation 4: formula_17
where formula_18 are differentiable functions.
Some remarks on Definition above are as follows.
First, Equation 1 describes how messages are computed between cells formula_1 and formula_7. The message formula_6 is influenced by both the data formula_10 and formula_19 associated with cells formula_1 and formula_7, respectively. Additionally, it incorporates characteristics specific to the cells themselves, such as orientation in the case of cell complexes. This allows for a richer representation of spatial relationships compared to traditional graph-based message passing frameworks.
Second, Equation 2 defines how messages from neighboring cells are aggregated within each neighborhood. The function formula_14 aggregates these messages, allowing information to be exchanged effectively between adjacent cells within the same neighborhood.
Third, Equation 3 outlines the process of combining messages from different neighborhoods. The function formula_16 aggregates messages across various neighborhoods, facilitating communication between cells that may not be directly connected but share common neighborhood relationships.
Fourth, Equation 4 specifies how the aggregated messages influence the state of a cell in the next layer. Here, the function formula_20 updates the state of cell formula_1 based on its current state formula_10 and the aggregated message formula_21 obtained from neighboring cells.
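A hedged sketch of one layer of such higher-order message passing, where each neighborhood is given as a dictionary mapping a cell to its neighboring cells and the aggregation and update functions are simple placeholders (learned parameters, nonlinearities and cell-specific terms such as orientation are omitted; all names are illustrative):

    import numpy as np

    def higher_order_layer(h, neighborhoods, W):
        """h: dict cell -> feature vector; neighborhoods: list of dicts cell -> list of cells;
        W: list of square weight matrices, one per neighborhood function."""
        h_new = {}
        for x, hx in h.items():
            per_neighborhood = []
            for k, N_k in enumerate(neighborhoods):
                msgs = [W[k] @ h[y] for y in N_k.get(x, [])]       # messages m_{x,y}
                if msgs:
                    per_neighborhood.append(np.sum(msgs, axis=0))  # within-neighborhood aggregation
            m_x = np.sum(per_neighborhood, axis=0) if per_neighborhood else np.zeros_like(hx)
            h_new[x] = np.tanh(hx + m_x)                           # update function acting on h_x and m_x
        return h_new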
Non-message passing topological neural networks.
While the majority of TNNs follow the message passing paradigm from graph learning, several models have been suggested that do not follow this approach. For instance, Maggs et al. leverage geometric information from embedded simplicial complexes, i.e., simplicial complexes with high-dimensional features attached to their vertices. This offers interpretability and geometric consistency "without" relying on message passing. Furthermore, a contrastive loss-based method has been suggested for learning simplicial representations.
Learning on topological descriptors.
Motivated by the modular nature of deep neural networks, initial work in TDL drew inspiration from topological data analysis, and aimed to make the resulting descriptors amenable to integration into deep-learning models. This led to work defining new layers for deep neural networks. Pioneering work by Hofer et al., for instance, introduced a layer that permitted topological descriptors like persistence diagrams or persistence barcodes to be integrated into a deep neural network. This was achieved by means of end-to-end-trainable projection functions, permitting topological features to be used to solve shape classification tasks, for instance. Follow-up work expanded more on the theoretical properties of such descriptors and integrated them into the field of representation learning. Other such "topological layers" include layers based on extended persistent homology descriptors, persistence landscapes, or coordinate functions. In parallel, persistent homology also found applications in graph-learning tasks. Noteworthy examples include new algorithms for learning task-specific filtration functions for graph classification or node classification tasks.
Applications.
TDL is rapidly finding new applications across different domains, including data compression, enhancing the expressivity and predictive performance of graph neural networks, action recognition, and trajectory prediction. | [
{
"math_id": 0,
"text": "\\mathcal{N}"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "\\mathcal{X}"
},
{
"math_id": 3,
"text": "\\mathcal{N}=\\{ \\mathcal{N}_1,\\ldots,\\mathcal{N}_n\\}"
},
{
"math_id": 4,
"text": "y\\in \\mathcal{N}_k(x)"
},
{
"math_id": 5,
"text": "\\mathcal{N}_k \\in \\mathcal{N}"
},
{
"math_id": 6,
"text": "m_{x,y}"
},
{
"math_id": 7,
"text": "y"
},
{
"math_id": 8,
"text": "\\mathcal{N}(x)"
},
{
"math_id": 9,
"text": "\\{\\!\\!\\{ \\mathcal{N}_1(x) , \\ldots , \\mathcal{N}_n (x) \\}\\!\\!\\}"
},
{
"math_id": 10,
"text": "\\mathbf{h}_x^{(l)}"
},
{
"math_id": 11,
"text": "l"
},
{
"math_id": 12,
"text": "m_{x,y} = \\alpha_{\\mathcal{N}_k}(\\mathbf{h}_x^{(l)},\\mathbf{h}_y^{(l)})"
},
{
"math_id": 13,
"text": "m_{x}^k = \\bigoplus_{y \\in \\mathcal{N}_k(x)} m_{x,y}"
},
{
"math_id": 14,
"text": "\\bigoplus"
},
{
"math_id": 15,
"text": "m_{x} = \\bigotimes_{ \\mathcal{N}_k \\in \\mathcal{N} } m_x^k"
},
{
"math_id": 16,
"text": "\\bigotimes"
},
{
"math_id": 17,
"text": "\\mathbf{h}_x^{(l+1)} = \\beta (\\mathbf{h}_x^{(l)}, m_x)"
},
{
"math_id": 18,
"text": "\\alpha_{\\mathcal{N}_k},\\beta"
},
{
"math_id": 19,
"text": "\\mathbf{h}_y^{(l)}"
},
{
"math_id": 20,
"text": "\\beta"
},
{
"math_id": 21,
"text": "m_x"
}
]
| https://en.wikipedia.org/wiki?curid=76401012 |
76405293 | Alexei Tsvelik | Theoretical physicist
Alexei Mikhaylovich Tsvelik () is a theoretical condensed matter physicist working on strongly correlated electron systems. He is widely recognised for his pioneering contributions to the theory of low-dimensional systems, including applications of non-perturbative quantum field theory methods and the Bethe Ansatz.
Education and Career.
He graduated from the Moscow Physical Technical Institute in 1977,
before gaining his PhD in Theoretical Physics in 1980 from the
Kurchatov Institute for Atomic Energy. Between 1982 and 1989 he worked
at the Landau Institute for Theoretical Physics. After visiting
positions at Harvard, Princeton and the University of Florida, Tsvelik
was appointed as a Lecturer, and subsequently Professor, at the
University of Oxford (where he was affiliated to Brasenose College). In 2001 he was appointed as a Senior
Physicist and Group Leader at Brookhaven National Laboratory. He has also served as an adjunct professor of physics at Stony Brook University.
Research.
Tsvelik has published more than 240 papers in refereed journals and is the
author of two textbooks and several books on popular science.
Throughout his career, Tsvelik has significantly contributed to the application of quantum field-theoretical methods to the description of low-dimensional systems, focusing on methods of Integrability, Bosonization and Conformal Field Theory.
Early in his career, he became known for his works on exact solutions of quantum impurity models, including the multichannel Kondo model using the Bethe Ansatz with Paul Wiegmann. Their 1983 review on exact results on impurity models including Kondo and Anderson impurity models remains a landmark in the use of exact methods in quantum many-body systems.
The late 1980s and early 1990s witnessed a concerted experimental and theoretical effort to understand the physics of Haldane gap materials. Field-theoretical methods such as the Landau-Ginzburg approach for the formula_0 Non-linear sigma model for large-spin Heisenberg chains, and Tsvelik's Majorana fermion approach proved particularly useful for this purpose.
Separately, Tsvelik also used Majorana fermions to model unusual magnetoresistance properties of high-Tc materials in collaboration with Piers Coleman and Andy Schofield.
Similar approaches proved useful in the understanding of spin ladder materials, of interest as simplified versions of high-Tc materials. As shown by Tsvelik in collaboration with Nersesyan and Shelton, a two-leg ladder has a simple low-energy representation in terms of four (weakly interacting) massive Majorana fermions, enabling the calculation of dynamical structure factors.
In a collaboration with John Tranquada and others he established the existence of a Berezinskii–Kosterlitz–Thouless transition in a three dimensional layered high-temperature superconducting material.
A recent notable contribution of Tsvelik provides clear pathways in the search for new states of matter in the form of chiral spin liquids.
Awards and Recognitions.
In 2002 Tsvelik was elected as a Fellow of the American Physical Society with citation "For seminal contributions to quantum magnetism and for the exact solutions of important integrable models". He received a Brookhaven Science and Technology Award in
2006. In 2009 he was recognized as an Outstanding Referee by the
American Physical Society. He was awarded an Alexander von Humboldt Research Award in 2014
and the Eugene Feenberg Memorial Medal in 2024 "for pioneering applications of quantum field theory to the understanding of emergent, many-body physics of quantum systems, in particular the physics of magnetic impurities, disordered systems, and Majorana representations of correlated problems".
Hobbies.
Tsvelik is a prolific caricaturist renowned among his colleagues for his blend of deference, humour and sarcasm. In particular, his textbooks contain many drawings of eminent physicists (a.k.a. "famous people nobody knows"). Alexei Tsvelik, in co-authorship with Alexey Burov, published a series of metaphysical articles "Pythagorean Argument of the Intelligent Design of the Universe and its Critique", in the Russian journal "Ideas and Ideals".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(3)"
}
]
| https://en.wikipedia.org/wiki?curid=76405293 |
76406928 | Erica Brittain | American biostatistician
Erica Hyde Brittain is an American biostatistician at the National Institute of Allergy and Infectious Diseases, where she is deputy branch chief in the Biostatistics Research Branch. Her research includes work on clinical trials. She is a coauthor of a book on statistical hypothesis tests, "Statistical Hypothesis Testing in Context: Reproducibility, Inference, and Science" (with Michael P. Fay, Cambridge University Press, 2022).
Education and career.
Brittain majored in mathematics at Tufts University, graduating in 1977. After earning a master's degree in statistics in 1980 from Stanford University, she completed a Ph.D. in 1984 at the University of North Carolina (UNC). Her dissertation, "Determination of formula_0-values for a formula_1-sample extension of the Kolmogorov-Smirnov procedure", was advised by Thomas Fleming of the Mayo Clinic (where her fiancé worked) but officially supervised by Clarence E. (Ed) Davis at UNC.
After working for the Center for Drug Evaluation and Research and National Heart Lung and Blood Institute, she moved to the National Institute of Allergy and Infectious Diseases in 2003, and became deputy branch chief in 2013.
Recognition.
Brittain's book on hypothesis testing was a finalist in mathematics and statistics in the 2023 PROSE Awards.
She was elected as a Fellow of the American Statistical Association in 2023.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=76406928 |
76407947 | Inverse Planning | Inverse Planning
Inverse Planning refers to the process of inferring an agent's mental states, such as goals, beliefs, emotions, etc., from actions by assuming agents are rational planners. It is a method commonly used in Computational Cognitive Science and Artificial Intelligence for modeling agents' Theory of Mind.
Inverse Planning is closely related to Inverse Reinforcement Learning, which attempts to learn a reward function based on agents' behavior, and plan recognition, which finds logically-consistent goals given the action observations.
Bayesian Inverse Planning.
Inverse Planning is often framed with a Bayesian formulation, such as sequential Monte Carlo methods. The inference process can be represented with a graphical model shown on the right. In this causal diagram, a rational agent with a goal g produces a plan with a sequence of actions formula_0, where
formula_1
In the forward planning model, it is often assumed that the agent is rational. The agent's actions can then be derived from a Boltzmann-rational action distribution,
formula_2
where formula_3 is the cost of the optimal plan to goal formula_4 by first performing action formula_5, and formula_6 is the Boltzmann temperature parameter.
Then, given action observations formula_0, inverse planning applies Bayes' rule to invert the conditional probability and obtain the posterior probability of the agent's goal.
formula_7
Inverse planning can also be applied to infer an agent's beliefs, emotions, preferences, etc. Recent work in Bayesian inverse planning has also been able to account for boundedly rational agent behavior, multi-modal interactions, and team actions in multi-agent systems.
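A minimal sketch of this Bayesian inversion for a discrete goal set, assuming the Q-values for each goal are supplied by some external planner and that higher Q-values correspond to better actions under the displayed Boltzmann model (the planner itself and all names are placeholder assumptions):

    import numpy as np

    def boltzmann_policy(q, beta):
        """P(a | g): softmax of the Q-values of all actions, divided by the temperature beta."""
        logits = np.array(list(q.values())) / beta
        probs = np.exp(logits - logits.max())
        return dict(zip(q.keys(), probs / probs.sum()))

    def goal_posterior(actions, goals, q_values, prior, beta=1.0):
        """P(g | a_1:t) proportional to P(g) times the product over t of P(a_t | g)."""
        posterior = np.array(prior, dtype=float)
        for gi, g in enumerate(goals):
            policy = boltzmann_policy(q_values[g], beta)   # q_values[g]: dict action -> Q-value
            for a in actions:
                posterior[gi] *= policy[a]
        return posterior / posterior.sum()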
Application.
Inverse Planning has been widely used in modeling agent's behavior in cognitive science to understand human's ability to interpret and infer other agents' latent mental states. It has increasingly been applied in Human-AI and Human-Robot interactions, allowing artificial agents to recognize the goals and beliefs of human users in order to provide assistance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_{1:t}"
},
{
"math_id": 1,
"text": "a_{1:t} \\sim P(a_{1:t} | g, s_0)"
},
{
"math_id": 2,
"text": "P(a_i | g, s_0) = \\frac{\\exp(\\frac{1}{\\beta} Q(s_0,a_i))}{\\sum_{a_j}{\\exp(\\frac{1}{\\beta} Q(s_0,a_j)))}}"
},
{
"math_id": 3,
"text": "Q(s_0, a)"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "\\beta"
},
{
"math_id": 7,
"text": "P(g|a_{1:t}, s_0) \\propto P(a_{1:t}|g, s_0)P(g)"
}
]
| https://en.wikipedia.org/wiki?curid=76407947 |
76408 | Transverse wave | Moving wave that has oscillations perpendicular to the direction of the wave
In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave.
A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation.
Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves.
Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or "pressure wave" in gases, liquids, or solids, whose oscillations cause compression and expansion of the material through which the wave is propagating. Pressure waves are called "primary waves", or "P-waves" in geophysics.
Water waves involve both longitudinal and transverse motions.
Mathematical formulation.
Mathematically, the simplest kind of transverse wave is a plane linearly polarized sinusoidal one. "Plane" here means that the direction of propagation is unchanging and the same over the whole medium; "linearly polarized" means that the direction of displacement too is unchanging and the same over the whole medium; and the magnitude of the displacement is a sinusoidal function only of time and of position along the direction of propagation.
The motion of such a wave can be expressed mathematically as follows. Let "d" be the direction of propagation (a vector with unit length), and "o" any reference point in the medium. Let "u" be the direction of the oscillations (another unit-length vector perpendicular to "d"). The displacement of a particle at any point "p" of the medium and any time "t" (seconds) will be
formula_0
where "A" is the wave's amplitude or strength, "T" is its period, "v" is the speed of propagation, and "φ" is its phase at "o". All these parameters are real numbers. The symbol "•" denotes the inner product of two vectors.
By this equation, the wave travels in the direction "d" and the oscillations occur back and forth along the direction "u". The wave is said to be linearly polarized in the direction "u".
An observer that looks at a fixed point "p" will see the particle there move in a simple harmonic (sinusoidal) motion with period "T" seconds, with maximum particle displacement "A" in each sense; that is, with a frequency of "f" = 1/"T" full oscillation cycles every second. A snapshot of all particles at a fixed time "t" will show the same displacement for all particles on each plane perpendicular to "d", with the displacements in successive planes forming a sinusoidal pattern, with each full cycle extending along "d" by the wavelength "λ" = "v" "T" = "v"/"f". The whole pattern moves in the direction "d" with speed "v".
The same equation describes a plane linearly polarized sinusoidal light wave, except that the "displacement" "S"("p", "t") is the electric field at point "p" and time "t". (The magnetic field will be described by the same equation, but with a "displacement" direction that is perpendicular to both "d" and "u", and a different amplitude.)
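A brief sketch evaluating this displacement formula numerically for a wave travelling along the x-axis and polarized along the y-axis; all numerical values are arbitrary and chosen only for illustration:

    import numpy as np

    A, T, v, phi = 0.01, 0.5, 2.0, 0.0        # amplitude (m), period (s), speed (m/s), phase at o
    d = np.array([1.0, 0.0, 0.0])             # direction of propagation (unit vector)
    u = np.array([0.0, 1.0, 0.0])             # direction of oscillation, perpendicular to d
    o = np.zeros(3)                           # reference point

    def displacement(p, t):
        """Displacement vector S(p, t) of the particle at point p and time t."""
        phase = 2 * np.pi * (t - np.dot(p - o, d) / v) / T + phi
        return A * u * np.sin(phase)

    print(displacement(np.array([0.5, 0.0, 0.0]), t=0.375))   # maximal displacement along u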
Superposition principle.
In a homogeneous linear medium, complex oscillations (vibrations in a material or light flows) can be described as the superposition of many simple sinusoidal waves, either transverse or longitudinal.
The vibrations of a violin string create standing waves, for example, which can be analyzed as the sum of many transverse waves of different frequencies moving in opposite directions to each other, displacing the string either up and down or left and right. The antinodes of the waves align in a superposition.
Circular polarization.
If the medium is linear and allows multiple independent displacement directions for the same travel direction "d", we can choose two mutually perpendicular directions of polarization, and express any wave linearly polarized in any other direction as a linear combination (mixing) of those two waves.
By combining two waves with same frequency, velocity, and direction of travel, but with different phases and independent displacement directions, one obtains a circularly or elliptically polarized wave. In such a wave the particles describe circular or elliptical trajectories, instead of moving back and forth.
It may help understanding to revisit the thought experiment with a taut string mentioned above. Notice that you can also launch waves on the string by moving your hand to the right and left instead of up and down. This is an important point. There are two independent (orthogonal) directions that the waves can move. (This is true for any two directions at right angles, up and down and right and left are chosen for clarity.) Any waves launched by moving your hand in a straight line are linearly polarized waves.
But now imagine moving your hand in a circle. Your motion will launch a spiral wave on the string. You are moving your hand simultaneously both up and down and side to side. The maxima of the side to side motion occur a quarter wavelength (or a quarter of a way around the circle, that is 90 degrees or π/2 radians) from the maxima of the up and down motion. At any point along the string, the displacement of the string will describe the same circle as your hand, but delayed by the propagation speed of the wave. Notice also that you can choose to move your hand in a clockwise circle or a counter-clockwise circle. These alternate circular motions produce right and left circularly polarized waves.
To the extent your circle is imperfect, a regular motion will describe an ellipse, and produce elliptically polarized waves. At the extreme of eccentricity your ellipse will become a straight line, producing linear polarization along the major axis of the ellipse. An elliptical motion can always be decomposed into two orthogonal linear motions of unequal amplitude and 90 degrees out of phase, with circular polarization being the special case where the two linear motions have the same amplitude.
Power in a transverse wave in string.
The kinetic energy of a mass element in a transverse wave is given by:
formula_1
In one wavelength, kinetic energy
formula_2
Using Hooke's law, the potential energy in a mass element is
formula_3
And the potential energy for one wavelength
formula_4
So, the total energy in one wavelength is formula_5
Therefore, the average power is formula_6
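A small numerical check of this result, averaging the kinetic and potential energy densities over one wavelength (a snapshot at t = 0) and dividing the energy per wavelength by the period; all parameter values are arbitrary illustrations:

    import numpy as np

    mu, A, wavelength, v = 0.02, 0.005, 0.5, 20.0      # kg/m, m, m, m/s
    omega = 2 * np.pi * v / wavelength
    T = wavelength / v

    x = np.linspace(0.0, wavelength, 100_001)
    kinetic_density = 0.5 * mu * A**2 * omega**2 * np.cos(2 * np.pi * x / wavelength) ** 2
    potential_density = 0.5 * mu * A**2 * omega**2 * np.sin(2 * np.pi * x / wavelength) ** 2
    energy_per_wavelength = np.mean(kinetic_density + potential_density) * wavelength

    power_numeric = energy_per_wavelength / T
    power_formula = 0.5 * mu * A**2 * omega**2 * v
    assert np.isclose(power_numeric, power_formula)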
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S(p,t) = A u \\sin\\left(\\frac{t-(p-o)\\frac{d}{v}}{T} + \\phi\\right)"
},
{
"math_id": 1,
"text": " dK = \\frac 1 2 \\ dm \\ v_y^2 = \\frac12 \\ \\mu dx \\ A^2 \\omega^2 \\cos^2 \\left(\\frac{2 \\pi x}{\\lambda} - \\omega t\\right)"
},
{
"math_id": 2,
"text": " K = \\frac 1 2 \\mu A ^2 \\omega^2 \\int ^\\lambda _0 \\cos^2 \\left(\\frac{2 \\pi x}{\\lambda} - \\omega t\\right) dx = \\frac14 \\mu A^2 \\omega^2 \\lambda"
},
{
"math_id": 3,
"text": " dU = \\frac 1 2 \\ dm \\omega ^ 2 \\ y ^ 2 = \\frac 1 2 \\ \\mu dx \\omega ^ 2 \\ A^2 \\sin^2 \\left(\\frac{2 \\pi x}{\\lambda} - \\omega t\\right)"
},
{
"math_id": 4,
"text": " U = \\frac 1 2 \\mu A ^2 \\omega^2 \\int ^\\lambda _0 \\sin^2 \\left(\\frac{2 \\pi x}{\\lambda} - \\omega t\\right) dx = \\frac 1 4 \\mu A^2 \\omega^2 \\lambda"
},
{
"math_id": 5,
"text": " K + U = \\frac 1 2 \\mu A^2 \\omega^2 \\lambda"
},
{
"math_id": 6,
"text": " \\frac 1 2 \\mu A^2 \\omega^2 v_x"
}
]
| https://en.wikipedia.org/wiki?curid=76408 |